Dimensions of Testing #5
CFICSE, October 2002
Diane Kelly, Terry Shepard
© Terry Shepard and Diane Kelly
Dimensions of Testing
Part 5
Test Automation
• “Simply throwing a tool at a testing problem will not make it go away.” – Dorothy Graham, The CAST Report
Test Automation Outline
• Some automation facts
  – Manual Testing vs. Automation
• Tool support for life-cycle testing: types of tools
• The promise of test automation
• Common problems of test automation
  – Process Issues
  – Avoiding the Pitfalls of Automation
• Building Maintainable Tests
• Evaluating Testing Tools
• Choosing a tool to automate testing
• Testing Tool Market
• References: [21], [22], [23]
Some automation facts
• Automation of one test typically costs about 5 times a single manual test execution
  – range is roughly 2 to 10, can be as much as 30
• Savings are as high as 80% - eventually
• Testing is an interactive cognitive process
  – automation is best applied to a narrow spectrum of testing
    • not applied to the majority of the test process
• All testing needs human interaction
  – tools have no imagination
Manual Testing vs. Automation (1)
• Testing and test automation are different skills
  – good testers have a nose for defects
  – good automators are skilled at developing test scripts
• Tool support is most effective after test design is done
  – there is more payback in automating test execution and comparison of results than in automating test case generation, coverage measurement, and other test design activities
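The payback point above — automating test execution and comparison of results — can be sketched minimally as a golden-output check. The unit under test (`format_report`) and the stored expected output are hypothetical; in practice the golden text would live in a version-controlled file:

```python
import difflib

def format_report(items):
    # Hypothetical unit under test: renders (name, count) pairs
    # as a small CSV-style text report.
    lines = ["NAME,COUNT"]
    for name, count in items:
        lines.append(f"{name},{count}")
    return "\n".join(lines)

# Stored expected ("golden") output, normally kept in a file.
GOLDEN = "NAME,COUNT\nwidgets,12\ngadgets,3"

def run_and_compare():
    # Automated execution: run the unit, compare against the
    # golden output, report pass/fail.
    actual = format_report([("widgets", 12), ("gadgets", 3)])
    if actual == GOLDEN:
        return "PASS"
    # On failure, emit a diff to help the human analyst interpret
    # the result (interpretation itself stays a human task).
    return "FAIL\n" + "\n".join(
        difflib.unified_diff(GOLDEN.splitlines(), actual.splitlines(),
                             "expected", "actual", lineterm=""))
```

The design choice — diff on failure rather than a bare boolean — reflects the later point that interpreting results is the hardest part of automation.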
Manual Testing vs. Automation (2)
• Don’t evaluate manual testing against automated testing for cost justification
  – manual testing and automated testing are two different processes
  – treat test automation as one part of a multifaceted test strategy
• Don’t decide on automation simply on the basis of saving money
  – testers typically don’t end up with less work to do
Tool support for life-cycle testing: types of tools
• test case generators, test data generators
  – e.g. derive test input from a specification
  – e.g. extract random records from a database
• test management: plans, tracking, tracing, …
• static analysis
• coverage
• configuration managers
• complexity and size measurers
• dynamic analysis
  – performance analyzers
  – capture-replay
  – debugging tools used as testing tools
  – network analyzers
• simulators
• capacity testing
• test execution and comparison
• compilers
• ...
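One of the tool types above — a test data generator that extracts random records from a database — can be sketched in a few lines. The record layout and the in-memory "database" are hypothetical stand-ins:

```python
import random

# Hypothetical "database": a list of record dicts.
DATABASE = [
    {"id": i, "name": f"user{i}", "active": (i % 2 == 0)}
    for i in range(1000)
]

def sample_records(n, seed=None):
    # Extract n random records. Seeding the generator keeps the
    # drawn test data reproducible, which matters when a generated
    # case later fails and must be re-run for debugging.
    rng = random.Random(seed)
    return rng.sample(DATABASE, n)

records = sample_records(5, seed=42)
```

The seed parameter is the important design point: unseeded random test data makes failures hard to reproduce.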
The promise of test automation:
What are the potential benefits? (1)
• run more tests more often
• run tests that are difficult or impossible manually
  – e.g. simulate 200 users
  – e.g. check for events with no visible output in GUIs
• better use of resources
  – testers are less bored, make fewer mistakes, and have more time
    • to design more test cases
    • to run those tests that must be run manually
  – use CPU cycles more effectively
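The "simulate 200 users" example above is the kind of test that is impossible manually but trivial to automate. A minimal sketch, using a toy thread-safe counter as a stand-in for the real system under test (all names here are hypothetical):

```python
import threading

class CounterService:
    # Toy "service": counts requests under a lock. A real load
    # test would drive the actual system under test instead.
    def __init__(self):
        self._lock = threading.Lock()
        self.requests = 0

    def handle_request(self):
        with self._lock:
            self.requests += 1

def simulate_users(service, n_users, requests_per_user):
    # One thread per simulated user, each issuing several requests.
    def user():
        for _ in range(requests_per_user):
            service.handle_request()
    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

service = CounterService()
simulate_users(service, n_users=200, requests_per_user=10)
```

After the run, the test can check an invariant (here, that no request was lost) that no human could verify by hand at this scale.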
The promise of test automation:
What are the potential benefits? (2)
• consistency and repeatability of tests
• increased reuse of tests leads to better test design and better documentation
• meeting quality targets in less time
• increased confidence/quality/reliability estimation
• reduced regression testing costs
Common problems of test automation
– unrealistic expectations
– poor testing practice
  • automating chaos just gives faster chaos
– expectation that automation will increase defect finding
– false sense of security
– maintenance of automated tests: fragility issues
– technical problems
– organizational issues
  • test automation is an infrastructure issue, not a project issue
Process Issues
• Which tests to automate first?
  – do not automate too much too fast
• Selecting which tests to run when
  – subsets of test suites
• Order of test execution
• Test status
  – pass or fail
• Designing software for automated testing
• Synchronization
• Monitoring progress of automated tests
• Processing possibly large amounts of test output
• Tailoring your regime around test tools
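The synchronization issue above is one of the most common technical problems in practice: automated tests that drive an asynchronous system should poll for a condition with a timeout rather than sleep for a fixed time. A minimal sketch (the `wait_until` helper is generic, not from any particular tool):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    # Poll `condition` until it returns True or `timeout` elapses.
    # Avoids both flaky fixed sleeps (too short) and wasted wall
    # clock time (too long).
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Toy asynchronous "system under test": becomes ready after a delay.
state = {"ready": False}

def make_ready_after(delay):
    start = time.monotonic()
    def check():
        if time.monotonic() - start >= delay:
            state["ready"] = True
        return state["ready"]
    return check

ok = wait_until(make_ready_after(0.1), timeout=2.0)
```

A generous timeout with frequent polling gives a deterministic pass in the normal case and a bounded, diagnosable failure when the system never becomes ready.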
Avoiding the Pitfalls of Automation (1)
• Get your test strategy clear first before contemplating automation
• Tests have to be debugged
  – test automation is software development and must be done with the same care
• Test automation can encourage a proliferation of useless test cases
  – evaluate your test suites and clean them up
• The hardest part of automation is interpreting results
  – human effort is required here
Avoiding the Pitfalls of Automation (2)
• Test automation (esp. test case generation) can lead to
  – a set of weak, shallow tests
  – tests that ignore interesting bugs
  – the tester spending a lot of time on extraneous activities related to the tools being used
• Is it useful to repeat the same tests over and over?
  – Study at Borland: over 80% of bugs were found manually
Building Maintainable Tests (1)
• Don’t let the test suite become “too big”
  – before adding any new test, ask what the test contributes to …
    • defect-finding capability
    • likely maintenance cost
• Ensure test designers and test builders limit their use of disc space
  – large amounts of test data have an adverse impact on test failure analysis and debugging effort
• Keep functional test cases as short in time and as focused as possible
Building Maintainable Tests (2)
• Design tests with debugging in mind
  – What would I like to know when this test fails?
• Start cautiously when designing tests that chain together
  – if possible, use “snapshots” to restart a chain of test cases after one fails
• Adopt a naming convention for test elements
• Document test cases
  – overview of test items
  – annotations in scripts
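The points above — a naming convention, annotations in scripts, and designing for the question "what would I like to know when this test fails?" — can be sketched in one small test. The unit under test (`parse_price`) and the test-plan identifier are hypothetical:

```python
def parse_price(text):
    # Hypothetical unit under test: "$1,234.50" -> 1234.50
    return float(text.strip().lstrip("$").replace(",", ""))

def test_parse_price_handles_thousands_separator():
    """Prices with a comma thousands separator must parse correctly.

    Annotation in the script: covers the report-import path;
    traces to test plan item TP-12 (hypothetical identifier).
    """
    text = "$1,234.50"
    result = parse_price(text)
    # The failure message answers "what would I like to know when
    # this test fails?": the exact input and the observed value.
    assert result == 1234.50, (
        f"parse_price({text!r}) returned {result!r}, expected 1234.50")

test_parse_price_handles_thousands_separator()
```

The descriptive function name doubles as documentation when a test runner lists failures.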
Building Maintainable Tests (3)
• Limit the number of complex test cases
  – they are difficult to understand, even for minor changes
  – the effort needed to automate and maintain them may wipe out any savings
• Use flexible and portable formats for test data
  – the time taken to convert data is often more acceptable than the cost of maintaining large amounts of data in a specialized format
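The portable-format point above can be sketched as test data kept in plain JSON, with a small conversion step into whatever shape the test driver wants. The record layout here is a hypothetical example:

```python
import json

# Portable test data: plain JSON text that can be versioned,
# diffed, and edited without any specialized tool.
TEST_DATA = """
[
  {"input": "$1,200.00", "expected": 1200.0},
  {"input": "$0.99",     "expected": 0.99}
]
"""

def load_cases(text):
    # Conversion step: portable JSON -> (input, expected) tuples.
    # Paying this small, localized conversion cost beats
    # maintaining the data in a tool-specific format.
    return [(case["input"], case["expected"]) for case in json.loads(text)]

cases = load_cases(TEST_DATA)
```

If the test tool changes, only `load_cases` changes; the data itself survives intact.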
Evaluating Testing Tools (1)
• Capability
  – having all the critical features needed
• Reliability
  – working a long time without failures
• Capacity
  – handling industrial environments
• Learnability
  – having a reasonable learning curve or support for learning
Evaluating Testing Tools (2)
• Operability
  – offering ease of use of the interface
• Performance
  – advantages in turn-around time versus manual testing
• Compatibility
  – ease of integration with the application environment
• Non-intrusiveness
  – not altering the behaviour of the software under test
Choosing a tool to automate testing
• Introduction to Chapters 10 and 11 of [23]
  – Where to start in selecting tools:
    • your requirements
    • NOT the tool market
  – The tool selection project
  – The tool selection team
  – Identifying your requirements
  – Identifying your constraints
  – Build or buy?
  – Identifying what is available on the market
  – Evaluating the short-listed candidate tools
  – Making the decision
Testing Tool Market (Ovum - www.ovum.com)
• In 1999, the market was $450M, growing at 30% per year
• dominant players (with 60% of the total market) were:
  – Mercury Interactive
  – Rational
  – Compuware
  – Segue
References 1
[1] C.A.R. Hoare, “How did software get so reliable without proof?”, Formal Methods Europe ‘96, Keynote speech
[2] D. Hamlet and R. Taylor, “Partition Testing does not Inspire Confidence”, IEEE Transactions on Software Engineering, Dec. 1990, pp. 1402-1411
[3] Edward Kit, Software Testing in the Real World: Improving the Process, Addison Wesley, 1995
[4] Brian Marick, The Craft of Software Testing, Prentice Hall, 1995
References 2
[5] Boris Beizer, Software Testing Techniques, 2nd Ed’n, Van Nostrand Reinhold, 1990
[6] T.J. Ostrand and M.J. Balcer, “The Category-Partition Method for Specifying and Generating Functional Tests”, Communications of the ACM 31, 6, June 1988, pp. 676-686
[7] Robert Binder, Object Magazine, 1995, http://www.rbsc.com/pages/myths.html
[8] Robert Poston, Specification-Based Software Testing, IEEE Computer Society, 1996
References 3
[9] James Bach, General Functionality and Stability Test Procedure, http://www.testingcraft.com/bach-exploratory-procedure.pdf
[10] Bill Hetzel, The Complete Guide to Software Testing, 2nd ed., 1988, Wiley&Sons
[11] Cem Kaner, Jack Falk, Hung Quoc Nguyen, Testing Computer Software, 2nd ed., 1993, Van Nostrand Reinhold
[12] Andrew Rae, Phillippe Robert, Hans-Ludwig Hausen, Software Evaluation for Certification, 1995, McGraw Hill
References 4
[13] William Perry, Effective Methods for Software Testing, 1995, John Wiley & Sons
[14] John McGregor and David Sykes, A Practical Guide to Testing Object-Oriented Software, Addison Wesley, 2001, ISBN 0-201-32564-0 393 pp.
[15] G.J. Myers, The Art of Software testing, Wiley, 1979
[16] Cem Kaner, Jack Falk, and Hung Quoc Nguyen, Testing Computer Software, 2nd Edition, Van Nostrand, 1993
[17] DO-178B Software Considerations in Airborne System and Equipment Certification
References 5
[18] IEEE 829 - 1998, Standard for Software Test Documentation
[19] Mark Fewster and Dorothy Graham, Software Test Automation: Effective Use of Test Execution Tools, ACM Press, 1999
[20] www.sei.cmu.edu/cmm/cmms/cmms.html
[21] James Bach, “Test Automation Snake Oil”, first published in Windows Tech Journal, 10/96 (see articles at http://www.satisfice.com/)
[22] Robert Poston: A Guided Tour of Software Testing Tools, http://www.soft.com/News/TTN-Online/ttnjan98.html
References 6
[23] Fewster and Graham, Software Test Automation: Effective Use of Test Execution Tools, ACM Press/Addison Wesley, 1999
[24] W.J. Gutjahr, “Partition Testing vs. Random Testing: The Influence of Uncertainty”, IEEE TSE, v. 25, n. 5, Sep/Oct 1999, pp. 661-674
[25] Robert V. Binder, Testing Object Oriented Systems: Models, Patterns, and Tools, Addison-Wesley, 2000 (see http://www.rbsc.com)
[26] Capers Jones, Software quality: analysis and guidelines for success, International Thomson Computer press, 1997
References 7
[27] Andrew Rae, Phillippe Robert, Hans-Ludwig Hausen, Software Evaluation for Certification, 1995, McGraw Hill
[28] Shari Lawrence Pfleeger, Software Engineering Theory and Practice, 2nd ed., Prentice Hall, 2001
[29] Hong Zhu, Patrick Hall, John May, “Software Unit Test Coverage and Adequacy”, ACM Computing Surveys, Vol. 29, No. 4, Dec. 1997
[30] IEEE Std 610.12 - 1990, IEEE Standard Glossary of Software Engineering Terminology
[31] James Whittaker, “What is Software Testing? And Why is it so Hard?”, IEEE Software, Jan./Feb. 2000, pp. 70-79 (recommended)