
Book Reviews


INTRODUCING SOFTWARE TESTING. By Louise Tamres. Published by Addison-Wesley, Boston, Massachusetts, U.S.A., 2002. ISBN: 0-201-71974-6, 281 pages. Price: U.K. £29.99, U.S.A. $44.99, Soft Cover.

This book has two entrances. The front door is for the main target readership ('you're new to testing, have no idea where to start and the product ships Friday' and 'the project documentation is incomplete, inaccurate, or worse, non-existent'). Experienced (and more fortunate) testers, verifiers and validators should enter by the back door, Chapter 9, which explains that 'the emphasis of this book is on defining test cases, which is just one aspect of software testing'.

There are no mentions in the index of verification (or validation), and reliability aficionados are unlikely to gain from the two pages on reliability and availability testing (specifically for Web systems). Verification and validation are mentioned briefly in the text and verification is implied in a few places, but the terms seem to be used interchangeably. This is largely a book for testers lacking good requirements and specifications to test against, and/or those given no guidance by their test leaders. It deliberately occupies a 'pragmatic beginner' niche, referring the reader to other sources for concepts, methods, techniques, software engineering and so on. It does, however, include some concepts and techniques of its own. The main concepts are embodied in the stages it recommends for testing in a nightmare situation: exploration, baselining a simple test, trends analysis (for example, used when expected results are difficult to determine), inventories of data categories, data combinations and boundaries, devious data and stressing the environment. The more widely applicable techniques include risk analysis, 'test outlines' and tables. The intended audience for the book includes managers or mentors 'who may themselves be experienced testers, seeking ideas on how to provide guidance to novice testers', experienced programmers who have been assigned testing tasks and knowledgeable testers looking for new ideas.
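To make the 'inventories' stage concrete, here is a small illustration of my own devising (the field and its values are hypothetical, not taken from the book): an inventory of data categories, boundaries and devious data for a single numeric input.

```python
# An illustrative inventory for one numeric field -- my own example in
# the spirit of the stages described above, not one from the book.
age_field_inventory = {
    "valid":    [25, 0, 130],                 # nominal value plus plausible extremes
    "boundary": [-1, 0, 130, 131],            # either side of the assumed limits
    "devious":  ["", " ", "25.5", "twenty-five", "0x19", "9" * 400],
}

for category, values in age_field_inventory.items():
    print(category, values)
```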

However, Louise Tamres's basic premise seems to be that testers are so often thrown into the deep end of some very dirty swamps that they should learn to swim in those before getting fancy ideas about quality, process improvement, etc.—the proverbial 'draining the swamp'. The 'V model' does not appear in the index, but it is introduced in Chapter 9. Prior to that, the concepts of unit, integration and system levels of testing appear for the first time in Chapter 5, as if by stealth, between a section on state machines and one on multiple inputs to test cases. The book's techniques are said to be equally applicable to all levels of testing and the distinction only matters 'in the scope and in the interaction required to execute the tests'. There seems to be no explicit mention of refinements such as the 'W model', or of iterative system development lifecycles.

The good news is that the method and techniques presented are sufficiently generalized and flexible to be of use to both novices and experienced testers who have been thrown into an environment of mud and alligators. Tables should be comprehensible by anyone and spreadsheets are readily available to automate tables. The method is (in my paraphrase, with an illustrative sketch following the list):

• seek out whatever requirements you can find, then analyse them in terms of input conditions, design constraints and expected results;

• use the input conditions to generate a 'test outline', which is a tree of all visible input conditions in which a path from root to branch is a testable set of input conditions (it is not clear what is done with the design constraints and expected results);

• refine the test outline through several iterations: first, list the attributes of the input conditions; second, rationalize them into a usable hierarchy; third, expand them in terms of no data, valid data, duplicated data, invalid data and other exceptional conditions;

• fertilize the tree further with variations on input mechanisms, processing rules and output mechanisms (you may be expecting a mangrove analogy by now—the book keeps the roots under the waterline until the tide goes out in Chapter 9!);

• decide how to use the tree in terms of our now-enhanced view of the requirements: we may know enough to branch out into use cases, or we may find sufficient economy or rigour to translate the tree into tables, or we may be content with splitting the tree into cuttings, reorganizing it, or using it as-is;

• generate a rough schedule estimate for the resulting testing;

• translate the tree into test cases, including prior state, initial state and expected results;

• produce a set of test procedures (which may be a synonym for test scenarios) which fulfil test cases by means of steps, test requirements and expected intermediate results.
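As a rough sketch of the early steps, here is how such a test outline might be represented and its root-to-branch paths enumerated. The book works with hand-drawn outlines and spreadsheet tables rather than code, so the data structure, field names and example conditions below are entirely my own illustration.

```python
# A minimal sketch of a 'test outline': a tree of input conditions in
# which each root-to-branch path is a testable set of conditions.
# Everything here is hypothetical -- the book does not prescribe a
# representation.

def paths(outline, prefix=()):
    """Enumerate every root-to-branch path of a nested-dict outline."""
    if not outline:                       # leaf: a complete set of conditions
        yield prefix
        return
    for condition, subtree in outline.items():
        yield from paths(subtree, prefix + (condition,))

# Input conditions for an imaginary 'search' field, expanded (per the
# refinement step above) into no data, valid data and invalid data.
outline = {
    "search term": {
        "no data (empty field)": {},
        "valid data (known title)": {},
        "invalid data (control characters)": {},
    },
    "result limit": {
        "default": {},
        "boundary (0)": {},
        "boundary (maximum)": {},
    },
}

# Each path is only the skeleton of a test case; prior state, initial
# state and expected results would still be filled in by hand.
for i, p in enumerate(paths(outline), start=1):
    print(f"TC-{i:02d}: " + " / ".join(p))
```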

Decision tables and state transition tables are presented as further ways of generating test cases. Traceability matrices mapping test procedures to test requirements are produced. Risk analysis is described in Chapter 8, with the implication that it is to be used for reducing an over-large number of test cases already created, rather than as an up-front test planning aid. Test prioritization is described before risk analysis, with the relationship not made clear. Test results are recorded in the tables, although a couple of passing references are made to the possibility of separate problem reports.
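To illustrate the pruning role that risk analysis appears to play here, a minimal sketch follows; the scoring scheme and the test cases are my own invention, not the book's.

```python
# Risk analysis used to prune an already-generated set of test cases,
# as the book apparently intends. Likelihood and impact are scored 1-3;
# the product is a crude 'risk exposure' used to rank the cases.
test_cases = [
    ("TC-01 login with valid credentials", 3, 3),   # (name, likelihood, impact)
    ("TC-02 login with expired password",  2, 3),
    ("TC-03 tooltip text on help icon",    1, 1),
    ("TC-04 checkout with empty basket",   2, 2),
]

budget = 3  # suppose there is only time to run three
ranked = sorted(test_cases, key=lambda tc: tc[1] * tc[2], reverse=True)
for name, likelihood, impact in ranked[:budget]:
    print(f"{name}: risk exposure {likelihood * impact}")
```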

I found the structure of much of the above difficult to understand, especially in terms of the relationships between tables (e.g. test cases, test requirements and test procedures) and between testing stages (exploration, devious data, etc.) and the outline and table techniques. The book's author does try to explain that the stages are for a very chaotic situation, the basic test case tables are for a less rushed project and the test requirements and procedures are a more proper solution. However, I wonder whether testing novices will find it any easier than I did.

Although all these tables might be seen to imply manual testing, something is said on automation. However, this is nearly all about coverage analysis and functional test execution, and how the tables may be flexibly automated is not really explained: for example, no mention of 'action words' or other automation frameworks. Coverage analysis is presented as entirely about code execution measurement (statement, branch, condition) and as being performed after test design, not during. I could find nothing on the static testing of code, although reviews and inspections do get a brief treatment.
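For readers wondering what such table automation might look like, here is a bare-bones 'action word' (keyword-driven) interpreter of the kind the book does not cover; the actions and the table rows are hypothetical.

```python
# Each named action maps to a small executable function; a spreadsheet-
# style table of (action, target, argument) rows then drives the test.
ACTIONS = {
    "enter": lambda field, value: print(f"enter {value!r} into {field}"),
    "press": lambda button, _: print(f"press {button}"),
    "check": lambda field, expected: print(f"check {field} == {expected!r}"),
}

table = [
    ("enter", "quantity", "3"),
    ("press", "Add to basket", ""),
    ("check", "basket count", "3"),
]

for action, target, argument in table:
    ACTIONS[action](target, argument)
```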

In addition to the general material, there are specific chapters on testing object-oriented software and Web applications. These include items of more general applicability which should perhaps have been presented as part of the main narrative, rather than implying they are specific to object-oriented software (orthogonal array-based testing) and Web applications (non-functional test types). The author does, however, point out that orthogonal arrays may indeed be used for testing any type of software, and non-functional testing gets a brief mention elsewhere.
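By way of illustration of orthogonal array-based testing (my example, not the book's): the standard L4 array covers all pairwise combinations of three two-level factors in four tests instead of the eight needed for exhaustive combination.

```python
# The L4 orthogonal array for three two-level factors: any two columns,
# read together, contain every pairwise combination exactly once.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# A hypothetical Web application under test, with three binary factors.
factors = [
    ("browser", ["Firefox", "IE"]),
    ("user",    ["guest", "registered"]),
    ("basket",  ["empty", "non-empty"]),
]

for row in L4:
    combo = {name: levels[i] for (name, levels), i in zip(factors, row)}
    print(combo)
```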

The final chapter is on software standards, concentrating deliberately on the test documentation aspects and leaving process improvement out of scope. However, the brief overviews of the standards are quite comprehensive, and the test case documentation summary table (requirements for ISO 9001, 12207, SW-CMM and IEEE 829) is excellent.

In contrast to the plethora of tables throughout the book, there are few diagrams: of those that are provided, most are screenshots or explanatory structures for the example applications (a tax calculator, an oven, a stopwatch, an image processing application, a shopping mall guide and a Web bookstore). Overall the text is comfortably readable and the author's enthusiasm and earnestness shine through. However, several statements seem to be over-simplifications bordering on naivety. As Louise Tamres has been testing since 1983, these may be deliberately aimed at the readership of beginners. However, I must mention a few of these: black-box and white-box are stated to be techniques (rather than groups of specific techniques, although it is possible for the reader to infer this) and they are applied solely to unit testing. 'White-box' techniques are discouraged because of 'the danger of verifying that the code works as written, without ensuring that the logic is correct' (the book is almost entirely black-box in approach). In another example, the differences between testing object-oriented and procedural software are stated to be 'primarily at the unit test level'.

There are also some apparent inconsistencies which require repeated reading to try to understand the author's message. In the current debate about context-driven testing, this is more than mere semantic pedantry. Louise Tamres introduces her ideas on how to begin testing by working through several examples, each deliberately containing defective 'requirements', to illustrate that useful test activities can still occur immediately. However, she states as part of her philosophy that she does not advocate working from poor requirements. This philosophy continues with the sentence: 'Although I do not advocate cutting corners, there are some shortcuts that will help document the testing activities'. The relevant chapters then present numerous tables with what, in my view, is a rather 'long-cut' appearance.

I hesitate to recommend this book, but I have read other reviews of it, all positive, and it may be that I am seeking too much clarity and rigour here, or I have failed to grasp some of the relationships between different parts of the material. For the absolute beginner in a chaotic environment, it does provide some pragmatic guidance and could be beneficial if used with caution (and preferably alongside some other introductory books). For the experienced tester and test manager, it has some value as a back-to-basics 'sanity check' and a reminder of various ways in which test specification can be flexible and iterative.

NEIL THOMPSON
Thompson Information Systems Consulting Ltd.,
23 Oast House Crescent,
Farnham,
Surrey GU9 0NP, U.K.

Copyright © 2003 John Wiley & Sons, Ltd. Softw. Test. Verif. Reliab. 2003; 13:129–131