Improving Requirements Testing with Defect Taxonomies
Michael Felderer, Research Group Quality Engineering
Institute of Computer Science, University of Innsbruck, Austria
The 25th CREST Open Workshop
London, February 11, 2013
Motivation

• Defect taxonomies (DTs) provide information about the distribution of failures in projects and are valuable for learning about the errors being made
• In practice, DTs are only used a posteriori: to allocate testing resources, to prioritize failures for debugging, or for defect prediction
• But DTs have the potential to control and improve the overall system test process:
  • Design of requirements-based tests
  • Prioritization of test cases
  • Tracing of requirements, tests and defects
  • Controlling of defect management
  • Precise statements about release quality
Defect Taxonomy Supported Testing (DTST)
[Figure: the four phases of the ISTQB test process (Test Planning and Control; Test Analysis and Design; Test Implementation and Execution; Test Evaluation and Reporting) shown twice, before DTST and with DTST.]
Outline

• A process for system testing with defect taxonomies, called Defect Taxonomy-Supported Testing (DTST):
  • Aligned with ISTQB-based standard test processes
  • Prioritized requirements, defect categories and failures
  • Traceability between requirements, defect categories and failures
  • Application of specific, goal-oriented test design techniques
  • Detailed statements about release quality
• An empirical evaluation of the effectiveness of DTST in an industrial case study, compared to a standard test process
• Decision support for the application of DTST, based on a cost comparison with the standard test process
Basic Concepts
[UML class diagram of the basic concepts: a Requirement (+priority) is detailed by Use Cases and Hierarchy Elements and is covered by Test Cases; a Defect Taxonomy consists of Defect Categories (+severity), which classify Failures (+severity) and are linked to Requirements; the Test Strategy consists of Test Patterns, each combining a Test Technique with a Test Strength (low, medium, high).]
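The diagram can be read as a small data model. Below is a minimal sketch in Python dataclasses; the class and attribute names follow the diagram, everything else (field types, example values) is an illustrative assumption:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Optional


class TestStrength(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class DefectCategory:
    id: str            # e.g. "R1"
    description: str
    severity: str      # e.g. "critical", "normal", "minor"


@dataclass
class Requirement:
    id: str
    description: str
    priority: str      # e.g. "high", "normal", "low"
    defect_categories: List[DefectCategory] = field(default_factory=list)


@dataclass
class TestPattern:
    id: str                                   # e.g. "S1"
    technique: str                            # the test design technique
    defect_categories: List[str] = field(default_factory=list)
    criteria: Dict[TestStrength, str] = field(default_factory=dict)


@dataclass
class TestCase:
    id: str
    requirement: Requirement
    strength: TestStrength


@dataclass
class Failure:
    id: str
    severity: str
    defect_category: Optional[DefectCategory] = None
    found_by: Optional[TestCase] = None
```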
DTST Steps and Integration into the Standard Test Process
[Process diagram: the five DTST steps embedded in the standard test process phases (1) Test Planning, (2) Test Analysis and Design, (3) Test Execution and (4) Test Evaluation and Reporting. Steps 1-4 are attached to test planning and test analysis and design; Step 5 follows test execution as part of test evaluation and reporting.]

Step 1: Analysis and Prioritization of Requirements
Step 2: Creation of a Product-Specific Defect Taxonomy
Step 3: Linkage of Requirements and Defect Categories
Step 4: Definition of a Test Strategy with Test Patterns
Step 5: Analysis and Categorization of Failures after a Test Cycle
Step 1: Analysis and Prioritization of Requirements

• Prioritized requirements are assigned to use cases, which are additionally defined by business processes, business rules and user interfaces

Example (excerpt):

  UC Identification
    REQ# | Description | Priority of REQ
    ...  | ...         | ...

  UC Search Client
    REQ# | Description                   | Priority of REQ
    39   | ANF_0053 Logging and Auditing | normal
    40   | ANF_0057 LOGIN via OID        | high
Step 2: Creation of a Product-Specific Defect Taxonomy

• Beizer's top-level defect categories are mapped to product-specific defect categories, which are then further refined into concrete low-level defect categories with an assigned identifier (DC) and severity

  Defect Category of Beizer      | Product-Specific Category                                            | DC | Description of DC                                                | Severity
  1xxx Requirements              |                                                                      | R1 | Client not identified correctly                                  | critical
  11xx Requirements incorrect    |                                                                      | R2 | Goals and measures of case manager are not processed correctly   | normal
  16xx Requirement changes       |                                                                      | R3 | Update and termination of case incorrect                         | normal
  12xx Logic; 13xx Completeness  | GUI layout; syntactic specifications of input fields; error messages | R4 | Unsuitability of the system taking the organizational processes and procedures into account; incorrect handling of the syntactic or semantic constraints of the GUI | major
  4xxx Data                      |                                                                      | D1 | Incorrect access / update of client information, states etc.     | normal
  42xx Data access and handling  |                                                                      | D2 | Erroneous save of critical data                                  | critical
Step 3: Linkage of Requirements and Defect Categories

• Experience-based assignment of requirements to defect categories
• A peer review of the assignment is important
• Tests are derived for each requirement-to-defect-category assignment

[Figure: the prioritized requirements table from Step 1 linked to the defect categories of the product-specific defect taxonomy from Step 2.]
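A linkage like this can be kept as a plain mapping and sanity-checked mechanically before the peer review. A minimal sketch; the concrete requirement-to-category links below are hypothetical, not taken from the project:

```python
# Hypothetical Step 3 linkage: requirement id -> linked defect category ids.
LINKAGE = {
    "ANF_0053": ["D1"],        # Logging and Auditing
    "ANF_0057": ["R1", "D2"],  # LOGIN via OID
}

DEFECT_CATEGORIES = {"R1", "R2", "R3", "R4", "D1", "D2"}


def review_linkage(linkage, categories):
    """Completeness checks supporting the peer review of the assignment."""
    used = {dc for dcs in linkage.values() for dc in dcs}
    return {
        "requirements without category": [r for r, dcs in linkage.items() if not dcs],
        "unknown categories": used - categories,
        "categories not linked to any requirement": categories - used,
    }


print(review_linkage(LINKAGE, DEFECT_CATEGORIES))
# e.g. {'requirements without category': [], 'unknown categories': set(),
#       'categories not linked to any requirement': {'R2', 'R3', 'R4'}}
```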
Step 4: Definition of a Test Strategy with Test Patterns

• A test pattern consists of a test design technique with three test strengths and has assigned defect categories

  Id | Test Design Technique                       | Defect Categories      | Test Strength 1 (low) | Test Strength 2 (normal)  | Test Strength 3 (high)
  S: Sequence oriented
  S1 | Use case-based testing; process cycle tests | R1, R2, R3, F1, F2, F3 | Main paths            | Branch coverage           | Loop coverage
  S3 | State transition testing                    | I1, I2, F7, F8, F9     | State coverage        | State transition coverage | Path coverage
  D: Data oriented
  D1 | CRUD (Create, Read, Update and Delete)      | D1, D2                 | -                     | Data cycle tests          | Data cycle tests
  D3 | EP: Equivalence partitioning                | F3, F5, F6             | EP valid              | EP valid+invalid          | EP valid+invalid
  D4 | BVA: Boundary value analysis                | F3, F5, F6             | BVA valid             | BVA valid+invalid         | BVA, r values at boundaries
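Since each test pattern names the defect categories it targets, the patterns relevant to a requirement can be looked up from its linked categories. A small sketch using the table excerpt above:

```python
# Test patterns from the strategy table: pattern id -> targeted defect categories.
PATTERN_TARGETS = {
    "S1": {"R1", "R2", "R3", "F1", "F2", "F3"},  # use case-based testing
    "S3": {"I1", "I2", "F7", "F8", "F9"},        # state transition testing
    "D1": {"D1", "D2"},                          # CRUD
    "D3": {"F3", "F5", "F6"},                    # equivalence partitioning
    "D4": {"F3", "F5", "F6"},                    # boundary value analysis
}


def patterns_for(linked_categories):
    """Return the ids of all test patterns targeting any linked category."""
    return [p for p, targets in PATTERN_TARGETS.items()
            if targets & set(linked_categories)]


print(patterns_for({"R1", "D2"}))  # ['S1', 'D1']
```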
Test Design and Execution
[Figure: the prioritized requirements, the defect taxonomy (Step 2) and the test patterns of the test strategy (Step 4) feed the test design; the derived test cases are then executed. Example requirements:]

  REQ | Text     | PR
  1   | Workflow | high
  2   | Data     | normal

The test strength is derived from the priority of the requirement (PR) and the severities of the linked defect categories and failures (SDC, SF):

  PR     | SDC, SF                  | Test Strength
  high   | blocker, critical, major | 3
  normal | blocker, critical, major | 3
  normal | normal, minor            | 2
  low    | minor, trivial           | 1

Test execution records the result per test case and the severity of each failure:

  ID | Description    | Result | Comments | Severity
  1  | see test spec. | pass   | no       |
  2  | see test spec. | pass   | no       |
  3  | see test spec. | fail   | no       | critical
  4  | see test spec. | pass   | no       |
  5  | see test spec. | fail   | no       | minor
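The strength table above turns into a small lookup function. A sketch; combinations the table leaves open (e.g. a high-priority requirement with only trivial failures) are resolved conservatively here, which is an assumption:

```python
HIGH_SEVERITIES = {"blocker", "critical", "major"}


def test_strength(priority, severities):
    """Derive the test strength from the requirement priority (PR) and the
    severities of the linked defect categories and failures (SDC, SF)."""
    if priority == "high" or severities & HIGH_SEVERITIES:
        return 3
    if priority == "normal":
        return 2
    return 1


print(test_strength("normal", {"critical"}))  # 3
print(test_strength("normal", {"minor"}))     # 2
print(test_strength("low", {"trivial"}))      # 1
```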
Step 5: Analysis and Categorization of Failures after a Test Cycle

• Defects are exported from the defect management tool (Bugzilla)
• The severity assigned by testers is compared with the severity of the defect category and the priority of the requirement
• Weights are adapted if needed
• A precise statement about release quality becomes possible
• The additional information is valuable for release planning
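A minimal sketch of the comparison behind this step, with illustrative data: tester-assigned failure severities are checked against the severity of the linked defect category, and large deviations are flagged for the weight adaptation:

```python
SEVERITY_RANK = {"trivial": 0, "minor": 1, "normal": 2,
                 "major": 3, "critical": 4, "blocker": 5}

# (failure id, severity assigned by tester, defect category, category severity)
FAILURES = [
    ("F-101", "critical", "R1", "critical"),
    ("F-102", "minor",    "D2", "critical"),
]

for fid, tester_sev, dc, dc_sev in FAILURES:
    if abs(SEVERITY_RANK[tester_sev] - SEVERITY_RANK[dc_sev]) > 1:
        print(f"{fid}: tester severity '{tester_sev}' deviates from "
              f"severity '{dc_sev}' of category {dc} -> review weighting")
```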
Case Study

• Research questions:
  (RQ1) Defect taxonomy-supported testing reduces the number of system test cases.
  (RQ2) Defect taxonomy-supported testing increases the number of identified system failures per system test case.
• Evaluation by comparing
  (RQ1) the normalized number of test cases (NOT/SIZE) and
  (RQ2) the test effectiveness (NOF/NOT)
  in two similar projects from a public health insurance institution
Studied Projects

• Two projects performed by the same test organization in a public health insurance institution

                           | Project A                                                 | Project B
  Area                     | Application for case managers                             | Administration of clients of the public health insurance institution
  Staff                    | About 7                                                   | Up to 10
  Duration                 | 9 months development, now under maintenance               | 9 months development, now under maintenance
  Number of iterations     | 4                                                         | 3
  SIZE: NOR + NUC          | 41 + 14                                                   | 28 + 20
  Ratio of system testing  | 27% of overall project effort                             | 28% of overall project effort
  Test process             | ISTQB process + defect taxonomy-supported testing; manual regression testing | ISTQB process; manual regression testing
Data Collection Procedure

1. Requirements stored in Excel; priorities assigned by domain experts
2. Defect taxonomies stored in Excel; severities assigned by the test manager
3. Requirements assigned to defect categories in Excel
4. Test cases implemented in the test management tool TEMPPO
5. Defects stored in Bugzilla; severities assigned by testers
6. Defects exported to Excel and assigned to defect categories
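Steps 5 and 6 are straightforward to script. A sketch that reads a CSV export of the defects and assigns categories via a keyword map; the file name, column name and keywords are hypothetical, and real Bugzilla exports differ per installation:

```python
import csv
from collections import Counter

# Hypothetical keyword -> defect category map used during categorization.
KEYWORD_TO_CATEGORY = {"login": "R1", "save": "D2", "update": "D1"}


def categorize(summary):
    for keyword, category in KEYWORD_TO_CATEGORY.items():
        if keyword in summary.lower():
            return category
    return "uncategorized"  # left for manual assignment


with open("defects_export.csv", newline="") as f:  # export from Bugzilla
    distribution = Counter(categorize(row["summary"]) for row in csv.DictReader(f))

print(distribution)  # failures per defect category
```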
Results 1/2

• Collected metrics for Project A and Project B (NOR: number of requirements, NUC: number of use cases, NOT: number of test cases, NOF: number of failures):

  Metrics          | Project A | Project B
  NOR              | 41        | 28
  NUC              | 14        | 20
  SIZE (NOR + NUC) | 55        | 48
  NOT              | 148       | 170
  NOF              | 169       | 114
  NOT/SIZE         | 2.69      | 3.54
  NOF/NOT          | 1.14      | 0.67

• The lower NOT/SIZE indicates a reduction of test cases with DTST (RQ1)
• The higher NOF/NOT indicates increased test effectiveness with DTST (RQ2)
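The derived metrics can be recomputed directly from the table:

```python
PROJECTS = {
    "A": {"NOR": 41, "NUC": 14, "NOT": 148, "NOF": 169},
    "B": {"NOR": 28, "NUC": 20, "NOT": 170, "NOF": 114},
}

for name, m in PROJECTS.items():
    size = m["NOR"] + m["NUC"]
    print(name, round(m["NOT"] / size, 2), round(m["NOF"] / m["NOT"], 2))
# A 2.69 1.14
# B 3.54 0.67
```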
Results 2/2

• RQ1 is additionally supported by comparing the estimated number of test cases without DTST against the actual number of test cases with DTST:

  Module | NOT estimated | NOT DTST | Reduction
  GUI    | 32            | 29       | 9%
  M1     | 9             | 8        | 11%
  M2     | 58            | 41       | 29%
  M3     | 43            | 34       | 21%
  M4     | 14            | 13       | 7%
  M5     | 26            | 23       | 12%
  Total  | 182           | 148      | 19%

• A paired two-sample t-test on this table shows that the reduction of test cases is statistically significant (T = 2.209, df = 5, p = 0.039)
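The reported t-test can be reproduced from the module table with SciPy (the one-sided `alternative` argument needs SciPy >= 1.6):

```python
from scipy import stats

estimated = [32, 9, 58, 43, 14, 26]  # NOT estimated without DTST
with_dtst = [29, 8, 41, 34, 13, 23]  # NOT with DTST

res = stats.ttest_rel(estimated, with_dtst, alternative="greater")
print(round(res.statistic, 3), round(res.pvalue, 3))  # 2.209 0.039
```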
Interpretation

• In the case study we quantitatively observed that
  • DTST reduces the number of test cases and increases test effectiveness
  • Test cases are more goal-oriented towards identifying failures
  • Fewer resources are needed for test implementation, execution and evaluation
• The results are supported qualitatively by stakeholder feedback:
  • Testers noted that with DTST their test cases find more defects of high severity
  • Testers and managers noted less discussion on resource allocation
  • Test managers noted more realistic severity values and fewer unplanned releases
• Limitations of the case study:
  • Public health insurance domain
  • Two projects
  • Beizer's defect taxonomy
Validity of Results

• Construct validity
  • Application of standard measures on carefully selected projects
  • Data quality checks
• Internal validity
  • Triangulation of data:
    • Two independent projects
    • Consideration of estimated values
    • Qualitative feedback of stakeholders
• External validity
  • DTST is independent of the concrete defect taxonomy and based on a generic test process
  • Replication of the case study in other contexts as future work
• Reliability
  • DTST and the evaluation procedure are well-defined; the data is available
Cost Comparison and Decision of Application

• The estimation is structured according to the phases and steps of the test process
• Comparison of the cost (measured in time) of DTST and the standard ISTQB-based test process
• Break-even, and a recommendation to apply DTST, if the cost of DTST is smaller than the cost of the ISTQB process
• Return on investment provides the same qualitative result
• Adaptation and simulation via the following parameters:
  • Analysis and prioritization of a requirement
  • Linkage of a requirement to a defect category
  • Design and implementation of a test case
  • Execution time per test case
  • Maintenance time per test case
Cost Comparison for Project A

Test planning activities (Ph = person-hours):

  ISTQB activities                                                        | Ph    | DTST activities                                                                    | Ph
  Analysis and prioritization of requirements (40 requirements @ 0.5 Ph)  | 20.00 | Step 1: Analysis and prioritization of requirements (40 requirements @ 0.5 Ph)    | 20.00
                                                                          |       | Step 2: Creation of a product-specific defect taxonomy                            | 30.00
                                                                          |       | Step 3: Linkage of requirements and defect categories (40 requirements @ 0.25 Ph) | 10.00
  Definition of test strategy                                             | 30.00 | Step 4: Definition of a test strategy with test patterns                          | 40.00
  Total                                                                   | 50.00 | Total                                                                              | 100.00

Costs per phase (Ph):

  Phase                                                     | ISTQB  | DTST   | Diff (ISTQB - DTST)
  Test Planning (TP)                                        | 50.00  | 100.00 | -50.00
  Test Analysis and Design (TAD)                            | 200.00 | 166.00 | +34.00
  Test Execution, Evaluation, Maintenance (per test cycle)  | 110.00 | 92.50  | +17.50

Cumulative costs over the test cycles (Ph):

  After | ISTQB   | DTST    | Diff (ISTQB - DTST)
  TP    | 50.00   | 100.00  | -50.00
  TAD   | 250.00  | 266.00  | -16.00
  TC1   | 360.00  | 358.50  | +1.50
  TC2   | 470.00  | 451.00  | +19.00
  TC3   | 580.00  | 543.50  | +36.50
  TC4   | 690.00  | 636.00  | +54.00
  TC5   | 800.00  | 728.50  | +71.50
  TC6   | 910.00  | 821.00  | +89.00
  TC7   | 1020.00 | 913.50  | +106.50
  TC8   | 1130.00 | 1006.00 | +124.00

DTST breaks even during the first test cycle.
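The break-even analysis behind these tables is plain arithmetic. A sketch with the Project A phase costs:

```python
# Phase costs in person-hours (Ph) for Project A, from the tables above.
ISTQB = {"TP": 50.0, "TAD": 200.0, "per_cycle": 110.0}
DTST = {"TP": 100.0, "TAD": 166.0, "per_cycle": 92.5}


def cumulative(costs, cycles):
    return costs["TP"] + costs["TAD"] + cycles * costs["per_cycle"]


for n in range(1, 9):
    diff = cumulative(ISTQB, n) - cumulative(DTST, n)
    print(f"after TC{n}: ISTQB - DTST = {diff:+.2f} Ph")
# after TC1: ISTQB - DTST = +1.50 Ph   <- break-even in the first cycle
# ...
# after TC8: ISTQB - DTST = +124.00 Ph
```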
Cost Comparison Scenarios for Project A

[Two charts: cumulative cost in Ph for DTST vs. ISTQB over the phases TP, TAD and the test cycles TC1-TC8, once with an average test execution time of 0.5 h per test case and once with 0.1 h.]
Summary and Future Work

• Summary
  • A standard-aligned, defect taxonomy-supported system test process
  • An industrial case study indicates more effective test cases
  • A procedure for deciding whether to apply the approach, based on cost comparison
• Future work
  • Impact of different defect taxonomy types
  • Automatic recommendations for assignments
  • Requirements validation with defect taxonomies
  • Regression testing with defect taxonomies
  • Application of defect taxonomy-supported testing for release planning
  • Investigation of the value contribution of defect taxonomy-supported testing, e.g. reduced release planning uncertainty, fewer severe bugs in operation
Questions or Suggestions?