Author: Jan Fish
Email: jan.fish@philips.com
Division: Philips Lifeline IT
September 2008
Measure Quality on the Way In – Not Just on the Way Out
For Your Consideration
• Traditional Measurements for Test Organizations
• Value Added Opportunities
• 4-Phase Approach
  – Target versus Actual Progress
  – Bug Patterns within Builds / Deployments
  – Workload Assessment for Outstanding Bugs
  – Bug Injection Points and Bug Removal Points
• Measure Quality on the Way In; Not Just on the Way Out
Traditional Measurements for Test Organizations
TEST METRICS:
- A standard of measurement
- Gauge effectiveness and efficiency
- Gathered and interpreted throughout the test effort
- Objectively measure success
PHILOSOPHY:
- Keep it simple
- Make it meaningful
- Track it
- Use it
Traditional Measurements for Test Organizations
BASE METRICS may include numbers and/or percentages for:

- Test cases created
- Test cases in review
- Test cases to be executed
- Test cases re-executed
- Test cases executed
- Total executes
- Test cases passed
- Total test cases passed
- Test cases failed
- Total test cases failed
- Test cases blocked
- Defect removal cost
- Bad test cases
- Bad fixes
- Defects corrected
- Test effectiveness (QA / (QA + Prod))
Value Added Opportunities

Forecast, Track and Respond: Inputs: Number of Test Cases, Testers and Cycle Time
First Run Failure Rate: Inputs: Failed Test Cases / Executed Test Cases
What is the Pattern: Inputs: Known Patterns vs. Current Pattern
Workload Distribution: Inputs: Number and Type of Bugs in DEV, QA and Resolved
Bug Injection and Removal Points: Inputs: Error Created and Error Found
Plan It, Track It and Graph It: Inputs: Actual Progress to Targeted Goal
Target vs. Actual – Existing Project

Target Date   | Testcases Executed | Testcases Passed | Testcases Failed | Failure Rate (%)
              | Plan | Actual      | Plan | Actual    | Plan | Actual    | Forecast | Actual
Week 1        | 30   | 19          | 24   | 16        | 6    | 3         | 20.0%    | 15.8%
Week 8        | 275  | 110         | 384  | 59        | 11   | 46        | 20.0%    | 41.8%
Week 14       | 20   | 6           | 8    | 0         | 2    | 6         | 10.0%    | 100%
Running total | 475  | 201         | 431  | 102       | 19   | 87        | 4.0%     | 43.3%

43% actual failure rate vs. the 4.0% forecast
New and Improved Target versus Actual

Target Date   | Executed (Tgt/Act) | Passed (Tgt/Act) | Failed (Tgt/Act) | Retests (Pend/Done) | Failure Rate (Fcst/Act) | % Done (Tgt/Act)

Smoke Tests
6/18          | 3 / 3   | 3 / 2   | 0 / 1 | 1 / 0 | 0% / 33%   |
Running total | 3 / 3   | 3 / 2   | 0 / 1 | 1 / 0 | 0% / 33%   | 9% / 9%

Functional Tests
6/19          | 4 / 3   | 3 / 0   | 1 / 3 | 4 / 0 | 25% / 100% |
Running total | 7 / 6   | 6 / 2   | 1 / 4 | 4 / 0 | 14% / 67%  | 21% / 18%
6/20          | 6 / 7   | 5 / 6   | 1 / 1 | 5 / 0 | 17% / 14%  |
Running total | 13 / 13 | 11 / 8  | 2 / 5 | 5 / 0 | 15% / 38%  | 38% / 38%

Exploratory Tests
6/23          | 2 / 2   | 2 / 2   | 0 / 0 | 3 / 2 | 0% / 0%    |
Running total | 15 / 15 | 10 / 10 | 2 / 5 | 3 / 2 | 13% / 33%  | 35% / 44%

Load Tests
6/24          | 3 / 3   | 3 / 3   | 0 / 0 | 0 / 3 | 0% / 0%    |
Running total | 18 / 18 | 10 / 13 | 2 / 5 | 0 / 5 | 11% / 28%  | 35% / 53%

Regression
6/25          | 15 / 15 | 15 / 14 | 0 / 1 | 1 / 0 | 0% / 7%    |
Running total | 33 / 33 | 25 / 27 | 2 / 6 | 1 / 5 | 6% / 18%   | 79% / 97%
6/26          | 1 / 1   | 1 / 1   | 0 / 0 | 0 / 1 | 0% / 0%    |
Running total | 34 / 34 | 26 / 28 | 2 / 6 | 0 / 6 | 6% / 18%   | 82% / 100%
New and Improved Chart

[Chart: Planned Value vs. Earned Value. Percentage of NBR test cases complete, Targeted vs. Actual, plotted by date from 6/18 to 6/26; y-axis 0% to 120%.]
Target vs. Actual Math

• Target Test Cases Executed = Target Test Cases Passed + Target Test Cases Failed
• Actual Test Cases Executed = Actual Test Cases Passed + Actual Test Cases Failed
• Pending Retests = Previous Pending Retests + Actual Test Cases Failed - Retests Done
• Running Total = Actual Values + Previous Reported Values
• Forecast Failure Rate = Target Test Cases Failed / Target Test Cases Executed
• Actual Failure Rate = Actual Test Cases Failed / Actual Test Cases Executed
• Forecast % Done = Current Target (Passed + Failed) / Final Target Test Cases Executed
• Actual % Done = Current Actual (Passed + Failed) / Final Target Test Cases Executed
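The math above can be captured in a few lines. A minimal sketch, assuming a simple Python spreadsheet replacement; the function names are illustrative, not from the deck:

```python
# Minimal sketch of the Target vs. Actual math above.
# Function names are illustrative, not part of the original deck.

def executed(passed, failed):
    # Test Cases Executed = Test Cases Passed + Test Cases Failed
    return passed + failed

def pending_retests(previous_pending, actual_failed, retests_done):
    # Pending Retests = Previous Pending + Actual Failed - Retests Done
    return previous_pending + actual_failed - retests_done

def failure_rate(failed, executed_count):
    # Failure Rate = Failed / Executed (targets give the forecast rate,
    # actuals give the actual rate)
    return failed / executed_count if executed_count else 0.0

def percent_done(passed, failed, final_target_executed):
    # % Done = Current (Passed + Failed) / Final Target Test Cases Executed
    return (passed + failed) / final_target_executed

# The 6/18 smoke-test row: 2 passed, 1 failed, 34 total planned test cases.
print(executed(2, 1))                       # 3 executed
print(round(failure_rate(1, 3) * 100))      # 33 (% actual failure rate)
print(round(percent_done(2, 1, 34) * 100))  # 9 (% done)
```

Feeding each day's raw counts through these functions reproduces the derived columns in the table above.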
The Added Values of Target vs. Actual
TRACK to plan and REACT immediately
- Is quality built in?
- Is the build / deployment truly ready for test?
- Is the test resource on schedule with each type of test execution?
  - Functional
  - Regression
  - Load / Performance / Security
  - Bug Fix Validation
The Added Values of Target vs. Actual
PREDICT how many test cases should be run in a given time period
To date, our First Run Failure Rates are:
- 7% - 12%: maintenance of existing functionality
- 20% - 35%: added functionality to an existing application
- 30% - 47%: new application developed on-site
- 10% - 25%: new application developed off-site
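Historical bands like these can be turned into a rough plan for retest capacity. A hypothetical sketch; the band labels simply restate the rates above, and nothing here comes from a real tool:

```python
# Hypothetical sketch: forecasting first-run failures from historical
# failure-rate bands. Labels restate the rates listed above.

FIRST_RUN_FAILURE_BANDS = {
    "maintenance of existing functionality": (0.07, 0.12),
    "added functionality to existing application": (0.20, 0.35),
    "new application developed on-site": (0.30, 0.47),
    "new application developed off-site": (0.10, 0.25),
}

def forecast_failures(planned_executions, project_type):
    # Expected first-run failures as a (low, high) range.
    low, high = FIRST_RUN_FAILURE_BANDS[project_type]
    return (round(planned_executions * low), round(planned_executions * high))

# Planning 200 test cases against added functionality:
print(forecast_failures(200, "added functionality to existing application"))
# (40, 70) -> plan retest capacity for roughly 40-70 failed test cases
```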
The Added Values of Target vs. Actual
ADJUST time and / or staff on an immediate basis
- Substantiate "gut feel"
- Demonstrate facts
- Moderate re-work and know if it fits the schedule
- Adjust the plan and determine the level of effort well before the last quadrant of the test cycle
The Added Values of Target vs. Actual
SET realistic, track-based entrance criteria (as agreed upon with upstream partners) for what is or is not acceptable quality at the start of the project
USE the results to document expectations into project contracts (internal or external)
- Establish what constitutes an "acceptable quality level"
- Set contract conditions for scaled fees based on exceeding, meeting or failing quality objectives
The Added Values of Target vs. Actual
PUBLISH and POST printed copy in a common area
- Eliminate the arguments
- Eliminate the negative chatter
- Document facts; don't point fingers
- Grid each application/area's failure rates in like-sets
- Review to determine likely process improvements
- Continue to track each implemented improvement and assess whether it had a positive or negative impact
Bug Pattern Recognition – Sample 1

Build  | Date  | Critical | High | Medium | Low | Total | Running Total
b1     | 02/01 | 2        | 15   | 30     | 13  | 60    | 60
b1     | 02/06 | 0        | 10   | 27     | 23  | 60    | 120
b1     | 02/13 | 1        | 5    | 10     | 28  | 44    | 164
b2     | 02/15 | 10       | 3    | 0      | 0   | 13    | 177
b2     | 02/20 | 5        | 15   | 6      | 0   | 26    | 203
b2     | 02/22 | 0        | 23   | 12     | 14  | 49    | 252
b2     | 02/26 | 0        | 5    | 13     | 7   | 25    | 277
b3     | 02/29 | 27       | 12   | 0      | 0   | 39    | 316
TOTALS |       | 45       | 88   | 98     | 85  | 316   | 316
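The totals and running-total column in a table like Sample 1 can be tallied mechanically. A minimal sketch over the b1 rows, with the data copied from the table and the structure my own:

```python
# Minimal sketch: per-build bug tallies with a running total, as in the
# Sample 1 table (abridged to build b1). Row structure is illustrative.

rows = [
    # (build, date, critical, high, medium, low)
    ("b1", "02/01", 2, 15, 30, 13),
    ("b1", "02/06", 0, 10, 27, 23),
    ("b1", "02/13", 1, 5, 10, 28),
]

running_total = 0
for build, date, critical, high, medium, low in rows:
    total = critical + high + medium + low
    running_total += total
    print(build, date, total, running_total)
# b1 02/01 60 60
# b1 02/06 60 120
# b1 02/13 44 164
```

Plotting `total` per date is what exposes the bug patterns within builds discussed next.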
Pattern Recognition – Existing Project 2

Build Date | Date Found | Bugs Found | Running Total
02/04/2008 | 02/05/2008 | 0          | 0
           | 02/07/2008 | 1          | 1
02/11/2008 | 02/12/2008 | 2          | 3
           | 02/14/2008 | 0          | 3
02/19/2008 | 02/20/2008 | 1          | 4
           | 02/22/2008 | 2          | 6
02/25/2008 | 02/26/2008 | 1          | 7
           | 02/28/2008 | 1          | 8

TOTAL: 8 bugs (7 Critical, 1 High, 0 Medium, 0 Low)
The Added Values of Pattern Recognition
INSTITUTE a process change and know the effect
INCREASE the precision of estimates
IDENTIFY trends and observations supporting or hindering test and project plan
INFORM team and management of patterns found in a crisp, clean and simple manner such that the whole team can make “next step” decisions
Bug, Bug, Who's Got the Bug?

Date  | Bugs in Dev (C / H / M / L / TT) | Bugs in QA (Rev / Fix / TT) | Resolved (C / Rej / Fut / TT)
11/02 | 0 / 1 / 2 / 0 / 3                | 0 / 0 / 0                   | 0 / 0 / 0 / 0
11/19 | 0 / 2 / 2 / 0 / 4                | 4 / 7 / 11                  | 4 / 2 / 0 / 6
11/27 | 1 / 10 / 9 / 2 / 22              | 3 / 5 / 8                   | 9 / 2 / 0 / 11

Bugs in Dev = pending development fix; Bugs in QA = pending QA retest; TT = total.
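This grouping is easy to compute from raw bug records. A hypothetical sketch mirroring the 11/27 row above; the field and key names are mine, not from any defect tracker:

```python
# Hypothetical sketch of a workload snapshot by group, mirroring the
# 11/27 row above. Field and key names are illustrative.

def workload_snapshot(dev_by_severity, qa_retest, resolved):
    # Sum each group so workloads can be read at a glance.
    return {
        "pending_dev_fix": sum(dev_by_severity.values()),
        "pending_qa_retest": sum(qa_retest.values()),
        "resolved": sum(resolved.values()),
    }

snapshot = workload_snapshot(
    dev_by_severity={"critical": 1, "high": 10, "medium": 9, "low": 2},
    qa_retest={"reviewed": 3, "fixed": 5},
    resolved={"closed": 9, "rejected": 2, "future": 0},
)
print(snapshot)
# {'pending_dev_fix': 22, 'pending_qa_retest': 8, 'resolved': 11}
```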
The Added Values of Bug Reporting by Group
WORKLOADS can be easily identified by management and team members
INFORMED DECISIONS can be made on next steps without drilling into details (who, when, what, how and where)
RE-WORK can be tracked and compared to planned work
PROGRESS toward resolving outstanding bugs can be seen at a glance
Added Values of Bug Injection & Removal Points
BASIS for estimating the number of bugs to be found at each phase of the Software Development Lifecycle
IDENTIFY work cycles that would benefit from process improvements and inspection points
VALIDATE that improvements work
DEMONSTRATE that quality must be the goal of all team members, not just the responsibility of the test organization
“Quality is never an accident; it is always the result of intelligent effort.” *
*John Ruskin, English writer and critic of art, architecture and society. 1819 - 1900
Measure Quality on the Way In; Not Just on the Way Out
• PEOPLE do not intentionally make a bad plan, but they may not be able to quickly adjust the plan to current circumstances and abate risks

• PEOPLE do look for paths of least resistance and, if it is easy to blame others, they will

• MEASURING quality once the product is in production tells the tale of what was not trapped and fixed, but not the tale of quality before production

• QUALITY cannot be tested into a product, but you can measure the quality coming into your test organization

• FOCUS on the level of quality at the project level