Description
Why bother with measurement and metrics? If you never use the data you collect, this is a valid question—and the answer is “Don’t bother, it’s a waste of time.” In that case, you’ll manage with opinions, personalities, and guesses—or even worse, misconceptions and misunderstandings. Based on his more than forty years of software and systems development experience, Ed Weller describes reasons for measurement, key measures in both traditional and agile environments, decisions enabled by measurement, and lessons learned from successful—and not so successful—measurement programs. Find out how to develop and maintain consistent data and valid measures so you can estimate reliably, deliver products with known quality, and have happy users and customers—the ultimate trailing indicator. Learn to manage projects dynamically with the support of current metrics and data from past projects to guide your management planning and control. Join Ed to explore how to invest in measurements that provide leading indicators to help you meet your company and customer goals.
TJ Half‐day Tutorial 6/4/2013 8:30 AM
"Software Metrics: Taking the Guesswork Out of
Software Projects"
Presented by:
Ed Weller Integrated Productivity Solutions, LLC
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073 888‐268‐8770 ∙ 904‐278‐0524 ∙ [email protected] ∙ www.sqe.com
Ed Weller Integrated Productivity Solutions, LLC
Ed Weller is the principal in Integrated Productivity Solutions, providing solutions to companies seeking to improve productivity. Ed is internationally recognized as an expert in software engineering and in particular software measurement. His focus on quality started with his work on the Apollo program with General Electric; was reinforced during his work as a hardware, test, software, and systems engineer and manager on mainframe systems at Honeywell and Groupe Bull; and continued as the process group manager on the Motorola Iridium Project. For the past fourteen years, Ed has worked to improve the quality and productivity in small to very large companies that develop engineering and IT applications.
Software Metrics: Taking the Guesswork Out of Software Projects
Ed Weller
Integrated Productivity Solutions
© 2013, E. Weller
Agenda
Why do we measure?
What should we measure?
How should we measure?
What do we do with measurements?
2
Demographics
How many of you
Have a recognizable life-cycle process?
• Agile/SCRUM?
• Waterfall or other life cycle?
• Don’t know?
Are project managers/SCRUM Masters?
Are testers?
Are developers?
Are managers (one or more levels above project managers)?
Are measurement analysts?
3
We All Know How to Count
We learned to count before starting school
We learned to multiply and divide in the 3rd or 4th grade
So arithmetic isn’t the problem!
It is knowing why, what, and how to measure, and then knowing what to do with the results
4
WHY DO WE MEASURE?
5
What’s Our Target?
All too often the end is measurement itself
“Measurement is good”
“We gotta measure something”
“Go forth and measure!”
6
Measurement Is an Input to Decision Making
Regardless of what we build, how we build it, or who builds it, someone somewhere is making decisions
Should we invest in product A or B?
Should we invest in company A or B?
Should we ship this product?
Should we cancel this project?
Do we have problems needing corrective action?
Will we have problems that need preventive action?
Today’s measurement is used in tomorrow’s estimates
An investment in future decision making
7
Information for Decision Making
Informed decision making requires
Understanding what’s important for success
Relating what’s important to indicators that
• Identify significant deviations
• Tell us we are on track
• Predict we will stay on track
Indicators are based on what we measure
Measurement needs to be
• Reasonably accurate
• Consistent measurement
• Clear definitions
• Worth the cost
• Seen as valuable or useful by data providers
8
Measurement Allows Evaluation and Decisions
Subjective
Feels good
Looks right
Fun to use
Objective
Return on investment
Fitness for use
• Performance
• Reliability
• Usability
Comparison, evaluation, tracking
9
We Measure to Enable Correct Decisions
Personal
Where to take vacations
What brand of ____ to buy
What airline to fly
Business
Which product to build
Staffing levels
Schedules
Project status
Product release
• Quality
• Time to market
10
Subjective Measurement?
Grand Canyon National Park
11
Subjective Measurement
“Just a big hole in the ground” *
“I wanted my father to see this” (overseas tourist)
Opinions matter when value is subjective

* Context sensitive - could be objective if stated in cubic kilometers
12
Objective Measurement
Facts matter when value is objective
What product should we invest in?
How much will it cost?
When will it be ready?
Will it satisfy our customers (quality aspects)?
• Functionality
• Reliability
• Performance
• Security
• Cost
• Etc.
13
When Objective Measures Are Not Available
Opinions and loud voices are the basis for decisions
What’s your opinion worth?
Compare that to opinions held by your boss, grandboss, or great-grandboss
Who wins?
In the absence of data, managers only have opinions, experience, intuition, and “gut feel” as a basis for decisions
Data is welcomed (most of the time)
Data will trump opinions (most of the time)
14
Business Imperatives
Businesses need to be profitable to survive in the long run
Cost to build the product includes
• Effort (developers, testers, managers, support, etc.)
• Development environment
• Test environment
• Others?
Must deliver to meet market demands
• How long will it take? When will we be finished?
• With sufficient functionality to create demand
• With sufficient quality to (at least) satisfy users
What will customers pay for the product?
15
Business Imperatives and Decisions
Decisions are made using a range of estimating inputs
Guesswork and intuition
Experience with similar products or services
Data from similar products or services
Have you ever faced this bargaining method:
“If we cannot deliver by xxx, we will go out of business”
“If you cannot do this project with this budget by this date, I will find someone who can”
16
When Opinion Trumps Data
A tale of two companies
Company 1 – owned a market niche, but was facing new entrants
• Marketing demanded 6-month schedules in the face of one-year estimates from development
• 6 months in, faced with reality, the project was cancelled
• Repeat the above two steps for 18 months
• No new product delivery in 18 months, lost 50% of the market
Company 2 – made customer commitments without regard to development estimates
• Similar cycle to above; the division was eventually closed down
17
How Can Measurement Help?
Historical data sets the bounds of reality
When reality and desires do not match, something has to give
Less functionality (prioritized functionality)
More time
Less waste
More effective and efficient development and test methods
18
ELEMENTS OF METRICS
19
Metrics
Base measures = what we can count
Derived measures = relationship between two base measures
Indicators = base or derived measures that tell us something (useful)
How do we drill down from business objectives to indicators that identify the measures?
20
Drilling Down from Business Imperatives/Objectives (1)
What are the elements of cost? (a short sketch follows below)
People cost = effort * rate (sometimes just person hours - rate is not used)
$$ cost for development and test environment
$$ cost for COTS or custom software
Overhead costs (vacation, sick leave, training)
What are the elements of value?
Volume and sale price (product)
Contribution to business (internal IT infrastructure) or cost of lost business
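As a concrete illustration, here is a minimal sketch of how the cost elements above might combine. The function name and every dollar figure are illustrative assumptions, not numbers from this tutorial.

```python
# Minimal sketch: summing the cost elements listed above.
# All names and numbers are illustrative assumptions.

def project_cost(effort_hours: float, rate: float, dev_env: float,
                 test_env: float, cots: float, overhead: float) -> float:
    """People cost (effort * rate) plus environment, COTS, and overhead costs."""
    return effort_hours * rate + dev_env + test_env + cots + overhead

# Example: 10,000 person-hours at $85/hour plus fixed costs.
print(project_cost(10_000, 85.0, dev_env=40_000, test_env=25_000,
                   cots=15_000, overhead=60_000))  # 990000.0
```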
21
Drilling Down from Business Imperatives/Objectives (2)
What are the elements of time/schedule?
Elapsed time
Schedule variation
What are the elements of quality?
Defects (pre-ship)
• Functional – easy to quantify
• Non-functional – hard(er) to quantify as judgment is sometimes subjective
Failures (post-ship)
Customer surveys
• Level of subjective/objective evaluation varies
22
Providers and Users (1)
Base measures are typically provided by the bottom of the pyramid
Users are distributed across the levels
Feedback is critical
[Pyramid diagram: Exec at the top, Management and Project below; data flows up from the providers at the base, feedback flows back down]
23
Providers and Users (2)
What happens when the users forget to tell the providers how the data is used?
“Collecting all that data is a waste of time”
“You can’t use that data for planning, we made it up”
Measurement becomes a standing joke at the provider level
Random number generator provides data
Data providers must see the value of time spent collecting and reporting the data
24
MEASURING COST
25
Why Track Cost?
To know what we have spent on a project
To know what is left of the budget
To know (estimate) whether or not we will finish within budget
Do we need to add resources?
Should we cancel the project?
To provide a basis for estimating future projects
The funding person or organization has the right to know if they are making a sound investment
If you cannot estimate, how can you make decisions?
26
Components of Cost
Effort in person hours/days/months
Usually the primary cost element
Functional organizations complicate logging
• Multitasking amongst multiple projects
• Inaccurate logging
Simplified in Agile/SCRUM (see the sketch below)
• Team size * length of sprint
• Minus training, non-project activities, vacation, etc.
Most (?) companies track project cost – the minimum needed for financial accounting
But what is the effort spent on productive tasks?
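A minimal sketch of the Agile/SCRUM simplification noted above: team size times sprint length, minus non-project time. The helper name and example values are assumptions for illustration.

```python
# Minimal sketch of the effort calculation above:
# team size * length of sprint, minus training, vacation, etc.

def sprint_effort_hours(team_size: int, sprint_weeks: float,
                        hours_per_week: float = 40.0,
                        non_project_hours: float = 0.0) -> float:
    """Gross team hours for one sprint, less non-project time."""
    return team_size * sprint_weeks * hours_per_week - non_project_hours

# Example: 7 people on a 2-week sprint, 56 hours lost to training and meetings.
print(sprint_effort_hours(7, 2, non_project_hours=56))  # 504.0
```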
27
Development vs. Rework
Why do we need to track rework?
Cost of poor quality often/usually exceeds 50% of the total project or organization budget
• If you do not know what your ratio is, it is virtually certain rework is >50% of the total
• Cost of poor quality = effort spent on rework
Rework is waste
28
Where’s the Money Going?
[Pie chart: Budget split between Development and Defect Rework]
Need to differentiate development costs and rework costs
29
Rework = SCRAP = LOSS
I started in hardware development
Defects resulted in scrap
Scrap was written off of inventory
Inventory was counted by finance
We paid attention to rework costs
30
Software Scrap = ?
How many of you measure your “software scrap”?
How do you define it?
How do you measure it?
What do you do with it?
Rework definition
Effort spent redoing something that should have worked
• Developer effort to fix defects found in reviews, test, or production
• Test effort to retest and regression test fixes
So what is a defect?
31
Identifying Defects and Rework Effort (1)
If there are formal test plans and activities, a defect is nonconformance to specification found in reviews or test
All effort spent on identifying, fixing, and retesting is rework
No formal test plans or activities
Total project effort spent in testing activities (estimate by headcount and months in test)
Subtract effort to complete one pass of all tests (cost of conformance) – a sketch of this estimate follows
• This cost is usually less than 10% of the total in the absence of accurate data collection
• If you do not know this number, ignore it, as the total cost is close enough
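A minimal sketch of the rework estimate above for projects with no formal test plans: total testing effort minus one clean pass of all tests (the cost of conformance). All inputs are illustrative assumptions.

```python
# Minimal sketch: rework = total test effort - cost of conformance
# (one full pass of all tests). Example numbers are assumptions.

def test_rework_hours(testers: int, months_in_test: float,
                      hours_per_month: float = 160.0,
                      one_pass_hours: float = 0.0) -> float:
    """Estimate rework effort from headcount and months in test."""
    return testers * months_in_test * hours_per_month - one_pass_hours

# Example: 4 testers for 3 months; one full pass takes 150 hours.
print(test_rework_hours(4, 3, one_pass_hours=150))  # 1770.0
```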
32
Identifying Defects and Rework Effort (2)
Agile/SCRUM development
Lots of disagreement on what is a defect
• In Test-Driven Development (TDD), tests may be run before functionality is complete; test failures are not defects
• However, if the functionality was “done”, test failures should/could be classified as defects
Defects within a sprint will take care of themselves – no need to track separately
• A high defect rate requiring rework will lower velocity
Defects found later will result in user stories in a future sprint
• These need to be tagged as “rework points”
33
Tracking Agile Rework (1)
Rework points
If the defect pushes completion to the next sprint, velocity in the current sprint is reduced – “self correcting”
If system test or production defects are converted to story cards and points in future iterations, track these points as rework that lowers the “net velocity”
Defects found outside the sprint suggest more defects were found and fixed inside the sprint (inverting “buggy products continue to be buggy” to “If it is buggy now, it was buggier earlier”)
34
Tracking Agile Rework (2)
If your velocity looks something like this:
[Chart: Velocity in points (0-25) bouncing around across sprints 1-9]
You could have a rework problem
35
Impact of Rework
For the velocity shown, 13% of the velocity is due to rework (red)
If you do not measure this, you are losing productivity and don’t know it (green = net velocity)
[Two charts across sprints 1-9: Velocity and Net Velocity, in points (0-25)]
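A minimal sketch of the “net velocity” idea above: subtract tagged rework points from raw velocity per sprint. The sprint numbers are invented so that roughly 13% of total points are rework, mirroring the example in the charts.

```python
# Minimal sketch: net velocity = raw velocity - tagged rework points.
# Sample numbers are assumptions chosen so ~13% of points are rework.

velocity      = [20, 22, 18, 21, 23, 19, 22, 20, 21]   # raw points per sprint
rework_points = [ 2,  3,  2,  3,  3,  2,  3,  3,  3]   # defect-fix stories

net_velocity = [v - r for v, r in zip(velocity, rework_points)]
rework_share = sum(rework_points) / sum(velocity)

print(net_velocity)                                 # the "green" chart
print(f"{rework_share:.0%} of velocity is rework")  # ~13%, the "red" share
```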
36
Points and Effort (1)
How should we measure and compare points across Agile teams (or teams regardless of methodology)?
Point “effort” between teams working in different domains, products, or languages will be different
Trying to make “points” between teams “equal” would jeopardize team estimating
• Consistent team velocity and size (point) estimating is critical to team success
Move any normalization outside the team
• Effort/velocity by team will be more useful than forcing a measure across multiple teams
• Do not assign a “goodness” rating to velocity
37
Points and Effort (2)
How do you normalize?
No easy solution
Different technologies, complexity, etc.
“Traditional approaches”
• Function points
• Lines of code (only for identical languages and similar work)
• Product value
Not a pure numbers comparison
• If A > B, we have to evaluate what that really means
• Do not assume A is better than B
• Use differences to stimulate thinking about why there are differences
38
Points and Effort (3)
The real goal is to maximize productivity of the team
Upward trend in points until it consistently achieves a similar value
• Minimal rework
• Retrospectives focus on efficiency (or lack thereof)
• Product owner not available
• Manual test vs. automated test
• Annual budget commitment delays
• Multitasking
39
Free Time Is Not Free (e.g., Overtime)
“Free time” is unpaid/casual work over 40 hrs/week
Use of unpaid overtime has personnel impacts we understand (but often ignore)
Business impact is rarely evaluated or understood
Someone, somewhere is deciding where to allocate resources for competing projects
Wrong decisions can be due to
Inaccurate estimating
Willful underestimating depending on “free time”
Let’s look at two examples
40
Tale of Two Projects (1)
Same net return, same initial estimate, but one project uses 50% additional “free time”
[Bar chart for projects 1 and 2: Estimate, Free Time, Actual, Value, Est “ROI”, Actual “ROI” (0-250)]
41
Tale of Two Projects (3)
Same return, one project hides free time or underestimates by 50%
Project 2 looks better, but another project might be better than both
[Bar chart for projects 1 and 2: Estimate, Free Time, Actual Cost, Value, Est “ROI”, Actual “ROI” (0-160)]
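A minimal sketch of the comparison behind the two charts above: when 50% extra unpaid effort is left out of the cost, the estimated ROI overstates the real return. The numbers are illustrative assumptions, not the chart data.

```python
# Minimal sketch: hidden "free time" inflates the estimated ROI.
# All values are assumptions for illustration.

def roi(value: float, cost: float) -> float:
    """Simple return on investment: (value - cost) / cost."""
    return (value - cost) / cost

estimate  = 100.0   # estimated (paid) effort cost
free_time = 50.0    # unpaid overtime: 50% extra effort, left off the books
value     = 150.0   # business value delivered

print(f"Estimated ROI: {roi(value, estimate):.0%}")              # 50%
print(f"Actual ROI:    {roi(value, estimate + free_time):.0%}")  # 0%
```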
42
No Free Time
Whether or not free time is counted, it is a resource used by projects
When it is ignored, poor estimating leads to poor decision making
The true effort cost of the project is hidden
Other opportunities with better returns are not chosen
43
MEASURING SCHEDULE
44
Easy to Measure, Hard to Get “Right”
Of cost, schedule, and quality, schedule is the easiest to measure
If “right” means delivering on the date set at the project start, many forces conspire to make it hard
Market forces
• Annual dates
• Competition
• Regulatory agencies
Poor product planning
• Catch up with product features and applications
• No control over customer requests by marketing – everything is “#1” priority
45
Schedule Measures
Days, weeks, or months ahead of/behind schedule
“What is the probability of finishing late?”
• Project managers can answer this
“What is the probability of finishing early?”
• “What do you mean, finish early? This is a best case schedule” *
Can be combined with effort measures – the Earned Value “Schedule Performance Index” (SPI)
For effort spent and tasks completed, where are we with respect to schedule, expressed as a value relative to 1 (<1 = behind, >1 = ahead); a sketch follows

* From “Controlling Software Projects” by Tom DeMarco
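A minimal sketch of the SPI calculation mentioned above, assuming earned value and planned value are tracked in consistent units; the example task values are illustrative.

```python
# Minimal sketch of the Earned Value Schedule Performance Index:
# SPI = earned value / planned value; <1 behind, >1 ahead.

def schedule_performance_index(earned_value: float, planned_value: float) -> float:
    return earned_value / planned_value

# Example: 40 units of work completed where the plan called for 50 by today.
print(f"SPI = {schedule_performance_index(40, 50):.2f}")  # 0.80 -> behind
```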
46
Critical Path Schedule Measures
Single-dimensional view of progress – only looks at tasks on the critical path
Ignores tasks not on the critical path
Often used with “Line of Balance” charts to hide problems
The following slide is a simple representation of tasks, with the critical path “on schedule” today
Conveniently ignores the impact of the two tasks that are behind
Usually get some mumbo-jumbo about the green offsetting the yellow
47
Gantt Chart Critical Path Fakery
[Gantt chart, JAN-DEC with a “Today” marker: ten task lines where the critical path is “on schedule” while two non-critical tasks remain incomplete and behind. Legend: Task, Early, Critical Path, Task not complete]
48
Use Both Critical Path and SPI
Focus on critical path tasks
Do not overlook non-critical task backlog that could end up on the critical path
Any task slipping far enough will end up on the critical path
49
MEASURING DEFECTS
50
Why Is It So Difficult to Use Defect Data?
Nearly everyone has a defect tracking tool of some kind
How can we use the data effectively to understand, plan, and control our work?
Almost no one has any idea of the number of defects found and fixed in developer testing
Why? Two reasons:
51
What Do Defects Tell Us?
A measure of quality
Not perfect – relationship to production failures is not obvious; inconsistent measurement
• Disregarding severity when counting defects
• Counted only in some testing activities
  • Unit testing numbers are rarely counted
  • Integration testing defects may not be counted
• Production defect tracking often ends shortly (2-3 months) after delivery
• Small defects can cost a lot (can anyone top $1.4B?)
• Most defects lie dormant for a long time *
But there is a gold mine of information when used correctly

* Edward Adams, “Optimizing Preventive Service of Software Products,” IBM Systems Journal, Jan 1984
52
Using Historical Defect Data
What’s the choice?
Trailing indicators
• Production failures and high defect rates
• Loss of business
• Inefficient internal IT operations
Leading indicators
• Development and test defect data
• Defect removal effectiveness
• Defect injection rates
• Complexity
• Unit test coverage
• Etc.
53
Defect Removal Effectiveness (1)
Use the historical system test to production defect ratio to predict the future
Need to track multiple releases
Need to track for the life of the release – 2-3 months will not suffice
System test effectiveness is typically near or below 50% (if you measure the lifetime of the release) *
Variability makes this measure unreliable
Dependent on uncontrolled development and early test activities
• Unit test variability from 20% to 80% – 4:1 more (or fewer) defects into system test

* See any of Capers Jones’s books on software quality
54
Defect Removal Effectiveness (2)
How do we extend to all testing and review activities?
Remember?
Need to address both issues to get good data
Lost cause in “punishment centric” organizations
55
Defect Removal Effectiveness (3)
When measurement turns sour
Early inspection data showed 15:1 differences in defect detection
• Equally difficult work
• Equally proficient development teams
• Confidential interviews identified the “cause”
• Six months later we identified the real cause
When measurement is done right
Collecting unit test data
• Customer focus
• Anonymous reporting
• Demonstrated “no harm” environment
56
Defect Removal Effectiveness (4)
Defect data from inspections (formal peer reviews)
Historic defect removal effectiveness (DRE) related to individual preparation rate
Defect injection rate per unit of size (lines of code, function points, etc.)
Useful leading indicator to predict remaining defects
DRE = defects found divided by (defects found + defects found later)
• If the last 4 releases removed 50% of the defects in system test, then in the next release we can estimate the number to be found in production will equal those found in system test (better than guessing); a sketch follows
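A minimal sketch of this prediction. The release history is invented so the average system test DRE comes out near 50%, which makes the predicted production defects roughly equal to the system test defects, as the slide describes.

```python
# Minimal sketch: use historical DRE to predict production defects.
# The release history values are assumptions for illustration.

def dre(found_now: int, found_later: int) -> float:
    """DRE = defects found / (defects found + defects found later)."""
    return found_now / (found_now + found_later)

# (system test defects, production defects) for the last four releases
history = [(120, 110), (95, 100), (140, 130), (105, 115)]
avg_dre = sum(dre(st, prod) for st, prod in history) / len(history)

st_defects = 100   # defects found in system test for the new release
predicted_production = st_defects * (1 - avg_dre) / avg_dre
print(f"avg DRE {avg_dre:.0%}, predicted production defects "
      f"{predicted_production:.0f}")  # ~50%, ~100
```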
57
Defect Clustering
So much to do, so little time
Defect history can identify the defect-prone parts of your software
Use this to focus defect removal effort via inspections and test
But testing isn’t a bug hunt, so use appropriately!
Planning impact
More defects can mean longer test cycles
Match team skills to problem areas
58
LEADING AND TRAILING INDICATORS
59
Trailing Indicators
After-the-fact indication things “did not go well”
Corrective action to fix
Cost to fix is usually high
Sometimes it is too late to fix
Examples
• Lost customer
• Product development cancelled
• Poor estimation discovered “late”
• High defect discoveries in system test
60
Leading Indicators
Before-the-fact indication that things will not go well
Preventive action to recover or prevent significant deviations
Usually costs less than corrective action
Examples
• Trend data showing early and consistent slips in effort applied or tasks completed
• Higher or lower defect detection in inspections
• Backlog growth (or slow reduction)
61
Leading Indicators in Agile
How can you predict quality as measured by test or production defects?
Review data is missing
Sketchy unit test data
First measured defect data may be in release testing
Product defect injection is largely a function of how well pairing works
How do you measure pairing for leading indicators????
62
Why Don’t We Listen?
Leading indicators are often ignored – why?
Already up to our necks in alligators, new and future problems are not welcomed
Good trend analysts are often viewed as doomsayers or “not team players”
Prediction is a lot easier after the fact
• No matter how often you are right with predictions, one failure and you are busted
63
SUMMARY AND CLOSING REMARKS
64
Key Points
Measurement must meet the business needs of the organization
Project managers, support groups, line managers, executive management
Measurement needs to be simple, unambiguous, and used
Culture will trump reason – it can be a tough sell
Never assume – investigate both “good” and “bad” analysis to avoid shooting yourself in the foot
65
Implementation Tips (1)
Keep the collection overhead minimal
Units of measure must be well defined and understood - ambiguous or confusing definitions frustrate the providers
ALWAYS provide a “None of the above” or “Other” selection
• If it isn’t clear, you can get anything as an answer, often the first or last selection in a list
• Sending a message that accuracy isn’t as important as filling the form
66
Implementation Tips (2)
Do not ask providers to do what is properly the work of the metrics analysts
If they use the analysis results of their data, it can be their job if
• Analysis is straightforward and quick
• A tool supports the analysis
If the data is used in project reviews, it may be the project manager’s job – see the 2 sub-bullets above
Anything else is usually best done by the measurement specialists (until the analysis is automated)
67
The Corporate Metrics Mandate
“Measurement is good, therefore you shall measure (something)”
One-size-fits-all mandate
• You shall collect and report on xxx
• Reporting is more important than what is measured
Measurement becomes a standing joke
• “I need to create the monthly metrics report”
• “Since we are measuring, everything must be OK”
68
Usage Tips
FEEDBACK – FEEDBACK – FEEDBACK
Be sure the providers see
• Decisions based on the data they provide
• How the data they provide helps the organization become more efficient and effective
Never punish a person based solely on the data they provide, whether perception or reality
Guarantees future data will be flawed
Makes any measurement very difficult
69
Parting Thoughts
Lord Kelvin
“I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.” [PLA, vol. 1, “Electrical Units of Measurement”, 1883-05-03]
“There is nothing new to be discovered in physics now, all that remains is more and more precise measurement.”
Ed Weller
“Think about what your measurements mean”
70
Contact Information
Ed Weller
Integrated Productivity Solutions
[email protected]
[email protected] if you type like me ☺
[email protected]
71
Defect Depletion
With assumed or actual defect injection and removal rates, it is possible to predict residual defects
Useful for “what-if” evaluations
Demonstrates the relative cost of removing defects
Alternatively, historical defect removal effectiveness can be used to predict residual defects
For relatively stable inspection and test processes, these numbers do not change significantly
72
Using Removal Effectiveness
If historical data shows that 60% of defects are removed prior to the start of test, use this number as a predictor
Required defects found in reviews to be equivalent to defects discovered in test or use
Cannot count spelling, grammar, or maintainability defects
If inspections find 540 defects, then the total defects are 540/.60 = 900, so the residual defects total 360 (a sketch of this arithmetic follows)
Modest checking of the inspection process is required
• Individual preparation rates and coverage
• Team member selection
73
Defect Depletion “What-if” Analysis
Requires historical data for defects injected and removed in each activity or phase
Cost data for defect identification and repair in each stage
See “Managing the Software Process” by Watts Humphrey for a full discussion of this technique in Chapter 16
74
In this example, enter your estimated values in the yellow cells
Evaluate changes in removal effectiveness
Insert your estimate of defects injected per activity/phase
“Recidivism Rate” is the “bad fix” rate
Ignoring user-detected defects (??) introduces a 3% error

Defect Depletion Curve Example     Size in KLOC: 100     Recidivism Rate: 0.2

Dev Activity            Req     Ana     HLD     LLD    Code  Unit Test  Integ Test  System Test  User
Injected              100.0   300.0   600.0  2000.0  3000.0       93.2        75.5         41.5
Cumulative            100.0   400.0  1000.0  3000.0  6000.0     6093.2      6168.7       6210.3
Removal Rate            0.5     0.7     0.7     0.6     0.7        0.4         0.5          0.5
Inj + Prev remaining          350.0   705.0  2211.5  3884.6     1258.6       830.7        456.9
Est Removed              50     245     494    1327    2719        503         415          228
Remaining              50.0   105.0   211.5   884.6  1165.4      755.2       415.3        228.4    ??
Total Removed          50.0   295.0   788.5  2115.4  4834.6     5338.1      5753.4       5981.8
Cost per defect           2       2       2       2       2          6          16           35
Cost per phase          100     490     987    2654    5438       3021        6645         7995

Insp Cost 9669     Test Cost 17661     Total 27331
© 2012, Ed Weller. Permission to copy and duplicate is given as long as attribution is included.
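A minimal sketch reproducing the depletion arithmetic in the table above, in the spirit of Humphrey's "what-if" model. Phase inputs come from the example; the bad-fix injection for the test phases follows the recurrence implied by the table (recidivism rate × removal rate × defects remaining from the prior phase), which is an assumption about how the spreadsheet was built.

```python
# Minimal sketch of the defect depletion curve from the example table.
# Injection counts, removal rates, and per-defect costs are the table's values.

phases    = ["Req", "Ana", "HLD", "LLD", "Code", "Unit Test", "Integ Test", "System Test"]
injected  = [100.0, 300.0, 600.0, 2000.0, 3000.0, 0.0, 0.0, 0.0]  # test phases inject via bad fixes
removal   = [0.5, 0.7, 0.7, 0.6, 0.7, 0.4, 0.5, 0.5]
cost_each = [2, 2, 2, 2, 2, 6, 16, 35]
recidivism = 0.2
test_phases = {"Unit Test", "Integ Test", "System Test"}

remaining, total_cost = 0.0, 0.0
for name, inj, rate, cost in zip(phases, injected, removal, cost_each):
    if name in test_phases:                   # bad fixes re-inject defects
        inj += recidivism * rate * remaining
    pool = remaining + inj                    # "Inj + Prev remaining"
    removed = rate * pool                     # "Est Removed"
    remaining = pool - removed                # "Remaining"
    total_cost += removed * cost
    print(f"{name:12s} removed {removed:7.1f}  remaining {remaining:7.1f}")
print(f"Total removal cost: {total_cost:.0f}")  # ~27331, matching the table
```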
75
Graph of Defect Removal
[Chart: Defect Depletion Curve – defects (0-7000) by development phase or activity (Req, Ana, HLD, LLD, Code, Unit Test, Integ Test, System Test), showing Injected, Est Removed, Cumulative, and Total Removed]
76