Copyright 1999-2009 USC-CSSE 1
Quality Management – Lessons of COQUALMO (COnstructive QUALity MOdel)
A Software Defect Density Prediction Model
AWBrown and Sunita Chulani, Ph.D.
{AWBrown, sdevnani}@CSSE.usc.edu
USC Center for Systems & Software Engineering (USC-CSSE)
Outline
• Behavioral Underpinnings
  – Hidden Factory
  – Defect Types
• COQUALMO Framework
  – The Defect Introduction Sub-Model (Expert-Judgment Model + Some Initial Data Results)
  – The Defect Removal Sub-Model (Expert-Judgment Model, Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
USC Modeling Methodology
Delphi Assessment
• Ask each expert for a range for each driver:
  – Apply personal experience,
  – Look at completed projects,
  – Guess (WAG).
• Collect and share in a meeting: discuss why/how different people made their estimates.
• Repeat until no changes.
• Final values (for each parameter):
  Max = (Hmax + 4 * AVEmax + Lmax) / 6
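The final-round aggregation can be sketched in Python (a hypothetical helper name; the deck only states the formula for the Max value, and the analogous Min would use each expert's minimum estimates):

```python
def delphi_max(expert_maxes):
    """Aggregate the experts' 'max' estimates for one driver using the
    slide's PERT-style weighting: (highest + 4 * average + lowest) / 6."""
    h = max(expert_maxes)                        # Hmax: highest expert estimate
    l = min(expert_maxes)                        # Lmax: lowest expert estimate
    ave = sum(expert_maxes) / len(expert_maxes)  # AVEmax: average estimate
    return (h + 4 * ave + l) / 6

print(delphi_max([1.0, 2.0, 3.0]))  # (3 + 4*2 + 1) / 6 = 2.0
```

The 4x weight on the average pulls the consensus value toward the group mean while still letting outlier experts widen or narrow the result.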
Adding Project Data
• Effort Adjustment Multipliers (typical)
  – Linear Regression
A Posteriori Bayesian Update (Fig. 11, pg. 170, SwCEwCII)
• Used to combine expert judgment with sampled data: the spread of the datasets weights the combination.
Outline
• Model Framework
• The Defect Introduction Sub-Model
  – Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
  – Expert-Judgment Model (Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
COQUALMO Model Framework
[Diagram: requirements defects, design defects, and code defects enter through Defect Introduction pipes and exit through Defect Removal pipes, leaving the residual software defects.]
Software Devel. Process(es) (cont.): the hidden factory
Outline
• Model Framework
• The Defect Introduction Sub-Model
  – Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
  – Expert-Judgment Model (Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
The Defect Introduction (DI) Sub-Model
[Diagram: inputs are the software size estimate and the software product, process, computer, and personnel attributes (a subset of the COCOMO II factors); the Defect Introduction Sub-Model outputs the number of non-trivial requirements, design, and code defects introduced.]
A-Priori Expert-Judgment Based Code DI Ranges
[Bar chart: a priori expert-judgment code DI ranges (multipliers from 1.00 to 3.00) for the drivers PMAT, RELY, RESL, CPLX, PCAP, PCON, TOOL, PREC, PVOL, LTEX, SCED, TEAM, SITE, DOCU, PEXP, AEXP, ACAP, DATA, TIME, STOR, RUSE, and FLEX.]
DI Model Equations
• For each artifact j, the Quality Adjustment Factor (QAF) is the product of that artifact's DI drivers:

  QAF_j = Π (i = 1..21) (DI-driver)_ij

• Estimated Number of Defects Introduced:

  Σ (j = 1..3) A_j * (Size)^B_j * Π (i = 1..21) (DI-driver)_ij = Σ (j = 1..3) A_j * (Size)^B_j * QAF_j

Here j identifies the 3 artifact types (requirements, design, and coding), A is the multiplicative calibration constant, B is initially set to 1, and (DI-driver)_ij is the Defect Introduction driver for the jth artifact and the ith factor.
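A minimal sketch of the DI sub-model sum in Python (the function name and the sample values are illustrative, not the model's calibrated constants):

```python
from math import prod

def defects_introduced(size, A, B, di_drivers):
    """Sum over the 3 artifact types j of A_j * Size^B_j * QAF_j, where
    QAF_j is the product of the artifact's DI-driver multipliers."""
    total = 0.0
    for j in ("requirements", "design", "code"):
        qaf = prod(di_drivers[j])          # QAF_j
        total += A[j] * size ** B[j] * qaf
    return total

# With A_j = 1, B_j = 1, and every driver at its nominal 1.0 multiplier,
# introduced defects scale linearly with size: 3 artifacts * 10 = 30.
artifacts = ("requirements", "design", "code")
nominal = {j: [1.0] * 21 for j in artifacts}
ones = {j: 1.0 for j in artifacts}
print(defects_introduced(10, ones, ones, nominal))  # 30.0
```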
Initial Data Analysis on the DI Model

Type of Artifact | 1970's Baseline DIRs | Quality Adjustment Factor | Predicted DIR | Actual DIR | Calibrated Constant (A) | 1990's Baseline DIRs
Requirements     | 5                    | 0.5                       | 2.5           | 4.5        | 1.8                     | 9
Design           | 25                   | 0.44                      | 11            | 8.4        | 0.77                    | 19
Code             | 15                   | 0.5                       | 7.5           | 16.6       | 2.21                    | 33

DIR = Defect Introduction Rate
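As a worked check of the table: the Predicted DIR is the 1970's baseline scaled by the QAF, and the calibration constant A is what rescales that prediction to the actual rate. The requirements and code rows reproduce the tabled constants exactly; the design row's 0.77 reflects rounding in the table.

```python
def calibrated_A(baseline_dir, qaf, actual_dir):
    """Back out the multiplicative calibration constant from one row:
    Predicted DIR = baseline * QAF, and A = Actual DIR / Predicted DIR."""
    predicted = baseline_dir * qaf
    return actual_dir / predicted

print(round(calibrated_A(5, 0.5, 4.5), 2))    # requirements row: 1.8
print(round(calibrated_A(15, 0.5, 16.6), 2))  # code row: 2.21
```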
Outline
• Model Framework
• The Defect Introduction Sub-Model
  – Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
  – Expert-Judgment Model (Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
The Defect Removal (DR) Sub-Model

[Diagram: inputs are the software size estimate, the defect removal activity levels, and the number of non-trivial requirements, design, and coding defects introduced; the Defect Removal Sub-Model outputs the number of residual defects per unit of size.]
Defect Removal Profiles
• 3 relatively orthogonal profiles:
  – Automated Analysis
  – People Reviews
  – Execution Testing and Tools
• Each profile has 6 levels: Very Low, Low, Nominal, High, Very High, Extra High.
  – Very Low removes the fewest defects.
  – Extra High removes the most defects.
Automated Analysis

Very Low: Simple compiler syntax checking.
Low: Basic compiler or additional tools' capabilities for static module-level code analysis, and syntax and type checking.
Nominal: All of the above, plus some compiler extensions for static module and inter-module level code analysis, and syntax and type checking; basic requirements and design consistency and traceability checking.
High: All of the above, plus intermediate-level module and inter-module code syntax and semantic analysis; simple requirements/design consistency checking across views.
Very High: All of the above, plus more elaborate requirements/design view consistency checking; basic distributed-processing and temporal analysis, model checking, symbolic execution.
Extra High: All of the above, plus formalized* specification and verification; advanced distributed-processing and temporal analysis, model checking, symbolic execution.

* Consistency-checkable pre-conditions and post-conditions, but not mathematical theorems.
Static [Module-Level Code] Analysis

"Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code and in the other cases some form of the object code."*

* http://en.wikipedia.org/wiki/Static_code_analysis
Static [Module-Level Code] Analysis
SWEBOK [sans references]:
"4.2 Quality Analysis and Evaluation Techniques. Various tools and techniques can help ensure a software design's quality.
• Software design reviews: informal or semiformal, often group-based, techniques to verify and ensure the quality of design artifacts (for example, architecture reviews, design reviews and inspections, scenario-based techniques, requirements tracing).
• Static analysis: formal or semiformal static (nonexecutable) analysis that can be used to evaluate a design (for example, fault-tree analysis or automated cross-checking).
• Simulation and prototyping: dynamic techniques to evaluate a design (for example, performance simulation or feasibility prototype)."*

* Software Engineering Body of Knowledge, Alain Abran et al., Swebok_Ironman_June_23_2004.pdf, pg. 54
Peer Reviews
Very Low: No people reviews.
Low: Ad hoc informal walkthroughs. Minimal preparation, no follow-up.
Nominal: Well-defined sequence of preparation, review, minimal follow-up. Informal review roles and procedures.
High: Formal review roles and procedures applied to all products using basic checklists*; follow-up.
Very High: Formal review roles and procedures applied to all product artifacts & changes; formal change control boards. Basic review checklists, root cause analysis. Use of historical data on inspection rate, preparation rate, fault density.
Extra High: Formal review roles and procedures for fixes, change control. Extensive review checklists, root cause analysis. Continuous review process improvement. User/customer involvement, Statistical Process Control.

* Checklists are lists of things to look for or check against (e.g., exit criteria).
Syntactic Versus Semantic Checking

Both sentences below are syntactically correct; only one is semantically correct (makes sense).
• A panda enters the bar, eats shoots and leaves.
• A panda enters the bar, eats, shoots and leaves.

What a difference an extra comma can make!
Execution Testing and Tools
Very Low: No testing.
Low: Ad hoc testing and debugging. Basic text-based debugger.
Nominal: Basic unit test, integration test, system test process. Basic test data management, problem tracking support. Test criteria based on checklists.
High: Well-defined test sequence tailored to the organization (acceptance, alpha, beta, flight, etc. test). Basic test coverage tools, test support system. Basic test process management.
Very High: More advanced test tools, test data preparation, basic test oracle support, distributed monitoring and analysis, active assertion checking. Metrics-based test process management.
Extra High: Highly advanced tools for test oracles, distributed monitoring and analysis, assertion checking. Integration of automated analysis and test tools. Model-based test process management.
Technique Selection Guidance
“Under specified conditions, …”
• Peer reviews are more effective than functional testing for faults of omission and incorrect specification (UMD, USC)
• Functional testing is more effective than reviews for faults concerning numerical approximations and control flow (UMD, USC)
Residual Defects Equation

• Estimated Number of Residual Defects:

  DRes_Est,j = C_j * DI_Est,j * Π (i = 1..3) (1 − DRF_ij)

where
  DRes_Est,j = estimated number of residual defects for the jth artifact
  C_j = calibration constant for the jth artifact
  DI_Est,j = estimated number of defects introduced for the jth artifact (output of the DI Sub-Model)
  i = defect removal profile
  DRF_ij = defect removal fraction for profile i and artifact j
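A direct transcription of the residual-defects equation in Python (the constant and removal fractions below are placeholders, not calibrated values):

```python
from math import prod

def residual_defects(c, di_est, drfs):
    """DRes_Est,j = C_j * DI_Est,j * prod over the 3 profiles i of
    (1 - DRF_ij), where drfs holds the removal fractions for automated
    analysis, peer reviews, and execution testing and tools."""
    return c * di_est * prod(1.0 - f for f in drfs)

# 10 introduced defects; one inactive profile, two each removing half
# of what remains: 10 * 1.0 * 0.5 * 0.5 = 2.5 residual defects.
print(residual_defects(1.0, 10.0, [0.0, 0.5, 0.5]))  # 2.5
```

Because each profile multiplies in a (1 − DRF) survival factor, the profiles compound: adding a second 50% removal activity halves what the first one left behind.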
Defect Densities from Expert-Judgment Calibrated COQUALMO
Validation of Defect Densities

• Average defect density using Jones' data, weighted by the CMM maturity level distribution of 542 organizations, is 13.9 defects/kSLOC.
• Average defect density using COQUALMO is 14.3 defects/kSLOC.

[Bar chart: fraction of the 542 organizations at each CMM maturity level, Initial through Optimizing: 0.669, 0.196, 0.118, 0.013, 0.004.]

Residual defect density from Jones' data (Leading 1.4, Average 7.5, Lagging 18.3 defects/kSLOC; the Leading weight 0.135 combines the top three maturity levels, 0.118 + 0.013 + 0.004):

  (0.135 * 1.4) + (0.196 * 7.5) + (0.669 * 18.3) = 13.9 defects/kSLOC
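The weighted average can be checked directly; grouping the top three CMM levels into "Leading" is my reading of where the 0.135 weight comes from, since the chart itself only lists the five per-level fractions:

```python
# Residual defect densities (defects/kSLOC) from Jones' data, weighted
# by the CMM-level distribution of the 542 organizations.
weights = {"leading": 0.118 + 0.013 + 0.004,  # Levels 3-5 = 0.135
           "average": 0.196,                   # Level 2
           "lagging": 0.669}                   # Level 1
density = {"leading": 1.4, "average": 7.5, "lagging": 18.3}

weighted = sum(weights[k] * density[k] for k in weights)
print(round(weighted, 1))  # 13.9
```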
An Independent Validation Study

• Aim: to validate that expert-determined COQUALMO shows the correct trends in defect rates
• Sample project: Size = 110.5 kSLOC

Defect Introduction:
Type of Artifact | DI   | DIR | Quality Adjustment Factor (QAF_j) | Baseline DIR | 1970's Baseline DIR
Reqts            | 209  | 2   | 0.74                              | 3            | 5
Design           | 1043 | 9.5 | 0.74                              | 13           | 25
Code             | 767  | 7   | 0.84                              | 8            | 15

Defect Removal (DRF per profile):
Type of Artifact | Automated Analysis (VL) | Peer Reviews (L–VL) | Execution Testing and Tools (H–VH) | Product (1−DRF) | DI/kSLOC | DRes/kSLOC
Reqts            | 0                       | 0.13                | 0.54                               | 0.40            | 2        | 0.80
Design           | 0                       | 0.14                | 0.61                               | 0.34            | 9.5      | 3.23
Code             | 0                       | 0.15                | 0.74                               | 0.22            | 7        | 1.54

Total: 5.57 DRes/kSLOC

Actual Defect Density = 6 defects/kSLOC
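Replaying the removal table: each artifact's residual density is its DI/kSLOC times the product of (1 − DRF) over the three profiles. Rounding Product(1−DRF) to two places, as the slide appears to do, reproduces the 5.57 total:

```python
rows = {  # artifact: (DI per kSLOC, DRFs for the three removal profiles)
    "reqts":  (2.0, (0.00, 0.13, 0.54)),
    "design": (9.5, (0.00, 0.14, 0.61)),
    "code":   (7.0, (0.00, 0.15, 0.74)),
}

total = 0.0
for di, drfs in rows.values():
    survive = 1.0
    for f in drfs:
        survive *= 1.0 - f           # fraction of defects each profile leaves
    total += di * round(survive, 2)  # slide rounds Product(1-DRF) to 2 places
print(round(total, 2))  # 5.57
```

The estimated 5.57 residual defects/kSLOC sits close to the project's actual 6 defects/kSLOC, which is the trend the study set out to confirm.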
Outline
• Model Framework
• The Defect Introduction Sub-Model
  – Expert-Judgment Model + Some Initial Data Results
• The Defect Removal Sub-Model
  – Expert-Judgment Model (Result of COQUALMO Workshop)
• COQUALMO Integrated with COCOMO II
Integrated COQUALMO
[Diagram: COCOMO II takes the software size estimate and the software platform, project, product, and personnel attributes, and produces the software development effort, cost, and schedule estimate. COQUALMO's Defect Introduction Model feeds its Defect Removal Model, which also takes the defect removal profile levels; together they produce the number of residual defects and the defect density per unit of size.]