MTAT.03.244 Software Economics
Lecture 5: Software Cost Estimation
Marlon Dumas
marlon.dumas ät ut . ee
Outline
• Estimating Software Size
• Estimating Effort
• Estimating Duration
For Discussion
• It is hopeless to accurately estimate software costs. More often than not, such estimates are wrong. So why should we bother?
• We have 6 months and 10 analysts/developers, so it will take 6 months and 60 person-months.
There are lies, damned lies and statistics.
• What about a method to estimate software costs from a high-level design, that is:
– within 20% of the actual size 50% of the time
– within 30% of the actual size 66% of the time
• Is this good enough? • Can we do better?
Cone of Uncertainty
Estimating Size
• From the early design, you can count the function-points.
• Can we estimate the size (LOC) from there? – Answer: yes, through “backfiring”
• Capers Jones’s database: > 9000 projects with both function-point and actual LOC counts
– C ≈ 120 LOC/FP
– Cobol, Fortran ≈ 100 LOC/FP
– Pascal, Ada, (PHP) ≈ 70-90 LOC/FP
– OO languages ≈ 30 LOC/FP
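A minimal sketch of backfiring, using the LOC/FP ratios quoted above (the Pascal value is my assumed midpoint of the slide's 70-90 range):

```python
# Approximate LOC per function point, per the slide's figures.
LOC_PER_FP = {
    "C": 120,
    "Cobol": 100,
    "Fortran": 100,
    "Pascal": 80,   # slide gives 70-90; midpoint assumed
    "OO": 30,       # object-oriented languages
}

def backfire(function_points, language):
    """Estimate size in LOC from a function-point count."""
    return function_points * LOC_PER_FP[language]

print(backfire(200, "C"))  # 24000
```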
Estimating Effort
• Parkinson's Law – If we have 600 person-months, it will take 600 person-months
• Estimation by analogy – This project is about 20% more complex than the previous one
• Expert judgement (e.g. based on WBS) – Wideband Delphi, Planning Poker
• Parametric cost models – SLIM (Putnam) – COCOMO 81 and COCOMO II.2000 (Boehm et al.)
• No approach is perfect – consider combinations.
Example – Analogy/expert-based estimation
Wideband Delphi
• Ask each team member for their estimate
– applying personal experience
– looking at completed projects
– extrapolating from the modules known to date
• Collect and share the estimates in a meeting; discuss why/how different people arrived at their estimates
• Repeat until the estimates stabilize
• When stable: Size = (H + 4 × Avg + L) / 6
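Once the estimates have stabilized, the final combination step can be sketched as follows (H = highest individual estimate, L = lowest, Avg = the average of all estimates):

```python
def delphi_size(estimates):
    """Combine stabilized Wideband Delphi estimates:
    Size = (H + 4 * average + L) / 6."""
    h, l = max(estimates), min(estimates)
    avg = sum(estimates) / len(estimates)
    return (h + 4 * avg + l) / 6

print(delphi_size([10, 12, 14]))  # 12.0
```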
Productivity estimates
• Real-time embedded systems: 40-160 LOC/PM
• Systems programming (e.g. games, graphics): 150-400 LOC/PM
• Commercial applications: 200-900 LOC/PM
– Web apps with simple business logic: > 500
– Heavy transactional business logic, high-scalability requirements: < 500
From I. Sommerville’s Software Engineering
Estimation Models
• It took me one month to fully develop (end-to-end) a small software application of 1000 LOC
• Can I develop an application of 10000 LOC in 10 months?
• I have four friends with similar experience as mine, can we develop an application of 10000 LOC in 2 months?
• Hints: Brooks’ law, Farr & Nanus study
Non-Linear Productivity
• There is overwhelming evidence that, except for simple projects, development effort grows faster than linearly with size, so a linear model is probably wrong:
– Effort = P × Size
• This is closer to the mark:
– Effort = A × M × Size^B
where A is a constant derived from historical data, M is a project-dependent effort multiplier, and B depends on the complexity of the project (B > 1 for most projects)
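As a sketch of why the exponent matters (the values of A, M and B below are illustrative assumptions, not calibrated constants):

```python
def effort(size_kloc, A=3.0, M=1.0, B=1.12):
    """Effort = A * M * Size^B (person-months); illustrative constants."""
    return A * M * size_kloc ** B

# With B > 1, doubling the size more than doubles the effort:
ratio = effort(20) / effort(10)
print(ratio > 2)  # True
```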
(c) 2005-08 USC CSSE
Diseconomy of Scale
• Nonlinear relationship when exponent > 1
COCOMO
• Stands for “Constructive Cost Model”
• Developed at USC (Barry Boehm et al.) based on a database of projects (63 for COCOMO 81, 161 for COCOMO II.2000)
• The first version is now called COCOMO 81; the most recent version is COCOMO II.2000
• Based on statistical model building (fitting the model's equations to actual project data)
• Can be calibrated with company-specific historical data
Basic COCOMO 81
Complexity | Formula | Description
Organic | PM = 2.4 × KLOC^1.05 | Well-understood applications developed by small teams with strong prior experience in related systems.
Semi-detached | PM = 3.0 × KLOC^1.12 | More complex projects where team members may have limited experience of related systems.
Embedded | PM = 3.6 × KLOC^1.20 | Complex projects where the software is constrained by hardware limitations (embedded), must respond in real time, or is critical.
(KLOC = thousands of lines of code; the original model uses KDSI, thousands of delivered source instructions)
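The three Basic COCOMO 81 formulas can be encoded directly as a sketch, with mode names and coefficients taken from the table above:

```python
# Basic COCOMO 81: PM = a * KLOC^b, with (a, b) per development mode.
BASIC_COCOMO_81 = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_pm(kloc, mode):
    """Effort in person-months for a project of the given size and mode."""
    a, b = BASIC_COCOMO_81[mode]
    return a * kloc ** b

# A 32 KLOC embedded project:
print(round(basic_pm(32, "embedded"), 1))  # 230.4
```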
Intermediate COCOMO 81
Mode | a | b
Organic | 3.2 | 1.05
Semi-detached | 3.0 | 1.12
Embedded | 2.8 | 1.20

• E = a × KLOC^b × EAF
• EAF (Effort Adjustment Factor) is the product of 15 cost-driver multipliers
• Check out the COCOMO 81 calculator
Estimating Time
• The COCOMO model is calibrated under the assumption of “nominal time”
• Nominal time in the COCOMO 81 model:
– D = c × E^d

Mode | c | d
Organic | 2.5 | 0.38
Semi-detached | 2.5 | 0.35
Embedded | 2.5 | 0.32
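Putting the intermediate effort equation and the nominal-time equation together (coefficients from the two tables above; the EAF value in the example is just an assumption):

```python
# Intermediate COCOMO 81 effort plus nominal duration:
#   E = a * KLOC^b * EAF,   D = c * E^d
COEFFS = {  # mode: (a, b, c, d)
    "organic":       (3.2, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (2.8, 1.20, 2.5, 0.32),
}

def estimate(kloc, mode, eaf=1.0):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b * eaf   # person-months
    duration = c * effort ** d     # nominal calendar months
    return effort, duration

effort, months = estimate(32, "embedded", eaf=1.1)  # EAF assumed
```

Note that the nominal duration grows much more slowly than the effort: large projects need more people, not proportionally more time.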
Nominal versus Optimal Time
Exercise
• See exercise “Cocomo I” on the course web page
• Use the COCOMO 81 calculator (see link on the “Readings” page)
COCOMO 81 limitations
• Over time, COCOMO 81’s database became outdated by new tools, languages and practices
• COCOMO 81 was designed for the waterfall model, which has largely been superseded by incremental, iterative methods
• COCOMO 81 had only three possible exponents, so it could not account for the various factors affecting the non-linearity of productivity
• It did not take into account the different levels of information available throughout the lifecycle
Cocomo II.2000
• Designed for an iterative development method (MBASE)
• Calibrated with a larger database (161 projects)
• More refined set of cost drivers (6-17, depending on the sub-model)
• Multiple exponential scale drivers:
PM = a × Size^b × Π EM_i (i = 1 to 6 or 17)
where a = 2.94
b = 0.91 + 0.01 × Σ SF_j (j = 1 to 5)
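The formula can be sketched as follows; the scale-factor ratings used in the example are the nominal values from the COCOMO II.2000 calibration tables (assumed here for illustration):

```python
def cocomo2_pm(size_kloc, scale_factors, effort_multipliers):
    """COCOMO II nominal effort: PM = a * Size^b * prod(EM_i),
    with a = 2.94 and b = 0.91 + 0.01 * sum(SF_j)."""
    a = 2.94
    b = 0.91 + 0.01 * sum(scale_factors)
    pm = a * size_kloc ** b
    for em in effort_multipliers:
        pm *= em
    return pm

# 100 KLOC, all scale factors nominal, all 17 multipliers at 1.0:
nominal_sfs = [3.72, 3.04, 4.24, 3.29, 4.68]  # PREC, FLEX, RESL, TEAM, PMAT
pm = cocomo2_pm(100, nominal_sfs, [1.0] * 17)
```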
COCOMO 2 models
• COCOMO 2 incorporates a range of sub-models that produce increasingly detailed software estimates.
• The sub-models in COCOMO 2 are:
– Application composition model. Used when software is composed from existing parts.
– Early design model. Used when requirements are available but design has not yet started (6 cost drivers).
– Reuse model. Used to compute the effort of integrating reusable components.
– Post-architecture model. Used once the system architecture has been designed and more information about the system is available (17 cost drivers).
From I. Sommerville’s Software Engineering
Use of COCOMO 2 models
From I. Sommerville’s Software Engineering
Cost Factors
• Significant factors of development cost:
– scale drivers are sources of exponential effort variation
– cost drivers are sources of linear effort variation
• product, platform, personnel and project attributes
• effort multipliers associated with cost driver ratings
– defined to be as objective as possible
• Each factor is rated between Very Low and Very High per rating guidelines
– the relevant effort multipliers adjust the cost up or down
Scale Drivers
• Precedentedness (PREC) – Degree to which system is new/past experience applies
• Development Flexibility (FLEX) – Need to conform with specified requirements
• Architecture/Risk Resolution (RESL) – Degree of design thoroughness and risk elimination
• Team Cohesion (TEAM) – Need to synchronize stakeholders and minimize conflict
• Process Maturity (PMAT) – SEI CMM process maturity rating
Scale Factors
• Sum the scale factor ratings Wi across all five factors to determine the scale exponent B, using B = 0.91 + 0.01 × Σ Wi
Precedentedness (PREC) and Development Flexibility (FLEX)
Architecture / Risk Resolution (RESL)
• Use a subjective weighted average of the design-thoroughness and risk-elimination characteristics
Team Cohesion (TEAM)
• Use a subjective weighted average of the characteristics to account for project turbulence and entropy due to difficulties in synchronizing the project’s stakeholders
• Stakeholders include users, customers, developers, maintainers, interfacers, and others
Process Maturity (PMAT)
• Two methods based on the Software Engineering Institute's Capability Maturity Model (CMM)
• Method 1: Overall Maturity Level (CMM Level 1 through 5)
• Method 2: Key Process Areas (see next slide)
Key Process Areas
• Decide the percentage of compliance for each of the KPAs as determined by a judgment-based averaging across the goals for all 18 Key Process Areas.
Example of Scale Factors
• A company takes on a project in a new domain. The client has not defined the process to be used and has not allowed time for risk analysis. The company has a CMM level 2 rating.
– Precedentedness – new project – rating 4
– Development flexibility – no client involvement – Very High – rating 1
– Architecture/risk resolution – no risk analysis – Very Low – rating 5
– Team cohesion – new team – Nominal – rating 3
– Process maturity – some control – Nominal – rating 3
• Sum of ratings = 16, giving a scale exponent of 1.01 + 0.01 × 16 = 1.17 (Sommerville uses the COCOMO II.1997 constant 1.01 rather than the 0.91 of COCOMO II.2000)
From I. Sommerville’s Software Engineering
Cost Drivers (Post-Architectural Model)
• Product factors
– Reliability (RELY)
– Data (DATA)
– Complexity (CPLX)
– Reusability (RUSE)
– Documentation (DOCU)
• Platform factors
– Time constraint (TIME)
– Storage constraint (STOR)
– Platform volatility (PVOL)
• Personnel factors
– Analyst capability (ACAP)
– Programmer capability (PCAP)
– Applications experience (APEX)
– Platform experience (PLEX)
– Language and tool experience (LTEX)
– Personnel continuity (PCON)
• Project factors
– Software tools (TOOL)
– Multisite development (SITE)
– Required schedule (SCED)
Example Cost Driver - Required Software Reliability (RELY)
• Measures the extent to which the software must perform its intended function over a period of time.
• Ask: what is the effect of a software failure?
Example Effort Multiplier Values for RELY
Rating | Effect of a failure | Effort multiplier
Very Low | Slight inconvenience | 0.75
Low | Low, easily recoverable losses | 0.88
Nominal | Moderate, easily recoverable losses | 1.00
High | High financial loss | 1.15
Very High | Risk to human life | 1.39

E.g. a highly reliable system costs 39% more than a nominally reliable system (1.39/1.00 = 1.39), or a highly reliable system costs 85% more than a very-low-reliability system (1.39/0.75 ≈ 1.85)
COCOMO II – Schedule Estimation
D = c × E^d × SCED% / 100
where c = 3.67
d = 0.33 + 0.2 × (b − 1.01)
SCED% = required schedule as a percentage of the nominal schedule (100 = no compression)
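A sketch of the schedule equation; the effort figure and exponent b in the example are assumed values, as they would come from the effort estimation step:

```python
def cocomo2_schedule(effort_pm, b, sced_pct=100):
    """COCOMO II schedule: D = c * E^d * SCED%/100,
    with c = 3.67 and d = 0.33 + 0.2 * (b - 1.01)."""
    c = 3.67
    d = 0.33 + 0.2 * (b - 1.01)
    return c * effort_pm ** d * sced_pct / 100

nominal = cocomo2_schedule(465, b=1.10)                   # months, no compression
compressed = cocomo2_schedule(465, b=1.10, sced_pct=85)   # 15% compression
```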
Software Costing and Pricing
• Caution: all of the above is about effort and schedule estimation
• From effort and schedule, one can estimate cost:
– Estimate the technical effort cost as PM × monthly total salary cost
– Add licensing costs and overhead costs for administrative support, infrastructure, etc.
• But cost ≠ price
• Price depends on many other factors:
– risk margin, requirements volatility, competitive advantage, market opportunity, the need to win the bid…
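The costing steps above can be sketched as follows; all monetary figures are made-up assumptions for illustration, not course data:

```python
effort_pm = 40              # estimated effort, person-months (assumed)
monthly_salary_cost = 5000  # fully loaded cost per person-month (assumed)
overhead_rate = 0.30        # admin support, infrastructure, etc. (assumed)
licenses = 10_000           # licensing costs (assumed)

labour_cost = effort_pm * monthly_salary_cost
total_cost = labour_cost * (1 + overhead_rate) + licenses
print(total_cost)  # 270000.0
```

The price quoted to the client would then be set on top of this cost, adjusted by the factors listed above.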
Final Word of Caution
• COCOMO and similar models are just MODELS
• COCOMO comes calibrated on a set of projects that might not reflect a particular project’s context
• It should be combined with analogy, expert assessment, and continuous cost control
Homework (10 points)
• In teams of 4-5 (same teams as for homework 1)
• Take the function-point estimate from homework 1
• Make a size estimate
– Compare this estimate with the actual size
– Explain the difference, if there is one
• Make an effort and schedule estimate using COCOMO II (post-architecture). Explain your choice of cost and scale drivers
• E-mail me a 2-5 page report by Monday 12 Oct.
• Short 5-minute presentation on 13 Oct.