James Nowotarski 17 October 2006 SE 325/425 Principles and Practices of Software Engineering Autumn 2006


Page 1:

James Nowotarski

17 October 2006

SE 325/425Principles and

Practices of Software Engineering

Autumn 2006

Page 2:

2

Topic Duration

Testing recap 20 minutes

Project planning & estimating 60 minutes

*** Break

Current event reports 30 minutes

Software metrics 60 minutes

Today’s Agenda

Page 3:

3

Verification & Validation

Testing is just part of a broader topic referred to as Verification and Validation (V&V)

Pressman/Boehm:
Verification: Are we building the product right?
Validation: Are we building the right product?

Page 4:

4

Stage Containment

Planning & Managing spans all activities:
Communication (project initiation, requirements) → Modeling (analysis, design) → Construction (code, test) → Deployment (delivery, support)

Error origination vs. error detection: an error caught in the stage where it originated is an error; one caught in a later stage is a defect; one caught after delivery is a fault

Page 5:

5

V-Model

Requirements ↔ Acceptance Test
Functional Design ↔ System Test
Technical Design ↔ Integration Test
Detailed Design ↔ Unit Test
Code

Testing: Test that the product implements the specification

Legend: Flow of Work; Verification (left side of the V); Validation (right side)

Page 6:

6

Statement coverage: Goal is to execute each statement at least once.

Branch coverage: Goal is to execute each branch at least once.

Path coverage: Where a path is a feasible sequence of statements that can be taken during the execution of the program.

What % of each type of coverage does this test execution provide?

Statement: 5/10 = 50%
Branch: 2/6 = 33%
Path: 1/4 = 25% (Where does the 4 come from?)

= tested

= not tested

Test Coverage Metrics
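As a toy illustration of the gap between the coverage metrics above (the function and test values here are invented, not from the slides), a single test input can execute every statement yet still miss a branch outcome:

```python
def safe_div(a, b):
    if b == 0:   # guard branch: can be taken or not taken
        b = 1
    return a / b

# safe_div(4, 0) executes all three statements (100% statement coverage)
# but exercises only the "taken" side of the guard (50% branch coverage);
# a second call such as safe_div(4, 2) is needed for the other branch.
```

This is why branch coverage is strictly stronger than statement coverage, and path coverage stronger still.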

Page 7:

7

Page 8:

8

Example of pair programming

“Since then, [Adobe’s] Mr. Ludwig has adopted Fortify software and improved communication between his team of security experts and programmers who write software. A few years ago, each group worked more or less separately: The programmers coded, then the quality-assurance team checked for mistakes. Now, programmers and security types often sit side by side at a computer, sometimes lobbing pieces of code back and forth several times a day until they believe it is airtight. The result: ‘Issues are being found earlier,’ Mr. Ludwig says. But, he adds, ‘I'm still trying to shift that curve.’ ”

Vara, V. (2006, May 4). Tech companies check software earlier for flaws. Wall Street Journal. Retrieved October 16, 2006, from http://online.wsj.com/public/article/SB114670277515443282-qA6x6jia_8OO97Lutaoou7Ddjz0_20060603.html?mod=tff_main_tff_top

Page 9:

9

V-Model

Code → Unit Test → Integration Test → System Test → Acceptance Test

White box: Unit Test, Integration Test
Black box: System Test, Acceptance Test

Page 10:

10

Sequence, If, Case, While, Until

Each circle represents one or more nonbranching sets of source code statements.

Flow Graph Notation

Page 11:

11

Continued…

i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
    increment total.input by 1;
    IF value[i] >= minimum AND value[i] <= maximum
        THEN increment total.valid by 1;
             sum = sum + value[i]
        ELSE skip
    ENDIF
    increment i by 1;
ENDDO
IF total.valid > 0
    THEN average = sum / total.valid;
    ELSE average = -999;
ENDIF
END average

(Flow graph node numbers 1-13 correspond to these statements.)
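The pseudocode above can be sketched in Python. This is a paraphrase, not the slides' code; `minimum` and `maximum` become parameters so the function is self-contained:

```python
def average(values, minimum, maximum):
    """Average the in-range values, stopping at sentinel -999 or 100 inputs."""
    i = 0
    total_input = total_valid = 0
    total = 0.0
    while i < len(values) and values[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total += values[i]
        i += 1
    # -999 doubles as the "no valid values" result, as in the pseudocode
    return total / total_valid if total_valid > 0 else -999
```

Each decision here (the loop condition, the range test, the final guard) is a predicate node in the flow graph derived on the next page.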

Page 12:

12

1. Use the design or code as a foundation and draw the corresponding flow graph.

[Flow graph: nodes 1-13]

Steps for deriving test cases

2. Determine the cyclomatic complexity of the resultant flow graph.

V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
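Both V(G) formulas can be checked mechanically; a small sketch using the edge, node, and predicate counts quoted above:

```python
def v_g_edges(edges, nodes):
    # V(G) = E - N + 2
    return edges - nodes + 2

def v_g_predicates(predicate_nodes):
    # V(G) = P + 1
    return predicate_nodes + 1

# The flow graph for the average procedure: 17 edges, 13 nodes, 5 predicate nodes.
```

Agreement between the two formulas is a useful sanity check that the flow graph was drawn correctly.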

Page 13:

13

3. Determine a basis set of linearly independent paths.

[Flow graph: nodes 1-13]

Steps for deriving test cases

Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2…
Path 5: 1-2-3-4-5-6-8-9-2…
Path 6: 1-2-3-4-5-6-7-8-9-2…

4. Prepare test cases that will force execution of each path in the basis set.

Page 14:

14

Topic Duration

Testing recap 20 minutes

Project planning & estimating 60 minutes

*** Break

Current event reports 30 minutes

Software metrics 60 minutes

Today’s Agenda

Page 15:

15

People trump process

“A successful software methodology (not new, others have suggested it):

(1) Hire really smart people
(2) Set some basic direction/goals
(3) Get the hell out of the way

In addition to the steps above, there's another key: RETENTION”

http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html

Page 16:

16

Project Management

Communication project initiation requirements

Modeling analysis design

Construction code test

Deployment delivery support

Planning & Managing

Our focus

Page 17:

17

Planning & Managing

Scope, Time, Cost, People, Quality, Risk, Integration (incl. change), Communications, Procurement

Project Management Institute

Page 18:

18

Today’s focus

1. Negotiate reqts: user requirements → negotiated requirements
2. Decompose: negotiated requirements → work breakdown structure
3. Estimate size: → deliverable size
4. Estimate resources: deliverable size and productivity rate → work months
5. Develop schedule: work months → schedule

Iterate as necessary

Page 19:

19

1. Breaks project into a hierarchy.

2. Creates a clear project structure.

3. Avoids risk of missing project elements.

4. Enables clarity of high level planning.

Work Breakdown Structure

Page 20:

20

Today’s focus

1. Negotiate reqts: user requirements → negotiated requirements
2. Decompose: negotiated requirements → work breakdown structure
3. Estimate size: → deliverable size
4. Estimate resources: deliverable size and productivity rate → work months
5. Develop schedule: work months → schedule

Iterate as necessary

Page 21:

21

Units of Size

Lines of code (LOC)

Function points (FP)

Components

Page 22:

22

LOC

How many physical source lines are there in this C language program?

#define LOWER 0   /* lower limit of table */
#define UPPER 300 /* upper limit */
#define STEP 20   /* step size */

main() /* print a Fahrenheit->Celsius conversion table */
{
    int fahr;
    for (fahr = LOWER; fahr <= UPPER; fahr = fahr + STEP)
        printf("%4d %6.1f\n", fahr, (5.0/9.0)*(fahr-32));
}

Page 23:

23

LOC

Need standards to ensure repeatable, consistent size counts. For each statement type, decide Include or Exclude:

1. Executable
2. Nonexecutable
3. Declarations
4. Compiler directives
5. Comments
6. On their own lines
7. On lines with source
. . .

Page 24:

24

A Case Study

Computer Aided Design (CAD) for mechanical components.
System is to execute on an engineering workstation.
Interfaces with various computer graphics peripherals, including a mouse, digitizer, high-resolution color display, and laser printer.
Accepts two- and three-dimensional geometric data from an engineer.
Engineer interacts with and controls CAD through a user interface.
All geometric data and supporting data will be maintained in a CAD database.
Required output will display on a variety of graphics devices.

Assume the following major software functions are identified:

Page 25:

25

Estimation of LOC: CAD program to represent mechanical parts

Estimated LOC = (Optimistic + 4(Likely) + Pessimistic)/6

Major Software Functions | Optimistic | Most Likely | Pessimistic | Estimated LOC
User interface and control facilities (UICF) | 1,500 | 2,300 | 3,100 | 2,300
Two-dimensional geometric analysis (2DGA) | 3,800 | 5,200 | 7,200 | 5,300
Three-dimensional geometric analysis (3DGA) | 4,600 | 6,900 | 8,600 | 6,800
Database management (DBM) | 1,600 | 3,500 | 4,500 | 3,350
Computer graphics display features (CGDF) | 3,700 | 5,000 | 6,000 | 4,950
Peripheral control (PC) | 1,400 | 2,200 | 2,400 | 2,100
Design analysis modules (DAM) | 7,200 | 8,300 | 10,000 | 8,400
Estimated lines of code | 23,800 | 33,400 | 41,800 | 33,200
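The per-function estimates follow mechanically from the three-point formula; a quick sketch that reproduces the UICF row and the 33,200 total:

```python
def three_point_loc(optimistic, likely, pessimistic):
    # Estimated LOC = (Optimistic + 4*Likely + Pessimistic) / 6
    return (optimistic + 4 * likely + pessimistic) / 6

# (optimistic, most likely, pessimistic) triples from the table
functions = [
    (1500, 2300, 3100),   # UICF
    (3800, 5200, 7200),   # 2DGA
    (4600, 6900, 8600),   # 3DGA
    (1600, 3500, 4500),   # DBM
    (3700, 5000, 6000),   # CGDF
    (1400, 2200, 2400),   # PC
    (7200, 8300, 10000),  # DAM
]
total = sum(three_point_loc(*f) for f in functions)  # 33,200 LOC
```

The 4x weight on the most-likely value is the familiar beta-distribution (PERT) weighting.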

Page 26:

26

LOC

“Lines of code is a useless measurement in the face of code that shrinks when we learn better ways of programming” (Kent Beck)

Page 27:

27

Function Points

A measure of the size of computer applications

The size is measured from a functional, or user, point of view.

It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application.

Can be subjective

Can be estimated EARLY in the software development life cycle

Two flavors:
Delivered size = total application size delivered, including packages, assets, etc.
Developed size = portion built for the release

Page 28:

Computing Function Points

Measurement parameter | Count | Weight applied (Simple/Avg/Complex) | Total
Number of user inputs | 5 | × 3 (simple; weights 3/4/6) | 15
Number of user outputs | 8 | × 4 (simple; weights 4/5/7) | 32
Number of user inquiries | 10 | × 4 (average; weights 3/4/6) | 40
Number of files | 8 | × 10 (average; weights 7/10/15) | 80
Number of external interfaces | 2 | × 5 (simple; weights 5/7/10) | 10

Count total (Unadjusted Function Points, UFP) = 177

Page 29:

Calculate Degree of Influence (DI)

Scale: 0 = No influence, 1 = Incidental, 2 = Moderate, 3 = Average, 4 = Significant, 5 = Essential

1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?

Example ratings for this case (one per question, sum = DI = 36): 3, 3, 3, 3, 1, 4, 4, 3, 5, 1, 2, 1, 2, 1

Page 30:

The FP Calculation. Inputs include:

Count Total (UFP)
DI = ΣFi (i.e., sum of the adjustment factors F1..F14)

Calculate Function Points using the following formula:
FP = UFP × [0.65 + 0.01 × ΣFi]

In this example:
FP = 177 × [0.65 + 0.01 × (3+4+1+3+2+4+3+3+2+1+3+5+1+1)]
FP = 177 × [0.65 + 0.01 × 36]
FP = 177 × [0.65 + 0.36]
FP = 177 × [1.01]
FP = 178.77

TCF: Technical complexity factor
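The whole calculation on the last two pages can be replayed in a few lines. The weights are the simple/average/complex values from the counting table, and the complexity chosen per parameter is this example's:

```python
# (count, weight) pairs: inputs and outputs rated simple, inquiries and
# files rated average, external interfaces simple -- as in the worked example
parameters = [
    (5, 3),    # user inputs, simple
    (8, 4),    # user outputs, simple
    (10, 4),   # user inquiries, average
    (8, 10),   # files, average
    (2, 5),    # external interfaces, simple
]
ufp = sum(count * weight for count, weight in parameters)  # 177

di = 3 + 4 + 1 + 3 + 2 + 4 + 3 + 3 + 2 + 1 + 3 + 5 + 1 + 1  # 36
fp = ufp * (0.65 + 0.01 * di)  # 177 * 1.01 = 178.77
```

Note how a DI of 36 adjusts the unadjusted count by only 1%: the adjustment factor ranges from 0.65 (all ratings 0) to 1.35 (all ratings 5).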

Page 31:

Reconciling FP and LOC

http://www.theadvisors.com/langcomparison.htm

Language | Average source statements per function point

1032/AF 16

1st Generation default 320

2nd Generation default 107

3rd Generation default 80

4th Generation default 20

5th Generation default 5

Assembly (Basic) 320

BASIC 107

C 128

C++ 53

COBOL 107

JAVA 53

Visual Basic 5 29

Page 32:

32

Components

Simple Medium Hard

# Database tables

Criteria:

Simple –

Medium –

Hard –

Page 33:

33

Bottom-up estimating

Divide project into size units (LOC, FP, components)

Estimate person-hours per size unit

Most projects are estimated in this way, once details are known about size units

Page 34:

34

Project Management

Communication project initiation requirements

Modeling analysis design

Construction code test

Deployment delivery support

Planning & Managing

top-down estimating; bottom-up estimating

Page 35:

Using FP to estimate effort:

If for a certain projectFPEstimated = 372

Organization’s average productivity for systems of this type is 6.5 FP/person month.

Burdened labor rate of $8000 per month

Cost per FP

$8,000 / 6.5 ≈ $1,231

Total project cost

372 × $1,230.77 ≈ $457,846
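Replaying the arithmetic makes the rounding explicit (the slide rounds the cost per FP down to $1,230 before multiplying):

```python
fp_estimated = 372
productivity = 6.5          # FP per person-month for this class of system
labor_rate = 8000           # burdened $ per person-month

effort_pm = fp_estimated / productivity     # ~57.2 person-months
cost_per_fp = labor_rate / productivity     # ~$1,230.77
total_cost = fp_estimated * cost_per_fp     # ~$457,846
```

Carrying the unrounded cost per FP avoids the compounding error that rounding early introduces on large projects.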

Page 36:

Empirical Estimation Models

Empirical data supporting most empirical models is derived from a limited sample of projects.

NO estimation model is suitable for all classes of software projects.

USE the results judiciously.

General model:

E = A + B × (ev)^C, where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP)
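For a concrete instance of this general form, the basic COCOMO-81 "organic mode" model uses A = 0, B = 2.4, C = 1.05. This is a sketch; the constants are the published basic-model values, and any real use should be calibrated against local project data:

```python
def cocomo_basic_organic(kloc):
    # E = 2.4 * (KLOC)^1.05 person-months (basic COCOMO, organic mode)
    return 2.4 * kloc ** 1.05

# e.g. the ~33.2 KLOC CAD estimate from earlier gives roughly 95 person-months
effort = cocomo_basic_organic(33.2)
```

The superlinear exponent (C > 1) encodes the diseconomy of scale: doubling the size more than doubles the effort.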

Page 37:

Be sure to include contingency

The earlier “completed programs” size and effort data points in Figure 2 are the actual sizes and efforts of seven software products built to an imprecisely-defined specification [Boehm et al. 1984]†. The later“USAF/ESD proposals” data points are from five proposals submitted to the U.S. Air Force Electronic Systems Division in response to a fairly thorough specification [Devenny 1976].

http://sunset.usc.edu/research/COCOMOII/index.html

Page 38:

Some famous words from Aristotle

Aristotle(384-322 B.C.)

It is the mark of an instructed mind to rest satisfied with the degree of precision which the nature of a subject admits, and not to seek exactness when only approximation of the truth is possible….

Page 39:

39

GANTT Schedule

• View the project in the context of time.

• Critical for monitoring a schedule.

• Granularity: 1-2 weeks.

Page 40:

40

Suppose a project comprises five activities: A, B, C, D, and E. A and B have no preceding activities, but activity C requires that activity B be completed before it can begin.

Activity D cannot start until both activities A and B are complete. Activity E requires activities A and C to be completed before it can start. If the activity times are A: 9 days; B: 3 days; C: 9 days; D: 5 days; and E: 4 days, determine the shortest time necessary to complete this project.

Gantt Example 1:

[Gantt chart grid: days 1-18, one row per activity A-E]

http://acru.tuke.sk/doc/PM_Text/PM_Text.doc

Identify those activities which are critical in terms of completing the project in the shortest possible time.

Page 41:

41

Gantt Example 2:

http://acru.tuke.sk/doc/PM_Text/PM_Text.doc

Activity | Immediate Predecessors | Activity Time (days)
A | - | 12
B | - | 6
C | A | 13
D | A, B | 12
E | C, D | 11
F | D | 13
G | E, F | 11

Construct a Gantt chart which will provide an overview of the planned project. How soon could the project be completed?Which activities need to be completed on time in order to ensure that the project is completed as soon as possible?
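Both exercises reduce to a forward pass over the dependency graph; a sketch, with the activity data transcribed from the two examples above:

```python
def earliest_finish(durations, predecessors):
    """Earliest project completion time via a recursive forward pass."""
    finish = {}

    def ef(activity):
        if activity not in finish:
            start = max((ef(p) for p in predecessors.get(activity, [])),
                        default=0)
            finish[activity] = start + durations[activity]
        return finish[activity]

    return max(ef(a) for a in durations)

# Example 1: shortest completion is 16 days (critical activities B, C, E)
ex1 = earliest_finish(
    {"A": 9, "B": 3, "C": 9, "D": 5, "E": 4},
    {"C": ["B"], "D": ["A", "B"], "E": ["A", "C"]})

# Example 2: shortest completion is 48 days (critical path A-D-F-G)
ex2 = earliest_finish(
    {"A": 12, "B": 6, "C": 13, "D": 12, "E": 11, "F": 13, "G": 11},
    {"C": ["A"], "D": ["A", "B"], "E": ["C", "D"], "F": ["D"], "G": ["E", "F"]})
```

Activities whose finish times cannot slip without delaying the project are the critical ones; everything else has slack.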

Page 42:

42

Estimating Schedule Time

Rule of thumb (empirical)

Schedule Time (months) = 3.0 × (person-months)^(1/3)
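As a quick sanity check of the rule (the ~57 person-month figure reuses the earlier FP costing example; the function is just the slide's formula):

```python
def schedule_months(person_months):
    # Rule of thumb: T = 3.0 * (person-months)^(1/3)
    return 3.0 * person_months ** (1 / 3)

# ~57.2 person-months (the earlier FP example) -> roughly 11.6 calendar months
t = schedule_months(57.2)
```

The cube root captures the observation that adding staff compresses the calendar only weakly: quadrupling effort does not quadruple, or even double, how fast you can finish.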

Page 43:

43

People trump process

One good programmer will always outcode 100 hacks in the long run, no matter how good of a process or IDE you give them

http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html

Page 44:

44

Topic Duration

Testing recap 20 minutes

Project planning & estimating 60 minutes

*** Break

Current event reports 30 minutes

Software metrics 60 minutes

Today’s Agenda

Page 45:

45

Why Measure?

“You can’t control what you can’t measure” (Tom DeMarco)

“Show me how you will measure me, and I will show you how I will perform” (Eli Goldratt)

“Anything that can’t be measured doesn’t exist” (Locke, Berkeley, Hume)

Page 46:

46

Scope of our discussion

Sample IT Organization

Director - IS/IT
    Manager, Systems Development & Maintenance (our focus)
        Financial Systems
        Manufacturing Systems
        Customer Fulfillment Systems
    Manager, Computer Operations

Page 47:

47

Examples of systems development metrics

Category | Metric | Units of measure
Speed of delivery | Delivery rate | Elapsed months/Function point
Schedule reliability | Duration variance | Schedule variance %
Software quality | Fault density | Faults/Function point
Productivity | Productivity rate | Function points/Staff month

Page 48:

48

Example: Speed of delivery

[Scatter plot: Elapsed Months (0-70) vs. Developed Function Points (0-12,000)]

Each point is a single project release (average elapsed months = 14.8, n = 33).

Industry Average line is determined from Software Productivity Research

Page 49:

49

Example: Schedule reliability

[Scatter plot: Schedule Variance above commitment (0-60%) vs. Developed Function Points (2,000-12,000)]

Each point is a single project release (n = 33).

Industry Average line is determined from Software Productivity Research

Page 50:

50

Example: Software quality

[Scatter plot: Faults in first 3 months of operation (0-7,000) vs. Developed Function Points (0-12,000)]

Faults reported over the first three months in operation (n = 27). The industry-average line estimates faults found in the first three months of operation, assuming half of all faults are found in that period; it is one half of the industry average of total faults from C. Jones, Applied Software Measurement, 1996, p. 232.

Page 51:

51

Example: Productivity

[Scatter plot: Function Points per Staff Month (0-12) vs. Developed Function Points (0-12,000)]

Each point is a single project release (n = 33). Industry Average line is determined from Software Productivity Research.

Page 52:

52

Objectives of Software Measurement

Improve planning, estimating, and staffing; bring marketing down to reality

Understand software quality, productivity of people, ability to deliver on time

Identify areas for improvement

Compare your performance

Page 53:

53

Objectives of Software Measurement

Help a systems development unit understand its performance

Evaluate performance relative to goals

Allow for comparisons to, e.g.:
    Other organizations
    Alternative development approaches (custom, packaged, outsourced, etc.) and technologies
    Other standards/targets

Improve estimating ability

Promote desired behaviors, e.g., reuse

Page 54:

54

Hawthorne Effect

Famous study conducted in the Hawthorne plant of the Western Electric Company

Plant managers implemented changes in working conditions and recorded data on the plant’s production output

They found that production increased no matter what changes in working conditions they implemented!

What does this example reveal about how people act when they know that an experiment is being conducted?

Page 55:

55

Goal Question Metric

[Tree: each goal decomposes into one or more questions; each question into one or more metrics]

Page 56:

56

Goal Question Metric

Technique for identifying suitable measurements to collect

Assumption: It is only worthwhile measuring things to satisfy goals

Goals are desired end states

Questions identify the information needs associated with goals, and help determine whether or not goals are being met

Metrics are specific items that can be measured to answer the questions

Page 57:

57

GQM Example (High Level)

Goal: Improve systems delivery performance

Questions: What is the quality of our deliverables? How quickly do we deliver? How efficient are we? How predictable is our process?

Metrics: Fault density; Delivery rate; Productivity rate; Duration variance percentage
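The GQM tree is just a small hierarchy, and can be recorded as data. A sketch; the question-to-metric pairing below is the natural reading of this example, not stated explicitly on the slide:

```python
gqm = {
    "goal": "Improve systems delivery performance",
    "questions": {
        "What is the quality of our deliverables?": ["Fault density"],
        "How quickly do we deliver?": ["Delivery rate"],
        "How efficient are we?": ["Productivity rate"],
        "How predictable is our process?": ["Duration variance percentage"],
    },
}

# GQM discipline: every metric answers some question, every question serves the goal
all_metrics = [m for metrics in gqm["questions"].values() for m in metrics]
```

Walking the structure top-down (goal, then questions, then metrics) is what keeps a measurement program from collecting data no one will act on.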

Page 58:

58

Case Study Exercise

1. Get team assignment

2. Read the case study

3. Review example exercise

4. Identify 1 goal

5. Identify 2-3 questions pertinent to this goal.

6. Identify at least 1 metric (indicator) per question

7. Brief the class

Page 59:

59

Measurement and Continuous Improvement

Continuous Improvement Measurement

Page 60:

60

Measurement and Continuous Improvement

What measurement does for continuous improvement:
    Focuses program objectives
    Enables tracking of improvement progress
    Enables communication of program benefit

What continuous improvement does for measurement:
    Clarifies measurement's purpose and role
    Clarifies which measures to collect
    Provides a mechanism for acting on findings
    Enables top-to-bottom organizational support

Page 61:

61

Continuous Process Improvement

Approach to Quality and Measurement

Plan: 1. Identify performance standards and goals
Do: 2. Measure project performance
Check: 3. Compare metrics against goals
Act: 4. Eliminate causes of deficient performance (fix defects; fix root causes)

Page 62:

62

Metrics Program Change Plan

[Program roadmap diagram: change phases (Enable Change → Achieve-1 Change → Achieve-2 Change → Sustain Change) across People / Process / Technology tracks, under overall Quality Management and Program Management]

Initiatives include: Metrics Awareness Education; Metrics Network; Vital Few Metrics Definitions; Vital Few Metrics Implementation; Technology Strategy; KM Support for Measurement Community of Practice; Measurement Process Improvement; Large Project Network; Metrics Strategy Commitment/Ownership; Distributed Support Units; Metrics Repository and Tools; Measurement Process Definition; Roles & Responsibilities; Metrics Rollout Education/Training; Pilot Project Group; Ongoing Metrics Education/Training; System Building Improvement Goals; Metrics Definition & Implementation for Delivery Centers; Metrics Embedded in System Building Methods; Dashboard Metrics Implementation; Pilot Selected Projects and Selected Delivery Centers; Enable Large Projects and Remaining Centers

Page 63:

63

Measurement Program Mortality

Most programs fail, usually within 2 years

[Chart: cumulative metrics-program starts vs. cumulative successes, 0-400 companies, 1980-1991]

Page 64:

64

Reasons for Metric Program Failure

Lack of [visible] executive sponsorship
Lack of alignment with organizational goals
Tendency to collect too much data
Measures not calibrated, normalized, or validated
    Not comparing apples to apples
Fear of [individual] evaluation
Learning curve (e.g., function points)
Cost overhead

Page 65:

65

Key Success Factors

Ensure that measurement is part of something larger, typically performance improvement
    “Trojan Horse” strategy
    Ensure alignment with organizational goals

Start small, iterate
    Strongly recommend doing a pilot test

Automate capture of metrics data

Rigorously define a limited, balanced set of metrics
    “Vital Few”
    Portfolio approach
    Comparability

Aggregate appropriately
    Focus should be on processes, not individuals

Obtain [visible] executive sponsorship

Understand and address the behavioral implications

Page 66:

66

Other Quotes

“Count what is countable, measure what is measurable, and what is not measurable, make measurable”

Galileo

Page 67:

67

Other Quotes

“In God we trust – All others must bring data”

W. Edwards Deming

Page 68:

68

Some Courses at DePaul

SE 468: Software Measurement and Estimation
Software metrics. Productivity, effort, and defect models. Software cost estimation. PREREQUISITE(S): CSC 423 and either SE 430 or CSC 315, or consent

SE 477: Software and System Project Management
Planning, controlling, organizing, staffing, and directing software development activities or information systems projects. Theories, techniques, and tools for scheduling, feasibility study, cost-benefit analysis. Measurement and evaluation of quality and productivity. PREREQUISITE(S): SE 465 or CSC 315

Page 69:

69

For October 24:

Read Pressman Chapters 8-9
Assignment 3 (see course home page)
Current event reports: Alonzo, Pon, Rodenbostel

Page 70:

70

Extra slides

Page 71:

71

Change Control Process

Create Initial Sections → Create/Modify Draft → Review Draft (V&V) → Create Changes to Incorporate (loop while changes are needed in the document; exit when the document is approved)

Over time: Create → Review → Revise → Review → Review Approved ...

Document under development and under change control, then document in production and under formal change control

Page 72:

72

Waterfall model

Systemrequirements

Softwarerequirements

Analysis

Program design

Coding

Testing

Operations

Source: Royce, W.  "Managing the Development of Large Software Systems."

Page 73:

73

Core Concepts

People, Process, Technology

The focus of SE 425 is the process component of software engineering … for the delivery of technology-enabled business solutions

Page 74:

74

V-Model

Code → Unit Test → Integration Test → System Test → Acceptance Test

Page 75:

75

• Program Evaluation and Review Technique (PERT).

• Helps understand the relationship between tasks and project activity flow.

Page 76:

76

Start week = when resources are available.

1. High-level analysis: start week 1, 5 days, sequential
2. Selection of hardware platform: start week 1, 1 day, sequential, dependent on (1)
3. Installation and commissioning of hardware: start week 3, 2 weeks, parallel, dependent on (2), any time after
4. Detailed analysis of core modules: start week 1, 2 weeks, sequential, dependent on (1)

Steps:
1. List all activities in the plan.
2. Plot tasks onto the chart (tasks = arrows; end tasks = dots).
3. Show dependencies.
4. Schedule activities: sequential activities on the critical path; parallel activities; slack time for hold-ups.

Page 77

Carrying out the example critical path analysis above shows us:

• That if all goes well the project can be completed in 10 weeks

• That if we want to complete the task as rapidly as possible, we need:

• 1 analyst for the first 5 weeks

• 1 programmer for 6 weeks starting week 4

• 1 programmer for 3 weeks starting week 6

• Quality assurance for weeks 7 and 9

• Hardware to be installed by the end of week 7

• That the critical path is the path for development and installation of supporting modules

• That hardware installation is a low priority task as long as it is completed by the end of week 7
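The schedule conclusions above come from a forward pass over the task network. A minimal critical-path sketch in Python; the task names, durations (in weeks), and dependencies below are illustrative placeholders, not figures taken from the slide:

```python
# A minimal critical-path sketch: forward pass over a task DAG.
# Task data are illustrative placeholders.
from functools import cache

tasks = {
    "analysis":     {"duration": 5, "depends_on": []},
    "hw_selection": {"duration": 1, "depends_on": ["analysis"]},
    "hw_install":   {"duration": 2, "depends_on": ["hw_selection"]},
    "core_modules": {"duration": 2, "depends_on": ["analysis"]},
}

@cache
def earliest_finish(name):
    """Earliest finish = latest predecessor finish + own duration."""
    task = tasks[name]
    start = max((earliest_finish(d) for d in task["depends_on"]), default=0)
    return start + task["duration"]

# Project length = length of the longest (critical) path through the network.
project_length = max(earliest_finish(n) for n in tasks)
print(project_length)  # 8
```

Tasks whose earliest finish is less than the project length have slack, which is why a task like hardware installation can be deferred without delaying the project.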

Page 78

• Schedule Metrics (55%)

• Lines of Code (46%)

• Schedule, Quality, Cost Tradeoffs (38%)

• Requirements Metrics (37%)

• Test Coverage % (36%)

• Overall Project Risk % (36%)

Tasks completed/late/rescheduled.

KLOC, Function Point – for scheduling and costing.

# of tasks completed on schedule / late / rescheduled.

• Fault Density

• Fault arrival and close rates

# or % changed / new requirements (Formal RFC)

Fraction of lines of code covered. (50/60% 90%).

Level of confidence in achieving a schedule date.

Unresolved faults (e.g., release at 0.25 faults/KNCSS).

Used to determine readiness to deploy. Faults are easier to find than to fix.

Metrics Strategy:

1. Gather historical data (from source code, project schedules, RFCs, reports, etc.)
2. Record metrics.
3. Use current metrics within the context of historical data, e.g., compare the effort required on similar projects.

Commonly Used Metrics (taken from http://www.klci.com, "Software Metrics: State of the Art", 2000)

Page 79

Lines of Code

Project   LOC      Effort  $ (000)  Pp. doc.  Errors  Defects  People
alpha     12,100     24      168       365      134      29       3
beta      27,200     62      440     1,224      321      86       5
gamma     20,200     43      314     1,050      256      64       6

From this data we can develop:

• Errors per KLOC (thousand lines of code)

• Defects per KLOC

• $ per LOC

• Pages of Documentation per KLOC

• Errors / person-month

• LOC per person-month

• $/page of documentation
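These derived measures are simple ratios. A sketch using the alpha row of the table (figures from the slide):

```python
# Derive normalized metrics from raw project data (alpha row of the table).
loc, effort_pm, cost_k, pages, errors, defects, people = 12100, 24, 168, 365, 134, 29, 3

kloc = loc / 1000
errors_per_kloc = errors / kloc        # 134 / 12.1  ≈ 11.1
defects_per_kloc = defects / kloc      # 29 / 12.1   ≈ 2.4
cost_per_loc = cost_k * 1000 / loc     # $168,000 / 12,100 ≈ $13.9 per LOC
pages_per_kloc = pages / kloc          # 365 / 12.1  ≈ 30.2
loc_per_pm = loc / effort_pm           # 12,100 / 24 ≈ 504 LOC per person-month

print(round(errors_per_kloc, 1))  # 11.1
```

The value of such ratios comes from comparing them against the same ratios computed for past projects, not from their absolute magnitudes.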

Evaluating the LOC metric:

+ An artifact of ALL software development projects.

+ Easily countable.

+ Widely used (many models).

- Programming-language dependent.

- Penalizes well-designed, shorter programs.

- Problematic for non-procedural languages.

- The level of detail required is not known early in the project.

Change in Metric usage 1999 – 2000

Lines of Code: DOWN 10%. Function Points: UP 3%.

Page 80

LOC-Oriented Estimation Models

E = 5.2 × (KLOC)^0.91          Walston-Felix model

E = 5.5 + 0.73 × (KLOC)^1.16   Bailey-Basili model

E = 3.2 × (KLOC)^1.05          Boehm simple model

E = 5.288 × (KLOC)^1.047       Doty model, for KLOC > 9

FP-Oriented Estimation Models

E = -13.39 + 0.0545 FP               Albrecht and Gaffney model

E = 60.62 × 7.728 × 10^-8 FP^3       Kemerer model

E = 585.7 + 15.12 FP                 Matson, Barnett, Mellichamp model
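Each of these empirical models was fit to a different historical data set, so for the same input size they can disagree widely. A sketch comparing the LOC-based models at 33.2 KLOC (results are rough, for illustration only):

```python
# Compare LOC-based effort models (person-months) for a 33.2 KLOC project.
kloc = 33.2

walston_felix = 5.2 * kloc ** 0.91
bailey_basili = 5.5 + 0.73 * kloc ** 1.16
boehm_simple  = 3.2 * kloc ** 1.05
doty          = 5.288 * kloc ** 1.047   # valid for KLOC > 9

for name, e in [("Walston-Felix", walston_felix),
                ("Bailey-Basili", bailey_basili),
                ("Boehm simple", boehm_simple),
                ("Doty", doty)]:
    print(f"{name}: {e:.0f} person-months")
```

The spread (roughly 48 to 207 person-months for the same program) is itself the lesson: these models must be calibrated to local historical data before being trusted.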

Page 81

The COCOMO Model: a hierarchy of estimation models

Model 1: Basic. Computes software development effort (and cost) as a function of program size, expressed in estimated lines of code.

Model 2: Intermediate. Computes effort as a function of program size and a set of "cost drivers" that include subjective assessments of product, hardware, personnel, and project attributes.

Model 3: Advanced. Includes all aspects of the intermediate model, plus an assessment of the cost drivers' impact on each step (analysis, design, etc.).

Page 82

Three classes of software projects:

Organic: relatively small and simple. Teams with good application experience work to a set of less rigid requirements.

Semi-detached: intermediate in size and complexity. Teams with mixed experience levels meet a mix of rigid and less rigid requirements. (Ex: transaction processing system)

Embedded: a software project that must be developed within a set of tight hardware, software, and operational constraints. (Ex: flight control software for an aircraft)

Page 83

Basic COCOMO Model

Basic COCOMO equations:

Nominal effort in person-months: E = a_b (KLOC)^b_b

Development time in chronological months: D = c_b E^d_b

Software Project    a_b    b_b    c_b    d_b
Organic             2.4   1.05    2.5   0.38
Semi-detached       3.0   1.12    2.5   0.35
Embedded            3.6   1.20    2.5   0.32

Page 84

Major Software Functions                       Est. LOC

User interface and control facilities (UICF)     2,300
Two-dimensional geometric analysis (2DGA)        5,300
Three-dimensional geometric analysis (3DGA)      6,800
Database management (DBM)                        3,350
Computer graphics display features (CGDF)        4,950
Peripheral control (PC)                          2,100
Design analysis modules (DAM)                    8,400

Estimated lines of code                         33,200

An example of Basic COCOMO (organic mode):

E = a_b (KLOC)^b_b
  = 2.4 (33.2)^1.05
  ≈ 95 person-months

D = c_b E^d_b
  = 2.5 (95)^0.38
  ≈ 14.1 months

Software Project    a_b    b_b    c_b    d_b
Organic             2.4   1.05    2.5   0.38
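The arithmetic in a Basic COCOMO estimate is easy to check in code. A sketch using the organic-mode coefficients from the table above:

```python
# Basic COCOMO, organic mode: E = a_b * KLOC^b_b, D = c_b * E^d_b
a_b, b_b, c_b, d_b = 2.4, 1.05, 2.5, 0.38

kloc = 33.2
effort = a_b * kloc ** b_b        # nominal effort in person-months
duration = c_b * effort ** d_b    # development time in chronological months
avg_staff = effort / duration     # implied average team size

print(round(effort), round(duration, 1))  # 95 14.1
```

Note that effort divided by duration gives an implied average staffing level, a useful sanity check against the team actually available.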

Page 85

Intermediate COCOMO Model

Intermediate COCOMO equations:

Effort in person-months: E = a_b (KLOC)^b_b × EAF, where EAF is an effort adjustment factor

Development time in chronological months: D = c_b E^d_b

Software Project    a_b    b_b
Organic             3.2   1.05
Semi-detached       3.0   1.12
Embedded            2.8   1.20

Page 86

Major Software Functions                       Est. LOC

User interface and control facilities (UICF)     2,300
Two-dimensional geometric analysis (2DGA)        5,300
Three-dimensional geometric analysis (3DGA)      6,800
Database management (DBM)                        3,350
Computer graphics display features (CGDF)        4,950
Peripheral control (PC)                          2,100
Design analysis modules (DAM)                    8,400

Estimated lines of code                         33,200

The same example in Intermediate COCOMO (organic mode):

E = a_b (KLOC)^b_b × EAF
  = 3.2 (33.2)^1.05 × 1
  ≈ 127 person-months

Software Project    a_b    b_b
Organic             3.2   1.05

EAF is calculated as the product of the cost-driver multipliers; in this example all are set to NOMINAL (1.0).

http://sern.ucalgary.ca/courses/seng/621/W98/johnsonk/cost.htm#Intermediate
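A minimal sketch of the intermediate form, with EAF built as a product of cost-driver multipliers. The driver names below are an illustrative subset of the 15 intermediate COCOMO drivers, all set to nominal (1.0) as in the example:

```python
# Intermediate COCOMO, organic mode: E = a_b * KLOC^b_b * EAF
from math import prod

a_b, b_b = 3.2, 1.05

# Illustrative subset of cost drivers; all nominal (1.0) here.
cost_drivers = {
    "product_complexity": 1.0,
    "analyst_capability": 1.0,
    "language_experience": 1.0,
}
eaf = prod(cost_drivers.values())

kloc = 33.2
effort = a_b * kloc ** b_b * eaf
print(round(effort))  # 127
```

With all multipliers nominal, the intermediate estimate reduces to the base equation; setting any driver above or below 1.0 scales the effort proportionally.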

Page 87

Importance of soft skills

"Junior developers and senior software engineers must actively develop the soft skills that are becoming crucial to their success. Managers must communicate with staff regarding future opportunities, to keep people engaged. In our experience, there have been job impacts following our move toward using offshore partners. Primarily, we've moved toward having more managers, project leads, and architects at the expense of more junior staff engineers."

- Cusick, J. & Prasad, A. (2006, Sept/Oct). A practical management and engineering approach to offshore collaboration. IEEE Software. Retrieved October 15, 2006, from http://www.computer.org/portal/cms_docs_software/software/homepage/2006/20_29_0506.pdf

Page 88

Difficult to measure productivity

But it's exceptionally difficult to measure software developer productivity, for all sorts of famous reasons.

I thought the main reason was that no one tries to do it.

Question: how would you evaluate two methodologies if you really wanted to? What would you need?

Answer: conduct a social science experiment using teams of volunteers. Undergraduate CS majors might do in a pinch.

Question: who is going to do this experiment? Computer Science professors? Ha! CS profs want to scribble equations, Design Languages, maybe, once in a great while, they'll be willing to write a program or two... but conduct a social science experiment? Feh, that's some other department, isn't it?

So instead we're stuck with anecdotes, religious fanatics, and hucksters holding seminars...

http://steve-yegge.blogspot.com/2006/09/good-agile-bad-agile_27.html#comment-1481897388024210108

Page 89

Measures of quality

"The chief lesson is that the number of lines of code is not an indicator of quality. Smaller programs can have plenty of bugs while larger projects, such as the Linux kernel, can be tightly controlled. Quality is more accurately reflected by the ratio of developers to the size of the code base and by the number of users who use the software (and provide feedback)."

Study of open source software quality funded by the Department of Homeland Security:

http://www.washingtontechnology.com/news/1_1/daily_news/28134-1.html

Page 90

Hackers find use for Google code search

“The company's new source-code search engine, unveiled Thursday as a tool to help simplify life for developers, can also be misused to search for software bugs, password information and even proprietary code that shouldn't have been posted to the Internet, security experts said Friday.

Unlike Google's main Web search engine, Google Code Search peeks into the lines of code whenever it finds source-code files on the Internet. This will make it easier for developers to search source code directly and dig up open source tools they may not have known about, but it has a drawback.

‘The downside is that you could also use that kind of search to look for things that are vulnerable and then guess who might have used that code snippet and then just fire away at it,’ says Mike Armistead, vice president of products with source-code analysis provider Fortify Software.”

McMillan, R. (2006, October 6). Hackers find use for Google code search. Network World. Retrieved October 16, 2006, from http://www.networkworld.com/news/2006/100606-hackers-find-use-for-google.html

Page 91

How to Measure Hours? (For each category, decide whether to include or exclude it.)

Overtime: compensated (paid); uncompensated (unpaid)

Contract: temporary employees; subcontractors; consultants

Roles: management; test personnel; software quality assurance

. . .