
Slide 1

New Directions for Software Metrics

Norman Fenton

Agena Ltd and Queen Mary University of London

PROMISE

20 May 2007

Slide 2

Overview

• History of software metrics

• Good and bad news

• Hard project constraints

• Project trade-offs

• Decision-making and intervention

• The true objective of software metrics?

• Why we need a causal approach

• Models in action

• Results

Slide 3

Metrics History: Typical Approach

What I really want to measure (Y)

What I can measure (X)

Y = f (X)

Slide 4

Metrics History: the drivers

• ‘productivity’ = size / effort

• ‘effort’ = a × size^b

• ‘quality’ = defects / size
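
As a concrete reading of these three definitions, here is a minimal Python sketch. All input figures are invented, and the constants a and b are the basic-COCOMO organic-mode values, used purely as one example of the effort = a × size^b form (the slide itself names no specific model):

```python
# Illustrative calculations of the three classic metric 'drivers'.
# All input figures are invented; a and b are basic-COCOMO organic-mode
# constants, used here only as an example of the effort = a * size^b form.

kloc = 12.0        # delivered size, thousands of lines of code
effort_pm = 30.0   # actual effort, person-months
defects = 96       # defects found

productivity = kloc / effort_pm    # 'productivity' = size / effort
quality = defects / kloc           # 'quality' = defects / size (defect density)

a, b = 2.4, 1.05                   # assumed effort-model constants
estimated_effort = a * kloc ** b   # 'effort' = a * size^b

print(f"productivity     = {productivity:.2f} KLOC/person-month")
print(f"defect density   = {quality:.1f} defects/KLOC")
print(f"estimated effort = {estimated_effort:.1f} person-months")
```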

Slide 5

Metrics history: size matters!

• LOC

• Improved size metrics

• Improved complexity metrics

Slide 6

Some Decent News About Metrics

• Empirical results/benchmarks

• Significant industrial activity

• Academic/research output

• Metrics in programmer toolkits

Slide 7

…But Now the Bad News

• Lack of commercial relevance

• Programmes doomed by data

• Activity poorly motivated

Failed to meet true objective of quantitative risk assessment

Slide 8

Slide 9

Regression models…?

Slide 10

Using metrics and fault data to predict quality

[Scatter plot: pre-release faults (x-axis, 0–160) vs. post-release faults (y-axis, 0–30), with a “?” marking the assumed fit]
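
A minimal sketch of the kind of naive regression model this plot implies, fitting post-release faults as a straight-line function of pre-release faults; the fault counts below are invented for illustration:

```python
# Naive regression sketch: 'predict' post-release faults from pre-release
# faults. The fault counts below are invented for illustration.
import numpy as np

pre_release  = np.array([10, 30, 55, 70, 90, 120, 150], dtype=float)
post_release = np.array([ 2,  5,  9, 12, 15,  21,  26], dtype=float)

# Least-squares fit of post = slope * pre + intercept
slope, intercept = np.polyfit(pre_release, post_release, deg=1)
print(f"post ≈ {slope:.3f} * pre + {intercept:.2f}")

# Use the fitted line on a new module
new_module_pre = 100.0
print(f"predicted post-release faults: {slope * new_module_pre + intercept:.1f}")
```

The next slide shows actual data on the same axes, which is the deck’s case against leaning on fits like this.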

Slide 11

Pre-release vs post-release faults: actual

[Scatter plot: actual pre-release vs. post-release fault data, same axes (0–160 and 0–30)]

Slide 12

What we need

What I think is... ?

Slide 13

The Good News

• It is possible to use metrics to meet the real objective

• Don’t need a heavyweight ‘metrics programme’

• A lot of the hard stuff has been done

Slide 14

A Causal Model (Bayesian net)

[Bayesian net diagram with nodes: Problem complexity, Design process quality, Defects Introduced, Testing Effort, Defects found and fixed, Residual Defects, Operational usage, Operational defects]
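
The slide gives the net only as a diagram, so the sketch below is one plausible reading of its structure, reduced to four binary (low/high) nodes: defects introduced and testing effort drive defects found and fixed, and introduced plus found drive residual defects. The edge choices and every probability are illustrative assumptions, not values from the actual model:

```python
# Minimal Bayesian-net sketch over four binary nodes (True = 'high').
# Edge structure and all probabilities are illustrative assumptions.
import itertools

P_D = {True: 0.4, False: 0.6}   # P(defects introduced = high)
P_T = {True: 0.5, False: 0.5}   # P(testing effort = high)
# P(defects found = high | introduced, testing)
P_F = {(True, True): 0.8, (True, False): 0.3,
       (False, True): 0.2, (False, False): 0.05}
# P(residual defects = high | introduced, found)
P_R = {(True, True): 0.2, (True, False): 0.9,
       (False, True): 0.01, (False, False): 0.1}

def joint(d, t, f, r):
    """Joint probability factorised along the assumed causal edges."""
    pf = P_F[(d, t)] if f else 1 - P_F[(d, t)]
    pr = P_R[(d, f)] if r else 1 - P_R[(d, f)]
    return P_D[d] * P_T[t] * pf * pr

def query_residual_high(evidence):
    """P(residual = high | evidence) by brute-force enumeration."""
    num = den = 0.0
    for d, t, f, r in itertools.product([True, False], repeat=4):
        world = {"introduced": d, "testing": t, "found": f, "residual": r}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(d, t, f, r)
        den += p
        if r:
            num += p
    return num / den

# Few defects found under low testing effort says little about quality...
print(query_residual_high({"found": False, "testing": False}))  # ~0.36
# ...whereas few defects found despite high testing effort is good news.
print(query_residual_high({"found": False, "testing": True}))   # ~0.20
```

The two queries illustrate what regression cannot express: the same observation, few defects found, flips from bad news to good news depending on the testing effort that produced it.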

Slide 15

A Model in action

Slide 16

Slide 17

https://intranet.dcs.qmul.ac.uk/courses/coursenotes/DCS235/

Slide 18

Slide 19

Slide 20

A Model in action

Slide 21

Slide 22

Slide 23

A Model in action

Slide 24

Slide 25

Slide 26

Slide 27

Slide 28

Slide 29

Actual versus predicted defects

[Scatter plot: actual defects (x-axis, 0–2,500) vs. predicted defects (y-axis, 0–2,000)]
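
A plot like this is usually summarised with a correlation coefficient and a relative-error statistic; a minimal sketch, with invented actual/predicted pairs standing in for the data behind the chart:

```python
# Summarising an actual-vs-predicted validation plot numerically.
# The actual/predicted pairs below are invented, not the study's data.
import numpy as np

actual    = np.array([150, 430, 700, 1100, 1600, 2200], dtype=float)
predicted = np.array([180, 390, 760, 1000, 1500, 1900], dtype=float)

r = np.corrcoef(actual, predicted)[0, 1]             # Pearson correlation
mmre = np.mean(np.abs(predicted - actual) / actual)  # mean magnitude of relative error

print(f"Pearson r = {r:.3f}")
print(f"MMRE      = {mmre:.3f}")
```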

Slide 31

Conclusions

• Heavyweight data and classical statistics NOT the answer

• Empirical studies laid groundwork

• Causal models for quantitative risk

Slide 32

…And

You can use the technology NOW

www.agenarisk.com