Page 1

Software Engineering

Software Metrics

James Gain (jgain@cs.uct.ac.za)

http://people.cs.uct.ac.za/~jgain/courses/SoftEng/

Page 2

Objectives

Introduce the necessity for software metrics

Differentiate between process, project and product metrics

Compare and contrast Lines-Of-Code (LOC) and Function Point (FP) metrics

Consider how quality is measured in Software Engineering

Describe statistical process control for managing variation between projects

Page 3

Measurement & Metrics

Against:

Collecting metrics is too hard ... it’s too time consuming ... it’s too political ... they can be used against individuals ... it won’t prove anything

For:

To characterize, evaluate, predict and improve the process and product, a metrics baseline is essential.

“Anything that you need to quantify can be measured in some way that is superior to not measuring it at all” Tom Gilb

Page 4

Terminology

Measure: Quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process. A single data point

Metrics: The degree to which a system, component, or process possesses a given attribute. Relates several measures (e.g. average number of errors found per person hour)

Indicators: A combination of metrics that provides insight into the software process, project or product

Direct Metrics: Immediately measurable attributes (e.g. lines of code, execution speed, defects reported)

Indirect Metrics: Aspects that are not immediately quantifiable (e.g. functionality, quality, reliability)

Faults:

Errors: Faults found by the practitioners during software development

Defects: Faults found by the customers after release

Page 5

A Good Manager Measures

[Diagram: measurement of the process, project and product yields process metrics, project metrics and product metrics. What do we use as a basis? Size? Function?]

“Not everything that can be counted counts, and not everything that counts can be counted.” - Einstein

Page 6

Process Metrics

Focus on quality achieved as a consequence of a repeatable or managed process. Strategic and Long Term.

Statistical Software Process Improvement (SSPI). Error Categorization and Analysis:

All errors and defects are categorized by origin

The cost to correct each error and defect is recorded

The number of errors and defects in each category is computed

Data is analyzed to find categories that result in the highest cost to the organization

Plans are developed to modify the process
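As an illustration, the categorization-and-analysis step might look like this in Python (the record format, category names and costs are invented for the example):

```python
from collections import defaultdict

# (origin category, cost to correct) for each recorded error/defect;
# categories and costs are illustrative assumptions
records = [
    ("specification", 1200), ("logic", 300), ("specification", 950),
    ("data handling", 400), ("logic", 250), ("interface", 600),
]

totals = defaultdict(lambda: {"count": 0, "cost": 0})
for origin, cost in records:
    totals[origin]["count"] += 1
    totals[origin]["cost"] += cost

# Rank categories by total cost: the most expensive ones are the
# first candidates for process change.
for origin, t in sorted(totals.items(), key=lambda kv: -kv[1]["cost"]):
    print(f"{origin:15s} count={t['count']}  cost={t['cost']}")
```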

Defect Removal Efficiency (DRE). Relationship between errors (E), found before release, and defects (D), found after release. The ideal is a DRE of 1:

DRE = E / (E + D)
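A minimal sketch of the computation; the example figures are the errors (134) and defects (29) reported for project alpha in the table on page 11:

```python
def dre(errors: int, defects: int) -> float:
    """Defect Removal Efficiency: DRE = E / (E + D)."""
    return errors / (errors + defects)

print(round(dre(134, 29), 2))  # project alpha: 0.82
```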

Page 7

Project Metrics

Used by a project manager and software team to adapt project work flow and technical activities. Tactical and Short Term.

Purpose:

Minimize the development schedule by making the necessary adjustments to avoid delays and mitigate problems

Assess product quality on an ongoing basis

Metrics:

Effort or time per SE task

Errors uncovered per review hour

Scheduled vs. actual milestone dates

Number of changes and their characteristics

Distribution of effort on SE tasks

Page 8

Product Metrics

Focus on the quality of deliverables

Product metrics are combined across several projects to produce process metrics

Metrics for the product:

Measures of the Analysis Model

Complexity of the Design Model:

1. Internal algorithmic complexity

2. Architectural complexity

3. Data flow complexity

Code metrics

Page 9

Metrics Guidelines

Use common sense and organizational sensitivity when interpreting metrics data

Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.

Don’t use metrics to appraise individuals

Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them

Never use metrics to threaten individuals or teams

Metrics data that indicate a problem area should not be considered “negative.” These data are merely an indicator for process improvement

Don’t obsess over a single metric to the exclusion of other important metrics

Page 10

Normalization for Metrics

How does an organization combine metrics that come from different individuals or projects?

Metrics depend on the size and complexity of the project

Normalization: compensate for complexity aspects particular to a product

Normalization approaches:

Size-oriented (lines of code approach)

Function-oriented (function point approach)

Page 11

Typical Normalized Metrics

Size-Oriented: errors per KLOC (thousand lines of code), defects per KLOC, R per LOC, pages of documentation per KLOC, errors per person-month, LOC per person-month, R per page of documentation (R = cost in Rands)

Function-Oriented: errors per FP, defects per FP, R per FP, pages of documentation per FP, FP per person-month

Project   LOC     FP    Effort (P/M)   R(000)   Pp. doc   Errors   Defects   People
alpha     12100   189   24             168      365       134      29        3
beta      27200   388   62             440      1224      321      86        5
gamma     20200   631   43             314      1050      256      64        6
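As a worked example, the normalized metrics above can be computed directly from the alpha row of the table:

```python
# Project alpha: 12,100 LOC, 189 FP, 24 person-months, 134 errors, 29 defects.
loc, fp, effort_pm, errors, defects = 12100, 189, 24, 134, 29
kloc = loc / 1000

print(f"errors per KLOC:      {errors / kloc:.2f}")    # 11.07
print(f"defects per KLOC:     {defects / kloc:.2f}")   # 2.40
print(f"LOC per person-month: {loc / effort_pm:.0f}")  # 504
print(f"errors per FP:        {errors / fp:.2f}")      # 0.71
print(f"FP per person-month:  {fp / effort_pm:.2f}")   # 7.88
```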

Page 12

Why Opt for FP Measures?

Independent of programming language. Some programming languages are more compact, e.g. C++ vs. Assembler

Use readily countable characteristics of the “information domain” of the problem

Does not “penalize” inventive implementations that require fewer LOC than others

Makes it easier to accommodate reuse and object-oriented approaches

Original FP approach good for typical Information Systems applications (interaction complexity)

Variants (Extended FP and 3D FP) more suitable for real-time and scientific software (algorithm and state transition complexity)

Page 13

Computing Function Points

Establish count for input domain and system interfaces

Analyze information domain of the application and develop counts

Weight each count by assessing complexity

Assign level of complexity (simple, average, complex) or weight to each count

Grade significance of external factors, F_i, such as reuse, concurrency, OS, ...

Assess the influence of global factors that affect the application

Compute function points

FP = Σ(count × weight) × C

where the complexity multiplier C = 0.65 + 0.01 × N and the degree of influence N = ΣF_i

Page 14

Analyzing the Information Domain

measurement parameter       count        weighting factor         count-total
                                      simple    avg.   complex
number of user inputs       ___  ×      3        4       6      = ___
number of user outputs      ___  ×      4        5       7      = ___
number of user inquiries    ___  ×      3        4       6      = ___
number of files             ___  ×      7       10      15      = ___
number of ext. interfaces   ___  ×      5        7      10      = ___

count-total × complexity multiplier = function points
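A minimal sketch of the calculation on this and the previous slide, encoding the weighting table above; the example project counts and adjustment ratings are invented:

```python
# Weights (simple, average, complex) from the table above.
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, levels, f_values):
    """FP = (sum of count * weight) * (0.65 + 0.01 * sum(F_i))."""
    count_total = sum(counts[p] * WEIGHTS[p][LEVEL[levels[p]]] for p in counts)
    return count_total * (0.65 + 0.01 * sum(f_values))

# Hypothetical project: all parameters of average complexity,
# all 14 adjustment values rated 3.
fp = function_points(
    counts={"inputs": 10, "outputs": 7, "inquiries": 5, "files": 4, "interfaces": 2},
    levels={p: "average" for p in WEIGHTS},
    f_values=[3] * 14,
)
print(round(fp, 1))  # count-total 149, C = 1.07 -> 159.4
```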

Page 15

Taking Complexity into Account

Complexity Adjustment Values (F_i) are rated on a scale of 0 (not important) to 5 (very important):

1. Does the system require reliable backup and recovery?

2. Are data communications required?

3. Are there distributed processing functions?

4. Is performance critical?

5. System to be run in an existing, heavily utilized environment?

6. Does the system require on-line data entry?

7. On-line entry requires input over multiple screens or operations?

8. Are the master files updated on-line?

9. Are the inputs, outputs, files, or inquiries complex?

10. Is the internal processing complex?

11. Is the code designed to be reusable?

12. Are conversion and installation included in the design?

13. Multiple installations in different organizations?

14. Is the application designed to facilitate change and ease-of-use?

Page 16

Exercise: Function Points

Compute the function point value for a project with the following information domain characteristics:

Number of user inputs: 32

Number of user outputs: 60

Number of user enquiries: 24

Number of files: 8

Number of external interfaces: 2

Assume that weights are average and external complexity adjustment values are not important.

Answer: FP = (32×4 + 60×5 + 24×4 + 8×10 + 2×7) × [0.65 + 0.01×0] = 618 × 0.65 = 401.7
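The arithmetic can be checked in a couple of lines:

```python
count_total = 32*4 + 60*5 + 24*4 + 8*10 + 2*7     # average weights -> 618
print(round(count_total * (0.65 + 0.01 * 0), 1))  # all F_i = 0 -> 401.7
```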

Page 17

Example: SafeHome Functionality

[Data-flow diagram of the SafeHome System showing its interfaces: the User (Password, Zone Inquiry, Sensor Inquiry, Panic Button, (De)activate, Messages, Zone Setting), Sensors (Sensor Status, Test Sensor), System Config Data, and the Monitor and Response System (Alarm Alert; Password, Sensors, etc.)]

Page 18

Example: SafeHome FP Calc

measurement parameter       count   weighted count
number of user inputs         3            9
number of user outputs        2            8
number of user inquiries      2            6
number of files               1            7
number of ext. interfaces     4           22

count-total = 52

FP = count-total × [0.65 + 0.01 × ΣF_i] = 52 × [0.65 + 0.46] = 52 × 1.11 ≈ 58

Page 19

Exercise: Function Points

Compute the function point total for your project. Hint: the complexity adjustment values should be low (ΣF_i ≈ 10)

Some appropriate complexity factors are (each scored 0-5):

1. Is performance critical?

2. Does the system require on-line data entry?

3. On-line entry requires input over multiple screens or operations?

4. Are the inputs, outputs, files, or inquiries complex?

5. Is the internal processing complex?

6. Is the code designed to be reusable?

7. Is the application designed to facilitate change and ease-of-use?


Page 20

OO Metrics: Distinguishing Characteristics

The following characteristics require that special OO metrics be developed:

Encapsulation — Concentrate on classes rather than functions

Information hiding — An information hiding metric will provide an indication of quality

Inheritance — A pivotal indication of complexity

Abstraction — Metrics need to measure a class at different levels of abstraction and from different viewpoints

Conclusion: the class is the fundamental unit of measurement

Page 21

OO Project Metrics

Number of Scenario Scripts (Use Cases):

The number of use cases is directly proportional to the number of classes needed to meet requirements

A strong indicator of program size

Number of Key Classes (Class Diagram):

A key class focuses directly on the problem domain

NOT likely to be implemented via reuse

Typically 20-40% of all classes are key; the rest support infrastructure (e.g. GUI, communications, databases)

Number of Subsystems (Package Diagram):

Provides insight into resource allocation, scheduling for parallel development and overall integration effort

Page 22

OO Analysis and Design Metrics

Related to Analysis and Design Principles

Complexity:

Weighted Methods per Class (WMC): Assume that n methods with cyclomatic complexities c_1, c_2, ..., c_n are defined for a class C:

WMC = Σ c_i

Depth of the Inheritance Tree (DIT): The maximum length from a leaf to the root of the tree. Large DIT leads to greater design complexity but promotes reuse

Number of Children (NOC): Total number of children for each class. Large NOC may dilute abstraction and increase testing
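A minimal sketch of WMC; the class names and complexity values are invented, and in practice the cyclomatic complexities would come from a static-analysis tool:

```python
# Cyclomatic complexities of each class's methods (illustrative values).
class_methods = {"Sensor": [3, 1, 4], "Zone": [2, 2], "Alarm": [5, 1, 1, 2]}

# WMC for a class is the sum of its methods' complexities.
wmc = {cls: sum(cs) for cls, cs in class_methods.items()}
print(wmc)  # {'Sensor': 8, 'Zone': 4, 'Alarm': 9}
```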

Page 23

Further OOA&D Metrics

Coupling:

Coupling Between Object classes (CBO): Total number of collaborations listed for each class in CRC cards. Keep CBO low because high values complicate modification and testing

Response For a Class (RFC): Set of methods potentially executed in response to a message received by a class. High RFC implies test and design complexity

Cohesion:

Lack of Cohesion in Methods (LCOM): Number of methods in a class that access one or more of the same attributes. High LCOM means methods are tightly coupled to one another through attributes
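A minimal sketch of LCOM under the definition above, given a map from each method of a class to the attributes it accesses (the method and attribute names are invented):

```python
# Attributes accessed by each method of one class (illustrative names).
accesses = {
    "arm":    {"status", "zones"},
    "disarm": {"status"},
    "poll":   {"sensors"},
    "test":   {"sensors", "zones"},
}

def lcom(accesses):
    """Count methods sharing at least one accessed attribute with another."""
    return sum(
        any(attrs & accesses[other] for other in accesses if other != m)
        for m, attrs in accesses.items()
    )

print(lcom(accesses))  # 4: every method shares an attribute with another
```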

Page 24

OO Testability Metrics

Encapsulation:

Percent Public and Protected (PAP): Percentage of attributes that are public or protected. Public attributes can be inherited and accessed externally. High PAP means more side effects

Public Access to Data members (PAD): Number of classes that access another class's attributes. Violates encapsulation

Inheritance:

Number of Root Classes (NRC): Count of distinct class hierarchies. These must all be tested separately

Fan In (FIN): The number of superclasses associated with a class. FIN > 1 indicates multiple inheritance, which should be avoided

Number of Children (NOC) and Depth of Inheritance Tree (DIT): Superclasses need to be retested for each subclass


Page 25

Quality Metrics

Quality measures conformance to explicit requirements, adherence to specified standards, and satisfaction of implicit requirements

Software quality can be difficult to measure and is often highly subjective

1. Correctness: The degree to which a program operates according to specification

Metric = Defects per FP

2. Maintainability: The degree to which a program is amenable to change

Metric = Mean Time to Change. Average time taken to analyze, design, implement and distribute a change

Page 26

Quality Metrics: Further Measures

3. Integrity: The degree to which a program is impervious to outside attack

Summed over all types of security attack, i, where t_i = threat (the probability that an attack of type i will occur within a given time) and s_i = security (the probability that an attack of type i will be repelled):

integrity = Σ [1 − t_i × (1 − s_i)]

4. Usability: The degree to which a program is easy to use.

Metric = (1) the skill required to learn the system, (2) the time required to become moderately proficient, (3) the net increase in productivity, (4) assessment of the user's attitude to the system

Covered in HCI course

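A minimal sketch of the integrity formula above; the single attack type with threat 0.25 and security 0.95 follows Pressman's textbook example and gives an integrity of about 0.99:

```python
def integrity(attacks):
    """Sum 1 - t*(1 - s) over attack types (t = threat, s = security)."""
    return sum(1 - t * (1 - s) for t, s in attacks)

# One attack type with threat 0.25 and security 0.95 -> 0.9875 (~0.99).
print(integrity([(0.25, 0.95)]))
```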

Page 27

Quality Metrics: McCall’s Approach

McCall’s Triangle of Quality:

PRODUCT OPERATION: Correctness, Reliability, Usability, Integrity, Efficiency

PRODUCT REVISION: Maintainability, Flexibility, Testability

PRODUCT TRANSITION: Portability, Reusability, Interoperability

Page 28

Quality Metrics: Deriving McCall’s Quality Metrics

Assess a set of quality factors on a scale of 0 (low) to 10 (high)

Each of McCall’s Quality Metrics is a weighted sum of different quality factors

Weighting is determined by product requirements

Example: Correctness = Completeness + Consistency + Traceability

Completeness is the degree to which full implementation of required function has been achieved

Consistency is the use of uniform design and documentation techniques

Traceability is the ability to trace program components back to analysis

This technique depends on good objective evaluators because quality factor scores can be subjective
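A minimal sketch of such a weighted sum; the factor scores and weights are invented for the example, and in practice the weights would come from product requirements:

```python
# Factor scores on the 0-10 scale and requirement-driven weights
# (both illustrative; weights sum to 1).
factors = {"completeness": 8, "consistency": 6, "traceability": 7}
weights = {"completeness": 0.5, "consistency": 0.25, "traceability": 0.25}

correctness = sum(weights[f] * factors[f] for f in factors)
print(correctness)  # 7.25 on the 0-10 scale
```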

Page 29

Managing Variation

How can we determine if metrics collected over a series of projects improve (or degrade) as a consequence of improvements in the process rather than noise?

Statistical Process Control:

Analyzes the dispersion (variability) and location (moving average) of the metric

Determines if metrics are:

(a) Stable (the process exhibits only natural or controlled changes), or

(b) Unstable (the process exhibits out-of-control changes and metrics cannot be used to predict changes)

Page 30

Control Chart

Compare sequences of metric values against the mean and standard deviation, e.g. a metric is unstable if eight consecutive values lie on one side of the mean.

[Control chart: errors found per review hour across a sequence of projects, plotted with lines at the mean and at ± one standard deviation]
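A minimal sketch of the stability test described above, using invented metric values:

```python
from statistics import mean, stdev

# Errors found per review hour across a series of projects (illustrative).
values = [3.1, 2.8, 3.5, 2.9, 3.3, 3.6, 3.4, 3.2, 3.5, 3.7, 3.3, 3.4]
m, sd = mean(values), stdev(values)
print(f"mean={m:.2f}, control band: {m - sd:.2f} .. {m + sd:.2f}")

def unstable(values, m, run=8):
    """True if `run` consecutive values lie on one side of the mean."""
    sides = [v > m for v in values if v != m]
    return any(len(set(sides[i:i + run])) == 1
               for i in range(len(sides) - run + 1))

print("unstable:", unstable(values, m))  # False for this sequence
```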