
16 Practices for Improving Software Project Success

LCDR Jane Lochner, DASN (C4I/EW/Space)

Software Technology Conference, 3-6 May 1999

(703) 602-6887, [email protected]

Productivity vs. Size*

[Chart: Function Points per Person-Month (0 to 16) vs. Software Size in Function Points (10 to 10,240)]

* Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing

Percent Cancelled vs. Size(1)

[Chart: Software Size in Function Points(2) (0 to 12,000) vs. Probability of Software Project Being Cancelled (5% to 50%)]

(1) Capers Jones, Becoming Best in Class, Software Productivity Research, 1995 briefing
(2) 80 SLOC of Ada, or 128 SLOC of C, to code 1 function point
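As a quick worked illustration of footnote (2), added here only for a sense of scale (not part of the original chart):

```latex
% SLOC required to code 10,000 function points, per footnote (2)
10{,}000~\text{FP} \times 80~\tfrac{\text{SLOC}}{\text{FP}} = 800{,}000~\text{SLOC (Ada)}
\qquad
10{,}000~\text{FP} \times 128~\tfrac{\text{SLOC}}{\text{FP}} = 1{,}280{,}000~\text{SLOC (C)}
```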

The drivers of the bottom line change dramatically with software size.

When a software development effort is cancelled, management failure is usually the reason.

"…The task force is convinced that today's major problems with military software development are not technical problems, but

management problems."

When the tasks of a team effort are interrelated, the total effort increases in proportion to the square of the number of persons on the team (illustrated below).

– Complexity of effort, difficulty of coordination

* Report of the Defense Science Board Task Force on Military Software, September 1987, Frederick Brooks, Chairman

Management is the Problem, But Complexity is the Villain
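As an illustrative aside (not part of the original briefing), the quadratic growth can be seen by counting the pairwise communication paths on a team of n people:

```latex
% Pairwise communication paths among n interrelated team members
\text{paths}(n) \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2},
\qquad \text{paths}(5) = 10, \quad \text{paths}(50) = 1225
```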

Software Program Managers Network (SPMN)

Consulting Software Productivity Centers of Excellence
– Bring academia & industry together

Delta One
– Pilot program to attract, train, and retain software workers for Navy and DoD programs

Airlie Council
– Identify fundamental processes and proven solutions essential for large-scale software project success

Board of Directors
– Chaired by BGen Nagy, USAF

Airlie Council

Membership
– Successful managers of large-scale s/w projects
– Recognized methodologists & metrics authorities
– Prominent consultants
– Executives from major s/w companies

Product Approval
– Software Advisory Group (SAG)

Three Foundations for Project Success

3 Foundations, 16 Essential Practices™

Project Integrity
– Adopt Continuous Risk Management
– Estimate Cost and Schedule Empirically
– Use Metrics to Manage
– Track Earned Value
– Track Defects Against Quality Targets
– Treat People as the Most Important Resource

Construction Integrity
– Adopt Life Cycle Configuration Management
– Manage and Trace Requirements
– Use System-Based Software Design
– Ensure Data and Database Interoperability
– Define and Control Interfaces
– Design Twice, Code Once
– Assess Reuse Risks and Costs

Product Integrity and Stability
– Inspect Requirements and Design
– Manage Testing as a Continuous Process
– Compile & Smoke Test Frequently

Project Integrity

Management Practices that:

– Give early indicators of potential problems

– Coordinate the work and the communications of the development team

– Achieve a stable development team with needed skills

– Are essential to deliver the complete product on-time, within budget, and with all documentation required to maintain the product after delivery

Project Integrity

Adopt Continuous Risk Management

Estimate Cost and Schedule Empirically

Use Metrics to Manage

Track Earned Value

Track Defects Against Quality Targets

Treat People as the Most Important Resource

Risk Management Practice Essentials

Identify risks over the entire life cycle, including at least: cost, schedule, technical, staffing, external dependencies, supportability, sustainability, political

For EACH risk, estimate (see the sketch below):
– Likelihood that it will become a problem
– Impact if it does
– Mitigation & contingency plans
– Measurement method

Update & report risk status at least monthly

ALARMS:
– Trivial risks
– Risks from unproven technology not identified
– No trade studies for high-risk technical requirements
– Management & workers have different understanding of risks
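A minimal sketch of the risk-register bookkeeping described above, in Python; the field names, 1-5 scales, and exposure = likelihood x impact scoring are illustrative assumptions, not prescribed by the briefing.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One entry in a project risk register."""
    title: str
    category: str          # cost, schedule, technical, staffing, external, ...
    likelihood: int        # 1 (unlikely) .. 5 (almost certain) - assumed scale
    impact: int            # 1 (negligible) .. 5 (severe) - assumed scale
    mitigation: str        # plan to reduce likelihood or impact
    contingency: str       # plan if the risk becomes a problem
    measure: str           # how the risk is tracked (the "measurement method")
    last_reported: date = field(default_factory=date.today)

    @property
    def exposure(self) -> int:
        # Common likelihood-times-impact scoring; used here only to rank risks.
        return self.likelihood * self.impact

def monthly_report(register: list[Risk]) -> None:
    """Print risks ranked by exposure; intended to run at least monthly."""
    for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
        print(f"{risk.exposure:>2}  {risk.category:<12} {risk.title}")

register = [
    Risk("Key subcontractor slips delivery", "external", 3, 4,
         "Weekly status calls", "Descope interface features", "Milestone slip days"),
    Risk("Unproven middleware throughput", "technical", 4, 5,
         "Early prototype benchmark", "Fall back to proven product", "Messages/sec in lab"),
]
monthly_report(register)
```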

Cost and Schedule Estimation Practice Essentials

Use actual costs measured on past comparable projects

Identify all reused code (COTS/GOTS), evaluate applicability and estimate amount of code modification and new code required to integrate

Compare empirical top-down cost estimate with a bottom-up engineering estimate

Never compress the schedule below 85% of the nominal estimate (see the sketch below)

ALARMS:
– High productivity estimates based on unproven technology
– Estimators not familiar with industry norms
– No cost associated with code reuse
– System requirements are incomplete
– "Bad" earned-value metrics
– No risk materialization costs
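A hedged sketch of two of the checks above: comparing an empirical top-down estimate against a bottom-up engineering estimate, and enforcing the 85%-of-nominal schedule floor. The 20% tolerance and the example numbers are invented for illustration.

```python
def check_estimates(top_down_pm: float, bottom_up_pm: float,
                    tolerance: float = 0.20) -> bool:
    """Flag the estimate if top-down and bottom-up person-month figures
    differ by more than `tolerance` (assumed 20%)."""
    spread = abs(top_down_pm - bottom_up_pm) / max(top_down_pm, bottom_up_pm)
    return spread <= tolerance

def min_allowed_schedule(nominal_months: float, floor: float = 0.85) -> float:
    """Shortest schedule permitted: 85% of the nominal (empirically derived) schedule."""
    return floor * nominal_months

# Example: 480 vs 560 person-months agree within 20%; a 24-month nominal
# schedule may not be compressed below 20.4 months.
print(check_estimates(480, 560))        # True
print(min_allowed_schedule(24.0))       # 20.4
```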

Use Metrics to Manage Practice Essentials

Collect metrics on:
– Risk Materialization

– Product Quality

– Process Effectiveness

– Process Conformance

Make decisions based on data not older than one week

Make metrics data available to all team members

Define thresholds that trigger predefined actions

ALARMS:
– Large price tag attached to request for metrics data
– Not reported at least monthly
– Rebaselining
– Inadequate task activity network
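A minimal sketch of threshold-triggered metrics management as described above; the metric names, threshold values, and actions are illustrative assumptions.

```python
from datetime import date, timedelta

# Thresholds that trigger predefined actions when crossed (values are invented).
THRESHOLDS = {
    "open_defect_density_per_ksloc": (5.0, "Convene defect review board"),
    "requirements_churn_pct_month":  (10.0, "Re-plan affected increments"),
    "staff_turnover_pct_year":       (15.0, "Escalate to program manager"),
}

def evaluate(metrics: dict[str, float], as_of: date, collected: date) -> list[str]:
    """Return the predefined actions for every threshold that has been crossed.

    Refuses to decide on stale data: decisions must be based on data
    no older than one week.
    """
    if as_of - collected > timedelta(days=7):
        raise ValueError("Metrics data is older than one week; re-collect before deciding.")
    actions = []
    for name, (limit, action) in THRESHOLDS.items():
        if metrics.get(name, 0.0) > limit:
            actions.append(action)
    return actions

sample = {"open_defect_density_per_ksloc": 6.2, "requirements_churn_pct_month": 4.0}
print(evaluate(sample, as_of=date(1999, 5, 3), collected=date(1999, 4, 30)))
```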

Track Earned Value Practice Essentials

Establish unambiguous exit criteria for EACH task (see the sketch below)
– Take BCWP credit for tasks when the exit criteria have been verified as passed, and report ACWP for those tasks

Establish cost and schedule budgets that are within uncertainty acceptable to the project

Allocate labor and other resources to each task

ALARMS:
– Software tasks not separate from non-software tasks
– More than 20% of the total development effort is LOE
– Task durations greater than 2 weeks
– Rework doesn't appear as a separate task
– Data is old
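A small sketch of the earned-value bookkeeping these essentials imply, using the standard BCWS/BCWP/ACWP quantities. The task data and the 0/100 earning rule (credit only after exit criteria are verified) are illustrative; for simplicity, all listed tasks are treated as scheduled to be complete.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    budget: float            # planned value, in person-hours or dollars
    actual_cost: float       # ACWP to date
    exit_criteria_verified: bool

    @property
    def earned(self) -> float:
        # 0/100 earning rule: BCWP credit only after exit criteria are verified as passed.
        return self.budget if self.exit_criteria_verified else 0.0

def earned_value_summary(tasks: list[Task]) -> dict[str, float]:
    bcws = sum(t.budget for t in tasks)          # budgeted cost of work scheduled
    bcwp = sum(t.earned for t in tasks)          # budgeted cost of work performed
    acwp = sum(t.actual_cost for t in tasks)     # actual cost of work performed
    return {
        "CPI": bcwp / acwp if acwp else 0.0,     # cost performance index
        "SPI": bcwp / bcws if bcws else 0.0,     # schedule performance index
    }

tasks = [
    Task("Parse telemetry frames", 120, 130, True),
    Task("Display track symbology", 200, 90, False),   # in progress: no BCWP credit yet
]
print(earned_value_summary(tasks))   # CPI ~0.55, SPI ~0.38 on this toy data
```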

Track Defects Against Quality Targets Practice Essentials

Establish unambiguous quality goals at project inception
– Understandability; reliability & maintainability; modularity; defect density

Classify defects by (see the sketch below):
– Type; severity; urgency; discovery phase

Report defects by:
– When created; when found; number of inspections present but not found; number closed and currently open, by category

ALARMS:
– Defects not managed by CM
– Culture penalizes discovery of defects
– Not aware of effectiveness of defect removal methods
– Earned-value credit is taken before defects are fixed or formally deferred
– Quality target failures not recorded as one or more defects
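A minimal sketch of classifying defects and checking defect density against a quality target; the enumerations and the 0.5 defects/KSLOC target are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

class Phase(Enum):          # life-cycle phase, for "created in" and "found in"
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    CODE = "code"
    TEST = "test"
    FIELD = "field"

@dataclass
class Defect:
    identifier: str
    defect_type: str        # e.g. logic, interface, data, documentation
    severity: Severity
    urgency: str            # e.g. fix-now, next-build, deferred
    found_in: Phase
    created_in: Phase
    open: bool = True

def defect_density(defects: list[Defect], ksloc: float) -> float:
    """Open defects per thousand source lines of code."""
    return sum(1 for d in defects if d.open) / ksloc

TARGET_DENSITY = 0.5        # assumed quality target, defects/KSLOC

defects = [
    Defect("D-101", "interface", Severity.MAJOR, "next-build", Phase.TEST, Phase.DESIGN),
    Defect("D-102", "logic", Severity.MINOR, "deferred", Phase.CODE, Phase.CODE, open=False),
]
print(defect_density(defects, ksloc=40.0) <= TARGET_DENSITY)   # True: 0.025 <= 0.5
```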

Treat People as Most Important Resource Practice Essentials

Provide staff the tools to be efficient and productive
– Software
– Equipment
– Facilities, work areas

Recognize team members for performance
– Individual goals
– Program requirements

Make professional growth opportunities available
– Technical
– Managerial

ALARMS:
– Excessive or unpaid overtime
– Excessive pressure
– Large, unidentified staff increases
– Key software staff not receiving competitive compensation
– Staff turnover greater than industry/locale norms

Construction Integrity

Development Practices that:

– Provide a stable, controlled, predictable development or maintenance environment

– Increase the probability that what was to be built is actually in the product when delivered

Construction Integrity

Adopt Life-cycle Configuration Management

Manage and Trace Requirements

Use System-Based Software Design

Ensure Data and Database Interoperability

Define and Control Interfaces

Design Twice, Code Once

Assess Reuse Risks and Costs

Configuration Management Practice Essentials

Institute CM for:
– COTS, GOTS, NDI and other shared engineering artifacts
– Design documentation
– Code
– Test documentation
– Defects

Incorporate CM activities as tasks within project plans and activity network

Conduct Functional & Physical Configuration Audits

Maintain version and semantic consistency between CIs

ALARMS:
– Developmental baseline not under CM control
– CM activities don't have budgets, products, and unambiguous exit criteria
– CM does not monitor and control the delivery and release-to-operation process
– Change status not reported
– No ICWGs for external interfaces
– CCBs don't assess system impact
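A hedged sketch of one narrow piece of this practice, checking version consistency between the configuration items in a build manifest and the CM-controlled baseline; the CI names and versions are invented.

```python
# Baseline versions recorded by CM for the current developmental baseline (invented).
BASELINE = {
    "track_manager_csci": "3.2.1",
    "display_csci": "2.7.0",
    "mapping_db_schema": "1.4",
}

def audit_build(manifest: dict[str, str]) -> list[str]:
    """Return discrepancies between a build manifest and the CM baseline."""
    problems = []
    for ci, version in manifest.items():
        expected = BASELINE.get(ci)
        if expected is None:
            problems.append(f"{ci}: not under CM control")
        elif expected != version:
            problems.append(f"{ci}: built {version}, baseline is {expected}")
    for ci in BASELINE.keys() - manifest.keys():
        problems.append(f"{ci}: in baseline but missing from build")
    return problems

print(audit_build({"track_manager_csci": "3.2.1", "display_csci": "2.6.9"}))
```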

Manage & Trace Requirements Practice Essentials

Trace System Requirements down through all derived requirements and layers of design to the lowest level and to individual test cases

Trace each CI back to one or more System Requirements

For Incremental Release Model, develop Release Build-plan that traces all System Requirements into planned releases

For Evolutionary Model, trace new requirements into Release Build-plan as soon as they are defined

ALARMS:
– Layer design began before requirements for performance, reliability, safety, external interfaces, and security had been allocated
– System requirements:
  – Not defined by real end users
  – Did not include operational scenarios
  – Did not specify inputs that will stress the system
– Traceability is not to the code level
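A minimal sketch of the two-way traceability check described in the essentials above: system requirements forward to design elements and test cases, and each CI back to at least one requirement. Identifiers and structures are illustrative.

```python
# Forward trace: system requirement -> design elements and test cases (invented IDs).
TRACE = {
    "SYS-001": {"design": ["CSCI-A.mod1"], "tests": ["TC-014", "TC-015"]},
    "SYS-002": {"design": ["CSCI-B.mod4"], "tests": []},            # gap: no test case
}

# Backward trace: configuration item -> system requirements it satisfies.
CI_TO_REQS = {
    "CSCI-A.mod1": ["SYS-001"],
    "CSCI-B.mod4": ["SYS-002"],
    "CSCI-C.mod9": [],                                              # gap: orphan CI
}

def trace_gaps() -> list[str]:
    """List every requirement or CI whose traceability is incomplete."""
    gaps = []
    for req, links in TRACE.items():
        if not links["design"]:
            gaps.append(f"{req}: not allocated to any design element")
        if not links["tests"]:
            gaps.append(f"{req}: no test case traces to this requirement")
    for ci, reqs in CI_TO_REQS.items():
        if not reqs:
            gaps.append(f"{ci}: does not trace back to any system requirement")
    return gaps

print(trace_gaps())
```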

System Based Software Design Practice Essentials

Develop System and Software Architectures IAW structured methodologies

Develop System and Software Architectures from the same partitioning perspective

– Information

– Objects

– Functions

– States

Design System and Software Architecture to give views of static, dynamic and physical structures

ALARMS:
– Modifications and additions to reused legacy/COTS/GOTS software not minimized
– Security, reliability, performance, safety, and interoperability requirements not included
– Design not specified for all internal and external interfaces
– Requirements not verified through M&S before start of software design
– Software engineers did not participate in architecture development

Data and Database Interoperability Practice Essentials

Design Information Systems with Very Loose Coupling Between Hardware, Persistent Data, and application software

Define data element names, definitions, minimum accuracy, data type, units of measure, and range of values (see the sketch below)
– Identified using several processes

– Minimizes the amount of translation required to share data with external systems

– Relationships between data items defined based on queries to be made on the database

ALARMS:
– Data security requirements, business rules, and high-volume transactions on the database not specified before database physical design begins
– Compatibility analysis not performed because DBMS is "SQL compliant"
– No time/resources budgeted to translate COTS databases to DoD standards
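A small sketch of a data element dictionary entry carrying the attributes listed above, with a range/type validation helper; the element and its values are invented.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    definition: str
    data_type: type
    units: str
    min_accuracy: float       # smallest meaningful increment
    valid_range: tuple[float, float]

    def validate(self, value: float) -> bool:
        """Check type and agreed range before exchanging the value with external systems."""
        low, high = self.valid_range
        return isinstance(value, self.data_type) and low <= value <= high

# Example entry, shared with external systems to minimize translation.
altitude = DataElement(
    name="platform_altitude",
    definition="Height of own platform above mean sea level",
    data_type=float,
    units="meters",
    min_accuracy=0.1,
    valid_range=(-500.0, 30000.0),
)
print(altitude.validate(10500.0))   # True
print(altitude.validate(99999.0))   # False: outside the agreed range
```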

Define and Control Interfaces Practice Essentials

Ensure interfaces comply with applicable public, open API standards and data interoperability standards

Define user interface requirements through user participation

Avoid use of proprietary features of COTS product interfaces

Place interface under CM Control before developing software components that use it

Track external interface dependencies in activity network

ALARMS:
– Assumption that two interfacing applications that comply with the JTA and DII COE TAFIM interface standards are interoperable
  – E.g., JTA and TAFIM include both Microsoft and UNIX interface standards, and Microsoft and UNIX design these standards to exclude each other
– Interface testing not done under heavy stress
– Reasons for using proprietary features not documented

Design Twice, Code Once Practice Essentials

Describe:
– Execution process characteristics and features
– End-user functionality
– Physical software components and their interfaces
– Mapping of software components onto hardware components
– States and state transitions

Use design methods that are consistent with those used for the system and are defined in the SDP

ALARMS:
– Graphics not used to describe different views of the design
– Operational scenarios not defined that show how the different views of the design interact
– Reuse, COTS, GOTS, and program library components not mapped to the software/database components
– System and software requirements not traced to the software/database components

Assess Reuse Risks & Costs Practice Essentials

ALARMS: Development of "wrappers" needed to

translate reuse software external interfaces

Positive and negative impact of COTS proprietary features not identified

No analysis of GOTS sustainment organization

No plan/cost for COTS upgrades Cost of reuse code is less than 30% of

new code Reuse code has less than 25%

functionality fit

Conduct trade study to select reuse or new architecture– Establish quantified selection criteria and

acceptability thresholds– analyze full lifecycle costs of each

candidate component

Identify reuse code at program inception before start of architecture design

Use architectural frameworks that dominate commercial markets– CORBA/JavaBeans– ActiveX/DCOM
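A hedged sketch of a quantified reuse-versus-new trade study with acceptability thresholds, as the essentials above call for; the criteria, weights, and scores are invented for illustration.

```python
# Weighted selection criteria and minimum acceptable score per criterion (assumed values).
CRITERIA = {                    # name: (weight, acceptability threshold on a 0-10 scale)
    "functionality_fit": (0.35, 6),
    "life_cycle_cost":   (0.30, 5),
    "integration_risk":  (0.20, 5),
    "sustainment":       (0.15, 4),
}

def evaluate(scores: dict[str, int]) -> tuple[float, bool]:
    """Return (weighted score, meets-all-thresholds?) for one candidate."""
    total = sum(CRITERIA[c][0] * scores[c] for c in CRITERIA)
    acceptable = all(scores[c] >= CRITERIA[c][1] for c in CRITERIA)
    return total, acceptable

candidates = {
    "reuse_legacy_tracker": {"functionality_fit": 5, "life_cycle_cost": 8,
                             "integration_risk": 6, "sustainment": 7},
    "new_development":      {"functionality_fit": 9, "life_cycle_cost": 5,
                             "integration_risk": 7, "sustainment": 6},
}
for name, scores in candidates.items():
    total, ok = evaluate(scores)
    # The reuse candidate scores well overall but fails the functionality-fit threshold,
    # mirroring the "less than 25% functionality fit" alarm above.
    print(f"{name}: score={total:.2f}, meets thresholds={ok}")
```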

Product Integrity

Quality Practices that:

– Help assure that, when delivered, the product will meet customer quality requirements

– Provide an environment where defects are caught when inserted, and any that leak through are caught as early as possible

Product Integrity

Inspect Requirements and Design

Manage Testing As a Continuous Process

Compile and Smoke Test Frequently

Inspect Requirements and Design Practice Essentials

Inspect products that will be inputs to other tasks

Establish a well-defined, structured inspection technique

Train employees how to conduct inspections

Collect & Report defect metrics for each formal inspection

ALARMS:
– Less than 80% of defects discovered by inspections
– Predominant inspection is informal code walkthrough
– Less than 100% inspection of architecture & design products
– Less than 50% inspection of test plans
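A minimal sketch of the defect metrics an inspection program might collect and report, including a check against the 80%-found-by-inspection alarm above; the counts and the record fields are invented.

```python
def inspection_effectiveness(found_by_inspection: int, found_later: int) -> float:
    """Fraction of known defects that inspections caught before test or fielding."""
    total = found_by_inspection + found_later
    return found_by_inspection / total if total else 0.0

# Per-inspection metrics to collect and report (illustrative record).
inspection_record = {
    "artifact": "Track Correlation SRS v1.2",
    "inspectors": 4,
    "preparation_hours": 6.0,
    "meeting_hours": 2.0,
    "major_defects": 7,
    "minor_defects": 15,
}

rate = inspection_effectiveness(found_by_inspection=88, found_later=32)
print(f"{rate:.0%} of defects found by inspection")        # 73%
print("ALARM: below 80% target" if rate < 0.80 else "OK")  # ALARM
```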

Manage Testing as a Continuous Process

Practice Essentials

Deliver inspected test products IAW integration-test plan

Ensure every CSCI requirement has at least one test case (see the example below)

Include both white- and black-box tests
– Functional, interface, error recovery, out-of-bounds input, and stress tests
– Scenarios designed to model field operation

ALARMS:

– Builds for all tests not done by CM

– Pass/Fail criteria not established for each test

– No test stoppage criteria

– No automated test tools

– High-risk and safety- or security-critical code not tested early on

– Compressed test schedules
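A small illustration of a black-box, out-of-bounds-input test of the kind listed in the essentials above, written with Python's unittest; the unit under test, clamp_bearing, is a made-up stand-in, not from the briefing.

```python
import unittest

def clamp_bearing(degrees: float) -> float:
    """Hypothetical unit under test: normalize a bearing to [0, 360)."""
    if degrees != degrees:                 # reject NaN
        raise ValueError("bearing is not a number")
    return degrees % 360.0

class BearingBlackBoxTests(unittest.TestCase):
    # Pass/fail criteria are explicit in each assertion.
    def test_nominal_input(self):
        self.assertEqual(clamp_bearing(45.0), 45.0)

    def test_out_of_bounds_input(self):
        self.assertEqual(clamp_bearing(725.0), 5.0)
        self.assertEqual(clamp_bearing(-90.0), 270.0)

    def test_error_recovery(self):
        with self.assertRaises(ValueError):
            clamp_bearing(float("nan"))

if __name__ == "__main__":
    unittest.main()
```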

Compile and Test Frequently Practice Essentials

Use orderly integration build process with VDD (see the sketch below)
– Identifies version of software units in the build
– Identifies open and fixed defects against the build

Use independent test organization to conduct integration tests

Include evolving regression testing

Document defects; CM tracks defects

ALARMS:
– Integration build and test done less than weekly
– Builds not done by CM; small CM staff
– Excessive use of patches
– Lack of automated build tools
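A hedged sketch of an automated build-and-smoke-test driver of the kind this practice implies; the build and test commands are placeholders, and a real project would substitute its own CM-controlled build procedure.

```python
import subprocess
import sys
from datetime import datetime

# Placeholder commands; a real project would invoke its CM-controlled build here.
BUILD_CMD = ["make", "all"]
SMOKE_CMD = ["python", "-m", "pytest", "tests/smoke", "-q"]

def run(step: str, cmd: list[str]) -> bool:
    """Run one step, echoing a timestamped pass/fail line."""
    print(f"{datetime.now():%Y-%m-%d %H:%M}  {step}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    print(f"  -> {'PASS' if result.returncode == 0 else 'FAIL'}")
    return result.returncode == 0

def main() -> int:
    # Compile first, then run the smoke suite; either failure stops the build.
    if not run("build", BUILD_CMD):
        return 1
    if not run("smoke test", SMOKE_CMD):
        return 2
    return 0

if __name__ == "__main__":
    sys.exit(main())
```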

There is Hope!!!

A small number of high-leverage, proven practices can be put in place quickly to achieve relatively rapid bottom-line improvements.

The 16-Point Plan™