
Technical controlling and software process improvement

Christof Ebert 1

Alcatel Switching Systems Division, Fr.-Wellesplein 1, B-2018 Antwerpen, Belgium

Received 28 April 1997; received in revised form 7 September 1997; accepted 25 September 1997

Abstract

Successful process improvement depends heavily on a close tracking of actual results versus the improvement plan. For concerting different reengineering activities, process managers must implement a proactive, forward-looking oversight mechanism designed to ensure that the various ongoing projects within the company operate in a performance zone that provides the expected business value. Technical controlling of software projects is introduced as a comprehensive controlling activity consisting of analyzing and interpreting project information for strategy formulation, planning and tracking activities, decision-making, and cost accounting. For practical guidelines this article focuses on introducing and maintaining a corporate program for technical controlling in a highly distributed large organization. Experiences are shared that show how technical controlling closely relates to and thus supports an ongoing software process improvement initiative. © 1999 Elsevier Science Inc. All rights reserved.

Keywords: Metrics; Process improvement; Project control; Quality metrics; Software metrics; Technical controlling

1. Introduction

Not everything that counts can be counted and not everything that can be counted counts. – Albert Einstein

Among the most critical decisions that managers make are those concerning investment in new processes or technologies. Such decisions have enormous strategic impact, as they influence the competitive position of the company through cost and revenue implications for many years into the future. Technology change and software process improvement are used both for efficiency improvement and for value innovation. Several recent studies indicate that organizations with well-managed software process improvement programs report severalfold productivity improvement, up to 90% defect reduction and over 70% schedule reduction (El Emam and Briand, 1997; Jones, 1997). While the motivation is obvious, the concerted introduction to continuously running projects often becomes the major headache, and the over 50% failure rate of corporate software process improvement programs is related to the complexity of quantitatively modeling the exact problem and the change objectives, as well as of assessing the impact, both financial and otherwise, of such changes.

As with all engineering disciplines, it is impossible to fully understand what is happening and what the outcomes of prospective changes will be unless supported by quantitative data. While software metrication has been a discipline with many (often predictive) flavors for almost 30 years, it has not yet become a comprehensive, underlying supportive activity for management, such as accounting (Fenton and Pfleeger, 1997; Grady, 1992). This article tries to show, based on practical experience, how various software metrics can be used to control projects from a technical perspective. Although the examples are derived from medium- to large-sized telecommunication projects, they are applicable in other industries as well.

Technical controlling of software projects is concerned with identifying, analyzing and interpreting project information. Objectives of these activities are:
· Decision-making: What should I do?
· Attention-directing: What should I look at?
· Performance evaluation: How well am I doing?
· Planning: What can we reasonably achieve in a given period?
· Target-setting: How much can we improve in a given period?

The Journal of Systems and Software 46 (1999) 25–39

1 Tel.: +32 3 240 4081; fax: +32 3 240 9935; e-mail: christof.ebert@alcatel.be

0164-1212/99/$ – see front matter © 1999 Elsevier Science Inc. All rights reserved.

PII: S0164-1212(98)10086-9


As such it is more than only ensuring the overall technical correctness of a project, as defined earlier (Rook, 1986). Unfortunately, many organizations that consider software development their core business still strictly separate business performance monitoring and evaluation from what is labeled as low-level software metrics. Software metrics are seen as a tool for experiments or for predictions, while day-to-day project management is not studied in depth (Pfleeger et al., 1997). Only a few approaches have been published indicating how an integrated measurement program can support management decision-making (Putnam and Myers, 1997; Grady, 1992).

Our motivation while building up a corporate software metrics program was therefore to embed it within corporate technical controlling in order to align the different levels of target-setting and tracking activities. Only the close link of corporate strategy with clearly specified business goals and with operational project management achieves overall improvements. We therefore coordinated the metrics program for operational development management with our software process improvement initiative to ensure that goals on all levels correspond with each other. Although software process change and technology change are closely related, we will focus on software process change management because of its inherently higher complexity compared to technology change management. After all, defined development processes are the major facilitator of any technology change management activity (Humphrey, 1989).

For this study we provide data gathered during development and field operation of several releases of the Alcatel 1000 S12 switching system. This switching system accounts for over 110 million installed lines worldwide. There are over 20 development centers, involving more than 5000 engineers. All of the introduced analyses that altogether comprise technical controlling are concerned with issues that may vary from company to company. Any data presented here on effort or value of Software Process Improvement (SPI) are only approximate and closely linked to the environment they were extracted from. Activities and processes may differ, as may work products and entire life cycle models. It is thus necessary for each company to go through the process of defining its own norms, based on data related to its individual project history, rather than relying solely on "benchmarking" data from the literature. Where reliable internal project data are entirely absent, such external data can of course serve for setting up reasonable initial targets (Jones, 1995).

The paper is organized as follows. Section 2 gives a brief overview of metrics used for decision-making in software project management. Relating reengineering activities and setting improvement objectives is further described in Section 3. Section 4 looks at how to come up with an ROI calculation. Starting a corporate program for technical controlling in a distributed organization as part of a software process improvement (SPI) initiative is introduced in Section 5. Experiences that hold especially from the perspective of organizational learning are provided in Section 6. Finally, Section 7 summarizes the results and gives an outlook on how technical controlling and process improvement, in their mutual dependence, will evolve over time in an organization.

2. Applications of technical controlling

Several repetitive stages can be identified in technical controlling that can all be mapped to the measurement process (Fig. 1):
· Set objectives, both short- and long-term, for products and process.
· Forecast and develop plans both for projects and for departments.
· Communicate information; coordinate and implement plans.
· Understand and agree to commitments and their changes.
· Motivate people to accomplish plans.
· Measure achievement in projects and budget centers.

Fig. 1. Metrics-driven project management.



· Predict the development direction of process and product relative to goals and control limits.
· Identify and analyze potential risks; compare with objectives.
· Evaluate performance; investigate significant deviations.
· Determine if the project is under control and whether the plan is still valid.
· Identify corrective actions and reward/penalize performance.
· Implement corrective actions.
Control is only achievable if measures of performance have been defined and implemented, objectives have been defined and agreed, predictive models are established for the entire life cycle, and the ability to act is given. The remainder of this section will further investigate examples for each of these conditions.

2.1. Decision-making

The single best technology for getting some control over deadlines and other resource constraints is to set formal objectives for quality and resources in a measurable way (Jones, 1997; Fenton and Pfleeger, 1997; Rook, 1986; Stark et al., 1994). Planning and control activities cannot be separated. Managers control by tracking actuals against plans and acting on observed deviations. Controls should focus on significant deviations from standards and at the same time suggest appropriate ways of fixing the problems. Typically these standards are schedules, budgets, and quality targets established by the project plan. To ensure effective tracking, all critical attributes should be established so that they are both measurable and testable. The worst acceptable level should be clear, although the internal target is in most cases higher. An example from a real project is given in Fig. 2. It shows the observations from test (i.e. the status of tests per week and the respective planning curves for total tests, passed tests and approached tests) and several derived actions that the project leader followed up.

The influence of metrics definition and application from project start (e.g. estimation, target setting and planning) to steering (e.g. tracking and budget control, quality management) to maintenance (e.g. failure rates, operations planning) is very well described in the related IEEE Standard for Developing Software Life Cycle Processes (IEEE, 1992). This standard also helps in defining the different processes that constitute the entire development process, including relationships to management activities.

Most benefits that we recorded since establishing a comprehensive technical controlling program are indeed related to project management:
· Improved tracking and control of each development project based on uniform mechanisms.
· Earlier identification of deviations from the given targets and plans.
· Accumulation of history data from all different types of projects that are reused for improving estimations and planning of further projects.
· Tracking of process improvements and deviations from processes.
One of the main targets of any kind of measurement is that it should provide an objective way of expressing information, free of value judgments. This is particularly important when the information concerned is "bad news", for instance related to productivity or cost, and thus not necessarily well received. The often observed human tendency is to ignore any criticism related to one's own area and direct attention to somebody else's.

Fig. 2. Practical controlling activity during test (example with observations and actions).



Test articulates that "the design is badly structured", while operations emphasize that "the software has not been adequately tested". Any improvement activities must therefore be based on hard numerical evidence. The first use of metrics is most often to investigate the current state of the software process. Table 1 relates the most relevant metrics to different levels of process maturity (Humphrey, 1989).

A selection of the most relevant project tracking metrics is provided in Fig. 3. These metrics include milestone tracking, cost evolution, a selection of process metrics, work product deliveries, and faults, both with status information. Similar metrics can be found in those leading companies that took Tom DeMarco literally when he said that "what gets measured gets done" (Grady, 1992). Such tracking metrics are periodically updated and provide an easy overview of the project status, even for very small projects. From this set several metrics can be selected to track work products' status and progress weekly (e.g. module delivery, faults), while others are reported periodically to build up a history database (e.g. size, effort). Most of these metrics are actually byproducts of automatic collection tools related to planning and SCM databases. Project-trend indicators based on such simple tracking curves are much more effective in alerting managers than the delayed and superficial task completion tracking with PERT charts.

Effective project tracking and implementation of immediate corrective actions require a strong project organization. As long as department heads interfere with project managers, decisions can be misleading or even contradictory. Frequent task and priority changes on the practitioner level, with all related drawbacks, are the consequence. A major milestone within our SPI program was reached when specific project management responsibilities were focused entirely on the project organization. Today project managers are given specific roles and responsibilities, and they are required to develop plans and objectives. Measurable quality targets are defined within the project quality plans and later tracked through the entire project life cycle. Project managers report project and quality progress using standardized reporting mechanisms, such as fault removal efficiency (similar to the chart in Fig. 4) or progress vs. schedule in terms of deliverables (similar to the curves and completeness figures in Fig. 3). Committing to such targets and being required to track them over the development process ensures that project managers, and therefore the entire project organization, carefully observe and implement SPI activities.

Test tracking is a good example of how to make use of tracking metrics (Fig. 2). Test results can be used to suggest an appropriate course of action either during testing activities or towards their completion. Based on a test plan, all testing activities should be seen in the context of the full test schedule, rather than as independent actions. If the goal is, for instance, to determine how well integration test detects faults, then models of both test progress and defects must be available. Information that supports interpretation (e.g. fault density per subsystem, relationships to feature stability) must be collected and integrated. Besides the typical S-shaped appearance, several questions must be answered before being able to judge progress or efficiency. For example, how effectively was the specific test method applied? What is the coverage in terms of code, data relations, or features during regression test? How many faults occurred during similar projects with similar test case suites over time? With such premises it is feasible to set up not only release-oriented phase-end targets but also phase-entry criteria that allow for rejection back to module test or inspections if the system quality is inadequate.
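The kind of tracking shown in Fig. 2 can be sketched in a few lines of code. The following Python fragment is a minimal illustration, not the actual Alcatel tooling; all counts and the alert threshold are invented for the example.

```python
# Illustrative sketch of the Fig. 2 style of test tracking: weekly passed-test
# actuals compared against the planned S-curve. All numbers and the alert
# threshold are invented for the example; they are not Alcatel data.

planned_passed = [20, 60, 130, 230, 340, 420, 470, 495, 500]  # cumulative plan
actual_passed = [15, 48, 100, 180, 270, 350]                  # actuals so far

ALERT_THRESHOLD = 0.10  # flag weeks that fall more than 10% behind plan

for week, (plan, actual) in enumerate(zip(planned_passed, actual_passed), 1):
    deviation = (plan - actual) / plan
    status = "ALERT" if deviation > ALERT_THRESHOLD else "ok"
    print(f"week {week:2d}: planned {plan:3d}, passed {actual:3d}, "
          f"behind plan by {deviation:5.1%} [{status}]")
```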

Table 1
Appropriate metrics for different CMM levels

CMM 5 (Continuous improvements are institutionalized): Process metrics for the control of process change management; analysis of process and product metrics according to documented procedures to prevent defects and to continuously improve processes.

CMM 4 (Products and processes are quantitatively managed): Process metrics for the control of single processes; quantitative goals are set and managed; control limit charts over time for size growth, costs, completion, time; assess beneficial process innovations and manage process change; maintain a managed and controlled process database across projects.

CMM 3 (Appropriate techniques are institutionalized): Defined and established product metrics (measurement standards and experience-based estimation guidelines); statistical control charts; formal records for retrieving level 2 metrics; automatic metric collection; experience database.

CMM 2 (Project management is established): Defined and reproducible project metrics for planning and tracking (contents, requirements status, faults and their status in all phases, effort, size, design and test progress); profiles over time for these metrics; few process metrics for SPI progress tracking.

CMM 1 (Process is informal and ad hoc): Few project metrics (size, effort, faults); however, metrics are inconsistent and not reproducible.




2.2. Cost control

Because costs in software projects are predominantly related to labor (i.e. effort), it is rare to find practical cost control beyond deadline- or priority-driven resource allocation and shifting. The uses of cost information, however, are manifold. In decision-making, cost information is used to determine relevant costs (e.g. sunk costs, avoidable vs. unavoidable costs, variable vs. fixed costs) in a given project or process, while in management control the focus is on controllable vs. non-controllable costs. A major step towards decision support is an accounting that moves from head-count-based effort models to activity-based costing. Functional cost analysis and even target-costing approaches are increasingly relevant because customers tend to pay for features instead of entire packages as before. Not surprisingly, cost reduction can only be achieved if it is clear how activities relate to costs. The difference is to assign costs to activities or processes instead of departments.

Activity-based models allow for more accurate estimates and tracking than holistic models that only consider size and effort for the project as one unit. Effects of processes and their changes, resources and their skill distribution, or factors related to each of the development activities can be considered, depending on the breakdown granularity. An example is given in Table 2, which includes a first breakdown into major development phases. The percentages and costs per KLOC are for real-time embedded systems and should not be taken as fixed quantities (Jones, 1991; Software, 1995).

Fig. 3. Overview project metrics for milestones, cost, process metrics, deliveries, and faults.

Fig. 4. Typical benchmark e�ects of detecting faults earlier in the life cycle.



Typically both columns, for percentages and productivity, are based on a project history database that is continuously improved and tailored to the specific project situation. Even the process allocation might vary; projects with reuse have a different distribution, with more emphasis towards integration and less effort for top-level design (TLD). Unit cost values are likely to decrease in the long term as the cumulative effects of technological and process changes become visible.

Although activity-based accounting means more detailed effort reporting throughout each project, it allows for a clear separation between value-adding and non-value-adding activities, process value analysis, and improved performance measures and incentive schemes. Once process-related costs are obvious, it is easy to assign all overhead costs, such as IT support or tools, to the processes where they are necessary and then to the respective projects. Instead of allocating such overhead to projects based on overall development effort per project, it is allocated according to the activities relevant in each project. For instance, upfront design activities should not contribute to the allocation of expensive test equipment.
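To illustrate the idea, the following Python sketch derives an activity-based effort estimate from Table 2 style unit costs. The PM/KLOC rates repeat the illustrative Table 2 figures; a real application would take them from the company's own history database.

```python
# A minimal sketch of activity-based effort estimation using Table 2 style
# unit costs. The PM/KLOC rates are the illustrative Table 2 figures, to be
# replaced by a company's own history-database baselines.

ACTIVITY_PM_PER_KLOC = {
    "requirements management":    0.7,
    "top level design":           1.7,
    "detailed design":            2.2,
    "coding until module test":   2.2,
    "integration":                1.6,
    "system test, qualification": 1.6,
}

def estimate_effort(new_or_changed_kloc: float) -> dict:
    """Estimated person-months per activity for one project."""
    return {activity: rate * new_or_changed_kloc
            for activity, rate in ACTIVITY_PM_PER_KLOC.items()}

effort = estimate_effort(70)  # e.g. a project with 70 KLOC new/changed code
for activity, pm in effort.items():
    print(f"{activity:28s} {pm:6.1f} PM")
print(f"{'total':28s} {sum(effort.values()):6.1f} PM")
```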

While dealing with cost control, the question often comes up which tracking system should be used. Most companies have rather independent financial tracking systems in place that provide monthly reports on cost per project and sometimes even on an activity basis. The reports are often fed from timesheet systems and relate effort to other kinds of cost. Unfortunately, such financial systems are in many cases so independent from engineering that neither the activity clusters nor the reporting frequency are helpful for making any short-term decisions.

Variance analysis is applied to control cost evolution over time. It is based on standard costs that are estimated (or known) to be needed to perform a single activity within a process under efficient operating conditions. Typically such standard costs are based on well-defined outputs of the activity, for instance test cases performed and errors found in testing. Knowing the effort per test case during integration test and the effort to detect an error (which includes regression testing but not correction), a standard effort can be estimated for the whole project. Functionality, size, reuse degree, stability and complexity of the project determine the two input parameters, namely test cases and the estimated number of faults to be detected in the specific test process. Variances are then calculated as a relative figure: variance = (standard cost - actual cost) / standard cost.
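As a hedged illustration of this formula, the following sketch computes a standard effort from the two input parameters named above and the resulting variance. The unit costs per test case and per fault are assumed values, not data from this paper.

```python
# Hedged illustration of the variance formula above. The standard unit costs
# per test case and per detected fault are assumed example values; in
# practice they come from the history database.

STD_EFFORT_PER_TEST_CASE = 0.5  # Ph per executed test case (assumption)
STD_EFFORT_PER_FAULT = 4.0      # Ph per fault, incl. regression (assumption)

def standard_cost(test_cases: int, expected_faults: int) -> float:
    """Standard effort (Ph) for an integration-test activity."""
    return (test_cases * STD_EFFORT_PER_TEST_CASE
            + expected_faults * STD_EFFORT_PER_FAULT)

def variance(std_cost: float, actual_cost: float) -> float:
    """variance = (standard cost - actual cost) / standard cost"""
    return (std_cost - actual_cost) / std_cost

std = standard_cost(test_cases=800, expected_faults=150)  # 1000 Ph
actual = 1150  # Ph actually spent
print(f"standard cost {std:.0f} Ph, variance {variance(std, actual):+.1%}")
# A negative variance signals off-standard (over-budget) performance.
```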

Variance analysis serves to find practical pointers to causes of off-standard performance so that project management or department heads can improve operations and increase efficiency. It is however not an end in itself, because variances might be caused by other variances or relate to a different target. Predictors should thus be self-contained, such as in the given example. Test cases alone are insufficient because an unstable product due to insufficient design causes more effort in testing.

ROI (return on investment) is a critical and often misleading expression when it comes to development cost or the justification of new techniques (Jones, 1995; McGibbon, 1996). Too often, heterogeneous cost elements with different meanings and unclear accounting relationships are combined into one figure that is then optimized. For instance, reducing a "cost of quality" that includes appraisal cost and prevention cost is misleading when compared with the cost of non-conformance, because certain appraisal costs (e.g. module test) are a component of regular development. Cost of non-conformance, on the other hand, is incomplete if it only considers internal cost for fault detection, correction and redelivery, because it must include opportunity costs due to rework at the customer site, late deliveries, or simply binding resources that might otherwise have been used for a new project.
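A minimal sketch of the decomposition argued for here, with invented numbers: keeping prevention, appraisal and non-conformance costs as separate categories avoids the misleading single "cost of quality" figure.

```python
# Minimal sketch, with invented numbers, of keeping cost-of-quality
# categories separate instead of optimizing one aggregate figure.

cost_of_quality_ph = {
    "prevention":      1200,  # training, process work, defect prevention
    "appraisal":       2600,  # inspections, module test (partly regular work)
    "non-conformance": 9400,  # fault correction, redelivery, rework
}
# Opportunity costs (rework at the customer site, late deliveries, blocked
# resources) also belong to non-conformance but are hard to quantify.

total = sum(cost_of_quality_ph.values())
for category, ph in cost_of_quality_ph.items():
    print(f"{category:16s} {ph:6d} Ph ({ph / total:5.1%})")
```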

3. Setting objectives – the case for process improvement

A primary business goal for any telecommunication systems supplier today is to control and reduce software development cost. Software development increasingly contributes to product cost. It has been shown within various companies that the SEI CMM (Capability Maturity Model of the Software Engineering Institute) is an effective roadmap for achieving cost-effective solutions (Jones, 1995; McGibbon, 1996). Many of the problems that lead to project delays or failures are technical; however, the critical ones are managerial. Software development depends on work products, processes and communication. Such development behaves in a structured way only if a structure is imposed and controlled. This is where the CMM fits in (Humphrey, 1989). Fig. 4, compiled from different industrial databases, gives an overview of fault detection within organizations according to their respective maturity level. Obviously, what is done right in software development is done early. There is little chance of catching up when things are discovered to be going wrong later in the development.

Table 2
Activity-based effort allocation

Activity                        Percent   PM/KLOC
Requirements management           7%       0.7
Top level design                 17%       1.7
Detailed design                  22%       2.2
Coding until module test         22%       2.2
Integration                      16%       1.6
System test, qualification       16%       1.6
Total                           100%      10.0




Since the CMM provides both a guideline for identifying strengths and weaknesses of the software development process and a roadmap for improvement actions, Alcatel Telecom has also based its improvement activities on the CMM. Assessments are conducted in all major development sites. The direct findings according to the CMM are analyzed with respect to their prospective impact on Alcatel's business goals and then prioritized to select those areas with the highest improvement potential. Based on this ranking, a concrete plan of improvement actions is repeatedly refined (Fig. 5), resulting in an action plan with detailed descriptions of improvement tasks, with responsibilities, effort estimates, etc. Checkpointing assessments of the same type are repeatedly done to track the implementation of the improvement plan.

As indicated in Fig. 5, metrics for tracking both process conformance and output availability or quality are a key instrument for process improvement. Typical examples of such measurement goals are to track effort related to process improvement or to monitor the results and achievements. Different groups typically work towards individually controlled goals that build up to business division level goals and corporate goals. An example for improved maintainability demonstrates this hierarchy. A business division level goal could be to improve maintainability within legacy systems, as this is strategically important for all telecommunication suppliers. Design managers might break that further down to redesigning exactly those components that are at the edge of being unmaintainable. Project managers, on the other hand, face a trade-off with time to market and might emphasize incremental builds instead. Clearly both need appropriate indicators to support their selection processes, which define the way towards the needed metrics related to these goals. Obviously, one of the key success criteria for SPI is to understand the political context and the various hidden agendas behind technical decisions in order to make compromises or weigh alternatives.

Objectives related to individual processes must be unambiguous and agreed upon by the respective groups. This is obvious for test and design groups. While the former are reinforced for finding defects and thus focus on writing and executing effective test suites, design groups target delivering code that can be executed without defects. Defects that do occur must be corrected efficiently, which suggests another metric for a design group: the backlog of faults it has to resolve.

It is thus important to consider the different perspectives and their individual goals related to promotion, projects and the business. Most organizations have at least four: the practitioner, the project manager, the department head, and corporate executives. Their motivations and typical activities differ greatly and often create conflicting goals which, in the worst case, are left to be resolved at the practitioner level. Reuse is another example that continuously creates trade-off discussions. When a project incurs expenses to keep components maintainable and to promote their reusability, who pays for it, and where is it recorded in a history database that compares the efficiency of projects and thus of their management?

Managing and tracking SPI can be done on different levels of abstraction. Senior management is interested in the overall achievements based on what has been invested in the program. Related metrics include the effectiveness of fault detection, because its obvious relationship to cost of quality is directly related to the most common business goal of cost reduction.

Fig. 5. Deriving concrete actions from improvement goals and strategic goals.



Lead-time reduction and effort reduction are related to reduced rework, and as such also to fewer defects and earlier detection. On the project level, SPI management includes a variety of process metrics that compare efficiencies and thus relate on the microscopic level to achieving the business goals.

4. ROI calculation

The ROI is difficult to calculate in software development. This is no longer so much because effort figures have not been collected, but rather because of the difficulty of distinguishing the actual effort figures that relate to investment (what would otherwise not have been spent) from the returns (the difference to what would have happened without the investment). For many years ROI data have been reported but in most cases not backed up by real data, or where real data existed, they ran counter to the prevailing mainstream viewpoints (Fenton and Pfleeger, 1997). Only recently have several studies been published that try to compare the results of software process improvement activities (Jones, 1995; McGibbon, 1996).

For calculating the ROI effects, the following rules should be considered:
· Samples should include projects before and after the start of the improvement program to be evaluated with the ROI.
· Controlling should be able to provide history data (e.g. effort).
· Aggregated or combined effort or cost figures must be separated (i.e. prevention cost, appraisal cost, cost of non-performance, cost of performance – the last of which is typically spent in any case).
· Include only those effects which trace back to root causes that were part of the original improvement planning.
· Check cost data for consistency within one project and across projects.
The ROI is most efficiently presented according to the following flow:
1. Current results (these are the potentials; i.e. problems, cost, causes).
2. Known effects in other (competing) companies (i.e. improvement programs in other companies; benchmarking data; cost–benefit estimation for these companies).
3. ROI calculation (calculate the cost of quality per month for several sample projects; calculate the savings since the start of the improvement program; extrapolate these savings to all affected projects, which is the benefit; compare the benefit with the cost of the improvement program, which is the ROI; never include cost of performance since this is regular effort).

We will try to provide insight into an ROI calculation. The data used for the calculation result from average values gathered in the S12 history database (Table 3). The history database currently represents 50 projects with an average size of 70 new or changed KStmt and roughly 2 MStmt of reused code per project.

We will compare the effect of increased effort for combined code reading and code inspection activities, a key result of our improvement program. The summary shows that by reducing the amount of code to be inspected per hour by more than a factor of three, the efficiency in terms of faults detected increased significantly. As a result, the percentage of faults detected during coding increases dramatically. While reading speed reflects only the actual effort spent on fault detection, the effort per KStmt includes both detection and correction, resulting in around 3 Ph/fault, which seems stable.

Given an average-sized development project, focusing only on the new and changed software and without taking into account any defect-preventive results over time, the following calculation can be derived. Effort spent on code reading and inspection activities increases by 1610 Ph. Assuming a constant average combined appraisal cost and cost of non-performance (i.e. detection and correction effort) after coding of 15 Ph/fault, the total effect is 9940 Ph less spent in 1997. This results in an ROI value of 6.2 (i.e. each additional hour spent during code reading and inspections yields 6.2 saved hours of appraisal and non-performance activities afterwards).
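The calculation can be reproduced step by step. The following sketch uses the figures quoted above and in Table 3 (70 KStmt, 3150 estimated faults, 15 Ph/fault after coding) and arrives at the same ROI of about 6.2.

```python
# Reconstruction of the ROI calculation above, using the figures quoted in
# the text and in Table 3 for a 70 KStmt project with 3150 estimated faults.

FAULTS_TOTAL = 3150
PH_PER_FAULT_AFTER_CODING = 15  # combined appraisal/non-performance cost

def total_effort(detection_effort_ph: float, faults_found: int) -> float:
    """Inspection effort plus downstream correction of the remaining faults."""
    remaining = FAULTS_TOTAL - faults_found
    return detection_effort_ph + remaining * PH_PER_FAULT_AFTER_CODING

before = total_effort(1050, 140)  # 1995 process: 46,200 Ph
after = total_effort(2660, 910)   # 1997 process: 36,260 Ph

additional_detection = 2660 - 1050  # 1610 Ph invested in reading/inspections
savings = before - after            # 9940 Ph saved
print(f"ROI = {savings / additional_detection:.1f}")  # about 6.2
```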

Table 3
Process improvements with focus on code reading/inspections (defect-preventive activities are not considered in this example)

                                         1995     1996     1997
Checking speed (Stmt/Ph)                  142       68       39
Effort per KStmt (Ph)                      15       24       36
Effort per fault (Ph)                     7.5        3        3
Faults per KStmt                            2        8       13
Effectiveness (% of all)                    2       18       29

Project: 70 KStmt; 3150 faults           1995              1997
Effort for code reading/insp. (Ph)       1050              2660
Faults found in code reading/insp.        140               910
Remaining faults                         3010              2240
Correction effort after code reading/insp. (Ph)
  (based on 15 Ph/fault average correction effort)
                                        45,150            33,600
Total correction effort (Ph)            46,200            36,260
ROI = saved total effort / additional detection effort = 6.2

We made the following experiences with ROI calculations:
· It is better to collect the different effort figures during a project than afterwards.
· Activities related to distinct effort figures must be defined (activity-based costing helps a lot).



· Cost and effort must not be estimated, but rather collected in projects (typically the inputs to an estimation are questioned until the entire calculation is no longer acceptable).
· Detailed quality costs are helpful for root cause analyses and related defect prevention activities.
· Tangible cost savings are the single best support for a running improvement program.
· Cost of non-performance is a perfect trigger for an SPI program.
· Obvious management and software engineering practices are typically not ROI topics.
· There are many "hidden" ROI potentials that are often difficult to quantify (e.g. customer satisfaction; improved market share because of better quality, delivery accuracy and lower per-feature costs; opportunity costs; reduced maintenance costs in follow-on projects; improved reusability; employee satisfaction; resources being available for new projects instead of being wasted on firefighting).
· There are also hidden investments that must be accounted for (e.g. training, infrastructure, coaching, additional metrics, additional management activities, process maintenance).
· Without any metrics, managers need not be convinced (poor managers believe entirely in qualitative reasoning and call it "intuition"; the lack of metrics becomes an excuse for a lack of motivation: without metrics no improvement, and without improvement no metrics).
Not all ROI calculations are based on monetary benefits. Depending on the business goals, the ROI can just as well be presented directly as reduced lead time or higher efficiency and productivity.

5. Setting up technical controlling in a distributed organization

5.1. The metrics process, roles and responsibilities

Obviously, the introduction of technical controlling, and thus of software metrics, to projects has to follow a stepwise approach that must be carefully coached. Each new metric that needs tool support must first be piloted in order to find out whether the definitions and tool descriptions are sufficient for the collection. The focus, however, is never on tools at first, which is why there is no dedicated tools or warehouse chapter in this article. Experience shows that tools can be built for any metric, given that the raw data are collected. As such, the question is rather what the best alignment and presentation of the tools should be. Then the institutionalization must be planned and coached in order to obtain valid data. Independent of the targets of a measurement program, it will only be taken seriously if the right people are given responsibility for it (Fenton and Pfleeger, 1997).

For that reason the following three roles have been established in Alcatel:
1. Project metrics responsibles within each single project serve as a focal point for engineers and project management in the project (Fig. 6). They ensure that the metrics program is uniformly implemented and understood. The role includes support for data collection across functions and analysis of the project metrics. The latter is most relevant for the project manager, because he must be well aware of progress, deviations and risks with respect to quality or delivery targets. By creating the role of a project's metrics responsible we guaranteed that the responsibility was clearly assigned as of project start, while still allowing for distributed (functional) metrics collection.
2. Local metrics teams in each location serve as a focal point for all metrics-related questions in that location, synchronize metrics activities, emphasize commonality, and collect local requirements from the different projects. Besides being the focal point for the metrics program in a single location, they provide training and coaching of management and practitioners on metrics, their use and application. In addition, they ensure that heterogeneous tools are increasingly aligned and that tools and forms for data collection and analysis are made available to the projects and the functional organization.
3. A central (business division) metrics team, formed by representatives of the local metrics teams of the organization, coordinates metrics and technical controlling across locations. Together they ensure that the rationalization and standardization of a common set of metrics and the related tools and charts is accelerated. In building a corporate metrics program and aligning processes within the software process improvement activities, the creation of a history database for the entire organization is an important result that improves estimates.
This structure guarantees that not only each single change of metrics or underlying tools, but also needs from projects, can easily be communicated from a single project to the whole organization. The use of teams should however be handled cautiously. While a team can take advantage of diverse backgrounds and expertise, the effort is most effective when no more than three people are involved in a distinct task. Larger teams spend too much time backtracking on metrics choices. We also found that when potential users worked jointly with the metrics support staff to develop the metrics set, the program was more readily accepted.

The related metrics process is applicable in the day-to-day project environment. It is based on a set of defined metrics and supports the setting up and tracking of project targets and improvement goals (Fig. 7):



1. Based on a set of predefined corporate metrics, the first step is to select metrics suitable for the project.
2. Then the raw data are collected to calculate the metrics. Be aware of the operational systems that people work with and that need to supply the data. If the data are not easily available, chances are high that the metric is inaccurate and that people will tend to ignore further metrics requests. People might then even comply with the letter of the definition but not with the spirit of the metric.
3. Metrics are then analyzed and reported through the appropriate channels. Analysis includes two steps. First, the data are validated to make sure they are complete, correct, and consistent with the goals they address. Do not assume that automatic metrics are always trustworthy; at least perform sample checks (a small validation sketch follows this list). The real challenge is the second step, which investigates what is behind the metrics. Some conclusions are straightforward, while others require an in-depth understanding of how the metrics relate to each other. Consolidating metrics and aggregating results must be done with great caution, even if apples and apples might seem to fit neatly, so to speak. Results are useless unless reported back to the people who make improvements or decisions.
4. Finally, the necessary decisions and actions are taken based on the results of the analysis. The same metrics might trigger different decisions depending on the target audience. While senior management may just want an indication of how well the improvement program is doing, the local Software Engineering Process Group (SEPG) leader might carefully study process metrics to eliminate deficiencies.

5.2. Metrics selection and de®nition

Software metrics can perform four functions (Fenton and Pfleeger, 1997; Jones, 1991): they can help to understand more about software work products or underlying processes.

Fig. 6. Roles and responsibilities within metrics program.

Fig. 7. Integrated measurement process overview.



They can be used to evaluate work products or processes against established standards. They can provide the information necessary to control the resources and processes used to produce the software. And they can be used to predict attributes of software entities in the future. The basic template for a metric's definition thus states the objective, one of the four functions, and the attribute of the entity being measured.

The problem with many software metrics is that they are typically not described in business terms and not linked or aligned to the needs of the business. While traditional business indicators look at revenue, productivity, or order cycle time, their counterparts in software development measure size, faults or effort. Clearly, both sides must be aligned to identify those software product or process metrics that support business decisions. One of the first steps towards relating the many dimensions of business indicators was the balanced scorecard concept (Kaplan and Norton, 1992). The link to software metrics is given by investigating the operational view and the improvement view of the balanced scorecard. Questioning how the operational business can stay competitive yields critical success factors (e.g. cost per new/changed functionality, field performance, maturity level). Relating the actual situation (e.g. elapsed time for an average set of requirements) to best-of-practice values in most cases motivates a process improvement program. Keeping in mind the relationships from business objectives to critical success factors to operational management and finally to software processes ensures that customer-reported defects are not seen as yet another fault category or percentage to be found earlier, but within the broader scope of customer satisfaction and sustained growth.

Each metric's definition should ensure consistent interpretation and collection across the organization. Capturing precise metrics information not only helps with communicating what is behind the figures but also builds the requirements for automatic tool support and provides basic material for training course development. We have used the following sections within the template (an illustrative instance follows below):
· Name and identifier.
· Brief description.
· Relationships to goals or improvement targets (this includes business goals or tracking distinct improvement activities).
· Definition with precise calculation.
· Underlying raw data (or metrics primitives) used for calculating the metric.
· Tool support (links and references to the supporting tools, such as databases, spreadsheets, etc.).
· Visualization with references to templates (e.g. chart or table type, combination with other metrics, visualization of goals or planning curves).
· Collection period and reporting frequency.
· Target and alarm levels for interpretation (e.g. release criteria, interpretation of trends).
· Configuration control with links to storage of (periodically collected) metrics.
· Distribution control (e.g. availability date, audience, access control).
A project-specific measurement plan links the generic metrics definition to concrete projects with their individual goals and responsibilities. Additional metrics to be used only in that project are referred to in the measurement plan. The measurement plan is linked to the quality plan to facilitate alignment of targets.
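As an illustration, a template instance might look as follows when kept as a simple data record; the metric and all field contents are hypothetical.

```python
# Hypothetical instance of the definition template above, kept as a simple
# data record. The metric and all field contents are invented for
# illustration; real definitions would be under configuration control.

metric_definition = {
    "id":            "FLT-03",
    "name":          "Fault density of new/changed code",
    "description":   "Faults detected per KStmt of new or changed code",
    "goal_link":     "business goal: reduce cost of non-conformance",
    "definition":    "faults_detected / kstmt_new_or_changed",
    "raw_data":      ["faults_detected", "kstmt_new_or_changed"],
    "tool_support":  "fault database, SCM size reports",
    "visualization": "trend chart per release with planning curve",
    "frequency":     "weekly collection, monthly reporting",
    "alarm_level":   "investigate if above 130% of history-database average",
    "distribution":  "project manager, local metrics team",
}
```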

Often terminology must be reworked, especially when the projects are scattered across a distributed organization such as Alcatel Telecom. The standard definition might include several sections applicable to different project types. Project size as well as design paradigms influence definitions and metric calculation, even if the metric goals and underlying rationales are the same, such as with deliverables tracking.

The typical suite of project metrics includes the following set to start with (see Fig. 3 for a presentation of such metrics; a small computation sketch follows below):
· Faults and failures (faults across development phases; effectiveness of fault detection during the development process; faults per KLOC in new/changed software; failures per execution time during test and in the field).
· Project size in KLOC or KStmt (new, changed, reused, or total).
· Effort (total effort in the project; effort distribution across life cycle phases; productivity; efficiency in important processes).
· Calendar or elapsed time (with respect to milestones, reviews, and work product deliveries; compared with planning data).
Most tracking metrics cover work product completion, open corrective action requests, and review coverage to identify and track the extent to which development and quality activities have been applied and completed on individual software deliverables. These metrics provide visibility to buyers and vendors about the progress of software development and can indicate difficulties or problems that might hamper the accomplishment of quality targets.
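Two metrics from this suite, fault density and phase-wise fault-detection effectiveness, can be computed directly from the raw data, as in the following sketch with invented phase counts.

```python
# Small sketch computing two metrics from the suite above: fault density in
# new/changed code and phase-wise fault-detection effectiveness. Phase names
# and counts are invented for the example.

faults_by_phase = {"design": 310, "coding/inspection": 910,
                   "integration": 620, "system test": 340, "field": 60}
kstmt_new_changed = 70

total_faults = sum(faults_by_phase.values())
print(f"fault density: {total_faults / kstmt_new_changed:.1f} faults/KStmt")
for phase, count in faults_by_phase.items():
    print(f"detection effectiveness of {phase:18s}: {count / total_faults:5.1%}")
```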

Progress during crucial early phases such as TLD and DD can be tracked based on the effort spent in the respective processes on the one hand and the defects found on the other. Work product tracking is more difficult because for several weeks completeness is not really tangible. Defect-based tracking in particular is very helpful for getting high-level management attention in such situations, because this is the kind of decision driver that accompanies all major phase-end decisions. When effort is below plan, the project will typically be behind schedule because the work is simply not getting done. On the other hand, design might have progressed, but without the level of detail necessary to move to the next development phase.



Both metrics should be reported weekly and can easily be compared with a planning curve related to the overall estimated effort and defects for the two phases. Of course, design deliverables might also be tracked (e.g. SDL descriptions); however, these often come late in the design process and are thus not a good indicator for focusing management attention.

5.3. After all, it's people, not numbers

An effect often overlooked in establishing corporate technical controlling is the impact on the people involved. Although introducing metrics means a cultural change for typically all involved parties, the focus is too often only on tools and definitions. Knowing the benefits of metrics for better project management or for steering the course of SPI initiatives does not at all imply that people will easily buy the decision to be measured. Telling the truth from the beginning and providing the whole picture is better than superficial statements about management benefits. If faults, efficiency or task completion are measured, it is not some abstract product that is involved; it is the practitioners who know that they will be compared. Staff at all levels are sufficiently experienced to know when the truth is being obscured.

Plan to position metrics from the beginning as a management tool for improvement, and state that one of the targets is to improve efficiency in a competitive environment. At the same time, however, be clear that metrics will not be abused as management ammunition for outsourcing or cutbacks. For instance, if faults are counted for the first time over the life cycle, establish a task force with representatives from different levels to investigate the results from the viewpoint of root cause analysis and criticality reduction. If their reports are backed by valid metrics, the people should never be left alone. Even the most simple and straightforward metrics, such as faults, do not point to people but towards critical areas in the software components.

Providing limited visibility of and access to the metrics from outside helps to create credibility among practitioners, especially in the beginning. Before introducing metrics, however, it is even more important to indicate the application of the metrics (such as individual improvements with individual data) based on a supportive climate. It is often helpful to change perspective to that of the person providing the raw data: is the activity adding value to her daily work? Statistical issues might not automatically align with emotional priorities. If the metrics program is truly meant to measure processes and their improvement instead of individuals, then rumors to the contrary can be minimized by posting the minutes of the metrics meetings. Remember that practitioners' perception is their reality.

Measurement is based on corporate targets and specific goals to be achieved. Once the different targets have been identified and communicated sufficiently so that people at all levels can identify themselves with them, the metrics necessary to track the achievement of those targets are rather obvious. Everyone involved in measurement needs to know how their participation will benefit their position or the corporation as a whole. If people realize that corporate goals are their own, perhaps expressed in another language, and that they have sufficient space to move towards the goals, they will commit much more easily. If they agree with both goals and metrics but do not see any chance to move within their own environment, chances are high that they will not buy into the program for lack of management backup.

Resistance to change is human. Some people always see only the dangers in a change, while others realize that being a detractor can attract attention. Such people will only buy into a metrics program if there are some concrete benefits that are specifically targeted at their daily work. Instead of installing metrics tools that only provide reports and charts for management oversight, the right step is rather to buy visualization tools for code that improve personal efficiency and quality of code (e.g. for branch coverage in module test) and that, as a byproduct, deliver the related oversight metrics. It is well worth planning and implementing together with such a group of individuals, because once they have bought in, they will become good lobbyists.

6. Experiences with technical controlling

This section shares some selected experiences we made while setting up a globally distributed technical controlling program in the different locations of the Switching Systems Division. The following key success factors could be identified:
· Metrics start with improvement goals. Goals must be in line with each other on the various levels. The business strategy and the related business goals must be clear before discussing lower-level improvement targets. From the overall business strategy, those strategies and goals must be extracted that depend on successful software development, use, and support. Size or defect metrics alone do not give much information; they become meaningful only as an input to a decision process. This is where the balanced scorecard approach comes in and helps in relating the measurement program to specific corporate goals (Kaplan and Norton, 1992). Business goals must be broken down to project goals, and those must be aligned with department goals and the contents of quality plans.
· Motivate technical controlling with concrete and achievable improvement goals. Unless targets are achievable and clearly communicated to middle management and practitioners, they will perceive metrics as yet another instrument of management control. Clearly communicated priorities might help with individual decisions.

· Start small and immediately (see initial timetable inTable 4). It is de®nitely not enough only to selectgoals and metrics. Tools and reporting must be inline; and all of this takes its time. It must howeverbe clearly determined what needs to be measured be-fore deciding based on what can be measured.

· Determine the critical success factors of the underly-ing improvement program. The targets of any im-provement program must be clearly communicatedand perceived by all levels as realistic enough to®ght for. Each single process change must be accom-panied with the respective goals and supportive met-rics that are aligned. Those a�ected need to feel thatthey have some role in setting targets. Where goalsare not shared and the climate is dominated bythreats and frustration, the metrics program is morelikely to fail.

· Provide training both for practitioners who after allhave to deliver the accurate raw data, and for man-agement who will use the metrics. The cost and e�ortof training is often stopping its e�ective delivery. Anytraining takes time, money, and personnel to prepare,update, deliver, or receive it. Use external consultantswhere needed to get additional experience and au-thority.

· Establish focal points for metrics in each project and department. Individual roles and responsibilities must be made clear to ensure a sustainable metrics program that endures beyond the initial SPI activities.

· Define and align the software processes to enable comparing metrics. While improving processes or setting up new ones, ensure that the related metrics are maintained at the same time. Once estimation moves from effort to size to functionality, the related product metrics clearly must follow.

· Collect objective and reproducible data. Ensure the chosen metrics are relevant to the selected goals (e.g. tracking milestone delay in order to reduce it) and acceptable to the target community (e.g. it is not wise to start with productivity metrics).

· Get support from management. Enduring management buy-in can only be achieved if the responsibility for improvements and the span of necessary control are aligned with realistic targets. Since in many cases metrics beyond test tracking and faults are new instruments for parts of management, they must be provided with the necessary training.

· Avoid abuse of metrics at all costs. Metrics must be ``politically correct'' in the sense that they should not target persons or satisfy needs for personal blame. Metrics might hurt, but they should not blame.

· Communicate success stories where metrics enabled better tracking or cost control. This includes identifying metrics advocates who help sell the measurement program. Champions who really use metrics must be identified at all levels of management, especially at senior level, to help support the program. Communicated adequately, metrics can even tie an individual's work into the bigger picture. When practitioners get feedback on the data they collect and see that it is analyzed for decision-making, it gives them a clear indication that the data are being used rather than going into a data cemetery.

· Slowly enhance the metrics program. This includes defining ``success criteria'' to be used to judge the results of the program. Since there is no perfect metrics program, it is necessary to determine something like an ``80% available'' acceptance limit that allows success to be declared once those metrics are available (a minimal sketch of such a check follows at the end of this section).

Table 4
Time table for setting up technical controlling

Activity                                                              Elapsed time   Duration
Initial targets set up                                                0              2 weeks
Creation and kick-off of metric team                                  2 weeks        1 day
Goal determination for projects and processes                         3 weeks        2 weeks
Identifying impact factors                                            4 weeks        2 weeks
Selection of initial suite of metrics                                 5 weeks        1 week
Report definition                                                     6 weeks        1 week
Kick-off with management                                              6 weeks        2 hours
Initial tool selection and tuning                                     6 weeks        3 weeks
Selection of projects/metric plan                                     6 weeks        1 week
Kick-off with project teams/managers                                  7 weeks        2 hours
Collection of metric baselines                                        7 weeks        2 weeks
Metric reports, tool application                                      8 weeks        Continuously
Review and tuning of reports                                          10 weeks       1 week
Monthly metric-based status reports within projects                   12 weeks       Continuously
Application of metrics for project tracking and process improvement   16 weeks       Continuously
Control and feedback on metric program                                24 weeks       Quarterly
Enhancements of metric program                                        1 year         Continuously



· Do not overemphasize the numbers. What they bring to light, such as emerging trends, is much more relevant. After all, the focus is on successful projects and efficiency improvement, not on metrics.

Technical controlling must make sense to everybody within the organization who comes into contact with it. Therefore, metrics should be piloted and evaluated after some time. Potential evaluation questions include:

· Are the selected metrics consistent with the original improvement targets? Do the metrics provide added value? Do they make sense from different angles, and can that meaning be communicated without many slides? If metrics capture what is measurable but do not support improvement tracking, they are perfect for hiding issues and should not be labeled metrics.

· Do the chosen metrics send the right message about what the organization considers relevant? Metrics should spotlight issues by default, without cumbersome investigation of what might lie behind them. Are the right things being spotlighted?

· Do the metrics clearly follow a perspective that allows comparisons? If metrics include ambiguities or heterogeneous viewpoints, they cannot be used as history data.

Many small and independent metrics initiatives had been started before within various groups and departments. Most died prematurely because of their overly narrow focus on just one activity. Our experience shows that technical controlling of software projects is most likely to succeed as part of a larger software process improvement initiative or of a company-wide Total Quality Management (TQM) program. The controlling program benefits from an aligned engineering and business improvement spirit that encourages continuous and focused improvement with the support of quantitative methods.
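To make the ``80% available'' acceptance limit from the list above concrete, the following minimal sketch (in Python, with hypothetical project and metric names) declares a metric successful once at least 80% of the projects actually report it:

# Hypothetical reporting status: metric name -> projects that delivered it.
reported = {
    "fault density": {"P1", "P2", "P3", "P4"},
    "milestone delay": {"P1", "P3"},
}
all_projects = {"P1", "P2", "P3", "P4", "P5"}
THRESHOLD = 0.8  # the "80% available" acceptance limit

for metric, projects in reported.items():
    availability = len(projects) / len(all_projects)
    status = "ok" if availability >= THRESHOLD else "below limit"
    print(f"{metric}: {availability:.0%} available ({status})")
# fault density: 80% available (ok)
# milestone delay: 40% available (below limit)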

7. Conclusions

Software process improvement is currently a big issue on the agenda of all organizations with software as a core business. As such it is also a major research topic that may well continue to grow in importance into the 21st century. However, some software technologies have a shorter lifetime, and management attention certainly focuses on short-term achievements with impact on the scorecard. Unless tangible results can be achieved within that short timeframe, interest in SPI will quickly wane.

We have presented the introduction of technical controlling as part of the SPI initiative of Alcatel's Switching Systems Division. It is obvious by now that the more advanced the corporate process maturity, the more various and specific the metrics that are used. While maintaining technical controlling, it must be emphasized that controlling needs evolve over time. The critical precondition of any SPI program is its focus on controlling.

So far, many of the industrial discussions and articles related to reengineering of software processes are based on facts, while research targets theories and small-scale examples. Both are valid from their respective viewpoints. It would, however, be helpful to bridge the gap with corporate studies that answer two important questions:

· What does it cost to improve software processes?

· How long will it take to make tangible improvements?

Answering such questions of course requires a focus on areas such as quality improvement, better productivity, shorter lead-time, or higher customer satisfaction.

Future directions for technical controlling include:

· Relate business-oriented analysis approaches to software decision-making. Currently there still seem to be two worlds in many organizations: business performance monitoring and evaluation with scorecard-type systems on the one hand, and technical controlling with project-level decision-making on the other. Both must be combined to align operational decision-making (i.e. project management) with corporate strategy, business goals, and their tracking.

· Emphasis on data quality. The garbage put into the raw data reappears manifold in the subsequent metrics, so poor data quality can have a severe impact on the overall effectiveness of the organization. While the indicators and tracking metrics are at our fingertips, ready for mining and drilling down to the original work products, they can be entirely misleading if not properly defined and recorded. Data quality and its dimensions (e.g. reliability, timeliness, completeness, or consistency) must be operationalized from an organizational and process viewpoint, well before the accuracy and precision dimensions emphasized in most current metrics research are discussed (see the sketch after this list).

· Integration of supplier metrics with the respective programs of operators. Technical controlling must be used to improve present operations and to provide a distinct degree of mutual trust across the entire development process. Metrics have to be integrated into the process not only to ensure that problems are detected and corrected upfront, but also to eliminate the problem causes.
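As one illustration of operationalizing these data quality dimensions, here is a minimal sketch (in Python; the record format, field names, and thresholds are entirely hypothetical assumptions) of simple completeness, timeliness, and range-consistency checks applied to a raw metric record before it enters the history database:

from datetime import date

REQUIRED = ("project", "metric", "value", "reported")

def quality_issues(record: dict, today: date, max_age_days: int = 35) -> list:
    """Return a list of data quality problems found in one raw metric record."""
    issues = []
    # Completeness: all mandatory fields must be present.
    issues += [f"missing field: {f}" for f in REQUIRED if f not in record]
    # Timeliness: data older than one reporting cycle is stale.
    if "reported" in record and (today - record["reported"]).days > max_age_days:
        issues.append("stale data")
    # Consistency: values must lie in the metric's plausible range.
    if record.get("metric") == "branch coverage" and not 0 <= record.get("value", -1) <= 1:
        issues.append("value out of range")
    return issues

rec = {"project": "P1", "metric": "branch coverage", "value": 1.2,
       "reported": date(1998, 1, 10)}
print(quality_issues(rec, today=date(1998, 3, 1)))
# ['stale data', 'value out of range']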

References

El Emam, K., Briand, L., 1997. Costs and benefits of software process improvement. International Software Engineering Research Network, Technical Report ISERN-97-12, Kaiserslautern, Germany.
Fenton, N.E., Pfleeger, S.L., 1997. Software Metrics: A Rigorous and Practical Approach. Chapman and Hall, London, UK.
Grady, R.B., 1992. Practical Software Metrics for Project Management and Process Improvement. Prentice-Hall, Englewood Cliffs, NJ.
Humphrey, W., 1989. Managing the Software Process. Addison-Wesley, Reading, MA.
IEEE, 1992. Standard for developing software life cycle processes. IEEE Std 1074-1991, IEEE, Piscataway, NJ.
Jones, C., 1991. Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill, New York, NY.
Jones, T.C., 1995. Return on investment in software measurement. In: Proceedings of the 6th International Conference on Applications of Software Measurement, Orlando, FL.
Jones, T.C., 1997. Software Quality: Analyses and Guidelines for Success. Thomson, London.
Kaplan, R., Norton, D., 1992. The balanced scorecard: measures that drive performance. Harvard Business Review.
McGibbon, T., 1996. A business case for software process improvement. DACS State-of-the-Art Report, Rome Laboratory. http://www.dacs.com/techs/roi.soar/soar.html#research.
Pfleeger, S.L., et al., 1997. Status report on software measurement. IEEE Software 14 (2), 33–43.
Putnam, L.H., Myers, W., 1997. Industrial Strength Software. IEEE Computer Society Press, Los Alamitos, CA.
Rook, P., 1986. Controlling software projects. Software Engineering Journal 1 (1), 7–16.
Software Measurement Cookbook, 1995. International Thomson Computer Press, London.
Stark, G., Durst, R.C., Vowell, C.W., 1994. Using metrics in management decision making. IEEE Computer 27 (9), 42–48.

Christof Ebert received a PhD with honors from the University of Stuttgart in 1994 on a software engineering related topic. He is currently Software Process Manager in Alcatel's Switching Systems Division in Antwerp, Belgium. Among other things, he built up the software metrics program of this division. He has published over sixty refereed papers in the areas of software metrics, software quality assurance, real-time software development, and CASE support for such activities. His current research topics include software metrics, software process analysis and improvement, and requirements engineering. Dr. Ebert serves on the editorial board of IEEE Software and on program committees of software conferences. He co-authored ``Software Metrics in Industry'', published by Springer, Germany, in 1996. He is a member of the IEEE, GI, and the Alpha Lambda Delta honor society.
