
Technical controlling in software development




Christof Ebert*
Alcatel Telecom, Switching Systems Division, SSD SEPG, SD-97, Fr.-Wellesplein 1, B-2018 Antwerp, Belgium

*Tel.: +32-3-240-4081; Fax: +32-3-240-9935; e-mail: [email protected]

International Journal of Project Management Vol. 17, No. 1, pp. 17-28, 1999
© 1998 Elsevier Science Ltd and IPMA. All rights reserved. Printed in Great Britain.
0263-7863/99 $19.00 + 0.00; PII: S0263-7863(97)00065-3

Collecting and analyzing metrics is critical to objectively identifying and quantifying process improvements. Technical controlling of software projects is introduced as a comprehensive controlling activity concerned with identifying, measuring, accumulating, analyzing and interpreting project information for strategy formulation, planning and tracking activities, decision-making, and cost accounting. Progress metrics are particularly relevant for gaining insight into projects and, at the same time, into process improvements. This article focuses on introducing and maintaining a corporate program for technical controlling in a highly distributed large organization. Experiences are shared that show how technical controlling closely relates to, and thus supports, an ongoing software process improvement initiative. Results from Alcatel Telecom's Switching Systems Division are included to show practical impacts. © 1998 Elsevier Science Ltd and IPMA. All rights reserved

Keywords: metrics, process improvement, project control, quality metrics, software metrics, technical controlling

Introduction

Poor management can increase software costs more rapidly than any other factor (Barry Boehm)

Technical controlling for software projects is defined as a controlling activity concerned with identifying, measuring, accumulating, analyzing and interpreting project information for strategy formulation, planning and tracking activities, decision-making, and cost accounting. As such it is more than merely ensuring the overall technical correctness of a project, as earlier defined.[1] Objectives are derived from these activities:

• Decision-making: what should I do?
• Attention-directing: what should I look at?
• Performance evaluation: how well am I doing?
• Planning: what can we reasonably achieve in a given period?
• Target-setting: how much can we improve in a given period?

Unfortunately, many organizations that consider software development their core business still strictly separate business performance monitoring and evaluation from what is labeled low-level software metrics.[2] Our motivation while building up a corporate software metrics program was to embed it within corporate technical controlling in order to align the different levels of target-setting and tracking activities. Only the close link of corporate strategy, with clearly specified business goals, to operational project management helps in achieving overall improvements. We therefore coordinated the metrics program for operational development management with our software process improvement initiative to ensure that goals on all levels correspond with each other.

For this study we provide data gathered during development and field operations of several releases of the Alcatel 1000 S12 switching system. All of the introduced analyses that together comprise technical controlling are concerned with issues that may vary from company to company. Activities and processes may differ, as may work products and entire lifecycle models. It is thus necessary for each company to go through the process of defining its own norms, based on data from its individual project history, rather than relying solely on 'benchmarking' data from the literature. Only a complete absence of reliable internal project data might justify using such external data to set up reasonable targets immediately.[3]

The Alcatel 1000 S12 is a digital switching system that is currently used in over 40 countries world-wide. It provides a wide range of functionality (small local exchanges, transit exchanges, international exchanges, network service centers, or intelligent networks) and scalability (from small remote exchanges to large local exchanges). Its typical size is more than 4 million lines of code (MLOC), of which a big portion is customized for local network operators. The code used for S12 is realized in Assembler, C and CHILL. In terms of functionality, S12 covers almost all areas of software and computer engineering, including operating systems, database management and distributed real-time software.

The paper is organized as follows. Section 2 gives a brief introduction to technical controlling, to how it fits into the meta-framework of project management, and to metrics as the major tool for decision-making in software project management. Relating reengineering activities and setting improvement objectives is described in Section 3. Starting a corporate program for technical controlling in a distributed organization, as part of a software process improvement (SPI) initiative, is introduced in Section 4. Experiences, especially from the perspective of organizational learning, are provided in Section 5. Finally, Section 6 summarizes the results and gives an outlook on how technical controlling and SPI evolve over time in an organization.

Applications of technical controlling

The failure of so many software projects to date is triggered not so much by incompetent project managers or designers working on these projects as by the wrong management techniques being used. Management techniques derived and built on experience from small projects, which often do not even stem from software projects, are inadequate for large systems development. As a result, the delivered software is late, of low quality, and of much higher cost than originally estimated. Specific management techniques are clearly needed because software projects are often unique (i.e. they are not repeated); they yield intangible products; and the underlying processes for creating the products are not entirely understood. Unlike other engineering disciplines at university, software engineering education until recently meant anarchic programming techniques instead of proper project management.

The basic activities within software project management can be clustered as:

• Tendering and requirements management
• Estimation and costing
• Resource management
• Planning and scheduling
• Monitoring and reviews
• Product control.

Much has been written on the parts related to classic project management; the monitoring aspect, however, is often neglected. We will therefore focus on this part and call it 'technical controlling' to distinguish it from other controlling activities (e.g. budget). Promoting designers to management positions multiplies the risk that people who have experienced software as intangible, and as such seemingly impossible to monitor, forecast and control, sell this belief again and again. No wonder that some key techniques in technical controlling were driven by classic management rather than by bit-juggling software managers.

We have earlier defined technical controlling of software projects as a controlling activity concerned with identifying, measuring, accumulating, analyzing and interpreting project information for strategy formulation, planning and tracking activities, decision-making, and cost accounting. Obviously, other (perhaps even better known) control activities within software projects, such as configuration or change control of deliveries and intermediate work products, are not included in this definition.

Several repetitive stages can be identified in technical controlling, all of which can be mapped to a simple measurement process (Figure 1):

• Set objectives, both short- and long-term, for products and process
• Forecast and develop plans, both for projects and for departments
• Communicate information
• Coordinate and implement plans
• Understand and agree to commitments and their changes
• Motivate people to accomplish plans
• Measure achievement in projects and budget centers
• Predict the development direction of process and product relative to goals and control limits
• Identify and analyze potential risks
• Compare with objectives
• Evaluate performance
• Investigate significant deviations
• Determine whether the project is under control and whether the plan is still valid
• Identify corrective actions and reward/penalize performance
• Implement corrective actions.

Figure 1 Metrics-driven project management

Different controlling objectives request different levels of aggregation of metrics. Individual designers or testers need progress metrics at the level of their deliverable work products and contribution to the project. Team leaders and functional coordinators need the same information at the level of their work area. Department heads request metrics that relate to time, effort and budget within their departments. Project managers, on the other hand, look much more at their projects' individual budgets, schedules and deliverables to gain insight into overall performance and progress. Clearly, all users of metrics need immediate insight into the lower and more detailed levels as soon as performance targets are missed or the project drifts away from schedule and budget. Easy access to the different levels of information is thus of high importance.

While metrics-based decision-making is more accurate and reproducible than so-called 'intuitive' and ad-hoc approaches for managing software projects, it should be clear that there are components of uncertainty that must be considered for risk assessments:

• Requirements are becoming increasingly unstable in order to achieve shorter lead times and faster reaction to changing markets. The risk is that the project is built on a moving baseline, which is one of the most often quoted reasons for project failure.

• Any forecast and plan is based on average performance indicators and history data. The smaller the project, and the more critical paths are created by requested expert knowledge, the higher the risk that a plan which looks reasonable from a macroscopic viewpoint never achieves its targets on the microscopic level of individual experts' availability, effectiveness and skills. Several studies have shown that individual experience and performance contributes up to 70% of the overall productivity range.[4]

• Estimations and forecasts are based on individual judgment and as such are highly subjective. Applying any estimation model expresses first of all the experience and judgment of the assigned expert. Even simple models such as Function Points are reported to yield reproducibility inaccuracies of more than 30%.[4] To reduce the risk of decision-making based on such estimates, a Delphi-style approach can be applied that consolidates multiple expert inputs into one estimate (a small sketch follows this list).

• Most reported software metrics are based on manual input of the raw data into operational databases. Faults, changes, effort, even the task breakdown are recorded by individuals who often don't necessarily care for such things, especially under delivery and time pressure. Software metrics, even if perceived as accurate (after all, they are based on numbers), must be assumed to carry a certain amount of error, which experience shows is clearly in the range of 10-20%.[4]
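To illustrate the Delphi-style consolidation mentioned in the third point, here is a minimal sketch in Python; the expert figures and the 30% disagreement threshold are invented for illustration and are not from the original study:

    # Sketch: Delphi-style consolidation of independent expert estimates.
    # All figures below are invented for illustration.

    def consolidate(estimates_pm):
        """Combine expert effort estimates (person-months) into one figure."""
        s = sorted(estimates_pm)
        n = len(s)
        median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
        spread = (max(s) - min(s)) / median   # relative disagreement
        return median, spread

    experts = [14.0, 16.5, 18.0, 25.0]        # four independent estimates
    estimate, spread = consolidate(experts)
    print(f"consolidated: {estimate:.1f} PM, disagreement: {spread:.0%}")
    # A large disagreement (say, above 30%) would trigger another round in
    # which the experts discuss their assumptions and re-estimate.

In a real Delphi session the feedback loop, not the formula, does the work; the consolidation merely makes the disagreement visible.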

Control is only achievable if measures of performance have been defined and implemented, objectives have been defined and agreed, predictive models are established for the entire life cycle, and the ability to act is given. The remainder of this section will further investigate examples of each of these conditions.

Decision making

The single best technology for getting some control over deadlines and other resource constraints is to set formal objectives for quality and resources in a measurable way.[1,4,5] Planning and control activities cannot be separated. Managers control by tracking actuals against plans and acting on observed deviations. Controls should focus on significant deviations from standards and at the same time suggest appropriate ways of fixing the problems. Typically, these standards are the schedules, budgets, and quality targets established by the project plan. All critical attributes should be made both measurable and testable to ensure effective tracking. The worst acceptable level should be clear, although the internal target is in most cases higher.

The influence of metrics definition and application from project start (e.g. estimation, target setting and planning) through steering (e.g. tracking and budget control, quality management) to maintenance (e.g. failure rates, operations planning) is very well described in the related IEEE Standard for Developing Software Life Cycle Processes.[6] This standard also helps in defining the different processes that constitute the entire development process, including relationships to management activities.

Most benefits that we have recorded since establishing a comprehensive technical controlling program are indeed related to project management:

• Improved tracking and control of each development project based on uniform mechanisms
• Earlier identification of deviations from the given targets and plans
• Accumulation of history data from all different types of projects, reused for improving estimations and planning of further projects
• Tracking of process improvements and deviations from processes.

One of the main targets of any kind of measurement is that it should provide an objective way of expressing information, free of value judgments. This is particularly important when the information concerned is 'bad news', for instance related to productivity or cost, and thus may not necessarily be well received. The observed human tendency is often to ignore any criticism related to one's own area and to direct attention to somebody else's: testers articulate that 'the design is badly structured', while operations emphasize that 'the software has not been adequately tested'. Any improvement activities must therefore be based on hard numerical evidence. The first use of metrics is most often to investigate the current state of the software process. Table 1 relates the most relevant metrics to the different levels of process maturity.[7] Basically, any application of metrics is mainly restricted by non-repeatable processes and thus by a limited degree of consistency across projects.

Table 1 Appropriate metrics for different CMM levels

CMM 5 (continuous improvements are institutionalized): process metrics for the control of process change management; analysis of process and product metrics according to documented procedures to prevent defects and to continuously improve processes.
CMM 4 (products and processes are quantitatively managed): process metrics for the control of single processes; quantitative goals are set and managed; control limit charts over time for size growth, costs, completion, time; assess beneficial process innovations and manage process change; maintain a managed and controlled process database across projects.
CMM 3 (appropriate techniques are institutionalized): defined and established product metrics (measurement standards and experience-based estimation guidelines); statistical control charts; formal records for retrieving level 2 metrics; automatic metric collection; experience database.
CMM 2 (project management is established): defined and reproducible project metrics for planning and tracking (contents, requirements status, faults and their status in all phases, effort, size, design and test progress); profiles over time for these metrics; few process metrics for SPI progress tracking.
CMM 1 (process is informal and ad hoc): few project metrics (size, effort, faults); however, metrics are inconsistent and not reproducible.

A selection of the most relevant project tracking metrics is provided in Figure 2. These metrics include milestone tracking, cost evolution, a selection of process metrics, work product deliveries, and faults, each with status information. Such tracking metrics are periodically updated and provide an easy overview of the project status, even for very small projects. From this set, several metrics can be selected for weekly tracking of work products' status and progress (e.g. module delivery, faults), while others are reported periodically to build up a history database (e.g. size, effort). Most of these metrics are actually byproducts of automatic collection tools attached to planning and software configuration management (SCM) databases. Project-trend indicators based on such simple tracking curves are much more effective in alerting managers than delayed and superficial task completion tracking with PERT charts.

Figure 2 Overview project metrics for milestones, cost, process metrics, deliveries and faults

Effective project tracking and implementation of immediate corrective actions requires a strong project organization. As long as department heads interfere with project managers, decisions can be misleading or even contradictory. Frequent task and priority changes at the practitioner level, with all related drawbacks, are the consequence. A major milestone within our SPI program was reached when specific project management responsibilities were focused entirely on the project organization. Today, project managers have their specific roles and responsibilities outlined, and they are required to develop plans and objectives. Measurable quality targets are defined within the project quality plans and later tracked through the entire project life cycle. Project managers report project and quality progress using standardized reporting mechanisms, such as fault removal efficiency (similar to the chart in Figure 3) or progress vs schedule in terms of deliverables (similar to the curves and completeness figures in Figure 2). Committing to such targets, and being forced to track them over the development process, ensures that project managers, and therefore the entire project organization, carefully observe and implement SPI activities.

Test tracking is a good example of how to make use of tracking metrics (Figure 4). Test results can be used to suggest an appropriate course of action, either during testing activities or towards their completion. Based on a test plan, all testing activities should be seen in the context of the full test schedule rather than as independent actions. If the goal is, for instance, to know how well the integration test detects faults, then models of both test progress and defects must be available. Information that supports interpretation (e.g. fault density per subsystem, relationships to feature stability) must be collected and integrated. A test tracking curve is presented in Figure 4. Beyond the typical S-shaped appearance, several questions must be answered before one can judge progress or efficiency. For example, how effectively was the specific test method applied? What is the coverage in terms of code, data relations, or features during regression test? How many faults occurred during similar projects with similar test case suites over time?

With such premises it is feasible to set up not only release-oriented phase-end targets but also phase entry criteria that allow for rejecting work back to module test or inspections if the system quality is inadequate. Related test process metrics include test coverage, the number of open fault reports by severity, the closure delay of fault reports, and other product-related quality, reliability, and stability metrics. Such metrics support judgment in situations when, due to difficulties in testing, decisions on the nature and sequence of alternative paths through the testing task must be made while considering both the entire testing plan and the present project priorities. For example, there are circumstances in which full testing of a limited set of features will be preferred to an incomplete level of testing across the full (contracted) functionality.
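As a minimal illustration of such test tracking, the following Python sketch compares cumulative executed test cases and detected faults against the plan and flags a significant lag; all weekly figures and the 15% alarm threshold are invented:

    # Sketch: weekly test tracking against the plan (illustrative numbers).

    planned_tests = [40, 120, 260, 420, 560, 640, 680]   # cumulative, planned
    actual_tests  = [35, 100, 190, 300, 450, 590, 670]   # cumulative, executed
    actual_faults = [10, 31, 55, 90, 121, 148, 160]      # cumulative, detected
    expected_faults = 170    # predicted from size, reuse degree and history

    for week, (plan, act, faults) in enumerate(
            zip(planned_tests, actual_tests, actual_faults), start=1):
        progress_gap = (plan - act) / plan      # schedule indicator
        detection = faults / expected_faults    # fault detection indicator
        flag = "ALERT" if progress_gap > 0.15 else "ok"
        print(f"week {week}: tests {act}/{plan} ({flag}), "
              f"faults at {detection:.0%} of prediction")

Interpreting the two curves together matters: executed test cases alone say little if the predicted faults are not being found at a comparable rate.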

Cost control

Because costs in software projects are predominantly related to labor (i.e. effort), it is rare to find practical cost control beyond deadline- or priority-driven resource allocation and shifting. However, the uses of cost information are manifold. In decision-making, cost information is used to determine relevant costs (e.g. sunk costs, avoidable vs unavoidable costs, variable vs fixed costs) in a given project or process, while in management control the focus is on controllable vs non-controllable costs. A major step towards decision support is an accounting that moves from head-count-based effort models to activity-based costing. Functional cost analysis and even target-costing approaches are increasingly relevant because customers tend to pay for features instead of entire packages as before. Not surprisingly, cost reduction can only be achieved if it is clear how activities relate to costs. The difference is to assign costs to activities or processes instead of departments.

Activity-based models allow for more accurate estimates and tracking than holistic models that only focus on size and effort for the project as one unit. Effects of processes and their changes, of resources and their skill distribution, or of factors related to each of the development activities can be considered, depending on the breakdown granularity. An example is given in Table 2, which includes a first breakdown into major development phases. The percentages and costs per KLOC are for real-time embedded systems and should not be taken as fixed quantities.[8,9] Typically, both the percentage and productivity columns are based on a project history database that is continuously improved and tailored to the specific project situation. Even the process allocation might vary; projects with reuse have a different distribution, with more emphasis towards integration and less effort for top-level design (TLD). Unit cost values are likely to decrease in the long term as the cumulative effects of technological and process changes become visible.

All activities that form the development process must be considered to avoid uncontrollable overhead costs. Cost estimation is derived from the size of new or reused software related to the overall productivity and the cost per activity. The recommended sequence of estimation activities is to first estimate the size of the software product, then estimate the cost, and finally estimate the development schedule based on the size and cost estimates. These estimates should be revised towards the end of TLD, of detailed design (DD), and again towards the end of unit test.
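A minimal Python sketch of this sequence, using the illustrative unit costs of Table 2 (below); the 30 KLOC input is an invented example:

    # Sketch: activity-based effort estimation from an estimated size,
    # using the illustrative breakdown of Table 2 (PM = person-months).

    PM_PER_KLOC = {
        "requirements management":    0.7,
        "top level design":           1.7,
        "detailed design":            2.2,
        "coding until module test":   2.2,
        "integration":                1.6,
        "system test, qualification": 1.6,
    }

    def estimate_effort(size_kloc):
        """Step 2: derive effort per activity from the estimated size."""
        return {act: uc * size_kloc for act, uc in PM_PER_KLOC.items()}

    plan = estimate_effort(30.0)     # step 1: size estimate, e.g. 30 KLOC
    for activity, pm in plan.items():
        print(f"{activity:28s} {pm:6.1f} PM")
    print(f"{'total':28s} {sum(plan.values()):6.1f} PM")
    # Step 3 would derive the schedule from the size and effort estimates;
    # all three are revised at the end of TLD, DD and unit test.

The breakdown granularity is the design choice here: a finer activity split lets process changes or skill distributions be reflected per activity instead of in one holistic productivity figure.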

detailed e�ort reporting throughout each project, itallows for a clear separation between value adding andnon-value adding activities, process value analysis, andimproved performance measures and incentiveschemes. Once process related costs are obvious, it iseasy to assign all overhead costs, such as integrationtest (IT) support or tools, related to the processes

Table 2 Activity-based e�ort allocation

Activity Percent PM/KLOC

Requirements management 7 0.7Top level design 17 1.7Detailed design 22 2.2Coding until module test 22 2.2Integration 16 1.6System test, quali®cation 16 1.6Total 100 37

Figure 3 Typical benchmark e�ects of detecting faults ear-lier in the life cycle

Figure 4 Test tracking

Technical controlling in software development: C. Ebert

21

Page 6: Technical controlling in software development

where they are necessary and again to the respectiveprojects. Instead of allocating such overheads to pro-jects based on overall development e�ort per project, itis allocated to activities relevant in the projects. Forinstance, upfront design activities should not contrib-ute to allocation of expensive test equipment.While dealing with controlling cost, often the ques-

tion of which tracking system is to be used, comes up.Most companies have rather independent ®nancialtracking systems in place that provide monthly reportson cost per project and sometimes even on an activitybase. The reports are often fed with timesheet systemsand relate e�ort to other kinds of cost. Unfortunately,such ®nancial systems are in many cases so indepen-dent from engineering that neither the activities clus-ters nor the reporting frequency are helpful for makingany short-term decision.Variance analysis is applied to control costs evol-

ution over time. It is based on standard costs that areestimated (or known) costs to perform a single activitywithin a process under e�cient operating conditions.Typically such standard costs are based on well-de®ned outputs of the activity, for instance test casesperformed and errors found in testing. Knowing thee�ort per test case during integration test and thee�ort to detect an error (which includes regression test-ing but not correction), a standard e�ort can be esti-mated for the whole project. Functionality, size, reusedegree, stability and complexity of the project deter-mine the two input parameters, namely test cases andestimated number of faults to be detected in thespeci®c test process. Variances are then calculated as arelative ®gure: variance = (standard cost±actual cost)/standard cost.Variance analysis serves to ®nd practical pointers to

causes of o�-standard performance so that project-management or department heads can improve oper-ations and increase e�ciency. It is, however, not anend in itself because variances might be caused byother variances or are related to a di�erent target.Predictors should thus be self-contained, such as in thegiven example. Test cases alone are insu�cient becausean unstable product due to insu�cient design causesmore e�ort in testing.
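A minimal Python sketch of such a variance calculation; the unit costs and counts are invented for illustration:

    # Sketch: variance analysis for an integration test process,
    # based on standard costs (all figures are illustrative).

    EFFORT_PER_TEST_CASE = 0.5   # person-days to execute one test case
    EFFORT_PER_FAULT     = 2.0   # person-days to detect one fault
                                 # (includes regression testing, not correction)

    def standard_cost(test_cases, predicted_faults):
        """Standard effort under efficient operating conditions."""
        return (test_cases * EFFORT_PER_TEST_CASE
                + predicted_faults * EFFORT_PER_FAULT)

    std = standard_cost(test_cases=600, predicted_faults=150)   # 600 PD
    actual = 680.0                          # actually booked effort (PD)
    variance = (std - actual) / std         # relative figure, as in the text
    print(f"standard {std:.0f} PD, actual {actual:.0f} PD, "
          f"variance {variance:+.1%}")
    # A clearly negative variance is a pointer to off-standard performance,
    # e.g. an unstable product causing extra testing effort.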

Return on investment (ROI) is a critical and often misleading expression when it comes to development cost or the justification of new techniques.[3,10] Too often, heterogeneous cost elements with different meanings and unclear accounting relationships are combined into one figure that is then optimized. For instance, reducing 'cost of quality', which includes appraisal cost and prevention cost, is misleading when compared with the cost of nonconformance, because certain appraisal costs (e.g. module test) are a component of regular development. Cost of nonconformance, on the other hand, is incomplete if it considers only the internal cost of fault detection, correction and redelivery, because it must also include opportunity costs due to rework at the customer site, late deliveries, or simply binding resources that otherwise might have been used for a new project.

Setting objectives – the case for process improvement

A primary business goal for any telecommunication systems supplier today is to control and reduce software development cost, as software development increasingly contributes to product cost. It has been shown within various companies that the Capability Maturity Model of the Software Engineering Institute (SEI CMM) is an effective roadmap for achieving cost-effective solutions.[3,10] Many of the problems that lead to project delays or failures are technical; the critical ones, however, are managerial. Software development depends on work products, processes and communication. It is only structured if a structure is imposed and controlled. This is where the CMM fits in.[7] Figure 3, which was compiled from different industrial databases, gives an overview of fault detection within organizations according to their respective maturity level. Obviously, what is done right in software development is done early. There is little chance of catching up when things are discovered to be going wrong later in the development.

Since the CMM provides both a guideline for identifying strengths and weaknesses of the software development process and a roadmap for improvement actions, Alcatel Telecom also based its improvement activities on the CMM. Assessments are conducted in all major development sites. The direct findings according to the CMM are analyzed for their prospective impacts on Alcatel's business goals and then prioritized to select those areas with the highest improvement potential. Based on this ranking, a concrete plan of improvement actions is repeatedly refined (Figure 5), resulting in an action plan with detailed descriptions of improvement tasks, with responsibilities, effort estimates, etc. Checkpointing assessments of the same type are repeated to track the implementation of the improvement plan.

Figure 5 Deriving concrete actions from improvement goals and strategic goals

As indicated in Figure 5, metrics for tracking both process conformance and output availability or quality are a key instrument for process improvement. Typical examples of such measurement goals are to track effort related to process improvement or to monitor the results and achievements. Different groups typically work towards individually controlled goals that build up to business-division-level goals and corporate goals. An example for improved maintainability indicates this hierarchy: a business-division-level goal could be to improve maintainability within legacy systems, as this is strategically important for all telecommunication suppliers. Design managers might break that down further, to redesigning exactly those components that are at the edge of being maintainable. Project managers, on the other hand, face a trade-off with time to market and might emphasize incremental builds instead. Clearly, both need appropriate indicators to support their selection processes, which defines the way towards the metrics related to these goals. Obviously, one of the key success criteria for SPI is to understand the political context and the various hidden agendas behind technical decisions in order to make compromises or weigh alternatives.

Objectives related to individual processes must be unambiguous and agreed by the respective groups. This is obvious for test and design groups: while the former are rewarded for finding defects and thus focus on writing and executing effective test suites, design groups target delivering code that executes without defects. In the case of defects, these must be corrected efficiently, which allows setting up another metric for a design group: the backlog of faults it needs to resolve.

It is thus important to consider the different perspectives and their individual goals related to promotion, projects and the business. Most organizations have at least four: the practitioner, the project manager, the department head, and corporate executives. Their motivations and typical activities differ widely and often create conflicting goals which, at worst, are resolved at the practitioner level. Reuse is another example that continuously creates trade-off discussions: when a project incurs expenses for keeping components maintainable and promoting their reusability, who pays for it, and where is it recorded in a history database that compares the efficiency of projects and thus of their management?

Managing and tracking SPI can be done on different levels of abstraction. Senior management is interested in the overall achievements relative to what has been invested in the program. Related metrics include the effectiveness of fault detection, because its obvious relationship to cost of quality ties directly to the most common business goal of cost reduction. Lead-time reduction and effort reduction are related to reduced rework, and as such also to fewer defects and earlier detection. On the project level, SPI management includes a variety of process metrics that compare efficiencies and thus relate, on the microscopic level, to achieving the business goals.

Setting up technical controlling in a distributed organization

The metrics process, roles and responsibilities

Obviously the introduction of technical controlling, and thus of software metrics, to projects has to follow a stepwise approach that must be carefully coached. Each new metric that needs tool support must first be piloted in order to find out whether the definitions and tool descriptions are sufficient for collection. Then the institutionalization must be planned and coached in order to obtain valid data. Independent of the targets of a measurement program, it will only be taken seriously if the right people are given responsibility for it.[4]

For that reason, the following three roles have been established in Alcatel Telecom:

1. Project metrics responsibles within each single project serve as a focal point for engineers and project management in the project (Figure 6). In many cases metrics flow in both directions of the dashed lines. They ensure that the metrics program is uniformly implemented and understood. The role includes support for data collection across functions and analysis of the project metrics. The latter is most relevant for the project manager, who must be well aware of progress, deviations and risks with respect to quality or delivery targets. By creating the role of a project's metrics responsible, we guaranteed that the responsibility was clearly assigned from project start, while still allowing for distributed (functional) metrics collection.

2. Local metrics teams in each location serve as a focal point for all metrics-related questions in that location; they synchronize metrics activities, emphasize commonality, and collect local requirements from the different projects. Besides being the focal point for the metrics program in a single location, they provide training and coaching of management and practitioners on metrics and their use and application. In addition, they ensure that heterogeneous tools are increasingly aligned and that tools and forms for data collection and analysis are made available to the projects and the functional organization.

3. A central (business division) metrics team, formed by representatives of the local metrics teams of the organization, coordinates metrics and technical controlling across locations. Altogether, they ensure that the rationalization and standardization of a common set of metrics and the related tools and charts is accelerated. In building a corporate metrics program and aligning processes within the software process improvement activities, the creation of a history database for the entire organization is an important result for improving estimates.

This structure guarantees that each single change or refinement of metrics and underlying tools, as well as needs from projects, can be easily communicated from a single project to the whole organization. Teams should, however, be used cautiously. While a team has the capability to take advantage of diverse backgrounds and expertise, the effort is most effective when no more than three people are involved in a distinct task; larger teams spend too much time backtracking on metrics choices. We also found that when potential users worked jointly with the metrics support staff to develop the metrics set, the program was more readily accepted.

The related metrics process is applicable in the day-to-day project environment. It is based on a set of defined metrics and supports the setting up and tracking of project targets and improvement goals (Figure 7):

1. Based on a set of predefined corporate metrics, the first step is to select the metrics suitable for the project.

2. Then the raw data is collected to calculate the metrics. Be aware of the operational systems that people work with and that need to supply the data: if the data is not easily available, chances are high that the metric is inaccurate, and people tend to ignore further metrics requests. People might then even comply with the letter of the definition but not with the spirit of the metric.

3. Metrics are then analyzed and reported through the appropriate channels. Analysis includes two steps. First, the data is validated to make sure it is complete, correct, and consistent with the goals it addresses. Don't assume that automatic metrics are always trustworthy; at least perform sample checks (a validation sketch follows this list). The real challenge is the second step, which investigates what is behind the metrics. Some conclusions are straightforward, while others require an in-depth understanding of how the metrics relate to each other. Consolidating metrics and aggregating results must be done with great caution, even if apples and apples might fit neatly, so to speak. Results are useless unless reported back to the people who make improvements or decisions.

4. Finally, the necessary decisions and actions are taken based on the results of the analysis. The same metrics might trigger different decisions depending on the target audience: while senior management may just want an indication of how well the improvement program is doing, the local SEPG leader might carefully study process metrics to eliminate deficiencies.
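The validation in step 3 lends itself to automation. Here is a minimal Python sketch that checks collected raw data for completeness and plausibility before any analysis; the record fields and values are invented:

    # Sketch: validating collected raw data (step 3 above) before analysis.

    REQUIRED = ("module", "size_kloc", "faults", "effort_pm")

    def validate(records):
        """Split records into valid ones and a list of reported issues."""
        valid, issues = [], []
        for rec in records:
            missing = [f for f in REQUIRED if rec.get(f) is None]
            if missing:
                issues.append(f"{rec.get('module', '?')}: missing {missing}")
            elif rec["size_kloc"] <= 0 or rec["faults"] < 0 or rec["effort_pm"] <= 0:
                issues.append(f"{rec['module']}: implausible values")
            else:
                valid.append(rec)
        return valid, issues

    raw = [
        {"module": "call_handling", "size_kloc": 12.4, "faults": 31, "effort_pm": 9.5},
        {"module": "charging", "size_kloc": None, "faults": 4, "effort_pm": 2.1},
        {"module": "signalling", "size_kloc": 8.0, "faults": -2, "effort_pm": 5.0},
    ]
    valid, issues = validate(raw)
    print(f"{len(valid)} valid record(s); issues: {issues}")

Sample checks like these catch exactly the manual-input errors discussed earlier, before they silently distort aggregated metrics.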

Figure 6 Roles and responsibilities within metrics program

Figure 7 Integrated measurement process overview


Metrics selection and definition

Software metrics can perform four functions:[4,7] (1) they can help to understand more about software work products or underlying processes; (2) they can be used to evaluate work products or processes against established standards; (3) they can provide the information necessary to control the resources and processes used to produce the software; and (4) they can be used to predict attributes of software entities in the future. The basic template for a metric's definition thus states the objective, one of the four functions, and the attribute of the entity being measured.

The problem with many software metrics is that they are typically not described in business terms and not linked or aligned to the needs of the business. While traditional business indicators look at revenue, productivity, or order cycle time, their counterparts in software development measure size, faults or effort. Clearly, both sides must be aligned to identify those software product or process metrics that support business decisions. One of the first steps towards relating the many dimensions of business indicators was the Balanced Scorecard concept.[11] The link to software metrics is given by investigating the operational view and the improvement view of the balanced scorecard. Questioning how the operational business can stay competitive yields critical success factors (e.g. cost per new/changed functionality, field performance, maturity level). Relating the actual situation (e.g. elapsed time for an average set of requirements) to best-practice values in most cases motivates a process improvement program. Keeping in mind these relationships from business objectives to critical success factors, to operational management, and finally to software processes ensures that customer-reported defects are not seen as yet another fault category and percentage to be found earlier, but within the broader scope of customer satisfaction and sustained growth.

Each metric's definition should ensure consistent interpretation and collection across the organization. Capturing precise metrics information not only helps with communicating what is behind the figures but also builds the requirements for automatic tool support and provides basic material for training course development. We have used the following sections within the template (a small example follows the list):

• Name and identifier
• Brief description
• Relationships to goals or improvement targets (this includes business goals or tracking distinct improvement activities)
• Definition with precise calculation
• Underlying raw data (or metrics primitives) used for calculating the metric
• Tools support (links and references to the supporting tools, such as databases, spreadsheets, etc.)
• Visualization with references to templates (e.g. chart or table type, combination with other metrics, visualization of goals or planning curves)
• Collection period and reporting frequency
• Target and alarm levels for interpretation (e.g. release criteria, interpretation of trends)
• Configuration control with links to storage of (periodically collected) metrics
• Distribution control (e.g. availability date, audience, access control)
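To make the template concrete, here is a minimal sketch of one metric definition captured as a record in Python; the example metric and its target and alarm levels are invented, but the fields follow the template above:

    # Sketch: a metric definition following the template sections above.
    # The concrete metric and all values are invented for illustration.

    fault_density = {
        "id": "Q-01",
        "name": "Fault density in new/changed code",
        "description": "Faults detected per KLOC of new/changed software",
        "goal_link": "business goal: reduce cost of non-quality",
        "calculation": "faults_detected / new_changed_size_kloc",
        "raw_data": ["faults_detected", "new_changed_size_kloc"],
        "tool_support": "SCM database, fault tracking system",
        "visualization": "trend chart per subsystem, with planning curve",
        "frequency": "collected weekly, reported monthly",
        "target": 10.0,    # target level (faults/KLOC) -- invented
        "alarm": 15.0,     # alarm level for interpretation -- invented
        "storage": "project history database",
        "distribution": "project team; aggregated figures to management",
    }

    def interpret(metric, value):
        """Map a measured value to the defined target and alarm levels."""
        if value >= metric["alarm"]:
            return "alarm: investigate affected components"
        return "within target" if value <= metric["target"] else "watch trend"

    print(interpret(fault_density, 12.3))   # -> watch trend

Keeping definitions machine-readable like this is one way to feed the same description into collection tools, reports and training material.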

A project-specific measurement plan links the generic metrics definitions to concrete projects with their individual goals and responsibilities. Additional metrics to be used only in that project are referred to in the measurement plan. The measurement plan is linked to the quality plan to facilitate alignment of targets.

Often terminology must be reworked, especially when the projects are scattered across a distributed organization such as Alcatel Telecom. The standard definition might include several sections applicable to different project types. Project size as well as design paradigms influence definitions and metric calculation, even if the metric goals and underlying rationales are the same, as with deliverables tracking.

The typical suite of project metrics includes the following set to start with (see Figure 2 for the presentation of such metrics; a small calculation example follows the list):

• Faults and failures (faults across development phases; effectiveness of fault detection during the development process; faults per KLOC in new/changed software; failures per execution time during test and in the field)
• Project size in KLOC or KStmt (new, changed, reused, or total)
• Effort (total effort in project; effort distribution across life cycle phases; productivity; efficiency in important processes)
• Calendar or elapsed time (with respect to milestones, reviews, and work product deliveries; compared with planning data).
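A minimal Python sketch computing two metrics of this starting set, fault density and the phase-wise distribution of fault detection; all counts are invented:

    # Sketch: fault density and phase-wise share of fault detection.
    # All counts are invented for illustration.

    faults_by_phase = {
        "design reviews":   120,
        "module test":      260,
        "integration test": 140,
        "system test":       60,
        "field":             20,
    }
    new_changed_kloc = 45.0

    total = sum(faults_by_phase.values())
    print(f"fault density: {total / new_changed_kloc:.1f} faults/KLOC")
    for phase, n in faults_by_phase.items():
        # share of all known faults removed by this detecting phase
        print(f"{phase:18s} {n / total:5.1%}")

The phase-wise shares feed directly into the effectiveness-of-fault-detection figures used for SPI tracking: the earlier the bulk of the faults is removed, the cheaper the project.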

Most of our tracking metrics cover work product completion, open corrective action requests, and review coverage, in order to identify and track the extent to which development and quality activities have been applied and completed on individual software deliverables. These metrics provide visibility to buyers and vendors about the progress of software development and can indicate difficulties or problems that might hamper the accomplishment of quality targets.

Table 3 Visibility, access, and timing of metrics

Private data for the practitioner (immediate access, i.e. within minutes):
• Fault rates of individuals
• Fault rates in a module before integration
• Fault rates during coding
• Number of local compile runs
• Effort spent for a single module

Private data for the project team (hourly or daily access):
• Fault rates in subsystems
• New/changed code per module
• Estimated effort and new/changed size per module
• Number of repeated reviews and inspections
• Fault rates during design

Corporate data (weekly or monthly access):
• Fault rates in the project
• Failure rates in the project
• New/changed code in the project
• Effort per project
• Effort per delivered code size (efficiency)
• Effort per fault
• Effort per phase and in processes
• Elapsed time per process and phases

Progress during TLD and DD can be tracked based on effort spent on the respective processes on the one hand, and on defects found on the other. Defect-based tracking in particular is very helpful for gaining high-level management attention, because this is the kind of decision driver that accompanies all major release decisions. When effort is below plan, the project will typically be behind schedule because the work simply isn't getting done. On the other hand, design might have progressed, but without the level of detail necessary to move to the next development phase. Both metrics should be reported weekly and can easily be compared with a planning curve related to the overall estimated effort and defects for the two phases. Of course, design deliverables might also be tracked (e.g. SDL descriptions). However, these often come late in the design process and are thus not a good indicator for focusing management attention.
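A minimal Python sketch of such weekly phase tracking against planning curves; all planned and actual percentages and the alarm thresholds are invented:

    # Sketch: weekly TLD/DD tracking by effort spent and defects found,
    # each compared with its planning curve (illustrative figures only).

    planned_effort  = [10, 25, 45, 70, 90, 100]   # % of estimated phase effort
    actual_effort   = [8, 20, 34, 52, 75, 92]
    planned_defects = [5, 15, 30, 50, 70, 85]     # % of predicted phase defects
    actual_defects  = [4, 10, 22, 35, 48, 60]

    for week in range(len(planned_effort)):
        effort_lag = planned_effort[week] - actual_effort[week]
        defect_lag = planned_defects[week] - actual_defects[week]
        if effort_lag > 10 or defect_lag > 15:    # invented alarm thresholds
            print(f"week {week + 1}: behind plan "
                  f"(effort -{effort_lag}%, defects -{defect_lag}%)")

Effort below plan signals work not getting done; defects below prediction can signal design without the necessary level of detail, which is why both curves are needed.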

After all, it's people, not numbers

An effect often overlooked in establishing corporate technical controlling is the impact on the people involved. Although introducing metrics means a cultural change for typically all involved parties, the focus is too often only on tools and definitions. Knowing the benefits of metrics for better project management or for steering the course of SPI initiatives does not at all imply that people will easily buy the decision to be measured. Telling the truth from the beginning and providing the whole picture is better than superficial statements about management benefits. If faults, efficiency or task completion are measured, it is not some abstract product that is involved, but the practitioners, who know that they will be compared. Staff at all levels are sufficiently experienced to know when the truth is being obscured.

Plan to position metrics from the beginning as a management tool for improvement, and state that one of the targets is to improve efficiency in the competitive environment. At the same time, be clear not to abuse metrics as management ammunition for outsourcing or cutbacks. For instance, if faults are counted for the first time over the life cycle, establish a task force with representatives from different levels to investigate the results from the viewpoint of root cause analysis and criticality reduction. If their reports are backed by valid metrics, the people should never be left alone. Even the most simple and straightforward metrics, such as faults, do not point to people but towards critical areas in the software components.

Certainly, limited visibility of and access to the metrics helps in creating credibility among practitioners, especially in the beginning (Table 3; adapted from HP sources[12]). Before introducing metrics, however, it is even more important to indicate the intended application of the metrics (such as individual improvements based on individual data) within a supportive climate. It is often helpful to change perspective towards the one providing the raw data: is the activity adding value to his or her daily work? Statistical issues might not automatically align with emotional priorities. If the metrics program is truly intended to measure processes and their improvement instead of individuals, then rumors to the contrary can be minimized by posting the minutes of the metrics meetings. Remember that their perception is their reality.

Experiences with technical controlling

This section shares some selected experiences gained while setting up a globally distributed technical controlling program in the different locations of the Switching Systems Division. The following key success factors could be identified:

• Metrics start with improvement goals. Goals must be in line with each other across the various levels. The business strategy and the related business goals must be clear before lower-level improvement targets are discussed. From the overall business strategy, those strategies and goals must be extracted that depend on successful software development, use, and support. Size or defect metrics alone do not give much information; they become meaningful only as an input to a decision process. This is where the balanced scorecard approach comes in and helps in relating the measurement program to specific corporate goals.[11] Business goals must be broken down to project goals, and those must be aligned with department goals and the contents of quality plans.

• Motivate technical controlling with concrete and achievable improvement goals. Unless targets are achievable and clearly communicated to middle management and practitioners, they will perceive metrics as yet another instrument of management control. Clearly communicated priorities might help with individual decisions.

• Start small and immediately (see the initial timetable in Table 4). It is definitely not enough only to select goals and metrics; tools and reporting must be in line, and all of this takes time. It must, however, be clearly determined what needs to be measured before deciding based on what can be measured.

• Determine the critical success factors of the underlying improvement program. The targets of any improvement program must be clearly communicated and perceived at all levels as realistic enough to fight for. Each single process change must be accompanied by the respective goals and supportive metrics that are aligned with them. Those affected need to feel that they have some role in setting the targets. Where goals are not shared and the climate is dominated by threats and frustration, the metrics program is more likely to fail.

• Provide training, both for practitioners, who after all have to deliver the accurate raw data, and for management, who will use the metrics. The cost and effort of training often stops its effective delivery; any training takes time, money, and personnel to prepare, update, deliver, or receive. Use external consultants where needed to gain additional experience and authority.

• Establish focal points for metrics in each project and department. Individual roles and responsibilities must be made clear to ensure a sustainable metrics program that outlasts the initial SPI activities (Figure 6).

• Define and align the software processes to enable comparing metrics. While improving processes, or setting up new processes, ensure that the related metrics are maintained at the same time. Once estimation moves from effort to size to functionality, the related product metrics must clearly follow.

• Collect objective and reproducible data. Ensure the chosen metrics are relevant for the selected goals (e.g. tracking in order to reduce milestone delay) and acceptable for the target community (e.g. it is not wise to start with productivity metrics).

• Get support from management. Enduring buy-in from management can only be achieved if the responsibility for improvements and the span of necessary control are aligned with realistic targets. Since in many cases metrics beyond test tracking and faults are new instruments for parts of management, they must be provided with the necessary training.

• Avoid abuse of metrics by any means. Metrics must be 'politically correct' in the sense that they should not immediately target persons or satisfy needs for personal blame. Metrics might hurt, but should not blame.

• Communicate success stories where metrics enabled better tracking or cost control. This includes identifying metrics advocates who help in selling the measurement program. Champions must be identified at all levels of management, especially at senior level, who really use metrics and thus help to support the program. Metrics can even tie an individual's work into the bigger picture, if communicated adequately. When practitioners get feedback on the data they collect and see that it is analyzed for decision-making, it gives them a clear indication that the data is being used rather than going into a data cemetery.

• Slowly enhance the metrics program. This includes defining 'success criteria' to be used to judge the results of the program. Since there is no perfect metrics program, it is necessary to determine something like an '80% available' acceptance limit that allows success to be declared when those metrics are available.

• Don't overemphasize the numbers. What they bring to light, such as emerging trends or patterns, is much more relevant. After all, the focus is on successful projects and efficiency improvement, not on metrics.

Technical controlling must make sense to everybody within the organization who will be in contact with it. Therefore, metrics should be piloted and evaluated after some time. Potential evaluation questions include:

. Are the selected metrics consistent with the originalimprovement targets? Do the metrics provide addedvalue? Do they make sense from di�erent angles andcan that meaning be communicated without manyslides? If metrics are considering what is measurablebut don't support improvement tracking they areperfect for hiding issues but should not be labeledmetrics.

. Do the chosen metrics send the right message aboutwhat the organization considers relevant? Metricsshould spotlight by default and without cumbersomeinvestigations of what might be behind. Are theright things being spotlighted?

. Do the metrics clearly follow a perspective that allows comparisons? If metrics include ambiguities or heterogeneous viewpoints, they cannot be used as history data.
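The first evaluation question, consistency with the improvement targets, can be operationalized as a simple traceability check from goals to metrics. The sketch below is again illustrative; goal and metric names are assumptions introduced only for the example.

    # Illustrative sketch: flag metrics that support no improvement goal
    # (candidates for removal) and goals without any supporting metric
    # (coverage gaps). Goal and metric names are assumptions.
    GOALS_TO_METRICS = {
        "reduce milestone delay": ["milestone_delay", "test_progress"],
        "improve field quality": ["fault_density", "review_coverage"],
    }
    SUITE = ["milestone_delay", "test_progress", "fault_density", "code_size"]

    traced = {m for metrics in GOALS_TO_METRICS.values() for m in metrics}
    orphan_metrics = [m for m in SUITE if m not in traced]
    uncovered_goals = [goal for goal, metrics in GOALS_TO_METRICS.items()
                       if not any(m in SUITE for m in metrics)]

    print("metrics without a goal:", orphan_metrics)   # ['code_size']
    print("goals without a metric:", uncovered_goals)  # []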

Many small and independent metrics initiatives had been started before within various groups and departments. Our experience shows that technical controlling of software projects is most likely to succeed as part of a larger software process improvement initiative or as part of a company-wide TQM program. In that case the controlling program benefits from an aligned engineering and business improvement spirit that encourages continuous and focused improvement with the support of quantitative methods.

Conclusions

We have presented the introduction and application of technical controlling within Alcatel Telecom's Switching Systems Division. The targets of technical controlling are as follows:

. Setting process and product goals

. Quantitative tracking of project performance during software development

. Analyzing measurement data to discover existing or anticipated problems

. Determining risk

. Release criteria that are available early for all work products

. Creation of an experience database for improved project management (a possible record shape is sketched after Table 4)

. Motivation and trigger for actual improvements

. Success control during reengineering and improvement projects

Table 4  Timetable for setting up technical controlling

Activity                                                              Elapsed time   Duration
Initial targets set up                                                0              2 weeks
Creation and kick-off of metric team                                  2 weeks        1 day
Goal determination for projects and processes                         3 weeks        2 weeks
Identifying impact factors                                            4 weeks        2 weeks
Selection of initial suite of metrics                                 5 weeks        1 week
Report definition                                                     6 weeks        1 week
Kick-off with management                                              6 weeks        2 hours
Initial tool selection and tuning                                     6 weeks        3 weeks
Selection of projects/metric plan                                     6 weeks        1 week
Kick-off with project teams/managers                                  7 weeks        2 hours
Collection of metric baselines                                        7 weeks        2 weeks
Metric reports, tool application                                      8 weeks        continuously
Review and tuning of reports                                          10 weeks       1 week
Monthly metric-based status reports within projects                   12 weeks       continuously
Application of metrics for project tracking and process improvement   16 weeks       continuously
Control and feedback on metric program                                24 weeks       quarterly
Enhancements of metric program                                        1 year         continuously
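One of the targets above, the experience database, can be pictured as little more than a uniform record per project or release that later projects query when setting norms. The sketch below shows one possible shape; the field names and figures are illustrative assumptions, not the actual schema or data used at Alcatel.

    # Illustrative sketch of an experience-database record: uniform project
    # baselines that later projects can query for estimation and planning.
    from dataclasses import dataclass

    @dataclass
    class ProjectBaseline:
        project: str
        size_kloc: float        # delivered size in KLOC
        effort_py: float        # total effort in person-years
        duration_weeks: int     # elapsed development time
        faults_per_kloc: float  # fault density before release

    HISTORY = [
        ProjectBaseline("release_a", 250.0, 120.0, 80, 2.1),
        ProjectBaseline("release_b", 310.0, 150.0, 88, 1.8),
    ]

    # Example query: mean productivity (KLOC per person-year) as a
    # starting norm for planning the next release.
    productivity = sum(p.size_kloc / p.effort_py for p in HISTORY) / len(HISTORY)
    print("mean productivity: {:.2f} KLOC/PY".format(productivity))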

Metrics have been introduced as the vehicle for making technical controlling run. Any data presented here on effort or value of SPI is only approximate and closely linked to the environment it was extracted from. It is obvious by now that the more advanced the corporate process maturity, the more varied and specific the metrics that are used. While maintaining technical controlling it must be emphasized that controlling needs to evolve over time. The critical precondition of any SPI program is the focus on metrics and their effective exploitation.

So far, many of the industrial discussions and articles related to reengineering of software processes are based on facts, while research targets theories and small-scale examples. Both are valid from their respective viewpoints. It would however be helpful to bridge the gap with corporate studies answering two important questions:

. What does it cost to improve software processes?

. How long will it take to make tangible improvements?

Answering such questions of course requires focus on areas such as quality improvement, better productivity, shorter lead time, or higher customer satisfaction.

Future directions for technical controlling include:

. Relate business-oriented analysis approaches to software decision-making. Currently there still seem to be two worlds in many organizations: business performance monitoring and evaluation with scorecard-type systems on the one hand, and technical controlling and project-level decision making on the other. Both must be combined to align operational decision making (i.e. project management) with corporate strategy and business goals and their tracking.

. Integration of supplier metrics with the respective programs of operators. Technical controlling must be used to improve present operations and provide a distinct degree of mutual trust across the entire development process. Metrics have to be integrated into the process to ensure not only that problems are detected and corrected early, but also that the problem causes are eliminated.

References

1. Rook, P., Controlling software projects. Software Engineering Journal, 1986, 1(1), 7–16.
2. Pfleeger, S. L. et al., Status report on software measurement. IEEE Software, 1997, 14(2), 33–43.
3. Jones, T. C., Return on investment in software measurement. Proc. 6th Int. Conf. Applications of Software Measurement, Orlando, FL, USA, 1 Nov. 1995.
4. Fenton, N. E. and Pfleeger, S. L., Software Metrics: A Practical and Rigorous Approach. Chapman and Hall, London, UK, 1997.
5. Stark, G., Durst, R. C. and Vowell, C. W., Using metrics in management decision making. IEEE Computer, 1994, 27(9), 42–48.
6. IEEE Standard for Developing Software Life Cycle Processes. IEEE Std 1074-1991, IEEE, Piscataway, USA, 1992.
7. Humphrey, W., Managing the Software Process. Addison-Wesley, Reading, USA, 1989.
8. Jones, C., Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill, New York, USA, 1991.
9. Software Measurement Cookbook. Int. Thomson Computer Press, London, 1995.
10. McGibbon, T., A Business Case for Software Process Improvement. DACS State-of-the-Art Report, Rome Laboratory, http://www.dacs.com/techs/roi.soar/soar.html#research, 1996.
11. Kaplan, R. and Norton, D., The balanced scorecard: measures that drive performance. Harvard Business Review, Jan. 1992.
12. Grady, R. B., Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, Englewood Cliffs, 1992.

Christof Ebert received a Master's degree in Electrical Engineering from the University of Stuttgart, Germany in 1990. As a Fulbright Fellow he was involved in software engineering research at Kansas State University. In 1994 he received a PhD with honors from the University of Stuttgart in Electrical Engineering on a software engineering related topic. Since 1994 Dr Ebert has been with Alcatel Telecom. He is currently Software Engineering Process Group Leader of Alcatel Telecom's Switching Systems Division in Antwerp, Belgium, and coordinates the software metrics program of Alcatel Telecom's Switching Systems Division. He has published over sixty refereed papers in the areas of software metrics, software quality assurance, real-time software development, and CASE support for such activities. His current research topics include software metrics, software process analysis and improvement, and requirements engineering. Dr Ebert serves on the editorial board of IEEE Software and on program committees of software conferences. He co-authored "Software Metrics in Industry", published by Springer, Germany in 1996. He is a member of the IEEE, GI, VDI, and the Alpha Lambda Delta honor society.
