

Software process simulation modeling: Why? What? How?

Marc I. Kellner a,*, Raymond J. Madachy b, David M. Raffo c

a Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA
b USC Center for Software Engineering, University of Southern California, Los Angeles, CA 90089-0781, USA

c School of Business Administration, Portland State University, Portland, OR 97207-0751, USA

Received 10 November 1998; accepted 11 November 1998

Abstract

Software process simulation modeling is increasingly being used to address a variety of issues from the strategic management of software development, to supporting process improvements, to software project management training. The scope of software process simulation applications ranges from narrow focused portions of the life cycle to longer term product evolutionary models with broad organizational impacts. This article provides an overview of work being conducted in this field. It identifies the questions and issues that simulation can be used to address ('why'), the scope and variables that can be usefully simulated ('what'), and the modeling approaches and techniques that can be most productively employed ('how'). It includes a summary of the papers in this special issue of the Journal of Systems and Software, which were presented at the First International Silver Falls Workshop on Software Process Simulation Modeling (ProSim'98). It also provides a framework that helps characterize work in this field, and applies this new characterization scheme to many of the articles in this special issue. This paper concludes by offering some guidance in selecting a simulation modeling approach for practical application, and recommending some issues warranting additional research. © 1999 Published by Elsevier Science Inc. All rights reserved.

1. Introduction

Over the past few decades the software industry has been assailed by numerous accounts of schedule and cost overruns as well as poor product quality delivered by software development organizations in both the commercial and government sectors. At the same time, increasing customer demands for 'better, faster, cheaper' and the increasing complexity of software have significantly 'raised the bar' for software developers to improve performance.

The industry has received help in the form of a plethora of CASE tools, new computer languages, and more advanced and sophisticated machines. A key question is 'How can the tools, technologies and people work together in order to achieve these increasingly challenging goals?' Potential answers to this question imply changes to the software development process or software organization. Possible changes will require a significant amount of resources to implement and have significant implications on the firm – good or bad. How can organizations gain insights into potential solutions to these problems and their likely impacts on the organization? One area of research that has attempted to address these questions and has had some success in predicting the impact of some of these proposed solutions is software process simulation modeling.

Software process simulation modeling is gaining increasing interest among academic researchers and practitioners alike as an approach for analyzing complex business and policy questions. Although simulation modeling has been applied in a variety of disciplines for a number of years, it has only recently been applied to the area of software development and evolution processes.

Currently, software process simulation modeling is beginning to be used to address a variety of issues from the strategic management of software development, to supporting process improvements, to software project management training. The scope of software process simulation applications ranges from narrow focused portions of the life cycle to longer term product evolutionary models with broad organizational impacts. This article provides an overview of work in the field of software process simulation modeling. Moreover, it identifies the questions and issues that simulation can be used to address (i.e., why simulate), the scope and

The Journal of Systems and Software 46 (1999) 91–105

* Corresponding author. E-mail: [email protected]

0164-1212/99/$ – see front matter © 1999 Published by Elsevier Science Inc. All rights reserved.

PII: S0164-1212(99)00003-5

variables that can be usefully simulated (i.e., what to simulate), and the modeling approaches and techniques that can be most productively employed (i.e., how to simulate). This article focuses on the 'big picture' and offers the reader a broad perspective. In addition, it introduces this special issue of the Journal of Systems and Software containing papers from ProSim'98 and it provides a framework to help characterize other work in this field. Table 1 lists the papers included in this issue.

2. Background

This section briefly discusses the key words from the title: 'process', 'model', and 'simulation', and what they mean when they are put together. An organizational (e.g., business) process is a logical structure of people, technology and practices that are organized into work activities designed to transform information, materials and energy into specified end result(s) (adapted from (Pall, 1987)). Examples of organizational/business processes include software development and evolution, systems engineering, engineering design, travel expense reimbursement, work selection and funding, and project management, to name but a few. This article focuses on software processes (i.e., those used for software development and evolution), although much of what is said applies equally well to many other business processes – particularly those involving design and engineering. A software process has been specifically defined as "a set of activities, methods, practices and transformations that people use to develop and maintain software and the associated products (e.g., project plans, design documents, code, test cases and user manuals)" (Paulk et al., 1993).

A model is an abstraction (i.e., a simplified representation) of a real or conceptual complex system. A model is designed to display significant features and characteristics of the system which one wishes to study, predict, modify or control. Thus a model includes some, but not all, aspects of the system being modeled. A model is valuable to the extent that it provides useful insights, predictions and answers to the questions it is used to address.

A simulation model is a computerized model which possesses the characteristics described above and that represents some dynamic system or phenomenon. One of the main motivations for developing a simulation model or using any other modeling method is that it is an inexpensive way to gain important insights when the costs, risks or logistics of manipulating the real system of interest are prohibitive. Simulations are generally employed when the complexity of the system being modeled is beyond what static models or other techniques can usefully represent. Complexity is often encountered in real systems and can take any of the following forms:
· System uncertainty and stochasticity – the risk or uncertainty of the system is an essential feature to consider. Bounds on this uncertainty and the implications of potential outcomes must be understood and evaluated. Analytical models have constraints on the number and kind of random variables that can be included in various types of models. Simulation provides a flexible and useful mechanism for capturing uncertainty related to complex systems without these restrictive constraints.
· Dynamic behavior – system behavior can change over time. For example, key variables such as productivity and defect detection rates can change over time. A dynamic model is necessary when accounting for or controlling these changes is important. However, analytic techniques such as dynamic programming can become intractable when the system being modeled becomes overly complex. Dynamic simulation models are very flexible and support modeling of a wide variety of system structures and dynamic interactions.
· Feedback mechanisms – behavior and decisions made at one point in the process impact others in complex or indirect ways and must be accounted for. For example, in a software development process the deci-

Table 1
List of papers in this special issue

Author(s): Title
Kellner, Madachy and Raffo: Software process simulation modeling: Why? What? How?
Christie: Simulation in support of CMM-based process improvement
Drappa and Ludewig: Quantitative modeling for the interactive simulation of software projects
Lehman and Ramil: The impact of feedback in the global software process
Pfahl and Lebsanft: Integration of system dynamics modelling with descriptive process modelling and goal-oriented measurement
Powell, Mander and Brown: Strategies for lifecycle concurrency and iteration: A system dynamics approach
Raffo, Vandeville and Martin: Software process simulation to achieve higher CMM levels
Rus, Collofello and Lakey: Software process simulation for reliability management
Scacchi: Experience with software process simulation and modeling
Wernick and Lehman: Software process white box modelling for FEAST/1
Williford and Chang: Modeling Fed Ex's IT division: A system dynamics approach to strategic IT planning


sion to hire or not to hire a new employee has multiple impacts and implications over the entire course of the project. The decision to inspect requirements or not also has many implications. For complex feedback or feedforward systems, analytic models may not be useful or even possible. Simulation enables a more usable alternative.
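The feedback and delay effects just described can be sketched in a few lines of code. The following toy model is not from the paper; every parameter (assimilation time, mentoring cost, rookie productivity) is an invented illustration of how hiring decisions ripple through a project over time.

```python
# Toy dynamic model (illustrative only): a project accumulates completed
# work over weekly time steps. Hiring adds headcount, but new staff
# assimilate slowly and draw mentoring time from veterans, so the
# short-term payoff of hiring is smaller than headcount alone suggests.

def simulate(weeks, initial_staff, hire_week=None, hires=0,
             productivity=1.0, assimilation_weeks=8, mentoring_cost=0.6):
    """Return total work completed after `weeks` weekly steps."""
    experienced = float(initial_staff)
    rookies = 0.0
    done = 0.0
    for week in range(weeks):
        if hire_week is not None and week == hire_week:
            rookies += hires
        # rookies gradually become experienced (a first-order delay)
        graduating = rookies / assimilation_weeks
        rookies -= graduating
        experienced += graduating
        # rookies work at half rate; each consumes mentoring_cost of a
        # veteran's time while assimilating
        effective = experienced + 0.5 * rookies - mentoring_cost * rookies
        done += max(effective, 0.0) * productivity
    return done

baseline = simulate(weeks=20, initial_staff=5)
with_hiring = simulate(weeks=20, initial_staff=5, hire_week=0, hires=3)
```

Comparing the two runs shows the feedback at work: hiring three people helps, but over this horizon it delivers well under three people's worth of additional output.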

Common purposes of simulation models are to provide a basis for experimentation, predict behavior, answer 'what if' questions, teach about the system being modeled, etc. Such models are usually quantitative, although this is not always the case.

A software process simulation model focuses on some particular software development/maintenance/evolution process. It can represent such a process as currently implemented (as-is) or as planned for future implementation (to-be). Since all models are abstractions, a model represents only some of the many aspects of a software process that potentially could be modeled – namely the ones believed by the model developer to be especially relevant to the issues and questions the model is used to address.

The papers in this special issue provide a good sampling of recent work in the area of software process simulation modeling. The bibliographies of these articles provide an extensive set of additional references to related work. Other significant work in this field includes (Abdel-Hamid and Madnick, 1991; Akhavi and Wilson, 1993; Burke, 1997; Gruhn, 1992; Gruhn, 1993; Kellner, 1991; Kusumoto, 1997; Madachy, 1994; Raffo, 1996; Raffo and Kellner, 1999; Tvedt, 1996) and others.

3. Why simulate?

There is a wide variety of reasons for undertaking simulations of software process models. In many cases, simulation is an aid to decision making. It also helps in risk reduction and helps management at the strategic, tactical and operational levels. We have clustered the many reasons for using simulations of software processes into six categories of purpose:
· strategic management;
· planning;
· control and operational management;
· process improvement and technology adoption;
· understanding; and
· training and learning.
When developing software process simulation models, identifying the purpose and the questions/issues management would like to address is central to defining the model scope and data that need to be collected. In the following paragraphs, examples of practical questions and issues that can be addressed with simulation are presented for each of these six purpose categories.

3.1. Strategic management

Simulation can help address a broad range of strategic management questions, such as the following. Should work be distributed across sites or should it be centralized at one location? Would it be better to perform work in-house or to out-source (subcontract) it? To what extent should commercial-off-the-shelf (COTS) components be utilized and integrated, as opposed to developing custom components and systems? Would it be more beneficial to employ a product-line approach to developing similar systems or would the more traditional, individual product development approach be better suited? What is the likely long-term impact of current or prospective policies and initiatives (e.g., hiring practices, training policies, software process improvement initiatives)? In each of these cases, simulation models would contain company organizational parameters and be developed to investigate specific questions. Managers would compare the results from simulation models of the alternative scenarios to assist in their decision making. One article focusing on strategic questions in this special issue is 'Modeling Fed Ex's IT division: A system dynamics approach to strategic IT planning' by Williford and Chang.

3.2. Planning

Simulation can support management planning in a number of straightforward ways, including
· forecast effort/cost, schedule and product quality;
· forecast staffing levels needed across time;
· cope with resource constraints and resource allocation;
· forecast service-level provided (e.g., for product support); and
· analyze risks.
All of these can be applied to both initial planning and subsequent re-planning. Papers in this special issue by Christie, Pfahl and Lebsanft, Powell et al., Rus et al. and Williford and Chang address various issues falling into this category.

In the preceding cases, the process to be used is taken as fixed. However, simulation can also be used to help select, customize and tailor the best process for a specific project context. These are process planning issues. Some representative questions in this category are
· Should process A or process B be used on our new project?
· Will process C enable us to reach our productivity goals for this new project?
· What portions of process D can be minimized or skipped to save cost and schedule, without seriously sacrificing quality?

In these cases, simulations would be run of the process(or alternatives) under consideration and their results


evaluated along the dimensions of interest (often cost, cycle-time and quality) in order to answer the questions posed.
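A process-comparison run of this kind can be sketched very simply. The sketch below is a hedged illustration, not the paper's method: each candidate process is encoded as a sequence of phases with invented effort and defect-removal figures, and the two are compared on the usual dimensions of effort and escaped defects.

```python
# Illustrative process comparison (all numbers assumed): each process is a
# list of (phase, effort person-weeks per KLOC, fraction of current defects
# removed). Running a process yields the result variables of interest.

def run_process(phases, size_kloc, injected_per_kloc=20.0):
    """Simulate one pass through the phases; return effort and escapes."""
    effort = 0.0
    defects = injected_per_kloc * size_kloc   # defects present at the start
    for name, effort_rate, removal in phases:
        effort += effort_rate * size_kloc     # phase effort scales with size
        defects *= (1.0 - removal)            # phase removes some defects
    return {"effort": effort, "escaped_defects": defects}

process_a = [("design", 2.0, 0.0), ("code", 3.0, 0.0), ("test", 2.5, 0.70)]
# Process B adds a design inspection: more up-front effort, but defects are
# removed earlier, so less test effort is assumed and fewer defects escape.
process_b = [("design", 2.0, 0.0), ("inspect", 0.8, 0.55),
             ("code", 3.0, 0.0), ("test", 2.0, 0.70)]

a = run_process(process_a, size_kloc=50)
b = run_process(process_b, size_kloc=50)
```

With these invented figures, process B costs slightly more effort but lets far fewer defects escape; a manager would weigh that trade-off along the dimensions that matter for the project at hand.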

3.3. Control and operational management

Simulation can also provide effective support for managerial control and operational management. Simulation can facilitate project tracking and oversight because key project parameters (e.g., actual status and progress on the work products, resource consumption to-date and so forth) can be monitored and compared against planned values computed by the simulation. This helps project managers determine when possible corrective action may be needed. By using Monte Carlo simulation techniques (see Section 5.2), probability distributions can be developed for important planned variables (e.g., milestone dates for key events, such as design approval, start of integration testing, etc.; effort consumption through to key events or planned dates; defect levels detected at key points). Statistical techniques such as control charts can then be used to determine whether corrective action is needed.
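The Monte Carlo idea described above can be sketched in a few lines. All distributions and parameter values below are invented for illustration; the point is the mechanism: sample uncertain phase durations many times, obtain a distribution for a milestone date, and derive limits a manager could track actuals against.

```python
# Hedged Monte Carlo sketch: distribution of the week in which integration
# testing can start, given triangular uncertainty on earlier phases.
import random
import statistics

random.seed(7)  # fixed seed so the run is reproducible

def sample_milestone():
    """Weeks until 'start of integration testing' = design + code,
    each with triangular(low, high, mode) uncertainty (assumed values)."""
    design = random.triangular(4, 10, 6)
    code = random.triangular(8, 18, 12)
    return design + code

samples = sorted(sample_milestone() for _ in range(10_000))
mean = statistics.mean(samples)
sd = statistics.stdev(samples)
p90 = samples[int(0.9 * len(samples))]       # 90th-percentile completion week
lower, upper = mean - 2 * sd, mean + 2 * sd  # crude control limits
```

An actual milestone date falling outside `lower`/`upper`, or past `p90`, would be the kind of signal that prompts a manager to investigate corrective action.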

Project managers can also use simulation to support operational decisions, such as whether to commence major activities (e.g., coding, integration testing). To do this, managers would evaluate current project status using timely project data and employ simulation to predict the possible outcome if a proposed action (e.g., commence integration testing) was taken then or delayed. Simulation can also be employed in operational management decisions to modify planned work priorities, detailed allocation of staff, scheduling, hiring needs, etc., based on current circumstances and projected outcomes. For examples of papers in this domain, see Pfahl and Lebsanft, and Powell et al. in this issue.

It should be noted that models used for operational monitoring and control require up-to-date detailed project data to be meaningful (see Sections 4.4 and 4.2 for common input and result parameters, Section 5.3 for issues related to metrics and data collection for simulation models; also see (Raffo et al., 1998) for further information on developing operational planning and control models vs. planning models).

3.4. Process improvement and technology adoption

Simulation can support process improvement and technology adoption in a variety of ways. In process improvement settings, organizations are often faced with many suggested improvements. Simulation can aid specific process improvement decisions (such as go/no-go on any specific proposal, or prioritization of multiple proposals) by forecasting the impact of a potential process change before putting it into actual practice in the organization. Thus, simulation can assist in the engineering of standard, recommended processes. These applications use simulation a priori to compare process alternatives, by comparing the projected outcomes of importance to decision makers (often cost, cycle-time and quality) resulting from simulation models of alternative processes. These same simulation techniques were discussed in Section 3.2 in the context of process planning, to help select, customize and tailor the best process for a specific context. Although the techniques are much the same, the purpose here is process improvement, in contrast to planning. Simulation can also be used ex post to evaluate the results of process changes or selections already implemented. Here, the actual results observed would be compared against simulations of the processes not selected, after updating those simulations to reflect actual project characteristics seen (e.g., size, resource constraints). The actual results would also be used to calibrate the model of the process that was used, in order to improve future applications of that model. A number of articles in this special issue fit into this area including Christie, Pfahl and Lebsanft, Powell et al., Raffo et al. and Scacchi.

Just as organizations face many process improvement questions and decisions, the same is true for technology adoption. The analysis of inserting new technologies into a software development process (or business process) would follow the same approach as for process change and employ the same basic model. This is largely because adoption of a new technology is generally expected to affect things that are usually reflected as input parameters to a simulation (e.g., defect injection rate, coding productivity rate) and/or to change the associated process in other more fundamental ways.

3.5. Understanding

Simulation can promote enhanced understanding of many process issues. For example, simulation models can help people – such as project managers, software developers and quality assurance personnel – better understand process flow, i.e., sequencing, parallelism, flows of work products, etc. Animated simulations, Gantt charts and the like are useful in presenting simulation results to help people visualize these process flow issues. Simulations can also help people to understand the effects of the complex feedback loops and delays inherent in software processes; even experienced software professionals have difficulty projecting these effects by themselves due to the complicated interactions over time. (For example, how many project managers resist inspections or thorough testing because of their seeming effect of extending schedule, failing to appreciate that later savings will usually more than compensate for the short-term loss?) In addition, simulation models can help researchers to identify and understand consistent, pervasive properties of software development and


maintenance processes (a.k.a. software laws). Moreover, simulations (especially Monte Carlo techniques) can help people understand the inherent uncertainty in forecasting software process outcomes and the likely variability in actual results seen. Finally, simulations help facilitate communication, common understanding and consensus building within a team or larger organization. All simulation models help with process or organizational understanding to some degree. In particular the papers of Drappa and Ludewig, Lehman and Ramil, Powell et al., Raffo et al., Scacchi and Wernick and Lehman fall into this category.

3.6. Training and learning

Simulation can help with training and learning about software processes in several ways. Although this purpose cluster is closely related to that of 'understanding', the particular setting envisioned here is an explicitly instructional one. Simulations provide a way for personnel to practice/learn project management; this is analogous to pilots practicing on flight simulators. A simulated environment can help management trainees learn the likely impacts of common decisions (often mistakes), e.g., rushing into coding, skipping inspections or reducing testing time. Finally, training through participation in simulations can help people to accept the unreliability of their initial expectations about the results of given actions; most people do not possess good skills or inherent abilities to predict the behavior of systems with complex feedback loops and/or uncertainties (as are present in software processes). Overall, active participation in a good mix of simulations can provide learning opportunities that could otherwise only be gained through years of real-world experience. The articles by Christie and by Drappa and Ludewig in this special issue discuss these applications further.

4. What to simulate?

Having reviewed the major reasons for undertaking simulations of software processes, we now turn to a discussion of what to simulate. In general, the basic purpose of the simulation model (why simulate), coupled with the specific questions or issues to be addressed, will largely determine what to simulate (see Fig. 1). As can be seen in Fig. 1, many aspects of what to simulate are inter-related and driven by the model purpose and key questions described in Section 3. The following paragraphs discuss aspects of what to simulate in terms of (1) model scope, (2) result variables, (3) process abstraction and (4) input parameters, respectively.

4.1. Model scope

Determining model scope is an important issue and an iterative process. The scope of the model needs to be large enough to fully address the key questions posed. This requires understanding the implications of the questions being addressed. For example, a company may be considering a process change that is localized to the code and unit test steps. They would like to develop

Fig. 1. Relationships among 'why' and 'what' aspects.


a model to predict the impact of this change on overall project performance. Although the change being made to the process is limited to the code and unit test steps, the change will likely have implications for the quality and effort performance of later phases of testing. Therefore, the scope of the model needs to reflect these expanded impacts of the process change. Moreover, in order to address the key questions related to 'overall project performance', result variables will need to be identified to predict specific performance dimensions at the required levels. Consideration of these result variables may in turn lead to reconsideration of the model scope needed and so on in an iterative fashion.

The scope of a software process simulation is generally one of the following:
· a portion of the life cycle (e.g., design phase, code inspection, some or all of testing, requirements management);
· a development project (i.e., single product development life cycle);
· multiple, concurrent projects (e.g., across a department or division);
· long-term product evolution (i.e., multiple, successive releases of a single product); and
· long-term organization (e.g., strategic organizational considerations spanning successive releases of multiple products over a substantial time period).

Underlying these five scope categories are two dimensions: time span and organizational breadth. We can think of time span with some approximate guidelines as:
· short (less than 12 months);
· medium (between 12–24 months; about the duration of one development project or maintenance cycle); and
· long (more than 24 months).
We think of organizational breadth as pertaining to:
· less than one product/project team;
· one product/project team; and
· multiple product/project teams.
Table 2 shows how the five scope categories can be mapped onto the dimensions of time span and organizational breadth. Two cells are unnamed because people do not generally develop simulations with those scopes. One scope category (portion of life cycle) fits three cells in the table. For example: the design phase is (short, =1) because it has a short time span (less than 1 yr) but typically engages the entire project team. The inspection process is (short, <1) because it has a short time span and engages only a portion of the project team. The requirements management process is (medium, <1) because it goes on through most of the project but only involves a portion of the team.

Some of the purpose categories have significant implications for the selection of scope. For example, a purpose of 'strategic management' likely implies the scope 'long-term organization'; certainly this would be the case for the product-line approach question posed above (Section 3.1). Nevertheless, some other 'strategic management' issues and questions may be asked in relation to as small a scope as a 'development project'; an example is provided by the out-sourcing question posed above (Section 3.1). In essence, the question and its context must be well understood by the model developer in order to determine the appropriate scope for the simulation.

4.2. Result variables

The result variables are the information elements needed to answer the key questions that were specified along with the purpose of the model. Depending on the key questions being asked, a great many different variables could be devised as the results of a process simulation. However, the most typical result variables for software process simulation models include the following:
· effort/cost;
· cycle-time (a.k.a. duration, schedule, time to market, interval);
· defect level;
· staffing requirements over time;
· staff utilization rate;
· cost/benefit, return on investment (ROI) or other economic measures (usually of interest for changes);
· throughput/productivity; and
· queue lengths (backlogs).

Once again, the questions and issues being addressed largely determine the choice of result variables. For example, technology adoption questions often suggest an economic measure such as ROI; operational man-

Table 2
Model scope by organizational breadth and time span

Organizational breadth               Time span:
                                     Short                  Medium                         Long
Less than one product/project team   Portion of life cycle  Portion of life cycle          -
One product/project team             Portion of life cycle  Development project            Long-term product evolution
Multiple product/project teams       -                      Multiple, concurrent projects  Long-term organization

96 M.I. Kellner et al. / The Journal of Systems and Software 46 (1999) 91±105

agement issues often focus on the traditional projectmanagement concerns of e�ort/cost, cycle-time and de-fect levels. It should also be noted that it is quite com-mon to produce multiple result variables from a singlesimulation model.

When defining the result variables for a model, it is important to specify their scope. For example, are managers interested in predictions of overall end-of-project effort? If so, the scope of the model may need to include the full process life cycle. At a minimum, the model needs to capture all process steps after the point that changes are made. This may cause the scope of the model to be modified if it has already been defined. In most cases, as the purpose and key questions of the model are refined, model scope and result variables get refined in an iterative process. In addition, some result variables are best looked at as continuous functions (e.g., staffing requirements), while others only make sense at the end of the process being simulated (e.g., ROI).

Definition of the result variables has implications for the level of abstraction of the model and model input parameters as well. For example, in addition to predicting end-of-project effort, suppose management was also interested in effort predictions for each of the major life-cycle steps. Perhaps management is interested in detailed effort estimates for process steps within the major life-cycle steps. Requesting these result variables has significant implications for the level of abstraction of the model as well as the kinds of data and input parameters that need to be developed.

4.3. Process abstraction

When planning and developing a simulation model, the model builder needs to identify the key elements of the process, their inter-relationships and behavior, for inclusion in the model. The focus should be on those aspects of the process that are especially relevant to the purpose of the model and believed to affect the result variables. For example, it is important to identify:
· key activities and tasks;
· primary objects (e.g., code units, designs, problem reports);
· vital resources (e.g., staff, hardware);
· activity dependencies, flows of objects among activities and sequencing;
· iteration loops, feedback loops and decision points; and
· other structural interdependencies (e.g., the effort to revise a code unit after an inspection, as a function of the number of defects found during the inspection).

These should then be represented in the simulation model when it is developed.

4.4. Input parameters

The input parameters (key factors, drivers, independent variables) to include in the model largely depend upon the result variables desired and the process abstractions identified. Typically, many parameters are needed to drive software process simulation models; some require a few hundred parameters (e.g., Abdel-Hamid and Madnick, 1991). Examples of several typical input parameters are provided below:
· amount of incoming work (often measured in terms of software size (LOC) or function points, etc.);
· effort for design as a function of size;
· defect detection efficiency during testing or inspections;
· effort for code rework as a function of size and number of defects identified for correction;
· defect removal and injection rates during code rework;
· decision point outcomes; number of rework cycles;
· hiring rate; staff turnover rate;
· personnel capability and motivation, over time;
· amount and effect of training provided;
· resource constraints; and
· frequency of product version releases.

Some of these are usually treated as constant over the course of a simulation (e.g., the functions for effort), while others are often treated as varying over time (e.g., personnel capability and motivation). It is also noteworthy that inter-dependencies among some of these parameters may be represented in the model, rather than treating them as independent variables. For example, personnel capability may be influenced by staff turnover and hiring rates.
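As an illustration only (the parameter values and the capability curve below are assumptions for the sketch, not taken from the article), the distinction between constant and time-varying input parameters, and one inter-dependency between them, might be expressed as:

```python
# Hypothetical parameter set for a software process simulation.
DESIGN_EFFORT_PER_KLOC = 12.0   # person-days per KLOC; held constant during a run
TURNOVER_RATE = 0.02            # fraction of staff leaving per month; constant here

def personnel_capability(month, base=1.0):
    """Time-varying input, inter-dependent with turnover: capability erodes as
    turnover replaces experienced staff with new hires (assumed functional form)."""
    experienced_fraction = (1.0 - TURNOVER_RATE) ** month
    return base * (0.7 + 0.3 * experienced_fraction)

def design_effort(size_kloc, month):
    """Effort for design as a function of size, adjusted for current capability."""
    return DESIGN_EFFORT_PER_KLOC * size_kloc / personnel_capability(month)
```

Here the effort function stays fixed across the run, while capability drifts with simulated time; the inter-dependency on turnover is modeled explicitly rather than left as an independent input.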

5. How to simulate?

After determining why and what to simulate, many technical options, considerations and issues remain for the model developer. These are outlined below under the headings of (1) simulation approaches and languages, (2) simulation techniques and (3) data/measurement issues.

5.1. Simulation approaches/languages

Various modeling approaches have been used to investigate different aspects of the software process. Despite best intentions, the implementation approach often influences what is being modeled. Therefore, the most appropriate approach to use will be the one best suited to the particular case at hand, i.e., the purpose, questions, scope, result variables desired, etc.

A variety of simulation approaches and languages have been applied to software processes, including


· state-based process models;
· general discrete event simulation;
· system dynamics (or continuous simulation);
· rule-based languages;
· Petri-net models;
· queuing models;
· project management (e.g., CPM and PERT); and
· scheduling approaches (from management science, manufacturing, etc.).
Each of the first four categories is represented and discussed further in other articles in this special journal issue.

5.2. Simulation techniques

The purpose of this section is to highlight some of the important capabilities and techniques that should be considered when selecting a particular simulation tool. This set of considerations is based upon the authors' collective experience in employing a variety of tools in the course of developing software process simulations in industrial settings.

The model may be depicted in a form that is primarily visual or textual. Visual models (i.e., graphical, diagrammatic or iconic) have become the norm for software process simulations because they promote understandability and ease of development. These tools often include an ability to animate the model during simulation to show the flows of objects (e.g., code units, designs, problem reports) through the process, the activities currently being performed and so forth. This can be very helpful as a further aid to understanding and as a tool to use during validation of the model with process experts.

Even when a model is primarily visual, it almost invariably entails supplemental textual information specifying interrelationships among components, equations, distributions for random variables, etc. As a result, suitable repository capabilities that can store, present and report this information easily are highly desirable. Another useful capability offered by many simulation tools is the simultaneous reporting of results (usually plots of result variables over time) while the model is executing.

Many tools support interactive simulation, where the user specifies or modifies some of the input parameters during model execution rather than only beforehand, and/or can step the model through its execution. Some tools also allow batches of simulations to be executed automatically from a single set-up, and results accumulated across the individual runs in the batch.

A simulation can be deterministic, stochastic or mixed. In the deterministic case, input parameters are specified as single values (e.g., coding for this unit will require 5 work-days of effort or 4 h per hundred lines of code; there will be two rework cycles on code and unit test; etc.). Stochastic modeling recognizes the inherent uncertainty in many parameters and relationships. Rather than using (deterministic) point estimates, stochastic variables are random numbers drawn from a specified probability distribution. Mixed modeling employs both deterministic and stochastic parameters.

In a purely deterministic model, only one simulation run is needed for a given set of parameters. However, with stochastic or mixed modeling the result variables differ from one run to another because the random numbers actually drawn differ from run to run. In this case the result variables are best analyzed statistically (e.g., mean, standard deviation, distribution form) across a batch of simulation runs; this is termed Monte Carlo simulation. Although many process simulation tools support stochastic models, only some of them conveniently support Monte Carlo simulation by handling batches well.
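The stochastic/Monte Carlo idea can be sketched as follows; the distributions, parameter values and model structure are purely illustrative assumptions, not taken from any model discussed in this article:

```python
import random
import statistics

def one_run(rng):
    """One stochastic run: coding effort plus a random number of rework cycles,
    with all values drawn from assumed (illustrative) distributions."""
    effort = rng.gauss(5.0, 1.0)            # work-days to code the unit
    for _ in range(rng.randint(1, 3)):      # 1 to 3 rework cycles
        effort += rng.uniform(0.5, 2.0)     # work-days per rework cycle
    return effort

def monte_carlo(n_runs=1000, seed=42):
    """Monte Carlo: execute a batch of runs and analyze the result
    variable statistically across the batch."""
    rng = random.Random(seed)
    results = [one_run(rng) for _ in range(n_runs)]
    return statistics.mean(results), statistics.stdev(results)

mean_effort, sd_effort = monte_carlo()
```

A single deterministic run would instead fix each value (5 work-days, two rework cycles of 1.25 work-days each) and need no batch at all.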

Finally, sensitivity analysis is a very useful technique involving simulation models. Sensitivity analyses explore the effects on the key result variables of varying selected parameters over a plausible range of values. This allows the modeler to determine the likely range of results due to uncertainties in key parameters. It also allows the modeler to identify which parameters have the most significant effects on results, suggesting that those be measured and/or controlled more carefully. As a simple example, if a 10% increase in parameter A leads to a 30% change in a key result variable, while a 10% increase in parameter B leads to only a 5% change in that result variable, one should be somewhat more careful in specifying or controlling parameter A. Sensitivity analysis is applicable to all types of simulation modeling. However, it is usually performed by varying the parameters manually, since there are few simulation tools that automate sensitivity analyses.
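This manual perturbation procedure is easy to script around any model. A minimal sketch, using a toy effort model (the model form and all values are hypothetical, introduced only for this example):

```python
def project_effort(defect_rate, rework_cost, base_effort=100.0):
    """Toy result-variable model (illustrative only): total effort =
    base effort + rework (defects x cost each) + detection effort per defect."""
    return base_effort + defect_rate * rework_cost + 4.0 * defect_rate

def sensitivity(param_name, baseline, delta=0.10):
    """Perturb one input parameter by +10% and report the % change in the result."""
    result_0 = project_effort(**baseline)
    perturbed = dict(baseline)
    perturbed[param_name] *= (1.0 + delta)
    result_1 = project_effort(**perturbed)
    return 100.0 * (result_1 - result_0) / result_0

baseline = {"defect_rate": 20.0, "rework_cost": 2.5}
for name in baseline:
    print(name, round(sensitivity(name, baseline), 2))
# → defect_rate 5.65
# → rework_cost 2.17
```

Here the defect rate moves the result more than the rework cost per defect, so (by the reasoning above) it is the parameter to measure and control most carefully.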

5.3. Data/measurement issues

Simulation models should undergo calibration and validation to the extent possible. Validation can be performed, in part, by inspections and walkthroughs of the model (providing what is often referred to as face validity). In addition, actual data should be used to validate the model empirically and to calibrate it against real-world results. Considerations of the questions to be addressed, desired result variables, input parameters, validation and calibration often suggest metric data (measurements) that would be valuable to collect, and show how they are useful. Unfortunately, a lack of relevant, desired data in practical settings is all too common.

Accurate simulation results depend on accurate values of the parameters. Similarly, calibration also depends on accurate measurement of the results seen in practice. In many 'real world' settings these variables have not been measured, or at least not carefully measured. Strategies that can be useful in coping with this situation include the following (Kellner and Raffo, 1997):
· Adjust existing values to approximate desired variables, e.g., if cost is desired but only effort is available, one can generally approximate cost using an average hourly rate.
· Construct values from other detailed records, e.g., data on when defects were injected (by phase) may not be available as part of summary reports from inspections, but might be found on the original, detailed, inspection forms.
· Obtain estimates from personnel involved, based on their past experience or hypothetical expectations when necessary.
· Utilize typical values taken from the literature, e.g., defect detection efficiencies for source code inspections and unit tests, measured in other organizations, are widely available.

6. Characterizing software process simulations

Specific simulation models of software processes will differ widely along any of the dimensions discussed in Sections 3–5 (each of the 'why', 'what' and 'how' subcategories). It is our opinion that the most important dimensions for broadly characterizing a simulation model are:
· purpose ('why');
· scope ('what');
· key result variables ('what'); and
· simulation approach/language employed ('how').
Characterizing different modeling work reported in the literature is a first step toward understanding the major differences in the various work that has been conducted. To support such a characterization, we use the taxonomies presented in the preceding sections of this article along the four dimensions just proposed. To that end, we have developed a characterization grid, which is introduced below.

As shown in Fig. 2, we have used purpose and scope as the two dimensions of the basic grid. We include the key result variables as appropriate for each cell (a cell being a specific purpose and scope combination). We also note the simulation approach/language employed in each model.

By way of example, the authors have used this grid to characterize some of our own past work. Fig. 2 shows the work reported in (Kellner, 1991; Madachy, 1994; Raffo, 1996) (the latter two represent Ph.D. dissertations). As we have applied this scheme, we have striven to indicate only those cells and results actually reported.

However, the grid could also be used to characterize what a particular simulation approach/language is well suited for.

Kellner's (Kellner, 1991) article focused on showing how simulation of process models could be used to support management planning and control. The Statemate® 1-based process modeling approach used (Kellner, 1989; Curtis et al., 1992) also supports thorough understanding of the process, which is facilitated by the multi-perspective diagrammatic model representation, animation, interactive as well as batch simulation and (more recently) simultaneous reporting of results. The paper (Kellner, 1991) demonstrated deterministic as well as stochastic modeling, the former with single simulation runs and the latter with the Monte Carlo batch approach.

Madachy's (Madachy, 1994) dissertation used system dynamics to model the effect of performing formal inspections. In particular, the cost, schedule and quality impacts were compared for varying degrees of conducting inspections throughout the life cycle. It modeled the interrelated flows of tasks, errors and personnel throughout different development phases and detailed inspection activities, and was calibrated to industrial data.

Raffo's (Raffo, 1996) dissertation focused on showing how software process simulation modeling could be used to support process improvement, by forecasting the impact of potential process changes before they are actually implemented in the organization. This work contributed many substantial extensions and improvements to the techniques originally reported in Kellner (1991), as well as applying them to a different purpose, broader scope and with additional result variables – which can be clearly seen in Fig. 2.

Finally, Fig. 3 shows our placement of many of the articles included in this special issue on our characterization grid (once again, the articles are listed in full in Table 1). It is readily apparent that these articles cover a wide variety of applications, thereby demonstrating the value of software process simulation modeling in a broad range of problem settings. Although the grid is appropriate for characterization of specific models, it is not suited to description of more abstract articles discussing various applications. Accordingly, the articles in this issue by Christie, by Lehman and Ramil, and by Scacchi are not amenable to inclusion in this grid. Note that only the major purposes and scopes of the simulations are shown for each article, since most of them can also be expanded to other cells in terms of secondary results and/or future extensions.

1 Statemate is a registered trademark of i-Logix Inc., Andover, Massachusetts (http://www.ilogix.com).


7. Summary of special issue articles

This section briefly summarizes the articles in this issue that were presented at ProSim'98. The paper summaries appear alphabetically by author name. As illustrated in Fig. 3, these papers address a variety of purposes (why), scopes and result variables (what), and modeling approaches (how). Fig. 3 expounds on the summaries by showing the various dimensions upon which the simulation work can be classified as well as the primary result variables.

In 'Simulation in support of CMM-based process improvement' Alan Christie from the Software Engineering Institute (SEI) reviews how software process simulation can be applied at all levels of the CMM®. 2

At the lowest level, simulation can help improve awareness of the influence of dynamic process behavior on process performance. As process maturity improves, simulation is increasingly tied to operational metrics, to validate one's understanding of process behavior and improve the simulations' predictive power. This predictive power allows new processes to be tested off-line and introduced with much greater confidence.

Drappa and Ludewig's article, 'Quantitative modeling for the interactive simulation of software projects' presents current research on the SESAM project (software engineering simulation by animated models). The goal of the project is to address the problems facing many software development projects, such as schedule and cost overruns as well as product quality issues, through improved training. Students using the simulator can control the simulated project interactively, making decisions about various process and product attributes and ultimately making the project more or less successful. The model is expressed in a newly defined, rule-based modeling language. The paper discusses the simulation system, the model-description language and a new quality assurance model.

2 CMM is a registered (in the U.S. Patent and Trademark Office) mark of Carnegie Mellon University.

Fig. 2. Characterization of authors' past work.

In their paper, 'The impact of feedback in the global software process' Lehman and Ramil address general and fundamental issues relating to modeling software processes. One key focus is identifying essential activities that often underlie all software processes. To that end, they discuss how the presence of feedback loops impacts or drives the evolution of most computer systems being used in the real world today. As a result, when modeling and simulating a software process to assess its performance and improve it, feedback loops, as well as mechanisms and controls that affect the process characteristics, significantly impact the process outcomes and should be included in the model. This includes feedback loops as well as mechanisms and controls that originate outside the process. Through the FEAST/1 project, a number of industrial software processes are being modeled and studied using several different modeling and simulation techniques. See the companion paper by Wernick and Lehman for a system dynamics approach.

Over the last ten years, system dynamics modeling has been applied in software organizations to help compare process alternatives and to support project planning. In the paper 'Integration of system dynamics modelling with descriptive process modelling and goal-oriented measurement' Pfahl and Lebsanft present methods for combining system dynamics modeling with the descriptive process modeling (DPM) and measurement-based quantitative modeling (QM) approaches. This new approach, called integrated measurement, modeling and simulation (IMMS), is based on lessons learned that were derived from an industrial system dynamics modeling activity. The approach helps to address the need for a well defined and repeatable procedure for generating and using information based on experience and empirical evidence.

Fig. 3. Characterization of seven articles in this special issue.

Improving time to market and development efficiency are topics of interest to all software development firms. In 'Strategies for lifecycle concurrency and iteration: A system dynamics approach' Powell, Mander and Brown present the use of a system dynamics simulation model to evaluate the use of advanced software life-cycle techniques for accelerating software development schedules. The paper describes work conducted by the Rolls-Royce York University Technology Centre to investigate strategies for optimizing the degree and manner in which concurrent software engineering and staged-delivery life-cycle techniques should be applied on a collection of partially interdependent software development projects. This is done with the goal of meeting competitive demands on both product time-to-market and business performance.

In 'Software process simulation to achieve higher CMM levels' Raffo, Vandeville and Martin present research utilizing state-based stochastic simulation models of a large-scale, real-world software development process. The goal of the work was to provide a quantitative analysis of proposed process change alternatives in terms of key performance measures of cost, quality and schedule. The model also supports a quantitative assessment of risk or uncertainty associated with each alternative. The approach is viewed as being key to successfully achieving higher level CMM practices related to Quantitative Process Management and Software Quality Management (CMM – Level 4) as well as Process and Technology Change Management (CMM – Level 5).

Rus, Collofello and Lakey's paper, 'Software process simulation for reliability management' discusses a variety of issues related to software reliability engineering and current efforts at Boeing's St. Louis site to model complex development processes using a variety of tools including expert systems, as well as system dynamics and discrete event simulation models. The paper describes a process simulator for evaluating how different software reliability strategies could be used. The process simulator is part of a decision support system that can assist project managers in planning or tailoring the software development process in a quality driven manner. The expert system is a rule-based fuzzy logic system, and the process simulator is developed using the system dynamics modeling paradigm.

In his paper, 'Experience with software process simulation and modeling' Walt Scacchi presents an overview of the research that has been conducted through the USC System Factory project and the USC ATRIUM Laboratory over the past 10 years. This research has investigated various kinds of modeling and simulation issues related to understanding the organizational processes involved in software system development, using a variety of techniques including both knowledge-based systems and discrete event simulation. This paper highlights some of the experiences and lessons learned through these research efforts.

In a companion paper to Lehman and Ramil (see above) titled 'Software process white box modelling for FEAST/1', Wernick and Lehman discuss the impact of feedback and feedback control on the evolution of software systems. System dynamics models were constructed to test the hypothesis that the dynamics of real-world software evolution processes lead to a degree of process autonomy. One software process being examined reflects the development of a defense software system. The process examined involves the preparation of successive releases, each subjected to a field trial whose time-scales and objectives are beyond the developers' control. The model has been successfully calibrated and has provided useful results in predicting software system evolutionary trends.

Federal Express' use of simulation to help address the challenges of Year 2000 conversion and other strategic initiatives faced by IT management is presented in 'Modeling Fed Ex's IT division: A system dynamics approach to strategic IT planning' by Williford and Chang. This paper describes how Fed Ex developed a macro-scale system dynamics model to predict staffing, training and infrastructure funding over a 5 yr period. The paper presents highlights of Fed Ex's models of development and support processes that balance hiring of permanent employees with contract workers. In addition, an infrastructure rollout model is presented that estimates server hardware purchases to support computing workloads as they are migrated from a mainframe environment.

8. Conclusion

8.1. Choosing a simulation approach

In some instances, die-hard proponents of a given simulation modeling approach have seemed to argue that theirs is the best approach to use and is entirely appropriate for every situation. It is probably true that a model developer who is very skillful with a particular approach and tool can succeed in modeling almost any process situation with that approach and tool – no matter how awkward and unnatural the representation may ultimately be. However, we are convinced that no single modeling approach or tool is the most natural and convenient one to use in all software process situations. In fact, this is one of the principles that was discussed at ProSim'98.

Therefore, the best advice we can offer practitioners in selecting a simulation modeling approach is to use one that is well-suited to the particular case at hand, i.e., the purpose, questions, scope, result variables desired, etc. In fact, an important open research direction is to compare alternative simulation modeling approaches for software processes, with the aim of understanding the range of situations where each is particularly suitable, natural and convenient to apply. Nevertheless, some more detailed guidance (based on current understanding 3) follows:
· Continuous-time simulations (e.g., system dynamics) tend to be convenient for strategic analyses, initial approximations, long term trends, high-level (global) perspectives, etc. – essentially analyses above the detailed process level.
· Discrete event and state-based simulations tend to be convenient for detailed process analyses and perspectives, resource utilization, queuing, relatively shorter-term analyses, etc.
For example, analysis of unit development costs of a modified inspection process can be modeled in full detail with a discrete event or state-based modeling tool, as in (Raffo, 1996). Alternatively, system dynamics can be used to model the aggregate effect of a modified inspection process, as in (Madachy, 1994) (and should be somehow calibrated to the low-level activities contained in a discrete model).

We now turn to a more specific comparison of the discrete-event and system dynamics simulation approaches. Discrete event models contain distinct (identifiable and potentially differing) entities that move through the process and can have attached attributes. Changes happen in discrete steps. This supports sophisticated, detailed analyses of the process and project performance. System dynamics, on the other hand, models the levels and flows of entities involved in the process, although those entities are not individually traced through the process. Changes happen in a continuous fashion. Both methods can handle stochastic effects. Table 3 compares the two methods and is derived from (Kocaoglu et al., 1998). As alluded to above, virtually all of the 'disadvantages' in Table 3 can be handled by knowledgeable model developers – sometimes elegantly but sometimes with ugly artificial kludges.

Table 3
Comparison of system dynamics and discrete event modeling techniques

                System dynamics                                   Discrete event
Advantages      Accurately captures the effects of feedback       CPU efficient because time advances at events
                Clear representation of the relationships         Attributes allow entities to vary
                between dynamic variables                         Queues and interdependence capture resource constraints
Disadvantages   Sequential activities are more difficult          Continuously changing variables not modeled accurately
                to represent                                      No mechanism for states
                No ability to represent entities or attributes

Ideally, a process simulation modeling approach would also support process representation, guidance and execution capabilities. Unfortunately, many simulation approaches are not very useful for these additional purposes. As an example of a partial solution, however, the Statemate-based modeling approach is well-suited to simulation, representation and guidance (Kellner, 1991; Kellner, 1989) and it has been shown that such a model can be 'compiled' into an executable form suitable for a process-centered environment (Heineman, 1993).

8.2. Hybrid simulation

Given the preceding arguments, it is apparent that there are cases with sufficient complexity and breadth that no single modeling approach will be well-suited to address all aspects of that situation. A hybrid simulation approach may then be in order 4. For instance, a process may involve important issues that are naturally discrete but others that are inherently continuous. As an example, individual developers' productivity can be nicely modeled as varying over time as a function of experience, morale level, motivation, etc. – i.e., as a continuous variable. For the same process and modeling situation, however, the start of major development projects can most naturally be viewed as discrete events. Thus, an important question that should be answered during modeling is 'What aspects of this particular software process are best represented as being continuous and what aspects are best represented as being discrete?' There are some tools that allow both continuous and discrete aspects within a single model, e.g., Extend® 5, but the integration is not as smooth as is desirable. Some work is currently being done to overcome these limitations and formulate solutions (Kocaoglu et al., 1998); however, more work is necessary.

3 The authors gratefully acknowledge discussions with Gregory A. Hansen (CAPI) that contributed to the development of this guidance.

4 The term 'hybrid simulation' was originally used for simulation that involved both analog computing (for continuous aspects) and digital computing (for discrete aspects). However, analog computing for simulation has been largely replaced by continuous modeling tools for digital computers (e.g., the variety of system dynamics modeling tools available for PCs). Therefore, we extend the meaning of hybrid simulation to cover models that include both continuous and discrete aspects (following Hansen, 1997), such as the two cases described in this section. In general, we would use the term hybrid simulation to apply to any combination of fundamentally different modeling approaches.

In other cases, one may wish to consider both low-level details about aspects of the process, as well as a broad scope such as 'long-term product evolution'. In such an instance, multi-stage modeling (micro then macro) seems promising. This might involve discrete event or state-based simulations at the detailed level, with their outputs feeding in as parameters to a higher level continuous model covering the desired scope. Just as in software design or estimation, a combined top-down and bottom-up approach is often superior to either single approach used alone.
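The multi-stage (micro then macro) idea can be sketched as follows; both toy models and every parameter value below are illustrative assumptions introduced only for this example. A discrete per-unit model produces a summary statistic, which then drives a continuous-style (Euler-integrated) backlog model at a broader scope:

```python
import random

def micro_inspection_model(n_units=500, seed=1):
    """Stage 1 (micro, discrete): walk individual code units through an
    inspection step and summarize the mean rework effort per unit."""
    rng = random.Random(seed)
    efforts = []
    for _ in range(n_units):
        defects = rng.randint(0, 6)           # defects found at inspection
        efforts.append(1.0 + 0.4 * defects)   # inspection + per-defect rework effort
    return sum(efforts) / len(efforts)

def macro_staffing_model(rework_per_unit, units_per_month=40.0, months=12):
    """Stage 2 (macro, continuous): integrate a rework backlog month by month,
    using the micro model's output as an input parameter."""
    backlog, capacity = 0.0, 45.0             # person-days of rework capacity/month
    history = []
    for _ in range(months):
        backlog += units_per_month * rework_per_unit   # inflow from new units
        backlog = max(0.0, backlog - capacity)         # outflow from available staff
        history.append(backlog)
    return history

mean_rework = micro_inspection_model()
backlog_trajectory = macro_staffing_model(mean_rework)
```

With these assumed numbers the detailed model reports roughly 2.2 person-days of rework per unit, so the inflow exceeds capacity and the macro model shows a steadily growing backlog – exactly the kind of broad-scope conclusion the micro model alone could not have produced.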

8.3. Recommended research issues

Finally, we conclude with a brief list of interesting open research issues in the area of quantitative process model simulation. (Some of these have already been noted earlier in this article, but are repeated here for the sake of collecting them into one place for easy reference.) It is hoped that this list will stimulate future research:
· Investigate ways to support the integration of representation, guidance, simulation and execution capabilities for models of software processes. This may include making simulation models more user-friendly to open them up to a wider audience.
· Improve approaches for the validation and calibration of simulation models against limited real-world data.
· Compare alternative simulation approaches with the aim of understanding the range of situations where each is particularly suitable, natural and convenient to apply.
· Explore and develop techniques for hybrid simulation modeling and understand when to apply them.
· Develop extensions to better support (re)planning issues under stochastic modeling, e.g., critical path determination, probability of meeting targets, resource allocation.
· Develop generalized process simulation models that may be easily adapted by others. Investigate the feasibility of validating standard `plug and play' model components.
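One of the stochastic (re)planning issues above, estimating the probability of meeting a schedule target, is commonly approached by Monte Carlo sampling over uncertain task durations. The sketch below assumes a tiny two-path activity network with triangular duration distributions; the network, the targets and all parameters are illustrative, not drawn from the paper.

```python
import random

random.seed(42)

def project_duration():
    # Hypothetical network: A precedes both B and C; D follows C;
    # the project finishes when both the A->B and A->C->D paths finish.
    a = random.triangular(4, 8, 5)    # triangular(low, high, mode), in weeks
    b = random.triangular(6, 12, 8)
    c = random.triangular(3, 6, 4)
    d = random.triangular(2, 5, 3)
    return a + max(b, c + d)

def prob_meeting_target(target=16.0, runs=10_000):
    # Monte Carlo estimate: fraction of sampled schedules within target.
    hits = sum(1 for _ in range(runs) if project_duration() <= target)
    return hits / runs

p_on_time = prob_meeting_target()
```

The same sampling loop also yields which path was critical in each run, so critical-path probabilities fall out of the identical machinery at no extra cost.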

References

Abdel-Hamid, T., Madnick, S., 1991. Software Project Dynamics. Prentice-Hall, Englewood Cliffs, NJ.
Akhavi, M., Wilson, W., 1993. Dynamic simulation of software process. In: Proceedings of the 5th Software Engineering Process Group National Meeting, Costa Mesa, California, April 26–29. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
Burke, S., 1997. Radical improvements require radical actions: Simulating a high-maturity software organization, Technical Report CMU/SEI-96-TR-024. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
Curtis, B., Kellner, M., Over, J., 1992. Process modeling. Communications of the ACM 35 (9), 75–90.
Gruhn, V., 1992. Software process simulation on arbitrary levels of abstraction. Computational Systems Analysis, pp. 439–444.
Gruhn, V., 1993. Software process simulation in MELMAC. SAMS 13, 37–57.
Hansen, G., 1997. Automating business process reengineering: Using the power of visual simulation strategies to improve performance and profit, 2nd ed. Prentice-Hall, Englewood Cliffs, NJ.
Heineman, G., 1993. Automatic translation of process modeling formalisms, Technical Report CUCS-036-93. Department of Computer Science, Columbia University, New York, NY.
Kellner, M., 1989. Software process modeling: value and experience. In: SEI Technical Review. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, pp. 23–54.
Kellner, M., 1991. Software process modeling support for management planning and control. In: Proceedings of the First International Conference on the Software Process, Redondo Beach, California, October 21–22. IEEE Computer Society Press, Los Alamitos, CA, pp. 8–28.
Kellner, M., Raffo, D., 1997. Measurement issues in quantitative simulations of process models. In: Proceedings of the Workshop on Process Modelling and Empirical Studies of Software Evolution (in conjunction with the 19th International Conference on Software Engineering), Boston, Massachusetts, May 18, pp. 33–37.
Kocaoglu, D., Martin, R., Raffo, D., 1998. Moving toward a unified continuous and discrete event simulation model for software development. In: Proceedings of the Silver Falls Workshop on Software Process Simulation Modeling (ProSim'98), Silver Falls, OR, June 22–24.
Kusumoto, S., et al., 1997. A new software project simulator based on generalized stochastic petri-net. In: Proceedings of the 19th International Conference on Software Engineering (ICSE-19), Boston, Massachusetts, May 17–23. IEEE Computer Society Press, Los Alamitos, CA, pp. 293–302.
Madachy, R., 1994. A software project dynamics model for process cost, schedule and risk assessment, Ph.D. Dissertation. Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA.
Pall, G., 1987. Quality process management. Prentice-Hall, Englewood Cliffs, NJ.
Paulk, M., et al., 1993. Key practices of the capability maturity model, Version 1.1, Technical Report CMU/SEI-93-TR-25. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.
Raffo, D., 1996. Modeling software processes quantitatively and assessing the impact of potential process changes on process performance, Ph.D. Dissertation. Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
Raffo, D., Harrison, W., Keast, L., 1998. Coordinating models and metrics to manage software projects, Working Paper, School of Business Administration, Portland State University, Portland, OR.
Raffo, D., Kellner, M., 1999. Modeling software processes quantitatively and evaluating the performance of process alternatives. In: K. El Emam and N. Madhavji (Eds.), Elements of Software Process Assessment and Improvement. IEEE Computer Society Press, Los Alamitos, CA (forthcoming).
Tvedt, J., 1996. A modular model for predicting the impact of process improvements on software development time, Ph.D. Dissertation. Computer Science and Engineering Department, Arizona State University, Tempe, AZ.

Extend is a trademark of Imagine That, San Jose, California (http://www.imaginethatinc.com).

Dr. Marc I. Kellner is a senior scientist at the Software Engineering Institute (SEI) of Carnegie Mellon University and has pioneered much of the work on software process modeling and definition conducted at the SEI. He has published more than 30 refereed papers on software process issues and has delivered approximately 100 technical presentations at numerous conferences world-wide. He has also taught tutorials on process modeling, definition and related topics to more than 1,100 software professionals. Currently, Kellner leads a team developing exemplary process guides for paper and for the Web, as well as continuing his research and development work in other areas, including quantitative process model simulation. Prior to joining the SEI in 1986, Kellner was a professor at Carnegie Mellon University, where he established and directed a B.S. degree program in Information Systems. He has also served on the faculty of The University of Texas (Austin) and consulted for several organizations. Kellner received his Ph.D. in Industrial Administration – Systems Sciences (specializing in MIS) from Carnegie Mellon University. He also holds a B.S. in Physics, a B.S. in Mathematics, both with University Honors, an M.S. in Computational Physics and an M.S. in Systems Sciences, all from Carnegie Mellon University.

Dr. Raymond J. Madachy is an Adjunct Assistant Professor in the Computer Science department at the University of Southern California and the Manager of the Software Engineering Process Group at Litton Guidance and Control Systems. He has published over 35 articles and is currently writing the book `Software Process Modeling with System Dynamics' with Dr. Barry Boehm. He received his Ph.D. in Industrial and Systems Engineering at USC, has an M.S. in Systems Science from the University of California, San Diego and a B.S. in Mechanical Engineering from the University of Dayton. He is a member of IEEE, ACM, INCOSE, Tau Beta Pi, Pi Tau Sigma and serves as co-chairman of the Los Angeles Software Process Improvement Network (SPIN) steering committee and program chair for the International Forum on COCOMO and Software Cost Modeling.

Dr. David M. Raffo received his Ph.D. in Operations Management and Masters degrees in Manufacturing Engineering and Industrial Administration from Carnegie Mellon University. His current research is in the area of strategic software process management and software process simulation modeling. Raffo has twenty-one refereed publications in the software engineering and management fields. He has received research grants from IBM, Tektronix, the National Science Foundation, the Software Engineering Research Center (SERC) and Northrop-Grumman. Prior professional experience includes managing software development and consulting projects at Arthur D. Little, Inc., where he received the company's Presidential Award for outstanding performance. Currently, Dr. Raffo is an Assistant Professor of Operations Management and Information Systems in the School of Business Administration at Portland State University. He is Co-Director of Portland State University's Center for Software Process Improvement and Modeling.

M.I. Kellner et al. / The Journal of Systems and Software 46 (1999) 91–105