
PERSPECTIVES

Gaining Intellectual Control of Software Development

Barry Boehm, USC Center for Software Engineering
Victor R. Basili, University of Maryland

The results from two workshops on software engineering research strategies, commissioned by the National Science Foundation last year, hint at new directions that software development might take.

Recent disruptions caused by several events have shown how thoroughly the world has come to depend on software. The rapid proliferation of the Melissa virus hinted at a dark side to the ubiquitous connectivity that supports the information-rich Internet and lets e-commerce thrive. Although the oft-predicted Y2K apocalypse failed to materialize, many software experts insist that disaster was averted only because countries around the globe spent billions to ensure their critical software would be Y2K-compliant. When denial-of-service attacks shut down some of the largest sites on the Web last February, the concerns caused by the disruptions spread far beyond the complaints of frustrated customers, affecting even the stock prices of the targeted sites.

Indeed, as software plays an ever-greater role in managing the daily functions of modern life, its economic importance becomes proportionately greater. It's no coincidence that technology stocks have led the upsurge of stock market indices, that the US government's antitrust case against Microsoft has become headline news around the world, or that some companies' aggressive pursuit of software patents has caused widespread controversy.

Yet despite its critical importance, software remains surprisingly fragile. Prone to unpredictable performance, dangerously open to malicious attack, and vulnerable to failure at implementation despite the most rigorous development processes, in many cases software has been assigned tasks beyond its maturity and reliability.

TAKING THE LONG VIEW

Fortunately, the groundwork for finding solutions to many of these shortcomings has already been laid. In 1997, recognizing the US economy's increasing dependence on information technology, the Clinton administration established the President's Information Technology Advisory Committee. PITAC's primary responsibility was to determine whether government-supported R&D "is helping to maintain United States' leadership in advanced computing and communications technologies and their applications." After many meetings with the research community, funding agencies, industry, and the public at large, PITAC issued a hard-hitting report that recommended increasing government-sponsored IT research by $1.37 billion annually within five years.

In support of these recommendations, the committee found that the "Federal information technology R&D investment is inadequate ... [and] too heavily focused on near-term problems" and recommended "a strategic initiative in long-term information technology R&D," which should sponsor "research that is visionary and high-risk." The committee focused on software because the nation has "become dangerously dependent on large software systems whose behavior is not well understood and which often fail in unpredicted ways." Improving software development methods is so important that, the committee argued, we should "make software research a substantive component of every major IT research initiative."

In response to these recommendations, the computing directorate at the National Science Foundation commissioned two major software workshops last year.

NSF WORKSHOP FINDINGS

The first workshop, the NSF Workshop on a Software Research Program for the 21st Century, concluded that software research must expand the scientific and engineering basis for constructing no-surprise software of all types.1 To do so, workshop participants found that software developers need to

• "Develop the empirical science underlying software as rapidly as possible. One important activity will be to analyze how some commercial and government organizations have learned to build no-surprise systems in stable environments. By extracting principles from these analyses, empirical research can help enlarge the no-surprise envelope. By validating principles derived from theoretical research, where many excellent but unused ideas originate, it can enlarge the toolkit of software developers.

• "Advance our understanding of the basic elements of the computer science discipline, which is the foundation for all software construction. Progress in formal methods, algorithms, operating systems, database management systems, programming languages, and many other areas is essential. Otherwise, we risk running out of ideas and methods for creating the 'unprecedented' software of the future that will maintain our global competitiveness and national security. To help in the construction of real-world no-surprise systems, theoretical research should be sensitive to the issues raised by empirical analyses and to the scalability problem.

• "Address human needs significantly better as we engineer the large, unprecedented systems of the next century subject to concurrent safety, evolvability, and resource constraints.

• "Form teams to build important advanced applications that will both serve as testbeds for the new ideas and address a serious problem identified by the PITAC: 'desperately needed software is not being developed.'"

After the workshop, questions arose about whether this report recommends a "software engineering" or a "software research" agenda, and how software research should address such areas as operating systems, networking, artificial intelligence, and database software.

USC WORKSHOP FINDINGS

To clarify the nature and role of software engineering in IT research and to identify software engineering research strategies, a second meeting—the NSF Software Engineering Research Strategies Workshop—took place at the University of Southern California in August 1999. Participants presented their findings to the NSF a month later.

The USC meeting participants felt that understanding the proper place of software engineering research meant beginning with a discussion of the future. They saw an enormous increase in complexity over the horizon as every conceivable device becomes connected to everything else—not only computers and trains, planes, and automobiles, but also appliances of all types, from refrigerators to information generators and receivers. Also, with increasing dependence on independently evolving commercial components, control over content diminishes rapidly, raising important technical and legal issues.

The fast new global delivery mechanism also means that time-to-market considerations are becoming paramount. Product cycles that only recently spanned an already high-pressured 18 months have dropped to six months or less. Yet, as a consequence, fantastic opportunities have emerged at all levels.

Synergizing IT and software engineering

Participants noted that if an organization wants a good information-technology-based system, there must be a synergetic relationship between the IT components that comprise the delivered system and the software engineering elements that guide its definition, development, component selection, integration, and validation. You cannot go from good to great IT systems if you improve only the SE elements and neglect the IT components. As Figure 1 shows, it is equally true that focusing only on great IT components without attention to complementary SE practices generally leads to failure as well.2


A good example of the latter approach is the AESOP project experience.3 This project assumed that a small number of advanced IT components—a user-interface generator, an object-oriented DBMS, and two middleware components—could be integrated quickly and cheaply: in six months, by only two very bright software engineering researchers. It was a "simple matter of programming." The result: a schedule overrun by a factor of four, an effort overrun by a factor of five, and slow, unresponsive system performance. Similar approaches on large-scale systems generally experience even worse results,4 but often remain undocumented.

Fortunately, the AESOP experience was well analyzed by the SE researchers, who identified architectural mismatch as the key success inhibitor. This analysis led to a rich experience-based SE research program that addresses and copes with the sources of architectural mismatch.5
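
The kind of mismatch the researchers identified is easy to state in miniature. The sketch below is a toy illustration, not the actual AESOP components: two off-the-shelf parts that each assume they own the application's main control loop cannot simply be linked together; one of them has to be wrapped so that its control assumptions match the other's.

```python
import queue
import threading

# Toy "COTS" component: it assumes it owns the process and runs a blocking
# event loop, handing control back only when told to quit.
def component_a_main_loop(events: "queue.Queue[str]") -> None:
    while True:
        event = events.get()   # blocks; A expects to drive the application
        if event == "quit":
            break
        print("A handles:", event)

# A second component with the same assumption cannot be composed directly,
# because both cannot own the main loop. One workaround is to adapt A's
# control assumptions: run its loop on a separate thread and feed it events.
events: "queue.Queue[str]" = queue.Queue()
driver = threading.Thread(target=component_a_main_loop, args=(events,))
driver.start()

for event in ("open file", "render view", "quit"):   # the other component drives
    events.put(event)
driver.join()
```

Resolving even this trivial mismatch requires extra glue code and a design decision about where control lives; the AESOP experience showed how quickly such costs multiply when the components are large and opaque.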

Creating stronger and better-integrated IT components and SE capabilities to achieve great, no-surprise IT systems would be a tremendous challenge even if software and IT remained static. But given the constantly increasing pace of change, the challenge becomes overwhelming. The Web and the Internet connect everything with everything else. Autonomous agents that make deals in cyberspace create many opportunities for chaos, and systems of systems, networks of networks, and agents of agents create huge intellectual-control problems.

Further, the economics of software componentry leave system developers with no choice but to incorporate large commercial-off-the-shelf components into their systems. Unfortunately, developers have no way of knowing what is inside these COTS components, and they have no control over the direction of their evolution. Software architecture and COTS decisions are made hastily, and the time pressures forcing this "marriage in haste" leave no opportunity for repenting even at leisure.

Figure 1. An organization that wants a good IT-based system needs a synergetic relationship between the IT components (blue), which comprise the delivered system, and the software engineering elements (red), which guide the system's definition, development, component selection, integration, and verification.


Having explored the nature and contributions of software engineering research, the workshop turned to the problem of characterizing software engineering elements and their individual roles in realizing various classes of future IT systems.

Focus on two challenge areas

Participants began their characterization of SE research elements by reviewing and discussing two excellent sources of overall software Grand Challenge problems.6 With the aid of these sources, we defined two SE-intensive challenge areas for achieving the quality-of-life improvements envisioned in the PITAC report. These areas involved harnessing future IT in "empowering people and groups" and in "weaving a new information fabric" that is much more reliable, supple, and adaptable than current technology.

Within each of these two challenge areas, we identified four specific sample applications, and characterized their potential future IT- and SE-enabled capabilities and the relative importance of individual SE research areas in achieving those capabilities. We then aggregated these parameters into a cross-impact matrix that clarified the relationships and led to a number of conclusions about the nature of SE research strategies, as Figure 2 shows.7
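
As a minimal sketch of how such a cross-impact matrix can be represented and queried, the fragment below encodes the four dependence levels and a couple of illustrative cells; the technology and application names follow Figure 2, but the ratings shown are placeholders rather than the workshop's actual assessments.

```python
from enum import IntEnum

class Dependence(IntEnum):
    NONE = 0
    MODERATE = 1
    STRONG = 2
    ESSENTIAL = 3

# Rows: SE technology areas; columns: challenge applications.
# The ratings below are illustrative placeholders only.
cross_impact = {
    "Composition":         {"User programming": Dependence.ESSENTIAL,
                            "Air traffic control": Dependence.STRONG},
    "Massive scalability": {"User programming": Dependence.NONE,
                            "Air traffic control": Dependence.ESSENTIAL},
}

def essential_technologies(application: str) -> list:
    """SE technologies on which an application depends essentially."""
    return [tech for tech, row in cross_impact.items()
            if row.get(application) == Dependence.ESSENTIAL]

print(essential_technologies("User programming"))   # ['Composition']
```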

Figure 2. Software engineering technologies, mission challenges, and underlying science. Each column in the chart addresses a functional area (embedded medical systems, user programming, empowered teams, and lifelong learning under "empowering people and groups"; crisis management, air traffic control, net-centric business, and medical informatics under "weaving the new information fabric"; and the underlying computer, domain, behavioral, and economic sciences), with each row listing a specific SE technology. A given functional area's degree of dependence on a specific technology ranges from essential (blue) to none (yellow).

As one example, consider user programming. To achieve safe and effective user programming in the future, we face an essential dependence on achieving significant progress in


• safe and effective composition of components and applications;

• test and verification that the users' programs have no serious faults or adverse side effects;

• supportive human-computer interface technology;

• user domain componentry for composing user applications;

• high-assurance technologies for ensuring that user programs do not compromise reliability, privacy, availability, safety, mission performance, or organizational viability;

• change resilience to ensure that user modifications can be done quickly and with high assurance; and

• ensuring that each technology integrates well with the others (for example, that user domain componentry evolves in ways consistent with the change resilience assumptions).

Other software engineering technologies are less essential to successful user programming, primarily because the programs and projects are relatively small. We do not need massive scalability at all, and process and economics modeling contribute only moderately. The strong dependencies for the other technologies primarily address their need to establish sound support frameworks and environments for user programming—system definition, architecture, connectivity and information access, and information distribution and management.

We also considered each software engineering technology's relative degree of dependence on four primary underlying science areas: computer science, domain sciences, behavioral sciences, and economics. Suppose that a set of stakeholders sought to negotiate the definition of the best air-traffic-control system that can be built within a given budget and schedule. The system definition support capabilities would depend in an essential way on a knowledge of computer science (the performance of algorithms); domain sciences (the computations needed to predict clear air turbulence); behavioral sciences (for both stakeholder requirements negotiations and air-traffic-controller group performance); and economics (for performing cost-benefit analyses of various alternatives).

Determining the most appropriate logical and physical IT architecture for the air-traffic-control system would involve a strong dependence on all four underlying sciences, but the dependence on getting the best information structures would be considerably more essential for computer science than for the other sciences.

Primary concerns

We can draw many conclusions about software engineering research strategies from the preceding analysis.

• The field's needs are significant. Each of the eight sample challenge applications exhibited essential or strong dependencies on improved capabilities in most software engineering technology areas. Having integrated, mutually reinforcing technology elements was essential for all the challenge applications. Having a good product or a good process technology alone makes producing a good system unlikely; you must have both.

• The field's needs are interdisciplinary. System definition technology, architecture technology, and the other SE technologies require more than traditional computer science to ensure successful IT applications. This truth does not imply that single-discipline research is unimportant. But it does mean that the body of knowledge required for successful software engineering includes considerably more than computer science.

• There is no silver bullet for success. Our discussions corroborated Fred Brooks's thesis8 that no silver-bullet solution exists. We concluded that a balanced portfolio of research investments and an emphasis on integration of software engineering and information technology solutions via experimental application are most likely to show progress toward addressing the challenges.

• Developers must "skate to where the puck is going." Workshop participants felt that too much research focuses on past problems. Examples of future trends where we need more research into problems of intellectual control include heterogeneous distributed systems, dynamically changing software structures, and interactions among autonomous agents.

• Developers need to explore new metaphors for software composition and evolution. One approach to future problems looks for new perspectives or metaphors for intellectual control. Examples include biologically self-testing or self-adaptive software systems and the socio-economic employment of software goal and reward structures.

Participants expressed serious concern about software engineering technology transition, which has always been exceptionally difficult. Adopting new practice or technology involves behavioral change and deferred gratification. You can plug in a faster chip or algorithm without changing your project practices, and you can see the effect on performance immediately. Changing development processes or adopting a software product line requires that people change their behavior on projects, often for payoffs only seen much later and that are hard to trace definitively. Collaborative university-industry research and experimentation can expedite transition. Several good suggested mechanisms for pursuing this approach can be found elsewhere.1

Relating software research results to key challenge applications will provide greater understanding about which processes, architectural styles, and IT components best fit such applications. This work provides the foundation for no-surprise projects in other application domains and the ability to reason about what level of capability can be achieved within a given time or budget.

Keeping track of progress toward this objective will help clarify which challenge problems lie outside the no-surprise boundary, providing stakeholders with more realistic expectations about encountering problems. Tracking progress will also enable stakeholders to consider alternative strategies for bounding risk, such as using delivery time or cost as their independent variable and desired features as the dependent variable.
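
One way to read the schedule-as-independent-variable strategy is sketched below, assuming a prioritized feature backlog with rough effort estimates (the feature names and numbers are invented for illustration): the delivery budget is fixed, and the feature set becomes whatever fits within it.

```python
# Hypothetical prioritized backlog: (feature, estimated person-weeks).
backlog = [
    ("core workflow", 6),
    ("reporting", 4),
    ("offline mode", 5),
    ("theming", 2),
    ("plugin API", 8),
]

def features_for_budget(backlog, budget_weeks):
    """Treat cost/schedule as the independent variable: walk the backlog in
    priority order, keeping each feature that still fits in the remaining
    budget; the delivered feature set is the dependent variable."""
    chosen, remaining = [], budget_weeks
    for feature, estimate in backlog:
        if estimate <= remaining:
            chosen.append(feature)
            remaining -= estimate
    return chosen

print(features_for_budget(backlog, budget_weeks=12))
# ['core workflow', 'reporting', 'theming']
```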

Applying metrics to software development

Finally, workshop participants also discussed the formulation of appropriate metrics for software development progress and their use in evaluating the effects of advances in software engineering research. The participants concluded that this task's difficulty should not be a barrier to doing it better.

The primary difficulty in determining such metrics is that software projects are effectively unrepeatable in practice. We find differences among team skills and dynamics or among learning-curve effects on similar products developed by the same team. Increasingly rapid changes in the IT marketplace and infrastructure continually change the rules of software development from one project to the next.

This changing nature means that most traditional measures of software productivity are increasingly irrelevant. A good example is the metric of new source lines of code produced per person-day on a large project. The value of this metric continues to hover around 10, giving the impression of no progress. However, analyzing several decades of experience at Bell Laboratories9 demonstrates that the number of executing lines of machine code generated by a line of source code has increased by roughly an order of magnitude every 20 years.
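
The arithmetic behind that observation can be made concrete with a back-of-the-envelope sketch; the expansion trend is the order-of-magnitude-per-20-years figure cited above, while the specific printout is purely illustrative.

```python
# Back-of-the-envelope illustration of why a flat source-line metric hides progress.
SOURCE_LINES_PER_PERSON_DAY = 10    # roughly constant for decades
EXPANSION_PER_20_YEARS = 10         # machine code per source line, per the text

# A tenfold increase every 20 years compounds to roughly 12 percent per year.
annual_growth = EXPANSION_PER_20_YEARS ** (1 / 20) - 1

for years in (0, 20, 40):
    growth_in_delivered_machine_code = EXPANSION_PER_20_YEARS ** (years / 20)
    print(f"after {years:2d} years: source lines/person-day still ~{SOURCE_LINES_PER_PERSON_DAY}, "
          f"but delivered machine code per person-day is "
          f"{growth_in_delivered_machine_code:.0f}x the starting level "
          f"(~{annual_growth:.0%}/year compounded)")
```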

Even stronger progress indicators can be generated by combining productivity per IT platform with the number of platforms in use. Effective examples in the computer hardware and communications field include plots showing high exponential growth in number of transistors in use, in petaflops of computing power available, or in numbers of Internet packets handled per year. A software counterpart to these examples, using similar counting rules, is lines of code in service (LOCS), obtained by multiplying executable lines of machine code per computing platform by the number of platforms in use. As observed elsewhere,10 over several decades the US Department of Defense's LOCS have increased by an order of magnitude roughly every seven years, with cost per LOCS currently decreasing at about the same rate.
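
A minimal sketch of the LOCS counting rule described above follows; the platform counts and expansion factors are hypothetical placeholders, not DoD figures.

```python
def lines_of_code_in_service(source_lines_per_platform: float,
                             machine_lines_per_source_line: float,
                             platforms_in_use: int) -> float:
    """LOCS: executable machine lines per platform times platforms in use."""
    executable_lines_per_platform = (source_lines_per_platform
                                     * machine_lines_per_source_line)
    return executable_lines_per_platform * platforms_in_use

# Hypothetical fleet: 2 million source lines per platform, each source line
# expanding to 100 executable machine instructions, fielded on 5,000 platforms.
print(f"{lines_of_code_in_service(2e6, 100, 5_000):.2e} LOCS")  # 1.00e+12
```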

Metrics such as LOCS, transistors, petaflops, and packets at least pass a market test for value because they are items that people and organizations have paid market prices to obtain. Beyond this, however, we would prefer metrics that more closely reflect value added to people and organizations. Such metrics are emerging for software, but face challenges in applying them across different application areas and across stakeholders having different values and priorities.

The automated collection and analysis of usage-experience data from human and computer sources presents a significant challenge in "weaving the new information fabric." Soon, we will be able to sample and analyze billions of concurrent transactions. This capability can be harnessed not only for better personal, business, and government decisions, but also for analyzing and improving the various dimensions of effectiveness in the software and IT systems we produce in the future.

POTENTIAL ROADBLOCKS

Although the two workshops held in 1999 helped establish a firm foundation for the NSF's research efforts, challenges remain. For example, any company that hopes to succeed in today's fiercely competitive market cannot spend too much time building quality in. Doing so will cost it so much market share that its product will fail when it finally reaches the marketplace, even if the software is more robust than its competitors' products. Thus, a major NSF goal is to devise a technology that lets developers accelerate the software development process without compromising quality.

On the other hand, some analysts believe that the race-to-market model may evolve toward a slower, more quality-focused paradigm when a given market matures. Some US analysts, for example, speculate that CMM (Capability Maturity Model) Level 5 companies in India could someday build more robust versions of popular software products, thereby reprising the role Japanese automakers played in the 1970s when their less-expensive and better-engineered cars challenged domestic auto manufacturers' market dominance in the US. Opinions differ on how likely this scenario is, however. History didn't repeat itself, for example, when the Japanese software factories went head-to-head with less structured but more agile US companies. But quality-focused developers may yet discover an effective way to compete.

During the USC workshop, several participants discussed how we might leverage some of the techniques the open source community uses to build robustness into software. Aside from concerns about devising a workable economic model, participants noted that the open source model works best when it supports a product with broad enough appeal to attract a critical mass of contributing programmers. Thus, what works for Linux or Apache may not be practical with niche applications that have much smaller user bases.

Creating a smooth interface between government and free enterprise also raised some concerns at the workshop. Cultural differences, for example, may arise between fast-paced software developers who place a premium on speedy progress and government officials who must work within stringent regulations.

Abstract by nature, software's apparently limitless flexibility is both its greatest strength and greatest weakness. We can no longer afford to let this increasingly critical component of the Information Age's infrastructure proliferate in the form of ill-conceived, hastily crafted, and failure-prone products. The NSF has taken the first steps toward grasping the reins of software development and regaining control not only of the development process but also of the role software will play in our future. ✸

References

1. Final Report, "NSF Workshop on a Research Program for the 21st Century," http://www.cs.umd.edu/projects/SoftEng/tame/nsfw98/.
2. "Role of Software Engineering in IT Research and Systems," chart 5, http://sunset.usc.edu/Activities/aug24-25/NSFtalks/Boehm.ppt.
3. D. Garlan, R. Allen, and J. Ockerbloom, "Architectural Mismatch: Why Reuse Is So Hard," IEEE Software, Nov. 1995, pp. 17-26.
4. Standish Group, "CHAOS," http://www.standishgroup.com/chaos.html.
5. M. Shaw and D. Garlan, Software Architecture, Prentice Hall, Upper Saddle River, N.J., 1996.
6. J. Gray, What Next? A Dozen Information-Technology Research Goals (draft Turing lecture), tech. report MS-TR-99-50, Microsoft, Redmond, Wash., June 1999.
7. "Software Engineering Technologies, Mission Challenges, Underlying Science," chart 12, http://sunset.usc.edu/Activities/aug24-25/NSFtalks/Boehm.ppt.
8. F.P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering," Computer, Apr. 1987, pp. 10-19.
9. L. Bernstein, "Software Investment Strategies," Bell Labs Technical J., Summer 1997, pp. 233-242.
10. B. Boehm, "Managing Software Productivity and Reuse," Computer, Sept. 1999, pp. 111-113.

Barry Boehm is director of the USC Center for Software Engineering. He developed the Constructive Cost Model, the software process Spiral Model, and the Theory W (win-win) approach to software management and requirements determination. Contact him at [email protected].

Victor R. Basili is a professor in the Institute for Advanced Computer Studies and the Computer Science Department at the University of Maryland. He has participated in the design and development of several software projects, including the SIMPL family of programming languages. He is currently measuring and evaluating software development in industrial and government settings and has consulted with many agencies and organizations. Contact him at [email protected].
