
Editor’s Corner

“Dangerous and Misleading”: A Look at Software Research via the Parnas Papers

The controversy over the Star Wars defense system rages on.

The discussion which follows is NOT, however, about Star Wars. It is about a different view of software which stems from the Star Wars controversy.

Probably you already know that Professor David Parnas, one of the leading computer scientists of our time, withdrew from his role in Star Wars because he believes the software problems of such a system to be insurmountable.

Probably you have read the Parnas papers, which appeared in the American Scientist, Sept.-Oct. 1985, and were reprinted in various computing professional journals. Those papers explain why he believes the software portions of the system cannot be built successfully.

But I want to take another, different look at what Parnas said. In his analysis of WHY he believes the system cannot be built, he said some pretty devastating things about the state of the art of software.

This was not an attack on the state of the PRACTICE of software. Such attacks, unfortunately fairly common among some computer scientists, say that software is “always over budget, behind schedule, and unreliable.” You can form your own opinion about those words.

This was, instead, an attack on the state of the RESEARCH of software. Item by item, Parnas explained in his papers why he thought each of the leading research directions of our time will not help Star Wars. And in so doing, he cast a serious shadow on that research. Let us look at some examples of what he said.

You’ve probably read that dramatic improvements in software productivity are only a new language or a new toolset away. Parnas says no. “We cannot expect [new programming languages] to make . . . a big difference,” and “problems with our programming environment have not been a major impediment in our . . . work.”

Well, then, what about so-called “automatic programming systems,” those methodologies that are supposed to make programmers obsolete within a few years? “I believe that the claims that have been made for our automatic programming systems are greatly exaggerated,” says Parnas. He goes on to say that “if the input specification is not a description of an algorithm, the result is woefully inefficient . . . there will be no substantial change from our present capability” coming from nonprocedural, automated programming systems.

Perhaps, then, the dramatic gains of artificial intelligence can help us. Here, again, Parnas takes a dim view. There are, Parnas says, “two quite different definitions of AI in common use today . . .

AI-1: The use of computers to solve problems that previously could be solved only by applying human intelligence.

AI-2: The use of . . . rule-based programming . . . to solve a problem the way humans seem to solve it.

“I have seen some outstanding AI-1 work,” says Parnas, but “I cannot identify a body of techniques . . . that is unique to this field.” In other words, the learning experience of solving one problem with AI-1 methods does not extend very well to the next problem.

“I find the approaches taken in AI-2 to be dangerous and much of the work misleading . . . program behavior is poorly understood and hard to predict . . . the techniques . . . do not generalize.” Parnas’ attack here is devastating. Expert systems, he is saying, may not be very trustworthy.

Parnas is also concerned about software reliability. “We do not know how to guarantee the reliability of software,” he says. But what then of proof of correctness, that mathematics-like approach by which a software system can be proven to match its specification? “It is inconceivable to me that one could provide a convincing proof of correctness of even a small portion of [massive] software . . . I do not know what such a proof would mean if I had it.” He gives an example: “We have no techniques for proving the correctness of programs in the presence of unknown hardware failures and errors in input data.”

Taken in their entirety, the Parnas papers seem to represent a powerful attack on the status of software research. Does Parnas come to grips with this specifically?

“Good software engineering is far from easy,” he says. “Those who think that software designs will become easy [via new technologies] and that errors will disappear, have not attacked substantial problems.” “I don’t expect the next 20 years of research to change [the difficulties of building massive systems].” “Very little of [software research] leads to results that are useful. Many useful results go unnoticed because the good work is buried in the rest.”

If software research is indeed off on some questionable directions, what should be done about it? Parnas covers that, also.

“Only people closely familiar with the practical aspects of a problem can judge whether or not they could use the results of a research project . . . Applied research must be judged by teams that include both successful researchers and experienced systems engineers.”

In other words, researchers who want to produce useful results must involve practitioners in the evaluation process.

Let’s stand back now and look at the totality of what Parnas has said. It is important to remember, of course, that his remarks were made in the context of massive real-time software systems of the magnitude of Star Wars. But an equally important observation is that many of his objections to the value of current research apply to its use on a broader spectrum of large, and perhaps even not-so-large, software systems.

Where can we expect productivity breakthroughs in software, the kind that will lead to orders of magnitude improvement? Not in languages or tools. Not in automatic programming methodologies. Not in formal verification of software. Not in artificial intelligence. Not, in fact, in any of the currently popular software research endeavors.

It is a discouraging picture for the software practitioner who was hoping for breakthroughs. It is vital food for thought for the software researcher.

Robert L. Glass