Editor's Corner: “Dangerous and Misleading”: A Look at Software Research via the Parnas Papers


<ul><li><p>Editor's Corner </p><p> “Dangerous and Misleading”: A Look at Software Research via the Parnas Papers </p><p>The controversy over the Star Wars defense system rages on. </p><p>The discussion which follows is NOT, however, about Star Wars. It is about a different view of software which stems from the Star Wars controversy. </p><p>Probably you already know that Professor David Parnas, one of the leading computer scientists of our time, withdrew from his role in Star Wars because he believes the software problems of such a system to be insurmountable. </p><p>Probably you have read the Parnas papers which appeared in the American Scientist, Sept.-Oct. 1985, and were reprinted in various computing professional journals. Those papers explain why he believes the software portions of the system cannot be built successfully. </p><p>But I want to take another, a different look, at what Parnas said. In his analysis of WHY he believes the system cannot be built, he said some pretty devastating things about the state of the art of software. </p><p>This was not an attack on the state of the PRACTICE of software. Those unfortunately fairly common attacks by some computer scientists say software is always over budget, behind schedule, and unreliable. You can form your own opinion about those words. </p><p>This was, instead, an attack on the state of the RESEARCH of software. Item by item, Parnas explained in his papers why he thought each of the leading research directions of our time will not help Star Wars. And in so doing, he cast a serious shadow on that research. Let us look at some examples of what he said. </p><p>You've probably read that dramatic improvements in software productivity are only a new language or a new toolset away. Parnas says no. “We cannot expect [new programming languages] to make . . . a big difference,” and “problems with our programming environment have not been a major impediment in our . . . work.”
</p><p>Well, then, what about so-called “automatic programming” systems, those methodologies that are supposed to make programmers obsolete within a few years? “I believe that the claims that have been made for our automatic programming systems are greatly exaggerated,” says Parnas. He goes on to say that if the input specification is not a description of an algorithm, “the result is woefully inefficient . . . there will be no substantial change from our present capability coming from nonprocedural, automated programming systems.” </p><p>Perhaps, then, the dramatic gains of artificial intelligence can help us. Here, again, Parnas takes a dim view. There are, Parnas says, two quite different definitions of AI in common use today . . . </p><p>AI-1: The use of computers to solve problems that previously could be solved only by applying human intelligence. </p><p>AI-2: The use of . . . rule-based programming . . . to solve a problem the way humans seem to solve it. </p><p>“I have seen some outstanding AI-1 work,” says Parnas, “but I cannot identify a body of techniques . . . that is unique to this field.” In other words, the learning experience of solving one problem with AI-1 methods does not extend very well to the next problem. </p><p>“I find the approaches taken in AI-2 to be dangerous and much of the work misleading . . . program behavior is poorly understood and hard to predict . . . the techniques . . . do not generalize.” Parnas' attack here is devastating. Expert systems, he is saying, may not be very trustworthy. </p><p>Parnas is also concerned about software reliability. “We do not know how to guarantee the reliability of software,” he says. But what then of proof of correctness, that mathematics-like approach by which a software system can be proven to match its specification? “It is inconceivable to me that one could provide a convincing proof of correctness of even a small portion of [massive] software . . .”
“I do not know what such a proof would mean if I had it.” He gives an example: “We have no techniques for proving the correctness of programs in the presence of unknown hardware failures and errors in input data.” </p><p>The Journal of Systems and Software 3, 217-218 (1986) © Elsevier Science Publishing Co., Inc., 1986 </p></li><li><p>Taken in their entirety, the Parnas papers seem to represent a powerful attack on the status of software research. Does Parnas come to grips with this specifically? </p><p>“Good software engineering is far from easy,” he says. “Those who think that software designs will become easy [via new technologies] and that errors will disappear, have not attacked substantial problems.” “I don't expect the next 20 years of research to change [the difficulties of building massive systems].” “Very little of [software research] leads to results that are useful. Many useful results go unnoticed because the good work is buried in the rest.” </p><p>If software research is indeed off on some questionable directions, what should be done about it? Parnas covers that, also. </p><p>“Only people closely familiar with the practical aspects of a problem can judge whether or not they could use the results of a research project . . . Applied research must be judged by teams that include both successful researchers and experienced systems engineers.” </p><p>In other words, researchers who want to produce useful results must involve practitioners in the evaluation process. </p><p>Let's stand back now and look at the totality of what Parnas has said. It is important to remember, of course, that his remarks were made in the context of massive real-time software systems of the magnitude of Star Wars. But an equally important observation is that many of his objections to the value of current research apply to their use on a broader spectrum of large and perhaps even not-so-large software systems.
</p><p>Where can we expect productivity breakthroughs in software, the kind that will lead to orders-of-magnitude improvement? Not in languages or tools. Not in automatic programming methodologies. Not in formal verification of software. Not in artificial intelligence. Not, in fact, in any of the currently popular software research endeavors. </p><p>It is a discouraging picture for the software practitioner who was hoping for breakthroughs. It is vital food for thought for the software researcher. </p><p>Robert L. Glass </p></li></ul>