
Editor's corner: The legend of the bad software project



Editor’s Corner * Robert L. Glass

The Legend of the Bad Software Project

Once upon a time, there was a software practitioner who got into big trouble.

The trouble this practitioner got into was that his software was over budget, and behind schedule, and unreliable.

What’s so unusual about that, you may be thinking, if you’ve read the literature on the so-called software crisis?

Well, for this fellow it was unusual. And for a lot more reasons than you’d think, I’d like to add!

You see, this software practitioner had graduated from one of the best computer science schools in the land. He had several years of good solid programming experience after that. He’d even found a way to blend the best of what he’d learned in school with the best of the practice over those several years.

In other words, this software practitioner was all the things a good software practitioner should be.

So what went wrong?

It all started back when the problem his program solved first surfaced. This was a hot problem, one with the potential for making a lot of money for the company. Management told him so. Marketing told him so. There seemed little doubt that this particular problem needed to be treated in a special way.

The first special way that it needed to be treated was that it had to be done by a particular date. Management said so. Marketing said so. Never mind that this particular practitioner didn’t think it could be done by then. It had to be!

Being a cooperative fellow, the practitioner began work on the problem in spite of his concern. Management did listen to him and his concerns, though. While he developed the software, they tried a bunch of cost-estimation modeling programs until they found one that gave the answer they wanted. “See,” management pointed out to him with glee, “you can too develop this software in the time you have!”

* This editorial was reprinted with permission from the column “Software Reflections” by Robert L. Glass, in System Development, P.O. Box 9280, Phoenix, AZ 85068.
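The model shopping described above is easy to make concrete. As a purely illustrative sketch (none of this appears in the editorial), consider Basic COCOMO, a well-known estimation model of that era: effort and schedule are computed from code size and a project “mode,” and simply reclassifying the mode swings the effort estimate by more than a factor of two. The 50 KLOC size below is an assumed figure for illustration.

```python
# Illustrative sketch of Basic COCOMO (Boehm, 1981), not the editorial's model.
# It shows how the choice of project "mode" alone changes the estimate widely,
# which is how shopping among models can "prove" a deadline feasible.

MODES = {
    # mode: (effort coefficient a, effort exponent b, schedule exponent d)
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def cocomo_basic(kloc, mode):
    """Return (effort in person-months, schedule in calendar months)."""
    a, b, d = MODES[mode]
    effort = a * kloc ** b        # person-months
    schedule = 2.5 * effort ** d  # calendar months
    return effort, schedule

for mode in MODES:
    effort, months = cocomo_basic(50, mode)
    print(f"{mode:12s}: {effort:6.1f} person-months, {months:4.1f} months")
```

For a 50 KLOC project, the “organic” and “embedded” classifications yield effort estimates that differ by well over a factor of two, so an estimator who wants a particular answer need only pick the flattering classification.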

As time went on and the deadline approached, the practitioner grew more and more intense. At first he had taken care to develop the software to his own personal standards of quality, going over the requirements carefully, doing the design thoroughly, desk-checking each line of code with considerable thought.

But with the deadline at hand, the practitioner began to short-circuit good practice. Testing was done skimpily, with fingers crossed. Integration put together some modules that were tested with some that were not. Beta test got a product that wouldn’t pass a normal internal test.

But the deadline seemed sacred, and other things, seemingly less important under the pressure, were sacrificed.

In the end, the practitioner missed the schedule by about the margin his initial estimate had predicted. Costs were higher than predicted, of course, because costs had been predicted to the optimistic schedule. And reliability? It, too, had suffered, as the practitioner tried too many shortcuts attempting to meet schedule.

It was just like the software crisis people said: yet another software project behind schedule, over budget, and unreliable.

Once upon a time, in other words, a good software practitioner produced a bad software product. Deep down, the developer knew that he shouldn’t have cut so many corners. But also deep down, the software developer knew that there had really only been one mistake in the development process.

That mistake was not in the way he had built the software. Instead, it was in trying to meet a schedule that was wrong from the beginning.

When the software practitioner got his review from management, he noticed that he was graded down for poor performance on this bad project.

And he wondered! How many others suffered the same fate as he? How much of the software crisis was really due to poor or contrived estimation?

He is still wondering.

The Journal of Systems and Software 10, 1 (1989) © 1989 Elsevier Science Publishing Co., Inc. 0164-1212/89/$3.50