


The practical, physical (and sometimes dangerous) consequences of control must be respected, and the underlying principles must be clearly and well taught.

By Gunter Stein

Feedback control systems are all around us in modern technological life. They are at work in our homes, our cars, our factories, our transportation systems, our defense systems—everywhere we look. Certainly, one of the great achievements of the international controls research community is that the design principles for these systems are well developed and broadly understood by control engineers, so that the systems operate productively and safely in so many applications.

In this article, I want to talk about two trends that threaten to undermine this achievement. My objective is to heighten our awareness of these trends and hopefully bring about an appropriate response to them.

The first trend has to do with the applications themselves. Among the abundance of control systems operating today are increasing numbers of dangerous ones. Society trusts our technology. We are permitted to do things with automatic controls that cannot be done manually and that, if done improperly, can have dire consequences for property, the environment, and human life. Most, but not all, of these dangerous applications involve open-loop unstable plants with divergence rates violent enough to elude manual control. This characterization motivates the title of the article, and I will describe specific examples of such applications.

The second trend has been evident at our conferences, and certainly in our journals, over the years. This trend is the increasing worship of abstract mathematical results in control at the expense of more specific examinations of their practical, physical consequences. I will provide examples of this trend as well.

12 IEEE Control Systems Magazine, August 2003. 0272-1708/03/$17.00 © 2003 IEEE

Gunter Stein’s Bode Lecture

An understanding of fundamental limitations is an essential element in all engineering. Shannon’s early results on channel capacity have always had center court in signal processing. Strangely, the early results of Bode were not accorded the same attention in control. It was therefore highly appropriate that the IEEE Control Systems Society created the Bode Lecture Award, an honor which also came with the duty of delivering a lecture. Gunter Stein gave the first Hendrik W. Bode Lecture at the IEEE Conference on Decision and Control in Tampa, Florida, in December 1989. In his lecture he focused on Bode’s important observation that there are fundamental limitations on the achievable sensitivity function expressed by Bode’s integral. Gunter has a unique position in the controls community because he combines the insight derived from a large number of industrial applications at Honeywell with long experience as an influential adjunct professor at the Massachusetts Institute of Technology from 1977 to 1996. In his lecture, Gunter also emphasized the importance of the interaction between instability and saturating actuators and the consequences of the fact that control is becoming increasingly mission critical.

After more than 13 years I still remember Gunter’s superb lecture. I also remember comments from young control scientists who had been brought up on state-space theory who said: “I believed that controllability and observability were the only things that mattered.” At Lund University we made Gunter’s lecture a key part of all courses in control system design. Gunter was brought into the classroom via videotapes published by the IEEE Control Systems Society and the written lecture notes. It was a real drawback that the lecture was not available in more archival form. I am therefore delighted that IEEE Control Systems Magazine is publishing this article. I sincerely hope that this will be followed by a DVD version of the videotape. The lecture is like really good wine; it ages superbly.

—Karl J. Åström, Professor Emeritus
Lund University, Lund, Sweden


Together, these trends threaten to undermine our good standing in society as masters of a technology that can be trusted.

The Punch Line

In slightly dramatized form, the message of this article can be summarized by drawing some contrasts. Consider the following mathematical statement:

Theorem: Given plant g(s), compensator k(s), and

  gk(s) = n(s)/d(s);  s(s) = [1 + gk(s)]⁻¹
  Z = {z | n(z) = 0, Re(z) ≥ 0}
  P = {p | d(p) = 0, Re(p) ≥ 0}.

Then s(s) is stable if and only if

  |s(s)| < ∞  for all Re(s) ≥ 0
  s(z) = 1  for all z ∈ Z
  s(p) = 0  for all p ∈ P.

In words, this theorem states that the sensitivity function of a feedback system must not only be finite in the right-half plane, but it must pass through certain interpolation points corresponding to right-half-plane singularities of the loop. Most of us recognize this immediately as an elegant and compact description of control system constraints imposed by unstable, nonminimum-phase systems. It was formally developed in the context of parametrizing all stabilizing controllers, and it was popularized in the 1980s as part of the interpolation-theoretic approach to H∞ optimization. (Of course, it was understood as stated above for single-input, single-output (SISO) systems as far back as the 1950s. Some historical notes on this theorem can be found in [1].)
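The interpolation constraints are easy to check numerically for a toy loop. The following sketch uses a plant and compensator of my own choosing (not from the article): g(s) = 1/(s − 1) with k(s) = 4, which gives the stable sensitivity s(s) = (s − 1)/(s + 3).

```python
# Hypothetical example loop (not from the article): g(s) = 1/(s - 1), k(s) = 4.
# The loop gk(s) = 4/(s - 1) has one right-half-plane pole at p = 1 and no
# right-half-plane zeros, so Z is empty and P = {1}.

def sensitivity(s):
    # s(s) = [1 + gk(s)]^-1, simplified algebraically to (s - 1)/(s + 3)
    return (s - 1) / (s + 3)

# Stability: the only pole of s(s) is at -3, safely in the left-half plane.
# Interpolation: the theorem demands s(p) = 0 at every p in P.
p = 1.0
print(sensitivity(p))        # 0.0, exactly as the theorem requires
print(abs(sensitivity(2j)))  # finite on the imaginary axis as well
```

Note that s(p) = 0 simply says the closed loop must cancel nothing: the unstable pole must be stabilized by feedback, not hidden.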

Unfortunately, we are not as quick to recognize that this mathematical description includes some very dangerous systems. For example, the theorem applies to the JAS-39 airplane (the SAAB Gripen), which crashed on landing in 1989 in one of its first test flights. Figure 1 shows a video frame from the crash. Fortunately, the pilot survived, but the airplane was lost and its development program substantially delayed.

The theorem also applies to the Chernobyl nuclear plant, shown in Figure 2 as it appeared shortly after its accident in 1986. We are all familiar with the consequences of that accident—hundreds of people dead, hundreds of thousands evacuated, and hundreds of millions of dollars in cleanup costs.

These and other examples dramatize the contrast between elegant mathematical statements and the real physical systems that they purport to describe. I have selected these two examples because both catastrophes involve explicit, traceable responsibilities of control systems. We will take a closer look at those responsibilities later.

My point is this: As society permits control engineers to operate more such dangerous systems, we who teach those engineers and fashion their tools cannot hide from responsibility under a cloak of mathematics. We dare not instill the notion that mathematical rigor is the only goal to strive for in control. We must also instill respect for the practical, physical consequences of control, and we must make certain that its underlying principles are taught clearly and well.

I want to explore this point by reviewing some basic facts about unstable plants. Again, we will consider unstable to be synonymous with dangerous, even though this is not all inclusive. I want to review these facts:

• unstable systems are fundamentally, and quantifiably, more difficult to control than stable ones
• controllers for unstable systems are operationally critical
• closed-loop systems with unstable components are only locally stable.

These facts should be well known to all of us, but as we will see, they are not always taken to heart.


Basic Facts About Unstable Plants
• Unstable systems are fundamentally, and quantifiably, more difficult to control than stable ones.
• Controllers for unstable systems are operationally critical.
• Closed-loop systems with unstable components are only locally stable.

Figure 1. Gripen JAS39 prototype accident on 2 February 1989. The pilot received only minor injuries.

©FAME PICTURES


Fact 1: Fundamental, Quantifiable Control Difficulty

One of the best-known examples of instability is the inverted pendulum, or the “broomstick balancing problem.” Those of us who learned our craft in the 1950s and 1960s know this problem well, either through the little cart experiments that appeared during those years in various university labs around the world or through textbook examples and simulations. Recall that this problem was motivated by the space race. Control engineers had to learn how to balance rockets on top of their plumes on their way to earth orbit. It is still an important problem today, in the post-Challenger era, as modern boosters, now built in several countries around the world, become more difficult to balance.

I bring up the inverted pendulum because it nicely illustrates our first fact. The illustration is this: I can obviously balance an ordinary stable pendulum without difficulty. I can also easily balance a long inverted pendulum, even in front of a room full of people. However, I find it more difficult to balance a shorter inverted pendulum, and I find it impossible to balance a very short one. You have probably tried this yourself. The exact lengths you can balance might be different, but the trend will be the same.

Certainly the theorem just cited has an explanation for this illustration hidden in it somewhere, and we could try to extract that explanation using all the modern machinery at our disposal. Perhaps we could compute some minimum H∞ norms or even some minimum structured singular values achievable by human controllers. Of course, in our calculations, we would need to pay due attention to the inherent handicaps of such controllers, such as reaction time, neuromuscular lags, limb inertias, and many other uncertainties covering the fact that humans, and most everything else in the physical world, are not finite dimensional, linear, and time invariant.

The Bode Integrals

I will try to explain these observations in the frequency domain the way Hendrik Bode might have done it. It turns out that such an explanation is insightful, plain, and clear and is therefore preferable to many modern ones.

First, note that the difference between balancing long sticks and short sticks has to do with the location of the unstable mode. This is obvious from the linearized equations of motion of the stick. Under the simplifying assumption that all mass is concentrated at the end of the stick, these equations show an unstable pole at √(g/L), where g denotes the acceleration of gravity and L denotes length. Divergence becomes more rapid as L decreases.
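A quick numerical sketch of this pole location (the lengths below are my own illustrative choices):

```python
import math

g = 9.81  # acceleration of gravity, m/s^2

# Unstable pole sqrt(g/L) for a few stick lengths:
# the shorter the stick, the faster the divergence.
for L in (2.0, 1.0, 0.5, 0.25):  # meters (illustrative values)
    p = math.sqrt(g / L)
    print(f"L = {L:5.2f} m  ->  unstable pole at {p:5.2f} rad/s")
```

Halving the length multiplies the divergence rate by √2, which is why the difficulty grows so quickly for short sticks.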

A frequency domain quantification of control difficulty in the face of such changing instabilities is captured by the Bode integrals (see sidebar). At the risk of sounding dogmatic, I believe that every control theoretician and every control engineer should know these integrals and understand their meaning. Unfortunately, we have not always taught them well.

My own background illustrates this last observation. You can judge for yourself, using your own background. During my entire control education as an undergraduate and graduate student in the 1960s, I ran across only the first integral, only

14 IEEE Control Systems Magazine August 2003

Figure 2. Chernobyl nuclear power plant shortly after the accident on 26 April 1986.

Bode Integrals

The first integral applies to stable plants and the second to unstable plants. They are valid for every stabilizing controller, assuming only that both plant and controller have finite bandwidths. In words, the integrals state that the log of the magnitude of the sensitivity function of a SISO feedback system, integrated over frequency, is constant. The constant is zero for stable plants, and it is positive for unstable ones. It becomes larger as the number of unstable poles increases and/or as the poles move farther into the right-half plane. (Technically, we must count all unstable poles here, including those in the compensator, if any.)

  ∫₀^∞ ln|s(jω)| dω = 0
  ∫₀^∞ ln|s(jω)| dω = π Σ_{p∈P} Re(p)
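The second integral can be spot-checked numerically. The sketch below uses a loop of my own construction (not from the article): gk(s) = 10/((s − 1)(s + 2)), which has relative degree 2 so the finite-bandwidth proviso is satisfied, a stable closed loop, and one right-half-plane loop pole at p = 1; the integral should then equal π·Re(p) = π.

```python
import math
from scipy.integrate import quad

# Sensitivity for the loop gk(s) = 10/((s - 1)(s + 2)):
# s(jw) = (jw - 1)(jw + 2) / ((jw)^2 + jw + 8), stable since
# s^2 + s + 8 has both roots in the left-half plane.
def log_abs_sensitivity(w):
    num = (w * w + 2) ** 2 + w * w   # |(jw - 1)(jw + 2)|^2
    den = (8 - w * w) ** 2 + w * w   # |(jw)^2 + jw + 8|^2
    return 0.5 * math.log(num / den)

value, _ = quad(log_abs_sensitivity, 0, math.inf, limit=200)
print(value, math.pi)  # the integral matches pi * Re(p) with p = 1
```

The negative (good) area below ω ≈ √3 is exactly outweighed by the positive (bad) area above it, plus the π surcharge for the unstable pole.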

©AP/WIDE WORLD PHOTOS


once, and only in an optional reading of an unassigned chapter in one of the classical textbooks. This integral surfaced for me for the second time in the mid 1970s, referenced in a paper by Isaac Horowitz titled “On the Superiority of Transfer Functions over State-Variable Methods. . . .” It appeared as a perspectives paper in IEEE Transactions on Automatic Control amid a certain amount of controversy [2].

The second integral did not surface for me until 1983, in a talk by Jim Freudenberg at an IEEE Conference on Decision and Control in San Antonio [3]. If memory serves, someone pointed out at the time that this result was “just a version of Jensen’s theorem,” well known in mathematics for a long time. Perhaps this historical reference reduced the value of the result in the minds of some listeners, but it should not have, because the integral explains so much about the difficulties of controlling unstable systems.

A Bode Integral Interpretation

I like to think of Bode’s integrals as conservation laws. They state precisely that a certain quantity—the integrated value of the log of the magnitude of the sensitivity function—is conserved under the action of feedback. The total amount of this quantity is always the same. It is equal to zero for stable plant/compensator pairs, and it is equal to some fixed positive amount for unstable ones.

Since we are talking about the log of sensitivity magnitude, it follows that negative values are good (i.e., sensitivities less than unity, better than open loop) and positive values are bad (i.e., sensitivities greater than unity, worse than open loop). So for open-loop stable systems, the average sensitivity improvement a feedback loop achieves over frequency is exactly offset by its average sensitivity deterioration. For open-loop unstable systems, things are worse because the average deterioration is always larger than the improvement. This applies to every controller, no matter how it was designed. Sensitivity improvements in one frequency range must be paid for with sensitivity deteriorations in another frequency range, and the price is higher if the plant is open-loop unstable.

It is curious, somehow, that our field has not adopted a name for this quantity being conserved (i.e., the integrated log of sensitivity magnitude), to put it on a par with some of the great quantities of physics such as mass, momentum, or energy. But since it has not, we are free to choose a name right now. Let me propose that we simply call it dirt. It is stuff we would rather not have around; the less we have, the better. I want to choose this name because it lets me liken the job of a serious control designer to that of a ditch digger, as illustrated in Figure 3. He moves dirt from one place to another, using appropriate tools, but he never gets rid of any of it. For every ditch dug somewhere, a mound is deposited somewhere else. This fact is most evident to the ditch digger, because he is right there to see it happen.

In the same spirit, I can also illustrate the job of a more academic control designer with more abstract tools such as linear quadratic Gaussian (LQG), H∞, convex optimization, and the like, at his disposal. This designer guides a powerful ditch-digging machine by remote control from the safety of his workstation (Figure 4). He sets parameters (weights) at his station to adjust the contours of the machine’s digging blades to get just the right shape for the sensitivity function. He then lets the machine dig down as far as it can, and he saves the resulting compensator. Next, he fires up his automatic code generator to write the implementation code for the compensator, ready to run on his target microprocessor.


Figure 3. Sensitivity reduction at low frequency unavoidably leads to sensitivity increase at higher frequencies.

Figure 4. Sensitivity shaping automated by modern control tools.


He downloads that code automatically to the microprocessor and hits the power-on button on the control system. Indeed, this entire process can become so automatic and insulated that the designer may never look at what has actually happened to his control loop, and all too often the power-on creates a rude surprise.

Available Bandwidth

Many rude surprises that occur in automated design scenarios have to do with excessive bandwidth. The designer unwittingly allows the machine to dig too deeply, piling up dirt at high frequencies where it cannot be supported.

The notion that dirt piled at high frequencies needs support is not taken seriously enough in the theoretical community, even today. For example, an argument is sometimes made that the Bode integrals are not really restrictive because we only seek to dig holes over finite frequency bands. We then have an infinite frequency range left over into which to dump the dirt, so we can make the layer arbitrarily thin. The weakness of this argument is evident from standard classical theory. A thin layer, say with ln|s| = ε, requires a loop transfer function whose Nyquist diagram falls on a near-unit circle, centered at (−1 + j0) with radius ≈ (1 − ε), over a wide frequency range. This means that the loop cannot simply attenuate at high frequencies but must attenuate in a very precise way. The loop must maintain very good frequency response fidelity over wide frequency ranges.

But a key fact about physical systems is that they do not exhibit good frequency response fidelity beyond a certain bandwidth. This is due to uncertain or unmodeled dynamics in the plant, to digital control implementations, to power limits, to nonlinearities, and to many other factors. Let us call that bandwidth the “available bandwidth,” Ωa, to distinguish it from other bandwidths such as crossover or 3-dB magnitude loss. The available bandwidth is the frequency up to which we can keep gk(jω) close to a nominal design and beyond which we can only guarantee that the actual loop magnitude will attenuate rapidly enough (e.g., |gk| < δ/ω²). In today’s popular robust control jargon, the available bandwidth is the frequency range over which the unstructured multiplicative perturbations are substantially less than unity.

Note that the available bandwidth is not a function of the compensator or of the control design process. Rather, it is an a priori constraint imposed by the physical hardware we use in the control loop. Most importantly, the available bandwidth is always finite.

Given all this, Bode’s integrals really reduce to finite integrals over the range 0 ≤ ω ≤ Ωa, i.e.,

  ∫₀^Ωa ln|s(jω)| dω = δ  (stable loops)
  ∫₀^Ωa ln|s(jω)| dω = π Σ_{p∈P} Re(p) + δ  (unstable loops).

All the action of the feedback design, the sensitivity improvements as well as the sensitivity deteriorations, must occur within 0 ≤ ω ≤ Ωa. Only a small error (δ) occurs outside that range, associated with the tail of the complete integrals. Since the details of this tail are not guaranteed by the design, the error can be either positive or negative. The design only guarantees that it will be small.

As an aside, you probably know that other constraints imposed by right-half-plane zeros also give rise to finite integrals, although in different variables [4]. I did not choose to use these additional integrals here because they still require infinite available bandwidth, and I believe that finiteness of available bandwidth is a more significant concept in control than nonminimum phaseness, even though it is not nearly so elegant.

An Explanation of the Broomstick

These last integrals bring us to the point where we can explain the broomstick balancing illustration in a Bode-like way. First, what is the available bandwidth of the feedback loop in that experiment? Looking at the plant alone, it is fairly high. The stick is stiff, air drag is negligible, and little else prevents the stick from moving as required. The compensator, however, is another matter. Its physical implementation by a human operator has many complex limitations associated with perception, computation, and actuation of limbs. Many years of study and experimentation have gone into the characterization of these limitations, especially for piloting tasks in military airplanes. Since we obviously cannot go through all that here, let us simply agree that

16 IEEE Control Systems Magazine August 2003

Figure 5. Sensitivity constraints as a function of broomstick length.


the compensator is good for a frequency range up to ≈2 Hz (say, 10-15 rad/s) and that its control strategy is to keep sensitivity as small as possible over that range (i.e., keep the loop’s Nyquist diagram as far away from the critical point as possible everywhere).

The curves in Figure 5 show how well the closed-loop system will perform. These curves are simply restatements of the finite Bode integral, with sensitivity assumed equal to a constant minimum achievable value over 0 ≤ ω ≤ Ωa and equal to unity elsewhere. The minimum achievable value is an exponential function of the ratio of unstable pole to available bandwidth (p/Ωa), and with p given by √(g/L), it is an explicit function of the broomstick length. Notice that a dramatic increase in sensitivity occurs below a foot and a half. These large sensitivities put the loop close to the critical point, and even minor imperfections in the implementation will cause instability. That, quite simply, is the reason we humans have trouble balancing short sticks, and other controllers with similar available bandwidth limits have trouble as well. This reason follows directly from the most fundamental conservation laws of feedback, Bode’s integrals, and needs none of our modern mathematical machinery. More importantly, none of the modern results overcome these limitations. They can (and do!) hide them from us, but they do not remove them.
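Curves like those in Figure 5 can be reproduced in a few lines. With constant sensitivity below Ωa and unity above, the finite integral gives ln(s_min)·Ωa = π·p, i.e., s_min = exp(π√(g/L)/Ωa). The bandwidth value of 12.5 rad/s below is my own midpoint of the 10-15 rad/s human range quoted above; the lengths are illustrative.

```python
import math

g = 32.2          # acceleration of gravity, ft/s^2 (lengths in feet)
Omega_a = 12.5    # rad/s: midpoint of the 10-15 rad/s human bandwidth

for L in (3.0, 1.5, 1.0, 0.5):  # broomstick lengths, feet
    p = math.sqrt(g / L)                      # unstable pole, rad/s
    s_min = math.exp(math.pi * p / Omega_a)   # minimum constant sensitivity
    print(f"L = {L:4.1f} ft  p = {p:4.1f} rad/s  min sensitivity = {s_min:5.1f}")
```

The minimum sensitivity climbs steeply as the stick drops below about a foot and a half, matching the hands-on experience described earlier.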

The X-29 Airplane Story

Although the broomstick-balancing illustration itself is only a toy, it represents some very serious physical systems with similar dynamics. In addition to the launch booster analogy already mentioned, there is also a direct analogy to unstable airplanes. As an example of the latter, I would like to tell a story about the X-29, the forward-swept-wing research airplane shown in Figure 6.

This airplane was built to demonstrate basic aerodynamic performance improvements that might be gained from new composite materials tailored specifically for aerodynamic efficiency. The wings are swept forward instead of aft, so that bending enhances lift instead of decreasing it at high angles of attack; a large canard surface is placed close in to take advantage of favorable interactions with the wing; and there are several other features as well. Once demonstrated, these technologies will make their way into next-generation airplanes.

The airplane was built by Grumman Aircraft Company, under U.S. Air Force, DARPA, and NASA sponsorship, and underwent successful flight tests for several years. Honeywell supplied the control hardware and software. Control laws were designed at Grumman, with supporting design activity at Honeywell, as well as several other places.

Although the airplane’s controls were not a major focus of the flight demonstration, they are of interest here because they illustrate how extreme our worship of formal methods has become. You see, all of the various control design teams used modern digging machines early in the design process. As a result, we were well insulated from the fundamental difficulties imposed by the airplane’s violent open-loop instability. We discovered only at the last moment that the vehicle was almost too unstable to control with the given hardware.

I want to relate this story by talking first about what makes the airplane unstable, then about the hardware features that restrict available bandwidth, and finally, I will put these together with Bode’s integral to show the airplane’s fundamental control limitations.

Static Instability

An airplane is open-loop unstable when its center of pressure (cp, the effective point of action of lift forces) is located ahead of its center of gravity (cg). Since lift forces grow in direct proportion to pitch (nose up or down) attitude, any initial pitching motion changes lift, which acts through the cp – cg offset to produce moments in the same direction, and the attitude diverges. In aeronautical circles, this condition is called static instability. The associated linearized dynamics look very much like broomstick-balancing equations. There are two roots—one stable, one unstable—approximately equal in magnitude.

Static instability does not happen arbitrarily. It is deliberately designed into an airplane by locating lifting surfaces


Figure 6. NASA X-29 forward-swept-wing aircraft (photo courtesy of NASA).

We will consider unstable to be synonymous with dangerous.


and distributing mass appropriately. In the years before full-authority automatic flight controls, most airplane designs constrained the placement of these elements to ensure stability over all flight regimes and all loading conditions. (The Wright brothers’ airplane is a notable exception.) Today, however, when automatic controls are accepted and trusted in so many applications, there are important benefits to be gained by making the basic design unstable. For instance, note that a stable airplane requires a tail to balance the moment produced by lift acting through the cp – cg offset. The force produced by the tail is down, opposing the lift and making the overall configuration aerodynamically less efficient. The airplane also needs to carry the weight of the tail, which reduces payload. The less stable an airplane is, the smaller these performance and weight penalties. There are also other reasons for wanting instability, having to do with maneuverability and speeds of command response.

In the case of the X-29, the benefits of instability were desired in the transonic and supersonic flight regimes, so the airplane was designed to be modestly unstable in those regimes. Unfortunately, there is a basic aerodynamic phenomenon that moves the center of pressure of a lifting surface dramatically aft as speeds go from sub- to supersonic. As a result, the X-29’s slight instability at supersonic speeds turns into a much more dramatic instability at subsonic speeds. Indeed, the airplane’s real pole is as large as +6 rad/s in some flight regimes.

Recalling the broomstick's formula p = √(g/L), flying this airplane manually corresponds to balancing a 1-ft-long stick. It is not something we can expect a pilot to do without automatic assistance, at least not for very long.
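The broomstick correspondence is easy to check numerically. A minimal sketch (the value of g in ft/s² is the standard figure, not taken from the article):

```python
import math

G = 32.2  # gravitational acceleration, ft/s^2

def broomstick_pole(length_ft):
    """Unstable pole (rad/s) of an inverted stick of length L: p = sqrt(g/L)."""
    return math.sqrt(G / length_ft)

def equivalent_stick_length(p):
    """Invert p = sqrt(g/L) to get the stick length matching a given pole."""
    return G / p**2

# The X-29's +6 rad/s pole corresponds to roughly a 1-ft stick:
print(round(equivalent_stick_length(6.0), 2))  # 0.89 ft
```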

X-29 Available Bandwidth

As with the broomstick, control difficulties associated with the X-29's unstable pole should not arise merely because the pole is large. Rather, they should arise if the pole is large compared with available bandwidth. Some of the major hardware elements in the control loop that limit this bandwidth include the following:

• Sensors: rate gyros and accelerometers, used for inner-loop stabilization. Their bandwidths, measured in the usual 3-dB-gain sense, are typically 120 rad/s or more.

• Control processors: digital computer systems sampling sensor data and computing surface commands at 80 Hz. This update rate can pass signals with good fidelity up to 30-40 rad/s (two to three samples per radian).

• Actuators: high-pressure hydraulic systems with servos to position each aerodynamic control surface. For the pitch axis, control surfaces include the canard, inboard and outboard flaperons along the wing, and strakes at the tail. The servo bandwidths, again measured by 3-dB gain, are approximately 20 rad/s. Basically, each of the servos is a position feedback loop around a hydraulic ram (an integrator). This gives a dominant first-order-lag response that retains its characteristics up to perhaps 70-80 rad/s, where valve dynamics, fluid compressibility, and local structural dynamics begin to take effect. The bottom line is that the available bandwidth of actuators is approximately 70 rad/s, at least for small signals.

• Aerodynamics: flow conditions around the airplane that map its geometric configuration (e.g., net orientation and control surface positions) into forces and moments. At low frequencies, this map is algebraic and well known, but because air itself has mass and momentum, the map becomes dynamic and poorly known on time scales comparable to characteristic length divided by velocity. For a 2-ft surface traveling at 200 ft/s, that time scale is 0.01 s, so the available bandwidth for aerodynamics is approximately 100 rad/s.

• Airframe: the mechanical structure linking the attachment points of actuators to the attachment points of sensors. (These are the important structural points affecting control. Other disciplines are, of course, interested in additional attachment points, such as where the wings are attached, where the pilot sits, etc.) The structure is rigid at low frequencies, but it begins to bend and flex dramatically at higher frequencies. Typical fighter-sized airplanes have a first fuselage bending mode at ≈7.0 Hz (≈40 rad/s). The frequency of this mode, as well as its mode shape and damping characteristics, moves around substantially with mass distribution (fuel, payloads) and with dynamic loading (maneuvers). This makes it very difficult to maintain good frequency response fidelity beyond the first mode frequency.

Figure 7. Prototype X-29 sensitivity function.

Figure 8. Minimum X-29 sensitivities.
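The limits in the list above can be tabulated to find the binding constraint. A minimal sketch using the approximate figures quoted for each element (the processor entry takes the upper end of the quoted 30-40 rad/s range):

```python
# Available-bandwidth limits for the X-29 control loop, in rad/s,
# using the approximate numbers from the list above.
bandwidth_limits = {
    "sensors": 120.0,
    "control processors": 40.0,
    "actuators": 70.0,
    "aerodynamics": 100.0,
    "airframe": 40.0,
}

# The binding constraint is simply the smallest of the individual limits.
available_bandwidth = min(bandwidth_limits.values())
print(available_bandwidth)  # 40.0
```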

This list shows that the most severe limitation on available bandwidth for the X-29 is approximately 40 rad/s and comes from the airplane's mechanical structure and from the sampling rate of its computers. I have elected to describe the various other items, at least briefly, to emphasize the point that real physical systems have a multitude of limitations on available bandwidth, not just one or two that can be easily pushed out or removed. We cannot simply ignore them, as we often do in formal theories. Instead, we should fashion theories to clearly expose the consequences of these limitations, and we should honestly tell ourselves and our employers what the consequences are.

As mentioned earlier, in the initial X-29 designs, the various design teams used some of the formal digging machines, which hide these consequences. Not until late in the design process, and not without heated arguments among designers, did we all agree that they are indeed real and unavoidable.

X-29 Limitations

Using Bode's integral, the limitations are easy to demonstrate. We have a plant with an unstable pole, p = 6 rad/s, and with an available bandwidth, Ωa = 40 rad/s. We want to achieve a sensitivity function shaped approximately like the prototype shown in Figure 7. This prototype has small sensitivity at low frequencies, rising with a +1 slope up to Ω1. It then stays flat up to Ωa, so as to pay as small a penalty as possible, consistent with Bode's integral. Finally, it returns to unity (0 dB) beyond Ωa. A formula for the smallest sensitivity penalty, smin, is easy to derive directly from the integral and is also shown in the figure.

Some smin curves based on this formula are shown in Figure 8. We see that the penalty level rises as the airplane becomes more unstable and also as the frequency Ω1, up to which we want good performance, increases. The design point for the X-29 is shown at p = 6 rad/s and Ω1 = 3 rad/s. The resulting smallest sensitivity penalty is ≈ 1.75. Classical designers will immediately recognize this to be marginal because traditional phase margins will not be satisfied. These margins are given in terms of smin by well-known formulas [e.g., PM = 2 sin⁻¹(1/(2smin))]. Boundaries for standard military flight control specifications, a 45° phase margin and a 6-dB gain margin, are shown in the figure.
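The smin formula referenced in Figure 7 (the figure itself is not reproduced here) can be recovered by evaluating Bode's integral, ∫₀^∞ ln|S| dω = πp, over the prototype shape. A sketch, consistent with the ≈1.75 penalty and the ~35° margin quoted in the text:

```python
import math

def s_min(p, omega_1, omega_a):
    """Smallest flat sensitivity level for the Figure 7 prototype shape:
    |S| rises with +1 slope to omega_1, stays flat at s_min to omega_a,
    then returns to 1. Evaluating Bode's integral over this shape gives
    omega_a * ln(s_min) - omega_1 = pi * p."""
    return math.exp((math.pi * p + omega_1) / omega_a)

def phase_margin_deg(smin):
    """Classical bound PM = 2*asin(1/(2*s_min)), as quoted in the text."""
    return 2 * math.degrees(math.asin(1.0 / (2.0 * smin)))

smin = s_min(p=6.0, omega_1=3.0, omega_a=40.0)  # X-29 design point
print(round(smin, 2))                 # 1.73, i.e., the ~1.75 penalty
print(round(phase_margin_deg(smin)))  # ~34 deg, short of the 45-deg spec
```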

Notice that we have not actually done any design work. We have made no reference to design methods or to big ditch-digging machines. Instead, we have just used a basic Bode integral calculation to determine that the situation is marginal.

To confirm that the situation is marginal, Figure 9 shows a Bode diagram of a loop transfer function corresponding to a realizable version of the prototype shape in Figure 7. This realizable version was found by taking a fourth-order approximation of Figure 7 (basically rounding the corners), selecting one free parameter (the new smin) to satisfy Bode's integral, and then solving explicitly for the loop transfer function (i.e., gk(s) = s(s)⁻¹ – 1). Note that the resulting loop is well rolled off at the 40-rad/s available bandwidth and that 35° is the largest phase margin achieved by the prototype.

Based on what we have just shown, no controller can make smin smaller under the given constraints. Thus, we should not be surprised that no acceptable controller was ever found, even with several design teams working on the problem. Indeed, the airplane flies today only because special specification relief was granted. As described in [5], the airplane's stability margins were actually measured explicitly in flight. Two curves from these flight tests are shown as Bode diagrams in Figure 10, one with slightly different control gains than the other. Note the remarkably close correspondence between the Bode-integral-derived prototype and the actual final flight control loop.

Based on this experience with the X-29, it is unlikely that airplanes now being developed, such as the Saab JAS-39, the Advanced Technology Fighter (ATF), and the National Aerospace Plane (NASP X-30), will be so violently unstable. A good rule of thumb, seen from Figure 8, is that available bandwidth should exceed the airplane's unstable pole by at least a factor of ten.


Figure 9. Prototype Bode diagram for the X-29.


Fact 2: Operationally Critical Controllers

We now turn to the second fact about unstable/dangerous systems that I want to emphasize, namely, that controllers for these systems are operationally critical. Controllers must work properly for the systems to be safe. In the case of airplanes, such controllers are called "flight critical." Bluntly stated, this means that if the controller fails, you eject or you die.

Although this may be obvious, it is well worth some discussion. In effect, today's control engineers working with dangerous applications must design and build control systems that are so nearly perfect that we are willing to stake our fortunes and sometimes our lives on them.

More often than not, the backbone of a control system involves electronic equipment, and it is safe to say that most of us have had experiences with such equipment—with our TVs, stereo systems, personal computers, etc.—that do not encourage excessive risk taking. Even the very best equipment fails at a rate of 10⁻³ to 10⁻⁴ failures per operating hour. The job of control hardware designers, therefore, is to build very reliable systems out of unreliable components. This can be done with careful use of redundant elements arranged in architectures that let the overall hardware/software system maintain its function even though components within it have failed. Indeed, an entire engineering subspecialty has evolved to handle such design problems. It is not my intent to cover much of this subspecialty here, but we should be aware of some of its basic concepts and issues. Certainly we need to recognize that this work is every bit as important to control as are the control theories and design tools that are our own specialty.

Perhaps the most basic architectural concept for building reliable systems out of unreliable components is shown in Figure 11. We simply replicate the entire hardware set of a control loop (sensors, processors, and input/output devices) in several channels and arrange for some sort of voting logic to keep only good channels in control. A simple majority-voting scheme suggests that it takes 2f + 1 channels to still be operational after f failures. This is because there are still f + 1 channels that agree and can outvote the f failed ones. So if a system must be operational after one failure (so-called fail-op), we get a three-channel or triplex architecture. If operation is required after two arbitrary failures (fail-op squared), we get a five-channel architecture, and so on. (In practice, it is customary to claim fail-op squared capability even with only four channels. This relies on an unstated assumption that the two failures are not simultaneous and that the first has been isolated before the second occurs.)

The statistical reliability of the architecture also increases with the number of channels, as given by the formulae in the figure. Qc is the failure rate of a channel and Q is the overall failure rate of the implementation.
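The figure's formulae are not reproduced here, but a standard sketch of the calculation, assuming independent channel failures with probability Qc, goes as follows:

```python
from math import comb

def majority_failure_prob(qc, f):
    """Failure probability of a (2f+1)-channel majority-voting architecture:
    the system fails once more than f of its channels have failed,
    assuming independent channel failures with probability qc each."""
    n = 2 * f + 1
    return sum(comb(n, k) * qc**k * (1 - qc)**(n - k)
               for k in range(f + 1, n + 1))

qc = 1e-4  # per-channel rate from the text (failures per operating hour)
print(f"{majority_failure_prob(qc, f=1):.1e}")  # triplex (fail-op): 3.0e-08
print(f"{majority_failure_prob(qc, f=2):.1e}")  # five-channel: 1.0e-11
```

The leading term, C(2f+1, f+1)·Qc^(f+1), shows why each added pair of channels buys several orders of magnitude in reliability.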


Figure 10. X-29 flight data (courtesy Mr. J. Gera, NASA).

Figure 11. Basic redundant control architecture.


Voting

Before this basic concept sinks in too firmly, let me caution that the picture presented in Figure 11 hides most of the difficulties and issues that make redundancy management a challenging engineering discipline. In particular, the figure makes reference to "some sort of voting scheme" to distinguish good channels from bad. The scheme is not shown, but consider how it might be built. The basic idea is to compare outputs of various channels and to proceed with the majority. However, the outputs are not discrete yes-or-no votes, but are digital words, 8 to 12 bits long. In separate channels, the words will be different because the sensors read different signals and processors run at different rates, and perhaps for other reasons as well. So we cannot simply vote, but must make comparisons against preset thresholds instead.

This raises the first big issue: How large must the thresholds be? Consider a case where each processor implements a control law that includes a sophisticated control algorithm—say, an adaptive law with explicit identification (e.g., parameter estimates evolving according to dφ/dt = φ + Lᵀe). Each channel executes this law with different noisy sensor data. The channel differences will then consist of free integrations driven by noise, and from our first course in stochastic processes, these differences behave like Brownian motion and will exceed any particular threshold infinitely often. This is not good for voting!

Note that the fancy control law is not the culprit here. The same problem arises for any algorithm with free integrations or unstable dynamics, as well as for control laws with mode logic, saturation protections, and other discrete switches. It arises with any control algorithm for which small input differences can produce large output differences.
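The Brownian-motion behavior of cross-channel differences is easy to demonstrate in simulation. A toy sketch (the signal and noise levels are illustrative, not from the article):

```python
import random

random.seed(1)

# Two channels integrate the same error signal, but each sees its own
# sensor noise. Their integrator states drift apart as a random walk.
x_a = x_b = 0.0
threshold = 1.0
exceeded_at = None
for step in range(200_000):
    e = 0.0                              # true error signal (illustrative)
    x_a += e + random.gauss(0.0, 0.01)   # channel A integrator + its noise
    x_b += e + random.gauss(0.0, 0.01)   # channel B integrator + its noise
    if exceeded_at is None and abs(x_a - x_b) > threshold:
        exceeded_at = step

# A zero-mean random walk exceeds any fixed threshold eventually, so the
# healthy channels would be voted against each other despite no failure.
print(exceeded_at is not None)
```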

A common solution to this voting issue is to utilize cross-channel communication to "equalize" all channels (e.g., force them to synchronize clocks and to process identical data). Voting then reduces to simple bit-by-bit comparisons. Unfortunately, this solution only peels back the first layer of difficulty, because the communications hardware needed to equalize channels must also be very reliable. This requires still more levels of redundancy, fortunately restricted to fewer components [6]. Even with appropriately reliable communications, however, there remain certain failures that cannot be detected, namely those that produce identical symptoms in each channel.

Failures with identical symptoms are called "generic faults." At first glance they would appear to be unlikely (two or more identical failures at the same time), until we realize that undiscovered design errors in hardware or software have precisely this characteristic. In fact, the undeniable possibility of such errors has caused most bit-by-bit synchronous architectures to also carry one or more backup channels, using different control laws, different hardware, and different software. The X-29, for example, has a triplex digital system as its primary controller, but carries three backup channels with very basic control laws implemented in analog electronics. The Space Shuttle has a quadruplex digital system as its primary and a fifth dissimilar digital channel as a backup.

Heterogeneous Redundancy

As an alternative to identical primaries with dissimilar backups, other architectures use dissimilar primary channels—different hardware, software, and control laws. Dissimilar channels make the voting problem more difficult (how to set thresholds?), but they alleviate problems with generic faults.

This alternative, incidentally, is one in which some of us have placed a lot of trust. It is used on the Airbus A320 transport aircraft, shown in Figure 12. This airplane was certified for commercial service in 1989 and is now flying regular routes in Europe and the United States. Although the hardware architecture of the A320 does not fit exactly into the prototypes we have looked at, it basically consists of four dissimilar channels. Two channels are built out of one brand of microprocessor, and two others are built out of a second brand. The software in each of these similar hardware channels is different, developed by different design teams using different programming languages. Unfortunately, I do not know much about the control laws themselves, but presumably they are similar enough to permit reasonably tight voting thresholds.

In the context of this article, it is important to point out that the A320 is not statically unstable. However, it is a fly-by-wire airplane, which means that there are no mechanical connections between the pilot's input device (a sidestick hand controller, not the traditional control wheel) and the main control surfaces. Only electronic connections exist through the control computers. If these connections fail, the airplane can continue to cruise and can be brought down with the aid of a low-authority mechanical trim system. On the other hand, complete loss of control functions during critical phases of autoland (e.g., just before touchdown) could well be catastrophic.


Figure 12. Airbus A320. (©AIRBUS, 2003)


So in the spirit of my discussions so far, the A320 qualifies as a dangerous system that we should treat with appropriate respect. And the A320 is only the beginning. Both Boeing and McDonnell Douglas expect to build their next transports as fly-by-wire also. It is perhaps comforting to know that neither of these manufacturers expects to build statically unstable commercial transports in the immediate future. Designers at Boeing have told me that unstable fly-by-wire transports are not likely unless they are supersonic. The argument is the same as for the X-29. Relaxed stability for aerodynamic efficiency at supersonic speeds means pronounced instability at subsonic speeds.

Paper studies of new supersonic transports are ongoing but are not likely to produce commercial vehicles until the next century. But the next century is not far away! Engineers who will design the controls for those vehicles are being trained by us right here today. I, for one, still expect to be traveling when these new vehicles come online, so I have a vested interest that we teach the fundamentals well, not just the formal tools and the mathematics!

Fact 3: Local Stability

Finally, let us turn to the third fact I want to review: Closed-loop systems with unstable components are only locally stable. This should be well known to all of us. An unstable system cannot be stabilized globally with bounded control authority. For linear systems, this follows directly from the equations. Consider motion along an unstable eigenvector. If the control signal is constrained to have a bounded component along that vector, then there exist initial conditions large enough to keep the state derivative positive, and the system diverges. This argument holds both for magnitude limits on the controls and for rate limits.
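For a scalar unstable plant the argument can be seen directly in simulation. A sketch with illustrative numbers (ẋ = px + u with |u| ≤ u_max, so no controller can recover from |x₀| > u_max/p; the gain k and limits are assumptions, not from the article):

```python
def simulate(x0, p=6.0, k=12.0, u_max=5.0, dt=1e-3, t_end=3.0):
    """Euler simulation of xdot = p*x + u with saturating feedback
    u = clip(-k*x, -u_max, u_max). Returns the final state."""
    x = x0
    for _ in range(int(t_end / dt)):
        u = max(-u_max, min(u_max, -k * x))
        x += (p * x + u) * dt
    return x

# Region of attraction boundary is |x0| = u_max / p = 0.833:
print(abs(simulate(0.5)) < 1e-3)   # inside the region: True (converges)
print(abs(simulate(1.0)) > 100)    # outside: True (diverges regardless of k)
```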

An example of local stability due to rate limits is given in Figure 13. This figure shows responses of the prototype X-29 controller from Figure 9 implemented with rate limits and excited by several different step commands. We see that small commands give linear responses. There is about 100% overshoot, consistent with the (unavoidably) large sensitivity of the design. As commands increase, the control rate saturates, and with very little additional command, the system diverges. Please keep the general characteristics of these traces in mind—the triangular waveform of the control at the edge of instability and finally the dramatic divergence with control rate fully saturated. Later I will present some very similar plots, only much more frightening.

For control engineers, local stability means that special care must be taken to avoid excessive commands (i.e., limits on operator inputs). We must also verify that worst-case upsets due to external disturbances do not drive the closed-loop system out of its region of attraction.

These observations bring us back to our first example, the Saab JAS-39 airplane in Figure 1. This airplane was statically unstable, with divergence rates about half as severe as the X-29. Like the X-29, it carried a triplex digital control system with three analog backup channels. As one might expect, extensive investigations have been going on ever since the airplane's accident, with final official conclusions still not out. However, various reports in the media suggest that the control hardware did not malfunction. Instead, unstable behavior of the control laws in the face of surface rate limits is being cited as the cause. The control laws have already been redesigned to cure the problem (see [7], for example).

Figure 13. Simulation traces for the X-29 demonstrating rate limit constraints.

Figure 14. Schematic diagram of Chernobyl Unit 4 reactor.

Heuristically, we can think of rate limits as acting to reduce available bandwidth for large signal levels (i.e., the controls can only follow position commands whose frequency-amplitude product is less than the rate limit). This suggests that the effects of rate limits can indeed be alleviated by redesigning control laws for lower assumed available bandwidth. According to Bode's integrals, however, such redesigned controllers will achieve poorer sensitivity performance, especially for unstable problems. So the improvements that can be gained for the JAS-39 will necessarily be limited.
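The frequency-amplitude heuristic follows from differentiating a sinusoidal command: the peak rate of A·sin(ωt) is A·ω. A one-line sketch (the rate limit value is illustrative):

```python
def max_trackable_frequency(amplitude, rate_limit):
    """Largest sinusoid frequency (rad/s) a rate-limited surface can follow
    without saturating: the peak rate of A*sin(w*t) is A*w <= rate_limit."""
    return rate_limit / amplitude

# Doubling the command amplitude halves the usable bandwidth:
print(max_trackable_frequency(1.0, 60.0))  # 60.0
print(max_trackable_frequency(2.0, 60.0))  # 30.0
```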

Once again, it is evident that unstable/dangerous systems must be treated with more than casual respect.

The Chernobyl Story

Finally, to make this point most dramatically, let me recount the story of the nuclear accident at Chernobyl (Figure 2). On 28 April 1986, news came out of Ukraine that a nuclear power plant had destroyed itself two days earlier and had released significant amounts of radioactive contaminants over a wide area. Short of nuclear war or impending long-term climate changes, this kind of accident certainly looms large in the public mind as one of the more serious threats to our well-being.

Whether we choose to recognize it or not, control played a major role in that accident. The plant's hardware did not fail. No valve hung up, no electronic box went dead, and no metallurgical flaw caused a critical part to break. Instead, the reactor control system systematically drove the plant into an operating condition from which there was no safe way to recover. This is true, at least, if we count the control system's hardware, its human operators, and its operating policies as part of the system.

The Plant

To tell this story, we need a few facts about the plant itself. The information comes from a seminar given by Herbert Kouts of Brookhaven National Laboratory summarizing the accident [8]. I want to emphasize, however, that the very simplified control interpretations I am about to make are my own, so the blame for any errors and absurdities is mine as well.

The plant at Chernobyl consisted of four units, each laid out as shown in Figure 14. Unit 4 is the one that experienced the accident. It had a boiling water reactor that took in water at the bottom and heated it to produce a mixture of water and steam. The steam was separated in steam drums and drove conventional turbine-generators to supply power to the distribution grid. There were two turbines rated at 500 MWe (electrical) each and two complete water/steam flow circuits through the reactor. The water/steam circuits operated at about 1,000 psi pressure, with a boiling temperature inside the reactor of about 540 °F.


Chernobyl: Events of the Night of 25 April 1986

1) Power had been brought down during the previous day to around 700 MWt, the edge of the legal low-power operating limit, ready to run the test.

2) A switchover was made to different flux detectors, better suited for power sensing at this low level, which was apparently a standard procedure. During the switchover, however, the operator neglected to engage the power-hold mode. This oversight set up the conditions for the accident.

3) Without automatic power hold, reactor power dropped rapidly to 30 MWt. The operator halted this drop and recovered to 200 MWt by withdrawing control rods.

4) With automatic control of power and manual control of feedwater, the plant successfully maintained 200 MWt. Because feedwater flow settings were high, however, the steam void in the reactor dropped to zero. Lower reactivity was compensated for by pulling even more control rods. Only six to eight rods remained in the reactor, far fewer than the minimum number (30) required by operating regulations.

5) To avoid automatic shutdown triggered by out-of-range steam drum and feedwater signals, the operator disabled the associated automatic scram control circuits.

6) Recognizing excessive feedwater, the operator finally reduced pumping rates. The steam void recovered, producing increased reactivity. Automatic power controls responded to keep power regulated. This response was rate limited and barely stable.

7) Next, in preparation for the actual intended test, the operator disabled the automatic scram circuits associated with turbine trip signals.

8) Finally, the test was actually started. Steam was cut to the test turbine. The steam void began to rise, and the power controller responded by inserting all three available banks of control rods at maximum rates. This was too little control authority, applied too slowly. A huge power rise followed, up to an estimated 300,000 MWt (100 times rated capacity). The reactor was destroyed. Steam at primary working pressure was released into the reactor containment chamber. The 1,000-ton cover plate of the chamber blew off, and the radioactive debris was exposed to the environment.


The reactor itself consisted of a graphite core with many pressure tubes (≈1,600) through which the water flowed to be boiled. Each tube contained several fuel rods of uranium oxide that produced the nuclear fission reaction. The generated thermal power was measured by ion flux devices and controlled by several banks of control rods, inserted into or withdrawn from the core to moderate the reaction. A second major control input was provided by the feedwater pumps, which regulated flow rates into the bottom of the reactor.

The reason this plant is of interest is that its reactor design has a so-called positive void coefficient. The void refers to the ratio of steam to water in the pressure tubes. The void coefficient refers to the gradient of thermal power with respect to the void. In Chernobyl's design, this coefficient is positive at low power, leading in the linearized sense to a first-order, open-loop instability. Heuristically, as more steam fills the tubes, the reaction becomes stronger (water is apparently a better moderator than steam). This produces more steam, which produces more reactivity, and so it goes.

At nominal operating power, 3,200 MWt (thermal), the positive void coefficient is swamped by other effects and the reactor design is stable. For this reason, operating policies only permitted sustained operation above a minimum power threshold of 700 MWt. Of course, to get the reactor started and stopped, it had to transition through the unstable region.

An Experiment

The accident occurred in connection with an experiment conducted on the electrical side of the plant. The operators wanted to evaluate a scheme for drawing electrical power from a turbine-generator coasting down after a trip from the main power grid. During such a trip, electrical power at the plant is momentarily lost, the reactor scrams (shuts down) automatically, and backup diesels are started to resupply local electrical equipment at the plant. The new scheme would go through this sequence without local power interruption.

The test was scheduled in conjunction with a routine reactor shutdown for maintenance. The intent was to slowly bring power down into the 700- to 1,000-MWt range, load the test turbine-generator with some of the circulation pumps, cut steam to the turbine, and scram the reactor.

A summary of the events of that evening is given in the sidebar. Traces of the last few seconds of this event are shown at the tail end of Figure 15 and continue on an expanded time scale into Figure 16. We can see a slight steam void drop as pressure increased momentarily after steam was cut to the turbine. The steam void then began to build steadily, the control rods dropped in at maximum rate, and finally the power rose uncontrollably. Simple time-to-double calculations approximated from the final power rise place the reactor's unstable pole between 1.5


Figure 15. Trace of reactor events preceding the Chernobyl disaster (feedwater, flow, control rods, power, and steam void fraction; 26 Apr, 1:21:30-1:23:30).

Figure 16. Trace of power increase (rising as ~e^(2t)) associated with the Chernobyl disaster (26 Apr, 1:23:30-1:23:44).


and 2.0 rad/s, equivalent to a 10-ft broomstick. Again, except for signs, these traces are very similar to Figure 13.

Back to the Punch Line

Why have I taken the time to describe these details of the Chernobyl accident? Certainly I do not want to focus on its tragic consequences, nor do I want to make a case for antinuclear advocates. I simply find this accident to be the most compelling example of blatant disregard for the basic facts I have restated here. We all claim to know these facts, to respect them, and to teach them. Yet the operators at Chernobyl did not appear to know them. Indeed, it can be argued that the plant's designers did not know them either. How else would they be content to write paper regulations prohibiting operation at low power and not insist on reliable hardware to enforce such regulations? On at least two occasions, the operators were able simply to shut off critical safety circuits. Had these shutoffs not been made or not been possible, there would have been adequate time to scram the reactor automatically and to save the plant, even after item 6 of the fateful sequence of events. In redundancy management terms, this possibility qualifies as a generic fault, a single design flaw able to defeat the safety of the entire control architecture.

The Chernobyl accident also provides the most compelling motivation I can think of for us to improve the way we do our job. This reactor control application, as well as the airplane applications I talked about earlier, illustrates that society does indeed permit control engineers to operate dangerous systems. The number of such applications increases steadily. Not all of them have such dramatic consequences as Chernobyl, but they are dangerous nevertheless. Control designers and operators appreciate and respect the practical, physical consequences of these applications only to the extent that we, their teachers, value and instill that appreciation. Unfortunately, our behavior over the past few years at conferences, in our journals, and I suspect also in our lecture halls, places little value upon it. Instead, our behavior tends to uphold mathematical rigor as the only virtue to strive for in control. This trend is incompatible with the trend in applications.

We must place renewed emphasis on stating and teaching the principles of our subject clearly and well. The applications out there are simply too serious for us to hide from responsibility under a cloak of mathematics.

Historical Notes

Much has changed since this article was first presented as the Bode Prize Lecture in December 1989. Here are some brief notes on how certain topics in the lecture have played out:

1) The SAAB JAS-39 accident was indeed attributed to unstable oscillations involving actuator saturations. Control laws were redesigned and retested. The new design experienced a second accident in August 1993, for similar causes. Another redesign followed, this time successful, and the aircraft reached a production milestone with the delivery of 30 aircraft completed in 1996. A prototype of the USAF F-22 fighter also experienced unstable oscillations in 1992, barely avoiding loss of the aircraft.

2) The X-29 research aircraft completed its flight test program without incident. It was retired in 1992 after 374 test flights.

3) After a shaky start, the Airbus A320 fly-by-wire commercial transport has accumulated a strong record of safety and performance and continues to be in service worldwide. It has been joined by Boeing's 777, also fly-by-wire, with a similar strong safety and performance record. No supersonic transports are foreseen in the near future. Design studies during the late 1990s showed their economics to be unattractive using currently projected material and propulsion technologies.

4) The sarcophagus at Chernobyl continues to stand as a stark reminder of the need to "respect the unstable."
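Historical note 1 above attributes two flight accidents to unstable oscillations involving actuator saturations. The underlying mechanism can be seen in even the simplest setting. The sketch below (purely illustrative; it has nothing to do with the actual JAS-39 or F-22 control laws, and all parameter values are arbitrary) simulates a scalar unstable plant x' = a*x + u under a stabilizing feedback u = -k*x. With an actuator limit |u| <= u_max, the feedback can only overpower the open-loop divergence inside the region |x| < u_max/a; outside it, the same "stable" design diverges.

```python
def simulate(x0, a=1.0, k=3.0, u_max=1.0, dt=1e-3, t_final=5.0):
    """Forward-Euler simulation of x' = a*x + sat(-k*x); returns final state.

    Illustrative parameters only: a > 0 makes the plant open-loop
    unstable, k > a makes the unsaturated loop stable, and u_max
    limits the actuator authority.
    """
    x = x0
    for _ in range(int(t_final / dt)):
        u = max(-u_max, min(u_max, -k * x))  # saturating actuator
        x += dt * (a * x + u)
    return x

# Inside the saturation-limited region of attraction (|x0| < u_max/a = 1):
# the loop recovers and x decays toward zero.
print(abs(simulate(0.9)))
# Just outside it: the identical stabilizing law cannot hold the plant,
# and the state diverges.
print(abs(simulate(1.1)))
```

The point mirrors the article's theme: linear analysis certifies the unsaturated loop, but the unstable plant plus a hard actuator limit yields only a finite region of attraction, and any upset that leaves it is unrecoverable.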


Gunter Stein is a chief scientist (retired) of Honeywell Technology Center (now Honeywell Labs). He received a Ph.D. in electrical engineering from Purdue University in 1969. His technical specialization is in systems and control, particularly aircraft flight controls (fighters, transports, and experimental vehicles), spacecraft attitude and orbit controls, and navigation systems for strategic, tactical, and commercial applications. From 1977 to 1997, he also served as adjunct professor in electrical engineering and computer science at MIT, teaching control systems theory and design. He is also active in the development of computer aids for control system design. He was elected Fellow of the IEEE in 1985, awarded the IEEE Control System Society's first Hendrik W. Bode Prize in 1989, elected to the National Academy of Engineering in 1994, and was awarded the IFAC's Nathaniel Nichols Prize in 1999. He can be contacted at 168 Windsor Ct., St. Paul, MN 55112, U.S.A., [email protected].
