


Modern Control Engineering
Fifth Edition

Katsuhiko Ogata

Prentice Hall
Boston Columbus Indianapolis New York San Francisco Upper Saddle River
Amsterdam Cape Town Dubai London Madrid Milan Munich Paris Montreal Toronto
Delhi Mexico City Sao Paulo Sydney Hong Kong Seoul Singapore Taipei Tokyo

VP/Editorial Director, Engineering/Computer Science: Marcia J. Horton
Assistant/Supervisor: Dolores Mars
Senior Editor: Andrew Gilfillan
Associate Editor: Alice Dworkin
Editorial Assistant: William Opaluch
Director of Marketing: Margaret Waples
Senior Marketing Manager: Tim Galligan
Marketing Assistant: Mack Patterson
Senior Managing Editor: Scott Disanno
Art Editor: Greg Dulles
Senior Operations Supervisor: Alan Fischer
Operations Specialist: Lisa McDowell
Art Director: Kenny Beck
Cover Designer: Carole Anson
Media Editor: Daniel Sandin

Credits and acknowledgments borrowed from other sources and reproduced, with permission, in this textbook appear on the appropriate page within the text.

MATLAB is a registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098.

Copyright 2010, 2002, 1997, 1990, 1970 Pearson Education, Inc., publishing as Prentice Hall, One Lake Street, Upper Saddle River, New Jersey 07458. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use material from this work, please submit a written request to Pearson Education, Inc., Permissions Department, One Lake Street, Upper Saddle River, New Jersey 07458.

Many of the designations by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.

    Library of Congress Cataloging-in-Publication Data on File

    10 9 8 7 6 5 4 3 2 1

    ISBN 10: 0-13-615673-8

    ISBN 13: 978-0-13-615673-4


    Contents

    Preface ix

Chapter 1 Introduction to Control Systems 1
1-1 Introduction 1
1-2 Examples of Control Systems 4
1-3 Closed-Loop Control Versus Open-Loop Control 7
1-4 Design and Compensation of Control Systems 9
1-5 Outline of the Book 10

Chapter 2 Mathematical Modeling of Control Systems 13
2-1 Introduction 13
2-2 Transfer Function and Impulse-Response Function 15
2-3 Automatic Control Systems 17
2-4 Modeling in State Space 29
2-5 State-Space Representation of Scalar Differential Equation Systems 35
2-6 Transformation of Mathematical Models with MATLAB 39
2-7 Linearization of Nonlinear Mathematical Models 43
Example Problems and Solutions 46
Problems 60

Chapter 3 Mathematical Modeling of Mechanical Systems and Electrical Systems 63
3-1 Introduction 63
3-2 Mathematical Modeling of Mechanical Systems 63
3-3 Mathematical Modeling of Electrical Systems 72
Example Problems and Solutions 86
Problems 97

Chapter 4 Mathematical Modeling of Fluid Systems and Thermal Systems 100
4-1 Introduction 100
4-2 Liquid-Level Systems 101
4-3 Pneumatic Systems 106
4-4 Hydraulic Systems 123
4-5 Thermal Systems 136
Example Problems and Solutions 140
Problems 152

Chapter 5 Transient and Steady-State Response Analyses 159
5-1 Introduction 159
5-2 First-Order Systems 161
5-3 Second-Order Systems 164
5-4 Higher-Order Systems 179
5-5 Transient-Response Analysis with MATLAB 183
5-6 Routh's Stability Criterion 212
5-7 Effects of Integral and Derivative Control Actions on System Performance 218
5-8 Steady-State Errors in Unity-Feedback Control Systems 225
Example Problems and Solutions 231
Problems 263

Chapter 6 Control Systems Analysis and Design by the Root-Locus Method 269
6-1 Introduction 269
6-2 Root-Locus Plots 270
6-3 Plotting Root Loci with MATLAB 290
6-4 Root-Locus Plots of Positive Feedback Systems 303
6-5 Root-Locus Approach to Control-Systems Design 308
6-6 Lead Compensation 311
6-7 Lag Compensation 321
6-8 Lag–Lead Compensation 330
6-9 Parallel Compensation 342
Example Problems and Solutions 347
Problems 394

Chapter 7 Control Systems Analysis and Design by the Frequency-Response Method 398
7-1 Introduction 398
7-2 Bode Diagrams 403
7-3 Polar Plots 427
7-4 Log-Magnitude-versus-Phase Plots 443
7-5 Nyquist Stability Criterion 445
7-6 Stability Analysis 454
7-7 Relative Stability Analysis 462
7-8 Closed-Loop Frequency Response of Unity-Feedback Systems 477
7-9 Experimental Determination of Transfer Functions 486
7-10 Control Systems Design by Frequency-Response Approach 491
7-11 Lead Compensation 493
7-12 Lag Compensation 502
7-13 Lag–Lead Compensation 511
Example Problems and Solutions 521
Problems 561

Chapter 8 PID Controllers and Modified PID Controllers 567
8-1 Introduction 567
8-2 Ziegler–Nichols Rules for Tuning PID Controllers 568

8-3 Design of PID Controllers with Frequency-Response Approach 577
8-4 Design of PID Controllers with Computational Optimization Approach 583
8-5 Modifications of PID Control Schemes 590
8-6 Two-Degrees-of-Freedom Control 592
8-7 Zero-Placement Approach to Improve Response Characteristics 595
Example Problems and Solutions 614
Problems 641

Chapter 9 Control Systems Analysis in State Space 648
9-1 Introduction 648
9-2 State-Space Representations of Transfer-Function Systems 649
9-3 Transformation of System Models with MATLAB 656
9-4 Solving the Time-Invariant State Equation 660
9-5 Some Useful Results in Vector-Matrix Analysis 668
9-6 Controllability 675
9-7 Observability 682
Example Problems and Solutions 688
Problems 720

Chapter 10 Control Systems Design in State Space 722
10-1 Introduction 722
10-2 Pole Placement 723
10-3 Solving Pole-Placement Problems with MATLAB 735
10-4 Design of Servo Systems 739
10-5 State Observers 751
10-6 Design of Regulator Systems with Observers 778
10-7 Design of Control Systems with Observers 786
10-8 Quadratic Optimal Regulator Systems 793
10-9 Robust Control Systems 806
Example Problems and Solutions 817
Problems 855

Appendix A Laplace Transform Tables 859
Appendix B Partial-Fraction Expansion 867
Appendix C Vector-Matrix Algebra 874

References 882
Index 886



    Preface

This book introduces important concepts in the analysis and design of control systems. Readers will find it to be a clear and understandable textbook for control system courses at colleges and universities. It is written for senior electrical, mechanical, aerospace, or chemical engineering students. The reader is expected to have fulfilled the following prerequisites: introductory courses on differential equations, Laplace transforms, vector-matrix analysis, circuit analysis, mechanics, and introductory thermodynamics.

    The main revisions made in this edition are as follows:

• The use of MATLAB for obtaining responses of control systems to various inputs has been increased.

• The usefulness of the computational optimization approach with MATLAB has been demonstrated.

• New example problems have been added throughout the book.

• Materials in the previous edition that are of secondary importance have been deleted in order to provide space for more important subjects. Signal flow graphs were dropped from the book. A chapter on Laplace transform was deleted. Instead, Laplace transform tables and partial-fraction expansion with MATLAB are presented in Appendix A and Appendix B, respectively.

• A short summary of vector-matrix analysis is presented in Appendix C; this will help the reader to find the inverses of n × n matrices that may be involved in the analysis and design of control systems.

This edition of Modern Control Engineering is organized into ten chapters. The outline of this book is as follows: Chapter 1 presents an introduction to control systems. Chapter 2 deals with mathematical modeling of control systems. A linearization technique for nonlinear mathematical models is presented in this chapter. Chapter 3 derives mathematical models of mechanical systems and electrical systems. Chapter 4 discusses mathematical modeling of fluid systems (such as liquid-level systems, pneumatic systems, and hydraulic systems) and thermal systems.

Chapter 5 treats transient response and steady-state analyses of control systems. MATLAB is used extensively for obtaining transient response curves. Routh's stability criterion is presented for stability analysis of control systems. Hurwitz stability criterion is also presented.

Chapter 6 discusses the root-locus analysis and design of control systems, including positive feedback systems and conditionally stable systems. Plotting root loci with MATLAB is discussed in detail. Design of lead, lag, and lag-lead compensators with the root-locus method is included.

Chapter 7 treats the frequency-response analysis and design of control systems. The Nyquist stability criterion is presented in an easily understandable manner. The Bode diagram approach to the design of lead, lag, and lag-lead compensators is discussed.

Chapter 8 deals with basic and modified PID controllers. Computational approaches for obtaining optimal parameter values for PID controllers are discussed in detail, particularly with respect to satisfying requirements for step-response characteristics.

Chapter 9 treats basic analyses of control systems in state space. Concepts of controllability and observability are discussed in detail.

Chapter 10 deals with control systems design in state space. The discussions include pole placement, state observers, and quadratic optimal control. An introductory discussion of robust control systems is presented at the end of Chapter 10.

The book has been arranged toward facilitating the student's gradual understanding of control theory. Highly mathematical arguments are carefully avoided in the presentation of the materials. Statement proofs are provided whenever they contribute to the understanding of the subject matter presented.

Special effort has been made to provide example problems at strategic points so that the reader will have a clear understanding of the subject matter discussed. In addition, a number of solved problems (A-problems) are provided at the end of each chapter, except Chapter 1. The reader is encouraged to study all such solved problems carefully; this will allow the reader to obtain a deeper understanding of the topics discussed. In addition, many problems (without solutions) are provided at the end of each chapter, except Chapter 1. The unsolved problems (B-problems) may be used as homework or quiz problems.

If this book is used as a text for a semester course (with 56 or so lecture hours), a good portion of the material may be covered by skipping certain subjects. Because of the abundance of example problems and solved problems (A-problems) that might answer many possible questions that the reader might have, this book can also serve as a self-study book for practicing engineers who wish to study basic control theories.

I would like to thank the following reviewers for this edition of the book: Mark Campbell, Cornell University; Henry Sodano, Arizona State University; and Atul G. Kelkar, Iowa State University. Finally, I wish to offer my deep appreciation to Ms. Alice Dworkin, Associate Editor, Mr. Scott Disanno, Senior Managing Editor, and all the people involved in this publishing project, for the speedy yet superb production of this book.

    Katsuhiko Ogata


1

Introduction to Control Systems

1-1 INTRODUCTION

Control theories commonly used today are classical control theory (also called conventional control theory), modern control theory, and robust control theory. This book presents comprehensive treatments of the analysis and design of control systems based on the classical control theory and modern control theory. A brief introduction of robust control theory is included in Chapter 10.

Automatic control is essential in any field of engineering and science. Automatic control is an important and integral part of space-vehicle systems, robotic systems, modern manufacturing systems, and any industrial operations involving control of temperature, pressure, humidity, flow, etc. It is desirable that most engineers and scientists are familiar with theory and practice of automatic control.

This book is intended to be a textbook on control systems at the senior level at a college or university. All necessary background materials are included in the book. Mathematical background materials related to Laplace transforms and vector-matrix analysis are presented separately in appendixes.

Brief Review of Historical Developments of Control Theories and Practices. The first significant work in automatic control was James Watt's centrifugal governor for the speed control of a steam engine in the eighteenth century. Other significant works in the early stages of development of control theory were due to



Minorsky, Hazen, and Nyquist, among many others. In 1922, Minorsky worked on automatic controllers for steering ships and showed how stability could be determined from the differential equations describing the system. In 1932, Nyquist developed a relatively simple procedure for determining the stability of closed-loop systems on the basis of open-loop response to steady-state sinusoidal inputs. In 1934, Hazen, who introduced the term servomechanisms for position control systems, discussed the design of relay servomechanisms capable of closely following a changing input.

During the decade of the 1940s, frequency-response methods (especially the Bode diagram methods due to Bode) made it possible for engineers to design linear closed-loop control systems that satisfied performance requirements. Many industrial control systems in the 1940s and 1950s used PID controllers to control pressure, temperature, etc. In the early 1940s Ziegler and Nichols suggested rules for tuning PID controllers, called Ziegler–Nichols tuning rules. From the end of the 1940s to the 1950s, the root-locus method due to Evans was fully developed.

The frequency-response and root-locus methods, which are the core of classical control theory, lead to systems that are stable and satisfy a set of more or less arbitrary performance requirements. Such systems are, in general, acceptable but not optimal in any meaningful sense. Since the late 1950s, the emphasis in control design problems has been shifted from the design of one of many systems that work to the design of one optimal system in some meaningful sense.

As modern plants with many inputs and outputs become more and more complex, the description of a modern control system requires a large number of equations. Classical control theory, which deals only with single-input, single-output systems, becomes powerless for multiple-input, multiple-output systems. Since about 1960, because the availability of digital computers made possible time-domain analysis of complex systems, modern control theory, based on time-domain analysis and synthesis using state variables, has been developed to cope with the increased complexity of modern plants and the stringent requirements on accuracy, weight, and cost in military, space, and industrial applications.

During the years from 1960 to 1980, optimal control of both deterministic and stochastic systems, as well as adaptive and learning control of complex systems, were fully investigated. From the 1980s to 1990s, developments in modern control theory were centered around robust control and associated topics.

Modern control theory is based on time-domain analysis of differential equation systems. Modern control theory made the design of control systems simpler because the theory is based on a model of an actual control system. However, the system's stability is sensitive to the error between the actual system and its model. This means that when the designed controller based on a model is applied to the actual system, the system may not be stable. To avoid this situation, we design the control system by first setting up the range of possible errors and then designing the controller in such a way that, if the error of the system stays within the assumed range, the designed control system will stay stable. The design method based on this principle is called robust control theory. This theory incorporates both the frequency-response approach and the time-domain approach. The theory is mathematically very complex.


Because this theory requires mathematical background at the graduate level, inclusion of robust control theory in this book is limited to introductory aspects only. The reader interested in details of robust control theory should take a graduate-level control course at an established college or university.

Definitions. Before we can discuss control systems, some basic terminologies must be defined.

Controlled Variable and Control Signal or Manipulated Variable. The controlled variable is the quantity or condition that is measured and controlled. The control signal or manipulated variable is the quantity or condition that is varied by the controller so as to affect the value of the controlled variable. Normally, the controlled variable is the output of the system. Control means measuring the value of the controlled variable of the system and applying the control signal to the system to correct or limit deviation of the measured value from a desired value.

In studying control engineering, we need to define additional terms that are necessary to describe control systems.

Plants. A plant may be a piece of equipment, perhaps just a set of machine parts functioning together, the purpose of which is to perform a particular operation. In this book, we shall call any physical object to be controlled (such as a mechanical device, a heating furnace, a chemical reactor, or a spacecraft) a plant.

Processes. The Merriam–Webster Dictionary defines a process to be a natural, progressively continuing operation or development marked by a series of gradual changes that succeed one another in a relatively fixed way and lead toward a particular result or end; or an artificial or voluntary, progressively continuing operation that consists of a series of controlled actions or movements systematically directed toward a particular result or end. In this book we shall call any operation to be controlled a process. Examples are chemical, economic, and biological processes.

Systems. A system is a combination of components that act together and perform a certain objective. A system need not be physical. The concept of the system can be applied to abstract, dynamic phenomena such as those encountered in economics. The word system should, therefore, be interpreted to imply physical, biological, economic, and the like, systems.

Disturbances. A disturbance is a signal that tends to adversely affect the value of the output of a system. If a disturbance is generated within the system, it is called internal, while an external disturbance is generated outside the system and is an input.

Feedback Control. Feedback control refers to an operation that, in the presence of disturbances, tends to reduce the difference between the output of a system and some reference input and does so on the basis of this difference. Here only unpredictable disturbances are so specified, since predictable or known disturbances can always be compensated for within the system.


1-2 EXAMPLES OF CONTROL SYSTEMS

    In this section we shall present a few examples of control systems.

Speed Control System. The basic principle of a Watt's speed governor for an engine is illustrated in the schematic diagram of Figure 1-1. The amount of fuel admitted to the engine is adjusted according to the difference between the desired and the actual engine speeds.

The sequence of actions may be stated as follows: The speed governor is adjusted such that, at the desired speed, no pressured oil will flow into either side of the power cylinder. If the actual speed drops below the desired value due to disturbance, then the decrease in the centrifugal force of the speed governor causes the control valve to move downward, supplying more fuel, and the speed of the engine increases until the desired value is reached. On the other hand, if the speed of the engine increases above the desired value, then the increase in the centrifugal force of the governor causes the control valve to move upward. This decreases the supply of fuel, and the speed of the engine decreases until the desired value is reached.

In this speed control system, the plant (controlled system) is the engine and the controlled variable is the speed of the engine. The difference between the desired speed and the actual speed is the error signal. The control signal (the amount of fuel) to be applied to the plant (engine) is the actuating signal. The external input to disturb the controlled variable is the disturbance. An unexpected change in the load is a disturbance.
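The loop just described can be sketched numerically. The model below is a deliberately crude first-order engine (speed decays toward an equilibrium set by fuel minus load), with a proportional gain standing in for the governor; all numbers are illustrative assumptions, not values from the text.

```python
def simulate(feedback, desired=100.0, kp=5.0, steps=400, dt=0.01):
    """Crude first-order engine: d(speed)/dt = fuel - load - speed.

    A load step is applied halfway through the run; with feedback the
    governor adds fuel in proportion to the error signal.
    """
    speed = 0.0
    load = 0.0
    open_loop_fuel = desired  # fuel calibrated for the desired speed at no load
    for k in range(steps):
        if k == steps // 2:
            load = 20.0       # disturbance: unexpected increase in load
        error = desired - speed                        # error signal
        fuel = open_loop_fuel + (kp * error if feedback else 0.0)
        speed += dt * (fuel - load - speed)            # Euler integration step
    return speed

with_governor = simulate(feedback=True)      # settles near the desired speed
without_governor = simulate(feedback=False)  # sags toward desired minus load
```

In this sketch the governed engine settles near desired − load/(1 + kp) ≈ 96.7, while the unregulated engine sags toward 80: the feedback loop shrinks the steady-state effect of the disturbance by the factor 1/(1 + kp).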

[Figure 1-1 Speed control system: schematic showing oil under pressure, power cylinder, pilot valve, control valve (open/close), fuel supply, engine, and load.]

Temperature Control System. Figure 1-2 shows a schematic diagram of temperature control of an electric furnace. The temperature in the electric furnace is measured by a thermometer, which is an analog device. The analog temperature is converted to a digital temperature by an A/D converter. The digital temperature is fed to a controller through an interface. This digital temperature is compared with the programmed input temperature, and if there is any discrepancy (error), the controller sends out a signal to the heater, through an interface, amplifier, and relay, to bring the furnace temperature to a desired value.

[Figure 1-2 Temperature control system: electric furnace with thermometer, A/D converter, interface, controller, programmed input, interface, amplifier, relay, and heater.]
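The controller-and-relay path above can be sketched as a simple on-off law. The first-order furnace model, power level, and ±2 degree hysteresis band below are all illustrative assumptions, not values from the text; the hysteresis keeps the relay from chattering.

```python
def run_furnace(setpoint=500.0, steps=2000, dt=0.01):
    temp = 20.0          # start at ambient temperature (illustrative units)
    heater_on = False
    for _ in range(steps):
        # On-off controller with hysteresis, standing in for the
        # controller + relay path described in the text.
        if temp < setpoint - 2.0:
            heater_on = True
        elif temp > setpoint + 2.0:
            heater_on = False
        heat_in = 800.0 if heater_on else 0.0   # assumed heater power
        # First-order furnace: heat input minus loss to ambient.
        temp += dt * (heat_in - (temp - 20.0))
    return temp

final_temp = run_furnace()  # cycles in a narrow band around the set point
```

An on-off (relay) controller never settles exactly at the set point; it limit-cycles inside a band whose width is set by the hysteresis and the furnace dynamics, which is often acceptable for slow thermal plants.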

Business Systems. A business system may consist of many groups. Each task assigned to a group will represent a dynamic element of the system. Feedback methods of reporting the accomplishments of each group must be established in such a system for proper operation. The cross-coupling between functional groups must be made a minimum in order to reduce undesirable delay times in the system. The smaller this cross-coupling, the smoother the flow of work signals and materials will be.

A business system is a closed-loop system. A good design will reduce the managerial control required. Note that disturbances in this system are the lack of personnel or materials, interruption of communication, human errors, and the like.

The establishment of a well-founded estimating system based on statistics is mandatory to proper management. It is a well-known fact that the performance of such a system can be improved by the use of lead time, or anticipation.

To apply control theory to improve the performance of such a system, we must represent the dynamic characteristic of the component groups of the system by a relatively simple set of equations.

Although it is certainly a difficult problem to derive mathematical representations of the component groups, the application of optimization techniques to business systems significantly improves the performance of the business system.

Consider, as an example, an engineering organizational system that is composed of major groups such as management, research and development, preliminary design, experiments, product design and drafting, fabrication and assembling, and testing. These groups are interconnected to make up the whole operation.

Such a system may be analyzed by reducing it to the most elementary set of components necessary that can provide the analytical detail required and by representing the dynamic characteristics of each component by a set of simple equations. (The dynamic performance of such a system may be determined from the relation between progressive accomplishment and time.)


A functional block diagram may be drawn by using blocks to represent the functional activities and interconnecting signal lines to represent the information or product output of the system operation. Figure 1-3 is a possible block diagram for this system.

[Figure 1-3 Block diagram of an engineering organizational system: required product → management → research and development → preliminary design → experiments → product design and drafting → fabrication and assembling → testing → product.]

Robust Control System. The first step in the design of a control system is to obtain a mathematical model of the plant or control object. In reality, any model of a plant we want to control will include an error in the modeling process. That is, the actual plant differs from the model to be used in the design of the control system.

To ensure the controller designed based on a model will work satisfactorily when this controller is used with the actual plant, one reasonable approach is to assume from the start that there is an uncertainty or error between the actual plant and its mathematical model and include such uncertainty or error in the design process of the control system. The control system designed based on this approach is called a robust control system.

Suppose that the actual plant we want to control is G̃(s) and the mathematical model of the actual plant is G(s), that is,

    G̃(s) = actual plant model that has uncertainty Δ(s)

    G(s) = nominal plant model to be used for designing the control system

G̃(s) and G(s) may be related by a multiplicative factor such as

    G̃(s) = G(s)[1 + Δ(s)]

or an additive factor

    G̃(s) = G(s) + Δ(s)

or in other forms.

Since the exact description of the uncertainty or error Δ(s) is unknown, we use an estimate of Δ(s) and use this estimate, W(s), in the design of the controller. W(s) is a scalar transfer function such that

    ‖Δ(s)‖∞ < ‖W(s)‖∞ = max over 0 ≤ ω ≤ ∞ of |W(jω)|

where ‖W(s)‖∞ is the maximum value of |W(jω)| for 0 ≤ ω ≤ ∞ and is called the H-infinity norm of W(s).
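The H-infinity norm can be estimated numerically by sampling |W(jω)| over a frequency grid. The weight W(s) = 2/(s + 4) and the loop transfer function K(s)G(s) = 10/s below are assumed purely for illustration; the second computation evaluates the small gain quantity ‖W/(1 + KG)‖∞ that appears in the robust design condition.

```python
def W(s):
    """Assumed uncertainty weight W(s) = 2/(s + 4); |W(jw)| peaks at w = 0."""
    return 2.0 / (s + 4.0)

def L(s):
    """Assumed loop transfer function K(s)G(s) = 10/s (an integrator loop)."""
    return 10.0 / s

# Logarithmic grid from 1e-4 to 1e4 rad/s; the max over it approximates the
# H-infinity norm (a sketch, not a rigorous computation).
freqs = [10.0 ** (k / 10.0) for k in range(-40, 41)]
hinf_W = max(abs(W(1j * w)) for w in freqs)  # approaches |W(j0)| = 0.5
small_gain = max(abs(W(1j * w) / (1.0 + L(1j * w))) for w in freqs)
robustly_stable = small_gain < 1.0           # small gain condition holds here
```

For this assumed pair, the sensitivity-weighted peak comes out well below 1, so the small gain condition is satisfied; real H-infinity computations use more careful methods than grid search, but the idea is the same.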

Using the small gain theorem, the design procedure here boils down to the determination of the controller K(s) such that the inequality

    ‖W(s) / [1 + K(s)G(s)]‖∞ < 1

is satisfied, where G(s) is the transfer function of the model used in the design process, K(s) is the transfer function of the controller, and W(s) is the chosen transfer function to approximate Δ(s). In most practical cases, we must satisfy more than one such inequality that involves G(s), K(s), and W(s)'s. For example, to guarantee robust stability and robust performance we may require that two inequalities, such as

    ‖Wm(s)K(s)G(s) / [1 + K(s)G(s)]‖∞ < 1    for robust stability

    ‖Ws(s) / [1 + K(s)G(s)]‖∞ < 1            for robust performance

be satisfied. (These inequalities are derived in Section 10-9.) There are many different such inequalities that need to be satisfied in many different robust control systems. (Robust stability means that the controller K(s) guarantees internal stability of all systems that belong to a group of systems that include the system with the actual plant. Robust performance means the specified performance is satisfied in all systems that belong to the group.) In this book all the plants of control systems we discuss are assumed to be known precisely, except the plants we discuss in Section 10-9, where an introductory aspect of robust control theory is presented.

1-3 CLOSED-LOOP CONTROL VERSUS OPEN-LOOP CONTROL

Feedback Control Systems. A system that maintains a prescribed relationship between the output and the reference input by comparing them and using the difference as a means of control is called a feedback control system. An example would be a room-temperature control system. By measuring the actual room temperature and comparing it with the reference temperature (desired temperature), the thermostat turns the heating or cooling equipment on or off in such a way as to ensure that the room temperature remains at a comfortable level regardless of outside conditions.

Feedback control systems are not limited to engineering but can be found in various nonengineering fields as well. The human body, for instance, is a highly advanced feedback control system. Both body temperature and blood pressure are kept constant by means of physiological feedback. In fact, feedback performs a vital function: It makes the human body relatively insensitive to external disturbances, thus enabling it to function properly in a changing environment.


Closed-Loop Control Systems. Feedback control systems are often referred to as closed-loop control systems. In practice, the terms feedback control and closed-loop control are used interchangeably. In a closed-loop control system the actuating error signal, which is the difference between the input signal and the feedback signal (which may be the output signal itself or a function of the output signal and its derivatives and/or integrals), is fed to the controller so as to reduce the error and bring the output of the system to a desired value. The term closed-loop control always implies the use of feedback control action in order to reduce system error.

Open-Loop Control Systems. Those systems in which the output has no effect on the control action are called open-loop control systems. In other words, in an open-loop control system the output is neither measured nor fed back for comparison with the input. One practical example is a washing machine. Soaking, washing, and rinsing in the washer operate on a time basis. The machine does not measure the output signal, that is, the cleanliness of the clothes.

In any open-loop control system the output is not compared with the reference input. Thus, to each reference input there corresponds a fixed operating condition; as a result, the accuracy of the system depends on calibration. In the presence of disturbances, an open-loop control system will not perform the desired task. Open-loop control can be used, in practice, only if the relationship between the input and output is known and if there are neither internal nor external disturbances. Clearly, such systems are not feedback control systems. Note that any control system that operates on a time basis is open loop. For instance, traffic control by means of signals operated on a time basis is another example of open-loop control.

Closed-Loop versus Open-Loop Control Systems. An advantage of the closed-loop control system is the fact that the use of feedback makes the system response relatively insensitive to external disturbances and internal variations in system parameters. It is thus possible to use relatively inaccurate and inexpensive components to obtain the accurate control of a given plant, whereas doing so is impossible in the open-loop case.

From the point of view of stability, the open-loop control system is easier to build because system stability is not a major problem. On the other hand, stability is a major problem in the closed-loop control system, which may tend to overcorrect errors and thereby can cause oscillations of constant or changing amplitude.

It should be emphasized that for systems in which the inputs are known ahead of time and in which there are no disturbances it is advisable to use open-loop control. Closed-loop control systems have advantages only when unpredictable disturbances and/or unpredictable variations in system components are present. Note that the output power rating partially determines the cost, weight, and size of a control system. The number of components used in a closed-loop control system is more than that for a corresponding open-loop control system. Thus, the closed-loop control system is generally higher in cost and power. To decrease the required power of a system, open-loop control may be used where applicable. A proper combination of open-loop and closed-loop controls is usually less expensive and will give satisfactory overall system performance.

Most analyses and designs of control systems presented in this book are concerned with closed-loop control systems. Under certain circumstances (such as where no disturbances exist or the output is hard to measure) open-loop control systems may be desired. Therefore, it is worthwhile to summarize the advantages and disadvantages of using open-loop control systems.

    The major advantages of open-loop control systems are as follows:

1. Simple construction and ease of maintenance.

2. Less expensive than a corresponding closed-loop system.

3. There is no stability problem.

4. Convenient when output is hard to measure or measuring the output precisely is economically not feasible. (For example, in the washer system, it would be quite expensive to provide a device to measure the quality of the washer's output, cleanliness of the clothes.)

    The major disadvantages of open-loop control systems are as follows:

1. Disturbances and changes in calibration cause errors, and the output may be different from what is desired.

2. To maintain the required quality in the output, recalibration is necessary from time to time.
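The calibration-versus-feedback distinction above can be made concrete with a small numerical experiment. The first-order plant dx/dt = -x + u + d, the disturbance d = 0.5, and the feedback gain K = 10 are illustrative assumptions, not from the text; the open-loop controller is calibrated for the disturbance-free case, so the full disturbance appears in the output, while feedback shrinks it by a factor of 1 + K:

```python
# A minimal sketch comparing open-loop and closed-loop control under a
# constant disturbance.  Plant, disturbance, and gain are illustrative
# assumptions, not taken from the text.

def simulate(controller, r=1.0, d=0.5, dt=0.001, t_end=10.0):
    """Euler-integrate the plant dx/dt = -x + u + d under a control law."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        u = controller(r, x)
        x += dt * (-x + u + d)
    return x

# Open loop: u is calibrated for d = 0 (which would give x -> r),
# so the full disturbance appears in the output.
open_loop = lambda r, x: r

# Closed loop: proportional feedback shrinks the disturbance by 1 + K.
K = 10.0
closed_loop = lambda r, x: r + K * (r - x)

x_open = simulate(open_loop)       # settles near r + d = 1.5
x_closed = simulate(closed_loop)   # settles near r + d/(1 + K), about 1.045
```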

1-4 DESIGN AND COMPENSATION OF CONTROL SYSTEMS

This book discusses basic aspects of the design and compensation of control systems. Compensation is the modification of the system dynamics to satisfy the given specifications. The approaches to control system design and compensation used in this book are the root-locus approach, the frequency-response approach, and the state-space approach. Such control system design and compensation will be presented in Chapters 6, 7, 9, and 10. The PID-based compensation approach to control system design is given in Chapter 8.

In the actual design of a control system, whether to use an electronic, pneumatic, or hydraulic compensator is a matter that must be decided partially based on the nature of the controlled plant. For example, if the controlled plant involves flammable fluid, then we have to choose pneumatic components (both a compensator and an actuator) to avoid the possibility of sparks. If, however, no fire hazard exists, then electronic compensators are most commonly used. (In fact, we often transform nonelectrical signals into electrical signals because of the simplicity of transmission, increased accuracy, increased reliability, ease of compensation, and the like.)

Performance Specifications. Control systems are designed to perform specific tasks. The requirements imposed on the control system are usually spelled out as performance specifications. The specifications may be given in terms of transient-response requirements (such as the maximum overshoot and settling time in step response) and of steady-state requirements (such as steady-state error in following a ramp input), or may be given in frequency-response terms. The specifications of a control system must be given before the design process begins.

For routine design problems, the performance specifications (which relate to accuracy, relative stability, and speed of response) may be given in terms of precise numerical values. In other cases they may be given partially in terms of precise numerical values and partially in terms of qualitative statements. In the latter case the specifications may have to be modified during the course of design, since the given specifications may never be satisfied (because of conflicting requirements) or may lead to a very expensive system.

Generally, the performance specifications should not be more stringent than necessary to perform the given task. If the accuracy at steady-state operation is of prime importance in a given control system, then we should not require unnecessarily rigid performance specifications on the transient response, since such specifications will require expensive components. Remember that the most important part of control system design is to state the performance specifications precisely so that they will yield an optimal control system for the given purpose.

System Compensation. Setting the gain is the first step in adjusting the system for satisfactory performance. In many practical cases, however, the adjustment of the gain alone may not provide sufficient alteration of the system behavior to meet the given specifications. As is frequently the case, increasing the gain value will improve the steady-state behavior but will result in poor stability or even instability. It is then necessary to redesign the system (by modifying the structure or by incorporating additional devices or components) to alter the overall behavior so that the system will behave as desired. Such a redesign or addition of a suitable device is called compensation. A device inserted into the system for the purpose of satisfying the specifications is called a compensator. The compensator compensates for deficient performance of the original system.
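The gain-versus-stability trade-off described above can be illustrated numerically. The unity-feedback plant G(s) = K/[s(s+1)(s+2)] is a standard illustrative choice (an assumption, not taken from this passage); its closed-loop characteristic polynomial is s^3 + 3s^2 + 2s + K, which is stable only for 0 < K < 6:

```python
import numpy as np

# Illustrative gain/stability trade-off: closed-loop poles of the assumed
# unity-feedback plant G(s) = K / (s(s+1)(s+2)) as the gain K is raised.

def closed_loop_poles(K):
    # Roots of the closed-loop characteristic polynomial s^3 + 3s^2 + 2s + K.
    return np.roots([1.0, 3.0, 2.0, K])

def is_stable(K):
    return all(p.real < 0 for p in closed_loop_poles(K))

stable_at_low_gain = is_stable(2.0)    # K = 2: all poles in the left half-plane
stable_at_high_gain = is_stable(8.0)   # K = 8: a pole pair has crossed into the RHP
```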

Design Procedures. In the process of designing a control system, we set up a mathematical model of the control system and adjust the parameters of a compensator. The most time-consuming part of the work is the checking of the system performance by analysis with each adjustment of the parameters. The designer should use MATLAB or another available computer package to avoid much of the numerical drudgery necessary for this checking.

Once a satisfactory mathematical model has been obtained, the designer must construct a prototype and test the open-loop system. If absolute stability of the closed loop is assured, the designer closes the loop and tests the performance of the resulting closed-loop system. Because of the neglected loading effects among the components, nonlinearities, distributed parameters, and so on, which were not taken into consideration in the original design work, the actual performance of the prototype system will probably differ from the theoretical predictions. Thus the first design may not satisfy all the requirements on performance. The designer must adjust system parameters and make changes in the prototype until the system meets the specifications. In doing this, he or she must analyze each trial, and the results of the analysis must be incorporated into the next trial. The designer must see that the final system meets the performance specifications and, at the same time, is reliable and economical.

1-5 OUTLINE OF THE BOOK

This text is organized into 10 chapters. The outline of each chapter may be summarized as follows:

    Chapter 1 presents an introduction to this book.


Chapter 2 deals with mathematical modeling of control systems that are described by linear differential equations. Specifically, transfer function expressions of differential equation systems are derived. Also, state-space expressions of differential equation systems are derived. MATLAB is used to transform mathematical models from transfer functions to state-space equations and vice versa. This book treats linear systems in detail. If the mathematical model of any system is nonlinear, it needs to be linearized before applying the theories presented in this book. A technique to linearize nonlinear mathematical models is presented in this chapter.

Chapter 3 derives mathematical models of various mechanical and electrical systems that appear frequently in control systems.

Chapter 4 discusses various fluid systems and thermal systems that appear in control systems. Fluid systems here include liquid-level systems, pneumatic systems, and hydraulic systems. Thermal systems such as temperature control systems are also discussed here. Control engineers must be familiar with all of the systems discussed in this chapter.

Chapter 5 presents transient and steady-state response analyses of control systems defined in terms of transfer functions. The MATLAB approach to obtaining transient and steady-state response analyses is presented in detail, as is the MATLAB approach to obtaining three-dimensional plots. Stability analysis based on Routh's stability criterion is included in this chapter, and the Hurwitz stability criterion is briefly discussed.
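Routh's criterion, mentioned above, lends itself to a short sketch. The implementation below assumes the regular case (no zero ever appears in the first column of the array); the number of sign changes in the first column equals the number of right-half-plane roots:

```python
# A sketch of Routh's stability criterion.  Assumes the regular case: no zero
# appears in the first column while the array is built.

def routh_first_column(coeffs):
    """First column of the Routh array for a0*s^n + a1*s^(n-1) + ... + an."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    while len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)                      # pad the second row
    while len(rows) < len(coeffs):
        prev2, prev1 = rows[-2], rows[-1]
        new = [(prev1[0] * prev2[i + 1] - prev2[0] * prev1[i + 1]) / prev1[0]
               for i in range(len(prev1) - 1)]
        new.append(0.0)
        rows.append(new)
    return [row[0] for row in rows]

def rhp_root_count(coeffs):
    """Number of right-half-plane roots = sign changes in the first column."""
    col = [c for c in routh_first_column(coeffs) if c != 0.0]
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# Illustrative polynomials: s^3 + 3s^2 + 2s + 2 is stable;
# s^3 + 3s^2 + 2s + 8 has two right-half-plane roots.
stable_count = rhp_root_count([1.0, 3.0, 2.0, 2.0])
unstable_count = rhp_root_count([1.0, 3.0, 2.0, 8.0])
```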

Chapter 6 treats the root-locus method of analysis and design of control systems. It is a graphical method for determining the locations of all closed-loop poles from knowledge of the locations of the open-loop poles and zeros of a closed-loop system as a parameter (usually the gain) is varied from zero to infinity. This method was developed by W. R. Evans around 1950. These days MATLAB can produce root-locus plots easily and quickly. This chapter presents both a manual approach and a MATLAB approach to generating root-locus plots. Details of the design of control systems using lead compensators, lag compensators, and lag-lead compensators are presented in this chapter.

Chapter 7 presents the frequency-response method of analysis and design of control systems. This is the oldest method of control systems analysis and design and was developed during 1940-1950 by Nyquist, Bode, Nichols, Hazen, and others. This chapter presents details of the frequency-response approach to control systems design using the lead compensation technique, lag compensation technique, and lag-lead compensation technique. The frequency-response method was the most frequently used analysis and design method until the state-space method became popular. However, since H-infinity control for designing robust control systems has become popular, frequency response is gaining popularity again.

Chapter 8 discusses PID controllers and modified ones such as multidegrees-of-freedom PID controllers. The PID controller has three parameters: proportional gain, integral gain, and derivative gain. In industrial control systems more than half of the controllers used have been PID controllers. The performance of PID controllers depends on the relative magnitudes of those three parameters. Determination of the relative magnitudes of the three parameters is called tuning of PID controllers.

Ziegler and Nichols proposed the so-called Ziegler-Nichols tuning rules as early as 1942. Since then numerous tuning rules have been proposed. These days manufacturers of PID controllers have their own tuning rules. In this chapter we present a computer optimization approach using MATLAB to determine the three parameters to satisfy given transient-response characteristics. The approach can be expanded to determine the three parameters to satisfy any specific given characteristics.
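The computational tuning idea can be sketched as a brute-force search. The plant dx/dt = -x + u, the candidate gain grid, and the integral-of-squared-error (ISE) criterion below are illustrative assumptions, not the MATLAB procedure of Chapter 8:

```python
# A toy version of computational PID tuning: search a small grid of gains for
# the one minimizing the ISE of the unit-step response.  Plant, grid, and
# criterion are illustrative assumptions, not taken from the text.

def ise(Kp, Ki, Kd, dt=0.01, t_end=5.0):
    """Integral of squared error for a PID loop around dx/dt = -x + u."""
    x = integral = cost = 0.0
    prev_err = 1.0                       # error starts at r - x = 1
    for _ in range(int(t_end / dt)):
        err = 1.0 - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = Kp * err + Ki * integral + Kd * deriv
        x += dt * (-x + u)               # Euler step of the plant
        cost += err * err * dt
        prev_err = err
    return cost

candidates = [(Kp, Ki, 0.0) for Kp in (1.0, 5.0, 10.0) for Ki in (0.0, 1.0, 5.0)]
best = min(candidates, key=lambda gains: ise(*gains))
```

On this toy plant the search favors the largest proportional gain, with integral action removing the residual steady-state error.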

Chapter 9 presents basic analysis of state-space equations. Concepts of controllability and observability, the most important concepts in modern control theory, due to Kalman, are discussed in full. In this chapter, solutions of state-space equations are derived in detail.

Chapter 10 discusses state-space designs of control systems. This chapter first deals with pole-placement problems and state observers. In control engineering, it is frequently desirable to set up a meaningful performance index and try to minimize it (or maximize it, as the case may be). If the performance index selected has a clear physical meaning, then this approach is quite useful to determine the optimal control variable. This chapter discusses the quadratic optimal regulator problem, where we use a performance index which is an integral of a quadratic function of the state variables and the control variable. The integral is evaluated from t = 0 to t = ∞. This chapter concludes with a brief discussion of robust control systems.

2 Mathematical Modeling of Control Systems

2-1 INTRODUCTION

In studying control systems the reader must be able to model dynamic systems in mathematical terms and analyze their dynamic characteristics. A mathematical model of a dynamic system is defined as a set of equations that represents the dynamics of the system accurately, or at least fairly well. Note that a mathematical model is not unique to a given system. A system may be represented in many different ways and, therefore, may have many mathematical models, depending on one's perspective.

The dynamics of many systems, whether they are mechanical, electrical, thermal, economic, biological, and so on, may be described in terms of differential equations. Such differential equations may be obtained by using physical laws governing a particular system; for example, Newton's laws for mechanical systems and Kirchhoff's laws for electrical systems. We must always keep in mind that deriving reasonable mathematical models is the most important part of the entire analysis of control systems.

Throughout this book we assume that the principle of causality applies to the systems considered. This means that the current output of the system (the output at time t = 0) depends on the past input (the input for t < 0) but does not depend on the future input (the input for t > 0).

Mathematical Models. Mathematical models may assume many different forms. Depending on the particular system and the particular circumstances, one mathematical model may be better suited than other models. For example, in optimal control problems, it is advantageous to use state-space representations. On the other hand, for the transient-response or frequency-response analysis of single-input, single-output, linear, time-invariant systems, the transfer-function representation may be more convenient than any other. Once a mathematical model of a system is obtained, various analytical and computer tools can be used for analysis and synthesis purposes.

Simplicity Versus Accuracy. In obtaining a mathematical model, we must make a compromise between the simplicity of the model and the accuracy of the results of the analysis. In deriving a reasonably simplified mathematical model, we frequently find it necessary to ignore certain inherent physical properties of the system. In particular, if a linear lumped-parameter mathematical model (that is, one employing ordinary differential equations) is desired, it is always necessary to ignore certain nonlinearities and distributed parameters that may be present in the physical system. If the effects that these ignored properties have on the response are small, good agreement will be obtained between the results of the analysis of a mathematical model and the results of the experimental study of the physical system.

In general, in solving a new problem, it is desirable to build a simplified model so that we can get a general feeling for the solution. A more complete mathematical model may then be built and used for a more accurate analysis.

We must be well aware that a linear lumped-parameter model, which may be valid in low-frequency operations, may not be valid at sufficiently high frequencies, since the neglected property of distributed parameters may become an important factor in the dynamic behavior of the system. For example, the mass of a spring may be neglected in low-frequency operations, but it becomes an important property of the system at high frequencies. (For the case where a mathematical model involves considerable errors, robust control theory may be applied. Robust control theory is presented in Chapter 10.)

Linear Systems. A system is called linear if the principle of superposition applies. The principle of superposition states that the response produced by the simultaneous application of two different forcing functions is the sum of the two individual responses. Hence, for the linear system, the response to several inputs can be calculated by treating one input at a time and adding the results. It is this principle that allows one to build up complicated solutions to the linear differential equation from simple solutions.

In an experimental investigation of a dynamic system, if cause and effect are proportional, thus implying that the principle of superposition holds, then the system can be considered linear.
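The superposition principle is easy to check numerically on an illustrative linear system, here the first-order difference equation y[k] = 0.5 y[k-1] + u[k] (an assumption chosen for the demonstration):

```python
# Numerical check of superposition: the response to a sum of inputs equals the
# sum of the individual responses for the linear system y[k] = 0.5*y[k-1] + u[k].

def response(u):
    y, out = 0.0, []
    for uk in u:
        y = 0.5 * y + uk
        out.append(y)
    return out

u1 = [1.0, 0.0, 0.0, 0.0]
u2 = [0.0, 2.0, 1.0, 0.0]

combined = response([a + b for a, b in zip(u1, u2)])                  # one pass
summed = [a + b for a, b in zip(response(u1), response(u2))]          # two passes
```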

Linear Time-Invariant Systems and Linear Time-Varying Systems. A differential equation is linear if the coefficients are constants or functions only of the independent variable. Dynamic systems that are composed of linear time-invariant lumped-parameter components may be described by linear time-invariant differential equations, that is, constant-coefficient differential equations. Such systems are called linear time-invariant (or linear constant-coefficient) systems. Systems that are represented by differential equations whose coefficients are functions of time are called linear time-varying systems. An example of a time-varying control system is a spacecraft control system. (The mass of a spacecraft changes due to fuel consumption.)


Outline of the Chapter. Section 2-1 has presented an introduction to the mathematical modeling of dynamic systems. Section 2-2 presents the transfer function and impulse-response function. Section 2-3 introduces automatic control systems and Section 2-4 discusses concepts of modeling in state space. Section 2-5 presents state-space representation of dynamic systems. Section 2-6 discusses transformation of mathematical models with MATLAB. Finally, Section 2-7 discusses linearization of nonlinear mathematical models.

2-2 TRANSFER FUNCTION AND IMPULSE-RESPONSE FUNCTION

In control theory, functions called transfer functions are commonly used to characterize the input-output relationships of components or systems that can be described by linear, time-invariant, differential equations. We begin by defining the transfer function and follow with a derivation of the transfer function of a differential equation system. Then we discuss the impulse-response function.

Transfer Function. The transfer function of a linear, time-invariant, differential equation system is defined as the ratio of the Laplace transform of the output (response function) to the Laplace transform of the input (driving function) under the assumption that all initial conditions are zero.

Consider the linear time-invariant system defined by the following differential equation:

a_0 y^{(n)} + a_1 y^{(n-1)} + \cdots + a_{n-1} \dot{y} + a_n y = b_0 x^{(m)} + b_1 x^{(m-1)} + \cdots + b_{m-1} \dot{x} + b_m x \qquad (n \ge m)

where y is the output of the system and x is the input. The transfer function of this system is the ratio of the Laplace-transformed output to the Laplace-transformed input when all initial conditions are zero, or

\text{Transfer function} = G(s) = \left.\frac{\mathcal{L}[\text{output}]}{\mathcal{L}[\text{input}]}\right|_{\text{zero initial conditions}} = \frac{Y(s)}{X(s)} = \frac{b_0 s^m + b_1 s^{m-1} + \cdots + b_{m-1} s + b_m}{a_0 s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}

By using the concept of transfer function, it is possible to represent system dynamics by algebraic equations in s. If the highest power of s in the denominator of the transfer function is equal to n, the system is called an nth-order system.

Comments on Transfer Function. The applicability of the concept of the transfer function is limited to linear, time-invariant, differential equation systems. The transfer function approach, however, is extensively used in the analysis and design of such systems. In what follows, we shall list important comments concerning the transfer function. (Note that a system referred to in the list is one described by a linear, time-invariant, differential equation.)


1. The transfer function of a system is a mathematical model in that it is an operational method of expressing the differential equation that relates the output variable to the input variable.

2. The transfer function is a property of a system itself, independent of the magnitude and nature of the input or driving function.

3. The transfer function includes the units necessary to relate the input to the output; however, it does not provide any information concerning the physical structure of the system. (The transfer functions of many physically different systems can be identical.)

4. If the transfer function of a system is known, the output or response can be studied for various forms of inputs with a view toward understanding the nature of the system.

5. If the transfer function of a system is unknown, it may be established experimentally by introducing known inputs and studying the output of the system. Once established, a transfer function gives a full description of the dynamic characteristics of the system, as distinct from its physical description.
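As a concrete instance of the definition, consider the assumed second-order system y'' + 3y' + 2y = x, whose transfer function is G(s) = 1/(s^2 + 3s + 2); the sketch below evaluates it at any s by Horner's method:

```python
# Illustrative transfer function for the assumed system y'' + 3y' + 2y = x:
# G(s) = 1 / (s^2 + 3s + 2), represented by its coefficient lists.

num = [1.0]              # numerator coefficients (b_m)
den = [1.0, 3.0, 2.0]    # denominator coefficients (a0, a1, a2)

def polyval(coeffs, s):
    """Evaluate a polynomial (highest power first) by Horner's method."""
    result = 0.0
    for c in coeffs:
        result = result * s + c
    return result

def G(s):
    return polyval(num, s) / polyval(den, s)

order = len(den) - 1     # highest power of s in the denominator: 2nd-order system
dc_gain = G(0.0)         # G(0) = 1/2, the final value of the unit-step response
```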

Convolution Integral. For a linear, time-invariant system the transfer function G(s) is

G(s) = \frac{Y(s)}{X(s)}

where X(s) is the Laplace transform of the input to the system and Y(s) is the Laplace transform of the output of the system, where we assume that all initial conditions involved are zero. It follows that the output Y(s) can be written as the product of G(s) and X(s), or

Y(s) = G(s)X(s) \qquad \text{(2-1)}

Note that multiplication in the complex domain is equivalent to convolution in the time domain (see Appendix A), so the inverse Laplace transform of Equation (2-1) is given by the following convolution integral:

y(t) = \int_0^t x(\tau)\, g(t - \tau)\, d\tau = \int_0^t g(\tau)\, x(t - \tau)\, d\tau

where both g(t) and x(t) are 0 for t < 0.
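The convolution integral can be checked numerically. For the illustrative choice G(s) = 1/(s+1), whose impulse response is g(t) = e^{-t}, and a unit-step input, the exact output is y(t) = 1 - e^{-t}; a rectangle-rule approximation of the integral reproduces it:

```python
import math

# Rectangle-rule evaluation of y(t) = integral_0^t g(tau) x(t - tau) d tau
# for the assumed g(t) = e^{-t} and a unit-step input x(t) = 1.

dt = 0.001
n_steps = 2000                                   # integrate out to t = 2 s
g = [math.exp(-k * dt) for k in range(n_steps + 1)]

def y_approx(n):
    # y(t_n) ~ sum_k g(tau_k) * x(t_n - tau_k) * dt, with x = 1 for t >= 0
    return sum(g[k] for k in range(n + 1)) * dt

y_at_2 = y_approx(n_steps)
y_exact = 1.0 - math.exp(-2.0)                   # exact step response at t = 2
```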


Impulse-Response Function. Consider the output (response) of a linear time-invariant system to a unit-impulse input when the initial conditions are zero. Because the Laplace transform of the unit-impulse function is unity, the Laplace transform of the output of the system is

Y(s) = G(s) \qquad \text{(2-2)}

The inverse Laplace transform of the output given by Equation (2-2) gives the impulse response of the system. The inverse Laplace transform of G(s), or

\mathcal{L}^{-1}[G(s)] = g(t)

is called the impulse-response function. This function g(t) is also called the weighting function of the system.

The impulse-response function g(t) is thus the response of a linear time-invariant system to a unit-impulse input when the initial conditions are zero. The Laplace transform of this function gives the transfer function. Therefore, the transfer function and impulse-response function of a linear, time-invariant system contain the same information about the system dynamics. It is hence possible to obtain complete information about the dynamic characteristics of the system by exciting it with an impulse input and measuring the response. (In practice, a pulse input with a very short duration compared with the significant time constants of the system can be considered an impulse.)
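The parenthetical remark, that a pulse much shorter than the system time constants acts as an impulse, can be verified on the illustrative plant dx/dt = -x + u (so G(s) = 1/(s+1) and g(t) = e^{-t}, with time constant 1 s):

```python
import math

# A unit-area pulse much shorter than the time constant excites approximately
# the impulse response.  Plant dx/dt = -x + u is an illustrative assumption.

dt = 1e-4
width = 0.01                     # pulse width, much less than tau = 1 s
height = 1.0 / width             # unit-area pulse approximating an impulse

x = 0.0
for k in range(int(1.0 / dt)):   # simulate up to t = 1 s
    t = k * dt
    u = height if t < width else 0.0
    x += dt * (-x + u)           # Euler integration of the plant

g_at_1 = math.exp(-1.0)          # exact impulse response g(1) = e^{-1}
```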

2-3 AUTOMATIC CONTROL SYSTEMS

A control system may consist of a number of components. To show the functions performed by each component, in control engineering we commonly use a diagram called the block diagram. This section first explains what a block diagram is. Next, it discusses introductory aspects of automatic control systems, including various control actions. Then, it presents a method for obtaining block diagrams for physical systems and, finally, discusses techniques to simplify such diagrams.

Block Diagrams. A block diagram of a system is a pictorial representation of the functions performed by each component and of the flow of signals. Such a diagram depicts the interrelationships that exist among the various components. Differing from a purely abstract mathematical representation, a block diagram has the advantage of indicating more realistically the signal flows of the actual system.

In a block diagram all system variables are linked to each other through functional blocks. The functional block, or simply block, is a symbol for the mathematical operation on the input signal to the block that produces the output. The transfer functions of the components are usually entered in the corresponding blocks, which are connected by arrows to indicate the direction of the flow of signals. Note that the signal can pass only in the direction of the arrows. Thus a block diagram of a control system explicitly shows a unilateral property.

Figure 2-1 shows an element of the block diagram. The arrowhead pointing toward the block indicates the input, and the arrowhead leading away from the block represents the output. Such arrows are referred to as signals.

Figure 2-1  Element of a block diagram: a block containing the transfer function G(s).


Figure 2-3  Block diagram of a closed-loop system: R(s) and the fed-back output C(s) meet at a summing point to form E(s), which enters the block G(s); a branch point returns C(s) to the summing point.

Note that the dimension of the output signal from the block is the dimension of the input signal multiplied by the dimension of the transfer function in the block.

The advantages of the block diagram representation of a system are that it is easy to form the overall block diagram for the entire system by merely connecting the blocks of the components according to the signal flow and that it is possible to evaluate the contribution of each component to the overall performance of the system.

In general, the functional operation of the system can be visualized more readily by examining the block diagram than by examining the physical system itself. A block diagram contains information concerning dynamic behavior, but it does not include any information on the physical construction of the system. Consequently, many dissimilar and unrelated systems can be represented by the same block diagram.

It should be noted that in a block diagram the main source of energy is not explicitly shown and that the block diagram of a given system is not unique. A number of different block diagrams can be drawn for a system, depending on the point of view of the analysis.

Summing Point. Referring to Figure 2-2, a circle with a cross is the symbol that indicates a summing operation. The plus or minus sign at each arrowhead indicates whether that signal is to be added or subtracted. It is important that the quantities being added or subtracted have the same dimensions and the same units.

Branch Point. A branch point is a point from which the signal from a block goes concurrently to other blocks or summing points.

Block Diagram of a Closed-Loop System. Figure 2-3 shows an example of a block diagram of a closed-loop system. The output C(s) is fed back to the summing point, where it is compared with the reference input R(s). The closed-loop nature of the system is clearly indicated by the figure. The output of the block, C(s) in this case, is obtained by multiplying the transfer function G(s) by the input to the block, E(s). Any linear control system may be represented by a block diagram consisting of blocks, summing points, and branch points.

When the output is fed back to the summing point for comparison with the input, it is necessary to convert the form of the output signal to that of the input signal. For example, in a temperature control system, the output signal is usually the controlled temperature. The output signal, which has the dimension of temperature, must be converted to a force or position or voltage before it can be compared with the input signal. This conversion is accomplished by the feedback element whose transfer function is H(s), as shown in Figure 2-4. The role of the feedback element is to modify the output before it is compared with the input. (In most cases the feedback element is a sensor that measures the output of the plant. The output of the sensor is compared with the system input, and the actuating error signal is generated.) In the present example, the feedback signal that is fed back to the summing point for comparison with the input is B(s) = H(s)C(s).

Figure 2-2  Summing point: signals a and b enter with plus and minus signs at the arrowheads and combine to give a - b.

Open-Loop Transfer Function and Feedforward Transfer Function. Referring to Figure 2-4, the ratio of the feedback signal B(s) to the actuating error signal E(s) is called the open-loop transfer function. That is,

\text{Open-loop transfer function} = \frac{B(s)}{E(s)} = G(s)H(s)

The ratio of the output C(s) to the actuating error signal E(s) is called the feedforward transfer function, so that

\text{Feedforward transfer function} = \frac{C(s)}{E(s)} = G(s)

If the feedback transfer function H(s) is unity, then the open-loop transfer function and the feedforward transfer function are the same.

Closed-Loop Transfer Function. For the system shown in Figure 2-4, the output C(s) and input R(s) are related as follows: since

C(s) = G(s)E(s)

E(s) = R(s) - B(s) = R(s) - H(s)C(s)

eliminating E(s) from these equations gives

C(s) = G(s)[R(s) - H(s)C(s)]

or

\frac{C(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)} \qquad \text{(2-3)}

The transfer function relating C(s) to R(s) is called the closed-loop transfer function. It relates the closed-loop system dynamics to the dynamics of the feedforward elements and feedback elements.

From Equation (2-3), C(s) is given by

C(s) = \frac{G(s)}{1 + G(s)H(s)}\, R(s)
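Equation (2-3) reduces to polynomial arithmetic when G(s) and H(s) are rational. A sketch, with the purely illustrative choice G(s) = 1/(s+1) and H(s) = 2:

```python
import numpy as np

# Equation (2-3) as polynomial arithmetic: for G = num_g/den_g and
# H = num_h/den_h, C/R = G/(1 + GH) has numerator num_g*den_h and
# denominator den_g*den_h + num_g*num_h.  G and H here are assumptions.

num_g, den_g = [1.0], [1.0, 1.0]    # G(s) = 1/(s + 1)
num_h, den_h = [2.0], [1.0]         # H(s) = 2

num_cl = np.polymul(num_g, den_h)
den_cl = np.polyadd(np.polymul(den_g, den_h), np.polymul(num_g, num_h))
# Closed loop: C(s)/R(s) = 1/(s + 3)
```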

Figure 2-4  Closed-loop system: the actuating error E(s) drives G(s) to produce C(s), which is fed back through H(s) as B(s) to the summing point with R(s).


Figure 2-5  (a) Cascaded system; (b) parallel system; (c) feedback (closed-loop) system.

Thus the output of the closed-loop system clearly depends on both the closed-loop transfer function and the nature of the input.

Obtaining Cascaded, Parallel, and Feedback (Closed-Loop) Transfer Functions with MATLAB. In control-systems analysis, we frequently need to calculate the cascaded transfer functions, parallel-connected transfer functions, and feedback-connected (closed-loop) transfer functions. MATLAB has convenient commands to obtain the cascaded, parallel, and feedback (closed-loop) transfer functions.

Suppose that there are two components G1(s) and G2(s) connected differently as shown in Figure 2-5 (a), (b), and (c), where

G_1(s) = \frac{\text{num1}}{\text{den1}}, \qquad G_2(s) = \frac{\text{num2}}{\text{den2}}

To obtain the transfer functions of the cascaded system, parallel system, or feedback (closed-loop) system, the following commands may be used:

[num, den] = series(num1,den1,num2,den2)
[num, den] = parallel(num1,den1,num2,den2)
[num, den] = feedback(num1,den1,num2,den2)

As an example, consider the case where

G_1(s) = \frac{10}{s^2 + 2s + 10} = \frac{\text{num1}}{\text{den1}}, \qquad G_2(s) = \frac{5}{s + 5} = \frac{\text{num2}}{\text{den2}}

MATLAB Program 2-1 gives C(s)/R(s) = num/den for each arrangement of G1(s) and G2(s). Note that the command

printsys(num,den)

displays the num/den [that is, the transfer function C(s)/R(s)] of the system considered.
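The three MATLAB commands can be mirrored in Python with NumPy's polynomial helpers; this sketch is an illustration (not part of the text), and it reproduces the transfer functions printed by MATLAB Program 2-1:

```python
import numpy as np

# Python mirror of series/parallel/feedback using polynomial arithmetic
# (coefficients listed highest power first), for G1(s) = 10/(s^2 + 2s + 10)
# and G2(s) = 5/(s + 5).

num1, den1 = [10.0], [1.0, 2.0, 10.0]
num2, den2 = [5.0], [1.0, 5.0]

def series(n1, d1, n2, d2):
    # G1 * G2
    return np.polymul(n1, n2), np.polymul(d1, d2)

def parallel(n1, d1, n2, d2):
    # G1 + G2
    return np.polyadd(np.polymul(n1, d2), np.polymul(n2, d1)), np.polymul(d1, d2)

def feedback(n1, d1, n2, d2):
    # G1 / (1 + G1*G2): negative feedback through G2
    return np.polymul(n1, d2), np.polyadd(np.polymul(d1, d2), np.polymul(n1, n2))

num_s, den_s = series(num1, den1, num2, den2)     # 50 / (s^3+7s^2+20s+50)
num_p, den_p = parallel(num1, den1, num2, den2)   # (5s^2+20s+100)/(s^3+7s^2+20s+50)
num_f, den_f = feedback(num1, den1, num2, den2)   # (10s+50)/(s^3+7s^2+20s+100)
```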


    Automatic Controllers. An automatic controller compares the actual value ofthe plant output with the reference input (desired value), determines the deviation, andproduces a control signal that will reduce the deviation to zero or to a small value.The manner in which the automatic controller produces the control signal is calledthe control action. Figure 26 is a block diagram of an industrial control system, which

MATLAB Program 2–1

num1 = [10];
den1 = [1 2 10];
num2 = [5];
den2 = [1 5];
[num, den] = series(num1,den1,num2,den2);
printsys(num,den)

num/den =

            50
    ----------------------
    s^3 + 7s^2 + 20s + 50

[num, den] = parallel(num1,den1,num2,den2);
printsys(num,den)

num/den =

      5s^2 + 20s + 100
    ----------------------
    s^3 + 7s^2 + 20s + 50

[num, den] = feedback(num1,den1,num2,den2);
printsys(num,den)

num/den =

          10s + 50
    -----------------------
    s^3 + 7s^2 + 20s + 100
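For readers without the Control System Toolbox, the same three connections can be reproduced with plain polynomial arithmetic. The sketch below (Python with NumPy; the function names are chosen merely to mirror the MATLAB commands) computes the same coefficient vectors that Program 2–1 prints:

```python
import numpy as np

def series(num1, den1, num2, den2):
    # Cascade: G = G1*G2
    return np.polymul(num1, num2), np.polymul(den1, den2)

def parallel(num1, den1, num2, den2):
    # Parallel: G = G1 + G2, placed over the common denominator den1*den2
    num = np.polyadd(np.polymul(num1, den2), np.polymul(num2, den1))
    return num, np.polymul(den1, den2)

def feedback(num1, den1, num2, den2):
    # Negative feedback: G = G1 / (1 + G1*G2)
    num = np.polymul(num1, den2)
    den = np.polyadd(np.polymul(den1, den2), np.polymul(num1, num2))
    return num, den

num1, den1 = [10], [1, 2, 10]   # G1(s) = 10/(s^2 + 2s + 10)
num2, den2 = [5], [1, 5]        # G2(s) = 5/(s + 5)

print(series(num1, den1, num2, den2))    # 50 / (s^3 + 7s^2 + 20s + 50)
print(parallel(num1, den1, num2, den2))  # (5s^2 + 20s + 100) / (s^3 + 7s^2 + 20s + 50)
print(feedback(num1, den1, num2, den2))  # (10s + 50) / (s^3 + 7s^2 + 20s + 100)
```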

Figure 2–6  Block diagram of an industrial control system, which consists of an automatic controller, an actuator, a plant, and a sensor (measuring element).

Chapter 2 / Mathematical Modeling of Control Systems

consists of an automatic controller, an actuator, a plant, and a sensor (measuring element). The controller detects the actuating error signal, which is usually at a very low power level, and amplifies it to a sufficiently high level. The output of an automatic controller is fed to an actuator, such as an electric motor, a hydraulic motor, or a pneumatic motor or valve. (The actuator is a power device that produces the input to the plant according to the control signal so that the output signal will approach the reference input signal.)

The sensor or measuring element is a device that converts the output variable into another suitable variable, such as a displacement, pressure, voltage, etc., that can be used to compare the output to the reference input signal. This element is in the feedback path of the closed-loop system. The set point of the controller must be converted to a reference input with the same units as the feedback signal from the sensor or measuring element.

Classifications of Industrial Controllers.  Most industrial controllers may be classified according to their control actions as:

1. Two-position or on–off controllers
2. Proportional controllers
3. Integral controllers
4. Proportional-plus-integral controllers
5. Proportional-plus-derivative controllers
6. Proportional-plus-integral-plus-derivative controllers

Most industrial controllers use electricity or pressurized fluid such as oil or air as power sources. Consequently, controllers may also be classified according to the kind of power employed in the operation, such as pneumatic controllers, hydraulic controllers, or electronic controllers. Which kind of controller to use must be decided based on the nature of the plant and the operating conditions, including such considerations as safety, cost, availability, reliability, accuracy, weight, and size.

Two-Position or On–Off Control Action.  In a two-position control system, the actuating element has only two fixed positions, which are, in many cases, simply on and off. Two-position or on–off control is relatively simple and inexpensive and, for this reason, is very widely used in both industrial and domestic control systems.

Let the output signal from the controller be u(t) and the actuating error signal be e(t). In two-position control, the signal u(t) remains at either a maximum or minimum value, depending on whether the actuating error signal is positive or negative, so that

    u(t) = U1,  for e(t) > 0
         = U2,  for e(t) < 0

where U1 and U2 are constants. The minimum value U2 is usually either zero or −U1. Two-position controllers are generally electrical devices, and an electric solenoid-operated valve is widely used in such controllers. Pneumatic proportional controllers with very high gains act as two-position controllers and are sometimes called pneumatic two-position controllers.

Figures 2–7(a) and (b) show the block diagrams for two-position or on–off controllers. The range through which the actuating error signal must move before the switching occurs


is called the differential gap. A differential gap is indicated in Figure 2–7(b). Such a differential gap causes the controller output u(t) to maintain its present value until the actuating error signal has moved slightly beyond the zero value. In some cases, the differential gap is a result of unintentional friction and lost motion; however, quite often it is intentionally provided in order to prevent too-frequent operation of the on–off mechanism.

Consider the liquid-level control system shown in Figure 2–8(a), where the electromagnetic valve shown in Figure 2–8(b) is used for controlling the inflow rate. This valve is either open or closed. With this two-position control, the water inflow rate is either a positive constant or zero. As shown in Figure 2–9, the output signal continuously moves between the two limits required to cause the actuating element to move from one fixed position to the other. Notice that the output curve follows one of two exponential curves, one corresponding to the filling curve and the other to the emptying curve. Such output oscillation between two limits is a typical response characteristic of a system under two-position control.

Figure 2–7  (a) Block diagram of an on–off controller; (b) block diagram of an on–off controller with differential gap.

Figure 2–8  (a) Liquid-level control system; (b) electromagnetic valve.

Figure 2–9  Level h(t)-versus-t curve for the system shown in Figure 2–8(a).


From Figure 2–9, we notice that the amplitude of the output oscillation can be reduced by decreasing the differential gap. The decrease in the differential gap, however, increases the number of on–off switchings per minute and reduces the useful life of the component. The magnitude of the differential gap must be determined from such considerations as the accuracy required and the life of the component.
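The limit cycle described above is easy to reproduce numerically. The sketch below (Python; the first-order tank model C dh/dt = qi − h/R and every parameter value are hypothetical choices, not taken from the text) switches the inflow through a hysteresis band and shows the level oscillating between the two limits set by the differential gap:

```python
# Two-position (on-off) level control with a differential gap.
# Tank model: C_tank * dh/dt = qi - h/R_out  (illustrative numbers only).
C_tank, R_out = 1.0, 2.0        # tank capacitance and outflow resistance
Q_in = 1.0                      # inflow rate while the valve is open
setpoint, gap = 1.0, 0.2        # desired level and differential gap
h, dt = 0.0, 0.001
valve_open = True
levels = []
for _ in range(int(20 / dt)):
    # Hysteresis: the valve switches only after the level moves
    # beyond the differential gap centered on the set point.
    if valve_open and h > setpoint + gap / 2:
        valve_open = False
    elif not valve_open and h < setpoint - gap / 2:
        valve_open = True
    qi = Q_in if valve_open else 0.0
    h += dt * (qi - h / R_out) / C_tank     # Euler integration step
    levels.append(h)

tail = levels[-5000:]           # once the oscillation has settled
print(min(tail), max(tail))     # roughly 0.9 and 1.1
```

Shrinking `gap` narrows the oscillation band but makes the valve switch more often, which is exactly the trade-off discussed above.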

Proportional Control Action.  For a controller with proportional control action, the relationship between the output of the controller u(t) and the actuating error signal e(t) is

    u(t) = Kp e(t)

or, in Laplace-transformed quantities,

    U(s)/E(s) = Kp

where Kp is termed the proportional gain.

Whatever the actual mechanism may be and whatever the form of the operating power, the proportional controller is essentially an amplifier with an adjustable gain.

Integral Control Action.  In a controller with integral control action, the value of the controller output u(t) is changed at a rate proportional to the actuating error signal e(t). That is,

    du(t)/dt = Ki e(t)

or

    u(t) = Ki ∫₀ᵗ e(t) dt

where Ki is an adjustable constant. The transfer function of the integral controller is

    U(s)/E(s) = Ki/s

Proportional-Plus-Integral Control Action.  The control action of a proportional-plus-integral controller is defined by

    u(t) = Kp e(t) + (Kp/Ti) ∫₀ᵗ e(t) dt


or the transfer function of the controller is

    U(s)/E(s) = Kp(1 + 1/(Ti s))

where Ti is called the integral time.

Proportional-Plus-Derivative Control Action.  The control action of a proportional-plus-derivative controller is defined by

    u(t) = Kp e(t) + Kp Td de(t)/dt

and the transfer function is

    U(s)/E(s) = Kp(1 + Td s)

where Td is called the derivative time.

Proportional-Plus-Integral-Plus-Derivative Control Action.  The combination of proportional control action, integral control action, and derivative control action is termed proportional-plus-integral-plus-derivative control action. It has the advantages of each of the three individual control actions. The equation of a controller with this combined action is given by

    u(t) = Kp e(t) + (Kp/Ti) ∫₀ᵗ e(t) dt + Kp Td de(t)/dt

or the transfer function is

    U(s)/E(s) = Kp(1 + 1/(Ti s) + Td s)

where Kp is the proportional gain, Ti is the integral time, and Td is the derivative time. The block diagram of a proportional-plus-integral-plus-derivative controller is shown in Figure 2–10.

Figure 2–10  Block diagram of a proportional-plus-integral-plus-derivative controller, with block transfer function Kp(1 + Ti s + Ti Td s²)/(Ti s).
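A discrete-time realization of the PID law above is straightforward. The sketch below (Python; rectangular integration and a backward-difference derivative are assumed implementation choices, and the gains are arbitrary) computes u = Kp[e + (1/Ti)∫e dt + Td de/dt] at each sampling instant:

```python
# Discrete-time sketch of the PID control action
#   u(t) = Kp*e(t) + (Kp/Ti) * integral of e(t) + Kp*Td * de(t)/dt
# using rectangular integration and a backward difference for de/dt.
class PIDController:
    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral = 0.0     # running integral of the error
        self.e_prev = 0.0       # previous error sample

    def update(self, e):
        self.integral += e * self.dt                  # accumulate error
        derivative = (e - self.e_prev) / self.dt      # rate of change
        self.e_prev = e
        return self.Kp * (e + self.integral / self.Ti + self.Td * derivative)

pid = PIDController(Kp=2.0, Ti=1.0, Td=0.5, dt=0.1)
print(pid.update(1.0))   # 2*(1 + 0.1/1 + 0.5*(1/0.1)) ≈ 12.2
print(pid.update(1.0))   # 2*(1 + 0.2/1 + 0)           ≈ 2.4
```

Note how a step in the error excites the derivative term strongly on the first sample and then contributes nothing while the error is constant, while the integral term grows steadily.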


Figure 2–11  Closed-loop system subjected to a disturbance.

Closed-Loop System Subjected to a Disturbance.  Figure 2–11 shows a closed-loop system subjected to a disturbance. When two inputs (the reference input and disturbance) are present in a linear time-invariant system, each input can be treated independently of the other; and the outputs corresponding to each input alone can be added to give the complete output. The way each input is introduced into the system is shown at the summing point by either a plus or minus sign.

Consider the system shown in Figure 2–11. In examining the effect of the disturbance D(s), we may assume that the reference input is zero; we may then calculate the response CD(s) to the disturbance only. This response can be found from

    CD(s)/D(s) = G2(s) / [1 + G1(s)G2(s)H(s)]

On the other hand, in considering the response to the reference input R(s), we may assume that the disturbance is zero. Then the response CR(s) to the reference input R(s) can be obtained from

    CR(s)/R(s) = G1(s)G2(s) / [1 + G1(s)G2(s)H(s)]

The response to the simultaneous application of the reference input and disturbance can be obtained by adding the two individual responses. In other words, the response C(s) due to the simultaneous application of the reference input R(s) and disturbance D(s) is given by

    C(s) = CR(s) + CD(s)
         = {G2(s) / [1 + G1(s)G2(s)H(s)]} [G1(s)R(s) + D(s)]

Consider now the case where |G1(s)H(s)| ≫ 1 and |G1(s)G2(s)H(s)| ≫ 1. In this case, the closed-loop transfer function CD(s)/D(s) becomes almost zero, and the effect of the disturbance is suppressed. This is an advantage of the closed-loop system.

On the other hand, the closed-loop transfer function CR(s)/R(s) approaches 1/H(s) as the gain of G1(s)G2(s)H(s) increases. This means that if |G1(s)G2(s)H(s)| ≫ 1, then the closed-loop transfer function CR(s)/R(s) becomes independent of G1(s) and G2(s) and inversely proportional to H(s), so that the variations of G1(s) and G2(s) do not affect the closed-loop transfer function CR(s)/R(s). This is another advantage of the closed-loop system. It can easily be seen that any closed-loop system with unity feedback, H(s) = 1, tends to equalize the input and output.
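The suppression argument can be illustrated numerically. In the sketch below (Python; the particular G2 and H are hypothetical, and G1(s) = K is simply an adjustable gain), |CD/D| falls toward zero and CR/R approaches 1/H as the loop gain grows:

```python
# Evaluate CD/D = G2/(1 + G1*G2*H) and CR/R = G1*G2/(1 + G1*G2*H)
# at one frequency point; G1 = K is an adjustable gain (illustrative).
s = 2.0j                              # arbitrary point on the jw-axis
G2 = 1.0 / (s + 1.0)                  # hypothetical plant
H = 2.0                               # hypothetical sensor gain
for K in (1.0, 1e2, 1e4):
    L = K * G2 * H                    # loop gain G1*G2*H
    cd = G2 / (1.0 + L)               # disturbance-to-output
    cr = K * G2 / (1.0 + L)           # reference-to-output
    print(K, abs(cd), abs(cr - 1.0 / H))   # both shrink as K grows
```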


Procedures for Drawing a Block Diagram.  To draw a block diagram for a system, first write the equations that describe the dynamic behavior of each component. Then take the Laplace transforms of these equations, assuming zero initial conditions, and represent each Laplace-transformed equation individually in block form. Finally, assemble the elements into a complete block diagram.

As an example, consider the RC circuit shown in Figure 2–12(a). The equations for this circuit are

    i = (ei − eo)/R                  (2–4)

    eo = (1/C) ∫ i dt                (2–5)

The Laplace transforms of Equations (2–4) and (2–5), with zero initial conditions, become

    I(s) = [Ei(s) − Eo(s)]/R         (2–6)

    Eo(s) = I(s)/(Cs)                (2–7)

Equation (2–6) represents a summing operation, and the corresponding diagram is shown in Figure 2–12(b). Equation (2–7) represents the block as shown in Figure 2–12(c). Assembling these two elements, we obtain the overall block diagram for the system as shown in Figure 2–12(d).

Block Diagram Reduction.  It is important to note that blocks can be connected in series only if the output of one block is not affected by the next following block. If there are any loading effects between the components, it is necessary to combine these components into a single block.

Any number of cascaded blocks representing nonloading components can be replaced by a single block, the transfer function of which is simply the product of the individual transfer functions.
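For the RC circuit, the two Laplace-domain relations (2–6) and (2–7) determine Eo(s)/Ei(s) completely: eliminating I(s) gives the familiar first-order lag 1/(RCs + 1). A quick numerical check (Python; the component values and the test frequency are arbitrary choices):

```python
# Check that Equations (2-6) and (2-7) together give Eo/Ei = 1/(RCs + 1).
# Component values and test frequency are arbitrary (hypothetical) choices.
R, C = 1.0e3, 1.0e-6                  # 1 kOhm, 1 uF
s = 500.0j                            # test point s = j*500 rad/s
Ei = 1.0
Eo = Ei / (R * C * s + 1.0)           # result of eliminating I(s)
I = (Ei - Eo) / R                     # Equation (2-6)
assert abs(Eo - I / (C * s)) < 1e-12  # Equation (2-7) is also satisfied
print(abs(Eo))                        # first-order low-pass magnitude
```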

Figure 2–12  (a) RC circuit; (b) block diagram representing Equation (2–6); (c) block diagram representing Equation (2–7); (d) block diagram of the RC circuit.


Figure 2–13  (a) Multiple-loop system; (b)–(e) successive reductions of the block diagram shown in (a).

A complicated block diagram involving many feedback loops can be simplified by a step-by-step rearrangement. Simplification of the block diagram by rearrangements considerably reduces the labor needed for subsequent mathematical analysis. It should be noted, however, that as the block diagram is simplified, the transfer functions in new blocks become more complex because new poles and new zeros are generated.

EXAMPLE 2–1  Consider the system shown in Figure 2–13(a). Simplify this diagram.

By moving the summing point of the negative feedback loop containing H2 outside the positive feedback loop containing H1, we obtain Figure 2–13(b). Eliminating the positive feedback loop, we have Figure 2–13(c). The elimination of the loop containing H2/G1 gives Figure 2–13(d). Finally, eliminating the feedback loop results in Figure 2–13(e).
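With scalar gains, the reduction can be cross-checked by solving the signal equations of the diagram directly. The topology used below is inferred from the reduction steps (H1 fed back positively around G1G2, H2 fed back negatively around G2G3, and an outer unity feedback); the numeric gain values are hypothetical:

```python
import numpy as np

# Signal equations read off the diagram (topology inferred, gains arbitrary):
#   e2 = (R - C) + H1*y2     (outer unity feedback + positive H1 loop)
#   e3 = G1*e2 - H2*C        (negative feedback loop containing H2)
#   y2 = G2*e3,  C = G3*y2
G1, G2, G3, H1, H2, Rin = 2.0, 3.0, 4.0, 0.5, 0.25, 1.0

# Linear system in the unknowns [e2, e3, y2, C]:
A = np.array([
    [1.0, 0.0, -H1, 1.0],    # e2 - H1*y2 + C = R
    [-G1, 1.0, 0.0, H2],     # e3 - G1*e2 + H2*C = 0
    [0.0, -G2, 1.0, 0.0],    # y2 - G2*e3 = 0
    [0.0, 0.0, -G3, 1.0],    # C - G3*y2 = 0
])
b = np.array([Rin, 0.0, 0.0, 0.0])
C = np.linalg.solve(A, b)[-1]

# Closed form from Figure 2-13(e):
closed_form = G1 * G2 * G3 / (1 - G1 * G2 * H1 + G2 * G3 * H2 + G1 * G2 * G3)
print(C / Rin, closed_form)   # the two values agree
```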


Notice that the numerator of the closed-loop transfer function C(s)/R(s) is the product of the transfer functions of the feedforward path. The denominator of C(s)/R(s) is equal to

    1 + Σ (product of the transfer functions around each loop)
      = 1 + (−G1G2H1 + G2G3H2 + G1G2G3)
      = 1 − G1G2H1 + G2G3H2 + G1G2G3

(The positive feedback loop yields a negative term in the denominator.)

2–4  MODELING IN STATE SPACE

In this section we shall present introductory material on state-space analysis of control systems.

Modern Control Theory.  The modern trend in engineering systems is toward greater complexity, due mainly to the requirements of complex tasks and good accuracy. Complex systems may have multiple inputs and multiple outputs and may be time varying. Because of the necessity of meeting increasingly stringent requirements on the performance of control systems, the increase in system complexity, and easy access to large-scale computers, modern control theory, which is a new approach to the analysis and design of complex control systems, has been developed since around 1960. This new approach is based on the concept of state. The concept of state by itself is not new, since it has been in existence for a long time in the field of classical dynamics and other fields.

Modern Control Theory Versus Conventional Control Theory.  Modern control theory is contrasted with conventional control theory in that the former is applicable to multiple-input, multiple-output systems, which may be linear or nonlinear, time invariant or time varying, while the latter is applicable only to linear time-invariant single-input, single-output systems. Also, modern control theory is essentially a time-domain approach and frequency-domain approach (in certain cases, such as H-infinity control), while conventional control theory is a complex frequency-domain approach. Before we proceed further, we must define state, state variables, state vector, and state space.

State.  The state of a dynamic system is the smallest set of variables (called state variables) such that knowledge of these variables at t = t0, together with knowledge of the input for t ≥ t0, completely determines the behavior of the system for any time t ≥ t0.

Note that the concept of state is by no means limited to physical systems. It is applicable to biological systems, economic systems, social systems, and others.

State Variables.  The state variables of a dynamic system are the variables making up the smallest set of variables that determine the state of the dynamic system. If at


least n variables x1, x2, ..., xn are needed to completely describe the behavior of a dynamic system (so that once the input is given for t ≥ t0 and the initial state at t = t0 is specified, the future state of the system is completely determined), then such n variables are a set of state variables.

Note that state variables need not be physically measurable or observable quantities. Variables that do not represent physical quantities and those that are neither measurable nor observable can be chosen as state variables. Such freedom in choosing state variables is an advantage of the state-space methods. Practically, however, it is convenient to choose easily measurable quantities for the state variables, if this is possible at all, because optimal control laws will require the feedback of all state variables with suitable weighting.

State Vector.  If n state variables are needed to completely describe the behavior of a given system, then these n state variables can be considered the n components of a vector x. Such a vector is called a state vector. A state vector is thus a vector that determines uniquely the system state x(t) for any time t ≥ t0, once the state at t = t0 is given and the input u(t) for t ≥ t0 is specified.

State Space.  The n-dimensional space whose coordinate axes consist of the x1 axis, x2 axis, ..., xn axis, where x1, x2, ..., xn are state variables, is called a state space. Any state can be represented by a point in the state space.

State-Space Equations.  In state-space analysis we are concerned with three types of variables that are involved in the modeling of dynamic systems: input variables, output variables, and state variables. As we shall see in Section 2–5, the state-space representation for a given system is not unique, except that the number of state variables is the same for any of the different state-space representations of the same system.

The dynamic system must involve elements that memorize the values of the input for t ≥ t1. Since integrators in a continuous-time control system serve as memory devices, the outputs of such integrators can be considered as the variables that define the internal state of the dynamic system. Thus the outputs of integrators serve as state variables. The number of state variables to completely define the dynamics of the system is equal to the number of integrators involved in the system.

Assume that a multiple-input, multiple-output system involves n integrators. Assume also that there are r inputs u1(t), u2(t), ..., ur(t) and m outputs y1(t), y2(t), ..., ym(t). Define n outputs of the integrators as state variables: x1(t), x2(t), ..., xn(t). Then the system may be described by

    ẋ1(t) = f1(x1, x2, ..., xn; u1, u2, ..., ur; t)
    ẋ2(t) = f2(x1, x2, ..., xn; u1, u2, ..., ur; t)          (2–8)
    ............
    ẋn(t) = fn(x1, x2, ..., xn; u1, u2, ..., ur; t)


The outputs y1(t), y2(t), ..., ym(t) of the system may be given by

    y1(t) = g1(x1, x2, ..., xn; u1, u2, ..., ur; t)
    y2(t) = g2(x1, x2, ..., xn; u1, u2, ..., ur; t)          (2–9)
    ............
    ym(t) = gm(x1, x2, ..., xn; u1, u2, ..., ur; t)

If we define the column vectors

    x(t) = [x1(t), x2(t), ..., xn(t)]ᵀ
    y(t) = [y1(t), y2(t), ..., ym(t)]ᵀ
    u(t) = [u1(t), u2(t), ..., ur(t)]ᵀ
    f(x, u, t) = [f1(x1, ..., xn; u1, ..., ur; t), ..., fn(x1, ..., xn; u1, ..., ur; t)]ᵀ
    g(x, u, t) = [g1(x1, ..., xn; u1, ..., ur; t), ..., gm(x1, ..., xn; u1, ..., ur; t)]ᵀ

then Equations (2–8) and (2–9) become

    ẋ(t) = f(x, u, t)                                        (2–10)
    y(t) = g(x, u, t)                                        (2–11)

where Equation (2–10) is the state equation and Equation (2–11) is the output equation. If vector functions f and/or g involve time t explicitly, then the system is called a time-varying system.

If Equations (2–10) and (2–11) are linearized about the operating state, then we have the following linearized state equation and output equation:

    ẋ(t) = A(t)x(t) + B(t)u(t)                               (2–12)
    y(t) = C(t)x(t) + D(t)u(t)                               (2–13)

where A(t) is called the state matrix, B(t) the input matrix, C(t) the output matrix, and D(t) the direct transmission matrix. (Details of linearization of nonlinear systems about


Figure 2–14  Block diagram of the linear, continuous-time control system represented in state space.

Figure 2–15  Mechanical system.

the operating state are discussed in Section 2–7.) A block diagram representation of Equations (2–12) and (2–13) is shown in Figure 2–14.

If vector functions f and g do not involve time t explicitly, then the system is called a time-invariant system. In this case, Equations (2–12) and (2–13) can be simplified to

    ẋ(t) = Ax(t) + Bu(t)                                     (2–14)
    y(t) = Cx(t) + Du(t)                                     (2–15)

Equation (2–14) is the state equation of the linear, time-invariant system and Equation (2–15) is the output equation for the same system. In this book we shall be concerned mostly with systems described by Equations (2–14) and (2–15).

In what follows we shall present an example for deriving a state equation and output equation.

EXAMPLE 2–2  Consider the mechanical system shown in Figure 2–15. We assume that the system is linear. The external force u(t) is the input to the system, and the displacement y(t) of the mass is the output. The displacement y(t) is measured from the equilibrium position in the absence of the external force. This system is a single-input, single-output system.

From the diagram, the system equation is

    mÿ + bẏ + ky = u                                         (2–16)

This system is of second order. This means that the system involves two integrators. Let us define state variables x1(t) and x2(t) as

    x1(t) = y(t)
    x2(t) = ẏ(t)

Then we obtain

    ẋ1 = x2
    ẋ2 = (1/m)(−ky − bẏ) + (1/m)u

or

    ẋ1 = x2                                                  (2–17)
    ẋ2 = −(k/m)x1 − (b/m)x2 + (1/m)u                         (2–18)

The output equation is

    y = x1                                                   (2–19)
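Equations (2–17) and (2–18) can be integrated directly. A minimal numerical sketch (Python, Euler integration; m, b, k, and the constant force u0 are hypothetical values) confirms that for a constant input the displacement settles at the static deflection u0/k:

```python
# Euler integration of the state equations (2-17) and (2-18);
# parameter values are hypothetical. For a constant force u0 the
# displacement should settle at the static deflection y = u0/k.
m, b, k = 1.0, 2.0, 4.0
u0 = 1.0
x1, x2 = 0.0, 0.0          # x1 = y (displacement), x2 = dy/dt (velocity)
dt = 0.001
for _ in range(int(20.0 / dt)):
    dx1 = x2                                      # Equation (2-17)
    dx2 = -(k / m) * x1 - (b / m) * x2 + u0 / m   # Equation (2-18)
    x1 += dt * dx1
    x2 += dt * dx2
print(x1)   # close to u0/k = 0.25
```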

In a vector-matrix form, Equations (2–17) and (2–18) can be written as

    [ẋ1]   [   0       1  ] [x1]   [  0  ]
    [ẋ2] = [ −k/m   −b/m  ] [x2] + [ 1/m ] u                 (2–20)

The output equation, Equation (2–19), can be written as

    y = [1  0] [x1]
               [x2]                                          (2–21)

Equation (2–20) is a state equation and Equation (2–21) is an output equation for the system. They are in the standard form:

    ẋ = Ax + Bu
    y = Cx + Du

where

    A = [   0       1  ],   B = [  0  ],   C = [1  0],   D = 0
        [ −k/m   −b/m  ]        [ 1/m ]

Figure 2–16 is a block diagram for the system. Notice that the outputs of the integrators are state variables.

Correlation Between Transfer Functions and State-Space Equations.  In what follows we shall show how to derive the transfer function of a single-input, single-output system from the state-space equations.

Let us consider the system whose transfer function is given by

    Y(s)/U(s) = G(s)                                         (2–22)

This system may be represented in state space by the following equations:

    ẋ = Ax + Bu                                              (2–23)
    y = Cx + Du                                              (2–24)
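For the system of Example 2–2, this correlation can be checked numerically: the transfer function recovered from (A, B, C, D) via the standard formula C(sI − A)⁻¹B + D must equal 1/(ms² + bs + k). A sketch (Python with NumPy; the parameter values and test point are arbitrary):

```python
import numpy as np

# Consistency check (hypothetical m, b, k): the transfer function
# recovered from (A, B, C, D) as C (sI - A)^(-1) B + D must equal
# the direct result 1/(m s^2 + b s + k) for the mass-spring-damper.
m, b, k = 2.0, 3.0, 5.0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = 0.0

s = 1.0 + 2.0j                          # arbitrary complex test point
G_ss = (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0] + D
G_direct = 1.0 / (m * s**2 + b * s + k)
print(G_ss, G_direct)                   # the two values agree
```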


Figure 2–16  Block diagram of the mechanical system shown in Figure 2–15.


where x is the state vector, u is the input, and y is the output. The Laplace transforms of Equations (2–23) and (2–24) are given by

    sX(s) − x(0) = AX(s) + BU(s)                             (2–25)
    Y(s) = CX(s) + DU(s)                                     (2–26)

Since the transfer function was previously defined as the ratio of the Laplace transform of the output to the Laplace transform of the input when the initial condition