
Rosario Toscano

Structured Controllers for Uncertain Systems

A Stochastic Optimization Approach

Springer


Preface

This monograph is concerned with the problem of designing low-order/fixed-structure feedback controllers for uncertain dynamical systems. Feedback controllers are required to reduce the effect of unmeasured disturbances acting on the system and to reduce the effect of uncertainties related to the system dynamics. This requires some knowledge about the system in order to design an effective feedback controller. The available knowledge is generally expressed as a mathematical description of the real system, which is called the model of the system. However, the model thus obtained is always an approximation of reality and is thus subject to some uncertainties. Therefore, the design of an effective feedback controller for the real system must take into account model uncertainties. A controller is said to be robust if the stability and/or the desired performance of the closed-loop system are not affected by the presence of bounded modelling uncertainties.

Uncertainty and robustness

Since the early 1940s, robustness has been recognized as being of paramount importance when designing a feedback controller. Indeed, in the classical control theory of Black-Nyquist-Bode, plant uncertainties are taken into account via gain and phase margins, which are measures of stability in the frequency domain. These stability margins confer on the closed-loop system fairly good robustness properties. However, classical control is mainly concerned with single-input single-output systems.

The transition to the so-called modern control theory came in the early 1960s with the development of optimal control. The basic principle is to determine the controller that minimizes a given performance index. This approach applies to multivariable systems represented in the time domain by a set of first-order ordinary differential equations called a state-space model. In the context of linear systems, the main result of this theory is certainly the Wiener-Hopf-Kalman optimal regulator, also known as the Linear Quadratic Gaussian (LQG) regulator. An inherent limitation of LQG control is that uncertainties are considered only in the form of exogenous stochastic disturbances having known statistical properties, while the system is assumed to be perfectly described by a linear, possibly time-varying, state-space representation. These assumptions are so strong that LQG control leads to poor performance when applied to systems for which no precise model is available. The need for a control theory capable of dealing with modelling errors and disturbance uncertainty thus became very clear.

A major step toward a robust control theory was taken in 1981 when Zames introduced the optimal H∞ control problem. It was soon recognized that the H∞ norm can be used to quantify not only disturbance attenuation but also robustness against modelling errors. Since then, many contributions to robust control theory have been made, such as the introduction of the structured singular value, the two-Riccati-equation method, H∞ loop-shaping and the linear matrix inequality approach, to cite only the most important advances. Nowadays optimal robust control is a mature theory and can be applied to a number of industrial problems that were beyond the scope of both classical control theory and LQG control theory. This is due to the fact that robust control theory provides a systematic treatment of robustness against modelling errors and disturbance uncertainty for both scalar and multivariable systems.

Limitations

A weakness of traditional robust control is that the controller obtained is of full order; in other words, the order of the controller is always greater than or equal to the dimension of the process model itself, which can be very high. This is a serious limitation, especially when the available memory and computational power are limited, as in embedded controllers. Moreover, traditional robust control is unable to incorporate constraints on the structure of the controller. This is also a strong limitation, especially when the control law must be implemented on commercially available controllers that inherently have a fixed structure, such as PID or lead-lag compensators.

All these reasons justify the need for designing robust reduced-order/fixed-structure controllers. Unfortunately, the problem of designing a robust controller with a given fixed structure (e.g. a PID) remains an open issue. This is mainly due to the fact that the set of all fixed-order/structure stabilizing controllers is non-convex and disconnected in the space of controller parameters. This is a major source of computational intractability and conservatism. Nevertheless, due to their practical importance, some new approaches for structured control have been proposed in the literature. Most of them are based on the resolution of Linear Matrix Inequalities (LMIs) or Riccati equations. However, a major drawback with this kind of approach is the use of Lyapunov variables, whose number grows quadratically with the system size. For instance, if we consider a system of order 70, this requires the introduction of at least 2485 unknown variables, whereas we are looking for the parameters of a fixed-order/structure controller, which contains a comparatively very small number of unknowns. It is then necessary to introduce new techniques capable of dealing with the non-convexity of optimization problems arising in automatic control without introducing extra unknown variables.

Stochastic optimization via HKA

The main optimization tool used in this book to tackle the problem of non-convexity is the so-called Heuristic Kalman Algorithm (HKA). The main characteristic of HKA is the use of a stochastic search mechanism to solve a given optimization problem. From a computational point of view, the use of a stochastic search procedure appears essential for dealing with non-convex problems.

The HKA method falls into the category of so-called “population-based stochastic optimization techniques”. However, its search heuristic is entirely different from that of other known stochastic algorithms such as the genetic algorithm (GA) or particle swarm optimization (PSO). Indeed, HKA explicitly considers the optimization problem as a measurement process designed to give an estimate of the optimum. A specific procedure, based on the Kalman estimator, is utilized to improve the quality of the estimate obtained through the measurement process. HKA shares with GA and PSO interesting features such as ease of implementation, low memory and CPU speed requirements, a search procedure based only on the values of the objective function, and no need for strong assumptions such as linearity, differentiability, convexity, etc., to solve the optimization problem. In fact, it can be used even when the objective function cannot be expressed in an analytic form; in this case, the objective function is evaluated through simulations. The main advantage of HKA over other stochastic methods lies in the small number of parameters that need to be set by the user (only three). In addition, these parameters have an understandable effect on the search procedure. These properties make the algorithm easy to use for non-specialists.

Structure of the book

This book focuses on the development of simple and easy-to-use design strategies for robust low-order/fixed-structure controllers. HKA is used to solve the underlying constrained non-convex optimization problems.

Chapter 1 introduces some basic definitions and concepts from classical optimization theory and indicates some limitations of the classical optimization methods. The class of convex optimization problems is also briefly presented, with an emphasis on semi-definite programs. Some aspects related to optimization in engineering design are also introduced. After that, the main objectives of the book are presented.

Chapter 2 introduces some basic material related to signal and system norms. Many control objectives can be stated in terms of the size of some particular signals. Therefore, a quantitative treatment of the performance of control systems requires the introduction of appropriate norms, which give measures of the sizes of the signals considered. Another concept closely related to the size of a signal is the size of an LTI system. The latter concept is of great practical importance because it is at the basis of H∞ control as well as robustness analysis.

Chapter 3 shows how a control design problem can be formulated as an optimization problem. To this end, the standard control problem as well as the notion of stabilizing controllers are first briefly reviewed. After that, some closed-loop system performance measures are presented; they are essential to evaluate the quality of a given controller. These performance measures are then used to formulate the optimal controller design problem, which is multi-objective in nature. The resulting multi-objective problem is scalarized using the notion of ideal point. The last part of the chapter is dedicated to the case of structured controllers, i.e., the case in which structural constraints have to be taken into account in the optimization problem. These structural constraints make the resulting optimization problem non-smooth and non-convex, and thus intractable. This is why the use of stochastic optimization methods is suggested to find an acceptable solution. The robustness issue is also briefly discussed. It is pointed out that the optimal robust control problem, in addition to being non-smooth and non-convex, is also semi-infinite. This means that the optimization problem has an infinite number of constraints and a finite number of optimization variables.

Chapter 4 introduces the notion of acceptable solution; after that, a brief overview of the main stochastic methods that can be used to solve continuous non-convex constrained optimization problems is presented, i.e., pure random search methods, simulated annealing, the genetic algorithm, and particle swarm optimization. The last part is dedicated to the problem of robust optimization, i.e., optimization in the presence of uncertainties in the problem data.

Chapter 5 introduces a new optimization method, called the Heuristic Kalman Algorithm (HKA). This algorithm is proposed as an alternative approach for solving continuous non-convex optimization problems. The performance of HKA is evaluated in detail through several non-convex test problems, in both the unconstrained and the constrained case. The results are compared to those obtained via other metaheuristics. These various numerical experiments show that HKA has very interesting potential for solving non-convex optimization problems, especially with regard to computation time and success ratio.

Chapter 6 deals with the concept of an uncertain system. This is a key notion when designing a robust feedback controller. The objective is indeed to determine the controller parameters ensuring acceptable performance of the closed-loop system despite the unknown disturbances affecting the system as well as the uncertainties related to the plant dynamics. To this end, it is necessary to be able to take the model uncertainties into account during the design phase of the controller. In this chapter, we briefly describe some basic concepts regarding uncertain systems and robustness analysis. The last part of this chapter is dedicated to structured robust control, for which a specific stochastic algorithm is developed.

In chapter 7 we consider the design of fixed-structure controllers for uncertain systems in the H∞ framework. Although the design procedures presented apply to any kind of structured controller, we focus mainly on the most widely used of them, namely the PID controller structure. Two design approaches are considered: the mixed sensitivity method and the H∞ loop-shaping design procedure. Using these methods, the resulting PID design problem is formulated as an inherently non-convex optimization problem. The resulting tuning method is applicable to both stable and unstable systems, without any limitation on the order of the process to be controlled. Various design examples are presented to give some practical insight into the methods presented.

Chapter 8 is concerned with the design of structured controllers for uncertain parametric systems in the H2 and mixed H2/H∞ frameworks. We restrict ourselves to the case of static output feedback (SOF) controllers; this is not restrictive because any dynamical controller can be reformulated as an SOF controller for an augmented plant. Some design examples are presented to illustrate the design methods proposed.

Chapter 9 is devoted to the design of a nonlinear structured controller for systems that can be well described by uncertain multi-models. In the first part, the concept of a multi-model is introduced and some examples are given to show how it works. After that, the problem of designing a nonlinear structured controller for a given uncertain multi-model is considered. A characterization of the set of quadratically stabilizing controllers is first introduced. This result is then used to design a nonlinear structured controller that quadratically stabilizes the uncertain multi-model while satisfying a given performance objective. Some design examples are presented to illustrate the main points introduced in this chapter.

Finally, chapter 10 concludes the book by recalling the general philosophy behind the approach developed from chapter to chapter, as well as the difficulties that can be encountered when designing a structured controller and the developments that remain to be done.



Contents

1 Introduction
  1.1 General Formulation of an Optimization Problem
    1.1.1 Design Variables, Constraints and Objective Function
    1.1.2 Global and Local Minimum, Descent Direction
    1.1.3 Optimality Conditions
  1.2 Convex Optimization Problems
    1.2.1 Semidefinite Programming
    1.2.2 Dual Problem
  1.3 Optimization in Engineering Design
  1.4 Main Objectives of the Book
    1.4.1 The Uncertain System G
    1.4.2 Structured Controller K
    1.4.3 Interconnection [G, K]
    1.4.4 Performance Specifications
    1.4.5 Algorithms for Finding an Acceptable Solution
  1.5 Notes and References

2 Signal and System Norms
  2.1 Signal Norms
    2.1.1 L1-space and L1-norm
    2.1.2 L2-space and L2-norm
    2.1.3 L∞-space and L∞-norm
    2.1.4 Extended Lp-space
    2.1.5 RMS-value
  2.2 LTI Systems
    2.2.1 System Stability
    2.2.2 Controllability, Observability
    2.2.3 Transfer Matrix
  2.3 System Norms
    2.3.1 Definition of the H2-norm and H∞-norm of a System
    2.3.2 Singular Values of a Transfer Matrix
    2.3.3 Singular Values and H2-, H∞-norms
    2.3.4 Computing Norms from the State Space Equation
  2.4 Notes and References

3 Optimization in Control Theory
  3.1 Notion of System and Feedback Control
  3.2 The Standard Control Problem
    3.2.1 The Standard Control Problem as an Optimization Problem
    3.2.2 Extended System Model
    3.2.3 Controller
    3.2.4 Closed-loop System and Stabilizing Controllers
  3.3 Performance Specifications
    3.3.1 Tracking Error
    3.3.2 Control Input
    3.3.3 Time-domain Specifications
  3.4 Optimal Controller Design and Multi-objective Optimization
    3.4.1 Scalarization of the Multi-Objective Function
    3.4.2 Limits of a Convex Formulation of the Optimal Control Design Problem
  3.5 Structured Controllers
    3.5.1 Important Examples of Structured Controllers
    3.5.2 Difficulties in Solving the Structured Control Problem
    3.5.3 Robustness Issue
  3.6 Notes and References

4 Stochastic Optimization Methods
  4.1 Motivations and Basic Notions
    4.1.1 Notion of Acceptable Solution
    4.1.2 Some Characteristics of Stochastic Methods
  4.2 Pure Random Search Methods
    4.2.1 Non-localized Search Method
    4.2.2 Localized Search Method
  4.3 Simulated Annealing
    4.3.1 Metropolis Algorithm and Simulated Annealing
    4.3.2 Simulated Annealing Algorithm
  4.4 Genetic Algorithm
    4.4.1 The Main Steps of a Genetic Algorithm
    4.4.2 The Standard Genetic Algorithm
  4.5 Particle Swarm Optimization
    4.5.1 Dynamics of the Particles of a Swarm
    4.5.2 The Standard PSO Algorithm
  4.6 Robust Optimization
    4.6.1 Worst-Case Approach
    4.6.2 Average Approach
  4.7 Notes and References

5 Heuristic Kalman Algorithm
  5.1 Introduction
  5.2 Principle of the Algorithm
    5.2.1 Gaussian Generator
    5.2.2 Measurement Process
    5.2.3 Kalman Estimator
  5.3 Equation of the Kalman Estimator
  5.4 Heuristic Kalman Algorithm and Implementation
    5.4.1 Updating Rule of the Gaussian Distribution
    5.4.2 Initialization and Parameter Settings
    5.4.3 Stopping Rule
    5.4.4 The Feasibility Issue
  5.5 Numerical Experiments
    5.5.1 Unconstrained Case
    5.5.2 Constrained Case
  5.6 Conclusion
  5.7 Notes and References

6 Uncertain Systems and Robustness
  6.1 Notion of Uncertain System
    6.1.1 Parametric and Dynamic Uncertainty
    6.1.2 General Representation of an Uncertain Linear System
  6.2 Parametric Uncertainty
    6.2.1 Affine Parameter-dependent Model
    6.2.2 Polytopic Model
    6.2.3 LFT Model
  6.3 Dynamic Uncertainty
    6.3.1 Multiplicative Uncertainty
    6.3.2 Additive Uncertainty
    6.3.3 Coprime Uncertainty
  6.4 Mixed Uncertainty
  6.5 Structured Robust Control
    6.5.1 Robust Stability Condition
    6.5.2 Robust Performance Condition
    6.5.3 General Stochastic Algorithm
  6.6 Notes and References

7 H∞ Design of Structured Controllers
  7.1 General Formulation of the Structured H∞ Control Problem
  7.2 Mixed Sensitivity Approach
  7.3 H∞ Loop-Shaping Design Approach
    7.3.1 The H∞ Loop-Shaping Design Procedure
    7.3.2 Robustness and ν-Gap Metric
    7.3.3 Loop-Shaping Design with Structured Controllers
  7.4 Design Examples
    7.4.1 Design Example 1
    7.4.2 Design Example 2
    7.4.3 Design Example 3
    7.4.4 Design Example 4
  7.5 Notes and References

8 H2 and Mixed H2/H∞ Design of Structured Controllers
  8.1 H2 Design of Structured Controllers
    8.1.1 Formulation of the Robust H2 Design Problem
    8.1.2 Set of Robustly Stable SOF Controllers
    8.1.3 Worst-Case Performance and Average Performance
    8.1.4 Guaranteed LQ Cost with Time-Varying Parameters
  8.2 Mixed H2/H∞ Design of Structured Controllers
    8.2.1 Problem Formulation
    8.2.2 Set of Robustly Stable SOF Controllers
  8.3 Design Examples
    8.3.1 Design Example 1
    8.3.2 Design Example 2
  8.4 Notes and References

9 Nonlinear Structured Control via the Multi-Model Approach
  9.1 Multi-Model Representation of a Given Nonlinear System
    9.1.1 Global Representation of a System from Local Models
    9.1.2 Some Considerations for Building a Multi-model
  9.2 Design of a Nonlinear Structured Controller
    9.2.1 The System Model
    9.2.2 The Controller
    9.2.3 Closed-loop Quadratic Stability
  9.3 Design Examples
    9.3.1 Design Example 1
    9.3.2 Design Example 2
    9.3.3 Design Example 3
  9.4 Notes and References

10 Conclusion
  10.1 The Necessity of the Uncertain Model
    10.1.1 Certainty Prevision
    10.1.2 Uncertain Prevision
  10.2 The Closed-loop Interconnection
  10.3 Design Procedure

A Convergence Properties of the HKA

References

Index


Notation and Acronyms

Sets

R            set of real numbers
Rn           set of real column vectors with n entries
Rn×m         set of real matrices of dimension n×m
C            set of complex numbers
C−           open left-half plane
Cn           set of complex column vectors with n entries
Cn×m         set of complex matrices of dimension n×m
H ⊂ Rn×n     set of real Hurwitz matrices
Sn           set of real symmetric matrices of size n, i.e., Sn = {S ∈ Rn×n : S = ST}
PH           set of Hurwitz polynomials
L1           set of absolute-value integrable signals
L2           set of square-integrable signals
L∞           set of signals bounded in amplitude
RH2^(n×m)    set of strictly proper and stable transfer matrices of dimension n×m
RH∞^(n×m)    set of proper and stable transfer matrices of dimension n×m

Relational Operators

=         equal to
≈         approximately equal to
<         less than
≤         less than or equal to
≪         much less than
>         greater than
≥         greater than or equal to
≫         much greater than
≺, ⪯      component-wise inequalities between vectors
<e, ≤e    component-wise inequalities between matrices
⇒         implies
⇔         is equivalent to

Miscellaneous

s              the complex Laplace variable
j              the imaginary unit √−1
π              the ratio of a circle's circumference to its diameter, π ≈ 3.1416
exp(·)         exponential of the quantity passed in argument, also denoted e^(·)
ln(·)          the natural (or Napierian) logarithm of the quantity passed in argument
∈              belongs to
⊂              subset of
∪              union
∃              there exists
∀              for all
:              such that
Re(·)          real part of the complex number passed in argument
Im(·)          imaginary part of the complex number passed in argument
x ∈ [a, b]     a ≤ x ≤ b, where a, x, b ∈ R
lim(x→a) f(x)  the value of f(x) in the limit as x tends to a
∇f(x)          gradient vector of f(x), ∇f(x) = (∂f/∂x1, ..., ∂f/∂xn)T, where x = (x1, ..., xn)T
∇x f(x, y)     gradient vector of f(x, y) with respect to the vector x
[G, K]         standard feedback interconnection of system G and controller K


Matrix Operations

0                   zero matrix of compatible dimension
I                   identity matrix of compatible dimension
In                  identity matrix of dimension n×n
AT                  transpose of matrix A
A*                  conjugate transpose of the complex matrix A
A−1                 inverse of matrix A
A−T                 denotes (A−1)T or, equivalently, (AT)−1
det(A)              determinant of matrix A
diag(v)             diagonal matrix with the elements of the vector v on the main diagonal
diag(A1, ..., An)   block-diagonal matrix with the matrices Ai on the main diagonal
rank(A)             rank of matrix A
trace(A)            trace of matrix A
vect(A)             vector of the column vectors of the matrix A
vectd(A)            vector of the diagonal elements of the square matrix A
Fl(G, K)            lower linear fractional transformation of matrices G and K
Fu(G, K)            upper linear fractional transformation of matrices G and K
A ≻ 0               symmetric matrix A = AT with strictly positive eigenvalues
A ⪰ 0               symmetric matrix A = AT with non-negative eigenvalues
A ≺ 0               symmetric matrix A = AT with strictly negative eigenvalues
A ⪯ 0               symmetric matrix A = AT with non-positive eigenvalues
A ≺ B               denotes (A − B) ≺ 0

Measure of Size

λi(A)     ith eigenvalue of the matrix A
λ̄(A)      largest eigenvalue of the symmetric matrix A
λ(A)      smallest eigenvalue of the symmetric matrix A
σi(A)     ith singular value of the matrix A
σ̄(A)      largest singular value of the matrix A
σ(A)      smallest singular value of the matrix A
|x|       modulus (or magnitude) of x ∈ C
‖x‖       Euclidean norm of the real or complex vector x, also denoted ‖x‖2
‖u‖1      1-norm of the signal u
‖u‖2      2-norm of the signal u
‖u‖∞      infinity-norm of the signal u
‖G‖2      2-norm of the transfer matrix G ∈ RH2
‖G‖∞      infinity-norm of the transfer matrix G ∈ RH∞
µ∆(M)     structured singular value of the matrix M with respect to a given uncertainty structure ∆
bG,K      robust stability margin for system G and controller K


Acronyms

BB       Branch and Bound
BMI      Bilinear Matrix Inequality
GA       Genetic Algorithm
HKA      Heuristic Kalman Algorithm
h.o.t.   higher-order terms
iid      independent and identically distributed
LDI      Linear Differential Inclusion
LFT      Linear Fractional Transformation
LMI      Linear Matrix Inequality
LPV      Linear Parameter Varying
LQ       Linear Quadratic
LSDP     Loop-Shaping Design Procedure
LTI      Linear Time Invariant
MA       Metropolis Algorithm
MIMO     Multi-Input Multi-Output
pdf      probability density function
PID      Proportional Integral Derivative
PRSM     Pure Random Search Method
PSO      Particle Swarm Optimization
RMS      Root Mean Square
SA       Simulated Annealing
SOF      Static Output Feedback
SDP      Semidefinite Programming
SISO     Single-Input Single-Output
SSV      Structured Singular Value
STD      Standard Deviation


Chapter 1

Introduction

This chapter introduces some basic definitions and concepts from classical optimization theory and indicates some limitations of the classical optimization methods. The class of convex optimization problems is also briefly presented with an emphasis on semi-definite programs. Some aspects related to optimization in engineering design are also introduced. After that, the main objectives of the book are presented.

1.1 General Formulation of an Optimization Problem

Optimization is the way of achieving the best possible outcome given the degrees of freedom and the constraints. By degrees of freedom we mean the number of independent variables, or design variables, that can be used by the designer to modify the outcome. The quality of the outcome can be evaluated via a function, called the objective function, that depends on the design variables. This objective function may reflect a cost that needs to be minimized or a benefit that must be maximized. Thus, optimization can be seen as the way of determining the values of the design variables that give the minimum or the maximum of a given objective function.


Any control system design can be viewed in terms of optimization. To understand this, consider the general system presented in figure 1.1, which produces an output in response to a given input. In addition, this system has some tuning parameters (decision variables), denoted κ, allowing the modification of its behavior. By behavior we mean the relationship existing between the inputs and outputs. The problem is then to find the values of the tuning parameters κ so that the system behaves properly. Usually, the desired behavior can be formulated via an objective function that depends on the tuning parameters and which needs to be maximized or minimized with respect to κ.

Fig. 1.1: Control system design. The problem is to determine the tuning parameters so that the system behaves as specified by the designer.

More formally, an optimization problem, also called a mathematical programming problem or simply a programming problem, has the following general form:

minimize    f(x)
subject to  gi(x) ≤ 0,  i = 1, ..., ng
            hi(x) = 0,  i = 1, ..., nh
            x ∈ D = {x ∈ Rnx | x̲ ⪯ x ⪯ x̄}        (1.1)

where f : Rnx → R is the objective function (or cost function), i.e., the function that we want to minimize[1]; gi : Rnx → R, i = 1, ..., ng, are the inequality constraint functions; hi : Rnx → R, i = 1, ..., nh, are the equality constraint functions; and the vector x = (x1, ..., xnx)T is the optimization variable, also called the decision variable or design variable. The set D is what we call the search domain[2], i.e., the set over which the minimization is performed. The vectors x̲ = (x̲1, ..., x̲nx)T and x̄ = (x̄1, ..., x̄nx)T are the bounds of the search domain, and the symbol ⪯ denotes a componentwise inequality. A vector x ∈ D is said to be feasible if it satisfies the ng inequality constraints gi(x) ≤ 0 and the nh equality constraints hi(x) = 0; the set of feasible vectors is called the feasible domain F, which is a subset of D and is given by

F = D ∩ {x ∈ D | gi(x) ≤ 0, i = 1, ..., ng, hi(x) = 0, i = 1, ..., nh}        (1.2)

[1] Note that any maximization problem can be converted into a minimization problem. Indeed, maximizing f(x) is the same as minimizing −f(x).
[2] The set D is a hyperbox and thus is also called the hyperbox search domain.

Obviously, the optimization problem (1.1) is said to be feasible if and only if the set F is not empty. When there are no constraints (i.e., F = D = Rnx), problem (1.1) is said to be unconstrained. We denote by x* a solution of problem (1.1), i.e., a vector that ensures the smallest objective value among all vectors satisfying the constraints. A vector solution of problem (1.1) is also said to be optimal. We refer to f* as the minimum value of the objective function over F, i.e., f* = f(x*). The optimization problem (1.1) can also be written in the more compact form

minimize    f(x)
subject to  g(x) ⪯ 0
            h(x) = 0
            x ∈ D = {x ∈ Rnx | x̲ ⪯ x ⪯ x̄}        (1.3)

where g(x) = (g1(x), ..., gng(x))T is the vector of inequality constraint functions, and h(x) = (h1(x), ..., hnh(x))T is the vector of equality constraint functions.
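
As a concrete illustration of problem (1.3), not taken from the book, the membership test defining the feasible domain F can be written in a few lines of Python; all names below (is_feasible, g, h, etc.) are hypothetical and chosen only for readability. A minimal sketch might look as follows.

    import numpy as np

    def is_feasible(x, g, h, x_low, x_up, tol=1e-9):
        """Test membership of x in the feasible domain F of problem (1.3):
        g(x) <= 0 componentwise, h(x) = 0, and x inside the hyperbox D."""
        x = np.asarray(x, dtype=float)
        in_box = np.all(x >= x_low) and np.all(x <= x_up)
        ineq_ok = np.all(np.asarray(g(x)) <= tol)        # g(x) <= 0
        eq_ok = np.all(np.abs(np.asarray(h(x))) <= tol)  # h(x) = 0, within tolerance
        return bool(in_box and ineq_ok and eq_ok)

Any candidate produced by an optimization algorithm can be screened with such a test before its objective value is compared with that of the current best feasible point.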

1.1.1 Design Variables, Constraints and ObjectiveFunction

Any optimization problem is entirely defined by giving:

• the vector of design variables x = (x1, ..., xnx)T,
• the constraints gi(x) ≤ 0 and hi(x) = 0,
• the objective function f(x).

These elements result from a formal description of the problem to be solved and are now examined in depth.

1.1.1.1 Design variables

When designing a system, there are some quantities, called design (or optimization) variables, that need to be fixed in order to meet certain design requirements. The number of independent variables nx determines the size of the problem. When the design vector is real-valued, problem (1.1) is referred to as a continuous optimization problem, as opposed to discrete optimization, in which the variables can take only integer values. In that case, the problem is said to be an integer programming problem, which includes boolean programming, i.e., the case where the variables can take only the values 0 and 1.

1.1.1.2 Constraints

Usually, the optimization variables cannot be selected arbitrarily; they are subject to some restrictions called constraints. These constraints are imposed by the nature of the problem to be solved, or reflect some design requirements imposed by the designer.

For practical reasons, the optimization variables are usually bounded, i.e., a design variable xi must belong to a prescribed interval [x̲i, x̄i], where x̲i and x̄i are the bounds of xi. This is a particular kind of constraint called a side constraint. Such constraints may result from various physical limitations or may be imposed as design requirements.

Example 1.1. Consider the constraints of the following two-dimensional optimization problem:

minimize    f(x) = −x1 x2 exp(−x1)
subject to  g1(x) = (3x1 − 8)^2 + 10x2 − 90 ≤ 0
            g2(x) = (3.92x1 − 9.6)^3 + 5(3.92x1 − 9.6)^2 − 30(3.92x1 − 9.6) − 100x2 + 200 ≤ 0
            x ∈ D = {x ∈ R2 | 0.5 ≤ x1 ≤ 4.2, 1.5 ≤ x2 ≤ 8.2}        (1.4)

The domain of values of x that satisfy the constraints is given by F = D ∩ {x ∈ D | g1(x) ≤ 0, g2(x) ≤ 0}. Figure 1.2 gives a geometrical representation of the domain F.
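
A quick way to visualize or approximate the feasible domain of example (1.4) is to sample the hyperbox D on a grid and keep the points satisfying both constraints. The following Python sketch, not from the book and with an arbitrarily chosen grid resolution, also reports the best feasible grid point, a crude approximation of the optimum x*.

    import numpy as np

    # Objective and constraints of example (1.4)
    f = lambda x1, x2: -x1 * x2 * np.exp(-x1)
    g1 = lambda x1, x2: (3*x1 - 8)**2 + 10*x2 - 90
    g2 = lambda x1, x2: ((3.92*x1 - 9.6)**3 + 5*(3.92*x1 - 9.6)**2
                         - 30*(3.92*x1 - 9.6) - 100*x2 + 200)

    # Grid over the hyperbox search domain D
    x1, x2 = np.meshgrid(np.linspace(0.5, 4.2, 400), np.linspace(1.5, 8.2, 400))
    feasible = (g1(x1, x2) <= 0) & (g2(x1, x2) <= 0)   # membership of F

    # Best feasible grid point: a crude approximation of x*
    vals = np.where(feasible, f(x1, x2), np.inf)
    i = np.unravel_index(np.argmin(vals), vals.shape)
    print("approx. x* =", (x1[i], x2[i]), " f =", vals[i])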

1.1.1.3 Objective function

Design variables and constraints give rise to the set of feasible solutions F. Any element of F satisfies the requirements imposed by the constraints and thus is an acceptable candidate solution. The purpose of optimization is to select within F the best possible solution. To do so, a criterion, called the objective function and denoted f, must be defined to compare two different candidate solutions. In the case of a minimization problem, a candidate solution x′ ∈ F is said to be better than x′′ ∈ F if f(x′) < f(x′′). Therefore, the best possible solution x* is such that f(x*) ≤ f(x) for all x ∈ F. Figure 1.2 shows the optimal solution of the two-dimensional optimization problem (1.4). The dashed curves represent the contour plot of the objective function, i.e., the set of points for which f(x) = ci, i = 1, ..., 15, where ci is a given constant value. The optimal solution x* lies on the lowest level curve of the objective function.


Fig. 1.2: Geometrical representation of the two-dimensional problem (1.4). The feasible domain F, represented in grey, divides the design space into two regions: one in which the constraints are satisfied, i.e., x ∈ F, and one in which x ∉ F. A point x on the boundary of F satisfies the constraints critically, whereas a point x ∉ F is unacceptable as a candidate solution. The dashed curves represent the contour plot of the objective function f(x); the optimal solution x* is such that x* ∈ F and f(x*) ≤ f(x) for all x ∈ F.

1.1.2 Global and Local Minimum, Descent Direction

Optimization consists of finding a feasible point x* for which the objective function takes its minimum value. A distinction must be made between a global and a local minimum.

Global minimum. A feasible point x* ∈ F is said to be a global minimum point, or a global minimizer, if f(x*) ≤ f(x) for all x ∈ F. In this case, f(x*) is the global minimum of f(x) over F.

Local minimum. A feasible point x* is said to be a local minimum point, or a local minimizer, if there is a neighborhood[3] V ⊂ F of x* such that f(x*) ≤ f(x) for all x ∈ V. In this case, f(x*) is the local minimum of f(x) over V. Note that a global minimum is also a local minimum; the converse is false, except for convex optimization problems (see section 1.2).

[3] A neighborhood V of a point x* over F is defined as V = {x ∈ F | ‖x − x*‖ ≤ ρ}, where ρ > 0 is a given positive real number.


Obviously, it is desirable to look for a global minimizer, since this gives the assurance that nothing better can be found. However, the global minimizer is generally very difficult to find, because f is usually known only locally. Most algorithms are able to find only a local minimizer, which is a point that achieves the smallest value of f in its neighborhood. Such a point can be reached using successive descent directions.

Descent direction. Usually, an optimization problem is solved via an iterative algorithm. This means that, given an initial point x0, a sequence of feasible points {xk} is generated by repeated application of a transition rule. In other words, two successive points of the sequence are such that

xk+1 = A(xk) (1.5)

where A is the transition rule by means of which the new point xk+1 is calculated according to the information gained at point xk. The available information at point xk can be the values of the objective function and the constraints at xk, the values of the first-order derivatives of these functions at xk and, possibly, the values of the second-order derivatives of these functions at xk.

To solve a minimization problem, it seems reasonable to require that the transition rule A be such that xk+1 ∈ F and f(xk+1) < f(xk). Under these conditions, we can expect that by successive applications of (1.5) we finally obtain a local minimum point. This is the principle of the so-called descent methods. For this kind of algorithm, the transition rule can be written as follows:

xk+1 = A(xk) = xk + αkdk (1.6)

where dk is a vector in Rnx called the search direction, and the scalar αk ≥ 0 is called the step size or step length at iteration k. The new point xk+1 is thus generated by adding to the current point xk an appropriately chosen vector αk dk.

Among all the feasible search directions[4], we are interested in finding a direction dk along which the objective function f is decreasing. Such a direction is called a feasible descent direction. More formally, a vector dk ∈ Rnx is said to be a feasible descent direction for f at xk if there exists δ > 0 such that

xk + αk dk ∈ F  and  f(xk + αk dk) < f(xk),  for all αk ∈ (0, δ]        (1.7)

If the objective function f is differentiable, we can use the first-order Taylor series to expand f at xk; thus we can write:

f(xk + αk dk) = f(xk) + αk ∇f(xk)T dk + h.o.t.        (1.8)

[4] A search direction is said to be a feasible direction at the point xk if there exists δ > 0 such that xk + αk dk ∈ F for all αk ∈ [0, δ].


where the higher-order terms (h.o.t.) tend to zero as αk → 0+. Consequently, the quantity ∇f(xk)T dk, called the directional derivative of f at xk along dk, can be defined as:

∇f(xk)T dk = lim(αk→0+) [f(xk + αk dk) − f(xk)] / αk        (1.9)

From (1.9), we can see that a vector dk is a descent direction (i.e., f(xk + αk dk) − f(xk) < 0 for some αk) if the directional derivative of f at xk along dk is negative:

∇f(xk)T dk < 0 (1.10)

This inequality means that dk is a descent direction if the angle between dk and the direction of the gradient at xk is obtuse. Therefore, the anti-gradient −∇f(xk) is a descent direction. Note that if ∇f(xk)T dk > 0, then dk is an ascent direction, i.e., the function f is increasing along dk; more precisely, we have f(xk + αk dk) > f(xk) for a sufficiently small positive value of αk. In this case, the angle between ∇f(xk) and dk is acute (see figure 1.3).

Fig. 1.3: The angle between ∇f(x) and d is obtuse, i.e., ∇f(x)T d < 0; d is then a descent direction.
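
To make the descent iteration (1.6) concrete, here is a minimal Python sketch, not from the book, of steepest descent for the unconstrained case (where every direction is feasible): the search direction is the anti-gradient, which is a descent direction by (1.10), and the step size αk is shrunk by simple backtracking until the objective decreases.

    import numpy as np

    def gradient_descent(f, grad, x0, alpha0=1.0, beta=0.5, tol=1e-8, max_iter=500):
        """Steepest descent with backtracking: transition rule (1.6)
        with dk = -grad f(xk), for an unconstrained problem."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            d = -grad(x)                  # anti-gradient: a descent direction
            if np.linalg.norm(d) < tol:   # near-stationary: stop
                break
            alpha = alpha0
            while f(x + alpha * d) >= f(x) and alpha > 1e-16:
                alpha *= beta             # shrink step until f decreases
            x = x + alpha * d
        return x

    # Example: minimize f(x) = (x1 - 1)^2 + 2(x2 + 0.5)^2
    f = lambda x: (x[0] - 1)**2 + 2*(x[1] + 0.5)**2
    grad = lambda x: np.array([2*(x[0] - 1), 4*(x[1] + 0.5)])
    print(gradient_descent(f, grad, x0=[5.0, 5.0]))   # approx. [1.0, -0.5]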

1.1.3 Optimality Conditions

To solve an optimization problem, it is essential to state the conditions that a local minimum point must satisfy. For a constrained optimization problem, we can reasonably say that a point x* ∈ F is a local minimum point if the objective function f cannot be decreased for sufficiently small positive displacements from x* along any feasible direction d. In other words, x* is a local minimum point if there is no feasible d satisfying ∇f(x*)T d < 0, and so we can only find feasible d such that

∇f(x∗)T d ≥ 0 (1.11)

This is indeed a necessary condition that a local minimum of a constrained optimization problem must satisfy. Figure 1.2 gives a geometric interpretation of the optimality condition (1.11). The feasible set F is shown shaded. We can infer that from x* it is not possible to find a feasible descent direction; thus x* satisfies the necessary condition for a local minimum.

In the case of an unconstrained optimization problem, any direction d ∈ Rnx is a feasible direction. Thus if we have ∇f(x*)T d ≥ 0 for all d ∈ Rnx, this necessarily means that

∇f(x∗) = 0 (1.12)

we recognize the well-known necessary condition that a local minimum must satisfy in the case of unconstrained optimization problems. A point x* is said to be a stationary point if it satisfies ∇f(x*) = 0. Therefore, for an unconstrained optimization problem, a necessary condition for a point to be a local minimizer is that it be a stationary point. It is interesting to note that the set of stationary points can be determined by solving the set of equations

∇f(x) = 0 (1.13)
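
For simple differentiable objectives, the system (1.13) can be solved symbolically. The short Python sketch below, with an arbitrarily chosen f not taken from the book, uses the sympy library to find all stationary points.

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2', real=True)
    f = x1**3 - 3*x1 + x2**2          # an arbitrary smooth objective

    # Solve the stationarity system (1.13): grad f(x) = 0
    grad = [sp.diff(f, v) for v in (x1, x2)]
    stationary_points = sp.solve(grad, [x1, x2], dict=True)
    print(stationary_points)          # [{x1: -1, x2: 0}, {x1: 1, x2: 0}]

Of the two stationary points found, only (1, 0) is a local minimizer; (−1, 0) is a saddle point, which illustrates that stationarity is only a necessary condition.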

In the same way, it is important to know the set of equations that a candidate solution of a constrained optimization problem must satisfy in order to be a local minimum point. This can be done using the Karush-Kuhn-Tucker conditions.

KKT Necessary Optimality Condition. Let x* be a local minimizer of the constrained optimization problem (1.3), and assume that x* is a regular[5] point of the constraints. Then there exist uniquely defined vectors λ*, µ* of Lagrange multipliers such that x*, λ*, µ* are a solution of the KKT system of equalities and inequalities with unknowns x, λ, µ:

∇xL(x, λ, µ) = 0 (1.14)

∇λL(x, λ, µ) = 0 (1.15)

µT g(x) = 0 (1.16)

g(x) � 0 (1.17)

µ � 0 (1.18)

where L(x, λ, µ), called the Lagrangian function, is defined as

L(x, λ, µ) = f(x) + µT g(x) + λTh(x) (1.19)

[5] A point x* is said to be regular for the system of constraints if the gradients of the equality constraints ∇hi(x*), i = 1, ..., nh, and the gradients of the active inequality constraints ∇gi(x*), i ∈ {j : gj(x*) = 0}, are linearly independent.


Equation (1.14) represents the nx stationarity conditions, (1.15) represents the nh equality constraints, (1.16) represents the ng complementary slackness conditions, i.e., the fact that µi = 0 if gi is not active and µi ≠ 0 if gi is active, (1.17) represents the inequality constraints, and (1.18) imposes that the multipliers µi be non-negative. Generally speaking, the KKT system cannot be solved analytically, and therefore only a numerical method can be employed to find a local minimum point.

It can be shown that the above KKT conditions are necessary and sufficient conditions of optimality in the case of a convex optimization problem.
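
As a small worked instance, not from the book, consider minimizing x1^2 + x2^2 subject to the single inequality constraint x1 + x2 ≥ 1, i.e., g(x) = 1 − x1 − x2 ≤ 0. The Python sketch below builds the Lagrangian (1.19) and solves the KKT system (1.14)-(1.18) symbolically with sympy.

    import sympy as sp

    x1, x2, mu = sp.symbols('x1 x2 mu', real=True)
    f = x1**2 + x2**2
    g = 1 - x1 - x2                    # inequality constraint g(x) <= 0

    L = f + mu * g                     # Lagrangian (1.19), no equality constraints
    kkt = [sp.diff(L, x1), sp.diff(L, x2), mu * g]   # stationarity + slackness (1.16)
    for sol in sp.solve(kkt, [x1, x2, mu], dict=True):
        # keep solutions satisfying dual feasibility (1.18) and primal feasibility (1.17)
        if sol[mu] >= 0 and g.subs(sol) <= 0:
            print(sol)                 # {x1: 1/2, x2: 1/2, mu: 1}

The unique KKT point is x* = (1/2, 1/2) with multiplier µ* = 1; since this problem is convex, the KKT conditions are also sufficient and this point is the global minimizer.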

1.2 Convex Optimization Problems

Convex programming is a special case of nonlinear programming of great practical importance, for at least two reasons. First, it includes a broad class of commonly addressed optimization problems, such as linear programming, quadratic programming, geometric programming, etc., and many practical engineering problems can be formulated as convex optimization problems. Second, and more importantly, a convex optimization problem can be solved very efficiently because any local optimum is also the global optimum. In fact, it is now well recognized that the frontier between those problems that are tractable and those that are intractable is convexity. In other words, if we are able to formulate an optimization problem that is convex, it is guaranteed that we can find the global optimum very efficiently, i.e., in time polynomial in the number of decision variables (see the notes and references for a short discussion on complexity). In contrast, a nonconvex optimization problem can be very hard to solve even for a small number of decision variables.

Roughly speaking, a convex optimization problem is the problem of minimizing a convex function over a convex set. In what follows, we introduce some basic definitions concerning the convexity of sets and functions.

Convex Sets. A set S ⊂ Rn is said to be convex if it contains the line segment between any two points in S. In other words, for any x1 ∈ S, x2 ∈ S, and any α satisfying 0 ≤ α ≤ 1, we have

αx1 + (1− α)x2 ∈ S (1.20)

Figure 1.4 gives an example of convex and nonconvex sets in R2. A point written in the form x = Σ_{i=1..N} αi xi, with αi ≥ 0 and Σ_{i=1..N} αi = 1, is called a convex combination of the points x1, ..., xN. It can be shown that a set is convex if and only if it contains all the convex combinations of its points.


Fig. 1.4: Example of convex and nonconvex sets. The set on the left is convex, whereas the V-shaped set is not convex because the line segment between the two points in the set is not contained in the set.

Fig. 1.5: Convex hull of the V-shaped set of figure 1.4, and convex hull of a finite set of points, i.e., a polytope.

The convex hull of S, denoted by Co(S), is defined as the set of all convex combinations of points in S; Co(S) is then a convex set even if S is not convex. A polytope is defined as the convex hull of a finite set of points {x1, ..., xN} ⊂ Rn, i.e.,

Co({x1, ..., xN}) = { Σ_{i=1..N} αi xi : αi ≥ 0, Σ_{i=1..N} αi = 1 }        (1.21)

Figure 1.5 gives an example of convex hulls in R2.
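
Definition (1.21) translates directly into code: drawing random weight vectors that are nonnegative and sum to one, and forming the corresponding combinations of the vertices, produces points of the polytope. A small Python sketch follows; the vertices are arbitrary example data, not from the book.

    import numpy as np

    rng = np.random.default_rng(0)

    # Vertices of a polytope in R^2 (arbitrary example points)
    V = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.5, 2.0]])

    # Random convex combinations (1.21): Dirichlet weights are
    # nonnegative and sum to one by construction.
    alpha = rng.dirichlet(np.ones(len(V)), size=1000)   # shape (1000, 4)
    samples = alpha @ V                                 # points in Co(V)

    # Every sample lies inside the convex hull of the vertices
    print(samples.min(axis=0), samples.max(axis=0))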

Convex Functions. A function f : Rn → R is convex if its domain of definition, denoted dom f, is a convex set and if for all x, y ∈ dom f and all α satisfying 0 ≤ α ≤ 1, we have

f(αx+ (1− α)y) ≤ αf(x) + (1− α)f(y) (1.22)

The geometric interpretation of this inequality is that the line segment between any two points on the graph of f lies above the graph (see figure 1.6). A function f is said to be concave if −f is convex. A function that is both convex and concave is affine.

A necessary and sufficient condition for the convexity of a differentiable function f is that

f(y) ≥ f(x) + ∇f(x)T (y − x),  ∀x, y ∈ dom f        (1.23)


Fig. 1.6: The function represented by this graph is convex because the line segment between any two points on the graph is above the graph.

In other words, f is a convex function if and only if the affine approximation of f near any x ∈ domf, obtained via the first-order Taylor expansion, i.e., f̂(y) = f(x) + ∇f(x)T (y − x), is always a global underestimator of f: f(y) ≥ f̂(y). This is illustrated in figure 1.7.

Fig. 1.7: A differentiable function f is convex if and only if the first-order Taylor approximation of f at any x ∈ domf, expressed as f̂(y) = f(x) + ∇f(x)T (y − x), is such that f(y) ≥ f̂(y).

This property shows that from a local evaluation of f, we can get global information about it. In particular, if x satisfies the first-order optimality condition ∇f(x) = 0, it follows from (1.23) that f(y) ≥ f(x) for all y ∈ domf, and thus x is a global minimizer of f.
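To make this concrete, the sketch below checks the underestimator property (1.23) numerically for a simple convex function; the particular function and the random sample points are arbitrary choices made purely for illustration.

```python
import numpy as np

# An illustrative convex function f: R^2 -> R and its gradient.
f = lambda x: np.sum(x**2) + np.exp(x[0])
grad_f = lambda x: 2 * x + np.array([np.exp(x[0]), 0.0])

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    # First-order Taylor approximation of f at x, evaluated at y.
    f_hat = f(x) + grad_f(x) @ (y - x)
    assert f(y) >= f_hat - 1e-12  # global underestimator, cf. (1.23)
print("Property (1.23) verified on 1000 random point pairs.")
```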


General Form of a Convex Optimization Problem. An optimization problem is said to be convex if it can be written as follows

minimize   f(x)
subject to gi(x) ≤ 0, i = 1, · · · , ng
           hi(x) = 0, i = 1, · · · , nh     (1.24)

where f and the gi are convex functions, and the equality constraints are affine: hi(x) = aiT x + bi. Since the functions gi of the inequality constraints are convex and the functions hi of the equality constraints are affine, the resulting feasible domain is a convex set. Due to the convexity of the objective function and of the feasible set, the optimality condition given in (1.11) is necessary and sufficient.
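As an illustration, a problem of the form (1.24) can be posed and solved with an off-the-shelf convex solver. The minimal sketch below uses the CVXPY modeling package, which is just one possible choice of interface; the problem data are invented for the example.

```python
import cvxpy as cp
import numpy as np

# Illustrative data: a convex quadratic objective, one convex inequality
# constraint g(x) <= 0, and one affine equality constraint, cf. (1.24).
x = cp.Variable(3)
f = cp.sum_squares(x - np.array([1.0, 2.0, 3.0]))  # convex objective
g = cp.norm(x, 2) - 2.0                             # convex inequality
a, b = np.array([1.0, 1.0, 1.0]), 1.0               # affine equality data

prob = cp.Problem(cp.Minimize(f), [g <= 0, a @ x == b])
prob.solve()
print(prob.status, prob.value, x.value)  # any solution found is globally optimal
```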

Convex programming includes as a special case the so-called semidefinite programming, under which a broad class of control problems can be formulated and efficiently solved.

1.2.1 Semidefinite Programming

A semidefinite program (SDP) has the following general form

minimize   cT x
subject to F(x) ⪰ 0
           Ax = b     (1.25)

where c ∈ Rnx, Fi ∈ Sk, i = 0, · · · , nx, A ∈ Rnh×nx and b ∈ Rnh are the problem data. In our notation, Sk represents the set of symmetric matrices of size k, i.e., Sk = {S ∈ Rk×k : S = ST}. The constraint F(x) ⪰ 0 is a linear matrix inequality (LMI), which is defined as

F(x) = F(x)T = F0 + ∑_{i=1}^{nx} xi Fi ⪰ 0     (1.26)

where Fi ∈ Sk, i = 0, · · · , nx, are given symmetric matrices. The notation F(x) ⪰ 0 means that the inequality is satisfied if x ∈ Rnx is such that F(x) is a positive semidefinite matrix, i.e., zT F(x)z ≥ 0 for all z ∈ Rk. Equivalently, this means that the eigenvalues of F(x) are nonnegative6. As a result, it can be easily verified that the constraint F(x) ⪰ 0 is convex, i.e., the set {x : F(x) ⪰ 0} is convex. Therefore, an SDP is a convex optimization problem.

Multiple LMIs F(1)(x) ⪰ 0, · · · , F(ng)(x) ⪰ 0 can be rewritten as a single LMI by using a block-diagonal matrix, i.e., diag(F(1)(x), · · · , F(ng)(x)) ⪰ 0.

6 Since F (x) is symmetric its eigenvalues are necessarily real numbers.


By using a block-diagonal matrix, we can also incorporate linear inequality constraints in a single LMI. Indeed, the set of constraints {a1T x ≤ b1, · · · , angT x ≤ bng, F(x) ⪰ 0} can be rewritten as diag(b1 − a1T x, · · · , bng − angT x, F(x)) ⪰ 0. Note that an SDP reduces to a linear program when the matrices Fi are all diagonal; indeed, in this case F(x) ⪰ 0 defines a set of linear inequality constraints. Note however that, in general, an LMI is a nonlinear constraint; the term linear matrix inequality comes from the fact that F(x) is affine in x.

We can also encounter strict LMIs, denoted F(x) ≻ 0, which means that the inequality is satisfied if x ∈ Rnx is such that F(x) is a positive definite matrix, i.e., zT F(x)z > 0 for all non-zero z ∈ Rk (or, equivalently, the eigenvalues of F(x) are positive). Note that if the non-strict LMI F(x) ⪰ εI, with ε > 0, is satisfied, then the strict LMI F(x) ≻ 0 is satisfied.
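A minimal sketch of how an SDP in the standard form (1.25)–(1.26) can be solved numerically is given below; the data matrices F0, F1, F2 and the cost vector c are arbitrary illustrative choices, and CVXPY is again just one possible solver interface.

```python
import cvxpy as cp
import numpy as np

# Arbitrary symmetric data for F(x) = F0 + x1*F1 + x2*F2, cf. (1.26).
F0 = np.array([[2.0, 0.5], [0.5, 2.0]])
F1 = np.array([[1.0, 0.0], [0.0, -1.0]])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.array([1.0, 1.0])

x = cp.Variable(2)
F = F0 + x[0] * F1 + x[1] * F2
prob = cp.Problem(cp.Minimize(c @ x), [F >> 0])  # F(x) positive semidefinite
prob.solve()
print(prob.status, x.value)
```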

Matrix Variable. Most LMIs are not formulated in the standard form (1.26); consider, for instance, the Lyapunov inequality

AT P + PA ≺ 0, P = PT ≻ 0     (1.27)

where A is a given matrix and P is the matrix variable. In this case, (1.27) is said to be an LMI in P. As shown in example 1.2, LMIs that are formulated with matrix variables can be rewritten into the standard form (1.26).

Example 1.2. For simplicity, assume that the matrix variable P belongs to S2; then any P ∈ S2 can be expressed as

P(x) = x1 [1 0; 0 0] + x2 [0 1; 1 0] + x3 [0 0; 0 1] = x1 P1 + x2 P2 + x3 P3     (1.28)

Note that a symmetric matrix variable of size k represents k(k + 1)/2 decision variables. A symmetric matrix variable is often called a Lyapunov variable. The Lyapunov inequality (1.27) can then be written as

F(x) = ∑_{i=1}^{3} xi Fi, Fi = [Pi 0; 0 −AT Pi − Pi A] ∈ S4     (1.29)

which is an LMI in the standard form (1.26) with F0 = 0.
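In practice, modern SDP solvers accept matrix variables directly, so the conversion of example 1.2 need not be carried out by hand. The sketch below, with an arbitrary stable matrix A chosen for illustration, searches for a Lyapunov certificate P satisfying (1.27); the small margin eps implements the strict inequalities via non-strict ones, as discussed above.

```python
import cvxpy as cp
import numpy as np

# A stable matrix chosen for illustration (eigenvalues -1 and -3).
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])

# Symmetric matrix variable P: k(k+1)/2 = 3 scalar decision variables.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6  # margin enforcing strict inequalities, cf. F(x) >= eps*I

constraints = [P >> eps * np.eye(2),                  # P > 0
               A.T @ P + P @ A << -eps * np.eye(2)]   # A^T P + P A < 0, cf. (1.27)
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" => a Lyapunov certificate P exists
print(P.value)
```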

Schur Complement Lemma. The Schur complement lemma states that for any given matrices Q = QT, R = RT ≻ 0, and S of appropriate dimensions, the LMI

L = [Q S; ST R] ≻ 0     (1.30)

is equivalent to

R ≻ 0, Q − S R−1 ST ≻ 0     (1.31)

The matrix Q − S R−1 ST is called the Schur complement of R in L. This result is very often used to convert a convex quadratic constraint into an LMI. For instance, the Riccati inequality

AT P + PA + P B R−1 BT P + Q ≺ 0, P = PT ≻ 0     (1.32)


where A, B, Q = QT, R = RT ≻ 0 are given matrices and P is the variable, can be rewritten as

[−AT P − PA − Q   PB; BT P   R] ≻ 0     (1.33)

by application of the Schur complement lemma.
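Continuing in the same vein, the feasibility of the Riccati inequality in its linearized form (1.33) can be checked numerically; the data A, B, Q, R below are arbitrary illustrative choices, and cp.bmat is used to assemble the block matrix of (1.33).

```python
import cvxpy as cp
import numpy as np

# Illustrative data for the Riccati inequality (1.32)/(1.33).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
# Block LMI (1.33): the quadratic term P B R^{-1} B^T P of (1.32) has been
# absorbed into an LMI via the Schur complement lemma.
M = cp.bmat([[-A.T @ P - P @ A - Q, P @ B],
             [B.T @ P,              R    ]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(2), M >> eps * np.eye(3)])
prob.solve()
print(prob.status)  # "optimal" => a P satisfying (1.32) has been found
```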

1.2.2 Dual Problem

The optimization problem (1.24), also referred to as the primal problem, can be converted into a particular form called the dual problem. The interesting fact is that the solution of the dual problem, denoted d∗, provides a lower bound on the optimal value of the primal problem, denoted f∗. Since any feasible point x̃ of the primal problem satisfies f∗ ≤ f(x̃), it is guaranteed that the optimal value is such that d∗ ≤ f∗ ≤ f(x̃). Therefore, the solution of the dual problem can be used to provide a guarantee of optimality.

The dual problem can be obtained by associating to the optimization problem (1.24) the Lagrangian L(x, λ, µ)

L(x, λ, µ) = f(x) + ∑_{i=1}^{ng} µi gi(x) + ∑_{i=1}^{nh} λi hi(x)

where λ and µ are the Lagrange multiplier vectors, also called the dual variables. The dual function Ld(λ, µ) is defined as the minimum over x ∈ D of the Lagrangian

Ld(λ, µ) = min_{x∈D} L(x, λ, µ) = min_{x∈D} ( f(x) + ∑_{i=1}^{ng} µi gi(x) + ∑_{i=1}^{nh} λi hi(x) )     (1.34)

Note that the dual function is concave, since it is the pointwise minimum of a family of functions that are affine in the variables λ and µ. The main interest of introducing the dual function is that it gives lower bounds on the optimal objective function value f∗ of problem (1.24). For any λ and any µ ⪰ 0, we have

Ld(λ, µ) ≤ f∗     (1.35)

Indeed, for any feasible point x̃, we have λT h(x̃) + µT g(x̃) ≤ 0 because h(x̃) = 0 and g(x̃) ⪯ 0. Therefore L(x̃, λ, µ) = f(x̃) + λT h(x̃) + µT g(x̃) ≤ f(x̃), and thus we have

Ld(λ, µ) = min_{x∈D} L(x, λ, µ) ≤ L(x̃, λ, µ) ≤ f(x̃)     (1.36)

Given that Ld(λ, µ) ≤ f(x̃) for all x̃ ∈ F, we have in particular Ld(λ, µ) ≤ f∗.


Thus, for any λ and any µ ⪰ 0, the dual function provides a lower bound on the optimal value of the corresponding primal problem. The greatest such lower bound is the solution of the following optimization problem

maximize   Ld(λ, µ)
subject to µ ⪰ 0     (1.37)

called the Lagrange dual problem associated to problem (1.24). The dual problem is convex because it consists in maximizing a concave objective function over a convex constraint set. This property is true even when the primal problem is non-convex. The best lower bound can thus be determined very efficiently using available convex solvers. The difference f∗ − d∗ between the optimal values of the primal and dual problems is known as the optimal duality gap. For convex problems, the optimal duality gap is zero under very mild assumptions7. In general, this is not true for non-convex problems.
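As a small worked illustration of these notions (not taken from the book), consider the one-dimensional problem of minimizing x² subject to x ≥ 1, whose optimal value is f∗ = 1 at x∗ = 1 and whose dual can be computed in closed form:

```latex
% Primal: minimize x^2 subject to g(x) = 1 - x <= 0.
% Lagrangian, with multiplier mu >= 0:
L(x,\mu) = x^2 + \mu (1 - x)
% Dual function (unconstrained inner minimization; the minimizer is x = mu/2):
L_d(\mu) = \min_{x} \bigl( x^2 + \mu(1 - x) \bigr) = \mu - \tfrac{\mu^2}{4}
% Dual problem: maximize L_d over mu >= 0; the maximizer is mu* = 2, giving
d^* = \max_{\mu \ge 0} \Bigl( \mu - \tfrac{\mu^2}{4} \Bigr) = 1 = f^*
% The duality gap is zero, as expected for a convex problem satisfying
% Slater's condition (e.g., x = 2 is strictly feasible).
```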

1.3 Optimization in Engineering Design

Many problems encountered in the engineering sciences can be formulated as optimization problems. For instance, a design problem can be defined as follows: find x ∈ D = {x ∈ Rnx : x̲ ⪯ x ⪯ x̄} such that the set of performance specifications

fi(x, θ) ≤ αdi, i = 1, · · · , nf     (1.38)

is satisfied, where the positive real numbers αdi are the design objectives chosen by the designer, x = (x1, · · · , xnx)T is the vector of design variables, the fi(., .) are real positive functions, and θ = (θ1, · · · , θnθ)T is the parameter vector, which depends on the design problem. Usually, θ is assumed to be perfectly known; in this case, we denote by θ = θ0 the nominal parameter vector. The nominal design problem is then defined as: find x ∈ D subject to

fi(x, θ0) ≤ αdi, i = 1, · · · , nf     (1.39)

Any x satisfying (1.39) is a solution of the design problem considered. In fact, the set of specifications (1.39) defines the set of solutions F of the design problem

F = {x ∈ D | f(x, θ0) ⪯ αd}     (1.40)

where f(x, θ0) = (f1(x, θ0), · · · , fnf(x, θ0))T and αd = (αd1, · · · , αdnf)T. A design problem thus formulated is not, strictly speaking, an optimization

7 If the primal problem is convex and if there exists a strictly feasible point, i.e., a point x̃ satisfying g(x̃) ≺ 0 and Ax̃ = b, then the optimal duality gap is zero. The existence of a strictly feasible point is known as Slater's constraint qualification.


problem, because all x ∈ F are equally valid with respect to the design objective specified by αd. However, it can be extremely interesting to find x∗ ∈ F such that all the performance values fi(x∗, θ0) are as small as possible, with each fi(x∗, θ0) satisfying the corresponding design objective, i.e., (f1(x∗, θ0), · · · , fnf(x∗, θ0))T ⪯ αd ⇔ (f1(x∗, θ0)/αd1, · · · , fnf(x∗, θ0)/αdnf)T ⪯ 1, where 1 is the vector of all ones. This can be done by solving the following optimization problem

minimize_{α, x}   α
subject to        fi(x, θ0)/αdi ≤ α, i = 1, · · · , nf
                  x ∈ {x ∈ Rnx : x̲ ⪯ x ⪯ x̄}     (1.41)

Note that the additional scalar variable α has been introduced to formulate the optimization problem. A solution (x∗T, α∗)T of problem (1.41) satisfies fi(x∗, θ0) ≤ α∗ αdi. Since α∗ is minimal, the quantity α∗ αdi represents the best achievable performance.
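A minimal numerical sketch of the minimax formulation (1.41) is given below, using SciPy's SLSQP solver; the two performance functions and the design objectives are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative performance functions f1, f2 and design objectives (made up).
f = [lambda x: (x[0] - 1.0)**2 + 0.1,
     lambda x: (x[0] + x[1] - 2.0)**2 + 0.2]
alpha_d = [1.0, 0.5]

# Decision vector z = (x1, x2, alpha); objective: minimize alpha, cf. (1.41).
objective = lambda z: z[2]
constraints = [{"type": "ineq",
                "fun": (lambda z, i=i: z[2] - f[i](z[:2]) / alpha_d[i])}
               for i in range(len(f))]           # alpha >= fi(x)/alpha_d_i
bounds = [(-5.0, 5.0), (-5.0, 5.0), (0.0, None)]  # box on x, alpha >= 0

res = minimize(objective, x0=np.zeros(3), method="SLSQP",
               bounds=bounds, constraints=constraints)
print("x* =", res.x[:2], "alpha* =", res.x[2])
```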

In the context of control system design, the functions fi(x, θ0) given in (1.39) can be various performance measures taken from the system step response, say the overshoot and the settling time, and the design variables xi are the tuning parameters of the controller. In this case, solving the optimization problem (1.41) means finding the controller parameters ensuring that the system step response has the smallest overshoot and the shortest settling time while satisfying the design goals.

Robust Design. Until now, it has been assumed that the parameters (i.e., the problem data) which enter the formulation of the design problem are precisely known. However, in many practical applications some of these parameters are subject to uncertainties. It is then important to be able to compute solutions that are insensitive to parameter uncertainties; this leads to the notion of optimal robust design. We say that a design is robust if the various specifications (i.e., the constraints) are satisfied for a set of values of the uncertain parameters. Usually, it can be realistically assumed that θ lies in a bounded set Θ defined as follows:

Θ = {θ ∈ Rnθ : θ̲ ⪯ θ ⪯ θ̄}     (1.42)

where the vectors θ̲ = (θ̲1, · · · , θ̲nθ)T and θ̄ = (θ̄1, · · · , θ̄nθ)T are the uncertainty bounds of the parameter vector θ. Thus, the uncertain vector belongs to the nθ-dimensional hyperrectangle Θ, also called the parameter box. The robust version of the optimization problem (1.41) is then written as follows

minimize_{α, x}   α
subject to        fi(x, θ)/αdi ≤ α, i = 1, · · · , nf, ∀θ ∈ Θ
                  x ∈ {x ∈ Rnx : x̲ ⪯ x ⪯ x̄}     (1.43)
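One crude but instructive way to approach the semi-infinite constraint ∀θ ∈ Θ of (1.43) is to sample the parameter box and impose the worst case over the samples. The sketch below does exactly this for a single invented performance function; it is only an approximation of (1.43), not one of the methods developed later in the book.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative uncertain performance function and design objective (made up).
f1 = lambda x, th: (x[0] - th)**2 + 0.1
alpha_d1 = 1.0

# Crude approximation of "for all theta in Theta": sample the parameter box.
theta_samples = np.linspace(0.8, 1.2, 21)  # Theta = [0.8, 1.2]

# Worst-case constraint: alpha >= max over sampled theta of f1(x, theta)/alpha_d1.
# z = (x1, alpha); COBYLA is used because the max makes the constraint nonsmooth.
worst_case = lambda z: z[1] - max(f1(z[:1], th) / alpha_d1 for th in theta_samples)

res = minimize(lambda z: z[1], x0=np.array([0.0, 1.0]), method="COBYLA",
               constraints=[{"type": "ineq", "fun": worst_case}])
print("x* =", res.x[0], "alpha* =", res.x[1])
```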


The formulation (1.43) is quite general and can be applied to a broad class of engineering problems, including the synthesis of robust structured controllers, which is the subject of this book. Unfortunately, there are several obstacles to solving this kind of problem efficiently. The main obstacle is that most optimization problems of this kind are NP-hard (see the Notes and References for a short introduction to complexity). Therefore, the known theoretical methods cannot be applied, except possibly for some small-size problems (i.e., problems with a small number of decision variables). Other difficulties arise when some of the functions fi are not differentiable and/or non-convex. In this case, methods requiring the derivatives of the functions fi cannot be used, and reaching a global optimum cannot be guaranteed within a reasonable computation time. Another obstacle arises when some qualities of a design cannot be expressed in analytic form; in this case, these qualities can only be evaluated through simulations. In these situations, probabilistic approaches seem to be a good way of solving this kind of optimization problem. By probabilistic approach, we mean a computational method employing experimentation, evaluation, and trial-and-error procedures in order to obtain an approximate solution to computationally difficult problems. This is the approach adopted in this book, whose main objectives are presented in the next section.

1.4 Main objectives of the book

The main objective of this book is to develop simple and user-friendly design methods for robust structured controllers satisfying multiple performance specifications. We call this the multi-objective robust structured control design problem, which can be stated as follows. Given an uncertain system, denoted by G, and a structured controller K(κ) parameterized by κ ∈ D = {κ ∈ Rnκ : κ̲ ≤ κ ≤ κ̄}, find an appropriate setting κ = κ∗ such that the interconnection between G and K(κ), denoted [G, K(κ)], forms a satisfactory design, i.e., satisfies a given set of performance specifications

φi([G, K(κ)]) ≤ αdi , i = 1, · · · , nφ, κ ∈ D (1.44)

As seen above, the optimal version of this design problem is formulated as follows

minimize_{α, κ}   α
subject to        φi([G, K(κ)])/αdi ≤ α, i = 1, · · · , nφ
                  κ ∈ D = {κ ∈ Rnκ : κ̲ ≤ κ ≤ κ̄}     (1.45)

To solve this problem, we have to define:

• the uncertain system G,

• the structured controller K(.),


• the interconnection [G, K],

• the performance measures φi(.),

• and an appropriate algorithm so as to find an acceptable solution to theproblem (1.45).

These various points are considered in some detail in this book and are now briefly presented. For ease of reading, the parts of the book dedicated to these issues are also indicated.

1.4.1 The Uncertain System G

The design of a controller requires a mathematical description, called a model, of the system to be controlled. This model is always obtained at the price of approximations and idealizations, and so it provides only an approximate description of the system behavior. For instance, the mathematical description of the system is practically always assumed to be linear and time invariant8. However, linear models often constitute a gross oversimplification of the real system, and so we have to take into account the uncertainties resulting from this simplification. Two kinds of model uncertainties are usually considered: parametric uncertainties, also called structured uncertainties9, and dynamical uncertainties, also known as unstructured uncertainties10. Formally, an uncertain system can be seen as a set of systems defined as

G = {G(∆) : ∆ ∈ D∆} (1.46)

where G(.) represents a linear input/output mapping, obtained from a mathematical modeling of the system under consideration. This mapping is parameterized by ∆ ∈ D∆, where D∆ can be a set of parameter vectors reflecting the uncertainty of the system parameters, and/or a set of stable systems reflecting the neglected dynamics. Thus, for every ∆ ∈ D∆, we have a particular system model denoted G(∆). The so-called nominal model is given by G(θ0), where θ0 is the nominal parameter vector. The details of the construction of the uncertain model G are discussed in chapter 6.

8 This is very usual in almost all areas of applied sciences and engineering, because the theory of linear systems is well developed and is usually very good for sufficiently small variations of the system variables around an operating point.
9 Structured uncertainties can be due to a change of the system operating point or to the fact that the values of certain physical parameters (such as resistance, inductance, mass values, etc.) are not precisely known.
10 Nonparametric uncertainties are essentially due to unmodelled or neglected fast dynamics. The fast dynamics are often neglected to simplify a model of high order by retaining only the slow dynamics, also called the dominant modes of the system.


Another source of uncertainty comes from the undesirable but inevitable effects of the environment on the system. This kind of uncertainty is called a disturbance. For example, the ambient temperature of a thermal process is a disturbance because it affects the temperature in the system due to imperfect insulation. At this point, it must be stressed that the main objective of a control system is to make the system relatively insensitive to disturbances and to model uncertainties. This can be done by designing an appropriate controller for the system. In this book, we focus on the design of structured controllers (see chapter 3 section 3.5).

1.4.2 Structured Controller K

In this book, the controllers considered are themselves linear time invariant (LTI) systems, except in chapter 9 where we will consider the design of a nonlinear controller obtained by aggregation of linear controllers.

Imposing that the controller be an LTI system is in itself a structural constraint, i.e., a constraint on the mathematical form of the controller. This constraint is justified because we want to remain in the context of linear systems, for which the design problems are usually much more tractable than nonlinear design problems. In this book, a linear controller is called structured when its order and its mathematical form are imposed by the designer (see chapter 3 section 3.5).

This requirement is very important for many practical reasons. For example, it can be imposed that the controller has a specific form (e.g., a PID); the controller may have to be distributed, i.e., not localized at a single place; the available memory and computational power of the processor used to implement the controller may be limited, which imposes the design of a low-order controller, i.e., one of low complexity; etc. A more detailed presentation of structured controllers is given in chapter 3 section 3.5, and some design examples of robust structured controllers are presented in chapters 7 and 8. In the sequel, we will denote by K a structured controller and by κ = vect(K) the controller parameter vector.

1.4.3 Interconnection [G, K]

The basic goal of a control system is to ensure that the system output yS follows a reference input r as closely as possible. To this end, there are two possible interconnections between the system and the controller that are of practical interest: the open-loop interconnection, presented in figure 1.8-a, and the closed-loop interconnection, shown in figure 1.8-b.


Fig. 1.8: The two basic interconnections: open- and closed-loop control. Open-loop control is unable to make the system less sensitive to disturbances. Closed-loop control utilizes the system output yS to compute the control signal u. This results in a control system that is much less sensitive to disturbances and model uncertainties.

In the case of an open/closed-loop interconnection, the controller K(.) is called an open/closed-loop controller, and the resulting control scheme is called open/closed-loop control. Open-loop control is the simplest form of control; the control signal u is computed by the controller using only the reference signal r, which represents the desired system output (see figure 1.8-a). The actual situation of the system is not taken into account with this kind of control. As a result, an open-loop control system is very sensitive to the effect of the disturbances d acting on the system and to its model uncertainties. An open-loop controller is often used in simple processes because of its simplicity and low cost, especially in systems where feedback is not critical.

In cases where high performance is required, the use of a closed-loop controller is inevitable. Indeed, only feedback control is able to ensure that the controlled system behaves as desired despite the inevitable presence of model uncertainties and disturbances. This is possible because the control signal is computed not only from the reference signal but also from the actual system output yS (see figure 1.8-b). Consequently, if appropriately designed, the controller is able to compensate for the effect of disturbances and is not overly sensitive to the uncertainties of the model used for its design. In this book, only closed-loop control is considered, and a more general framework, known as the standard control problem, is presented in chapter 3 section 3.2.

1.4.4 Performance Specifications

A fundamental requirement of any control system is stability. Stability means that in the absence of external input signals, all system variables tend


to zero (see chapter 2 section 2.2.1). In the case of instability, the system variables diverge to infinity, thus causing damage to the system. Therefore, the stability of the closed-loop system is an absolute necessity. It must be stressed that, due to the model uncertainties, stability must hold for every system in G defined by (1.46). In other words, the closed-loop system must be stable over the uncertainty set D∆. This is called robust stability11 (see chapter 6 section 6.5.1).

Robust stability is an essential requirement but is insufficient for practical applications. It is also required that the closed-loop system satisfy some additional design goals, including performance specifications and constraints on the structure of the controller. We denote by φi([G, K]) a given performance measure of the closed-loop system [G, K]. This function allows evaluating a certain quality of the closed loop, such as disturbance rejection, tracking error, degree of stability, etc. In chapter 3, some detailed examples of performance measure functions are presented. We denote by φi([G, K]) ≤ αdi a performance specification, where αdi is the specification fixed by the designer. The quantity αdi can be seen as the maximum tolerated value of the performance measure φi. According to what was mentioned above, the multi-objective robust structured control design problem is then formulated as follows

minimize_{α, κ}   α
subject to        φi([G, K])/αdi ≤ α, i = 1, · · · , nφ
                  K ∈ KG     (1.47)

where KG denotes the set of structured controllers that stabilize the uncertain system, that is

KG = {K : [G, K] is stable, κ = vect(K) ∈ D} (1.48)

Note that, unlike (1.45), this formulation makes explicit the constraint relative to robust stability. This constraint is not negotiable and must absolutely be satisfied. Problem (1.47) can also be rewritten as follows

minimize_κ   max_{1≤i≤nφ} {φi([G, K])/αdi}
subject to   K ∈ KG     (1.49)

The nominal design of the structured controller corresponds to the case where G is reduced to the nominal system, that is, G = {G(θ0)}. Therefore, the nominal design can be seen as a relaxed version of the robust design problem, and is thus more tractable.

11 The necessity of robust stability has been recognized as a fundamental requirement since the beginning of automatic control, and has been achieved via the now classical gain and phase margins.


1.4.5 Algorithms for Finding an Acceptable Solution

Problem (1.49) is in general very difficult to solve because, even in the nominal case, it is a non-convex and non-smooth optimization problem (see chapter 3 section 3.5). If the robust design is considered, the problem is in addition semi-infinite. This means that the optimization problem has a finite number of optimization variables and an infinite number of constraints. Therefore, the robust design is much more difficult to solve than the nominal case, which is itself very difficult when constraints on the structure of the controller must be satisfied.

In this book, to deal with these various difficulties, we make use of stochastic (or probabilistic) optimization methods. As will be seen later on, this kind of approach is indeed well adapted to finding an acceptable solution12 to problems that are non-convex, non-smooth, and semi-infinite.

Several stochastic methods, also called metaheuristics, have been developed over the last decades, and have demonstrated a strong ability to solve problems that were previously difficult or impossible to solve. These metaheuristics include Simulated Annealing (SA), Genetic Algorithms (GA), and Particle Swarm Optimization (PSO), to cite only the most widely used in the framework of continuous optimization problems (see chapter 4). The main characteristic of these approaches is the use of a stochastic mechanism to seek a solution. From a general point of view, the use of a stochastic search procedure seems in fact unavoidable for finding a promising solution to non-convex, non-smooth optimization problems. In this book, an alternative probabilistic optimization method, called the Heuristic Kalman Algorithm (HKA), will be introduced (see chapter 5). Throughout the book, HKA will be used to find an acceptable solution to the robust structured control design problem (see chapters 7, 8, and 9). This is not a limitation, because any other probabilistic approach could be used to solve the problems considered in this book. However, a significant advantage of HKA over other stochastic methods lies mainly in the small number of parameters which have to be set by the user. This property results in a user-friendly algorithm.

12 By acceptable solution, we mean a setting that satisfies the design goals but whichis not necessarily the global optimum of the optimization problem.
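To fix ideas, the sketch below shows the simplest member of this family, a pure random search over the box D; it is not HKA (which is introduced in chapter 5), but it illustrates the evaluate-and-retain mechanism that all these stochastic methods share. The cost function is an arbitrary non-convex, non-smooth stand-in for the worst-case performance measure of (1.49).

```python
import numpy as np

def random_search(cost, lower, upper, n_iter=5000, seed=1):
    """Pure random search over the box {x : lower <= x <= upper}.

    Illustrative stand-in for a stochastic optimizer; returns the best
    sample found and its cost (an "acceptable solution", not a certified
    global optimum).
    """
    rng = np.random.default_rng(seed)
    best_x, best_c = None, np.inf
    for _ in range(n_iter):
        x = rng.uniform(lower, upper)  # candidate drawn in the box D
        c = cost(x)                    # evaluate the performance measure
        if c < best_c:                 # retain the best candidate so far
            best_x, best_c = x, c
    return best_x, best_c

# Arbitrary non-convex, non-smooth cost standing in for max_i phi_i/alpha_d_i.
cost = lambda k: max(abs(k[0] - 1.0), abs(k[1] + 0.5)) + 0.1 * np.sin(5 * k[0])**2
x, c = random_search(cost, lower=np.array([-2.0, -2.0]), upper=np.array([2.0, 2.0]))
print("best kappa:", x, "cost:", c)
```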


1.5 Notes and references

The origins of optimization can be traced back to Newton, Lagrange, and Cauchy13. The invention of differential and integral calculus by Fermat, Newton, and Leibniz made possible the development of methods for solving optimization problems. Constrained optimization was first considered by Lagrange, and the notion of steepest descent was introduced by Cauchy. The real development of numerical optimization became possible only from the middle of the twentieth century, with the advent of digital computers that allowed the implementation of algorithms for solving optimization problems. This period also marks the transition from classical control to modern control theory, characterized by the state space approach and the use of optimization techniques.

General non-linear optimization

Many excellent textbooks can be found on the subject of optimization; the references hereafter do not claim to be exhaustive. Local methods for nonlinear programming are covered in the books by Gill, Murray, and Wright [60], Luenberger [88], Nocedal and Wright [102], Bertsekas [22], Fletcher [51], and Sun and Yuan [128]. Global optimization is considered in the books by Pinter [108], Horst, Pardalos and Thoai [71], and Hendrix and Toth [67]. Optimization with a strong orientation towards engineering design is covered in the books by Rao [112], Deb [43], and Arora [6].

Computational complexity

Consider a given optimization problem, denoted P, and an algorithm, denoted A, designed to solve the problem P. We say that an algorithm runs in polynomial time (or is a polynomial-time algorithm) if it can solve the optimization problem P with a number of operations that grows no faster than a polynomial in the problem dimensions. Polynomial-time algorithms are considered efficient, and problems for which polynomial-time algorithms exist are considered “easy”.

The theory of computational complexity classifies problems according totheir inherent tractability or intractability, i.e., whether they are “easy” or“hard” to solve. This classification includes the well-known classes14 P andNP.

13 Some optimization problems were considered by the ancient Greek mathematicians. For instance, Euclid (300 BC) considered the minimal distance between a point and a line, and proved that a square has the greatest area among the rectangles with given total edge length.
14 The notation P stands for polynomial time; the notation NP stands for non-deterministic polynomial time. This means that the algorithm consists of two phases: in the first phase, a candidate solution is generated in a non-deterministic way, while the second phase consists of a deterministic algorithm which verifies or rejects the candidate solution as a valid solution to the problem.


The class P is defined as the set of problems for which there exists a polynomial-time algorithm. Therefore, P is considered as the set of problems that are easy to solve. The class NP is defined as the set of problems for which the validity of a candidate solution can be verified in polynomial time. If a problem is at least as hard as the hardest problems in the class NP, then we say that this problem is NP-hard. An NP-hard problem is usually considered an “intractable problem”.

The problem of complexity is covered in the books by Garey and Johnson [57] and Papadimitriou [104]. It has been shown that many problems arising in automatic control are NP-hard; see for instance the paper by Blondel and Tsitsiklis [24]. In particular, it has been proved that the static output feedback stabilization problem is NP-hard if one constrains the coefficients of the controller to lie in prespecified intervals. This is precisely the case when designing a structured controller.

Convex Optimization

The theory of convex optimization is much more developed and complete than the theory of general nonlinear optimization. Nesterov and Nemirovski [101] were the first to show that interior-point methods can solve convex optimization problems efficiently, i.e., in polynomial time15. This result confirms the observation made by Rockafellar [113] that “the great watershed in optimization is not between linearity and nonlinearity, but convexity and non-convexity”. The interesting thing is that a large variety of problems of practical interest can be formulated as convex optimization problems [14]. Convex optimization is very well covered in the book by Boyd and Vandenberghe [18], which includes many applications of great practical interest. Applications of convex optimization in control theory can be found in the books by Boyd and Barratt [16] and Boyd, El Ghaoui, Feron, and Balakrishnan [17].

Stochastic optimization

Simulated annealing, genetic algorithms, and particle swarm methods represent a new class of mathematical programming techniques that have gained importance over the last two decades. Simulated annealing is analogous to the physical process of solid annealing. Genetic algorithms are search techniques based on the principle of natural selection. Particle swarm optimization is a search technique based on the principle of collective behavior. A survey of

15 This is in contrast to the situation of non-convex optimization problems, for which the known algorithms require, in the worst case, a number of operations that is exponential in the problem dimensions.


these approaches can be found in [138]; see also the Notes and References of chapter 4.