
Mixed L1/H2/H∞ Control Design: Numerical Optimization Approaches*


Daniel U. Campos-Delgado† (College of Sciences, Autonomous University of San Luis Potosi, Mexico, [email protected])
Kemin Zhou (Dept. of Elec. & Comp. Eng., Louisiana State University, Baton Rouge, Louisiana 70803, USA, [email protected])

February 2002

*This research was supported in part by grants from AFOSR (F49620-99-1-0179), LEQSF, NASA, CONACYT and FAI (UASLP).
†Corresponding author. Address: Apartado Postal 3-05, C.P. 78231, San Luis Potosi, S.L.P., Mexico. Tel: +52(444)826-2316, Fax: +52(444)826-2321
Abstract
This paper presents some new approaches to mixed performance control problems of linear systems.
The design techniques proposed in this paper are based on a numerical search for the norm-bounded
stable transfer matrix Q in the H∞ and H2 suboptimal controller parameterizations so that the
additional performance specifications are satisfied. The design problems are then converted to some
finite dimensional nonlinear unconstrained optimization problems by explicitly parameterizing the
H∞ or H2 norm bounded stable transfer matrix Q for any fixed order. Finally, some two-stage
optimization algorithms are applied to find the optimal parameters. Numerical examples have
shown significant performance improvements of the proposed algorithms over those in the existing
literature.
1 Introduction
The fundamental objective of a feedback control system design is to achieve the desired performance
despite model uncertainties and external disturbances. It is well known in the control community
that there are intrinsic conflicts between achievable performance and system robustness. A
well-thought-out control system design makes suitable tradeoffs between performance and system
robustness. It is therefore desirable to develop design techniques that can perform such performance
and robustness tradeoffs optimally and systematically, and it is not surprising that
multiobjective (or mixed performance) optimal control has become a crucially important research
area in the last decade or so.
Many approaches have been proposed in the literature to solve mixed performance problems.
An overview of various approaches to multi-objective design is presented by Vroemen and de Jager
(1997). It is impossible to review all approaches and works related to mixed performance
problems here, so only the most closely related work is described. Bernstein and Haddad (1989)
first proposed the mixed H2/H∞ problem as a way to formulate a meaningful optimization
counterpart to the standard H∞ control problem, using the Lagrange multiplier method. Later on,
Khargonekar and Rotea (1991), and Halder et al (1997) addressed the mixed H2/H∞ problem and
converted the state feedback mixed design into a convex optimization problem. A time domain
signal formulation for a dual mixed H2/H∞ problem was presented and characterized by Doyle
et al (1994) and Zhou et al (1990,1994). In a different direction, Limebeer et al (1994) and Chen
and Zhou (2001) considered Nash game approaches to these mixed problems. Solutions based on
linear matrix inequalities (LMI) to multi-objective problems have also been proposed for output
feedback control by Oliveira et al (1999), and Scherer et al (1997). In this design, the objectives
are formulated in terms of a common Lyapunov function. However, this formulation tends to
be very conservative. Clement and Duc (2000) presented an extension to this method in order
to use different Lyapunov functions for each design objective. Hindi et al (1998), and Scherer
(1995, 2000) proposed to combine LMIs with the Youla parameterization in order to search for
the optimal Youla parameter in a finite dimensional space. Similarly, Qi et al (2001) proposed
finite dimensional approximations for the Youla parameter in order to solve mixed problems with
time-domain constraints. Thus, an optimal solution is reached by approximating it from below
and above. Due to its physical interpretation, the mixed H2/H∞ design has also been applied to
the optimal filtering problem, e.g. (Khargonekar et al 1996, Rotstein et al 1996). Multi-objective
H2/H∞ problems have also been characterized in terms of their duality description (Djouadi et al
2001). Furthermore, Chen et al (2000) extended the H2/H∞ design to nonlinear systems by using
a fuzzy output feedback controller.
Another mixed problem, the L1/H∞ control problem, was synthesized via convex optimization of finite
dimensional approximations by Sznaier and Bu (1998). Haddad et al (1998) revisited the L1/H∞ problem, but with a fixed-structure controller, and optimality conditions were derived for
its solution. Sznaier and Blanchini (1994) proposed the design of rational mixed L1/H∞ suboptimal
controllers by solving a sequence of finite dimensional auxiliary problems. On the other hand,
solutions to the H2/L1 problems were presented by Amishima et al (1998), Voulgaris (1994), and
Wu and Chu (1999) where quadratic programming problems were introduced to reach a solution. In
addition, Salapaka and Khammash (1998) and Salapaka et al (1999) approached the H2/L1 problem
by obtaining upper and lower bound convergence methods that involve the Q parameter in the Youla
parameterization. Sznaier and Amishima (1998) studied the H2 problem with time constraints
and suggested approximations to the optimal solutions by solving some quadratic programming
problems. In general, most approaches presented in the literature do not solve the true mixed
performance problem directly; instead, they optimize an upper bound of the true performance or use
a related performance criterion, since the original mixed performance problem is a highly complex
and nonlinear constrained optimization problem.
In recent years, evolutionary schemes have been used extensively to solve nonlinear constrained
optimization problems in which multiple local minima can prevent global convergence. Evolutionary
schemes are inspired by natural selection, where the stronger organisms are more likely to
survive over generations. Thus, an analogy can be drawn with an optimization problem in which
the evolution period corresponds to the optimization time and the fittest organism in the
population represents the optimal solution. Two evolutionary schemes, evolution algorithms and
genetic algorithms, are the most commonly used. These algorithms have two main characteristics:
a multi-directional (random) search and an information exchange among the best solutions. These
properties can generate new search directions that help avoid local minima. Applications of
genetic algorithms to control and signal processing have been reported in the literature: digital IIR
filter design (Man et al 1999), adaptive recursive filtering (White and Flockton 1997), active noise
control (Tang et al 1995), system model reduction (Li et al 1997), weighting function design for
H∞ loop-shaping (Whidborne et al 1995), etc.
Motivated by these successful applications of evolutionary algorithms, it seems natural to apply
such algorithms, for example genetic algorithms, to the multiobjective optimization problems
mentioned above. Nevertheless, these algorithms are unlikely to produce reasonable results if
applied blindly, since the optimization parameter spaces are too large. In this paper, we propose
design techniques that explicitly parameterize the free transfer matrix Q in the H∞ and
H2 suboptimal controller parameterizations so that the optimization parameter spaces are highly
restricted; evolutionary algorithms such as genetic/evolution algorithms can then be applied
effectively to produce the desired results. In addition to the computational advantage, the proposed
technique may produce much lower order controllers than those obtained with the Youla
parameterization and convex optimization.
The rest of the paper is organized as follows. First, notations and some definitions used in the
paper are presented in Section 2. Next, the suboptimal H∞ and H2 controller parameterizations are
introduced in Section 3. Section 4 gives some explicit parameterizations of norm bounded H∞ and
H2 functions. In Section 5, various multi-objective design problems are presented and optimization
schemes are proposed. Section 6 outlines a two-stage optimization scheme that will be applied in
Section 7 to solve those multi-objective problems. Finally, some numerical examples are shown in
Section 7 and the paper is concluded with some remarks in Section 8.
2 Notations and Definitions
Let G(s) be a MIMO transfer matrix with the following state-space realization

G(s) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] := C(sI - A)^{-1}B + D.  (1)

Let H2 denote the space of all strictly proper and stable transfer matrices. The H2 norm is defined as

\|G\|_2^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{Trace}\left[G(j\omega)^{*}G(j\omega)\right]\, d\omega.  (2)

It can be computed by using the state-space representation of G: for a stable and strictly proper
G (i.e. A is stable and D = 0), we have

\|G\|_2^2 = \mathrm{Trace}(B^T Q B) = \mathrm{Trace}(C P C^T)  (3)

where P and Q are the controllability and observability Gramians, which can be obtained by solving
the following Lyapunov equations:

AP + PA^T + BB^T = 0, \qquad A^T Q + QA + C^T C = 0.  (4)

Let RH∞ denote the space of all proper and real rational stable transfer functions. The H∞ norm
is defined as

\|G\|_\infty = \sup_{\omega} \bar{\sigma}[G(j\omega)].  (5)

The L1 norm of a stable transfer matrix G(s) = [g_{ij}(s)] is defined as

\|G\|_1 = \max_{1 \le i \le p} \sum_{j=1}^{m} \|g_{ij}\|_1  (6)

where

\|g_{ij}\|_1 = \int_0^\infty |g_{ij}(t)|\, dt  (7)

and g_{ij}(t) is the impulse response of g_{ij}(s).
Consider a feedback system described by the block diagram in Figure 1, where the generalized plant
G and the controller K are assumed to be real rational and proper, with y(t) ∈ R^{p2} and u(t) ∈ R^{m2}.
Let G be partitioned accordingly as

G = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix} = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array}\right].  (8)

Then the transfer function from w to z is given by

T_{zw} = F_l(G, K) = G_{11} + G_{12}K(I - G_{22}K)^{-1}G_{21} = \left[\begin{array}{c|c} A_{cl} & B_{cl} \\ \hline C_{cl} & D_{cl} \end{array}\right]  (9)

and F_l(·, ·) is called a lower linear fractional transformation.
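As a numerical illustration of (5) and (9) (Python with NumPy, not part of the paper), the lower LFT can be evaluated pointwise on a frequency grid and the H∞ norm approximated by the largest singular value over that grid; the grid maximum only approximates the true supremum.

```python
import numpy as np

def lft_lower(G11, G12, G21, G22, K):
    """Lower LFT Fl(G, K) = G11 + G12 K (I - G22 K)^{-1} G21, evaluated at one frequency."""
    I = np.eye(G22.shape[0])
    return G11 + G12 @ K @ np.linalg.solve(I - G22 @ K, G21)

def hinf_norm_on_grid(samples):
    """Approximate ||G||_inf = sup_w sigma_max[G(jw)] by the maximum over frequency samples.

    samples: array of shape (N, p, q) holding G(jw_k) for k = 1..N."""
    return max(np.linalg.svd(Gk, compute_uv=False)[0] for Gk in samples)

# Example: ||1/(s+1)||_inf = 1, approximated on a logarithmic frequency grid.
w = np.logspace(-3, 3, 400)
G = (1.0 / (1j * w + 1.0)).reshape(-1, 1, 1)
print(hinf_norm_on_grid(G))   # approximately 1.0
```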
3 H∞ and H2 Suboptimal Controller Parameterizations
Consider the generalized feedback system described in Figure 1 with the generalized plant G given
by (8) under some suitable assumptions. It is well known that all stabilizing controllers K(s)
satisfying the suboptimal H∞ condition ‖Tzw‖∞ < γ, for a given γ > 0, can be parameterized as
K = F_l(M∞, Q) with Q ∈ RH∞, ‖Q‖∞ < γ, where

M_\infty = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}  (10)

is constructed from the solutions of two Riccati equations (Doyle et al 1989, Zhou et al 1996, Zhou
and Doyle 1998). In the case Q = 0, the solution K = M11 is called the central controller. Note
that there is no guarantee that M∞ is itself stable even though the closed-loop system is stable.
M∞ and Q are (p2 + m2) × (p2 + m2) and m2 × p2 transfer matrices, respectively.
To consider the H2 optimal control problem, we shall assume for simplicity that D11 = 0 and
D22 = 0. The case D22 ≠ 0 can be dealt with easily. It is also well known that all stabilizing
controllers for the generalized plant G can be written as K = F_l(M2, Q) with

M_2 = \left[\begin{array}{c|cc} A_2 & -L_2 & B_2 \\ \hline F_2 & 0 & I \\ -C_2 & I & 0 \end{array}\right]  (11)

where A2 = A + B2F2 + L2C2, and F2 and L2 can be constructed from the solutions of two related
Riccati equations, see (Zhou et al 1996, Zhou and Doyle 1998). It is then clear that the closed-loop
H2 norm is given by

\|T_{zw}\|_2^2 = \|G_c B_1\|_2^2 + \|F_2 G_f\|_2^2 + \|Q\|_2^2  (12)

where the definitions of the transfer matrices Gc and Gf can be found in (Zhou et al 1996, Zhou and
Doyle 1998). Consequently, Q = 0 gives the optimal solution of the H2 problem. As in the H∞
parameterization, M2 and Q are (p2 + m2) × (p2 + m2) and m2 × p2 transfer matrices, respectively.

Let γopt² = ‖GcB1‖2² + ‖F2Gf‖2². Then for any γ > γopt, all suboptimal H2 controllers satisfying
‖Tzw‖2 < γ can be parameterized as K = F_l(M2, Q) with Q ∈ RH2 and ‖Q‖2² < γ² − γopt².
4 Parameterizations of H∞ and H2 Norm Bounded Functions
It is now clear that if a controller is required to satisfy both the H∞ norm constraint ‖Tzw‖∞ < γ
and some additional performance objectives, it has to come from the family of H∞ controllers
parameterized in the last section. In other words, a stable Q with ‖Q‖∞ < γ must be found
to satisfy the additional performance objectives. Similar observations can be made for problems
involving H2 performance objectives. To find a suitable Q ∈ RH∞ with ‖Q‖∞ < γ, or a Q ∈ RH2
with ‖Q‖2 < √(γ² − γopt²), it is desirable to have more explicit characterizations of these norm-bounded
analytic functions that are appropriate for numerical optimization.

Steinbuch and Bosgra (1991) presented a parameterization for an H∞ norm-bounded strictly proper
and stable transfer matrix. That result was extended to the proper case (Campos-Delgado and Zhou 2001)
in the following lemma.
Lemma 1 Let γ > 0 and let Q be a stable transfer matrix of degree nq with ‖Q‖∞ < γ. Then Q
can be represented as

Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & D_q \end{array}\right]

where A_{q_k} = -A_{q_k}^T ∈ R^{nq×nq}, Bq ∈ R^{nq×p2}, Cq ∈ R^{m2×nq}, Dq ∈ R^{m2×p2} with σ̄(Dq) < γ, and

A_{q_s} = -\frac{1}{2}\left[ B_q R^{-1} D_q^T C_q + C_q^T D_q R^{-1} B_q^T + B_q R^{-1} B_q^T + C_q^T \left(I + D_q R^{-1} D_q^T\right) C_q \right]  (13)

where

R = \gamma^2 I - D_q^T D_q.  (14)

Proof: Assume that Q = \left[\begin{array}{c|c} A_q & B_q \\ \hline C_q & D_q \end{array}\right] is an nq-th order observable realization and ‖Q‖∞ < γ. Then, according to the Bounded Real Lemma (Zhou et al 1996), σ̄(Dq) < γ and there exists Y > 0 such that

\left(A_q + B_q R^{-1} D_q^T C_q\right)^T Y + Y \left(A_q + B_q R^{-1} D_q^T C_q\right) + Y B_q R^{-1} B_q^T Y + C_q^T \left(I + D_q R^{-1} D_q^T\right) C_q = 0  (15)

with R as in (14). Since Y > 0, there exists a Cholesky factorization Y = T^T T. Now T is
invertible and can be used as a similarity transformation on Q:

Q = \left[\begin{array}{c|c} T A_q T^{-1} & T B_q \\ \hline C_q T^{-1} & D_q \end{array}\right].  (16)

Relabeling the transformed realization as (Aq, Bq, Cq, Dq), equation (15) with Y = I becomes

A_q + A_q^T + B_q R^{-1} D_q^T C_q + C_q^T D_q R^{-1} B_q^T + B_q R^{-1} B_q^T + C_q^T \left(I + D_q R^{-1} D_q^T\right) C_q = 0.  (17)

Furthermore, Aq can be decomposed into a symmetric part A_{q_s} and a skew-symmetric part A_{q_k},
where

A_{q_s} = (A_q + A_q^T)/2, \qquad A_{q_k} = (A_q - A_q^T)/2.  (18)

Consequently, the skew-symmetric part A_{q_k} drops out of (17) and the result in (13) is obtained. □

Note that when Dq = 0, we have (Steinbuch and Bosgra 1991)

A_{q_s} = -\frac{1}{2}\left( B_q B_q^T/\gamma^2 + C_q^T C_q \right).  (19)
On the other hand, using the definition of the H2 norm given by (3) and (4), a parameterization
for all Q ∈ RH2 follows (Campos-Delgado and Zhou 2001).

Lemma 2 Assume that Q ∈ RH2 has degree nq. Then Q can be represented in the following form:

Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & 0 \end{array}\right], \qquad A_{q_s} = -\frac{1}{2} C_q^T C_q, \qquad \|Q\|_2^2 = \mathrm{Trace}(B_q^T B_q)  (20)

where A_{q_k} = -A_{q_k}^T ∈ R^{nq×nq}, Bq ∈ R^{nq×p2}, Cq ∈ R^{m2×nq}.

Proof: Assume that Q = \left[\begin{array}{c|c} A_q & B_q \\ \hline C_q & 0 \end{array}\right] is an nq-th order observable realization. Then, according to (3) and (4), there exists Y > 0 such that

A_q^T Y + Y A_q + C_q^T C_q = 0.  (21)

Since Y > 0, there exists a Cholesky factorization Y = T^T T. Now T is invertible and can be
used as a similarity transformation on Q:

Q = \left[\begin{array}{c|c} T A_q T^{-1} & T B_q \\ \hline C_q T^{-1} & 0 \end{array}\right].  (22)

Relabeling the transformed realization as (Aq, Bq, Cq), equation (21) with Y = I becomes

A_q + A_q^T + C_q^T C_q = 0.  (23)

Moreover, Aq can be decomposed into a symmetric part A_{q_s} and a skew-symmetric part A_{q_k} as in (18).
Consequently, the skew-symmetric part A_{q_k} drops out of (23), and it is obtained that

\|Q\|_2^2 = \mathrm{Trace}(B_q^T B_q), \qquad A_{q_s} = -\frac{1}{2} C_q^T C_q.  (24)  □

Thus, if Q is constructed according to the previous lemma, then ‖Q‖2 < β is equivalent to
Trace(Bq^T Bq) < β².
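A corresponding numerical check of Lemma 2 (Python with NumPy/SciPy, illustrative only): with A_{q_s} = −C_q^T C_q/2 the observability Gramian of the constructed realization is the identity, so the H2 norm computed from (3)-(4) coincides with Trace(B_q^T B_q). The random data are hypothetical and generically give a Hurwitz A_q.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def build_q_h2(Aqk, Bq, Cq):
    """Assemble a strictly proper Q per Lemma 2: Aq = Aqk - Cq^T Cq / 2, Dq = 0."""
    return Aqk - 0.5 * Cq.T @ Cq, Bq, Cq

def h2_norm_sq(A, B, C):
    """||Q||_2^2 = Trace(B^T Y B), with Y the observability Gramian from (4)."""
    Y = solve_continuous_lyapunov(A.T, -C.T @ C)
    return np.trace(B.T @ Y @ B)

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
Aqk = 0.5 * (M - M.T)                       # skew-symmetric free part
Bq = rng.standard_normal((3, 2))
Cq = rng.standard_normal((1, 3))
Aq, Bq, Cq = build_q_h2(Aqk, Bq, Cq)
print(h2_norm_sq(Aq, Bq, Cq), np.trace(Bq.T @ Bq))   # the two values agree
```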
5 Mixed L1/H2/H∞ Performance Problems
Several mixed-objective control problems will be presented in this section. Their solutions depend
critically upon the H∞ and H2 controller state-space parameterizations presented in the previous
sections. That is, the parameters of the transfer matrix Q will be used to optimize the performance
indices and satisfy the constraints. Note that since the controllers are based on the H2 and H∞
parameterizations, internal stability of the closed loop is always guaranteed. Assume that the
degree of Q is fixed to nq. Then, according to the dimensions of the generalized plant (8), the
number of free variables in each component of Q is given by

A_{q_k} = -A_{q_k}^T \in R^{n_q \times n_q} \;\Rightarrow\; (n_q - 1)n_q/2  (25)
B_q \in R^{n_q \times p_2} \;\Rightarrow\; n_q \times p_2  (26)
C_q \in R^{m_2 \times n_q} \;\Rightarrow\; m_2 \times n_q  (27)
D_q \in R^{m_2 \times p_2} \;\Rightarrow\; m_2 \times p_2  (28)

Since Q ∈ RH∞ is an m2 × p2 transfer matrix, the total number of variables in the optimization
scheme is (nq − 1)nq/2 + nq m2 + nq p2 + m2 p2. In order to solve the multi-objective problems,
the elements of the state-space description of Q are aligned into a vector

X = \left[\, a_{11} \; a_{12} \; \cdots \; a_{(n_q-1)n_q} \;\; b_{11} \; \cdots \; b_{n_q p_2} \;\; c_{11} \; \cdots \; c_{m_2 n_q} \;\; d_{11} \; \cdots \; d_{m_2 p_2} \,\right]^T.  (29)
Therefore, the proposed optimization problems will be solved with respect to the variable vector
X. Note that if Q ∈ RH2, i.e. Dq = 0, the number of free variables reduces to (nq − 1)nq/2 +
nq m2 + nq p2. During the optimization, an interval of variation is set for each parameter in
X, i.e.

a_{min} \le a_{ij} \le a_{max}, \quad b_{min} \le b_{ij} \le b_{max}, \quad c_{min} \le c_{ij} \le c_{max}, \quad d_{min} \le d_{ij} \le d_{max}.  (30)
However, not all the requirements on the elements of Q can be reflected in a range of variation for
each parameter. Therefore, the following penalty functions are used in the optimization schemes
to restrict the variations of some of the parameters and to enforce requirements on the transfer
matrix Q:

P(D, \gamma) = \begin{cases} M & \text{if } \bar{\sigma}(D) \ge \gamma \\ 1 & \text{otherwise} \end{cases}  (31)

J(B, \gamma, \gamma_{opt}) = \begin{cases} N & \text{if } \mathrm{Trace}(B^T B) \ge \gamma^2 - \gamma_{opt}^2 \\ 1 & \text{otherwise} \end{cases}  (32)

where the matrices B and D are linked to the state-space realization of a system, γ > γopt > 0, and
M and N are constants ≫ 1.
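The penalty functions (31)-(32), as reconstructed above from the way they are used in Sections 5.1-5.4, can be written directly as follows (Python with NumPy, illustrative only; the constant M is the tuning value mentioned in the text, and N is an analogous large constant).

```python
import numpy as np

def P(D, gamma, M=1e6):
    """Penalty (31): strongly penalize sigma_max(D) >= gamma, otherwise stay neutral."""
    return M if np.linalg.svd(D, compute_uv=False)[0] >= gamma else 1.0

def J(B, gamma, gamma_opt, N=1e6):
    """Penalty (32): strongly penalize Trace(B^T B) >= gamma^2 - gamma_opt^2."""
    return N if np.trace(B.T @ B) >= gamma**2 - gamma_opt**2 else 1.0
```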
Consider the generalized feedback system described in Figure 2, where the augmented plant Gm maps
the exogenous inputs (w0, w) and the control u to the regulated outputs (z0, z) and the measurement y.
Let G0 denote the subsystem of Gm from (w0, u) to (z0, y), and G the subsystem from (w, u) to (z, y),
so that the two closed-loop maps of interest are

T_{z_0 w_0} = F_l(G_0, K), \qquad T_{zw} = F_l(G, K).  (35)
5.1 Mixed H∞/H∞ Design (Robust H∞ Performance)
The robust performance problem can be formulated as

\min_{\text{stabilizing } K} \|T_{z_0 w_0}\|_\infty \quad \text{s.t.} \quad \|T_{zw}\|_\infty < \gamma  (36)

for some γ > γ∞, where

\gamma_\infty = \min_{\text{stabilizing } K} \|T_{zw}\|_\infty.  (37)

In order to solve (36), the following numerical optimization is proposed:

\min_{A_{q_k},\, B_q,\, C_q,\, D_q} \; P(D_q, \gamma) \cdot \|F_l(G_0, F_l(M_\infty, Q))\|_\infty  (38)

where Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & D_q \end{array}\right], M∞ is given by (10) and A_{q_s} by (13). Therefore, (A_{q_k}, Bq, Cq, Dq)
are the free variables in the optimization scheme. The penalty function P(·, ·) is included to restrict
the maximum singular value of Dq to be less than γ, which is a necessary condition in the Bounded Real
Lemma for ‖Q‖∞ < γ. This condition could not be incorporated into the optimization directly,
since it is difficult to give an interval of variation for the elements of a matrix that guarantees
σ̄(Dq) < γ; the penalty function is therefore added to the cost function to detect violations of the
condition and penalize those solutions. The result obtained from the optimization in (38) is finally
used to construct the multi-objective controller as K = F_l(M∞, Q).
5.2 Mixed H2/H∞
The mixed H2 and H∞ design problem can be stated as

\min_{\text{stabilizing } K} \|T_{z_0 w_0}\|_2 \quad \text{s.t.} \quad \|T_{zw}\|_\infty < \gamma  (39)

for some γ > γ∞. In order to solve (39), the following numerical optimization is proposed:

\min_{A_{q_k},\, B_q,\, C_q} \; \|F_l(G_0, F_l(M_\infty, Q))\|_2  (40)

where Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & 0 \end{array}\right] since now Q ∈ RH2, M∞ is given by (10) and A_{q_s} by (13). Note
that a penalty function is not needed since Q has to be strictly proper, i.e. Dq = 0. Consequently,
in this mixed problem, the free variables in the optimization are (A_{q_k}, Bq, Cq).
5.3 Mixed L1/H∞
The mixed L1 and H∞ design problem can be stated as

\min_{\text{stabilizing } K} \|T_{z_0 w_0}\|_1 \quad \text{s.t.} \quad \|T_{zw}\|_\infty < \gamma  (41)

for some γ > γ∞. In order to solve (41), the following numerical optimization is proposed:

\min_{A_{q_k},\, B_q,\, C_q,\, D_q} \; P(D_q, \gamma) \cdot \|F_l(G_0, F_l(M_\infty, Q))\|_1  (42)

where M∞ is given by (10), A_{q_s} by (13) and Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & D_q \end{array}\right]. Note that the penalty
function P(·, ·) is needed again to restrict σ̄(Dq) < γ since Q ∈ RH∞. Therefore, (A_{q_k}, Bq, Cq, Dq)
are the free variables in the optimization scheme.
5.4 L1/H2 Design
The mixed L1 and H2 design problem can be stated as

\min_{\text{stabilizing } K} \|T_{z_0 w_0}\|_1 \quad \text{s.t.} \quad \|T_{zw}\|_2 < \gamma  (43)

for some γ > γopt, where

\gamma_{opt} = \min_{\text{stabilizing } K} \|T_{zw}\|_2.  (44)

In order to solve (43), the following numerical optimization is proposed:

\min_{A_{q_k},\, B_q,\, C_q} \; J(B_q, \gamma, \gamma_{opt}) \cdot \|F_l(G_0, F_l(M_2, Q))\|_1  (45)

where A_{q_s} = −Cq^T Cq/2 and Q = \left[\begin{array}{c|c} A_{q_k} + A_{q_s} & B_q \\ \hline C_q & 0 \end{array}\right]. Hence, (A_{q_k}, Bq, Cq) are the free variables
in the optimization scheme, with ‖Q‖2² < γ² − γopt². The penalty function J(·, ·, ·) is introduced here
to enforce the restriction on the norm of Q, i.e. to limit the values of the elements of Bq so that
Trace(Bq^T Bq) < γ² − γopt². Note that this condition cannot be translated into any pattern selection
for the elements of Bq; therefore the penalty function is included in the cost function to penalize any
combination of parameters that violates it.
The optimization problems presented in (38), (40), (42) and (45) are generally nonconvex and
intrinsically multi-modal, so it is natural to think that they are not practical to solve. Nevertheless,
evolutionary optimization has turned out to be quite useful for such complicated problems, and when
it is used in conjunction with a well-known gradient-based optimization technique, a powerful
optimization scheme is obtained that can effectively solve the proposed problems.
6 Optimization Scheme
In a first attempt to solve the mixed design problems (38), (40), (42) and (45), standard nonlinear
optimization techniques such as the quasi-Newton and conjugate gradient methods (Grace 1992, Nocedal
and Wright 1999) were applied. However, convergence depended strongly on the initial conditions,
and the algorithms were constantly trapped in local minima. A change of direction was clearly
needed, and evolutionary algorithms came up as a viable alternative. Still, gradient-based
approaches have nice properties that should not be discarded. Therefore, it was decided to use a
gradient-based algorithm in conjunction with an evolutionary optimization such as a genetic or
evolution algorithm. In this scheme, the genetic/evolution algorithm is applied first to perform a
global search in the parameter space and locate a promising minimum; a local search is then
conducted to refine it toward the optimal solution. This two-stage optimization outperformed the
use of either algorithm alone.

In the first stage, a genetic/evolution algorithm was chosen to solve (38), (40), (42) and (45);
a nature-inspired algorithm such as simulated annealing could also have been used. For the second
stage, a quasi-Newton method with line search (Nocedal and Wright 1999) was used to perform the
local search, with the best solution from the genetic/evolution algorithm used as the starting point
for the gradient-based search. A brief description of the evolution and genetic algorithms used in
the paper is presented next. The gradient-based optimization was carried out with the Optimization
Toolbox (Grace 1992) of MATLAB.
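The paper implements both stages with custom MATLAB code. As a rough stand-in, the sketch below (Python with SciPy, illustrative only) uses differential evolution for the global stage and a quasi-Newton (BFGS) refinement for the local stage; `mixed_cost`, `bounds` and `n_params` are hypothetical placeholders for the objectives (38), (40), (42) and (45) and their parameter boxes.

```python
from scipy.optimize import differential_evolution, minimize

def two_stage(cost, bounds, seed=0):
    """Global evolutionary search followed by a gradient-based local refinement."""
    stage1 = differential_evolution(cost, bounds, maxiter=200, seed=seed, polish=False)
    stage2 = minimize(cost, stage1.x, method="BFGS")     # quasi-Newton local search
    return (stage2.x, stage2.fun) if stage2.fun < stage1.fun else (stage1.x, stage1.fun)

# Hypothetical usage: X collects the free parameters of Q as in (29), and
# mixed_cost(X) would assemble Q, form Fl(G0, Fl(M_inf, Q)) and return, e.g.,
# P(Dq, gamma) * ||Fl(G0, Fl(M_inf, Q))||_inf for problem (38).
# x_opt, f_opt = two_stage(mixed_cost, bounds=[(-10, 10)] * n_params)
```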
6.1 Evolution Algorithm
Consider a function with m inputs and one output that has to be optimized. The output of this
function is referred to as its fitness. The idea is to adjust the input parameters in order to find an
optimum of the fitness. One combination of the m input parameters is called an individual, and a
group of n individuals is a population. The algorithm starts from a randomly selected initial
population and creates a group of children out of the parents by mutation. The fitness of the
children is then evaluated and compared with that of their parents, and the best of both are
selected to be the next generation of parents. This procedure continues until an optimum is found
or a given termination criterion is fulfilled.

Following these ideas, the evolution algorithm EVAOCP (Evolution Algorithm for Optimization
of Continuous Parameters) is proposed for the optimization process; a minimal code sketch is given
after the steps below.
1. Initialize the parameters of the evolution algorithm.

2. Select an initial population (this can be a random selection, a guess, or the result of a previous
experiment).

3. Check if the conditions for termination of the algorithm are satisfied: optimality, maximum
number of iterations, or no progress.

   • YES: Return the best values obtained during the optimization process.

   • NO: Continue with the evolution algorithm.

4. Adjust the step (mutation range) according to the progress achieved.

5. Create children from the parent set.

   (a) Check the mutation factor.

   (b) Determine the new step.

   (c) Generate children by adding a random perturbation of variance 'step' to the parents.

   (d) Check that the children satisfy the parameter bounds.

6. Evaluate the children's fitness.

7. Compare the children's and parents' performance and keep the best of both.

8. Compute the progress velocity according to the number of children better than their parents.

9. Select the best solutions to judge optimality.

10. Go back to 3.
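The steps above amount to a (μ+λ) evolution strategy with an adaptive mutation step. A minimal sketch of such a strategy is given below (Python with NumPy, illustrative only; the population sizes and the step-adaptation rule are arbitrary choices, not the ones used in the paper).

```python
import numpy as np

def evaocp_sketch(cost, bounds, n_parents=10, n_children=50, n_gen=100, step=0.5, seed=0):
    """(mu + lambda) evolution strategy: mutate parents, keep the best of parents
    and children, and adapt the mutation step to the observed progress."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    parents = rng.uniform(lo, hi, size=(n_parents, lo.size))
    f_par = np.array([cost(x) for x in parents])
    for _ in range(n_gen):
        idx = rng.integers(n_parents, size=n_children)
        children = np.clip(parents[idx] + step * rng.standard_normal((n_children, lo.size)),
                           lo, hi)                      # mutation kept within the bounds
        f_chi = np.array([cost(x) for x in children])
        success = np.mean(f_chi < f_par[idx])           # crude progress "velocity"
        step *= 1.2 if success > 0.2 else 0.8           # adjust the mutation range
        pool = np.vstack([parents, children])
        f_pool = np.concatenate([f_par, f_chi])
        keep = np.argsort(f_pool)[:n_parents]           # best of parents and children
        parents, f_par = pool[keep], f_pool[keep]
    return parents[0], f_par[0]
```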
6.2 Genetic Algorithm
In a genetic algorithm, the optimized parameters are arranged in combination sets called
chromosomes. An interval of variation is assigned to each parameter in the optimization search,
so bounds for the elements of a chromosome are defined a priori. To start, an initial combination
of chromosomes is used, usually chosen randomly; after the initial evaluation, a predefined number
of the best chromosomes is kept, the population. The population size is kept constant after each
generation. From the population, a set of chromosomes is chosen, the parents, to generate a new
breed of chromosomes, the children. The parent chromosomes are chosen by proportional selection
according to their fitness. The new children are created by a linear combination of two parents,
the crossover; note that this is a special type of crossover operator for a real-parameter
representation. Random perturbations are then introduced into the set of parents and children with
a predefined rate, the mutation. The combined set of parents and children is evaluated and a new
population is selected. This cycle, a generation, is repeated until an ending condition, no progress
or convergence, is satisfied. (A code sketch of one such generation follows the steps below.)
The algorithm called GAOCP (Genetic Algorithm for Optimization of Continuous Parameters)
was coded in MATLAB and consists of the following steps:

1. Initialize the GAOCP parameters.

2. Generate the initial population.

3. Evaluate the cost function.

4. Keep the best chromosomes as the population (constant size) and check the ending condition
(no progress or convergence); stop if it is satisfied.

5. Select parents by proportional selection according to their fitness.

6. Create a new generation (children) from the parents by linear-combination crossover.

7. Introduce mutation randomly.

8. Go back to step 3.
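One generation of such a genetic algorithm can be sketched as follows (Python with NumPy, illustrative only). Since the cost is minimized, the proportional selection below is applied to an inverted cost scale, which is one of several possible choices and not necessarily the rule used in GAOCP.

```python
import numpy as np

def ga_generation(pop, cost_values, cost, rng, mut_rate=0.1, mut_scale=0.1):
    """One GAOCP-style generation: proportional parent selection, linear-combination
    crossover, random mutation, then keep the best of the combined set."""
    n = pop.shape[0]
    fitness = cost_values.max() - cost_values + 1e-12           # smaller cost, larger fitness
    prob = fitness / fitness.sum()
    i = rng.choice(n, size=n, p=prob)                            # proportional selection
    j = rng.choice(n, size=n, p=prob)
    alpha = rng.uniform(size=(n, 1))
    children = alpha * pop[i] + (1.0 - alpha) * pop[j]           # crossover
    mask = rng.uniform(size=children.shape) < mut_rate
    children = children + mask * mut_scale * rng.standard_normal(children.shape)  # mutation
    pool = np.vstack([pop, children])                            # evaluate parents + children
    f_pool = np.concatenate([cost_values, [cost(c) for c in children]])
    keep = np.argsort(f_pool)[:n]                                # constant population size
    return pool[keep], f_pool[keep]
```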
7 Numerical Examples
In this section, the proposed mixed performance problems are illustrated through numerical
examples. Due to space limitations, only two examples are included here. In all these examples,
the first stage of the optimization scheme was carried out by a genetic/evolution algorithm, and
the second stage by a quasi-Newton scheme with line search (Grace 1992).

So far, there is no direct computation of the L1 norm other than its mathematical definition (6).
Consequently, in order to approach the L1/H∞ design (42) or the L1/H2 design (45), the norm
computation was approximated. Note that the impulse response of each element of (1) is given by

g_{ij}(t) = C_i e^{At} B_j + D_{ij}\,\delta(t)  (46)

where B = [B_1 \;\ldots\; B_m], C = [C_1 \;\ldots\; C_p]^T and D = [D_{ij}]. Hence, the infinite integration
in (7) can be approximated according to the poles of g_{ij}(s): the time at which g_{ij}(t) falls below a
certain percentage of its peak value is estimated, and the integration is performed over that interval
of time.
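A simple numerical version of this approximation is sketched below (Python with NumPy/SciPy, illustrative only). Instead of estimating the truncation time from the peak of each g_ij, the sketch assumes A is Hurwitz and integrates over a fixed multiple of the slowest time constant; `l1_norm_approx` is a hypothetical helper, not code from the paper.

```python
import numpy as np
from scipy.linalg import expm

def l1_norm_approx(A, B, C, D, t_final=None, dt=1e-3):
    """Approximate the L1 norm (6)-(7) by integrating |C_i e^{At} B_j| over a finite horizon.

    Assumes A is Hurwitz; the impulsive term D_ij contributes |D_ij| directly."""
    if t_final is None:
        t_final = 10.0 / np.abs(np.real(np.linalg.eigvals(A))).min()   # slowest mode
    n_steps = int(np.ceil(t_final / dt))
    Ed = expm(A * dt)                        # one-step state-transition matrix
    x = B.copy().astype(float)               # x(k) approximates e^{A k dt} B
    G1 = np.abs(D).astype(float)             # contribution of the delta function
    for _ in range(n_steps):
        G1 += np.abs(C @ x) * dt             # accumulate |g_ij(t)| dt entry by entry
        x = Ed @ x
    return np.max(G1.sum(axis=1))             # max over rows of the summed entry norms
```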
The following numerical examples were computed on a Pentium III PC running at 933 MHz. The
constant M in (31) was given the value 1 × 10^6.
7.1 Example 1
This example is taken from Baeyens and Khargonekar (1994). The multi-objective problem is a
mixed H2/H∞ design with a generalized plant Gm constructed from the matrices
(A, B1, B2, C0, C1, C2) given in (Baeyens and Khargonekar 1994). This is a special case of the mixed
H2/H∞ problems formulated in the previous sections with w0 = w:

\min_{\text{stabilizing } K} \|T_{z_0 w}\|_2 \quad \text{s.t.} \quad \|T_{zw}\|_\infty < \gamma.  (48)

The optimization scheme in (40) was used to obtain the mixed controller. In order to judge the
performance of the proposed scheme, the mixed H2/H∞ optimization was also carried out using
the LMI Toolbox (Gahinet et al 1995) of MATLAB. In this example, the results reported for the
proposed method (40) were obtained using the evolution algorithm (EVAOCP) in the first stage
of the optimization; the results were almost identical when the genetic algorithm was used instead.
The example was run for two values of γ: 1.6 and 2.0. Note that the closed-loop performance
‖Tzw‖∞ < γ is always guaranteed by the proposed optimization algorithm (40).
The computational effort and performance for different orders of Q were investigated for γ = 1.6.
The algorithm was run 5 consecutive times for each order of Q. Table 1 presents these results:
the mean value and standard deviation (std) of the floating point operations (flops), the computation
time, and the resulting performance ‖Tz0w‖2 for each order. It is seen that there is no significant
performance improvement in choosing a Q of order higher than 2. It is also clear that the algorithm
reached almost the same performance level (‖·‖2) for each order, since the standard deviation is
close to zero in all cases, and that the computational cost increases with the order of Q. Figure 3
shows the flops required during the optimization as a function of the order of Q.

Table 2 summarizes the results obtained for γ = 1.6 and 2.0. A 2nd order Q was enough to provide
these results, so the optimization was carried out over 5 parameters (Q is strictly proper). In the
first stage, the H2 cost was 23.04 and 4.72 for γ = 1.6 and 2.0, respectively; in the second stage,
these values were reduced further to 22.875 and 4.53. Figure 4 presents the evolution of the cost
function for γ = 2.0 during the first stage.
7.2 Example 2
This example is taken from Sznaier and Blanchini (1994). The mixed L1/H∞ design is to solve the
following minimization:

\min_{\text{stabilizing } K} \|T_{z_0 w_0}\|_1 \quad \text{s.t.} \quad \|T_{zw}\|_\infty < \gamma.  (49)

The generalized plant G is built from the weighting functions W1, W2 and the plant P defined in
(Sznaier and Blanchini 1994). The value of γ was set to 2.6, as in the original paper. Consequently,
the optimization proposed in (42) was applied in order to compute the mixed controller. For this
example, the genetic algorithm (GAOCP) was used in the first stage of the optimization; however,
no clear advantage or disadvantage was observed for either algorithm.
As in the first example, the computational effort and performance for different orders of Q were
investigated first (γ = 2.6). Table 3 presents these results for Q orders from 1 to 3; no significant
performance improvement was observed for a Q of higher order. The algorithm was again run 5
consecutive times for each order of Q. From Table 3, it is observed that the algorithm reached
almost the same performance level (‖·‖1) for any order, since the standard deviation is always close
to zero; however, the computational cost varied drastically between runs, and the computation time
and flops increase consistently with the order of Q. Figure 5 shows the flops required during the
optimization as a function of the order of Q.

Table 4 summarizes the results with a 2nd order Q. Hence, a 6-parameter optimization was
carried out (Q is now a proper transfer matrix). After the first stage of the optimization, the cost
was 4.1216; the second stage reduced this value to 4.04 (i.e. ‖Tz0w0‖1 = 4.04). Furthermore, the
value of the H∞ norm is ‖Tzw‖∞ = 2.5476 < 2.6. On average, 307.77 s and 6.66 × 10^9 flops were
needed to reach a solution; in this case, the computation of the L1 norm slowed down the
optimization scheme. Nevertheless, an improvement in L1 performance is obtained compared with
the result of Sznaier and Blanchini (1994). Figure 6 presents the evolution of the cost function
during the first stage of the optimization.
8 Conclusions
Optimization schemes have been presented to solve multi-objective design problems. Parameterizations
of norm-bounded H∞ and H2 functions were used to limit the number of variables and restrict
the optimizations. This step reduces the complexity of the mixed synthesis problem and guarantees
the closed-loop H∞ or H2 performance according to the selected parameterization; the search for
the appropriate Q parameter then seeks to minimize another performance measure. The resulting
optimizations are highly nonlinear and multi-modal, and for this reason a two-stage algorithm was
used in the optimization process. Numerical examples show the success of the optimization schemes
in designing mixed controllers. However, it is not possible to establish the best achievable
performance with these techniques, and this issue has to be explored iteratively. It is interesting to
note that only low order Qs were needed in the numerical examples, so the orders of the resulting
controllers are comparable to those of the generalized plants. In all the benchmark examples, the
performance was improved compared to previously published results. It should be pointed out that
more complex mixed problems can also be treated in the same framework without any additional
difficulty.
References
Amishima, T., Bu, J., and Sznaier, M., 1998, Mixed L1/H2 controllers for continuous
time systems. Proceedings of the American Control Conference, Philadelphia, USA,
June pp. 649-654.
Baeyens, E., and Khargonekar, P.P., 1994, Some examples in mixed H2/H∞ control.
Proceedings of the American Control Conference, Baltimore, MD, June pp. 1608-
1612.
Bernstein, D.S., and Haddad, W.M., 1989, LQG control with an H∞ performance
bound: a Riccati equation approach. IEEE Transactions on Automatic Control, 34,
293-305.
Campos-Delgado, D.U., and Zhou, K., 2001, H∞ and H2 strong stabilization by nu-
merical optimization. Accepted to IFAC 2002, Barcelona, Spain, 21-26 June.
Chen, B.S., Tseng, C.S., and Uang, H.J., 2000, Mixed H2/H∞ fuzzy output feedback
control design for nonlinear dynamic systems: an LMI approach. IEEE Transactions
on Fuzzy Systems, 8, 249-266.
Chen, X., and Zhou, K., 2001, Multiobjective H2/H∞ control design. SIAM Journal
on Control and Optimization, 40, 628-660.
Clement, B., and Duc, G., 2000, Flexible arm multiobjective control via Youla parame-
terization and LMI optimization. 3rd IFAC Symposium on Robust Control Design,
Prague, Czech Republic, 21-23 June.
Djouadi,S.M., Charalambous, C.D., and Repperger, D.W., 2001, On multiobjective
H2/H∞ optimal control. Proceedings of the American Control Conference, Arling-
ton, USA, 25-27 June pp. 4091-4096.
Doyle, J.C., Glover, K., Khargonekar, P.K., and Francis, B.A., 1989, State-space solu-
tions to standard H2 and H∞ control problems. IEEE Transactions on Automatic
Control, 34, 831-847.
Doyle, J.C., Zhou, K., Glover, K., and Bodenheimer, B., 1994, Mixed H2 and H∞ per-
formance objectives II: optimal control. IEEE Transactions on Automatic Control,
39, 1575-1587.
Gahinet, P., Nemirovski, A., Laub, A.J., and Chilali, M., 1995, LMI control toolbox
user’s guide (The Mathworks, Inc.).
Grace, A., 1992, Optimization toolbox: user’s guide (The MathWorks, Inc.).
Haddad, W.M., Chellaboina, V.S., and Kumar, R., 1998, Multiobjective L1/H∞ con-
troller design for systems with frequency and time domain constraints. Proceedings
of the American Control Conference, Philadelphia, USA, June pp. 3250-3254.
Halder, B., Hassibi, B., and Kailath, T., 1997, Linearly combined suboptimal mixed
H2/H∞ controllers. Proceedings of the 36th Conference on Decision and Control,
San Diego, USA, December pp. 434-439.
Hindi, H.A., Hassibi, B., and Boyd, S.P., 1998, Multiobjective H2/H∞-optimal control
via finite dimensional Q-parameterization and linear matrix inequalities. Proceedings
of the American Control Conference, Philadelphia, USA, June pp. 3244-3249.
Khargonekar, P.P., and Rotea, M.A., 1991, Mixed H2/H∞ control: a convex optimiza-
tion approach. IEEE Transactions on Automatic Control, 36, 824-837.
Khargonekar, P.P., Rotea, M.A., and Baeyens, E., 1996, Mixed H2/H∞ filtering. In-
ternational Journal of Robust and Nonlinear Control, 6, 313-330.
Li, Y., Tan, K.C., and Gong, M., 1997, Global structure evolution and local param-
eter learning for control system model reductions. In Evolutionary Algorithms in
Engineering Applications (Springer-Verlag), pp. 345-360.
Limebeer, D.J.N., Anderson, B.D.O., and Hendel, B., 1994, A Nash game approach to
mixed H2/H∞ control. IEEE Transactions on Automatic Control, 39, 687-692.
Man, K.F., Tang, K.S., and Kwong, S., 1999, Genetic Algorithms: Concepts and Designs
(Springer-Verlag London).
Mitchell, M., 1996, An Introduction to Genetic Algorithms (The MIT Press, Cambridge,
Massachusetts).
Nocedal, J., and Wright, S.J., 1999, Numerical Optimization (Springer Verlag New
York).
de Oliveira, M.C., Geromel, J.C., and Bernussou, J., 1999, An LMI optimization ap-
proach to multiobjective controller design for discrete-time systems. Proceedings of
the 38th Conference on Decision and Control, Phoenix, USA, December pp. 3611-
3616.
Qi, X., Khammash, M.H., and Salapaka, M.V., 2001, Optimal controller synthesis with
multiple objectives. Proceedings of the American Control Conference, Arlington,
USA, 25-27 June pp. 2730-2735.
Rotstein, H., Sznaier, M., and Idan, M., 1996, H2/H∞ filtering theory and an aerospace
application. International Journal of Robust and Nonlinear Control, 6, 347-366.
Salapaka, M.V., and Khammash, M., 1998, Multi-objective MIMO optimal control
design without zero interpolation. Proceedings of the 1998 American Control Con-
ference, Philadelphia, USA, June pp. 660-664.
Salapaka, M.V., Khammash, M., and Dahleh, M., 1999, Solution of MIMO H2/L1 problem
without zero interpolation. SIAM Journal on Control and Optimization, 37, 1865-
1873.
Scherer, C.W., 1995, Multiobjective H2/H∞ control. IEEE Transactions on Automatic
Control, 40, 1054-1062.
Scherer, C.W., Gahinet, P., and Chilali, M., 1997, Multiobjective output-feedback con-
trol via LMI optimization. IEEE Transactions on Automatic Control, 42, 896-911.
Scherer, C.W., 2000, An efficient solution to multi-objective control problems with LMI
objectives. Systems and Control Letters, 40, 43-57.
Sznaier, M., and Blanchini, F., 1994, Mixed L1/H∞ controllers for MIMO continuous-
time systems. Proceedings of the 33rd Conference on Decision and Control, Lake
Buena Vista, USA, December pp. 2702-2707.
Sznaier, M., and Bu, J., 1998, Mixed ℓ1/H∞ control of MIMO systems via convex opti-
mization. IEEE Transactions on Automatic Control, 43, 1229-1241.
Sznaier, M., and Amishima, T., 1998, H2 control with time domain constraints. Proceed-
ings of the American Control Conference, Philadelphia, USA, June pp. 3229-3233.
Steinbuch, M., and Bosgra, O.H., 1991, Robust performance in H2/H∞ optimal con-
trol. Proceedings of the 30th Conference on Decision and Control, Brighton, England,
December pp. 549-550.
Tang, K.S., Man, K.F., Chan, C.Y., Kwong, S., and Fleming, P.J., 1995, GA approach
to multiple objective optimization for active noise control. IFAC Algorithms and
Architecture for Real-Time Control AARTC 95, pp. 13-19.
Voulgaris, P., 1994, Optimal H2/ℓ1 control: the SISO case. Proceedings of the 33rd
Conference on Decision and Control, Lake Buena Vista, USA, December pp. 3181-
3186.
Vroemen, B., and de Jager, B., 1997, Multiobjective control: an overview. Proceedings
of the 36th Conference on Decision and Control, San Diego, USA, December pp.
440-445.
Whidborne, J.F., Gu, D.W., and Postlethwaite, I., 1995, Algorithms for solving the
method of inequalities - a comparative study. Proceeding of the American Control
Conference, pp. 3393-3397.
White, M.S., and Flockton, S.J., 1997, Adaptive recursive filtering using evolutionary
algorithm. Evolutionary Algorithms in Engineering Applications (Springer-Verlag),
pp. 361-376.
Wu, J., and Chu, J., 1999, Approximation methods of scalar mixed H2/L1 problems
for discrete-time systems. IEEE Transactions on Automatic Control, 44, 1869-1874.
Zhou, K., Doyle, J.C., Glover, K., and Bodenheimer, B., 1990, Mixed H2 and H∞ control theory. American Control Conference, San Diego, USA, pp. 2502-2507.
Zhou, K., Glover, K., Bodenheimer, B., and Doyle, J.C., 1994, Mixed H2 and H∞ performance objectives I: robust performance analysis. IEEE Transactions on Au-
tomatic Control, 39, 1564-1574.
Zhou, K., Doyle, J.C., and Glover, K., 1996, Robust and Optimal Control (Upper Saddle
River, NJ: Prentice Hall).
Zhou, K., and Doyle, J.C., 1998, Essentials of Robust Control (Upper Saddle River, NJ:
Prentice Hall).
Order of Q | flops (mean) | flops (std) | time [s] (mean) | time [s] (std) | ‖Tz0w‖2 (mean) | ‖Tz0w‖2 (std)
1st        | 1.38×10^8    | 3.33×10^5   | 19.12           | 0.38            | 23.10           | 0.00
2nd        | 1.68×10^8    | 7.96×10^7   | 23.24           | 0.41            | 22.88           | 0.00
3rd        | 2.91×10^8    | 2.86×10^6   | 28.48           | 1.70            | 22.87           | 0.00
4th        | 4.33×10^8    | 2.49×10^7   | 36.02           | 2.22            | 22.85           | 0.01
5th        | 6.72×10^8    | 7.62×10^6   | 47.43           | 0.22            | 22.84           | 0.01

Table 1: Computational effort and performance for γ = 1.6 in Example 1.
(2nd order Q)

γ = 1.6:  28.13   37.20   22.88
γ = 2.0:  5.49    7.45    4.53

Table 2: Closed-loop performance ‖Tz0w‖2 of the mixed H2/H∞ controllers in Example 1; the last
value in each row (22.88 and 4.53) is the H2 cost achieved by the proposed method (40) with a 2nd
order Q.
Order of Q | flops (mean) | flops (std) | time [s] (mean) | time [s] (std) | ‖Tz0w0‖1 (mean) | ‖Tz0w0‖1 (std)
1st        | 5.08×10^9    | 1.40×10^9   | 265.51          | 73.44           | 4.43             | 0.00
2nd        | 6.66×10^9    | 3.23×10^9   | 307.77          | 149.42          | 4.04             | 0.03
3rd        | 8.08×10^8    | 2.23×10^9   | 334.72          | 92.38           | 4.03             | 0.04

Table 3: Computational effort and performance for γ = 2.6 in Example 2.
(2nd order Q)

γ = 2.6:  4.82   4.04

Table 4: Closed-loop performance ‖Tz0w0‖1 of the mixed L1/H∞ controllers in Example 2; the value
4.04 is achieved by the proposed method (42) with a 2nd order Q.
Figure 2 Multiobjective LFT form.
Figure 3 Computation effort (flops) as a function of the Q order for example 1 and γ = 1.6.
Figure 4 Evolution of cost function during the first stage of the optimization: example 1 and
γ = 2.0.
Figure 5 Computation effort (flops) as a function of the Q order for example 2 and γ = 2.6.
Figure 6 Evolution of cost function during the first stage of the optimization: example 2.
[Figure 1: feedback interconnection of the generalized plant G and the controller K with signals w, u, z and y.]
Figure 3: Computation effort (flops) as a function of the Q order for Example 1 and γ = 1.6.
Figure 4: Evolution of the cost function during the first stage of the optimization: Example 1 and
γ = 2.0. (Horizontal axis: generations; vertical axis: cost function.)
Figure 5: Computation effort (flops) as a function of the Q order for Example 2 and γ = 2.6.
Figure 6: Evolution of the cost function during the first stage of the optimization: Example 2.
(Horizontal axis: generations; vertical axis: cost function.)