A LINEAR MATRIX INEQUALITY SOLUTION TO THE INPUT COVARIANCE CONSTRAINT CONTROL PROBLEM
Andrew White, Guoming Zhu, Jongeun Choi
Department of Mechanical Engineering
Michigan State University, East Lansing, Michigan 48824
Email: {whitea23, zhug, jchoi}@egr.msu.edu
ABSTRACT
In this paper, the input covariance constraint (ICC) con-
trol problem is solved by a convex optimization with linear ma-
trix inequality (LMI) constraints. The ICC control problem is an
optimal control problem that is concerned with finding the best
output performance possible subject to multiple constraints on
the input covariance matrices. The contribution of this paper is
the characterization of the control synthesis LMIs used to solve
the ICC control problem. To demonstrate the effectiveness of the
proposed approach, a numerical example is solved with the control synthesis LMIs. Both discrete and continuous-time problems
are considered.
INTRODUCTION
In this paper, we consider the input covariance constraint
(ICC) control problem. The ICC control problem is an optimal
control problem in which the output performance is minimized
subject to multiple constraints on the control input covariance
matrices Ui of the form Ui ≤ Ūi, where Ūi is given. The ICC control problem has two interesting interpretations: stochastic and
deterministic. For the stochastic interpretation the exogenous in-
puts are assumed to be uncorrelated zero-mean white noises with
a given intensity. With the exogenous input defined in this way,
the ICC control problem minimizes the weighted performance
output covariance subject to the control input covariance con-
straints, such that the constraints can be interpreted as constraints
on the variance of the control actuation. For the deterministic in-
terpretation the exogenous inputs are assumed to be unknown
disturbances that belong to a bounded ℓ2 energy set. Then the
ICC control problem minimizes the maximum singular value of
the performance outputs while ensuring that the maximum singular value of the control inputs is less than the corresponding
control input constraints. In other words, the ICC control prob-
lem is the problem of minimizing the weighted sum of worst-case
peak values on the performance outputs subject to the constraints
on the worst-case peak values of the control input. This inter-
pretation is important in applications where hard constraints on
the actuator signals are present, such as space telescope pointing
control [1], system identification, and machine tool control.
The ICC control problem is closely related to the output co-
variance constraint (OCC) control problem which was originally
studied in [2]. The OCC control problem is an optimal control
problem that minimizes the control input subject to output co-
variance constraints. The OCC control problem is solved by a
linear quadratic Gaussian (LQG) controller with a special choice
of output weights, which can be obtained by using the iterative
OCC algorithm detailed in [2]. While the ICC control problem
can also technically be solved by an LQG controller with a spe-
cial choice of input weights, developing an iterative algorithm to
directly obtain such an input weighting matrix has been an ex-
tremely difficult problem to solve. However, after reconsidering
the OCC control problem as a convex optimization with linear
matrix inequality (LMI) constraints in [3], it became clear that
the more difficult ICC control problem could also be solved as a
convex optimization with LMI constraints.
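The LQG-with-input-weights idea mentioned above can be illustrated numerically: increasing a scalar LQR/LQG input weight r shrinks the resulting control covariance, which is why iterating on an input weight could in principle enforce an input covariance constraint. A minimal sketch, with a hypothetical 2-state plant and made-up weights (none of these numbers are from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical plant (not from the paper), used only to illustrate the
# trade-off: a larger input weight r yields a smaller control covariance.
Ap = np.array([[0.0, 1.0], [-1.0, -0.5]])
Bp = np.array([[0.0], [1.0]])
Dp = np.array([[0.0], [1.0]])
Q = np.eye(2)                    # state weight
Wp = np.array([[1.0]])           # process-noise intensity

def input_covariance(r):
    # LQR gain for input weight r (convention u = G x, so G = -R^-1 B^T X)
    R = np.array([[r]])
    X = solve_continuous_are(Ap, Bp, Q, R)
    G = -np.linalg.solve(R, Bp.T @ X)
    A = Ap + Bp @ G
    # Closed-loop controllability Gramian: A P + P A^T + Dp Wp Dp^T = 0
    P = solve_continuous_lyapunov(A, -Dp @ Wp @ Dp.T)
    return (G @ P @ G.T).item()

U_small, U_large = input_covariance(0.1), input_covariance(10.0)
print(U_small, U_large)  # larger r -> smaller control covariance
```

With a single input a scalar weight sweep like this is easy; the difficulty noted above is that with multiple constrained input channels there is no obvious update rule for a full input weighting matrix, which motivates the direct LMI formulation.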
This paper is organized as follows. First, the continuous-
time ICC control problem is introduced and then Theorems 1
and 2 provide LMIs that can be solved to obtain state-feedback
and dynamic output feedback controllers, respectively, that min-
imize an upper bound on the ICC cost. In the next section, the
discrete-time ICC control problem is presented and Theorems 3
and 4, which find state-feedback and dynamic output feedback
controllers that minimize the upper bound of the ICC cost, are
Proceedings of the ASME 2013 Dynamic Systems and Control Conference DSCC2013
October 21-23, 2013, Palo Alto, California, USA
DSCC2013-3716
Copyright © 2013 by ASME
given. To demonstrate the effectiveness of the proposed ap-
proach, a numerical example is solved with the LMIs introduced
in this paper. Concluding remarks are given in the final section.
CONTINUOUS TIME SYSTEMS
Consider the following continuous-time system:

ẋp(t) = Ap xp(t) + Bp u(t) + Dp wp(t),
yp(t) = Cp xp(t),
z(t) = Mp xp(t) + v(t),
(1)

where xp, u, wp, and v represent the state, control, process noise, and measurement noise, respectively. The vector yp contains all variables whose dynamic responses are of interest and the vector z is a vector of noisy measurements.
Suppose that we apply to the plant (1) a full state feedback
stabilizing control law of the form
u(t) = Gxp(t), (2)
or a strictly proper output feedback stabilizing control law given by

ẋc(t) = Ac xc(t) + F z(t),
u(t) = G xc(t).
(3)
Then the resulting closed-loop system is

ẋ(t) = A x(t) + D w(t),
y(t) = [yp(t); u(t)] = [Cy; Cu] x(t) = C x(t),
(4)

where for the state feedback case we have x = xp and w = wp, while for the output feedback case we have x = [xp^T xc^T]^T and w = [wp^T v^T]^T.
Considering the closed-loop system (4), let Wp and V denote positive definite symmetric matrices with dimensions equal to the process noise wp and measurement vector z, respectively. Then define W = Wp if the state feedback controller (2) is used, or W = block diag[Wp, V] if the strictly proper output feedback controller (3) is used. Let P denote the closed-loop controllability Gramian from the (weighted) disturbance input W^{-1/2} w. Since A is stable, P satisfies

0 = A P + P A^T + D W D^T. (5)
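Equation (5) maps directly onto a standard Lyapunov solver. A small sketch with a hypothetical stable closed-loop matrix; note that SciPy's solve_continuous_lyapunov(a, q) solves a x + x a^T = q, so the noise term enters negated:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable closed-loop data (not from the paper).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
D = np.array([[0.0], [1.0]])
W = np.array([[1.0]])

# Gramian of (5): 0 = A P + P A^T + D W D^T.
P = solve_continuous_lyapunov(A, -D @ W @ D.T)
residual = A @ P + P @ A.T + D @ W @ D.T
assert np.allclose(residual, 0)                      # (5) is satisfied
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)       # P is positive semidefinite
```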
The control input u(t) in (4) is partitioned into

u = [u1^T, u2^T, ..., um^T]^T, (6)

such that each ui for i = 1, 2, ..., m is given by

ui = C_{u,i} xp = Φi Cu xp ∈ R^{mi}, (7)

where Φi is an appropriately selected projection matrix for each input ui. In this paper, we are interested in finding controllers of the form (2) or (3) that minimize the (weighted) output performance trace(Q Cp P Cp^T) with Q > 0, and satisfy the constraints

Ui = Φi Cu P Cu^T Φi^T ≤ Ūi, i = 1, 2, ..., m, (8)

where Ūi > 0 (i = 1, 2, ..., m) are given and P solves (5). This problem, which is called the input covariance constraint (ICC) problem, is defined as follows.
The ICC Problem. Find a static state feedback or full-order dynamic output feedback controller for the system (1) to minimize the ICC cost

JICC = trace(Q Cp P Cp^T), Q > 0, (9)

subject to (5) and (8). □
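For any fixed stabilizing gain, the cost (9) and the constrained blocks (8) can be evaluated directly, which is useful for checking a synthesized controller. A sketch with a made-up two-input plant and gain, where the hypothetical projections Φ1 and Φ2 simply select the two scalar inputs:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical two-input plant and stabilizing gain (all numbers made up).
Ap = np.array([[0.0, 1.0], [-1.0, -0.5]])
Bp = np.array([[0.0, 0.0], [1.0, 1.0]])
Dp = np.array([[0.0], [1.0]])
Wp = np.array([[1.0]])
Cp = np.eye(2)
Q = np.eye(2)
G = np.array([[-1.0, -1.0], [-1.0, -1.0]])   # Cu = G for state feedback

# Gramian (5) for the closed loop, then cost (9) and blocks (8).
P = solve_continuous_lyapunov(Ap + Bp @ G, -Dp @ Wp @ Dp.T)
J_icc = np.trace(Q @ Cp @ P @ Cp.T)                        # ICC cost (9)
Phi = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]     # input selectors
U = [Phi_i @ G @ P @ G.T @ Phi_i.T for Phi_i in Phi]       # blocks Ui in (8)
print(J_icc, [u.item() for u in U])
```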
In this paper, we consider a convex optimization solution to
the ICC problem using LMIs.
State Feedback
With the state feedback controller (2), the closed-loop system matrices in (4) are given by

A = Ap + Bp G, D = Dp, Cy = Cp, Cu = G. (10)
Theorem 1. There exists a controller in the form (2), given by

G = L P̄^{-1}, (11)

that minimizes JICC (9) and satisfies the input constraints (8) if there exist a matrix L ∈ R^{m×n} and a symmetric positive definite matrix P̄ ∈ R^{n×n} that minimize the upper bound of the ICC cost

J̄ICC = min_{P̄,L} trace(Q Cp P̄ Cp^T) > trace(Q Cp P Cp^T) = JICC, (12)

subject to the LMIs

[ P̄ Ap^T + Ap P̄ + L^T Bp^T + Bp L    Dp Wp^{1/2} ]
[ ⋆                                    −I          ] < 0, (13)

Q Cp P̄ Cp^T > 0, (14)

[ Ūi    Φi L ]
[ ⋆     P̄   ] > 0, (15)

for i = 1, 2, ..., m.
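The structure of LMI (13) can be sanity-checked numerically: for any stabilizing G, a slightly inflated Gramian P̄ solving A P̄ + P̄ A^T + D W D^T + εI = 0, together with L = G P̄, gives a strictly feasible point. A sketch with a hypothetical plant and gain; this only verifies feasibility of a constructed point — the synthesis itself needs an SDP solver such as SeDuMi or LMI Lab:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

# Hypothetical plant and stabilizing gain (not from the paper).
Ap = np.array([[0.0, 1.0], [-1.0, -0.5]])
Bp = np.array([[0.0], [1.0]])
Dp = np.array([[0.0], [1.0]])
Wp = np.array([[1.0]])
G = np.array([[-1.0, -1.0]])
A = Ap + Bp @ G

# Inflated Gramian: A Pbar + Pbar A^T + D W D^T + eps*I = 0, and L = G Pbar.
eps = 0.1
Pbar = solve_continuous_lyapunov(A, -(Dp @ Wp @ Dp.T + eps * np.eye(2)))
L = G @ Pbar

# Assemble the block matrix of LMI (13) and check negative definiteness.
top_left = Pbar @ Ap.T + Ap @ Pbar + L.T @ Bp.T + Bp @ L
DW = Dp @ np.real(sqrtm(Wp))
lmi13 = np.block([[top_left, DW], [DW.T, -np.eye(1)]])
assert np.max(np.linalg.eigvalsh(lmi13)) < 0   # LMI (13) holds at this point
```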
Proof. According to [4], the Lyapunov equation (5) can be written as the following inequality:

0 > A P̄ + P̄ A^T + D W D^T, (16)

where P̄ = P̄^T > 0. Notice that (16) is the Schur complement of the following LMI:

[ A P̄ + P̄ A^T    D W^{1/2} ]
[ W^{1/2} D^T     −I        ] < 0. (17)

Since P̄ = P̄^T > 0, to ensure that (17) is satisfied the closed-loop state matrix A must be Hurwitz. The LMI (13) is constructed from the LMI (17) by first substituting the closed-loop matrices (10) into the LMI (17), then by using the change of variables L = G P̄, and finally by recalling that for state feedback W = Wp.

Notice that since (16) is less than zero, there exists a matrix M = M^T > 0 such that

0 = A P̄ + P̄ A^T + D W D^T + M. (18)

Consequently, P̄ > P. From (14), it follows that

Q Cp P̄ Cp^T > Q Cp P Cp^T. (19)

Therefore,

J̄ICC = trace(Q Cp P̄ Cp^T) > trace(Q Cp P Cp^T) = JICC. (20)

Likewise, it follows from (15) that

Ūi > Φi G P̄ G^T Φi^T > Φi G P G^T Φi^T = Ui, (21)

for i = 1, 2, ..., m, since

Φi G P̄ G^T Φi^T = Φi G P̄ P̄^{-1} P̄ G^T Φi^T = Φi L P̄^{-1} L^T Φi^T. □
Dynamic Output Feedback
The extension of the state feedback case to the full-order
dynamic output feedback case is straightforward. In fact, the
state feedback LMIs in Theorem 1 are applied to solve the
full-order dynamic output feedback OCC problem. It is known
that the performance of a full information state feedback con-
troller cannot be improved upon by the use of dynamic compen-
sation [4]. However, the full state information without corruption
from measurement noise is not usually available. Thus, under the
assumption that the pair (Mp,Ap) is detectable, a dynamic output
feedback controller can be designed by using the state estimator
ẋc(t) = Ap xc(t) + Bp u(t) + F (z(t) − Mp xc(t)), (22)

where F = P̂ Mp^T V^{-1} and P̂ is the unique positive definite matrix that satisfies the Riccati equation [2]

0 = Ap P̂ + P̂ Ap^T − P̂ Mp^T V^{-1} Mp P̂ + Dp Wp Dp^T. (23)
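The filter Riccati equation (23) is the dual of a standard control ARE, so it can be solved with SciPy's solve_continuous_are applied to the pair (Ap^T, Mp^T). A sketch using the example data (55)-(56) from the numerical example section; the residual check confirms (23) is satisfied:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Example data from the numerical example section, (55)-(56).
Ap = np.array([[0.0, 1.0, 0.0], [-1.0, -0.1, 1.0], [0.0, 0.0, -10.0]])
Dp = np.array([[0.0], [0.0], [1.0]])
Mp = np.array([[1.0, 1.0, 0.0]])
Wp = np.array([[1.0]])
V = np.array([[0.01]])

# Duality: solve_continuous_are(Ap^T, Mp^T, Q, V) returns Phat satisfying
# Ap Phat + Phat Ap^T - Phat Mp^T V^-1 Mp Phat + Dp Wp Dp^T = 0, i.e. (23).
Phat = solve_continuous_are(Ap.T, Mp.T, Dp @ Wp @ Dp.T, V)
F = Phat @ Mp.T @ np.linalg.inv(V)     # filter gain F = Phat Mp^T V^-1
residual = (Ap @ Phat + Phat @ Ap.T
            - Phat @ Mp.T @ np.linalg.inv(V) @ Mp @ Phat
            + Dp @ Wp @ Dp.T)
assert np.allclose(residual, 0, atol=1e-6)
print(F.ravel())
```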
Then, as shown in (3), the estimated states are used to compute the control input such that the state estimator becomes

ẋc(t) = (Ap + Bp G − F Mp) xc(t) + F z(t). (24)

Thus, the only remaining question is how to obtain the state feedback gain G, which is covered in the next theorem.
Theorem 2. If the pair (Mp, Ap) is detectable and there exist a matrix L ∈ R^{m×n} and a symmetric positive definite matrix P̄ ∈ R^{n×n} that minimize

J̄ICC = min_{P̄,L} trace[Q Cp (P̄ + P̂) Cp^T], (25)

subject to the LMIs

[ P̄ Ap^T + Ap P̄ + L^T Bp^T + Bp L    F V^{1/2} ]
[ ⋆                                    −I        ] < 0, (26)

Q Cp (P̄ + P̂) Cp^T > 0, (27)

[ Ūi    Φi L ]
[ ⋆     P̄   ] > 0, (28)

for i = 1, 2, ..., m, where P̂ is the unique positive definite solution to the Riccati equation (23), then there exists a controller in the form (3) that minimizes the ICC cost JICC while satisfying the input constraints (8), given by

ẋc(t) = (Ap + Bp G − F Mp) xc(t) + F z(t),
u(t) = G xc(t),
(29)

with G = L P̄^{-1} and F = P̂ Mp^T V^{-1}.
Proof. A proof of this theorem can be obtained by combining Theorem 1 of this paper with Lemma 4.2 and Theorem 4.1 of [4]. One of the main results of [4] demonstrates that the ICC problem (and other H2-like problems) with dynamic output feedback reduces to an equivalent problem with state feedback. Thus the output feedback problem can be solved by first designing a standard Kalman filter with (23) and then using the state feedback synthesis LMIs in Theorem 1 after replacing Dp, Wp, and Q Cp P̄ Cp^T with F, V, and Q Cp (P̄ + P̂) Cp^T, respectively. □
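As a concrete check of the controller structure (29), one can assemble the output feedback closed loop (4) from a gain pair and verify that it is Hurwitz. The sketch below uses the SeDuMi output feedback gain reported later in Table 2 and the filter gain F from (60); by the separation structure, the closed-loop eigenvalues are those of Ap + BpG together with those of Ap − FMp:

```python
import numpy as np

# Plant data from (55) and gains reported in Table 2 (SeDuMi) and (60).
Ap = np.array([[0.0, 1.0, 0.0], [-1.0, -0.1, 1.0], [0.0, 0.0, -10.0]])
Bp = np.array([[0.0], [0.0], [1.0]])
Mp = np.array([[1.0, 1.0, 0.0]])
G = np.array([[-1.0687, -6.7709, -0.8755]])
F = np.array([[0.4412], [0.7633], [0.4796]])

# Closed loop (4): plant driven by u = G xc, estimator (29) driven by z = Mp xp.
Ac = Ap + Bp @ G - F @ Mp
A_cl = np.block([[Ap, Bp @ G], [F @ Mp, Ac]])
assert np.max(np.linalg.eig(A_cl)[0].real) < 0   # closed loop is Hurwitz
```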
DISCRETE TIME SYSTEMS
Consider the following discrete-time system:

xp(k+1) = Ap xp(k) + Bp u(k) + Dp wp(k),
yp(k) = Cp xp(k),
z(k) = Mp xp(k) + v(k).
(30)
Suppose that we apply to the plant (30) a full state feedback stabilizing control

u(k) = G xp(k), (31)

or a strictly proper stabilizing control

xc(k+1) = Ac xc(k) + F z(k),
u(k) = G xc(k).
(32)
Then the closed-loop system has the following form:

x(k+1) = A x(k) + D w(k),
y(k) = [yp(k); u(k)] = [Cy; Cu] x(k) = C x(k),
(33)

where the definitions of matrices A, D, and C, and vectors x, w, and y are the same as in the continuous-time case.
As in the continuous-time case, let Wp > 0 and V > 0 denote symmetric matrices with dimensions equal to wp and z, respectively. Also, define W = Wp if state feedback (31) is used or W = block diag[Wp, V] if dynamic output feedback (32) is used. Then, let P denote the closed-loop controllability Gramian from the input W^{-1/2} w. Since A is stable, P is given by

P = A P A^T + D W D^T. (34)
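Equation (34) is a discrete Lyapunov equation; a minimal sketch with a hypothetical Schur-stable A (SciPy's solve_discrete_lyapunov(a, q) solves a x a^T − x + q = 0, which is exactly (34)):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical Schur-stable closed-loop data (not from the paper).
A = np.array([[0.5, 0.1], [0.0, 0.2]])
D = np.array([[0.0], [1.0]])
W = np.array([[1.0]])

# Gramian of (34): P = A P A^T + D W D^T.
P = solve_discrete_lyapunov(A, D @ W @ D.T)
assert np.allclose(P, A @ P @ A.T + D @ W @ D.T)   # (34) is satisfied
```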
As in the continuous-time case, we seek a solution to the following optimal control problem.

The Discrete-Time ICC Problem. Find a state feedback stabilizing controller (31) or a strictly proper output feedback stabilizing controller (32) for the system (30) to minimize the ICC cost

JICC = trace(Q Cp P Cp^T), Q > 0, (35)

subject to

Ui = Φi Cu P Cu^T Φi^T ≤ Ūi, i = 1, 2, ..., m, (36)

where P is given by (34) and the matrices Φi for i = 1, 2, ..., m are, as in the continuous-time case, appropriately selected projection matrices for each ui corresponding to the constraint Ūi.
State Feedback
With the state feedback controller (31), the closed-loop matrices are the same as in the continuous-time case (10). To formulate the LMIs for the discrete-time case, we use the H2 state feedback LMIs given by Theorem 5 of [5] as a starting point.
Theorem 3. There exists a controller in the form (31), given by

G = L X^{-1}, (37)

that minimizes JICC (35) and satisfies the input constraints (36) if there exist matrices L ∈ R^{m×n} and X ∈ R^{n×n} and a symmetric positive definite matrix P̄ ∈ R^{n×n} that minimize the upper bound of the ICC cost

J̄ICC = min_{P̄,L,X} trace(Q Cp P̄ Cp^T) > trace(Q Cp P Cp^T) = JICC, (38)

subject to the LMIs

[ P̄    Ap X + Bp L     Dp Wp^{1/2} ]
[ ⋆     X + X^T − P̄    0           ]
[ ⋆     ⋆               I           ] > 0, (39)

Q Cp P̄ Cp^T > 0, (40)

[ Ūi    Φi L         ]
[ ⋆     X + X^T − P̄ ] > 0, (41)

for i = 1, 2, ..., m.
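As in the continuous-time case, a strictly feasible point for LMI (39) can be constructed to check its structure: for a stabilizing G, take the inflated Gramian P̄ = A P̄ A^T + D W D^T + εI with the choices X = P̄ and L = G P̄. A sketch with a made-up plant and gain:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, sqrtm

# Hypothetical discrete plant and stabilizing gain (not from the paper).
Ap = np.array([[1.0, 0.1], [0.0, 0.9]])
Bp = np.array([[0.0], [1.0]])
Dp = np.array([[0.0], [1.0]])
Wp = np.array([[1.0]])
G = np.array([[-0.5, -0.5]])
A = Ap + Bp @ G                      # Schur stable for this G

# Inflated Gramian Pbar = A Pbar A^T + D W D^T + eps*I; X = Pbar, L = G Pbar.
eps = 0.1
Pbar = solve_discrete_lyapunov(A, Dp @ Wp @ Dp.T + eps * np.eye(2))
X, L = Pbar, G @ Pbar

# Assemble the block matrix of LMI (39) and check positive definiteness.
DW = Dp @ np.real(sqrtm(Wp))
AXBL = Ap @ X + Bp @ L
lmi39 = np.block([
    [Pbar,   AXBL,             DW],
    [AXBL.T, X + X.T - Pbar,   np.zeros((2, 1))],
    [DW.T,   np.zeros((1, 2)), np.eye(1)],
])
assert np.min(np.linalg.eigvalsh(lmi39)) > 0   # LMI (39) holds at this point
```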
Proof. The fact that the LMI (39) is equivalent to the corresponding LMI in Theorem 5 of [5] comes from noticing that Dp Wp^{1/2} is the weighted disturbance input matrix. Since (39) implies that

P̄ > A P̄ A^T + D W D^T, (42)

there exists a matrix M = M^T > 0 such that

P̄ = A P̄ A^T + D W D^T + M. (43)

Consequently, P̄ > P. Thus, from (40) it can be shown that

Q Cp P̄ Cp^T > Q Cp P Cp^T. (44)

Therefore,

J̄ICC = trace(Q Cp P̄ Cp^T) > trace(Q Cp P Cp^T) = JICC. (45)

Similarly, it follows from (41) that

Ūi > Φi G P̄ G^T Φi^T > Φi G P G^T Φi^T = Ui, (46)

for i = 1, 2, ..., m. □
Dynamic Output Feedback
As in the continuous-time case, the discrete-time state feedback results are extended to the discrete-time ICC problem with output feedback through the use of a state estimator. Under the assumption that the pair (Mp, Ap) is detectable, the state estimator is given by

xc(k+1) = Ap xc(k) + Bp u(k) + F (z(k) − Mp xc(k)), (47)

where F = Ap P̂ Mp^T (V + Mp P̂ Mp^T)^{-1} and P̂ is the unique positive definite matrix that satisfies the Riccati equation [2]

P̂ = Ap P̂ Ap^T − Ap P̂ Mp^T (V + Mp P̂ Mp^T)^{-1} Mp P̂ Ap^T + Dp Wp Dp^T. (48)
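The discrete filter Riccati equation (48) is again the dual of a control DARE, so SciPy's solve_discrete_are applied to (Ap^T, Mp^T) solves it. A sketch with a made-up detectable pair (the paper's example is continuous-time, so these numbers are illustrative only):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical detectable pair and noise data (not from the paper).
Ap = np.array([[1.0, 0.1], [0.0, 0.9]])
Dp = np.array([[0.0], [1.0]])
Wp = np.array([[1.0]])
Mp = np.array([[1.0, 0.0]])
V = np.array([[0.1]])

# Duality: solve_discrete_are(Ap^T, Mp^T, Q, V) returns Phat satisfying (48).
Phat = solve_discrete_are(Ap.T, Mp.T, Dp @ Wp @ Dp.T, V)
S = V + Mp @ Phat @ Mp.T
F = Ap @ Phat @ Mp.T @ np.linalg.inv(S)   # F = Ap Phat Mp^T (V + Mp Phat Mp^T)^-1
residual = (Ap @ Phat @ Ap.T - Phat
            - Ap @ Phat @ Mp.T @ np.linalg.inv(S) @ Mp @ Phat @ Ap.T
            + Dp @ Wp @ Dp.T)
assert np.allclose(residual, 0, atol=1e-6)   # (48) is satisfied
```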
Then, as in the continuous-time case, the estimated states are used to compute the control input such that the state estimator becomes

xc(k+1) = (Ap + Bp G − F Mp) xc(k) + F z(k), (49)

and G is given by the following theorem.
Theorem 4. If the pair (Mp, Ap) is detectable and there exist matrices L ∈ R^{m×n} and X ∈ R^{n×n} and a symmetric positive definite matrix P̄ ∈ R^{n×n} that minimize

J̄ICC = min_{P̄,L,X} trace[Q Cp (P̄ + P̂) Cp^T], (50)

subject to the LMIs

[ P̄    Ap X + Bp L     F (V + Mp P̂ Mp^T)^{1/2} ]
[ ⋆     X + X^T − P̄    0                        ]
[ ⋆     ⋆               I                        ] > 0, (51)

Q Cp (P̄ + P̂) Cp^T > 0, (52)

[ Ūi    Φi L         ]
[ ⋆     X + X^T − P̄ ] > 0, (53)

for i = 1, 2, ..., m, where P̂ is the unique positive definite solution to the Riccati equation (48), then there exists a controller in the form (32) that minimizes the ICC cost JICC while satisfying the input constraints (36), given by

xc(k+1) = (Ap + Bp G − F Mp) xc(k) + F z(k),
u(k) = G xc(k),
(54)

with G = L X^{-1} and F = Ap P̂ Mp^T (V + Mp P̂ Mp^T)^{-1}.
Proof. Note that, as in the continuous-time case, the full-order dynamic output feedback ICC problem can be solved by first designing a standard Kalman filter and then using the state feedback LMIs in Theorem 3 after replacing Dp, Wp, and Q Cp P̄ Cp^T with F, V + Mp P̂ Mp^T, and Q Cp (P̄ + P̂) Cp^T, respectively. □
Numerical Example
To show the effectiveness of the LMIs presented in this pa-
per, we first demonstrate that the ICC problem considered in this
paper can be solved through the use of an iterative approach.
Then we show that similar and sometimes better results can be
obtained directly by solving the LMIs provided in Theorems 1-4.
For this demonstration, we use the example given in [2], which
considers the continuous-time OCC problem for the plant (1)
with the following system matrices:
Ap = [  0    1    0 ]     Bp = [ 0 ]     Dp = [ 0 ]
     [ −1  −0.1   1 ]          [ 0 ]          [ 0 ]
     [  0    0  −10 ]          [ 1 ]          [ 1 ]

Cp = [ 1  0.5   0  ]      Mp = [ 1  1  0 ].
     [ 0   0   0.5 ]
     [ 1   1    0  ]
(55)
The process and measurement noises wp and v are weighted by
Wp = 1 and V = 0.01. (56)
As mentioned in the introduction, the OCC problem is an op-
timal control problem that minimizes the control input subject
to specified output covariance constraints. For this example, the
output covariance constraints are taken to be

Y1 ≤ 0.035, Y2 ≤ σ I2, (57)
where Y1 denotes the (1× 1) output variance corresponding to
the first performance variable and Y2 denotes the (2× 2) output
covariance matrix of the second and third performance outputs
grouped together. To show how an iterative approach can be used to solve the ICC problem, we start with a σ value of 0.05 and then reduce it gradually down to 0.005. When this is done, the input energy required to meet the demand increases, as shown in Fig. 1. Notice also in Fig. 1 that beyond a certain point the control energy required for additional performance increases exponentially, and eventually increasing the control energy no longer provides better control. This is especially true for the output feedback control problem.
This indicates that, for this specific example, the ICC problem can be solved with an iterative algorithm by iterating on the level of performance required for the OCC problem until
[Figure 1. The input energy, U, plotted against the covariance constraint σ for both the state feedback control and output feedback control problems.]
the desired input covariance constraint is met. To steer the iterative algorithm towards the desired input covariance, a simple bisection algorithm like Algorithm 1 could be used.

Algorithm 1 Iterative Bisection ICC Algorithm
Input: Plant matrices Ap, Bp, Dp, Cp, Mp, weighting matrices Wp and V, a desired input covariance constraint Ū, and an upper and lower bound for the output performance level σ denoted by σU and σL, respectively.
Output: A state feedback (2) or output feedback controller (3) with an input covariance given by U ≤ Ū.
1: Set σ = σU.
2: while σU − σL ≥ ε do
3:   Solve the OCC problem to obtain a controller that satisfies the performance constraint σ and compute the input covariance U.
4:   if U ≤ Ū then
5:     Set σU = σ
6:   else
7:     Set σL = σ
8:   end if
9:   σ = σL + (σU − σL)/2
10: end while

In the iterative algorithm, any method that is capable of solving the OCC problem, such as the OCC algorithm [2] or the LMI method [3], can be used. Algorithm 1 is used to solve the following two ICC problems:

Problem 1: U ≤ 0.25, (58)
Problem 2: U ≤ 1.00, (59)
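Algorithm 1 is easy to sketch in code. The OCC solver below is a stand-in: it fakes the monotone trade-off seen qualitatively in Fig. 1 with the hypothetical curve U(σ) = 0.01/σ, just to exercise the bisection logic:

```python
# Sketch of Algorithm 1. solve_occ is a placeholder for a real OCC solver
# ([2] or [3]); here it fakes a monotone input-energy curve U(sigma).
def solve_occ(sigma):
    return 0.01 / sigma            # hypothetical U(sigma), made up

def icc_by_bisection(U_bar, sigma_L, sigma_U, eps=1e-8):
    sigma = sigma_U                # step 1: start at the loosest performance
    while sigma_U - sigma_L >= eps:
        U = solve_occ(sigma)       # step 3: OCC design at level sigma
        if U <= U_bar:
            sigma_U = sigma        # constraint met: ask for better performance
        else:
            sigma_L = sigma        # constraint violated: back off
        sigma = sigma_L + (sigma_U - sigma_L) / 2   # step 9: bisect
    return sigma, solve_occ(sigma)

sigma, U = icc_by_bisection(U_bar=0.25, sigma_L=0.005, sigma_U=0.05)
print(sigma, U)   # converges to the sigma where U(sigma) hits the bound
```

With this fake curve the bisection needs roughly log2(0.045/1e-8) ≈ 22 iterations, consistent with the iteration counts reported in Tables 1-4 for the real OCC solver.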
Table 1. STATE FEEDBACK DESIGN: PROBLEM 1 COMPARISON

        Ū          Algorithm 1 (23 iterations)    SeDuMi                         LMI Lab
U       0.250000   0.250000                       0.250000                       0.249933
JICC               0.033735                       0.033652                       0.033688
G^T                [−0.7648, −5.6956, −0.5543]    [−0.7083, −5.5831, −0.7014]    [−0.6992, −5.5739, −0.7085]
Table 2. OUTPUT FEEDBACK DESIGN: PROBLEM 1 COMPARISON

        Ū          Algorithm 1 (22 iterations)    SeDuMi                         LMI Lab
U       0.250000   0.250000                       0.250000                       0.249449
JICC               0.054937                       0.054935                       0.055090
G^T                [−1.0757, −6.6391, −0.6431]    [−1.0687, −6.7709, −0.8755]    [−1.3283, −6.7550, −1.0910]
with the system matrices given by (55) for both state feedback
and output feedback control. For the dynamic output feedback
controller, the controller input matrix F is precomputed according to (22) and (23). In this case we have

F = [0.4412 0.7633 0.4796]^T. (60)

Since F is precomputed and independent of the control synthesis for the output matrix G of the dynamic output feedback controller, it is the same for both problems 1 and 2. With the precision ε set at 1×10^{-8}, Algorithm 1 typically needs about 22-23 iterations before it finds a solution of acceptable accuracy, as shown in Tables 1-4.
The benefit of the LMI solution method for the ICC problem
detailed in this paper over ad-hoc methods like Algorithm 1 is
that a solution can be obtained directly without any iterations. To
show this, we solve the same two problems ((58) and (59)) using
two different LMI solvers: SeDuMi [6] and LMI Lab [7]. To use
SeDuMi as the LMI solver, the LMIs are first programmed into
MATLAB using YALMIP [8]. SeDuMi and YALMIP are both
free software that can be installed in MATLAB as toolboxes. The
LMI Lab is included in the Robust Controls Toolbox. The results
obtained using each of the solvers are compared to the results
obtained when using Algorithm 1.
For problem 1 (58), we can see in Table 1 for the state feedback control problem that the LMI method found a solution with
Table 3. STATE FEEDBACK DESIGN: PROBLEM 2 COMPARISON

        Ū          Algorithm 1 (23 iterations)    SeDuMi                         LMI Lab
U       1.000000   0.999996                       1.000000                       0.998416
JICC               0.017615                       0.015359                       0.015441
G^T                [−1.3217, −10.7071, −4.2923]   [−6.2728, −17.2734, −2.8007]   [−6.1818, −17.2283, −2.7912]
Table 4. OUTPUT FEEDBACK DESIGN: PROBLEM 2 COMPARISON

        Ū          Algorithm 1 (22 iterations)    SeDuMi                         LMI Lab
U       1.000000   0.999999                       1.000000                       0.999441
JICC               0.043040                       0.043003                       0.043167
G^T                [−8.5133, −20.1235, −1.8452]   [−9.9072, −22.9509, −4.1005]   [−12.7348, −25.9277, −7.6070]
a lower JICC cost, and therefore slightly better performance, than was found with the iterative algorithm. Also, as shown in Table 2, for the output feedback control problem, the solution found using SeDuMi was slightly better than the solution found with the iterative algorithm. We also note that, as expected, the output performance obtained is better with state feedback control than with output feedback control.
For problem 2 (59), we can see in Table 3 for the state feedback control problem that the LMI method again found a solution with a lower JICC cost than was found with the iterative algorithm. Also, as shown in Table 4, for the output feedback control problem, the solution found using SeDuMi was again slightly better than the solution found with the iterative algorithm.
CONCLUSION
In this paper the input covariance constraint (ICC) control problem is solved using a convex optimization with linear matrix inequality (LMI) constraints for both continuous and discrete-time linear time-invariant systems. The theorems provided in this paper give a set of LMI-based control synthesis techniques to obtain a controller, state feedback or dynamic output feedback, that solves the ICC problem; that is, a controller that achieves the best possible output performance subject to multiple constraints on the input covariance matrices.
REFERENCES
[1] Zhu, G., Grigoriadis, K. M., and Skelton, R. E., 1995. "Covariance control design for Hubble Space Telescope". Journal of Guidance, Control, and Dynamics, 18(2), pp. 230-236.
[2] Zhu, G., Rotea, M., and Skelton, R. E., 1997. "A convergent algorithm for the output covariance constraint control problem". SIAM Journal on Control and Optimization, 35, pp. 341-361.
[3] White, A., Zhu, G., and Choi, J., 2012. "A linear matrix inequality solution to the output covariance constraint control problem". In Proceedings of the ASME Dynamic Systems and Control Conference.
[4] Rotea, M. A., 1993. "The generalized H2 control problem". Automatica, 29(2), pp. 373-385.
[5] De Oliveira, M. C., Geromel, J. C., and Bernussou, J., 2002. "Extended H2 and H∞ norm characterizations and controller parametrizations for discrete-time systems". International Journal of Control, 75(9), pp. 666-679.
[6] Sturm, J., 1999. "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones". Optimization Methods and Software, 11(1), pp. 625-653.
[7] Gahinet, P., Nemirovski, A., Laub, A., and Chilali, M., 1995. MATLAB LMI Control Toolbox. The MathWorks Inc.
[8] Löfberg, J., 2004. "YALMIP: A toolbox for modeling and optimization in MATLAB". In Proceedings of the CACSD Conference.