

INTERIOR-POINT METHODS FOR LINEAR MODEL PREDICTIVE CONTROL

A. G. Wills∗, W. P. Heath†

∗School of Electrical Engineering and Computer Science, University of Newcastle, NSW 2308, Australia
Tel: +61 2 4921 5204; Fax: +61 2 4960 1712; email: [email protected]

†Control Systems Centre, UMIST, PO Box 88, Manchester M60 1QD, UK
Tel: +44 161 200 4659; Fax: +44 161 200 4647; email: [email protected]

Abstract

We discuss linear Model Predictive Control (MPC) from a computational and algorithmic perspective. We describe two commonly encountered MPC formulations: the first includes the system model as an explicit equality constraint, while the second includes the system model implicitly by projecting onto the subspace described by this linear relation. Both formulations are expressed as Quadratic Programming (QP) problems that can be solved efficiently using interior-point methods. We discuss how the chosen interior-point algorithm can be tailored to exploit the MPC structure.

1 Introduction

We are concerned with interior-point optimisation algorithms used on-line with linear Model Predictive Control (MPC). Linear MPC (e.g. [17, 18]) assumes a linear system model, linear inequality constraints and a convex quadratic cost function. Linear MPC is appealing because the associated optimisation problem - which is typically solved at each time interval - may be expressed as a convex quadratic programme. Comprehensive reviews of linear and general MPC can be found in, e.g., [4], [6], [17], [18], [3], [15] and [13].

Usually the MPC optimisation problem has exploitable structure which can often lead to more efficient algorithms. There are two important forms, which we term the explicit and implicit forms. The optimality conditions for the explicit form can be expressed on an interval-by-interval basis, leading to a block-banded system of equations [28]. Such a system must be factored and solved (at least once) at each iteration of an interior-point method. The computation required for the factorisation of this matrix grows linearly in the prediction horizon (see Fig 1). Furthermore, a Riccati recursion approach can be exploited within the interior-point framework for solving this block-banded matrix [19]. This also results in an algorithm whose complexity grows linearly in the prediction horizon.

For the implicit form the complexity grows cubically with prediction horizon (see Fig 1). However, for short to medium horizons, algorithms based on the implicit form still tend to outperform those based on the explicit form. In this paper, we discuss how the choice of structure can affect both algorithm speed and numerical conditioning. Such considerations are well known to the optimisation community, but often seem to have been overlooked or misunderstood by the control community. Thus the paper is largely tutorial in style, although it includes some novel recommendations for Hessian factorisation. It is based on a longer and more detailed technical report discussing both implicit and explicit forms [27].

Interior-point methods are just one of a number of algorithms that can be used for MPC; alternatives include active set methods [1, 8] and others [20, 23]. These, as well as suboptimal solutions [12] and off-line formulations [25], are beyond the scope of this discussion. Note that for certain MPC formulations active set methods may outperform interior-point methods [1].

2 Quadratic Programming and Interior Point Methods

The following discussion is based (primarily) on [10], [30], [32], [31], [29], [26] and [16]. We describe a quadratic programming problem and give the KKT (Karush-Kuhn-Tucker; see e.g. [14]) optimality conditions for this problem. These conditions may be restated as a system of non-linear equalities with some variables constrained to be non-negative. Loosely speaking, the interior-point approach attempts to solve a related system of equalities whilst strictly honouring the non-negativity constraints. In particular, the system of non-linear equalities is linearised about the current point and the resulting linear system of equations is solved. The computational efficiency of interior-point methods depends heavily upon how efficiently we can solve this linear subproblem.

2.1 Quadratic Programming Optimality Conditions

Consider the following quadratic program,

(QP):  min_x  (1/2) x^T H x + f^T x + c_0,   s.t.  Lx ⪯ k,  Ax = b.

Let x ∈ R^n. It is assumed that L is an m × n matrix and A is an m_eq × n matrix with m_eq ≤ n. It is further assumed that A has independent rows and b = Ax_0 for some x_0. The matrix H is assumed to be positive semi-definite and symmetric.

Optimality conditions for (QP) are given by (e.g. [2]),

Hx + f + L^T z + A^T y = 0,
Lx + s = k,   Ax = b,   z^T s = 0,   z ⪰ 0,   s ⪰ 0.

Here y represents the Lagrange multipliers for the equality constraints, z represents the Lagrange multipliers for the inequality constraints, and the slack variable s has been introduced for convenience. The "⪰" symbol denotes element-wise inequality.

We will be concerned with a related problem where the complementarity condition z^T s = 0 is replaced by the following relation, ZSe = μe, where Z = diag(z), S = diag(s), e is a vector of all ones and μ > 0 is the "target" (the parameter μ goes by various names, but it is instructive to think of μ as the target). Clearly, as μ → 0, the two problems coincide.

Control 2004, University of Bath, UK, September 2004 ID-040


For z, s ⪰ 0, the related problem is to find x, y, z, s such that

F_μ(x, y, z, s) = [ Hx + f + L^T z + A^T y ]
                  [ Lx + s − k            ]
                  [ Ax − b               ] = 0.
                  [ ZSe − μe             ]

The above system of non-linear equations can be solved iteratively via, say, Newton's method. Indeed, interior-point methods take this approach. However, exact solutions for each target value of μ are not required. Rather, μ is adaptively reduced at each iteration, aiming in the limit for μ = 0. In this case the optimality conditions for (QP) are recovered.

Applying Newton's method to F_μ(·) can be interpreted as follows: find a search direction ∆p = [∆x^T ∆y^T ∆z^T ∆s^T]^T that satisfies

J(x, y, z, s) ∆p = −F_μ(x, y, z, s),   (1)

where J(·) represents the Jacobian matrix for F_μ(·). Expanding (1) gives

[ H   A^T  L^T  0 ] [ ∆x ]   [ r1 ]
[ A    0    0   0 ] [ ∆y ] = [ r2 ]   (2)
[ L    0    0   I ] [ ∆z ]   [ r3 ]
[ 0    0    S   Z ] [ ∆s ]   [ r4 ]

where

r1 = −Hx − f − L^T z − A^T y,   r2 = b − Ax,
r3 = k − Lx − s,   r4 = σμe − ZSe.   (3)

It is possible to reduce equation (2) to a more convenient form (known as the augmented form) as follows. From the last equation it follows that

∆s = Z^{-1}(r4 − S∆z).   (4)

Using the above expression for ∆s, (2) reduces to

[ H   A^T     L^T    ] [ ∆x ]   [ r1             ]
[ A    0       0     ] [ ∆y ] = [ r2             ].   (5)
[ L    0   −Z^{-1}S  ] [ ∆z ]   [ r3 − Z^{-1}r4  ]

After solving the above system for (∆x, ∆y, ∆z), ∆s is obtained from (4). If there are no explicit equality constraints then (5) becomes

[ H      L^T    ] [ ∆x ]   [ r1            ]
[ L   −Z^{-1}S  ] [ ∆z ] = [ r3 − Z^{-1}r4 ].   (6)

In certain cases it may be desirable to eliminate ∆z as

∆z = S^{-1}Z(L∆x − r3 + Z^{-1}r4),   (7)

resulting in the following system for ∆x,

[ H + L^T S^{-1}Z L ] ∆x = r1 + L^T S^{-1}Z (r3 − Z^{-1}r4).   (8)

Note that ∆z can then be obtained from (7) and ∆s from (4). System (8) is positive definite and may be factored using a Cholesky factorisation. Some care must be taken when μ is small, since then S^{-1}Z may have very large/small components (this phenomenon is common to all interior-point methods). A modified Cholesky factorisation which is stable under these circumstances is described in [32]; see also [31] for stability of the augmented system in (5). It has been observed that reducing system (6) to (8) may introduce unnecessary computational error [26].
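The equivalence between solving the augmented system (6) directly and the reduced route through (8), (7) and (4) can be checked numerically. The sketch below uses random data in place of H, L and the interior-point residuals (all values are illustrative stand-ins, not taken from any particular MPC problem):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 6
Hm = rng.standard_normal((n, n)); Hm = Hm @ Hm.T + np.eye(n)  # SPD Hessian
L = rng.standard_normal((m, n))
z = rng.uniform(0.5, 2.0, m)    # inequality multipliers, kept > 0
s = rng.uniform(0.5, 2.0, m)    # slacks, kept > 0
r1 = rng.standard_normal(n)
r3 = rng.standard_normal(m)
r4 = rng.standard_normal(m)

# Route A: solve the augmented system (6) directly
K = np.block([[Hm, L.T], [L, -np.diag(s / z)]])
sol = np.linalg.solve(K, np.concatenate([r1, r3 - r4 / z]))
dx_a, dz_a = sol[:n], sol[n:]

# Route B: reduced SPD system (8) via Cholesky, then back-substitute (7)
d = z / s                                   # diagonal of S^{-1}Z
M = Hm + L.T @ (d[:, None] * L)             # H + L^T S^{-1}Z L
C = np.linalg.cholesky(M)
rhs = r1 + L.T @ (d * (r3 - r4 / z))
dx_b = np.linalg.solve(C.T, np.linalg.solve(C, rhs))
dz_b = d * (L @ dx_b - r3 + r4 / z)
ds_b = (r4 - s * dz_b) / z                  # equation (4)

assert np.allclose(dx_a, dx_b) and np.allclose(dz_a, dz_b)
```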

The choice of factorisation method often depends on the problem structure (sparsity, presence of equality constraints, etc.). For systems where (2) is sparse, a direct Gaussian elimination approach is often used [7]. The factorisation step is usually the most computationally expensive operation at each iteration of an interior-point algorithm. Most practical algorithms perform one factorisation and possibly several solve operations per cycle.

An interior-point algorithm consists of a series of such factorisations, together with careful choices of search direction, iterations on the value of μ and stopping criteria. A primal-dual path-following algorithm, suitable for MPC, based (primarily) on Mehrotra's predictor-corrector method [16] is discussed in [27].
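To make the preceding steps concrete, the following is a minimal sketch of an infeasible-start primal-dual path-following method for the inequality-only case, built around the reduced system (8) with back-substitution through (7) and (4). It uses a fixed centering parameter and a simple stopping rule; it is a teaching sketch, not the Mehrotra-style algorithm of [27]:

```python
import numpy as np

def qp_solve(H, f, L, k, iters=200, sigma=0.1):
    """Bare-bones infeasible-start primal-dual path-following method for
    min 1/2 x'Hx + f'x  s.t.  Lx <= k  (inequality constraints only)."""
    n, m = H.shape[0], L.shape[0]
    x, z, s = np.zeros(n), np.ones(m), np.ones(m)
    for _ in range(iters):
        mu = z @ s / m                          # duality measure
        r1 = -(H @ x + f + L.T @ z)
        r3 = k - L @ x - s
        if mu < 1e-11 and np.linalg.norm(r1) + np.linalg.norm(r3) < 1e-9:
            break
        r4 = sigma * mu - z * s                 # relaxed complementarity target
        d = z / s                               # diagonal of S^{-1}Z
        # reduced positive definite system (8)
        dx = np.linalg.solve(H + L.T @ (d[:, None] * L),
                             r1 + L.T @ (d * (r3 - r4 / z)))
        dz = d * (L @ dx - r3 + r4 / z)         # back-substitute via (7)
        ds = (r4 - s * dz) / z                  # back-substitute via (4)
        # damped step keeping z and s strictly positive
        alpha = 1.0
        for v, dv in ((z, dz), (s, ds)):
            if (dv < 0).any():
                alpha = min(alpha, 0.99 * np.min(v[dv < 0] / -dv[dv < 0]))
        x, z, s = x + alpha * dx, z + alpha * dz, s + alpha * ds
    return x

# tiny illustrative QP: minimise (x1-1)^2 + (x2-2.5)^2 s.t. x1+x2 <= 3, x >= 0;
# the optimum projects the unconstrained minimum (1, 2.5) onto x1+x2 = 3
x_opt = qp_solve(2.0 * np.eye(2), np.array([-2.0, -5.0]),
                 np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]),
                 np.array([3.0, 0.0, 0.0]))
assert np.allclose(x_opt, [0.75, 2.25], atol=1e-4)
```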

3 Model predictive control

The system dynamics are assumed to be given by the state evolution linear difference equation

x(t+1) = Ax(t) + Bu(t),   (9)

with x ∈ R^{n_x} the state variable and u ∈ R^{n_u} the input variable.

Let X denote a sequence with N+1 elements (where N is the prediction horizon), where each element is a vector of dimension n_x, i.e. X = {x_0, ..., x_N}. The sequence U is defined similarly as U = {u_0, ..., u_{N−1}}. Where convenient, we will treat X and U as (N+1)n_x and Nn_u dimensional vectors respectively.

The MPC objective function is denoted by J, where

J(X, U) = (1/2)‖x_N − x^r_N‖²_P + (1/2) Σ_{i=0}^{N−1} ( ‖x_i − x^r_i‖²_Q + ‖u_i − u^r_i‖²_R ).   (10)

In the above, the vectors x^r_i and u^r_i denote the i'th elements of the state and input reference sequences X^r and U^r respectively (defined in a similar manner to X and U). Further, the matrices P and Q are assumed to be positive semi-definite and symmetric, and the matrix R is assumed to be positive definite and symmetric.

Some formulations of MPC include a penalty on changes in the control action in addition to penalising absolute control moves. Changes in the control action are typically denoted by ∆u(t) = u(t) − u(t−1). This may be adequately included in the current framework, for example, by augmenting the system equations and including the penalty in Q (see [27]).
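A minimal sketch of this augmentation (using a randomly generated system purely for illustration) carries the previous input inside the state, so that the new decision variable is v(t) = ∆u(t); a penalty on ∆u then sits on v, while a penalty on the absolute input can be placed in the input block of the augmented state weight:

```python
import numpy as np

# randomly generated small system, purely for illustration
rng = np.random.default_rng(0)
nx, nu = 3, 2
A = 0.3 * rng.standard_normal((nx, nx))
B = rng.standard_normal((nx, nu))

# augmented state xi = [x; u_prev]; the new decision variable is v = Delta u
A_aug = np.block([[A, B],
                  [np.zeros((nu, nx)), np.eye(nu)]])
B_aug = np.vstack([B, np.eye(nu)])

# sanity check: the augmented model reproduces x(t+1) = A x(t) + B u(t)
# with u(t) = u(t-1) + Delta u(t)
x = rng.standard_normal(nx)
u_prev = rng.standard_normal(nu)
xi = np.concatenate([x, u_prev])
for _ in range(5):
    v = rng.standard_normal(nu)        # Delta u
    u = u_prev + v
    x = A @ x + B @ u
    xi = A_aug @ xi + B_aug @ v
    u_prev = u
    assert np.allclose(xi, np.concatenate([x, u_prev]))
```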

At each time interval t, the current state of the system x(t) is used as the initial state x_0 for controller predictions, and the optimal input trajectory is calculated using this initial state. Typically, x(t) is not available and an estimate of the state x(t) is used instead. The optimal input trajectory is optimal in the sense that it minimises J subject to a number of constraints, namely:

• The first element in X should equal the initial state vector x, i.e. x_0 = x.

• Each successive element in X should satisfy the system equations (9), i.e. x_{i+1} = Ax_i + Bu_i, for i = 0, ..., N−1.



• Each element in U and the appropriate elements in X should respect the input, state and terminal state constraints respectively, i.e. u_i ∈ U for i = 0, ..., N−1, x_i ∈ X for i = 1, ..., N−1, and x_N ∈ X_F.

Issues of feasibility and soft constraints are beyond the scopeof this paper. See [27] and references therein for discussion.

State, terminal state and input constraint sets are assumed to have the following structure respectively,

X = {x ∈ R^{n_x} : L_x x ⪯ k_x},
X_F = {x ∈ R^{n_x} : L_F x ⪯ k_F},
U = {u ∈ R^{n_u} : L_u u ⪯ k_u}.

Using the above definitions, we define the MPC optimisation problem as follows:

Definition 3.1 Given some initial state x, solve (if possible) the following minimisation problem,

(MPC):  min_{X,U}  J(X, U),
        s.t.  x_0 = x,  x_{i+1} = Ax_i + Bu_i,
              u_i ∈ U,  x_i ∈ X,  x_N ∈ X_F.

3.1 Explicit Formulation

Treating X and U as vectors, we can express J as

J(X, U) = (1/2)‖X − X^r‖²_Q̄ + (1/2)‖U − U^r‖²_R̄.

In the above, Q̄ is given by the block diagonal matrix Q̄ = diag{Q, ..., Q, P} and R̄ is given similarly by R̄ = diag{R, ..., R}.
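In code, these block-diagonal weights can be assembled with a Kronecker product followed by overwriting the terminal block (the dimensions and weight values below are arbitrary assumptions for illustration):

```python
import numpy as np

nx, nu, N = 2, 1, 4                      # illustrative dimensions
Q = np.diag([1.0, 0.5])                  # stage state weight (assumed values)
P = 10.0 * np.eye(nx)                    # terminal weight
R = 0.1 * np.eye(nu)                     # input weight

Qbar = np.kron(np.eye(N + 1), Q)         # diag{Q, ..., Q, Q}
Qbar[N * nx:, N * nx:] = P               # replace last block: diag{Q, ..., Q, P}
Rbar = np.kron(np.eye(N), R)             # diag{R, ..., R}

assert Qbar.shape == ((N + 1) * nx, (N + 1) * nx)
assert Rbar.shape == (N * nu, N * nu)
```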

The system dynamics can be expressed as

Ā X + B̄ U = b,   (11)

where Ā is an (N+1)n_x × (N+1)n_x matrix, B̄ is an (N+1)n_x × Nn_u matrix and b is an (N+1)n_x vector, given respectively by

Ā = [  I    0    0  ···  0 ]       B̄ = [  0    0  ···   0 ]
    [ −A    I    0  ···  0 ]           [ −B    0  ···   0 ]
    [  0   −A    I  ···  0 ]           [  0   −B  ···   0 ]
    [  ⋮            ⋱   ⋮ ]           [  ⋮        ⋱    ⋮ ]
    [  0   ···     −A    I ]           [  0    ···     −B ]

b^T = [ x^T  0  ···  0 ].
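The construction of Ā, B̄ and b can be sketched directly, and verified against a simulated trajectory (random system data, illustrative only):

```python
import numpy as np

def explicit_dynamics(A, B, x0, N):
    """Build Abar, Bbar, b with Abar @ X + Bbar @ U = b, as in equation (11)."""
    nx, nu = B.shape
    Abar = np.eye((N + 1) * nx)
    Bbar = np.zeros(((N + 1) * nx, N * nu))
    for i in range(1, N + 1):
        Abar[i * nx:(i + 1) * nx, (i - 1) * nx:i * nx] = -A   # -A sub-diagonal
        Bbar[i * nx:(i + 1) * nx, (i - 1) * nu:i * nu] = -B   # -B blocks
    b = np.zeros((N + 1) * nx)
    b[:nx] = x0                                               # b^T = [x^T 0 ... 0]
    return Abar, Bbar, b

# verify on a simulated trajectory (random example data)
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((2, 2)); B = rng.standard_normal((2, 1))
x0 = rng.standard_normal(2); N = 5
U = rng.standard_normal((N, 1))
X = [x0]
for i in range(N):
    X.append(A @ X[-1] + B @ U[i])
Abar, Bbar, b = explicit_dynamics(A, B, x0, N)
assert np.allclose(Abar @ np.concatenate(X) + Bbar @ U.ravel(), b)
```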

The inequality constraints may be expressed as

L_X X ⪯ k_X,   L_U U ⪯ k_U,

where L_X, k_X, L_U and k_U are given by

L_X = [ 0  L_x   0  ···   0  ]       k_X = [ k_x ]
      [ 0   0   L_x ···   0  ]             [ k_x ]
      [ ⋮             ⋱   ⋮  ]             [  ⋮  ]
      [ 0  ···    0      L_F ]             [ k_F ]

L_U = [ L_u   0  ···   0  ]          k_U = [ k_u ]
      [  0   L_u ···   0  ]                [ k_u ]
      [  ⋮        ⋱    ⋮  ]                [  ⋮  ]
      [  0   ···      L_u ]                [ k_u ].

Problem (MPC) can be restated as follows.

Definition 3.2 Given some initial state x, solve (if possible) the following minimisation problem.

(MPCE):  min_{X,U}  (1/2)‖X − X^r‖²_Q̄ + (1/2)‖U − U^r‖²_R̄
         s.t.  Ā X + B̄ U = b,  L_X X ⪯ k_X,  L_U U ⪯ k_U.

Interior-point methods can be geared to exploit the structure of (MPCE) (see [10] for an authoritative account of the relevant issues for this style of problem, albeit in the context of Sequential Quadratic Programming). Furthermore, a Riccati recursion approach can be exploited within the interior-point framework for solving this block-banded matrix [19]. It is possible to show that the resultant algorithm's complexity grows linearly in the prediction horizon [19].

Although the Riccati recursion approach is very appealing, direct methods for solving the sparse linear subproblem should produce acceptable results (even for large horizons). Indeed, packages such as MA27, MA47 and MA57 from the Harwell library (http://hsl.rl.ac.uk/) can offer remarkable computational savings. The MA57 [5] package reports good results for practical optimisation problems. Due to the adaptive nature of these algorithms, it is difficult to specify their computational cost a priori. However, preliminary trials on a limited number of examples show near-linear growth in the prediction horizon - even without exploiting the Riccati structure (see Fig 1).
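General-purpose sparse direct solvers are also available in Python through scipy (these are not the Harwell codes, but illustrate the same point). The sketch below factorises a block-banded matrix with the structure of the stacked dynamics matrix above and solves against it without forming a dense factorisation (random blocks, illustrative only):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
nx, N = 4, 50
A = 0.5 * rng.standard_normal((nx, nx))   # random system matrix (stand-in)

# block-banded matrix with I on the diagonal and -A on the sub-diagonal
Abar = sparse.identity((N + 1) * nx, format="lil")
for i in range(1, N + 1):
    Abar[i * nx:(i + 1) * nx, (i - 1) * nx:i * nx] = -A
Abar = Abar.tocsc()

b = rng.standard_normal((N + 1) * nx)
lu = splu(Abar)                           # sparse LU with fill-reducing ordering
x_sol = lu.solve(b)
assert np.allclose(Abar @ x_sol, b)
```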

3.2 Implicit Formulation

Problem (MPCE) can be interpreted as minimising a convex quadratic objective subject to inequality constraints, with the further requirement that the solution lie in a subspace described by equation (11). By projecting (MPCE) onto this subspace we can reduce the overall dimension.

Since Ā has full rank, equation (11) may be equivalently stated as

X = Ā^{-1}(b − B̄U).   (12)

This gives the following expression for X,

X = Λx + ΦU,   (13)

where Λ and Φ are given by

Λ = [  I  ]        Φ = [     0          0      ···  0 ]
    [  A  ]            [     B          0      ···  0 ]
    [  ⋮  ]            [    AB          B      ···  0 ]
    [ A^N ]            [     ⋮                  ⋱    ]
                       [ A^{N−1}B   A^{N−2}B   ···  B ].
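The prediction matrices can be built directly from powers of A (a sketch, verified against a simulated trajectory with random illustrative data):

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build Lam, Phi with X = Lam @ x0 + Phi @ U, as in equation (13)."""
    nx, nu = B.shape
    Lam = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
    Phi = np.zeros(((N + 1) * nx, N * nu))
    for i in range(1, N + 1):
        for j in range(i):          # block (i, j) is A^{i-1-j} B
            Phi[i * nx:(i + 1) * nx, j * nu:(j + 1) * nu] = \
                np.linalg.matrix_power(A, i - 1 - j) @ B
    return Lam, Phi

# verify against a simulated trajectory
rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((2, 2)); B = rng.standard_normal((2, 1))
x0 = rng.standard_normal(2); N = 4
U = rng.standard_normal(N)
Lam, Phi = prediction_matrices(A, B, N)
X = Lam @ x0 + Phi @ U
x = x0.copy()
for i in range(N):
    assert np.allclose(X[i * 2:(i + 1) * 2], x)
    x = A @ x + B @ U[i:i + 1]
assert np.allclose(X[N * 2:], x)
```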

We may then express (MPC) as follows.

Definition 3.3 Given some initial state x, solve (if possible) the following minimisation problem:

(MPCI):  min_U  (1/2)‖Λx + ΦU − X^r‖²_Q̄ + (1/2)‖U − U^r‖²_R̄
         s.t.  L_X(Λx + ΦU) ⪯ k_X,  L_U U ⪯ k_U.

In the case of unstable system models, (MPCI) can become ill-conditioned [19, 24]. Indeed, since A has eigenvalues outside the unit circle in this case, max{eig(A^i)} can become large, even for small values of i. One possibility [11, 21, 22]



is to introduce a dummy variable v(k) and exploit a stabilising state feedback gain K. Suppose we write

u(k) = −Kx(k) + v(k)

with A_cl = A − BK stable. Let V denote a sequence with N elements, where each element is a vector of dimension n_u, given by V = {v_0, ..., v_{N−1}}, so that U = K̄X + V with

K̄ = [ −K            0 ]
    [      ⋱        ⋮ ]
    [         −K    0 ]

(an Nn_u × (N+1)n_x matrix with −K in each block row and a final zero block column).

We can thus find a numerically stable implicit formulation by substituting

X = (Ā − B̄K̄)^{-1}(b − B̄V) = Λ_cl x + Φ_cl V,

with Λ_cl and Φ_cl defined similarly to Λ and Φ, but with A_cl replacing A.

A natural choice of K is the optimal state-feedback gain for the unconstrained problem. In this case the unconstrained minimum of the cost function occurs where V = 0.
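The conditioning issue, and the effect of prestabilisation, can be seen even in a scalar sketch (all numbers below are arbitrary choices for illustration, not from the paper's simulations):

```python
import numpy as np

A = np.array([[1.5]])                 # unstable pole at 1.5 (assumed)
B = np.array([[1.0]])
K = np.array([[0.9]])                 # any stabilising gain: A - BK = 0.6
Acl = A - B @ K

N = 40
# entries of Lambda (powers of A) blow up, while those of Lambda_cl decay
open_term = abs(np.linalg.matrix_power(A, N)[0, 0])
closed_term = abs(np.linalg.matrix_power(Acl, N)[0, 0])
assert open_term > 1e6 and closed_term < 1e-8
```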

An alternative approach is to use a numerically stable projection. In the above we have restricted the projection to the type given in Equation (12). Alternatively, we could introduce a dummy variable E and represent X and U by

X = S^T E + b,   U = T^T E,

where S and T have the following properties,

[ Ā  B̄ ] [ S^T ] = 0,     [ S  T ] [ S^T ] = I,
          [ T^T ]                   [ T^T ]

i.e. the rows of [S T] form an orthonormal basis for the null space of [Ā B̄].
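One way to obtain such a pair (S, T) is from an orthonormal basis of the null space of the stacked dynamics matrix, e.g. via the SVD. The sketch below uses a random full-row-rank matrix as a stand-in for the stacked [Ā B̄] block (dimensions chosen arbitrarily) and checks both defining properties:

```python
import numpy as np

rng = np.random.default_rng(2)
nX, nU = 8, 3                            # stand-ins for (N+1)n_x and Nn_u
M = rng.standard_normal((nX, nX + nU))   # plays the role of [Abar  Bbar]

_, _, Vt = np.linalg.svd(M)
Nsp = Vt[nX:]                            # rows: orthonormal basis of null(M)
S, T = Nsp[:, :nX], Nsp[:, nX:]          # split into the S and T blocks

# [Abar Bbar][S^T; T^T] = 0  and  [S T][S^T; T^T] = I
assert np.allclose(M @ Nsp.T, 0, atol=1e-10)
assert np.allclose(S @ S.T + T @ T.T, np.eye(nU))
```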

Hence X and U can be replaced by their respective affine functions in E and the resulting problem solved for E. Specifically, we may express (MPC) as follows.

Definition 3.4 Given some initial state x, solve (if possible) the following minimisation problem.

(MPCIP):  min_E  (1/2)‖S^T E + b − X^r‖²_Q̄ + (1/2)‖T^T E − U^r‖²_R̄
          s.t.  L_X(S^T E + b) ⪯ k_X,  L_U T^T E ⪯ k_U.

Of course, X and U can then be recovered using the same equations given above. A caveat for such approaches is that the constraints will most likely become dense, thus affecting overall computational efficiency.

Any potential numerical errors in the formation of Φ (or equivalent) are exacerbated when the Hessian matrix of the cost function is formed. In (MPCI), its prestabilised counterpart (which we denote (MPCIcl)) and (MPCIP), the Hessians are given respectively as

H_I   = Φ^T Q̄ Φ + R̄,
H_Icl = Φ_cl^T Q̄ Φ_cl + (I + K̄Φ_cl)^T R̄ (I + K̄Φ_cl),
H_IP  = S Q̄ S^T + T R̄ T^T.

One approach for reducing numerical ill-conditioning is to avoid forming the Hessian matrix explicitly, but instead represent it as, respectively,

H_I   = G_I^T G_I,       G_I^T   = [ Φ^T Q̄^{1/2}    R̄^{1/2} ],
H_Icl = G_Icl^T G_Icl,   G_Icl^T = [ Φ_cl^T Q̄^{1/2}    (I + K̄Φ_cl)^T R̄^{1/2} ],
H_IP  = G_IP^T G_IP,     G_IP^T  = [ S Q̄^{1/2}    T R̄^{1/2} ].
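These factored forms can be checked numerically: G^T G reproduces the Hessian, and since the singular values of G are the square roots of the eigenvalues of the Hessian, the condition number of the formed Hessian is the square of that of its factor. A sketch with random stand-ins for Φ and the weights (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
nU, nX = 4, 6
Phi = rng.standard_normal((nX, nU))          # stands in for the prediction matrix
Wq = rng.standard_normal((nX, nX)); Qbar = Wq @ Wq.T               # PSD weight
Wr = rng.standard_normal((nU, nU)); Rbar = Wr @ Wr.T + np.eye(nU)  # PD weight

def sqrtm_psd(W):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

G = np.vstack([sqrtm_psd(Qbar) @ Phi, sqrtm_psd(Rbar)])  # G with G^T G = H_I
H = Phi.T @ Qbar @ Phi + Rbar

assert np.allclose(G.T @ G, H)
assert np.isclose(np.linalg.cond(H), np.linalg.cond(G) ** 2, rtol=1e-6)
```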

When solving the linear subsystem found in interior-point methods (see Equation (8) above), this structure can be exploited. For example, the coefficient matrix in Equation (8) can be expressed as (note that the S and Z matrices below are the diagonal slack and multiplier matrices of Section 2, not the projection matrix S defined in this section)

H + L^T S^{-1}Z L = [ G^T   L^T(S^{-1}Z)^{1/2} ] [ G               ].
                                                 [ (S^{-1}Z)^{1/2}L ]

Hence, the Cholesky factor of H + L^T S^{-1}Z L may be obtained from the QR factorisation

Q_F R_F = [ G               ],
          [ (S^{-1}Z)^{1/2}L ]

where R_F is also the Cholesky factor in this case [9]. Therefore, the Hessian matrix H is not explicitly formed, thus reducing numerical ill-conditioning.
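This QR-based route can be checked with numpy: the triangular factor from a QR factorisation of the stacked matrix satisfies the same identity as the Cholesky factor of the formed matrix, agreeing with it up to row signs (random stand-in data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 7
G = rng.standard_normal((n + 2, n))   # tall factor with G^T G = H (full column rank)
L = rng.standard_normal((m, n))       # inequality constraint matrix
d = rng.uniform(0.5, 2.0, m)          # plays the role of the diagonal of S^{-1}Z

Mstack = np.vstack([G, np.sqrt(d)[:, None] * L])
_, RF = np.linalg.qr(Mstack)          # RF^T RF = Mstack^T Mstack

H_aug = G.T @ G + L.T @ (d[:, None] * L)   # H + L^T S^{-1}Z L, formed only to check
assert np.allclose(RF.T @ RF, H_aug)

# RF agrees with the Cholesky factor up to the signs of its rows
C = np.linalg.cholesky(H_aug).T
assert np.allclose(np.abs(RF), np.abs(C))
```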

4 Discussion and Conclusion

Linear MPC involves (at least) one Quadratic Programming problem at each time interval. In the literature there appear to be two prominent formulations for this QP. Typically, the implicit form tends to be more dense than the explicit form.

If we adopt an interior-point strategy (which may or may not be the most suitable approach), then we are required to solve a linear system of equations (at least once per iteration), Ax = b. Typically, A will become increasingly ill-conditioned towards the later stages of the algorithm, and care must be taken in this case.

We may see various possible structures for the coefficient matrix, including sparse (and possibly block tridiagonal) or dense, positive definite or positive semi-definite, and symmetric or asymmetric [27]. Applicable factorisation methods will depend on the chosen structure. In any event, sparsity should be exploited if possible.

It may be difficult to prescribe one particular structure that performs well for an arbitrary MPC problem. For example, if the system has many states and only a few inputs, then even for moderate prediction horizons it is reasonable to expect that the implicit formulation will perform better. But if the prediction horizon is large then an explicit formulation will generally perform better.

Achieving optimal performance for each MPC optimisation problem requires individual consideration (see also [1, 10, 28]). Inter alia, the linear dynamical system, sparsity structure, system dimensions, presence of state and/or input constraints, weighting matrices and the prediction horizon can all affect the performance.

References

[1] R. A. Bartlett, A. Wächter, and L. T. Biegler. Active set vs. interior point strategies for model predictive control. In Proceedings of the American Control Conference, pages 4229–4233, Chicago, Illinois, 2000.

[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[3] H. Chen and F. Allgöwer. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica, 34(10):1205–1217, 1998.

[4] D. W. Clarke, C. Mohtadi, and P. S. Tuffs. Generalized predictive control, parts 1 and 2. Automatica, 23(2):137–148, 1987.

[5] I. S. Duff. MA57 - a new code for the solution of sparse symmetric definite and indefinite systems. Technical Report RAL-TR-2002-024, Computational Science and Engineering Department, Atlas Centre, Rutherford Appleton Laboratory, 2002.

[6] C. E. Garcia, D. M. Prett, and M. Morari. Model predictive control: Theory and practice - a survey. Automatica, 25(3):335–348, 1989.

[7] E. M. Gertz and S. J. Wright. Object-oriented software for quadratic programming. Optimization Technical Report 01-02, Computer Sciences Department, University of Wisconsin-Madison, October 2001.

[8] T. Glad and H. Jonson. A method for state and control constrained linear quadratic control problems. In Proceedings of the 9th IFAC World Congress, Budapest, Hungary, July 1984.

[9] G. H. Golub and C. F. Van Loan. Matrix Computations, third edition. The Johns Hopkins University Press, Baltimore, Maryland, 1996.

[10] V. Gopal and L. T. Biegler. Large scale inequality constrained optimization and control. IEEE Control Systems Magazine, 18(6):59–68, 1998.

[11] S. S. Keerthi. Optimal Feedback Control of Discrete-Time Systems with State-Control Constraints and General Cost Functions. PhD thesis, University of Michigan, 1986.

[12] B. Kouvaritakis, M. Cannon, and J. A. Rossiter. Who needs QP for linear MPC anyway? Automatica, 38(5):879–884, May 2002.

[13] J. M. Maciejowski. Predictive Control with Constraints. Pearson Education Limited, Harlow, Essex, 2002.

[14] O. L. Mangasarian. Nonlinear Programming. McGraw-Hill Book Company, New York, 1969.

[15] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000.

[16] S. Mehrotra. On the implementation of a primal-dual interior-point method. SIAM Journal on Optimization, 2:575–601, 1992.

[17] K. R. Muske and J. B. Rawlings. Model predictive control with linear models. AIChE Journal, 39(2):262–287, 1993.

[18] S. Joe Qin and T. A. Badgwell. An overview of industrial model predictive control technology. AIChE Symposium Series, 5th International Symposium on Chemical Process Control, 93:232–256, 1997.

[19] C. V. Rao, S. J. Wright, and J. B. Rawlings. Application of interior point methods to model predictive control. Journal of Optimization Theory and Applications, 99(3):723–757, 1998.

[20] J. A. Rossiter and B. Kouvaritakis. Constrained stable generalised predictive control. IEE Proc.-Control Theory Appl., 140:243–254, 1993.

[21] J. A. Rossiter and B. Kouvaritakis. Numerical robustness and efficiency of generalised predictive control algorithms with guaranteed stability. IEE Proc.-Control Theory Appl., 141:154–162, 1993.

[22] J. A. Rossiter, B. Kouvaritakis, and M. J. Rice. A numerically robust state-space approach to stable-predictive control strategies. Automatica, 34:65–73, 1998.

[23] M. Soroush and S. Valluri. Optimal directionality compensation in processes with input saturation non-linearities. International Journal of Control, 72(17):1555–1564, 1999.

[24] M. J. Tenny, S. J. Wright, and J. B. Rawlings. A feasible trust-region sequential quadratic programming algorithm. Optimization Technical Report 02-05, Computer Sciences Department, University of Wisconsin-Madison, August 2002.

Figure 1: Flop counts for the authors' code, using the flops Matlab function, for different prediction horizons N (explicit and implicit formulations). Note that both scales are logarithmic with base 10. Also note that the implicit curve approaches a straight line with gradient 3 for large values of N, indicating a cubic relationship between computation time and prediction horizon. Meanwhile, the explicit curve approaches a straight line with gradient 1 for large N, indicating a linear relationship with prediction horizon. Note that the MPC structure has not been directly exploited when solving the linear subsystem (2); in particular, the Riccati relation [19] is not exploited. The simulations were performed in Matlab version 5.3 running on an Intel Pentium IV 2GHz machine with 512M of memory. The matrices A, B and C were chosen randomly, with the number of states n_x equal to 10 and the number of inputs n_u and outputs n_y both equal to 2. The system has one unstable pole. Similar results using the OOQP package of [7], which uses the MA47 solver from the Harwell library (see http://hsl.rl.ac.uk/), are reported in [27].

[25] P. Tøndel, T. A. Johansen, and A. Bemporad. An algorithm for multi-parametric quadratic programming and explicit MPC solutions. Automatica, 39(3):489–497, March 2003.

[26] R. J. Vanderbei. LOQO: An interior point code for quadratic programming. Optimization Methods and Software, 11:451–484, 1999.

[27] A. G. Wills and W. P. Heath. EE03016 – Interior-Point Methods for Linear Model Predictive Control. Technical report, School of Electrical Engineering and Computer Science, University of Newcastle, Australia, 2003.

[28] S. J. Wright. Interior-point methods for optimal control of discrete-time systems. Journal of Optimization Theory and Applications, 77:161–187, 1993.

[29] S. J. Wright. Stability of linear equations solvers in interior-point methods. SIAM Journal on Matrix Analysis and Applications, 16:1287–1307, 1995.

[30] S. J. Wright. Applying new optimization algorithms to model predictive control. Chemical Process Control-V, CACHE, AIChE Symposium Series, 93(316):147–155, 1997.

[31] S. J. Wright. Stability of augmented system factorizations in interior-point methods. SIAM Journal on Matrix Analysis and Applications, 18:191–222, 1997.

[32] S. J. Wright. Modified Cholesky factorizations in interior-point algorithms for linear programming. SIAM Journal on Optimization, 9:1159–1191, 1999.
