Regularized & Distributionally Robust Data-Enabled Predictive Control
Florian Dörfler, ETH Zürich
Peking University Seminar


Page 1: Regularized & Distributionally Robust Data-Enabled

Regularized & Distributionally Robust Data-Enabled Predictive Control
Florian Dörfler, ETH Zürich
Peking University Seminar

Page 2: Regularized & Distributionally Robust Data-Enabled

Acknowledgements

Jeremy Coulson

John Lygeros

Linbin Huang

Ivan Markovsky

Paul Beuchat

Further: Ezzat Elokda, Daniele Alpago, Jianzhe (Trevor) Zhen, Saverio Bolognani, Andrea Favato, Paolo Carlet, & the IfA DeePC team

Page 3: Regularized & Distributionally Robust Data-Enabled

Perspectives on model-based control

x+ = Ax + Bu, y = Cx + Du

[Figure: feedback interconnection of controller, system, & model]

system → models are useful for system analysis, design, estimation, . . .
control → modeling from first principles & system ID

recurring themes
• modeling & system ID are very expensive
• models are not always useful for control
• need for end-to-end automation solutions

From experiment design to closed-loop control!

Hakan Hjalmarsson∗

Department of Signals, Sensors and Systems, Royal Institute of Technology, S-100 44 Stockholm, Sweden

1. Introduction

Ever increasing productivity demands and environmental standards necessitate more and more advanced control methods to be employed in industry. However, such methods usually require a model of the process, and modeling and system identification are expensive. Quoting (Ogunnaike, 1996):

“It is also widely recognized, however, that obtaining the process model is the single most time consuming task in the application of model-based control.”

In Hussain (1999) it is reported that three quarters of the total costs associated with advanced control projects can be attributed to modeling. It is estimated that models exist for far less than one percent of all processes in regulatory control. According to Desborough and Miller (2001), one of the few instances when the cost of dynamic modeling can be justified is for the commissioning of model predictive controllers.

It has also been recognized that models for control pose special considerations. Again quoting (Ogunnaike, 1996):

“There is abundant evidence in industrial practice that when modeling for control is not based on criteria related to the actual end use, the results can sometimes be quite disappointing.”

Hence, efficient modeling and system identification techniques suited for industrial use and tailored for control design applications have become important enablers for industrial advances. The Panel for Future Directions in Control (Murray, Åström, Boyd, Brockett, & Stein, 2003) has identified automatic synthesis of control algorithms, with integrated validation and verification, as one of the major future challenges in control. Quoting (Murray et al., 2003):

“Researchers need to develop much more powerful design tools that automate the entire control design process from model development to hardware-in-the-loop simulation.”

Page 4: Regularized & Distributionally Robust Data-Enabled

Control in a data-rich world

• ever-growing trend in CS & applications: data-driven control by-passing models
• canonical problem: black/gray-box system control based on I/O samples

Q: Why give up physical modeling & reliable model-based algorithms?

[Figure: black box mapping inputs u1, u2 to outputs y1, y2 via data-driven control]

Data-driven control is a viable alternative when
• models are too complex to be useful (e.g., fluids, wind farms, & building automation)
• first-principle models are not conceivable (e.g., human-in-the-loop, biology, & perception)
• modeling & system ID is too cumbersome (e.g., robotics, drives, & electronics applications)

Central promise: It is often easier to learn control policies directly from data, rather than learning a model. Example: PID [Åström, ’73]

Page 5: Regularized & Distributionally Robust Data-Enabled

Abstraction reveals pros & cons

indirect (model-based) data-driven control:

minimize control cost(x, u)                                } outer optimization
subject to (x, u) satisfy state-space model,
  where x is estimated from (u, y) & the model             } middle optimization
  where the model is identified from (u^d, y^d) data       } inner optimization

→ nested multi-level optimization problem
→ separation & certainty equivalence (→ LQG case) vs. no separation (→ ID-4-control)

direct (black-box) data-driven control:

minimize control cost(u, y)
subject to (u, y) consistent with (u^d, y^d) data

→ trade-offs: modular vs. end-2-end, suboptimal (?) vs. optimal, convex vs. non-convex (?)

Additionally: all of the above should be min-max or E(·) to account for uncertainty . . .

Page 6: Regularized & Distributionally Robust Data-Enabled

A direct approach: dictionary + MPC

① trajectory dictionary learning
• motion primitives / basis functions
• theory: Koopman & Liouville; practice: (E)DMD & particles

② MPC optimizing over the dictionary span

→ huge theory vs. practice gap
→ back to basics: impulse response

[Figure: impulse response samples y_1, . . . , y_7 for u_1 = 1, u_2 = u_3 = · · · = 0, x_0 = 0]

dynamic matrix control (Shell, 1970s): predictive control from raw data

y_future(t) = [y_1 y_2 y_3 . . .] col(u_future(t), u_future(t−1), u_future(t−2), . . .)

today: arbitrary, finite, & corrupted data, . . . stochastic & nonlinear?

Page 7: Regularized & Distributionally Robust Data-Enabled

Contents

I. Data-Enabled Predictive Control (DeePC): Basic Idea
J. Coulson, J. Lygeros, and F. Dörfler. Data-Enabled Predictive Control: In the Shallows of the DeePC. [arxiv.org/abs/1811.05890]

II. From Heuristics & Numerical Promises to Theorems
J. Coulson, J. Lygeros, and F. Dörfler. Distributionally Robust Chance Constrained Data-enabled Predictive Control. [arxiv.org/abs/2006.01702]
I. Markovsky and F. Dörfler. Identifiability in the Behavioral Setting. [link]

III. Application: End-to-End Automation in Energy & Robotics
L. Huang, J. Coulson, J. Lygeros, and F. Dörfler. Decentralized Data-Enabled Predictive Control for Power System Oscillation Damping. [arxiv.org/abs/1911.12151]
E. Elokda, J. Coulson, P. Beuchat, J. Lygeros, and F. Dörfler. Data-Enabled Predictive Control for Quadcopters. [link]

[click here] for related publications

Page 8: Regularized & Distributionally Robust Data-Enabled

Preview

complex 4-area power system: large (n = 208), few sensors (8), nonlinear, noisy, stiff, input constraints, & decentralized control specifications

control objective: oscillation damping without a model (the grid has many owners, models are proprietary, operation is in flux, . . . )

[Figure: four-area power system (areas 1-4, SG 1-8, loads, tie lines) with two VSC-HVDC stations and decentralized control signals; plot of tie-line flow (p.u.) vs. time (s): open-loop data collection, then control]

seek a method that works reliably, can be efficiently implemented, & is certifiable → automating ourselves

Page 9: Regularized & Distributionally Robust Data-Enabled

Reality check: magic or hoax?

surely, nobody would apply such a shaky data-driven method
• on the world’s most complex engineered system (the electric grid),
• using the world’s biggest actuators (Gigawatt-sized HVDC links),
• and subject to real-time, safety, & stability constraints . . . right?

[Figure: excerpt of an email from an engineer at Hitachi Power Grids, with the callouts “It works! . . . even on an entirely different model & software platform” and “a few days after sending our code”]

at least someone believes that DeePC is practically useful . . .

Page 10: Regularized & Distributionally Robust Data-Enabled

Behavioral view on LTI systems

Definition: A discrete-time dynamical system is a 3-tuple (Z≥0, W, B) where
(i) Z≥0 is the discrete-time axis,
(ii) W is a signal space, and
(iii) B ⊆ W^Z≥0 is the behavior.

B is the set of all trajectories.

Definition: The dynamical system (Z≥0, W, B) is
(i) linear if W is a vector space & B is a subspace of W^Z≥0,
(ii) and time-invariant if B ⊆ σB, where (σw)_t = w_{t+1}.

LTI system = shift-invariant subspace of trajectory space

Page 11: Regularized & Distributionally Robust Data-Enabled

LTI systems and matrix time series

foundation of state-space subspace system ID & signal recovery algorithms

[Figure: sampled input signal u(t) and output signal y(t) over time]

(u(t), y(t)) satisfy a recursive difference equation

b_0 u_t + b_1 u_{t+1} + · · · + b_n u_{t+n} + a_0 y_t + a_1 y_{t+1} + · · · + a_n y_{t+n} = 0

(ARX / kernel representation)

⇐ under assumptions: [0 b_0 a_0 b_1 a_1 . . . b_n a_n 0] lies in the left nullspace of the trajectory matrix (collected data)

H_T(u^d, y^d) =
[ (u^d_{1,1}, y^d_{1,1})  (u^d_{1,2}, y^d_{1,2})  (u^d_{1,3}, y^d_{1,3})  · · · ]
[ (u^d_{2,1}, y^d_{2,1})  (u^d_{2,2}, y^d_{2,2})  (u^d_{2,3}, y^d_{2,3})  · · · ]
[          ⋮                       ⋮                       ⋮                   ]
[ (u^d_{T,1}, y^d_{T,1})  (u^d_{T,2}, y^d_{T,2})  (u^d_{T,3}, y^d_{T,3})  · · · ]

where y^d_{t,i} is the t-th sample from the i-th experiment
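The construction of such a trajectory matrix is mechanical; below is a minimal numpy sketch (the helper name block_hankel and the toy integrator data are our own illustration, not code from the slides):

```python
import numpy as np

def block_hankel(w: np.ndarray, L: int) -> np.ndarray:
    """Block-Hankel matrix of depth L built from a signal w of shape (T, q):
    column j stacks the L consecutive samples w[j], ..., w[j+L-1]."""
    T, q = w.shape
    cols = T - L + 1
    H = np.empty((q * L, cols))
    for j in range(cols):
        H[:, j] = w[j:j + L].reshape(-1)
    return H

# toy data: a single experiment on a discrete integrator y_t = y_{t-1} + u_t
T = 100
ud = np.random.randn(T, 1)                   # persistently exciting input
yd = np.cumsum(ud, axis=0)                   # measured outputs
H = block_hankel(np.hstack([ud, yd]), L=10)  # shape (2*10, 91)
```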

Page 12: Regularized & Distributionally Robust Data-Enabled

Fundamental Lemma [Willems et al. ’05], [Markovsky & Dörfler ’20]

[Figure: sampled input u(t) and output y(t) time series]

Given: data samples (u^d_i, y^d_i) ∈ R^{m+p} & LTI complexity parameters {lag ℓ, order n}

set of all T-length trajectories
= {(u, y) ∈ R^{(m+p)T} : ∃ x ∈ R^n s.t. x+ = Ax + Bu, y = Cx + Du}   (parametric state-space model)
= colspan H_T(u^d, y^d)                                              (non-parametric model from raw data)

if and only if the trajectory matrix has rank m · T + n for all T > ℓ
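A quick numerical sanity check of the rank condition on the toy data above (m = 1 input and order n = 1 are properties of the assumed integrator, not values from the slides):

```python
import numpy as np

# Fundamental Lemma rank check (sketch): for the toy integrator, the depth-10
# trajectory matrix H from the previous snippet should have rank m*L + n.
m, n, L = 1, 1, 10
assert np.linalg.matrix_rank(H) == m * L + n  # 11 for rich (exciting) data
```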

Page 13: Regularized & Distributionally Robust Data-Enabled

(as on the previous slide: the set of all T-length trajectories of the parametric state-space model equals colspan H_T(u^d, y^d), the non-parametric model from raw data)

all trajectories are constructible from finitely many previous trajectories

• can also use other matrix data structures: (mosaic) Hankel, Page, . . .

• novelty (?): motion primitives, (E)DMD, dictionary learning, subspace system ID, . . . all implicitly rely on this equivalence → c.f. “fundamental”

• standing on the shoulders of giants: the classic Willems result was only “if” & required further assumptions: Hankel structure, persistency of excitation, controllability

Page 14: Regularized & Distributionally Robust Data-Enabled

Control from matrix time series data

We are all writing merely the dramatic corollaries . . .

implicit & stochastic → Markovsky & ourselves
explicit & deterministic → Groningen: De Persis, Camlibel, . . .

→ lots of recent momentum (∼ 1 arXiv paper / week) with contributions by Scherer, Allgöwer, Matni, Pappas, Fischer, Pasqualetti, Goulart, Mesbahi, . . .
→ more classic subspace predictive control (De Moor) literature

Page 15: Regularized & Distributionally Robust Data-Enabled

Data-driven prediction [Markovsky & Rapisarda ’08]

Problem: predict the future output y ∈ R^{p·T_future} based on
• input signal u ∈ R^{m·T_future} → to predict forward
• past data col(u^d, y^d) ∈ B_{T_data} → to form the trajectory matrix

Solution: given (u_1, . . . , u_{T_future}), compute g & (y_1, . . . , y_{T_future}) from

H_{T_future}(u^d, y^d) g = col(u_1, . . . , u_{T_future}, y_1, . . . , y_{T_future})

Issue: the predicted output is not unique → need to set initial conditions!

Page 16: Regularized & Distributionally Robust Data-Enabled

Data-driven prediction & estimation

Refined problem: predict the future output y ∈ R^{p·T_future} based on
• initial trajectory col(u_ini, y_ini) ∈ R^{(m+p)·T_ini} → to estimate the initial state x_ini
• input signal u ∈ R^{m·T_future} → to predict forward
• past data col(u^d, y^d) ∈ B_{T_data} → to form the trajectory matrix

Solution: given u & col(u_ini, y_ini), compute g & y from

[ H_{T_ini}(u^d, y^d) ; H_{T_future}(u^d, y^d) ] g = col(u_ini, y_ini, u, y)

i.e., a trajectory matrix of depth T_ini + T_future whose top T_ini block rows match the initial trajectory and whose bottom T_future block rows produce the prediction.

⇒ observability condition: if T_ini ≥ lag of the system, then y is unique

Page 17: Regularized & Distributionally Robust Data-Enabled

Output Model Predictive Control

The canonical receding-horizon MPC optimization problem:

minimize_{u, x, y} Σ_{k=0}^{T_future−1} ‖y_k − r_{t+k}‖²_Q + ‖u_k‖²_R    (quadratic cost with R ≻ 0, Q ⪰ 0 & reference r)

subject to x_{k+1} = A x_k + B u_k, ∀k ∈ {0, . . . , T_future − 1},      (model for prediction over k ∈ [0, T_future − 1])
           y_k = C x_k + D u_k,     ∀k ∈ {0, . . . , T_future − 1},
           x_{k+1} = A x_k + B u_k, ∀k ∈ {−T_ini − 1, . . . , −1},       (model for estimation; many variations)
           y_k = C x_k + D u_k,     ∀k ∈ {−T_ini − 1, . . . , −1},
           u_k ∈ U, ∀k ∈ {0, . . . , T_future − 1},                      (hard operational or safety constraints)
           y_k ∈ Y, ∀k ∈ {0, . . . , T_future − 1}

For a deterministic LTI plant and an exact model of the plant, MPC is the gold standard of control: safe, optimal, tracking, . . .

Page 18: Regularized & Distributionally Robust Data-Enabled

Data-Enabled Predictive Control

DeePC uses the Hankel matrix for receding-horizon prediction / estimation:

minimize_{g, u, y} Σ_{k=0}^{T_future−1} ‖y_k − r_{t+k}‖²_Q + ‖u_k‖²_R    (quadratic cost with R ≻ 0, Q ⪰ 0 & reference r)

subject to H(u^d, y^d) g = col(u_ini, y_ini, u, y),                      (non-parametric model for prediction and estimation)
           u_k ∈ U, ∀k ∈ {0, . . . , T_future − 1},                      (hard operational or safety constraints)
           y_k ∈ Y, ∀k ∈ {0, . . . , T_future − 1}

• trajectory matrix H(u^d, y^d) = col(H_{T_ini}(u^d, y^d), H_{T_future}(u^d, y^d)) from past data, collected offline (could be adapted online)
• past T_ini ≥ lag samples (u_ini, y_ini) for x_ini estimation, updated online
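For concreteness, a minimal one-step DeePC sketch in cvxpy, reusing the partitioned blocks Up, Yp, Uf, Yf and the data uini, yini from the prediction snippet; the weights, reference, and input bound are placeholder choices, not the authors' implementation:

```python
import cvxpy as cp
import numpy as np

# Nominal DeePC over one receding-horizon step (sketch).
g = cp.Variable(Uf.shape[1])
u = cp.Variable(Tf)                  # future inputs (decision variables)
y = cp.Variable(Tf)                  # future outputs
r = np.zeros(Tf)                     # placeholder reference
Q_w, R_w = 10.0, 0.1                 # placeholder tracking / effort weights

cost = Q_w * cp.sum_squares(y - r) + R_w * cp.sum_squares(u)
constraints = [Up @ g == uini.ravel(),   # estimation: match initial trajectory
               Yp @ g == yini.ravel(),
               Uf @ g == u,              # prediction: future consistent with data
               Yf @ g == y,
               cp.abs(u) <= 1.0]         # hard input constraint u_k in U
cp.Problem(cp.Minimize(cost), constraints).solve()
u_apply = u.value[0]                 # apply the first input, then re-solve
```

In closed loop one would shift u_ini, y_ini with the newest measurements and re-solve at every step.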

Page 19: Regularized & Distributionally Robust Data-Enabled

Consistency for LTI Systems

Theorem: Consider the DeePC & MPC optimization problems. If the rank condition holds (= rich data), then the feasible sets coincide.

Corollary: the closed-loop behaviors under DeePC and MPC coincide.

Aerial robotics case study: thus, most of MPC carries over to DeePC . . . in the nominal case; c.f. the stability certificate [Berberich et al. ’19]

Beyond LTI: what about noise, corrupted data, & nonlinearities? . . . playing certainty-equivalence fails → need a robustified approach

Page 20: Regularized & Distributionally Robust Data-Enabled

Noisy real-time measurements

minimize_{g, u, y} Σ_{k=0}^{T_future−1} ‖y_k − r_{t+k}‖²_Q + ‖u_k‖²_R + λ_y ‖σ_ini‖_p

subject to H(u^d, y^d) g = col(u_ini, y_ini, u, y) + col(0, σ_ini, 0, 0),
           u_k ∈ U, ∀k ∈ {0, . . . , T_future − 1},
           y_k ∈ Y, ∀k ∈ {0, . . . , T_future − 1}

Solution: add an ℓ_p-slack σ_ini to ensure feasibility
→ receding-horizon least-square filter
→ for λ_y ≫ 1: the constraint is slack only if otherwise infeasible

c.f. sensitivity analysis over randomized simulations

[Figure: cost and duration of constraint violations (s) as functions of λ_y]

Page 21: Regularized & Distributionally Robust Data-Enabled

Trajectory matrix corrupted by noise

minimize_{g, u, y} Σ_{k=0}^{T_future−1} ‖y_k − r_{t+k}‖²_Q + ‖u_k‖²_R + λ_g ‖g‖_1

subject to H(u^d, y^d) g = col(u_ini, y_ini, u, y),
           u_k ∈ U, ∀k ∈ {0, . . . , T_future − 1},
           y_k ∈ Y, ∀k ∈ {0, . . . , T_future − 1}

Solution: add an ℓ_1-penalty on g

intuition: ℓ_1 sparsely selects {trajectory matrix columns} = {motion primitives} ∼ a low-order basis

c.f. sensitivity analysis over randomized simulations

[Figure: cost and duration of constraint violations (s) as functions of λ_g]
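Both regularizations slot into the same convex program; a hedged sketch continuing the cvxpy snippet above (the λ values are arbitrary placeholders, not the tuned ones from the case studies):

```python
# Regularized DeePC (sketch, continuing the cvxpy snippet above):
# a slack sigma on y_ini absorbs noisy real-time measurements, and an
# l1-penalty on g robustifies against a noise-corrupted trajectory matrix.
sigma = cp.Variable(Tini)            # slack sigma_ini
lam_y, lam_g = 1e5, 10.0             # placeholder weights

cost_reg = (Q_w * cp.sum_squares(y - r) + R_w * cp.sum_squares(u)
            + lam_y * cp.norm1(sigma) + lam_g * cp.norm1(g))
constraints_reg = [Up @ g == uini.ravel(),
                   Yp @ g == yini.ravel() + sigma,   # slackened estimation
                   Uf @ g == u,
                   Yf @ g == y,
                   cp.abs(u) <= 1.0]
cp.Problem(cp.Minimize(cost_reg), constraints_reg).solve()
```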

Page 22: Regularized & Distributionally Robust Data-Enabled

Towards nonlinear systems

Idea: lift the nonlinear system to a large-/infinite-dimensional bi-/linear system
→ Carleman, Volterra, Fliess, Koopman, Sturm-Liouville methods
→ nonlinear dynamics can be approximated by LTI on a finite horizon

regularization singles out the relevant features / basis functions in the data

case study: DeePC + σ_ini slack + ‖g‖_1 regularizer + more columns in H(u^d, y^d)

[Figure: 3D trajectory of the nonlinear case study]

fluke or solid?

Page 23: Regularized & Distributionally Robust Data-Enabled

Experimental snippet


Page 24: Regularized & Distributionally Robust Data-Enabled

Consistent observations across case studies — more than a fluke

Page 25: Regularized & Distributionally Robust Data-Enabled

let’s try to put some theory behind all of this . . .

Page 26: Regularized & Distributionally Robust Data-Enabled

Distributionally robust formulation [Coulson et al. ’19]

• problem abstraction: min_{x∈X} c(ξ̂, x) = min_{x∈X} E_P̂[c(ξ, x)], where ξ̂ denotes the measured data with empirical distribution P̂ = δ_ξ̂

⇒ poor out-of-sample performance of the above sample-average solution x⋆ for the real problem E_P[c(ξ, x⋆)], where P is the unknown distribution of ξ

• distributionally robust formulation → “min_{x∈X} max E[c(ξ, x)]”, where the max accounts for all stochastic processes (linear or nonlinear) that could have generated the data . . . more precisely

inf_{x∈X} sup_{Q∈B_ε(P̂)} E_Q[c(ξ, x)]

where B_ε(P̂) is an ε-Wasserstein ball centered at the empirical sample distribution P̂:

B_ε(P̂) = { P : inf_Π ∫ ‖ξ − ξ̂‖^p dΠ ≤ ε }

[Figure: transport plan Π between P and P̂ over ξ, ξ̂]

Page 27: Regularized & Distributionally Robust Data-Enabled

note: the Wasserstein ball does not only include LTI systems with additive Gaussian noise but “everything” (integrable)

Page 28: Regularized & Distributionally Robust Data-Enabled

• distributionally robust formulation:

inf_{x∈X} sup_{Q∈B_ε(P̂)} E_Q[c(ξ, x)]

where B_ε(P̂) is an ε-Wasserstein ball centered at the empirical sample distribution P̂:

B_ε(P̂) = { P : inf_Π ∫ ‖ξ − ξ̂‖^p dΠ ≤ ε }

Theorem: Under minor technical conditions,

inf_{x∈X} sup_{Q∈B_ε(P̂)} E_Q[c(ξ, x)] ≡ min_{x∈X} c(ξ̂, x) + ε Lip(c) · ‖x‖_{p⋆}

where ‖·‖_{p⋆} is the dual norm.

Corollary: ℓ_∞-robustness in trajectory space ⇐⇒ ℓ_1-regularization of DeePC

[Figure: realized cost vs. Wasserstein radius ε]

The proof builds on methods by Shafieezadeh, Esfahani, & Kuhn: the problem becomes tractable after marginalization, for a discrete worst case, & with many convex conjugates.

Page 29: Regularized & Distributionally Robust Data-Enabled

Further ingredients [Coulson et al. ’19], [Alpago et al. ’20]

• multiple i.i.d. experiments → sample-average data matrix (1/N) Σ_{i=1}^{N} H_i(y^d)

• measure concentration: the Wasserstein ball B_ε(P̂) includes the true distribution P with high confidence if ε ∼ 1/N^{1/dim(ξ)}

• old online measurements → Kalman filtering with the explicit g⋆ as hidden state

• distributionally robust probabilistic constraints: sup_{Q∈B_ε(P̂)} CVaR^Q_{1−α} ⇔ averaging + regularization + constraint tightening

[Figure: concentration for N = 1 vs. N = 10; CVaR^P_{1−α}(X) vs. VaR^P_{1−α}(X) for P(X) ≤ 1 − α]

Page 30: Regularized & Distributionally Robust Data-Enabled

All together in action for a nonlinear & stochastic quadcopter setup

control objective + regularization + matrix predictor + averaging + CVaR constraints + σ_ini estimation slack

→ DeePC works much better than it should!

[Figure: quadcopter tracking response over 0-10 s]

main catch: the optimization problems become large (no free lunch)
→ models are compressed, de-noised, & tidied-up representations

Page 31: Regularized & Distributionally Robust Data-Enabled

Comparison: DeePC vs. ID + MPC

consistent across all nonlinear case studies: DeePC always wins

reason (?): DeePC is robust, whereas certainty-equivalence control is based on an identified model with a bias error

[Figure: histogram of realized closed-loop cost, DeePC vs. PEM-MPC]

realized closed-loop cost = Σ_k ‖y_k − r_k‖²_Q + ‖u_k‖²_R

stochastic LTI comparisons (no bias) show the certainty-equivalence vs. robust control trade-offs (mean vs. median)

link: DeePC includes an implicit system ID, though ① biased by the control objective, ② data not projected onto the LTI class, & ③ robustified through regularizations

[Figure: open-loop tracking error (% increase w.r.t. optimal), N4SID + MPC vs. DeePC]

→ more to be understood . . . arXiv paper coming

Page 32: Regularized & Distributionally Robust Data-Enabled

Summary & conclusions

main take-aways
• a matrix time series serves as a predictive model
• data-enabled predictive control (DeePC)
  ✓ consistent for deterministic LTI systems
  ✓ distributional robustness via regularizations

future work
→ tighter certificates for nonlinear systems
→ explicit policies & direct adaptive control
→ online optimization & real-time iteration

[Figure: 3D trajectory from the nonlinear case study]

Why have these powerful ideas not been mixed long before?

Willems ’07: “[MPC] has perhaps too little system theory and too much brute force computation in it.”

The other side often proclaims “behavioral systems theory is beautiful but did not prove utterly useful.”

Page 34: Regularized & Distributionally Robust Data-Enabled

appendix:

end-to-end automation case study in power systems

Page 35: Regularized & Distributionally Robust Data-Enabled

Power system case study

[Figure: four-area power system (areas 1-4, SG 1-8, loads, tie lines) with two VSC-HVDC stations; control diagrams of the VSC-HVDC stations (phase-locked loop, current, power, DC-voltage, & voltage control loops); uncontrolled tie-line flow (p.u.) vs. time (s)]

• complex 4-area power system: large (n = 208), few measurements (8), nonlinear, noisy, stiff, input constraints, & decentralized control
• control objective: damping of inter-area oscillations via the HVDC link
• real-time MPC & DeePC prohibitive → choose T, T_ini, & T_future wisely

Page 36: Regularized & Distributionally Robust Data-Enabled

Centralized control

[Figure: time-domain responses of the four-area system over 30 s; see Fig. 5]

Fig. 5. Time-domain responses of the four-area system with the practical setting. The DeePC (or PEM-MPC) is activated at t = 10 s. —– without wide-area control; —– with PEM-MPC (s = 60); —– with DeePC (s = 60).

[Figure: histogram of closed-loop cost vs. number of simulations, DeePC vs. PEM-MPC]

Fig. 6. Cost comparison of DeePC and PEM-MPC under the practical setting.

Fig. 7 plots the closed-loop cost (from 10 s to 30 s) of the system with different DeePC parameters, which shows that a) the closed-loop cost dramatically drops with the increase of the prediction horizon N and then remains within an acceptable range (here we set k = N/2); b) the closed-loop cost drops when T is increased from 800 to 1000 and then remains nearly the same if T is increased further; c) the closed-loop cost drops with the increase of T_ini from 5 to 40 and remains basically the same with a larger T_ini; and d) the system may have an undesired closed-loop cost with a relatively small (or a relatively large) λ_g, but presents the anticipated performance in between, which coincides with the fact that the regularization on g provides robustness against noisy measurements. Note that setting a too large λ_g (e.g., λ_g > 10^4) makes (5) focus on minimizing ‖g‖²₂ and results in inferior input/output performance. Fig. 7 also indicates the robustness of the DeePC with regard to the choice of parameters, that is, the system presents the anticipated performance with proper regularization on g (λ_g generally has a wide admissible range) and sufficiently large N, T_ini and T.

IV. MIN-MAX DEEPC

The DeePC algorithm presented above acts as a centralized wide-area controller, which is not resilient to communication failures and is less reliable than decentralized approaches, especially when more VSC-HVDC stations are considered. To this end,

[Figure: closed-loop cost under variation of the DeePC parameters]

Fig. 7. Closed-loop cost of the system with different DeePC parameters.

we present a Min-Max DeePC algorithm which further enables decentralized wide-area control.

A. Basic Formulation

We extend the unknown LTI system in (1) by adding a measurable disturbance vector w_t ∈ R^q to (1) as

x_{t+1} = A x_t + B u_t + E w_t,
y_t = C x_t + D u_t + F w_t,                                         (9)

where E ∈ R^{n×q} and F ∈ R^{p×q}.

To be specific, the unknown system is subject to some external disturbances (w_t) whose past trajectory can be measured but whose future trajectory is unknown. Let w^d be a disturbance trajectory of length T (i.e., w^d ∈ R^{qT}) measured from the unknown system such that col(u^d, w^d) is persistently exciting of order T_ini + N + n. Note that here w_t is regarded as an uncontrollable input vector of the unknown system. Similar to u^d and y^d, we use w^d to construct the Hankel matrix H_{T_ini+N}(w^d), which is further partitioned into two parts as

[W_P ; W_F] := H_{T_ini+N}(w^d),                                     (10)

where W_P ∈ R^{qT_ini×(T−T_ini−N+1)} and W_F ∈ R^{qN×(T−T_ini−N+1)}.

Then, similar to (4), col(u_ini, w_ini, y_ini, u, w, y) is a trajectory of the unknown system (9) if and only if there exists g ∈ R^{T−T_ini−N+1} such that

[U_P ; W_P ; Y_P ; U_F ; W_F ; Y_F] g = col(u_ini, w_ini, y_ini, u, w, y),   (11)

where w_ini ∈ R^{qT_ini} is the most recent measured disturbance trajectory and w = col(w_0, w_1, . . . , w_{N−1}) ∈ R^{qN} is the future disturbance trajectory, which is unknown but assumed to be bounded as w_t ∈ [w̲, w̄].

PEM-MPC = Prediction Error Method (PEM) system ID + MPC

t < 10 s: open-loop data collection with white-noise excitation
t > 10 s: control

Page 37: Regularized & Distributionally Robust Data-Enabled

Performance: DeePC wins (clearly!)

[Figure: histogram of realized closed-loop cost vs. number of simulations, DeePC vs. PEM-MPC]

realized closed-loop cost = Σ_k ‖y_k − r_k‖²_Q + ‖u_k‖²_R

Page 38: Regularized & Distributionally Robust Data-Enabled

DeePC hyper-parameter tuning

[Figure: closed-loop cost as a function of λ_g and of T_ini]

regularizer λ_g
• for distributional robustness ≈ radius of the Wasserstein ball
• wide range of sweet spots
→ choose λ_g = 20

estimation horizon T_ini
• for model complexity ≈ lag
• T_ini ≥ 50 is sufficient & of low computational complexity
→ choose T_ini = 60

Page 39: Regularized & Distributionally Robust Data-Enabled

[Figure: closed-loop cost as a function of T_future and of the data length T]

prediction horizon T_future
• nominal MPC is stable if the horizon T_future is long enough
→ choose T_future = 120 and apply the first 60 input steps

data length T
• long enough for the low-rank condition, but card(g) grows
→ choose T = 1500 (data matrix ≈ square)

Page 40: Regularized & Distributionally Robust Data-Enabled

Computational cost

[Figure: time-domain responses over 30 s]

• T = 1500
• λ_g = 20
• T_ini = 60
• T_future = 120 & apply the first 60 input steps
• sampling time = 0.02 s
• solver (OSQP) time = 1 s (on an Intel Core i5 7200U)
⇒ implementable

Page 41: Regularized & Distributionally Robust Data-Enabled

Comparison: Hankel & Page matrix

[Figure: averaged closed-loop cost vs. control horizon k for the Hankel matrix, the Hankel matrix with SVD (σ_threshold = 1), the Page matrix, and the Page matrix with SVD (σ_threshold = 1)]

• comparison baseline: Hankel and Page matrices of the same size
• performance: the Page matrix consistently beats Hankel matrix predictors
• offline denoising via SVD thresholding works wonderfully for Page, though obviously not for Hankel (entries are constrained)
• effects are very pronounced for longer horizons (= open-loop time)
• price-to-be-paid: the Page matrix predictor requires more data
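Both ingredients are short in code; a minimal numpy sketch (function names are ours, reusing ud, yd from the earlier snippets), where the Page matrix uses non-overlapping windows so each sample appears exactly once, which is what makes hard singular-value thresholding legitimate:

```python
import numpy as np

def page_matrix(w: np.ndarray, L: int) -> np.ndarray:
    """Page matrix of depth L: like a Hankel matrix but with non-overlapping
    columns, so each data sample appears exactly once."""
    T, q = w.shape
    cols = T // L
    return w[: cols * L].reshape(cols, L * q).T

def svd_denoise(M: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Offline denoising by hard singular-value thresholding."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[s < threshold] = 0.0
    return (U * s) @ Vt

P = page_matrix(np.hstack([ud, yd]), L=10)  # only 10 columns: needs more data
P_denoised = svd_denoise(P)                 # cf. sigma_threshold = 1 above
```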

Page 42: Regularized & Distributionally Robust Data-Enabled

Decentralized implementation

[Figure: four-area power system with two VSC-HVDC stations, each running a local controller; control diagrams of the VSC-HVDC stations (phase-locked loop, current, power, DC-voltage, & voltage control loops); uncontrolled tie-line flow (p.u.) vs. time (s)]

• plug’n’play MPC: treat the interconnection P3 as a disturbance variable w, with the past disturbance w_ini measurable & the future w_future ∈ W uncertain
• for each controller, augment the trajectory matrix with disturbance data w
• decentralized robust min-max DeePC: min_{g,u,y} max_{w∈W}

Page 43: Regularized & Distributionally Robust Data-Enabled

Decentralized control performance

[Figure: time-domain responses over 30 s under decentralized min-max DeePC]

• colors correspond to different hyper-parameter settings (not discernible)
• ambiguity set W is an ∞-ball (box)
• for computational efficiency, W is downsampled (piece-wise linear)
• solver time ≈ 2.6 s
⇒ implementable