
Numerical Methods for Optimal Control: Introduction

Mario Zanon & Sébastien Gros · 01-03-2019


Outline

1 Introduction

2 An Overview about OCP Solution Approaches

3 A Motivating Example


Motivation

Optimization and optimal control are everywhere. We don't just want systems to work... we want them to work in the best possible way!

Optimization is
- a great tool to support decision-making
- used in automation, solid mechanics, economics, and many more fields

Why direct optimal control?
- very flexible and efficient
- hard to use for analysis, but very useful to solve practical problems

Why a full course?
- optimization (especially nonconvex) does not work out of the box!
- perfect recipe for disaster: hit run and hope for the best

Warning!

If there is a flaw in the problem formulation, the optimization algorithm will find it. (J. Betts)
The proof is left to the student.


Organization

Goals

- build a sound understanding of the fundamentals
- learn the different approaches and how to use them effectively
- theory is provided as a support to problem solving

Setup

- 10 lectures, mostly twice a week, mostly 10:30 - 12:30
  - Discussions are highly welcome (within reasonable limits)
  - There are no stupid questions, only bad explanations
  - Slides will be put on the website; please do not print them

- Assignments
  - some theoretical work, some programming
  - collaboration is highly encouraged, but please don't copy (it's in your own interest)
  - coding can be challenging; ask for advice if you are unsure about how to do things
  - deadline for the assignments: end of January

All information and material will be available on the website

https://mariozanon.wordpress.com/course-doc/


Notation

set of integers in the interval $[a, b]$: $\mathbb{I}_a^b = \{\, x \in \mathbb{Z} \mid a \le x \le b \,\}$

scalar / vector $a \in \mathbb{R}^{n_a}$ unless specified otherwise

all vectors are column vectors unless specified otherwise

$a_i$ indicates component $i$ of vector $a$

depending on the context, $a_k$ can indicate vector $a$ at time $k$

$\nabla_x a^\top = \dfrac{\partial a}{\partial x} \in \mathbb{R}^{n_a \times n_x}$

$F(x, u)$: continuous dynamics

$f(x, u)$: discrete dynamics


Optimal Control Problem (OCP)

Continuous time:

$$\begin{aligned}
\min_{u(\cdot)}\quad & M(x(t_f)) + \int_{t_0}^{t_f} L(x(t),u(t))\,\mathrm{d}t \\
\text{s.t.}\quad & x(t_0) = \bar x_0, \\
& \dot x(t) = F(x(t),u(t)), \quad t \in [t_0, t_f], \\
& H(x(t),u(t)) \le 0, \quad t \in [t_0, t_f].
\end{aligned}$$

Discrete-time:

$$\begin{aligned}
\min_{u}\quad & m(x_N) + \sum_{k=0}^{N-1} \ell(x_k,u_k) \\
\text{s.t.}\quad & x_0 = \bar x_0, \\
& x_{k+1} = f(x_k,u_k), \quad k \in \mathbb{I}_0^{N-1}, \\
& h(x_k,u_k) \le 0, \quad k \in \mathbb{I}_0^{N-1}.
\end{aligned}$$

Components of an OCP
1. cost functional
2. initial condition
3. system dynamics
4. path constraints
   - actuator limitations
   - physical limitations
   - user-defined

Remark
1. the ODE can be replaced by
   - a DAE
   - a PDE
2. a terminal constraint can be added
3. the initial condition can be replaced by
   - periodic conditions
   - some components can be left free
4. the independent variable does not need to be time
   - e.g., formulations in space
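To fix ideas, the four ingredients of the discrete-time formulation can be encoded directly. A minimal sketch in plain Python, where the dynamics, costs, and input bound are toy assumptions made here (not the course's data), evaluating the objective and path-constraint residuals of a candidate control sequence:

```python
# Toy ingredients, all assumed for illustration:
f = lambda x, u: 1.1 * x + 0.1 * u   # dynamics x_{k+1} = f(x_k, u_k)
l = lambda x, u: x**2 + u**2         # stage cost l(x_k, u_k)
m = lambda x: 0.0                    # terminal cost m(x_N)
h = lambda x, u: abs(u) - 3.0        # path constraint h(x_k, u_k) <= 0

def evaluate(x0, us):
    """Objective and constraint residuals of the discrete OCP for controls us."""
    x, J, cons = x0, 0.0, []
    for u in us:
        J += l(x, u)
        cons.append(h(x, u))
        x = f(x, u)                  # the dynamics act as equality constraints
    return J + m(x), cons

J, cons = evaluate(1.0, [-0.5, -0.3, 0.0])
print(J, cons)
```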


Direct Optimal Control - The Basic Idea

OCP:

$$\begin{aligned}
\min_{u(\cdot)}\quad & M(x(t_f)) + \int_{t_0}^{t_f} L(x(t),u(t))\,\mathrm{d}t \\
\text{s.t.}\quad & x(t_0) = \bar x_0, \\
& \dot x(t) = F(x(t),u(t)), \quad t \in [t_0, t_f], \\
& H(x(t),u(t)) \le 0, \quad t \in [t_0, t_f].
\end{aligned}$$

Transform the OCP into the NLP

$$\begin{aligned}
\min_{w}\quad & f(w) \\
\text{s.t.}\quad & g(w) = 0, \\
& h(w) \le 0,
\end{aligned}$$

and solve it. The transformation from OCP to NLP is called transcription or discretization.

Then why is it not trivial?
- The choice of transcription strategy has a huge impact on both efficiency and reliability of the NLP solution...
- ... and the best solution strategy depends on the transcription.
- NLPs arising from OCPs have a specific structure which must be exploited.
- Each problem is different and needs to be formulated, discretized and solved by accurately choosing the best method.

We will try to understand all of this together!
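As a first taste of transcription, here is a minimal single-shooting sketch: the controls are the only NLP variables, the states are eliminated by forward simulation, and input bounds are the only path constraints. The scalar dynamics, cost, horizon, and bounds below are toy assumptions, not the course's example:

```python
import numpy as np
from scipy.optimize import minimize

N, x0 = 20, 1.0
f = lambda x, u: 1.1 * x + 0.1 * u   # assumed toy dynamics
l = lambda x, u: x**2 + u**2         # assumed stage cost

def objective(w):
    """Simulate forward and accumulate cost; w = (u_0, ..., u_{N-1})."""
    x, J = x0, 0.0
    for u in w:
        J += l(x, u)
        x = f(x, u)
    return J                         # no terminal cost in this sketch

# Input bounds play the role of the path constraint h(w) <= 0.
res = minimize(objective, np.zeros(N), bounds=[(-3.0, 3.0)] * N)
print(res.fun, res.x[:3])
```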


References

Optimal Control

J. T. Betts. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, SIAM Advances in Design and Control, 2010.

L. T. Biegler. Nonlinear Programming, MOS-SIAM Series on Optimization, 2010.

NLPs

J. Nocedal and S. J. Wright. Numerical Optimization, Springer, 2006.

S. Boyd and L. Vandenberghe. Convex Optimization, Cambridge University Press, 2004.

Integrators

J. C. Butcher. Numerical Methods for Ordinary Differential Equations, Wiley, 2016.

E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I, Springer, 1993.

E. Hairer and G. Wanner. Solving Ordinary Differential Equations II, Springer, 1996.


Survival Map of Direct Optimal Control

[Map: the OCP is transcribed via Single Shooting, Multiple Shooting, or Collocation (the shooting methods relying on an Integrator and its Sensitivities) into an NLP, which is solved by an Interior-Point or SQP method; SQP in turn calls an Active-set or Interior-Point QP solver.]

We will have to jump around this tree... but we'll come back to this map to keep in mind the big picture!


An Overview about OCP Solution Approaches


Overview - 1

         | Continuous Equations           | Discrete Equations
  Global | Hamilton-Jacobi-Bellman (HJB)  | Dynamic Programming (DP)
  Local  | Pontryagin (PMP)               | Direct Optimal Control (DOC)


Bellman Principle of Optimality

Thm: Tails of optimal trajectories are optimal.

Proof: Consider the optimal solution

$$x^*(t) = \begin{cases} x_1(t) & t_0 \le t < t_1 \\ x_2(t) & t_1 \le t < t_f. \end{cases}$$

Assume, for the sake of contradiction, that some $x_3(t)$, $t_1 \le t < t_f$, with $x_3(t_1) = x_2(t_1)$, has a lower cost than $x_2(t)$. Then $x^*(t)$ cannot be optimal.

[Figure: trajectories $x_1$, $x_2$, $x_3$ over $t \in [0, 5]$: the optimal trajectory $x^*$ follows $x_1$ up to $t_1$ and $x_2$ afterwards; a cheaper tail $x_3$ starting at $x_2(t_1)$ would contradict the optimality of $x^*$.]
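The tail property is easy to check by brute force on a tiny discrete problem. In the sketch below, whose dynamics, cost, horizon, and finite control set are all invented for illustration, the optimal tail from $x_1$ costs exactly as much as the tail of the optimal sequence:

```python
from itertools import product

f = lambda x, u: 1.1 * x + 0.1 * u       # assumed toy dynamics
l = lambda x, u: x**2 + u**2             # assumed stage cost
U, N, x0 = (-1.0, 0.0, 1.0), 3, 1.0      # finite control set, short horizon

def rollout(x, us):
    """Total cost and state trajectory for the control sequence us."""
    cost, xs = 0.0, [x]
    for u in us:
        cost += l(x, u)
        x = f(x, u)
        xs.append(x)
    return cost, xs

best = min(product(U, repeat=N), key=lambda us: rollout(x0, us)[0])
_, xs = rollout(x0, best)
tail = min(product(U, repeat=N - 1), key=lambda us: rollout(xs[1], us)[0])
# Bellman: the tail of the optimal sequence is itself optimal from x_1.
assert abs(rollout(xs[1], tail)[0] - rollout(xs[1], best[1:])[0]) < 1e-12
```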


Overview - 2


Tails of optimal trajectories are optimal.

DP: find the cost-to-go v0(x)

$$\begin{aligned}
v_j(\bar x_j) = \min_{u}\quad & m(x_N) + \sum_{k=j}^{N-1} \ell(x_k,u_k) \\
\text{s.t.}\quad & x_j = \bar x_j, \\
& x_{k+1} = f(x_k,u_k), \quad k \in \mathbb{I}_j^{N-1}, \\
& h(x_k,u_k) \le 0, \quad k \in \mathbb{I}_j^{N-1}.
\end{aligned}$$

The same problem started one step later defines $v_{j+1}(\bar x_{j+1})$; substituting it into the problem above collapses the tail into the cost-to-go:

$$\begin{aligned}
v_j(\bar x_j) = \min_{u}\quad & v_{j+1}(x_{j+1}) + \ell(x_j,u_j) \\
\text{s.t.}\quad & x_j = \bar x_j, \quad x_{j+1} = f(x_j,u_j), \quad h(x_j,u_j) \le 0.
\end{aligned}$$

In compact form,

$$v_j(x) = \min_{u}\; v_{j+1}(f(x,u)) + \ell(x,u) + I_h(x,u), \qquad v_N(x) = m(x),$$

with the indicator function

$$I_h(x,u) = \begin{cases} 0 & \text{if } h(x,u) \le 0, \\ \infty & \text{otherwise.} \end{cases}$$

The optimal control is then the minimizer of the recursion at each stage; at $j = 0$,

$$u^*(x) = \arg\min_{u}\; \ell(x,u) + I_h(x,u) + v_1(f(x,u)).$$

Example: $f(x,u) = 1.1x + 0.1u$, $\ell(x,u) = x^2 + u^2$ (tabulated numerically in the sketch after the list below).

[Figures: example state trajectory; cost-to-go $v_0(x)$ and feedback $u^*(x)$ on $x \in [-1, 1]$ computed by DP, coinciding with the LQR solution.]

- Global optimum
- Feedback solution (closed-loop)
- Curse of dimensionality
- Can be deployed with 2-4 states / controls
- Mixed-integer "easily handled"
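For the scalar example, the DP recursion can be tabulated on a grid and compared to LQR. A minimal sketch; the grid, horizon, and zero terminal cost are choices made here, not specified on the slide:

```python
import numpy as np

f = lambda x, u: 1.1 * x + 0.1 * u         # dynamics from the slide
stage = lambda x, u: x**2 + u**2           # stage cost from the slide

N = 50
xg = np.linspace(-1.0, 1.0, 401)           # state grid
ug = np.linspace(-3.0, 3.0, 241)           # control grid
V = np.zeros_like(xg)                      # v_N = m = 0 (assumption)
X, U = np.meshgrid(xg, ug, indexing="ij")
for j in range(N):                         # backward recursion: v_j from v_{j+1}
    Xn = f(X, U)
    inside = (Xn >= xg[0]) & (Xn <= xg[-1])
    Vn = np.where(inside, np.interp(Xn, xg, V), np.inf)  # inf acts as I_h
    Q = stage(X, U) + Vn
    V = Q.min(axis=1)
u_dp = ug[Q.argmin(axis=1)]                # DP feedback u*(x) at the first stage

# LQR gain via the scalar discrete Riccati recursion (a=1.1, b=0.1, q=r=1):
p = 0.0
for _ in range(N):
    p = 1.0 + 1.1**2 * p - (1.1 * 0.1 * p)**2 / (1.0 + 0.1**2 * p)
u_lqr = -(1.1 * 0.1 * p / (1.0 + 0.1**2 * p)) * xg
# Away from the grid boundary, u_dp and u_lqr coincide, as in the slide's figure.
```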


Overview - 3


Tails of optimal trajectories are optimal.

HJB: find the cost-to-go $V(x,t)$ by solving the partial differential equation

$$-\frac{\partial V(x,t)}{\partial t} = \min_{u} \left\{ L(x,u) + I_H(x,u) + \frac{\partial V(x,t)}{\partial x} F(x,u) \right\}, \qquad V(x,t_f) = M(x).$$

Interpretation: start from the DP recursion

$$v_j(x) = \min_{u}\; \ell(x,u) + I_h(x,u) + v_{j+1}(f(x,u)), \qquad v_N(x) = m(x),$$

and take the limit as the step $\delta \to 0$:

$$\lim_{\delta \to 0} \frac{v_{j+1}(x(t+\delta)) - v_j(x(t))}{\delta} = \frac{\partial V(x,t)}{\partial t} + \frac{\partial V(x,t)}{\partial x} F(x,u),$$

$$\lim_{\delta \to 0} \frac{\ell_\delta(x(t),u(t))}{\delta} = \lim_{\delta \to 0} \frac{\int_t^{t+\delta} L(x(\tau),u(\tau))\,\mathrm{d}\tau}{\delta} = L(x(t),u(t)).$$


- Global optimum
- Feedback solution (closed-loop)
- Difficult to solve
- Curse of dimensionality
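In the LQ special case (no inequality constraints) the HJB PDE admits a quadratic solution and collapses to a Riccati ODE. A minimal sketch, assuming $L = \tfrac12(q x^2 + r u^2)$, $F = a x + b u$, $M = 0$, with all data chosen here:

```python
# With V(x, t) = 0.5 * p(t) * x^2, the HJB equation gives the Riccati ODE
#   -pdot = q + 2*a*p - (b*p)**2 / r,   p(tf) = 0,
# and the optimal feedback u*(x, t) = -(b * p(t) / r) * x.
a, b, q, r = 1.0, 1.0, 1.0, 1.0   # assumed problem data
tf, nsteps = 1.0, 10000
dt = tf / nsteps
p = 0.0                            # terminal condition from M = 0
for _ in range(nsteps):            # integrate backward in time (explicit Euler)
    pdot = -(q + 2.0 * a * p - (b * p) ** 2 / r)
    p -= dt * pdot                 # stepping from t to t - dt
print("p(0) =", p)                 # V(x, 0) = 0.5 * p(0) * x**2
```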


Overview - 4


Applied to the continuous OCP above:
- Eliminate the inputs
- Single trajectory (open loop)
- Local optimum
- State constraints not easily handled
- Two-point boundary-value problem (BVP)

PMP: solve the first-order necessary conditions

$$\begin{aligned}
\text{Input:}\quad & u^* = u^*(x,\lambda,\mu) = \arg\min_{u \in \mathbb{U}} \mathcal{H}(x,\lambda,u,\mu), \qquad \mathbb{U} := \{\, u \mid H(x,u) \le 0 \,\}, \\
\text{States:}\quad & \dot x = F(x,u^*), \qquad x(t_0) = \bar x_0, \\
\text{Costates:}\quad & \dot\lambda = -\frac{\partial \mathcal{H}(x,\lambda,u^*,\mu)}{\partial x}, \qquad \lambda(t_f) = \frac{\partial M}{\partial x}(x(t_f)), \\
\text{Algebraic cond.:}\quad & 0 = \mu^\top H(x,u^*), \qquad \mu \ge 0,
\end{aligned}$$

where we rely on the Hamiltonian function

$$\mathcal{H}(x,\lambda,u,\mu) = L(x,u) + \lambda^\top F(x,u) + \mu^\top H(x,u).$$
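For an unconstrained LQ problem these conditions reduce to a linear two-point BVP that scipy can solve directly. A minimal sketch, assuming $L = \tfrac12(q x^2 + r u^2)$, $F = a x + b u$, $M = 0$, with all data chosen here:

```python
import numpy as np
from scipy.integrate import solve_bvp

a, b, q, r, x0bar, tf = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0  # assumed data

def odes(t, y):
    x, lam = y
    u = -b * lam / r                        # u* = argmin_u H  =>  r*u + b*lam = 0
    return np.vstack([a * x + b * u,        # state equation xdot = F(x, u*)
                      -(q * x + a * lam)])  # costate equation lamdot = -dH/dx

def bc(ya, yb):
    return np.array([ya[0] - x0bar,         # x(t0) = x0bar
                     yb[1]])                # lambda(tf) = dM/dx = 0

t = np.linspace(0.0, tf, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
u_opt = -b * sol.sol(t)[1] / r              # open-loop optimal input u*(t)
```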


HJB and Pontryagin (no path constraints)

HJB:

$$-\frac{\partial V(x,t)}{\partial t} = \min_{u} \left\{ L(x,u) + \frac{\partial V(x,t)}{\partial x} F(x,u) \right\}, \qquad V(x,t_f) = M(x).$$

PMP:

$$\begin{aligned}
\mathcal{H}(x,\lambda,u,\mu) &= L(x,u) + \lambda^\top F(x,u), \\
u^* &= \arg\min_{u} \mathcal{H}(x,\lambda,u,\mu) = \arg\min_{u} \left\{ L(x,u) + \lambda^\top F(x,u) \right\}, \\
\dot x &= F(x,u^*), \qquad x(t_0) = \bar x_0, \\
\dot\lambda &= -\nabla_x \mathcal{H}(x,\lambda,u^*,\mu), \qquad \lambda(t_f) = \nabla_x M(x(t_f)).
\end{aligned}$$

Observe: $\lambda(t_f) = \nabla_x V(x(t_f), t_f)$. Use the costate equation to obtain, along the optimal trajectory,

$$\dot\lambda = -\nabla_x \mathcal{H}(x,\lambda,u^*,\mu) = -\nabla_x \left( \min_{u} \left\{ L(x,u) + \lambda(t)^\top F(x,u) \right\} \right) = \nabla_x \frac{\partial V(x,t)}{\partial t}.$$

Then

$$\lambda(t) = \nabla_x V(x,t).$$
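A quick sanity check of $\lambda(t) = \nabla_x V(x,t)$ in the LQ case, assuming $L = \tfrac12(q x^2 + r u^2)$, $F = a x + b u$, $M = 0$:

```latex
% Ansatz: V(x,t) = \tfrac12 p(t) x^2, so \nabla_x V(x,t) = p(t)\, x.
% PMP for this problem: u^* = -(b/r)\lambda, \dot\lambda = -(q x + a \lambda), \lambda(t_f) = 0.
% Substitute \lambda(t) = p(t) x(t) and \dot x = a x + b u^*:
\dot p\, x + p\left(a x - \tfrac{b^2}{r}\, p\, x\right) = -(q x + a p x)
\quad\Longleftrightarrow\quad
-\dot p = q + 2 a p - \tfrac{b^2}{r}\, p^2,
% which is exactly the Riccati ODE obtained from the HJB ansatz,
% hence \lambda(t) = \nabla_x V(x(t), t).
```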


Overview - 5


Discretize the continuous problem:

1. Parametrize $u(\cdot)$, e.g. piecewise constant: $u(t) = u_k$, $t \in [k t_s, (k+1) t_s]$.

2. Integrate:
$$f(x_k,u_k) = x(t_{k+1}) \quad \text{with} \quad \begin{cases} \dot x(t) = F(x(t),u(t)) \\ x(t_k) = x_k, \end{cases} \qquad \ell(x_k,u_k) = \int_{t_k}^{t_{k+1}} L(x(t),u(t))\,\mathrm{d}t.$$

3. Relax the path constraints to a finite set of time points $t_{k,0},\ldots,t_{k,n} \in [t_k, t_{k+1}]$:
$$h(x_k,u_k) = \begin{bmatrix} H(x(t_{k,0}), u(t_{k,0})) \\ \vdots \\ H(x(t_{k,n}), u(t_{k,n})) \end{bmatrix}, \qquad \text{typically } n = 0.$$
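Step 2 is where a numerical integrator enters. A minimal sketch that builds $f(x_k,u_k)$ and $\ell(x_k,u_k)$ with an RK4 scheme by augmenting the state with the running cost; the dynamics $F$ and cost $L$ here are toy assumptions:

```python
import numpy as np

def F(x, u):                          # assumed continuous dynamics (double integrator)
    return np.array([x[1], u[0]])

def L(x, u):                          # assumed stage cost
    return float(x @ x + u @ u)

def discretize(x, u, ts, nsub=10):
    """One sampling interval of xdot = F(x,u), cdot = L(x,u) with RK4, constant u."""
    def zdot(z):
        return np.concatenate([F(z[:-1], u), [L(z[:-1], u)]])
    h = ts / nsub
    z = np.concatenate([x, [0.0]])    # augmented state (x, running cost)
    for _ in range(nsub):
        k1 = zdot(z)
        k2 = zdot(z + h / 2 * k1)
        k3 = zdot(z + h / 2 * k2)
        k4 = zdot(z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z[:-1], z[-1]              # f(x_k, u_k) and l(x_k, u_k)

xk1, lk = discretize(np.array([1.0, 0.0]), np.array([0.5]), ts=0.1)
```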


Properties of DOC:
- Easy to formulate
- Very large and difficult problems
- Many solvers available
- Local optimum
- Mixed-integer not as easy as in DP

For the discrete problem, define the Lagrangian:

$$\mathcal{L} = \bar\lambda^\top(\bar x_0 - x_0) + m(x_N) + \sum_{k=0}^{N-1} \mathcal{L}_k, \qquad \mathcal{L}_k = \ell(x_k,u_k) + \lambda_k^\top\left(f(x_k,u_k) - x_{k+1}\right) + \mu_k^\top h(x_k,u_k).$$

Write the KKT conditions:

$$\begin{aligned}
0 &= \nabla_{u_k}\mathcal{L}, & k &\in \mathbb{I}_0^{N-1}, \\
x_{k+1} &= f(x_k,u_k), & k &\in \mathbb{I}_0^{N-1}, \\
0 &= \nabla_{x_k}\mathcal{L}, & k &\in \mathbb{I}_0^{N-1}, \\
0 &\ge h(x_k,u_k), & k &\in \mathbb{I}_0^{N-1}, \\
0 &\le \mu_k, & k &\in \mathbb{I}_0^{N-1}, \\
0 &= h_{k,i}\,\mu_{k,i}, & k &\in \mathbb{I}_0^{N-1},\; i \in \mathbb{I}_0^{n_h}.
\end{aligned}$$
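Spelling out the stationarity conditions with the Lagrangian above, using the slides' convention $\nabla_x a^\top = \partial a/\partial x$ (a derivation, not on the slide):

```latex
% For k = 0, the role of -\lambda_{k-1} is played by -\bar\lambda,
% the multiplier of the initial condition x_0 = \bar x_0.
\begin{aligned}
0 &= \nabla_{u_k}\mathcal{L}
   = \nabla_{u_k}\ell(x_k,u_k) + \nabla_{u_k} f(x_k,u_k)\,\lambda_k
   + \nabla_{u_k} h(x_k,u_k)\,\mu_k, \\
0 &= \nabla_{x_k}\mathcal{L}
   = \nabla_{x_k}\ell(x_k,u_k) + \nabla_{x_k} f(x_k,u_k)\,\lambda_k
   - \lambda_{k-1} + \nabla_{x_k} h(x_k,u_k)\,\mu_k, \\
0 &= \nabla_{x_N}\mathcal{L} = \nabla_{x_N} m(x_N) - \lambda_{N-1}.
\end{aligned}
```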

M. Zanon & S. Gros Introduction 17 / 25

Page 63: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Overview - 5

Continuous Equations Discrete Equations

Global Hamilton-Jacobi-Bellman (HJB) Dynamic Programming (DP)

Local Pontryagin (PMP) Direct Optimal Control (DOC)

Easy to formulate

Very large and difficult problems

Many solvers available

Local optimum

Mixed-Integer not as easy as in DP



PMP and DOC (no path constraints)

Continuous problem:

min_{u(·)}  M(x(t_f)) + ∫_{t_0}^{t_f} L(x(t), u(t)) dt
s.t.  x(t_0) = x̄_0,
      ẋ(t) = F(x(t), u(t)),   t ∈ [t_0, t_f].

Discrete problem:

min_u  m(x_N) + ∑_{k=0}^{N−1} ℓ(x_k, u_k)
s.t.  x_0 = x̄_0,
      x_{k+1} = f(x_k, u_k),   k ∈ I_0^{N−1}.

PMP, with Hamiltonian H = L(x, u) + λᵀ F(x, u):

0 = ∇_u H                                         (1a)
ẋ = F(x, u),       x(t_0) = x̄_0                   (1b)
−λ̇ = ∇_x H,        λ(t_f) = ∇_x M(x(t_f))         (1c)

KKT conditions of the discrete problem, with 𝓛 = λᵀ(x_0 − x̄_0) + m(x_N) + ∑_{k=0}^{N−1} 𝓛_k and 𝓛_k = ℓ(x_k, u_k) + λ_kᵀ (f(x_k, u_k) − x_{k+1}):

0 = ∇_{u_k} 𝓛,     x_{k+1} = f(x_k, u_k),     x_0 = x̄_0,
0 = ∇_{x_k} 𝓛,     λ_N = ∇_{x_N} m(x_N).

Discretize the PMP conditions with an explicit Euler step of size δ = t_{k+1} − t_k:

(1b) ⇒  x_{k+1} = x_k + δ F(x_k, u_k) =: f(x_k, u_k)

(1c) ⇒  λ_{k−1} = λ_k + δ ∇_{x_k} H(x_k, λ_k, u_k)
             = λ_k + δ (∇_{x_k} L(x_k, u_k) + ∇_{x_k} F(x_k, u_k) λ_k),
i.e.
   0 = δ ∇_{x_k} L(x_k, u_k) + (δ ∇_{x_k} F(x_k, u_k) + I) λ_k − λ_{k−1}
     = δ ∇_{x_k} L(x_k, u_k) + ∇_{x_k} (δ F(x_k, u_k) + x_k) λ_k − λ_{k−1}
     = ∇_{x_k} ℓ(x_k, u_k) + ∇_{x_k} f(x_k, u_k) λ_k − ∇_{x_k} x_k λ_{k−1}
     = ∇_{x_k} 𝓛.

(1a) ⇒  0 = ∇_{u_k} H(x_k, λ_k, u_k) = δ ∇_{u_k} H(x_k, λ_k, u_k)
          = δ ∇_{u_k} L(x_k, u_k) + δ ∇_{u_k} F(x_k, u_k) λ_k
          = ∇_{u_k} ℓ(x_k, u_k) + ∇_{u_k} f(x_k, u_k) λ_k
          = ∇_{u_k} 𝓛.

DOC ≈ PMP: for this specific choice of integration, the KKT conditions of the discrete problem coincide with the Euler-discretized PMP conditions (here ℓ = δ L, so ∇ℓ = δ ∇L and ∇f = I + δ ∇F).

Considering discrete OC ≈ continuous OC leads to Multiple Shooting.
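The chain of equalities above can be checked mechanically. A small sketch (assuming sympy; scalar state and control, with the toy dynamics and cost used later in this lecture standing in for F and L) verifying that the Euler-discretized adjoint residual is exactly ∇_{x_k} 𝓛:

import sympy as sp

xk, uk, lam_k, lam_km1, d = sp.symbols('x_k u_k lambda_k lambda_km1 delta')

L = sp.Rational(1, 2) * (xk**2 + uk**2)   # continuous stage cost L(x, u)
F = uk - sp.sin(xk)                       # continuous dynamics xdot = F(x, u)
f = xk + d * F                            # Euler step: f(x_k,u_k) = x_k + delta*F
ell = d * L                               # Euler quadrature: l = delta*L

# grad_{x_k} of the Lagrangian terms containing x_k:
#   l(x_k,u_k) + lambda_k*(f(x_k,u_k) - x_{k+1}) + lambda_{k-1}*(f_{k-1} - x_k)
grad_L_xk = sp.diff(ell + lam_k * f, xk) - lam_km1

# Euler-discretized adjoint equation, rearranged as in the derivation above:
adjoint = d * sp.diff(L, xk) + (1 + d * sp.diff(F, xk)) * lam_k - lam_km1

print(sp.simplify(grad_L_xk - adjoint))   # -> 0
print(sp.diff(ell + lam_k * f, uk))       # = delta*(u_k + lambda_k) = delta*grad_u H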



Overview - 6

         Continuous Equations              Discrete Equations
Global   Hamilton-Jacobi-Bellman (HJB)     Dynamic Programming (DP)
Local    Pontryagin (PMP)                  Direct Optimal Control (DOC)

Global methods:
  curse of dimensionality
  limited applicability
  good for analysis

Local methods:
  very powerful
  applicable to a wider range of problems
  used for practical problems

DOC:
  straightforward to apply (compared to the other methods)
  solvers available
  the most flexible


Outline

1 Introduction

2 An Overview about OCP Solution Approaches

3 A Motivating Example



Pontryagin Maximum Principle (PMP) - an example

min_{u(·)}  M(x(t_f)) + ∫_{t_0}^{t_f} L(x(t), u(t)) dt
s.t.  x(t_0) = x̄_0,
      ẋ(t) = F(x(t), u(t)),   t ∈ [t_0, t_f]

PMP conditions, as a two-point boundary-value problem:

u* = argmin_u H(x, λ, u)
ẋ = F(x, u*),             x(t_0) = x̄_0
λ̇ = −∇_x H(x, λ, u*),     λ(t_f) = ∇_x M(x(t_f))

Solve by (indirect) single shooting:

1 Solve for u* (as a function of x and λ)

2 Guess λ(t_0) = λ_0

3 Solve the IVP
    ẋ = F(x, u*),
    λ̇ = −∇_x H(x, λ, u*),
  with x(t_0) = x̄_0, λ(t_0) = λ_0

4 Adjust λ_0 to enforce λ(t_f) = ∇_x M(x(t_f))

5 If not converged, go to 2


Concretely:

min_{u(·)}  ∫_0^4 x² + u² dt
s.t.  x(0) = 1,
      ẋ = u − sin(x),   t ∈ [0, 4]

Hamiltonian (taking L = ½ (x² + u²), a harmless rescaling of the cost):

H(x, λ, u) = ½ (x² + u²) + λ (u − sin(x))

Work out the PMP conditions (H is strictly convex in u, so ∇_u H = 0 characterizes the minimizer):

∇_u H = u + λ   ⇒   u* = −λ

ẋ = −λ − sin(x),            x(0) = 1

∇_x H = x − λ cos(x)   ⇒   λ̇ = λ cos(x) − x,    with λ(4) = 0 (since M = 0)
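Steps 2-3 of the shooting procedure amount to one forward simulation of this ODE pair. A minimal sketch (assuming scipy; the tolerances and the guess λ_0 = 1 are arbitrary choices):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    x, lam = y
    return [-lam - np.sin(x),        # xdot   = u* - sin(x), with u* = -lambda
            lam * np.cos(x) - x]     # lamdot = lambda*cos(x) - x

def lam_tf(lam0, tf=4.0, x0=1.0):
    """Integrate the coupled state/costate IVP and return lambda(t_f)."""
    sol = solve_ivp(rhs, (0.0, tf), [x0, lam0], rtol=1e-10, atol=1e-10)
    return sol.y[1, -1]

print(lam_tf(1.0))   # residual of the boundary condition lambda(4) = 0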

How to adjust λ_0? Newton. Write

λ_f = λ(t_f, λ_0 + ∆λ_0) ≈ λ(t_f, λ_0) + (∂λ(t_f, λ_0)/∂λ_0) ∆λ_0;

requiring λ_f to hit the target ∇_x M(x(t_f)) (= 0 here) gives the Newton step

∆λ_0 = −(∂λ(t_f, λ_0)/∂λ_0)^{−1} (λ(t_f, λ_0) − ∇_x M(x(t_f))).
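Putting steps 4-5 on top of the simulation routine lam_tf from the previous sketch, with the sensitivity ∂λ(t_f)/∂λ_0 approximated by a finite difference (a hypothetical but standard choice):

lam0, eps = 1.0, 1e-6
for it in range(20):
    r = lam_tf(lam0)                          # residual lambda(t_f, lam0) - 0
    print(f'iter {it}: lam0 = {lam0:.6f}, residual = {r:.3e}')
    if abs(r) < 1e-8:
        break
    drdlam0 = (lam_tf(lam0 + eps) - r) / eps  # finite-difference sensitivity
    lam0 -= r / drdlam0                       # Newton step on lam0

Because the costate dynamics are unstable in forward time, λ(t_f) reacts very strongly to λ_0, so plain Newton may take wild steps for poor guesses (compare the plots below).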

Guess: λ_0 = 1
[figures: the simulated trajectories over t ∈ [0, 4], shown frame by frame in the slides]

Guess: λ_0 = 0.4
[figures: same quantities; note the axis scales, which range up to ±500 across frames: the IVP solution is extremely sensitive to λ_0]

[figure: a sweep over λ_0 ∈ [0.2, 0.8], with values spanning [−10, 10]]

Is Newton flawed? Newton can be improved, we will see how.

Is there something else to understand? There is also a more fundamental problem.

Page 130: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2 0.2 0.3 0.4 0.5 0.6 0.7 0.8-10

-8

-6

-4

-2

0

2

4

6

8

10

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 131: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2 0.2 0.3 0.4 0.5 0.6 0.7 0.8-10

-8

-6

-4

-2

0

2

4

6

8

10

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 132: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2 0.2 0.3 0.4 0.5 0.6 0.7 0.8-10

-8

-6

-4

-2

0

2

4

6

8

10

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 133: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 134: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 135: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 136: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 137: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 138: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 1

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 139: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 140: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 141: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 142: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 143: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 144: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 145: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 146: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 147: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Guess: λ0 = 0.4

0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2-30

-20

-10

0

10

20

30

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 148: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Is Newton flawed?

Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 149: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Is Newton flawed?Newton can be improved, we willsee how

Is there something else tounderstand?

There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Page 150: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Pontryagin Maximum Principle (PMP) - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1

x = u − sin(x), t ∈ [0, 4]

H (x , λ, u) =1

2

(x2 + u2

)+ λ(u − sin(x))

u∗ = − λx = − λ− sin(x)

x(0) = x0

λ = λ cos(x)− x

λ(4) = 0

1 Solve for u∗

2 Guess λ(t0) = λ0

3 Solve IVP

x = F (x , u∗)

λ = −∇xH (x , λ, u∗, µ)

with λ(t0) = λ0, x(t0) = x0

4 Adjust λ0 to enforce

λ(tf) = ∇xM (x (tf))

5 If not converged, go to 2

Is Newton flawed?Newton can be improved, we willsee how

Is there something else tounderstand?There is also a more fundamentalproblem

M. Zanon & S. Gros Introduction 21 / 25

Pontryagin Maximum Principle (PMP) - numerical difficulties

The dynamics of the states x and the costates λ cannot both be stable!

Consider a domain D(t) in the (x(t), λ(t)) space and define its volume

V(D(t)) = ∫_{D(t)} dD,   with time derivative   d/dt V(D(t)) = ∫_{D(t)} div([ẋ; λ̇]) dD.

Observe that

div([ẋ; λ̇]) = Tr(∂ẋ/∂x + ∂λ̇/∂λ) = Tr(∂F/∂x − ∂/∂λ (∂H/∂x)) = 0,

since

∂/∂λ (∂H/∂x) = ∂/∂λ (∂L/∂x + ∂(λᵀF)/∂x) = ∂F/∂x.

Then

d/dt V(D(t)) = 0  ⇒  V(D(t)) = const.

The flow in (x, λ) space is therefore volume-preserving: any contraction in some directions must be compensated by expansion in others, so integrating states and costates forward together is intrinsically ill-conditioned.
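This can be sanity-checked numerically on the running example. A small sketch (assuming NumPy) linearizes the coupled dynamics at (x, λ) = (0, 0); the Jacobian is traceless, matching div = 0, so its eigenvalues sum to zero and stability in one direction forces instability in another:

```python
# Sketch: linearize the coupled dynamics of the example at (x, lambda) = (0, 0):
#   x_dot      = -lambda - sin(x)
#   lambda_dot = lambda*cos(x) - x
# The Jacobian is traceless (matching div = 0), so its eigenvalues sum to
# zero: any contraction in one direction is paid for by expansion in another.
import numpy as np

J = np.array([[-1.0, -1.0],    # d(x_dot)/dx = -cos(0), d(x_dot)/dlambda = -1
              [-1.0,  1.0]])   # d(lam_dot)/dx = -1,    d(lam_dot)/dlambda = cos(0)

print("trace      :", np.trace(J))            # 0
print("eigenvalues:", np.linalg.eigvals(J))   # approximately +/- sqrt(2)
```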


Simulation with the set

Problem:

min_{u(·)} 1/2 ∫₀⁴ x² + u² dt
s.t. x(0) = 1, ẋ = u − sin(x), t ∈ [0, 4]

Hamiltonian

H(x, λ, u) = 1/2 (x² + u²) + λ (u − sin(x)),

minimized by u = −λ. The dynamics become

ẋ = −λ − sin(x)
λ̇ = λ cos(x) − x

[3-D plots: a set of (x, λ) initial conditions propagated over t ∈ [0, 4] under these dynamics.]
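As a rough companion to the plots, the corners of a small box of (x, λ) initial conditions can be propagated through these dynamics (a sketch assuming SciPy; the box around (1, 0) and the tolerances are arbitrary choices):

```python
# Sketch: push a small square of (x, lambda) initial conditions through the
# coupled dynamics. The flow is divergence-free, so the enclosed area is
# preserved, but the set is sheared and stretched, which is what makes the
# forward integration so sensitive to the guess lambda(0).
import numpy as np
from scipy.integrate import solve_ivp

def odes(t, y):
    x, lam = y
    return [-lam - np.sin(x), lam * np.cos(x) - x]

for x0 in (0.9, 1.1):
    for lam0 in (-0.1, 0.1):
        sol = solve_ivp(odes, (0.0, 4.0), [x0, lam0], rtol=1e-9, atol=1e-9)
        print(f"start ({x0:+.1f}, {lam0:+.1f}) -> end "
              f"({sol.y[0, -1]:+.3e}, {sol.y[1, -1]:+.3e})")
```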


Direct Optimal Control

min_{u(·)} M(x(tf)) + ∫_{t0}^{tf} L(x(t), u(t)) dt
s.t. x(t0) = x0
     ẋ(t) = F(x(t), u(t)), t ∈ [t0, tf]
     H(x(t), u(t)) ≤ 0, t ∈ [t0, tf]

Choose a time grid t0, …, tN.
Discretize the input as w = (u0, …, uN−1), with u(t) = uk for t ∈ [tk, tk+1], k = 0, …, N − 1.
Get the corresponding state trajectory x(t) = x(t, w, x0) and the corresponding cost φ(w).
Solve min_w φ(w).

How?

Gradient steps: w ← w − α ∇φ(w), for α sufficiently small.
Newton steps: w ← w − α ∇²φ(w)⁻¹ ∇φ(w), for α sufficiently small.
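A compact sketch of this single-shooting pipeline in Python (assuming NumPy; forward Euler integration, finite-difference gradients, and the values of N and α are illustrative simplifications, not the course's implementation):

```python
# Sketch of direct single shooting: piecewise-constant inputs w = (u_0..u_{N-1}),
# forward-Euler simulation of x_dot = u - sin(x) from x(0) = 1, and plain
# gradient steps w <- w - alpha * grad(phi). Finite differences stand in for
# a proper adjoint/sensitivity gradient.
import numpy as np

T, N, x0 = 4.0, 20, 1.0
h = T / N

def phi(w):
    x, cost = x0, 0.0
    for u in w:
        cost += h * (x**2 + u**2)    # running cost x^2 + u^2
        x += h * (u - np.sin(x))     # Euler step of the dynamics
    return cost

def grad(w, eps=1e-6):
    return np.array([(phi(w + eps * e) - phi(w - eps * e)) / (2 * eps)
                     for e in np.eye(N)])

w, alpha = np.zeros(N), 0.5          # alpha chosen "sufficiently small"
for it in range(200):
    w -= alpha * grad(w)
print("final cost phi(w) =", phi(w))
```

Swapping the gradient step for a Newton step only requires an (approximate) Hessian of φ, e.g. obtained by differencing grad itself.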


Direct Optimal Control - an example

min_{u(·)} ∫₀⁴ x² + u² dt
s.t. x(0) = 1, ẋ = u − sin(x), t ∈ [0, 4]

N = 20, α = 1

[Plots: x(t) and u(t) over SQP iterations 0 to 10; the iterates settle quickly onto the optimal trajectory.]

Great! Can we stop here? One more example...


Direct Optimal Control - an example

min_{u(·)} ∫₀⁴ x² + u² dt
s.t. x(0) = 1, ẋ = u + sin(x), t ∈ [0, 4]

N = 20, α = 1

[Plots: x(t) and u(t) over SQP iterations 0 to 10; the iterates jump around and do not converge.]

What is happening??

This approach is called Single Shooting. We will unpack what is going on and fix it!
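To reproduce the contrast between the two runs, the same naive shooting loop can be applied to both dynamics; a qualitative sketch assuming NumPy (the step counts and α are arbitrary):

```python
# Sketch: identical single-shooting gradient loops on x_dot = u - sin(x)
# (open-loop stable near the origin) and x_dot = u + sin(x) (unstable).
# In the unstable case the forward simulation amplifies input changes, so
# the cost landscape is badly conditioned and naive iterations struggle,
# mirroring the oscillating SQP iterates on the slide.
import numpy as np

T, N, x0 = 4.0, 20, 1.0
h = T / N

def make_phi(sign):
    def phi(w):
        x, cost = x0, 0.0
        for u in w:
            cost += h * (x**2 + u**2)
            x += h * (u + sign * np.sin(x))
        return cost
    return phi

for sign, label in ((-1.0, "x_dot = u - sin(x)"), (+1.0, "x_dot = u + sin(x)")):
    phi = make_phi(sign)
    w = np.zeros(N)
    for _ in range(100):
        g = np.array([(phi(w + 1e-6 * e) - phi(w - 1e-6 * e)) / 2e-6
                      for e in np.eye(N)])
        w -= 0.2 * g
    print(label, "-> cost after 100 gradient steps:", round(phi(w), 4))
```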


0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 4-5

0

5SQP iter: 11

0 1 2 3 4-3

-2

-1

0

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 217: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 41

2

3

4SQP iter: 12

0 1 2 3 40

1

2

3

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 218: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 41

1.5

2

2.5SQP iter: 13

0 1 2 3 4-2

-1

0

1

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 219: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 4-5

0

5SQP iter: 14

0 1 2 3 4-3

-2

-1

0

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 220: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 40

2

4

6SQP iter: 15

0 1 2 3 40

1

2

3

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 221: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 41

1.5

2

2.5SQP iter: 16

0 1 2 3 4-2

-1

0

1

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 222: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 4-5

0

5SQP iter: 17

0 1 2 3 4-3

-2

-1

0

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 223: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 40

2

4

6SQP iter: 18

0 1 2 3 40

1

2

3

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 224: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 4-10

-5

0

5SQP iter: 19

0 1 2 3 4-4

-2

0

What is happening??

This approach is called Single Shooting.We will unpack what is going on and fix it!

M. Zanon & S. Gros Introduction 25 / 25

Page 225: Mario Zanon & S ebastien Gros · 01-03-2019  · f]: Transform the OCP into the NLP min w f(w) s:t: g(w) = 0; h(w) 0: and solve it The transformation from OCP to NLP is called transcription

M. ZanonandS.Gros

Direct Optimal Control - an example

minu(·)

∫ 4

0

x2 + u2 dt

s.t. x(0) = 1x = u + sin(x), t ∈ [0, 4]

N = 20

α = 1

0 1 2 3 40

10

20

30SQP iter: 20

0 1 2 3 40

5

10

15

What is happening??

This approach is called Single Shooting. We will unpack what is going on and fix it!
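A first hint at the answer, ahead of the full analysis: the sign flip makes the dynamics exponentially unstable exactly where the optimizer wants to go. The cost drives x toward 0, and around x ≈ 0 the linearization of ẋ = u + sin(x) reads δẋ ≈ +δx, so in single shooting a perturbation of an early control is amplified by roughly e^T ≈ 55 at the end of the horizon; for ẋ = u − sin(x) it is damped by the same factor. A back-of-the-envelope check (illustrative only, not the analysis of the coming lectures):

import numpy as np

T = 4.0
# Linearization of xdot = u -/+ sin(x) at x = 0: d(dx)/dt = a * dx with a = -/+1.
for a, name in [(-1.0, "xdot = u - sin(x)"), (+1.0, "xdot = u + sin(x)")]:
    print(f"{name}: perturbation gain over [0, T] = {np.exp(a * T):.3f}")
# -> 0.018 (damped) vs 54.598 (amplified). The single-shooting cost sees the
#    controls only through this ill-conditioned simulation map, so with full
#    steps (alpha = 1) the Newton-type iterations can cycle instead of converge.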
