Lyapunov functions, value functions, and performance bounds

Markov Tutorial CDC Shanghai 2009



A crash course in stochastic Lyapunov theory for Markov processes (emphasis on continuous time). See also the survey for models in discrete time: https://netfiles.uiuc.edu/meyn/www/spm_files/MarkovTutorial/MarkovTutorialUCSB2010.html


Page 1: Markov Tutorial CDC Shanghai 2009

Lyapunov functions, value functions, and performance bounds

Sean Meyn

Department of Electrical and Computer Engineering, University of Illinois, and the Coordinated Science Laboratory

Joint work with R. Tweedie, I. Kontoyiannis, and P. Mehta

Supported in part by NSF (ECS 05 23620 and prior funding) and AFOSR


Page 2: Markov Tutorial CDC Shanghai 2009

Objectives

Nonlinear state space model ≡ (controlled) Markov process,

Typical form:

dX(t) = f(X(t), U(t)) dt + σ(X(t), U(t)) dW(t)

with state process X, control U, and noise W.

Page 3: Markov Tutorial CDC Shanghai 2009

Objectives

Nonlinear state space model ≡ (controlled) Markov process,

Typical form:

Questions: For a given feedback law,

• Is the state process stable?

• Is the average cost finite?

• Can we solve the DP equations?

• Can we approximate the average cost η? The value function h?

dX(t) = f(X(t), U(t)) dt + σ(X(t), U(t)) dW(t)

with state process X, control U, and noise W; average cost E[c(X(t), U(t))].

Average-cost DP equation:   min_u { c(x, u) + D_u h*(x) } = η*

Page 4: Markov Tutorial CDC Shanghai 2009

Outline

Markov Models

Representations

Lyapunov Theory

Conclusions

π(f) < ∞

DV(x) ≤ −f(x) + b·I_C(x)

‖P^t(x, · ) − π‖_f → 0

sup_{x∈C} E_x[ S_{τ_C}(f) ] < ∞

Page 5: Markov Tutorial CDC Shanghai 2009

I. Markov Models

Page 6: Markov Tutorial CDC Shanghai 2009

Notation

Markov chain:

Countable state space, X

Transition semigroup,

X = {X(t) : t ≥ 0}

P^t(x, y) = P{ X(s + t) = y | X(s) = x },   x, y ∈ X

Page 7: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Markov chain:

Countable state space, X

Transition semigroup,

Generator: for h in a suitable domain of functions,

X = {X(t) : t ≥ 0}

Dh(x) = lim_{t→0} (1/t) E[ h(X(s + t)) − h(X(s)) | X(s) = x ]
      = lim_{t→0} (1/t) ( P^t h(x) − h(x) )

P^t(x, y) = P{ X(s + t) = y | X(s) = x },   x, y ∈ X

Page 8: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Generator: for h in a suitable domain of functions,

Rate matrix:

P^t = e^{Qt}

Dh(x) = lim_{t→0} (1/t) E[ h(X(s + t)) − h(X(s)) | X(s) = x ]
      = lim_{t→0} (1/t) ( P^t h(x) − h(x) )

Dh(x) = Σ_y Q(x, y) h(y)

Page 9: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue (arrival rate α, service rate µ)

Sample paths:

Rate matrix:

X(t + ε) ≈
  x + 1   with prob. εα
  x − 1   with prob. εµ
  x       with prob. 1 − ε(α + µ)

Q =
  [ −α       α        0        0      · · ·
     µ    −(α+µ)      α        0      · · ·
     0        µ    −(α+µ)      α      · · ·
     0        0        µ    −(α+µ)    · · ·
     ⋮        ⋮        ⋮        ⋮       ⋱  ]
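These objects are easy to experiment with numerically. A minimal sketch (an addition to the slides, not from them): the rate matrix above, truncated to states {0, …, N} so Q is a finite matrix, together with the semigroup P^t = e^{Qt}; the values of α, µ, N are illustrative.

```python
# Truncated M/M/1 rate matrix and its semigroup P^t = e^{Qt}.
# arr, mu, N are illustrative; truncation at N is an approximation.
import numpy as np
from scipy.linalg import expm

arr, mu, N = 0.8, 1.0, 50                    # rho = arr/mu < 1
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))                  # rows of a rate matrix sum to zero

Pt = expm(Q * 1.0)                           # P^t at t = 1
assert np.allclose(Pt.sum(axis=1), 1.0)      # each row is a probability vector
assert np.all(Pt >= -1e-12)                  # and is non-negative
```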

Page 10: Markov Tutorial CDC Shanghai 2009

Example: O-U Model

dX(t) = AX(t) dt + B dW(t),   A n × n, B n × 1, W standard BM

Sample paths: [figure, panels σ²_W = 0 and σ²_W = 1]

Generator:

Dh(x) = (Ax)^T ∇h(x) + ½ B^T ∇²h(x) B

Page 11: Markov Tutorial CDC Shanghai 2009

Example: O-U Model

dX(t) = AX(t) dt + B dW(t),   A n × n, B n × 1, W standard BM

Generator, for quadratic h(x) = ½ x^T P x  (so ∇h(x) = Px, ∇²h(x) = P):

Dh(x) = (Ax)^T ∇h(x) + ½ B^T ∇²h(x) B = ½ x^T(PA + A^T P)x + ½ B^T P B
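A quick numerical check of this formula (an addition to the slides; A, B, P, x and the step t are arbitrary illustrative choices): for small t, X(t) is Gaussian with mean e^{At}x and covariance ≈ BB^T t, so E[h(X(t))] has a closed form to compare against h(x) + t·Dh(x).

```python
# Finite-difference check of Dh(x) = (Ax)'∇h(x) + (1/2) B'∇²h(x) B
# for quadratic h(x) = x'Px/2; A, B, P, x are illustrative choices.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [0.5]])
P = np.array([[2.0, 0.5], [0.5, 1.0]])       # any symmetric P
x = np.array([1.0, -2.0])
t = 1e-6

m = expm(A * t) @ x                          # mean of X(t) given X(0) = x
Sigma = B @ B.T * t                          # covariance, to leading order in t
Eh = 0.5 * m @ P @ m + 0.5 * np.trace(P @ Sigma)
lhs = (Eh - 0.5 * x @ P @ x) / t             # (E[h(X(t))] - h(x)) / t
rhs = (A @ x) @ P @ x + 0.5 * (B.T @ P @ B).item()
print(lhs, rhs)                              # agree up to O(t)
```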

Page 12: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Generator: for h in a suitable domain of functions,

Rate matrix:

Resolvent:

P^t = e^{Qt}

Dh(x) = lim_{t→0} (1/t) E[ h(X(s + t)) − h(X(s)) | X(s) = x ]
      = lim_{t→0} (1/t) ( P^t h(x) − h(x) )

Dh(x) = Σ_y Q(x, y) h(y)

R_α = ∫_0^∞ e^{−αt} P^t dt

Page 13: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Generator: for h in a suitable domain of functions,

Rate matrix:

Resolvent and resolvent equations:

P^t = e^{Qt}

Dh(x) = lim_{t→0} (1/t) E[ h(X(s + t)) − h(X(s)) | X(s) = x ]
      = lim_{t→0} (1/t) ( P^t h(x) − h(x) )

Dh(x) = Σ_y Q(x, y) h(y)

R_α = ∫_0^∞ e^{−αt} P^t dt = [αI − Q]^{−1}

QR_α = R_α Q = αR_α − I
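Both resolvent equations can be verified directly on the truncated M/M/1 model (a sketch added here, not from the slides; `arr` denotes the arrival rate to avoid clashing with the discount parameter α):

```python
# Verify R_alpha = [alpha I - Q]^{-1} and Q R_alpha = R_alpha Q = alpha R_alpha - I
# on a truncated M/M/1 rate matrix (arr = arrival rate, mu = service rate).
import numpy as np

arr, mu, N, alpha = 0.8, 1.0, 50, 0.5
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
I = np.eye(N + 1)

R_alpha = np.linalg.inv(alpha * I - Q)
assert np.allclose(Q @ R_alpha, alpha * R_alpha - I)
assert np.allclose(R_alpha @ Q, alpha * R_alpha - I)
```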

Page 14: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Motivation: Dynamic programming. For a cost function c,

Discounted-cost value function:

h_α(x) = R_α c(x) = Σ_{y∈X} R_α(x, y) c(y)
       = ∫_0^∞ e^{−αt} E[ c(X(t)) | X(0) = x ] dt

Page 15: Markov Tutorial CDC Shanghai 2009

Notation: Generators & Resolvents

Motivation: Dynamic programming. For a cost function c,

Resolvent equation = dynamic programming equation,

Discounted-cost value function:

h_α(x) = R_α c(x) = Σ_{y∈X} R_α(x, y) c(y)
       = ∫_0^∞ e^{−αt} E[ c(X(t)) | X(0) = x ] dt

c + Dh_α = αh_α
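A numerical sketch of this correspondence (added; the cost c(x) = x and the truncated M/M/1 model are illustrative): h_α = R_α c, computed by a linear solve, satisfies the discounted DP equation c + Dh_α = αh_α.

```python
# The resolvent equation as a DP equation: h_alpha = R_alpha c satisfies
# c + Q h_alpha = alpha h_alpha, with cost c(x) = x on the truncated M/M/1 chain.
import numpy as np

arr, mu, N, alpha = 0.8, 1.0, 50, 0.5
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
c = np.arange(N + 1, dtype=float)

h_alpha = np.linalg.solve(alpha * np.eye(N + 1) - Q, c)   # R_alpha c
assert np.allclose(c + Q @ h_alpha, alpha * h_alpha)
```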

Page 16: Markov Tutorial CDC Shanghai 2009

Notation: Steady State Distribution

Invariant (probability) measure π: with X(0) ∼ π, X is stationary. In particular,

X(t) ∼ π, t ≥ 0

Page 17: Markov Tutorial CDC Shanghai 2009

Notation: Steady State Distribution

Invariant (probability) measure π: with X(0) ∼ π, X is stationary. In particular,

Characterizations:

X(t) ∼ π,   t ≥ 0

Σ_{x∈X} π(x) P^t(x, y) = π(y),   y ∈ X

Σ_{x∈X} π(x) Q(x, y) = 0,   y ∈ X

α Σ_{x∈X} π(x) R_α(x, y) = π(y),   α > 0
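All three characterizations can be checked at once on the truncated M/M/1 queue, whose invariant measure is geometric, π(x) ∝ ρ^x (sketch added here, not from the slides):

```python
# pi Q = 0, pi P^t = pi, and alpha pi R_alpha = pi on the truncated M/M/1 chain,
# whose invariant measure is geometric by detailed balance.
import numpy as np
from scipy.linalg import expm

arr, mu, N = 0.8, 1.0, 50
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
rho = arr / mu
pi = rho ** np.arange(N + 1); pi /= pi.sum()

assert np.allclose(pi @ Q, 0.0)                       # pi Q = 0
assert np.allclose(pi @ expm(Q * 1.7), pi)            # pi P^t = pi
alpha = 0.5
R_alpha = np.linalg.inv(alpha * np.eye(N + 1) - Q)
assert np.allclose(alpha * pi @ R_alpha, pi)          # alpha pi R_alpha = pi
```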

Page 18: Markov Tutorial CDC Shanghai 2009

Notation: Relative Value Function

Invariant measure π, cost function c, steady-state mean η = π(c)

Relative value function:

h(x) = ∫_0^∞ E[ c(X(t)) − η | X(0) = x ] dt

Page 19: Markov Tutorial CDC Shanghai 2009

Notation: Relative Value Function

Invariant measure π, cost function c, steady-state mean η = π(c)

Relative value function:

h(x) = ∫_0^∞ E[ c(X(t)) − η | X(0) = x ] dt

Solution to Poisson's equation (average-cost DP equation):

c + Dh = η
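Numerically, Poisson's equation Qh = −(c − η) can be solved directly once the additive constant is pinned down, e.g. by h(0) = 0 (sketch added here; the truncated M/M/1 model with c(x) = x is illustrative):

```python
# Solve Poisson's equation Q h = -(c - eta) on the truncated M/M/1 chain,
# pinning h(0) = 0 since h is only determined up to an additive constant.
import numpy as np

arr, mu, N = 0.8, 1.0, 200
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
rho = arr / mu
pi = rho ** np.arange(N + 1); pi /= pi.sum()
c = np.arange(N + 1, dtype=float)
eta = pi @ c

A_sys = np.vstack([Q, np.eye(N + 1)[0]])     # append the row enforcing h(0) = 0
b_sys = np.append(eta - c, 0.0)
h, *_ = np.linalg.lstsq(A_sys, b_sys, rcond=None)
assert np.allclose(Q @ h, eta - c, atol=1e-6)
```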

Page 20: Markov Tutorial CDC Shanghai 2009

II. Representations

π ∝ ν[I − (R − s ⊗ ν)]^{−1}

h = [I − (R − s ⊗ ν)]^{−1} c̃

Page 21: Markov Tutorial CDC Shanghai 2009

Irreducibility

ψ-Irreducibility:

ψ(y) > 0  ⟹  R(x, y) > 0 for all x

ψ(y) > 0  ⟹  P{ X(t) reaches y | X(0) = x } > 0 for all x

Page 22: Markov Tutorial CDC Shanghai 2009

Small Functions and Small Measures

ψ-Irreducibility:

Small functions and measures: For a function s and probability ν,

ψ(y) > 0  ⟹  R(x, y) > 0 for all x

ψ(y) > 0  ⟹  P{ X(t) reaches y | X(0) = x } > 0 for all x

R(x, y) ≥ s(x)ν(y),   x, y ∈ X,   where R = ∫_0^∞ e^{−t} P^t dt

Page 23: Markov Tutorial CDC Shanghai 2009

Small Functions and Small Measures

ψ-Irreducibility:

Small functions and measures: For a function s and probability ν,

Resolvent dominates rank-one matrix,

ψ(y) > 0  ⟹  R(x, y) > 0 for all x

ψ(y) > 0  ⟹  P{ X(t) reaches y | X(0) = x } > 0 for all x

R(x, y) ≥ s(x)ν(y),   x, y ∈ X;   equivalently R ≥ s ⊗ ν,   with R = ∫_0^∞ e^{−t} P^t dt

Page 24: Markov Tutorial CDC Shanghai 2009

Small Functions and Small Measures

ψ-Irreducibility:

Small functions and measures: For a function s and probability ν,

Resolvent dominates rank-one matrix,

ψ(y) > 0  ⟹  R(x, y) > 0 for all x

ψ(y) > 0  ⟹  P{ X(t) reaches y | X(0) = x } > 0 for all x

R(x, y) ≥ s(x)ν(y),   x, y ∈ X;   equivalently R ≥ s ⊗ ν,   with R = ∫_0^∞ e^{−t} P^t dt

and WLOG (ψ-irreducibility justifies the assumption): s(x) > 0 for all x, and ν = δ_{x∗} where ψ(x∗) > 0

Page 25: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue

R(x, y) > 0 for all x and y (irreducible in the usual sense)

Conclusion: R(x, y) ≥ s(x)ν(y), where s(x) := R(x, 0) and ν := δ_0

Page 26: Markov Tutorial CDC Shanghai 2009

Example: O-U Model,   dX(t) = AX(t) dt + B dW(t)

R(x, · ) has a Gaussian density, of full rank if and only if (A, B) is controllable.

Conclusion: Under controllability, for any m there is ε > 0 such that

R(x, A) ≥ s(x)ν(A)   for all x and all A,

where s(x) = ε·I{‖x‖ ≤ m} and ν is the uniform distribution on {‖x‖ ≤ m}.

Page 27: Markov Tutorial CDC Shanghai 2009

Potential Matrix

Potential matrix:

G(x, y) = Σ_{n=0}^∞ (R − s ⊗ ν)^n (x, y),   i.e.,   G = [I − (R − s ⊗ ν)]^{−1}

Page 28: Markov Tutorial CDC Shanghai 2009

Representation of π

Potential matrix:

G(x, y) = Σ_{n=0}^∞ (R − s ⊗ ν)^n (x, y)

νG(y) = Σ_{x∈X} ν(x) G(x, y)

π ∝ νG
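This representation can be tested numerically (sketch added here), taking R at α = 1 with s(x) = R(x, 0) and ν = δ_0, as on the M/M/1 slide above:

```python
# pi proportional to nu G, with R = [I - Q]^{-1}, s(x) = R(x, 0), nu = delta_0,
# and G = [I - (R - s⊗nu)]^{-1}, on the truncated M/M/1 chain.
import numpy as np

arr, mu, N = 0.8, 1.0, 100
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
I = np.eye(N + 1)

R = np.linalg.inv(I - Q)                     # resolvent at alpha = 1
s = R[:, 0]                                  # small function s(x) = R(x, 0)
nu = I[0]                                    # small measure nu = delta_0
G = np.linalg.inv(I - (R - np.outer(s, nu)))
assert np.allclose(G @ s, 1.0)               # Gs = 1 in the recurrent case

piG = nu @ G
piG /= piG.sum()                             # normalize nu G to a probability
rho = arr / mu
pi = rho ** np.arange(N + 1); pi /= pi.sum()
assert np.allclose(piG, pi)                  # pi is proportional to nu G
```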

Page 29: Markov Tutorial CDC Shanghai 2009

Representation of h

Potential matrix:

G(x, y) = Σ_{n=0}^∞ (R − s ⊗ ν)^n (x, y)

Gc̃(x) = Σ_{y∈X} G(x, y) c̃(y),   where c̃(x) = c(x) − η,   η = Σ_{x∈X} π(x) c(x)

h = RGc̃ + constant

Page 30: Markov Tutorial CDC Shanghai 2009

Representation of h

Potential matrix:

G(x, y) = Σ_{n=0}^∞ (R − s ⊗ ν)^n (x, y)

Gc̃(x) = Σ_{y∈X} G(x, y) c̃(y),   where c̃(x) = c(x) − η,   η = Σ_{x∈X} π(x) c(x)

If the sum converges, then Poisson's equation is solved:

c(x) + Dh(x) = η,   h = RGc̃ + constant
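Continuing the previous sketch (an addition, same illustrative model): h = RGc̃ does satisfy c + Dh = η, i.e. c + Qh = η at every state.

```python
# h = R G c_tilde solves Poisson's equation c + Q h = eta (up to a constant).
import numpy as np

arr, mu, N = 0.8, 1.0, 100
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
I = np.eye(N + 1)

R = np.linalg.inv(I - Q)
s, nu = R[:, 0], I[0]
G = np.linalg.inv(I - (R - np.outer(s, nu)))

rho = arr / mu
pi = rho ** np.arange(N + 1); pi /= pi.sum()
c = np.arange(N + 1, dtype=float)
eta = pi @ c
h = R @ G @ (c - eta)                        # h = R G c_tilde
assert np.allclose(c + Q @ h, eta, atol=1e-6)
```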

Page 31: Markov Tutorial CDC Shanghai 2009

III. Lyapunov Theory

π(f) < ∞

DV(x) ≤ −f(x) + b·I_C(x)

‖P^t(x, · ) − π‖_f → 0

sup_{x∈C} E_x[ S_{τ_C}(f) ] < ∞

Page 32: Markov Tutorial CDC Shanghai 2009

Lyapunov Functions

DV ≤ −g + bs

Page 33: Markov Tutorial CDC Shanghai 2009

Lyapunov Functions

General assumptions, for the drift inequality DV ≤ −g + bs:

V : X → (0, ∞)

g : X → [1, ∞)

b < ∞,   s small; e.g., s(x) = I_C(x) with C finite

Page 34: Markov Tutorial CDC Shanghai 2009

Lyapunov Bounds on G   (drift condition DV ≤ −g + bs)

Resolvent equation gives:   RV − V ≤ −Rg + bRs

Page 35: Markov Tutorial CDC Shanghai 2009

Lyapunov Bounds on G   (drift condition DV ≤ −g + bs)

Resolvent equation gives:   RV − V ≤ −Rg + bRs

Since s ⊗ ν is non-negative:   −G^{−1}V = −[I − (R − s ⊗ ν)]V ≤ RV − V ≤ −Rg + bRs

Page 36: Markov Tutorial CDC Shanghai 2009

Lyapunov Bounds on G   (drift condition DV ≤ −g + bs)

Resolvent equation gives:   RV − V ≤ −Rg + bRs

Since s ⊗ ν is non-negative:   −G^{−1}V = −[I − (R − s ⊗ ν)]V ≤ RV − V ≤ −Rg + bRs

More positivity:   V ≥ GRg − bGRs

Some algebra:   GR = G(R − s ⊗ ν) + (Gs) ⊗ ν ≥ G − I,   Gs ≤ 1

Page 37: Markov Tutorial CDC Shanghai 2009

Lyapunov Bounds on G   (drift condition DV ≤ −g + bs)

Resolvent equation gives:   RV − V ≤ −Rg + bRs

Since s ⊗ ν is non-negative:   −G^{−1}V = −[I − (R − s ⊗ ν)]V ≤ RV − V ≤ −Rg + bRs

More positivity:   V ≥ GRg − bGRs

Some algebra:   GR = G(R − s ⊗ ν) + (Gs) ⊗ ν ≥ G − I,   Gs ≤ 1

General bound:   GRg ≤ V + 2b,   hence   Gg ≤ V + g + 2b

Page 38: Markov Tutorial CDC Shanghai 2009

Existence of π

Condition (V2):   DV ≤ −1 + bs

Representation:   π ∝ νG

Bound:   GRg ≤ V + 2b   ⟹   G(x, X) ≤ V(x) + 2b

Conclusion: π exists as a probability measure on X

Page 39: Markov Tutorial CDC Shanghai 2009

Existence of moments

Condition (V3):   DV ≤ −g + bs

Representation:   π ∝ νG

Bound:   Gg ≤ V + g + 2b

Page 40: Markov Tutorial CDC Shanghai 2009

Existence of moments

Condition (V3):   DV ≤ −g + bs

Representation:   π ∝ νG

Bound:   Gg ≤ V + g + 2b

Conclusion: π exists as a probability measure on X, and the steady-state mean is finite:

π(g) := Σ_{x∈X} π(x) g(x) ≤ b
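A numerical check of (V3) and the bound π(g) ≤ b (added here; the scaled Lyapunov function V(x) = x²/(µ − α) is one choice that makes the drift inequality hold off a finite set C):

```python
# Check (V3) and pi(g) <= b on the truncated M/M/1 chain, with the scaled
# quadratic V(x) = x^2 / (mu - arr) and g(x) = 1 + x. The bound pi(g) <= b
# follows from pi(DV) = 0 together with DV <= -g + b * I_C.
import numpy as np

arr, mu, N = 0.8, 1.0, 400
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
x = np.arange(N + 1, dtype=float)
V = x ** 2 / (mu - arr)
g = 1.0 + x
DV = Q @ V

b = float(np.max(DV + g))                    # so that DV <= -g + b * I_C
C = np.where(DV + g > 0)[0]                  # finite set where the drift needs help
rho = arr / mu
pi = rho ** x; pi /= pi.sum()
print(C.max(), b, pi @ g)                    # pi(g) <= b, as concluded above
```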

Page 41: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue,   ρ = α/µ

Linear Lyapunov function, V(x) = x:

DV(x) = Σ_y Q(x, y) y = α(x + 1) + µ(x − 1) − (α + µ)x = −(µ − α),   x > 0

Conclusion: (V2) holds if and only if ρ < 1
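A one-line numerical confirmation of this drift computation (added; truncation at N only distorts the boundary states):

```python
# Drift of V(x) = x for the truncated M/M/1 chain: constant -(mu - arr)
# away from the boundaries x = 0 and x = N.
import numpy as np

arr, mu, N = 0.8, 1.0, 100
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
V = np.arange(N + 1, dtype=float)
DV = Q @ V
assert np.allclose(DV[1:N], -(mu - arr))     # DV(x) = -(mu - arr) for 0 < x < N
print(DV[0])                                 # DV(0) = arr
```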

Page 42: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue,   ρ = α/µ

Quadratic Lyapunov function, V(x) = x²,   with g(x) = 1 + x:

DV(x) = Σ_y Q(x, y) y²
      = α(x + 1)² + µ(x − 1)² − (α + µ)x²
      = α(x² + 2x + 1) + µ(x² − 2x + 1) − (α + µ)x²
      = −2(µ − α)x + α + µ,   x > 0

Conclusion: (V3) holds if and only if ρ < 1

Page 43: Markov Tutorial CDC Shanghai 2009

Example: O-U Model,   dX(t) = AX(t) dt + B dW(t)

Suppose that P > 0 solves the Lyapunov equation,

PA + A^T P = −I

Then, for h quadratic, h(x) = ½ x^T P x  (∇h(x) = Px, ∇²h(x) = P):

Dh(x) = ½ x^T(PA + A^T P)x + ½ B^T P B

Page 44: Markov Tutorial CDC Shanghai 2009

Example: O-U Model,   dX(t) = AX(t) dt + B dW(t)

Suppose that P > 0 solves the Lyapunov equation,   PA + A^T P = −I

Then (V3) follows from the identity

Dh(x) = −½‖x‖² + σ²_X,   σ²_X = ½ B^T P B

Page 45: Markov Tutorial CDC Shanghai 2009

Example: O-U Model,   dX(t) = AX(t) dt + B dW(t)

Suppose that P > 0 solves the Lyapunov equation,   PA + A^T P = −I

Then (V3) follows from the identity

Dh(x) = −½‖x‖² + σ²_X,   σ²_X = ½ B^T P B

That is, the function h(x) = ½ x^T P x solves Poisson's equation

Dh = −g + η,   with g(x) = ½‖x‖² and η = σ²_X
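A sketch of this slide in SciPy (added; A and B are illustrative Hurwitz/controllable choices, and σ²_X = ½BᵀPB follows the Itô convention used above): η can be computed independently from the stationary covariance Σ, which solves AΣ + ΣAᵀ = −BBᵀ, since π(g) = ½ tr(Σ).

```python
# Solve PA + A'P = -I, then verify eta = pi(g) = trace(Sigma)/2 equals
# sigma_X^2 = B'PB/2, where Sigma is the stationary covariance of the O-U model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])     # Hurwitz, illustrative
B = np.array([[1.0], [1.0]])

P = solve_continuous_lyapunov(A.T, -np.eye(2))    # A'P + PA = -I
Sigma = solve_continuous_lyapunov(A, -B @ B.T)    # A Sigma + Sigma A' = -BB'
eta = 0.5 * np.trace(Sigma)                       # pi(g) for g(x) = |x|^2 / 2
sigma2_X = 0.5 * (B.T @ P @ B).item()
assert np.isclose(eta, sigma2_X)
```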

Page 46: Markov Tutorial CDC Shanghai 2009

Poisson's Equation

Condition (V3):   DV ≤ −g + bs

Representation:   h = RGc̃ + constant,   c̃(x) = c(x) − η

Bound:   RGg ≤ V + 2b,   i.e., RGg(x) ≤ V(x) + 2b for all x

Page 47: Markov Tutorial CDC Shanghai 2009

Poisson's Equation

Condition (V3):   DV ≤ −g + bs

Representation:   h = RGc̃ + constant,   c̃(x) = c(x) − η

Bound:   RGg ≤ V + 2b,   i.e., RGg(x) ≤ V(x) + 2b for all x

Conclusion: If c is bounded by g, then h is bounded:   h(x) ≤ V(x) + 2b

Page 48: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue,   ρ = α/µ

Poisson's equation with g(x) = x:   Dh = −g + η

We have (V3) with V a quadratic function of x. Recall, with h(x) = x²:

Dh(x) = −2(µ − α)x + α + µ,   x > 0

Page 49: Markov Tutorial CDC Shanghai 2009

Example: M/M/1 Queue,   ρ = α/µ

Poisson's equation with g(x) = x:   Dh = −g + η

Solved with

h(x) = (x² + x) / (2(µ − α)),   η = ρ / (1 − ρ)
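The closed form is easy to confirm against the generator (sketch added; exact away from the truncation boundary):

```python
# Check h(x) = (x^2 + x) / (2 (mu - arr)) against Dh = -g + eta, with
# g(x) = x and eta = rho / (1 - rho), on the truncated M/M/1 chain.
import numpy as np

arr, mu, N = 0.8, 1.0, 200
Q = np.diag([arr] * N, 1) + np.diag([mu] * N, -1)
Q -= np.diag(Q.sum(axis=1))
x = np.arange(N + 1, dtype=float)
h = (x ** 2 + x) / (2 * (mu - arr))
rho = arr / mu
eta = rho / (1 - rho)
Dh = Q @ h
assert np.allclose(Dh[:N], -x[:N] + eta)     # Dh = -g + eta for x < N
```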

Page 50: Markov Tutorial CDC Shanghai 2009

IV. Conclusions

Page 51: Markov Tutorial CDC Shanghai 2009

Final words

Just as in linear systems theory, Lyapunov functions provide a characterization of system properties, as well as a practical verification tool.

π(f) < ∞

DV(x) ≤ −f(x) + b·I_C(x)

‖P^t(x, · ) − π‖_f → 0

sup_{x∈C} E_x[ S_{τ_C}(f) ] < ∞

Page 52: Markov Tutorial CDC Shanghai 2009

Final words

Just as in linear systems theory, Lyapunov functions provide a characterization of system properties, as well as a practical verification tool.

Much is left out of this survey; in particular:

• Converse theory
• Limit theory
• Approximation techniques to construct Lyapunov functions or approximations to value functions
• Application to controlled Markov processes, and approximate dynamic programming

π(f) < ∞

DV(x) ≤ −f(x) + b·I_C(x)

‖P^t(x, · ) − π‖_f → 0

sup_{x∈C} E_x[ S_{τ_C}(f) ] < ∞

Page 53: Markov Tutorial CDC Shanghai 2009

References

Reading guide:

[1,4] ψ-irreducible foundations
[2,11,12,13] Mean-field models, ODE models, and Lyapunov functions
[1,4,5,9,10] Operator-theoretic methods. See also the appendix of [2]
[3,6,7,10] Generators and continuous-time models

[1] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Cambridge University Press, Cambridge, second edition, 2009. Published in the Cambridge Mathematical Library.

[2] S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, Cambridge, 2007. Pre-publication edition online: http://black.csl.uiuc.edu/~meyn.

[3] S. N. Ethier and T. G. Kurtz. Markov Processes: Characterization and Convergence. John Wiley & Sons, New York, 1986.

[4] E. Nummelin. General Irreducible Markov Chains and Non-negative Operators. Cambridge University Press, Cambridge, 1984.

[5] S. P. Meyn and R. L. Tweedie. Generalized resolvents and Harris recurrence of Markov processes. Contemporary Mathematics, 149:227–250, 1993.

[6] S. P. Meyn and R. L. Tweedie. Stability of Markovian processes III: Foster-Lyapunov criteria for continuous time processes. Adv. Appl. Probab., 25:518–548, 1993.

[7] D. Down, S. P. Meyn, and R. L. Tweedie. Exponential and uniform ergodicity of Markov processes. Ann. Probab., 23(4):1671–1691, 1995.

[8] P. W. Glynn and S. P. Meyn. A Liapounov bound for solutions of the Poisson equation. Ann. Probab., 24(2):916–931, 1996.

[9] I. Kontoyiannis and S. P. Meyn. Spectral theory and limit theorems for geometrically ergodic Markov processes. Ann. Appl. Probab., 13:304–362, 2003. Presented at the INFORMS Applied Probability Conference, NYC, July 2001.

[10] I. Kontoyiannis and S. P. Meyn. Large deviations asymptotics and the spectral theory of multiplicatively regular Markov processes. Electron. J. Probab., 10(3):61–123 (electronic), 2005.

[11] W. Chen, D. Huang, A. Kulkarni, J. Unnikrishnan, Q. Zhu, P. Mehta, S. Meyn, and A. Wierman. Approximate dynamic programming using fluid and diffusion approximations with applications to power management. Accepted for inclusion in the 48th IEEE Conference on Decision and Control, December 16-18, 2009.

[12] P. Mehta and S. Meyn. Q-learning and Pontryagin's Minimum Principle. Accepted for inclusion in the 48th IEEE Conference on Decision and Control, December 16-18, 2009.

[13] G. Fort, S. Meyn, E. Moulines, and P. Priouret. ODE methods for skip-free Markov chain stability with applications to MCMC. Ann. Appl. Probab., 18(2):664–707, 2008.

See also earlier seminal work by Hordijk, Tweedie, ... full references in [1].