Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 5: The Ornstein-Uhlenbeck Process

David Applebaum
School of Mathematics and Statistics, University of Sheffield, UK
9th December 2011

Historical Origins

This process was first introduced by Ornstein and Uhlenbeck in the 1930s as a more accurate model of the physical phenomenon of Brownian motion than the Einstein-Smoluchowski-Wiener process. They argued that

Brownian motion = viscous drag of fluid + random molecular bombardment.

Let $v(t)$ be the velocity at time $t$ of a particle of mass $m$ executing Brownian motion. By Newton's second law of motion, the total force acting on the particle at time $t$ is $F(t) = m\frac{dv(t)}{dt}$. We then have

$$m\frac{dv(t)}{dt} = \underbrace{-mkv(t)}_{\text{viscous drag}} + \underbrace{m\sigma\frac{dB(t)}{dt}}_{\text{molecular bombardment}},$$

where $k, \sigma > 0$.

Of course $\frac{dB(t)}{dt}$ doesn't exist, but this is a "physicist's argument". If we cancel the $m$'s and multiply both sides by $dt$ then we get a legitimate SDE - the Langevin equation

$$dv(t) = -kv(t)\,dt + \sigma\,dB(t) \qquad (0.1)$$

Using the integrating factor $e^{kt}$ we can then easily check that the unique solution to this equation is the Ornstein-Uhlenbeck process $(v(t), t \geq 0)$, where

$$v(t) = e^{-kt}v(0) + \sigma\int_0^t e^{-k(t-s)}\,dB(s).$$

We are interested in Lévy processes, so replace $B$ by a $d$-dimensional Lévy process $X$ and $k$ by a $d \times d$ matrix $K$. Our Langevin equation is

$$dY(t) = -KY(t)\,dt + dX(t) \qquad (0.2)$$

and its unique solution is

$$Y(t) = e^{-tK}Y_0 + \int_0^t e^{-(t-s)K}\,dX(s), \qquad (0.3)$$
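The Langevin equation and its explicit solution can be sketched numerically. The Python snippet below (parameter values, sample sizes, and variable names are my own illustrative choices, not from the lecture) discretises equation (0.1) by an Euler-Maruyama scheme and checks the terminal variance against the Gaussian law implied by the exact solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (mine, not from the slides): k = drag, sigma = noise scale.
k, sigma = 1.5, 0.8
T, n, paths = 2.0, 2000, 50_000
dt = T / n

# Euler-Maruyama discretisation of dv = -k v dt + sigma dB, started from v(0) = 0.
v = np.zeros(paths)
for _ in range(n):
    v += -k * v * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)

# The exact solution v(t) = e^{-kt} v(0) + sigma * int_0^t e^{-k(t-s)} dB(s)
# is Gaussian with variance sigma^2 (1 - e^{-2kT}) / (2k) when v(0) = 0.
exact_var = sigma**2 * (1 - np.exp(-2 * k * T)) / (2 * k)
print(v.var(), exact_var)  # the two variances should be close
```

The Euler-Maruyama scheme carries an $O(dt)$ bias in the variance, so the agreement tightens as $n$ grows.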

where $Y_0 := Y(0)$ is a fixed $\mathcal{F}_0$-measurable random variable. We still call the process $Y$ an Ornstein-Uhlenbeck or OU process. Furthermore:

- $Y$ has càdlàg paths.
- $Y$ is a Markov process.

The process $X$ is sometimes called the background driving Lévy process or BDLP.

We get a Markov semigroup on $B_b(\mathbb{R}^d)$ called a Mehler semigroup:

$$T_t f(x) = E(f(Y(t)) \mid Y_0 = x) = \int_{\mathbb{R}^d} f(e^{-tK}x + y)\,\rho_t(dy) \qquad (0.4)$$

where $\rho_t$ is the law of the stochastic integral $\int_0^t e^{-sK}\,dX(s) \stackrel{d}{=} \int_0^t e^{-(t-s)K}\,dX(s)$.

This generalises the classical Mehler formula ($X(t) = B(t)$, $K = kI$):

$$T_t f(x) = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} f\!\left(e^{-kt}x + \sqrt{\frac{1 - e^{-2kt}}{2k}}\,y\right) e^{-\frac{|y|^2}{2}}\,dy.$$
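The Gaussian variance factor $\sqrt{(1 - e^{-2kt})/(2k)}$ in the classical Mehler formula can be sanity-checked in one dimension. The sketch below (my own parameter choices; $f = \cos$ is picked only because it has a closed-form Gaussian expectation) evaluates $T_t f(x)$ by Monte Carlo and compares it with that closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-d classical case (X = B, K = k); parameters are illustrative choices of mine.
k, t, x = 0.7, 1.0, 0.5
m = np.exp(-k * t) * x                      # e^{-kt} x
s2 = (1 - np.exp(-2 * k * t)) / (2 * k)     # variance (1 - e^{-2kt}) / (2k)

# Monte Carlo evaluation of T_t f(x) = E[ f(e^{-kt} x + sqrt(s2) Z) ], Z ~ N(0,1).
z = rng.standard_normal(200_000)
mehler = np.cos(m + np.sqrt(s2) * z).mean()

# For f = cos there is a closed form: E cos(m + aZ) = cos(m) exp(-a^2 / 2),
# so the variance factor in the Mehler formula can be checked directly.
closed = np.cos(m) * np.exp(-s2 / 2)
print(mehler, closed)
```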

In fact $(T_t, t \geq 0)$ satisfies the Feller property: $T_t(C_0(\mathbb{R}^d)) \subseteq C_0(\mathbb{R}^d)$. We also have the skew-convolution semigroup property:

$$\rho_{s+t} = \rho_s^K * \rho_t,$$

where $\rho_s^K(B) = \rho_s(e^{tK}B)$. Another terminology for this is measure-valued cocycle.
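In the 1-d Gaussian case the skew-convolution identity reduces to an exact variance identity, which the following sketch verifies (taking $X = B$ and $K = k$; the numerical values are my own):

```python
import numpy as np

# 1-d Gaussian check of rho_{s+t} = rho_s^K * rho_t for X = B, K = k, where
# rho_r = N(0, (1 - e^{-2kr})/(2k)) is the law of int_0^r e^{-uk} dB(u).
k, s, t = 0.8, 0.4, 1.1

var = lambda r: (1 - np.exp(-2 * k * r)) / (2 * k)   # variance of rho_r

# rho_s^K(B) = rho_s(e^{tK} B) is the law of e^{-tk} Z with Z ~ rho_s, so its
# variance is e^{-2kt} var(s); convolution of independent laws adds variances.
lhs = var(s + t)
rhs = np.exp(-2 * k * t) * var(s) + var(t)
print(lhs, rhs)  # equal up to floating-point rounding
```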

We get nicer probabilistic properties of our solution if we make the following

Assumption: $K$ is strictly positive definite.

OU processes solve simple linear SDEs. They are important in applications such as volatility modelling, Lévy-driven CARMA processes, and branching processes with immigration. In infinite dimensions they solve the simplest linear SPDE with additive noise. To develop this theme, let $H$ and $K$ be separable Hilbert spaces and $(S(t), t \geq 0)$ be a $C_0$-semigroup on $H$ with infinitesimal generator $J$. Let $X$ be a Lévy process on $K$ and $C \in L(K, H)$.

We have the SPDE

$$dY(t) = JY(t)\,dt + C\,dX(t),$$

whose unique solution is

$$Y(t) = S(t)Y_0 + \underbrace{\int_0^t S(t-s)C\,dX(s)}_{\text{stochastic convolution}},$$

and the generalised Mehler semigroup is

$$T_t f(x) = \int_H f(S(t)x + y)\,\rho_t(dy).$$

From now on we will work in finite dimensions and assume the strict positive-definiteness of $K$.

Additive Processes and Wiener-Lévy Integrals

The study of OU processes focusses attention on Wiener-Lévy integrals $I_f(t) := \int_0^t f(s)\,dX(s)$. For simplicity we assume that $f : \mathbb{R}^+ \to M_d(\mathbb{R})$ is continuous.

Recall that $Z = (Z(t), t \geq 0)$ is an additive process if $Z(0) = 0$ (a.s.), $Z$ has independent increments and is stochastically continuous. It follows that each $Z(t)$ is infinitely divisible.

Theorem. $(I_f(t), t \geq 0)$ is an additive process.

Proof (sketch). Independent increments follow from the fact that for $r \leq s \leq t$,

$I_f(s) - I_f(r) = \int_r^s f(u)\,dX(u)$ is $\sigma\{X(b) - X(a);\ r \leq a < b \leq s\}$-measurable,

$I_f(t) - I_f(s) = \int_s^t f(u)\,dX(u)$ is $\sigma\{X(d) - X(c);\ s \leq c < d \leq t\}$-measurable. □
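A Wiener-Lévy integral can be approximated by Riemann sums of increments, which also makes the independent-increments argument above concrete: summands over disjoint time intervals are independent. The sketch below takes the simplest Lévy process $X = B$ and a scalar integrand of my own choosing, and checks the variance $\int_0^t f(s)^2\,ds$ of the Gaussian limit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Riemann-sum sketch of I_f(t) = int_0^t f(s) dX(s) for X = B (Brownian motion);
# the scalar integrand f and all sizes are illustrative choices of mine.
f = lambda s: np.exp(-s)
t, n, paths = 1.0, 1000, 100_000
ds = t / n

# I_f(t) ~= sum_i f(s_i) (B(s_{i+1}) - B(s_i)); increments over disjoint
# intervals are independent, mirroring the additivity proof above.
I = np.zeros(paths)
for i in range(n):
    I += f(i * ds) * np.sqrt(ds) * rng.standard_normal(paths)

# For X = B the integral is Gaussian with variance int_0^t f(s)^2 ds = (1 - e^{-2t})/2.
print(I.var(), (1 - np.exp(-2 * t)) / 2)
```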

Theorem. If $X$ has Lévy symbol $\eta$ then for each $t \geq 0$, $u \in \mathbb{R}^d$,

$$E(e^{i(u, I_f(t))}) = \exp\left\{\int_0^t \eta(f(s)^T u)\,ds\right\}.$$

Proof (sketch). Define $M_f(t) = \exp\{i(u, \int_0^t f(s)\,dX(s))\}$ and use Itô's formula to show that

$$M_f(t) = 1 + i\left(u, \int_0^t M_f(s-)f(s)\,dB(s)\right) + \int_0^t \int_{\mathbb{R}^d - \{0\}} M_f(s-)\left(e^{i(u, f(s)x)} - 1\right)\tilde{N}(ds, dx) + \int_0^t M_f(s-)\eta(f(s)^T u)\,ds.$$

Now take expectations of both sides (the first two terms are martingales and vanish) to get

$$E(M_f(t)) = 1 + \int_0^t E(M_f(s))\,\eta(f(s)^T u)\,ds,$$

and the result follows. □

If $X$ has characteristics $(b, A, \nu)$, it follows that $I_f(t)$ has characteristics $(b_t^f, A_t^f, \nu_t^f)$ where

$$b_t^f = \int_0^t f(s)b\,ds + \int_0^t \int_{\mathbb{R}^d - \{0\}} f(s)x\left(1_B(f(s)x) - 1_B(x)\right)\nu(dx)\,ds,$$

$$A_t^f = \int_0^t f(s)Af(s)^T\,ds,$$

$$\nu_t^f(B) = \int_0^t \nu(f(s)^{-1}(B))\,ds.$$

It follows that every OU process $Y$ conditioned on $Y_0 = y$ is an additive process. It will have the characteristics above with $f(s) = e^{-sK}$ and $b_t^f$ translated by $e^{-tK}y$.

Invariant Measures, Stationary Processes, Ergodicity: General Theory

We want to investigate invariant measures and stationary solutions for OU processes. First a little general theory.

Let $(T_t, t \geq 0)$ be a general Markov semigroup with transition probabilities $p_t(x, B) = T_t 1_B(x)$, so that $T_t f(x) = \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)$ for $f \in B_b(\mathbb{R}^d)$. We say that a probability measure $\mu$ is an invariant measure for the semigroup if for all $t \geq 0$, $f \in B_b(\mathbb{R}^d)$,

$$\int_{\mathbb{R}^d} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx). \qquad (0.5)$$

Equivalently, for all Borel sets B,

  ∫_{R^d} p_t(x, B) µ(dx) = µ(B).   (0.6)

To see that (0.5) ⇒ (0.6), rewrite (0.5) as

  ∫_{R^d} ∫_{R^d} f(y) p_t(x, dy) µ(dx) = ∫_{R^d} f(x) µ(dx),

and put f = 1_B. For the converse, approximate f by simple functions and take limits.
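For intuition, here is a minimal numerical check of the invariance condition (0.6) in the simplest possible setting, a two-state discrete-time Markov chain (a toy example of my own, not from the lecture): the stationary row vector µ satisfies µP = µ, the discrete analogue of ∫ p_t(x, B) µ(dx) = µ(B).

```python
# Toy check of the invariance condition (0.6) for a two-state Markov chain:
# p(x, {y}) is the transition matrix P, and µP = µ is the discrete analogue
# of ∫ p_t(x, B) µ(dx) = µ(B).  (Illustrative example only.)

P = [[0.9, 0.1],
     [0.3, 0.7]]

# Stationary distribution of this chain: µ = (0.75, 0.25) solves µP = µ.
mu = [0.75, 0.25]

for B in range(2):  # check (0.6) for each singleton B = {0}, {1}
    lhs = sum(mu[x] * P[x][B] for x in range(2))
    assert abs(lhs - mu[B]) < 1e-12
print("invariance (0.6) holds:", mu)
```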

e.g. A Lévy process doesn't have an invariant probability measure, but Lebesgue measure is invariant in the sense that, for f ∈ L^1(R^d),

  ∫_{R^d} T_t f(x) dx = ∫_{R^d} ∫_{R^d} f(x + y) p_t(dy) dx = ∫_{R^d} f(x) dx.

A process Z = (Z(t), t ≥ 0) is (strictly) stationary if for all n ∈ N, t_1, . . . , t_n, h ∈ R_+,

  (Z(t_1), . . . , Z(t_n)) =^d (Z(t_1 + h), . . . , Z(t_n + h)).

Theorem
A Markov process Z for which µ is the law of Z(0) is stationary if and only if µ is an invariant measure.

Proof. If the process is stationary then µ is invariant, since

  µ(B) = P(Z(0) ∈ B) = P(Z(t) ∈ B) = ∫_{R^d} p_t(x, B) µ(dx).

For the converse, it is sufficient to prove that E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h))) is independent of h for all f_1, . . . , f_n ∈ B_b(R^d). The proof is by induction. In the case n = 1, it is enough to show that E(f(Z(t))) = E(f(Z(0))):

  E(f(Z(t))) = E(E(f(Z(t)) | F_0))
             = E(T_t f(Z(0)))
             = ∫_{R^d} (T_t f(x)) µ(dx)
             = ∫_{R^d} f(x) µ(dx) = E(f(Z(0))).

For general n, use

  E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h)))
   = E(f_1(Z(t_1 + h)) · · · E(f_n(Z(t_n + h)) | F_{t_{n−1}+h}))
   = E(f_1(Z(t_1 + h)) · · · T_{t_n − t_{n−1}} f_n(Z(t_{n−1} + h))).   □

Let µ be an invariant probability measure for a Markov semigroup (T_t, t ≥ 0). We say µ is ergodic if

  T_t 1_B = 1_B (µ-a.s.) ⇒ µ(B) = 0 or µ(B) = 1.

If µ is ergodic then "time averages" = "space averages" for the corresponding stationary Markov process, i.e.

  lim_{T→∞} (1/T) ∫_0^T f(Z(s)) ds = ∫_{R^d} f(x) µ(dx) a.s.

Fact: The invariant measures form a convex set, and the ergodic measures are the extreme points of this set.

It follows that if an invariant measure is unique then it is ergodic.
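The "time averages = space averages" statement is easy to see numerically. The sketch below (my own illustration, under the assumption of the Brownian-driven OU process treated later, whose invariant measure is N(0, 1/(2k))) simulates a stationary path with the exact one-step Gaussian transition and compares the time average of f(Y) = Y^2 with the space average ∫ x^2 µ(dx) = 1/(2k).

```python
import math
import random

# Illustration of "time averages = space averages" for the stationary Gaussian
# OU process dY(t) = -kY(t)dt + dB(t), whose invariant measure is
# µ = N(0, 1/(2k)).  We use the exact one-step Gaussian transition (no
# discretization bias) and start Y(0) from µ, so the path is stationary.

random.seed(0)
k, dt, n_steps = 1.0, 0.05, 400_000
decay = math.exp(-k * dt)                             # e^{-k dt}
noise_sd = math.sqrt((1.0 - decay ** 2) / (2.0 * k))  # exact conditional s.d.

y = random.gauss(0.0, math.sqrt(1.0 / (2.0 * k)))     # Y(0) ~ µ
acc = 0.0
for _ in range(n_steps):
    acc += y * y
    y = decay * y + noise_sd * random.gauss(0.0, 1.0)

time_avg = acc / n_steps          # ≈ (1/T) ∫_0^T Y(s)^2 ds
space_avg = 1.0 / (2.0 * k)       # = ∫ x^2 µ(dx)
print(time_avg, space_avg)
```

With a long horizon the two averages agree to a couple of decimal places, as the ergodic theorem predicts.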

The Self-Decomposable Connection

Recall that a random variable Z is self-decomposable if for each 0 < a < 1 there exists a random variable W_a that is independent of Z such that

  Z =^d aZ + W_a,

or equivalently ρ_Z = ρ_Z^a ∗ ρ_{W_a}, where ρ_Z^a(B) := ρ_Z(a^{−1}B).

Now suppose that Y is a stationary Ornstein-Uhlenbeck process on R. Then Y_0 is self-decomposable with a = e^{−kt} and W_a(t) = ∫_0^t e^{−ks} dX(s), since

  Y(t) = e^{−kt} Y_0 + ∫_0^t e^{−k(t−s)} dX(s)

and, by the stationary increments of the process X,

  Y(t) =^d Y_0  and  ∫_0^t e^{−k(t−s)} dX(s) =^d ∫_0^t e^{−ks} dX(s)

  ⇒ Y_0 =^d e^{−kt} Y_0 + W_a(t).
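In the Brownian case this self-decomposition can be checked by elementary variance arithmetic (my own sketch, assuming X = B and the stationary law N(0, 1/(2k)) derived later): all the laws involved are centred Gaussian, so Y_0 =^d e^{−kt} Y_0 + W_a(t) reduces to an identity between variances.

```python
import math

# Sanity check of Y_0 =^d e^{-kt} Y_0 + W_a(t) when X = B is Brownian motion:
# Y_0 ~ N(0, 1/(2k)) and, by the Itô isometry,
# W_a(t) = ∫_0^t e^{-ks} dB(s) ~ N(0, ∫_0^t e^{-2ks} ds), independent of Y_0.
# All laws are centred Gaussian, so self-decomposability is the identity
#     1/(2k) = e^{-2kt} · 1/(2k) + ∫_0^t e^{-2ks} ds.

k, t = 0.7, 2.3
var_y0 = 1.0 / (2.0 * k)

# ∫_0^t e^{-2ks} ds by the midpoint rule (closed form: (1 - e^{-2kt})/(2k))
n = 100_000
h = t / n
var_wa = sum(math.exp(-2.0 * k * (i + 0.5) * h) for i in range(n)) * h

print(var_y0, math.exp(-2.0 * k * t) * var_y0 + var_wa)
```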

Now suppose that µ is self-decomposable; more precisely, that

  µ = µ^{e^{−kt}} ∗ ρ_t,

where ρ_t is the law of W_a(t) and µ^{e^{−kt}}(B) := µ(e^{kt}B). Then

  ∫_R T_t f(x) µ(dx) = ∫_R ∫_R f(e^{−kt}x + y) ρ_t(dy) µ(dx)
   = ∫_R ∫_R f(x + y) ρ_t(dy) µ^{e^{−kt}}(dx)
   = ∫_R f(x) (µ^{e^{−kt}} ∗ ρ_t)(dx)
   = ∫_R f(x) µ(dx).

So µ is an invariant measure.

So we have shown that:

Theorem
The following are equivalent for the O-U process Y.

1. Y is stationary.
2. The law of Y(0) is an invariant measure.
3. The law of Y(0) is self-decomposable (with W_a(t) = ∫_0^t e^{−ks} dX(s)).

We seek a condition on the Lévy process X which ensures that Y is stationary.

Fact: If Y_∞ := ∫_0^∞ e^{−ks} dX(s) exists in distribution, then it is self-decomposable.

To see this, observe that (using the stationary increments of X)

  ∫_0^∞ e^{−ks} dX(s) = ∫_t^∞ e^{−ks} dX(s) + ∫_0^t e^{−ks} dX(s)
   =^d ∫_0^∞ e^{−k(t+s)} dX(s) + ∫_0^t e^{−ks} dX(s)
   = e^{−kt} ∫_0^∞ e^{−ks} dX(s) + ∫_0^t e^{−ks} dX(s).

When does lim_{t→∞} ∫_0^t e^{−ks} dX(s) exist in distribution? Use the Lévy-Itô decomposition

  X(t) = bt + M(t) + ∫_{|x|≥1} x N(t, dx).

It is not difficult to see that lim_{t→∞} ∫_0^t e^{−ks} dM(s) exists in the L^2 sense.

Fact: lim_{t→∞} ∫_0^t ∫_{|x|≥1} e^{−ks} x N(ds, dx) exists in distribution if and only if ∫_{|x|≥1} log(1 + |x|) ν(dx) < ∞.

To prove this you need:

1. If (ξ_n, n ∈ N) are i.i.d. then ∑_{n=1}^∞ c^n ξ_n converges a.s. (0 < c < 1) if and only if E(log(1 + |ξ_1|)) < ∞.

2. ∫_0^n ∫_{|x|≥1} e^{−ks} x N(ds, dx) =^d ∑_{j=0}^{n−1} e^{−kj} M_j, where M_j := ∫_j^{j+1} ∫_{|x|≥1} e^{−k(s−j)} x N(ds, dx). Note that (M_j, j ∈ N) are i.i.d.

In this case, Y has characteristics (b_∞^f, A_∞^f, ν_∞^f).

e.g. In the Brownian motion case X(t) = B(t), we get µ ∼ N(0, 1/(2k)).
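Fact 1 above is easy to see numerically. The sketch below (my own illustration) takes i.i.d. ξ_n = 1/U_n with U_n uniform on (0, 1): these have infinite mean, but E(log(1 + ξ_1)) < ∞, and the geometric weights c^n force the partial sums of ∑ c^n ξ_n to settle down.

```python
import random

# Illustration of fact 1: for i.i.d. ξ_n with E(log(1 + |ξ_1|)) < ∞ and
# 0 < c < 1, the random series Σ c^n ξ_n converges a.s.  Here ξ = 1/U with
# U ~ Uniform(0,1): infinite mean, but finite log-moment, and c = 1/2.

random.seed(0)
c = 0.5
terms = [c ** n / random.random() for n in range(1, 201)]
partial_100 = sum(terms[:100])   # partial sum up to n = 100
partial_200 = sum(terms)         # partial sum up to n = 200
print(partial_100, partial_200)  # the tail beyond n = 100 is negligible
```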

In fact, if an invariant measure µ exists then it is unique. For suppose that Y is stationary; then

  Y(0) =^d e^{−kt} Y(0) + ∫_0^t e^{−ks} dX(s).

Now let ρ be the law of Y(0) and Φ_ρ(u) := ∫_R e^{iuy} ρ(dy). Then for all u ∈ R, by independence,

  Φ_ρ(u) = Φ_ρ(e^{−kt}u) exp{ ∫_0^t η(e^{−ks}u) ds },

where η is the Lévy symbol of X. Take limits as t → ∞ to get

  Φ_ρ(u) = exp{ ∫_0^∞ η(e^{−ks}u) ds }.

So ρ is the law of Y_∞.
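This characteristic-function formula can be checked numerically in the Brownian case (a sketch of my own): with η(u) = −u²/2, the integral ∫_0^∞ η(e^{−ks}u) ds evaluates to −u²/(4k), so Φ_ρ is the characteristic function of N(0, 1/(2k)), matching the stationary law found above.

```python
import cmath
import math

# Numerical check of Φ_ρ(u) = exp{∫_0^∞ η(e^{-ks}u) ds} for X = B Brownian
# motion, where the Lévy symbol is η(u) = -u²/2.  The integral equals
# -u²/(4k), i.e. Φ_ρ is the CF of N(0, 1/(2k)).  Plain trapezoidal
# quadrature, truncated at s = T (the integrand decays like e^{-2ks}).

def eta_bm(u):
    return -0.5 * u * u

def phi_rho(u, k, eta, T=40.0, n=200_000):
    h = T / n
    total = 0.5 * (eta(u) + eta(math.exp(-k * T) * u))
    total += sum(eta(math.exp(-k * i * h) * u) for i in range(1, n))
    return cmath.exp(h * total)

k, u = 2.0, 1.5
numeric = phi_rho(u, k, eta_bm)
exact = cmath.exp(-u * u / (4.0 * k))   # CF of N(0, 1/(2k)) at u
print(abs(numeric - exact))
```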

Example: Let (X(t), t ≥ 0) be a compound Poisson process X(t) = ∑_{i=1}^{N(t)} W_i, where the W_i are i.i.d. exponential with common density f_W(x) = a e^{−ax} 1_{x>0}. Then

  η(u) = λa ∫_0^∞ (e^{iux} − 1) e^{−ax} dx = λiu/(a − iu).

You can check that (taking k = 1) Φ_ρ(u) = (1 − ia^{−1}u)^{−λ}, and so ρ has a gamma distribution (with shape λ and rate a).
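This stationary gamma law can also be seen by direct simulation (my own sketch, again with k = 1): Y_∞ = ∫_0^∞ e^{−s} dX(s) = ∑_i e^{−T_i} W_i, summing over the jump times T_i of the rate-λ Poisson process N, and a gamma law with shape λ and rate a has mean λ/a and variance λ/a².

```python
import math
import random

# Monte Carlo sketch of the stationary law for the OU process (k = 1) driven
# by the compound Poisson process above: Y_∞ = Σ_i e^{-T_i} W_i over the jump
# times T_i of a rate-λ Poisson process, with jump sizes W_i ~ Exp(a).
# The target gamma(shape λ, rate a) law has mean λ/a and variance λ/a².

random.seed(1)
lam, a = 3.0, 2.0
t_max, n_samples = 20.0, 20_000        # e^{-t} is negligible beyond t_max

def sample_y_inf():
    y, t = 0.0, random.expovariate(lam)             # first jump time of N
    while t < t_max:
        y += math.exp(-t) * random.expovariate(a)   # add e^{-T_i} W_i
        t += random.expovariate(lam)                # next inter-jump gap
    return y

samples = [sample_y_inf() for _ in range(n_samples)]
mean = sum(samples) / n_samples
var = sum((s - mean) ** 2 for s in samples) / n_samples
print(mean, var)   # compare with λ/a = 1.5 and λ/a² = 0.75
```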

In fact, it is possible to go further: given any self-decomposable distribution µ, there exists a stationary Ornstein-Uhlenbeck process Y such that the law of Y(0) is µ. Let us sketch the proof, which is due to Jurek and Vervaat (1983). Let X be a self-decomposable random variable with distribution µ. Then for each t ≥ 0,

  X =^d e^{−t}X + X_t,

where X and X_t are independent.
The key step is the observation that we can construct an additive process (Z(t), t ≥ 0) such that

  Z(t) =^d X_t  and  Z(t + h) − Z(t) =^d e^{−t}X_h.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 28 / 44



This follows by Kolmogorov's theorem since

X =^d e^{−(t+h)}X + X_{t+h} =^d e^{−t}(e^{−h}X + X_h) + X_t

⇒ X_{t+h} =^d e^{−t}X_h + X_t.

It follows that Y(t) = ∫_0^t e^s dZ(s) also has independent increments. But Y is a Lévy process since

Y(t + h) − Y(t) = ∫_t^{t+h} e^s dZ(s)

= ∫_0^h e^s e^t dZ(s + t)

=^d ∫_0^h e^s e^t e^{−t} dZ(s) = Y(h).

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 29 / 44
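The cocycle identity X_{t+h} =^d e^{−t}X_h + X_t can be checked at the level of characteristic functions. The sketch below takes the gamma law from the earlier example as an illustrative self-decomposable distribution, with Φ_{X_t}(u) = Φ_X(u)/Φ_X(e^{−t}u):

```python
import numpy as np

a, lam = 3.0, 2.0  # illustrative gamma(a, λ) parameters

def phi_X(u):
    """Characteristic function of the gamma(a, λ) law (self-decomposable)."""
    return (1.0 - 1j * u / a) ** (-lam)

def phi_Xt(u, t):
    """Law of the remainder X_t in X =^d e^{-t} X + X_t."""
    return phi_X(u) / phi_X(np.exp(-t) * u)

t, h = 0.4, 0.9
for u in (0.5, 1.0, 2.0):
    lhs = phi_Xt(u, t + h)                              # law of X_{t+h}
    rhs = phi_Xt(np.exp(-t) * u, h) * phi_Xt(u, t)      # law of e^{-t} X_h + X_t
    assert abs(lhs - rhs) < 1e-12
```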



We then find that Z(t) = ∫_0^t e^{−s} dY(s) and so

X =^d e^{−t}X + ∫_0^t e^{−s} dY(s)

=^d e^{−t}X + ∫_0^t e^{−(t−s)} dY(s),

using the stationary increments of Y, which is extended to a Lévy process on the whole of R.
In the 1990s, Sato showed that µ is self-decomposable if and only if it is the law of W(1), where (W(t), t ≥ 0) is a self-similar additive process. Recall that W self-similar (with index H) means for all c ≥ 0

W(ct) =^d c^H W(t).

So we can embed self-decomposable distributions into both stationary OU processes and self-similar additive processes. Is there a connection?

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 30 / 44



To understand the connection between the two "embeddings" of µ we need the

Lamperti Transform. There is a one-to-one correspondence between self-similar processes (W(t), t ≥ 0) and stationary processes (Z(t), t ≥ 0) given by

W(t) = t^H Z(log(t)), or equivalently Z(t) = e^{−tH} W(e^t).

Indeed, if W is self-similar,

Z(t + h) = e^{−(t+h)H} W(e^{t+h}) =^d e^{−tH} e^{−hH} e^{hH} W(e^t)

= Z(t).

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 31 / 44
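A quick numerical illustration of the Lamperti correspondence, taking W to be Brownian motion (self-similar with index H = 1/2, an illustrative choice): the transformed process Z(t) = e^{−tH}W(e^t) should have time-independent one-dimensional marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 0.5            # Brownian motion is self-similar with index H = 1/2
n_paths = 200_000

def W(times):
    """Sample Brownian motion at the given increasing times, one row per path."""
    dt = np.diff(np.concatenate(([0.0], times)))
    return np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(times))), axis=1)

# Lamperti transform: Z(t) = e^{-tH} W(e^t) should be stationary in t
ts = np.array([0.0, 0.7, 1.5])
paths = W(np.exp(ts))
Z = np.exp(-ts * H) * paths

# stationarity check: Var Z(t) = e^{-2tH} Var W(e^t) = 1 for every t
assert np.allclose(Z.var(axis=0), 1.0, atol=0.02)
```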



The next step is due to Jeanblanc, Pitman and Yor (SPA 100, 223 (2002)).

Start with a self-similar additive process (W(t), t ≥ 0). Then we know that W(1) is self-decomposable. There exist two independent, identically distributed Lévy processes (X^−_t, t ≥ 0) and (X^+_t, t ≥ 0) such that

X^−_t = ∫_{e^{−t}}^1 dW(r)/r^H,  X^+_t = ∫_1^{e^t} dW(r)/r^H.

Let (Z(t), t ≥ 0) be the stationary Lamperti transform of W. Then it is an Ornstein-Uhlenbeck process and

Z(t) = e^{−tH} W(1) + ∫_0^t e^{−(t+s)H} dX^+_s,

Z(−t) = e^{−tH} W(1) − ∫_0^t e^{−(t+s)H} dX^−_s.

In the last part of the lecture we'll briefly look at some recent developments.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 32 / 44



Densities of the OU Process

We've seen that each Y(t) is infinitely divisible, so if the Lévy process X(t) has a Gaussian component then so does Y(t), in which case it has a density by Fourier inversion.
More generally, Priola and Zabczyk (BLMS, 41, 41 (2009)) study

dY(t) = AY(t)dt + BdX(t),

where each Y(t) is R^n-valued but X(t) is R^d-valued (n ≥ d). So A is an n × n matrix and B is an n × d matrix.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 33 / 44



Assume:

1. Rank[B, AB, …, A^{n−1}B] = n, where [B, AB, …, A^{n−1}B] is the matrix of the linear mapping from R^{nd} to R^n given by

(u_0, u_1, …, u_{n−1}) → Bu_0 + ABu_1 + ⋯ + A^{n−1}Bu_{n−1}.

2. The restriction of the Lévy measure ν to B_r(0) has a density for some r > 0.

Then Y(t) has a density for t > 0.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 34 / 44
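Condition 1 is the classical Kalman controllability (full-rank) condition, and it is easy to test numerically. A small sketch with illustrative matrices, not taken from the source:

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

# illustrative example with n = 2, d = 1
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
# [B, AB] = [[0, 1], [1, 0]] has full rank, so condition 1 holds
assert kalman_rank(A, B) == 2

# degenerate example: the noise never reaches the first coordinate
B_bad = np.array([[1.0],
                  [0.0]])
assert kalman_rank(A, B_bad) == 1
```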



Application - Volatility Modelling

Consider the Black-Scholes model for a stock price

S(t) = S(0) exp{µt + σB(t)},

where µ ∈ R is the stock drift and σ > 0 is the volatility. By Itô's formula,

dS(t) = σS(t)dB(t) + S(t)(µ + (1/2)σ²)dt.

In stochastic volatility models the parameter σ² is replaced by a stochastic process (σ²(t), t ≥ 0).

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 35 / 44
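The extra (1/2)σ² drift from Itô's formula can be seen in a Monte Carlo sketch (illustrative parameter values): the mean of S(t) grows at rate µ + σ²/2, not µ.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0  # illustrative parameters
n = 1_000_000

# exact Black-Scholes paths at time t: S(t) = S(0) exp(mu t + sigma B(t))
S_t = S0 * np.exp(mu * t + sigma * np.sqrt(t) * rng.standard_normal(n))

# the Ito correction (1/2) sigma^2 appears in the mean growth rate
assert abs(S_t.mean() - S0 * np.exp((mu + 0.5 * sigma**2) * t)) < 2e-3

# growth at rate mu alone does NOT match the empirical mean
assert abs(S_t.mean() - S0 * np.exp(mu * t)) > 0.01
```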



Barndorff-Nielsen and Shephard (JRSS B 63, 167 (2001)) proposed the OU model

dσ²(t) = −λσ²(t)dt + dX(λt),

where λ > 0 and X is a subordinator. Then σ²(t) > 0 (a.s.) since

σ²(t) = e^{−λt}σ²(0) + ∫_0^t e^{−λ(t−s)} dX(λs)

= e^{−λt}( σ²(0) + ∑_{0≤u≤t} e^{λu} ∆X(λu) ).

Assume that ∫_1^∞ log(1 + x) ν(dx) < ∞. Then there is a unique invariant measure µ which is self-decomposable and has characteristic function

µ̂(u) = exp{ ∫_0^∞ (e^{iux} − 1) (k(x)/x) dx },

where k is decreasing.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 36 / 44
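The positivity argument can be illustrated by simulation. The sketch below takes X to be a compound Poisson subordinator with exponential jumps (an illustrative choice of subordinator, with made-up parameters) and evaluates σ²(t) as the exponentially decayed sum of the positive jumps:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t_max = 1.5, 10.0
jump_rate, jump_mean = 2.0, 0.5   # illustrative compound Poisson subordinator, Exp jumps
sigma2_0 = 0.04

# jump times of X(lam t) on [0, t_max]: Poisson with rate lam * jump_rate
n_jumps = rng.poisson(lam * jump_rate * t_max)
jump_times = np.sort(rng.uniform(0.0, t_max, n_jumps))
jump_sizes = rng.exponential(jump_mean, n_jumps)

def sigma2(t):
    """BNS volatility: e^{-lam t} sigma2(0) plus the decayed positive jumps up to t."""
    mask = jump_times <= t
    return (np.exp(-lam * t) * sigma2_0
            + np.sum(np.exp(-lam * (t - jump_times[mask])) * jump_sizes[mask]))

# positivity: every jump of a subordinator is positive, so sigma2 stays positive
assert all(sigma2(t) > 0.0 for t in np.linspace(0.0, t_max, 101))
```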



Problem. Based on discrete-time observations σ²(0), σ²(∆), …, σ²((N − 1)∆), find estimates of the parameters λ and k. For a non-parametric approach, see Jongbloed et al., Bernoulli, 11, 759 (2005).

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 37 / 44



Generalised Ornstein-Uhlenbeck Processes

Let X = (X_1, X_2) be a Lévy process on R². Then each X_i is a real-valued Lévy process. Let Y_0 be independent of X. The generalised Ornstein-Uhlenbeck process is

Y(t) = e^{−X_1(t)}( Y_0 + ∫_0^t e^{X_1(s)} dX_2(s) ).

The usual OU process is obtained by taking X_1(t) = λt (λ > 0). Necessary and sufficient conditions for stationary solutions were found by Lindner and Maller (SPA 115, 1701 (2005)). Almost sure convergence of ∫_0^t e^{−X_1(s)} dX_2(s) as t → ∞ is a sufficient condition, but the general story is more complicated.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 38 / 44
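A minimal numerical sketch of the reduction to the usual OU process, taking X_1(t) = λt and, purely for illustration, X_2 a Brownian motion; Euler sums stand in for the stochastic integrals:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t_max, n = 0.8, 5.0, 50_000  # illustrative parameters and grid
dt = t_max / n
t = np.linspace(dt, t_max, n)

# driving noise: X2 taken to be Brownian motion (illustrative choice)
dX2 = rng.normal(0.0, np.sqrt(dt), n)
Y0 = 1.0

# generalised OU with X1(t) = lam * t (cumsum approximates the stochastic integral)
X1 = lam * t
gen_ou = np.exp(-X1) * (Y0 + np.cumsum(np.exp(X1) * dX2))

# classical OU solution: Y(t) = e^{-lam t} Y0 + int_0^t e^{-lam(t-s)} dX2(s)
classical = np.exp(-lam * t) * Y0 + np.exp(-lam * t) * np.cumsum(np.exp(lam * t) * dX2)

# the two constructions agree path by path
assert np.allclose(gen_ou, classical)
```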


In fact a necessary and sufficient condition for stationary solutions is the almost sure convergence of

∫_0^t e^{-X_1(s-)} dL(s) as t → ∞,

where the one-dimensional Lévy process (L(t), t ≥ 0) is defined by

L(t) := X_2(t) + Σ_{0 ≤ s ≤ t} (e^{-∆X_1(s)} - 1) ∆X_2(s) - t A_{1,2},

where A_{1,2} is the off-diagonal entry of the covariance matrix of the Gaussian component of the bivariate Lévy process (X_1, L). For further work on generalised O-U processes see Lindner and Sato (AP 37, 250 (2009)).

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 39 / 44
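To make the definition of L concrete, here is a small illustrative evaluation on toy data. The helper name, the jump list, and the numerical values are my own hypothetical choices, not from the lecture.

```python
import math

def auxiliary_L(t, X2, jumps, A12):
    """Evaluate the auxiliary Levy process
        L(t) = X2(t) + sum_{0 <= s <= t} (e^{-dX1(s)} - 1) * dX2(s) - t * A12,
    where `jumps` lists joint jump times and sizes (s, dX1, dX2),
    X2 is the sample path of X2 as a function of t, and A12 is the
    off-diagonal entry of the Gaussian covariance matrix."""
    jump_term = sum((math.exp(-d1) - 1.0) * d2
                    for s, d1, d2 in jumps if s <= t)
    return X2(t) + jump_term - t * A12

# Toy path: X2 has drift 1/2 plus a single joint jump at s = 0.4.
jumps = [(0.4, 1.0, 2.0)]  # (time, jump of X1, jump of X2)
X2 = lambda t: 0.5 * t + sum(d2 for s, _, d2 in jumps if s <= t)
L1 = auxiliary_L(1.0, X2, jumps, A12=0.1)
print(L1)  # 2.5 + (e^{-1} - 1)*2 - 0.1, approximately 1.1358
```

Note that in the usual OU case X_1(t) = λt has no jumps, so the correction sum vanishes and L essentially reduces to X_2.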


References and Further Reading

These lectures have been broadly based on my recent book:

D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press (second edition) (2009)

and on an earlier course of lectures partly derived from it, which has been separately published as

D. Applebaum, Lévy processes in Euclidean spaces and groups, in Quantum Independent Increment Processes I: From Classical Probability to Quantum Stochastic Calculus, Springer Lecture Notes in Mathematics, Vol. 1865, M. Schürmann, U. Franz (Eds.), 1-99 (2005)

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 40 / 44


A comprehensive account of the structure and properties of Lévy processes is:

K.-I. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press (1999)

A shorter account, from the point of view of the French school, which concentrates on fluctuation theory and potential theory aspects, is

J. Bertoin, Lévy Processes, Cambridge University Press (1996)

From this point of view, you should also look at

A. Kyprianou, Introductory Lectures on Fluctuations of Lévy Processes with Applications, Springer-Verlag (2006)

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 41 / 44


For an insight into the wide range of both theoretical and applied recent work wherein Lévy processes play a role, consult

O.E. Barndorff-Nielsen, T. Mikosch, S. Resnick (eds), Lévy Processes: Theory and Applications, Birkhäuser, Basel (2001)

For stochastic calculus with jumps, the authoritative treatise is

P. Protter, Stochastic Integration and Differential Equations (second edition), Springer-Verlag, Berlin Heidelberg (2003)

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 42 / 44


For financial modelling I recommend:

R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman and Hall/CRC (2004)

which is extremely comprehensive and also contains a lot of valuable background material on Lévy processes.

W. Schoutens, Lévy Processes in Finance: Pricing Financial Derivatives, Wiley (2003)

is shorter and aimed at a wider audience than mathematicians and statisticians.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 43 / 44


Thank you for listening.

Dave Applebaum (Sheffield UK) Lecture 5 December 2011 44 / 44