Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 5: The Ornstein-Uhlenbeck Process

David Applebaum

School of Mathematics and Statistics, University of Sheffield, UK

9th December 2011
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 1 / 44
Historical Origins

This process was first introduced by Ornstein and Uhlenbeck in the 1930s as a more accurate model of the physical phenomenon of Brownian motion than the Einstein-Smoluchowski-Wiener process. They argued that

Brownian motion = viscous drag of fluid + random molecular bombardment.
Let $v(t)$ be the velocity at time $t$ of a particle of mass $m$ executing Brownian motion. By Newton's second law of motion, the total force acting on the particle at time $t$ is $F(t) = m\frac{dv(t)}{dt}$. We then have

\[ m\frac{dv(t)}{dt} = \underbrace{-mkv(t)}_{\text{viscous drag}} + \underbrace{m\sigma\frac{dB(t)}{dt}}_{\text{molecular bombardment}}, \]

where $k, \sigma > 0$.

Of course, $\frac{dB(t)}{dt}$ doesn't exist, but this is a "physicist's argument". If we cancel the $m$'s and multiply both sides by $dt$ then we get a legitimate SDE, the Langevin equation
\[ dv(t) = -kv(t)\,dt + \sigma\,dB(t) \qquad (0.1) \]

Using the integrating factor $e^{kt}$ we can then easily check that the unique solution to this equation is the Ornstein-Uhlenbeck process $(v(t), t \geq 0)$, where

\[ v(t) = e^{-kt}v(0) + \sigma\int_0^t e^{-k(t-s)}\,dB(s). \]

We are interested in Lévy processes, so replace $B$ by a $d$-dimensional Lévy process $X$ and $k$ by a $d \times d$ matrix $K$. Our Langevin equation is

\[ dY(t) = -KY(t)\,dt + dX(t) \qquad (0.2) \]

and its unique solution is

\[ Y(t) = e^{-tK}Y_0 + \int_0^t e^{-(t-s)K}\,dX(s), \qquad (0.3) \]
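The solution of (0.1) can be cross-checked numerically. The following is a minimal sketch (not part of the lecture; all parameter values are arbitrary): an Euler-Maruyama discretisation of the Langevin equation, whose sample mean and variance at time $t$ are compared with $\mathbb{E}\,v(t) = e^{-kt}v(0)$ and $\operatorname{Var} v(t) = \sigma^2(1 - e^{-2kt})/(2k)$, the values that follow from the integrating-factor solution and the Itô isometry.

```python
import math
import random

def simulate_langevin(v0, k, sigma, t, n_steps, rng):
    """Euler-Maruyama discretisation of dv(t) = -k v(t) dt + sigma dB(t)."""
    dt = t / n_steps
    v = v0
    for _ in range(n_steps):
        v += -k * v * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return v

rng = random.Random(42)
k, sigma, v0, t = 1.0, 0.5, 2.0, 3.0
samples = [simulate_langevin(v0, k, sigma, t, 300, rng) for _ in range(10000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

# Exact moments from v(t) = e^{-kt} v(0) + sigma * int_0^t e^{-k(t-s)} dB(s):
exact_mean = math.exp(-k * t) * v0
exact_var = sigma**2 * (1 - math.exp(-2 * k * t)) / (2 * k)
print(mean, exact_mean)  # close, up to Monte Carlo / discretisation error
print(var, exact_var)
```

The agreement is only to Monte Carlo and Euler-step accuracy; shrinking the step size and raising the sample count tightens both.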
where $Y_0 := Y(0)$ is a fixed $\mathcal{F}_0$-measurable random variable. We still call the process $Y$ an Ornstein-Uhlenbeck or OU process. Furthermore:

- $Y$ has càdlàg paths.
- $Y$ is a Markov process.

The process $X$ is sometimes called the background driving Lévy process or BDLP.
We get a Markov semigroup on $B_b(\mathbb{R}^d)$ called a Mehler semigroup:

\[ T_t f(x) = \mathbb{E}(f(Y(t)) \mid Y_0 = x) = \int_{\mathbb{R}^d} f(e^{-tK}x + y)\,\rho_t(dy) \qquad (0.4) \]

where $\rho_t$ is the law of the stochastic integral $\int_0^t e^{-sK}\,dX(s) \stackrel{d}{=} \int_0^t e^{-(t-s)K}\,dX(s)$. This generalises the classical Mehler formula ($X(t) = B(t)$, $K = kI$):

\[ T_t f(x) = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} f\!\left(e^{-kt}x + \sqrt{\frac{1 - e^{-2kt}}{2k}}\,y\right) e^{-\frac{|y|^2}{2}}\,dy. \]
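A quick numerical sanity check of the classical Mehler formula in $d = 1$ (my own sketch, not from the lecture; parameter values are ad hoc): the Gaussian average on the right-hand side should agree, up to Monte Carlo error, with a direct simulation of $\mathbb{E}(f(Y(t)) \mid Y(0) = x)$ for $dY(t) = -kY(t)\,dt + dB(t)$.

```python
import math
import random

def mehler_estimate(f, x, t, k, rng, n=5000):
    # Right-hand side: average f over the Gaussian transition law
    # N(e^{-kt} x, (1 - e^{-2kt})/(2k)).
    s = math.sqrt((1 - math.exp(-2 * k * t)) / (2 * k))
    m = math.exp(-k * t) * x
    return sum(f(m + s * rng.gauss(0, 1)) for _ in range(n)) / n

def sde_estimate(f, x, t, k, rng, n=5000, steps=200):
    # Left-hand side: Euler-Maruyama simulation of dY = -kY dt + dB, Y(0) = x.
    dt = t / steps
    total = 0.0
    for _ in range(n):
        y = x
        for _ in range(steps):
            y += -k * y * dt + math.sqrt(dt) * rng.gauss(0, 1)
        total += f(y)
    return total / n

a = mehler_estimate(math.cos, 1.0, 0.5, 1.0, random.Random(1))
b = sde_estimate(math.cos, 1.0, 0.5, 1.0, random.Random(2))
print(a, b)  # the two estimates agree to Monte Carlo accuracy
```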
In fact $(T_t, t \geq 0)$ satisfies the Feller property: $T_t(C_0(\mathbb{R}^d)) \subseteq C_0(\mathbb{R}^d)$. We also have the skew-convolution semigroup property:

\[ \rho_{s+t} = \rho_s^K * \rho_t, \]

where $\rho_s^K(B) = \rho_s(e^{tK}B)$. Another terminology for this is measure-valued cocycle.
We get nicer probabilistic properties of our solution if we make the following

Assumption: $K$ is strictly positive definite.

OU processes solve simple linear SDEs. They are important in applications such as volatility modelling, Lévy-driven CARMA processes, and branching processes with immigration. In infinite dimensions they solve the simplest linear SPDE with additive noise. To develop this theme, let $H$ and $K$ be separable Hilbert spaces and $(S(t), t \geq 0)$ be a $C_0$-semigroup on $H$ with infinitesimal generator $J$. Let $X$ be a Lévy process on $K$ and $C \in L(K, H)$.
We have the SPDE

\[ dY(t) = JY(t)\,dt + C\,dX(t), \]

whose unique solution is

\[ Y(t) = S(t)Y_0 + \underbrace{\int_0^t S(t-s)C\,dX(s)}_{\text{stochastic convolution}}, \]

and the generalised Mehler semigroup is

\[ T_t f(x) = \int_H f(S(t)x + y)\,\rho_t(dy). \]

From now on we will work in finite dimensions and assume the strict positive-definiteness of $K$.
Additive Processes and Wiener-Lévy Integrals

The study of OU processes focuses attention on Wiener-Lévy integrals $I_f(t) := \int_0^t f(s)\,dX(s)$. For simplicity we assume that the matrix-valued function $f$ is continuous.

Recall that $Z = (Z(t), t \geq 0)$ is an additive process if $Z(0) = 0$ (a.s.), $Z$ has independent increments and is stochastically continuous. It follows that each $Z(t)$ is infinitely divisible.

Theorem. $(I_f(t), t \geq 0)$ is an additive process.

Proof (sketch). Independent increments follow from the fact that for $r \leq s \leq t$,

$I_f(s) - I_f(r) = \int_r^s f(u)\,dX(u)$ is $\sigma\{X(b) - X(a);\ r \leq a < b \leq s\}$-measurable,

$I_f(t) - I_f(s) = \int_s^t f(u)\,dX(u)$ is $\sigma\{X(d) - X(c);\ s \leq c < d \leq t\}$-measurable. $\Box$
Theorem. If $X$ has Lévy symbol $\eta$ then for each $t \geq 0$, $u \in \mathbb{R}^d$,

\[ \mathbb{E}(e^{i(u, I_f(t))}) = \exp\left\{\int_0^t \eta(f(s)^T u)\,ds\right\}. \]

Proof (sketch). Define $M_f(t) = \exp\left\{i\left(u, \int_0^t f(s)\,dX(s)\right)\right\}$ and use Itô's formula to show that

\[ M_f(t) = 1 + i\left(u, \int_0^t M_f(s-)f(s)\,dB(s)\right) + \int_0^t \int_{\mathbb{R}^d - \{0\}} M_f(s-)\left(e^{i(u, f(s)x)} - 1\right)\tilde{N}(ds, dx) + \int_0^t M_f(s-)\eta(f(s)^T u)\,ds. \]

Now take expectations of both sides to get

\[ \mathbb{E}(M_f(t)) = 1 + \int_0^t \mathbb{E}(M_f(s))\,\eta(f(s)^T u)\,ds, \]

and the result follows. $\Box$
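The theorem can be checked directly in the Brownian case, where everything is explicit. This sketch (my own, not from the lecture) takes $X = B$ in $d = 1$, so $\eta(u) = -u^2/2$, and $f(s) = e^{-ks}$, so the right-hand side integrates in closed form to $\exp\{-u^2(1 - e^{-2kt})/(4k)\}$; the left-hand side is estimated by Monte Carlo, with the Wiener-Lévy integral approximated by Riemann sums against Brownian increments.

```python
import cmath
import math
import random

k, t, u = 0.8, 2.0, 1.3
steps, n = 200, 8000
dt = t / steps
rng = random.Random(7)

# Left side: Monte Carlo estimate of E exp(i u I_f(t)), approximating
# I_f(t) = int_0^t e^{-ks} dB(s) by sums of f(s_j) * Brownian increments.
acc = 0j
for _ in range(n):
    I = sum(math.exp(-k * j * dt) * math.sqrt(dt) * rng.gauss(0, 1)
            for j in range(steps))
    acc += cmath.exp(1j * u * I)
lhs = acc / n

# Right side: exp{ int_0^t eta(e^{-ks} u) ds } with eta(u) = -u^2/2,
# evaluated in closed form.
rhs = math.exp(-u * u * (1 - math.exp(-2 * k * t)) / (4 * k))
print(abs(lhs - rhs))  # small (Monte Carlo / discretisation error)
```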
If $X$ has characteristics $(b, A, \nu)$, it follows that $I_f(t)$ has characteristics $(b_t^f, A_t^f, \nu_t^f)$ where

\[ b_t^f = \int_0^t f(s)b\,ds + \int_0^t \int_{\mathbb{R}^d - \{0\}} f(s)x\left(\mathbf{1}_B(x) - \mathbf{1}_B(f(s)x)\right)\nu(dx)\,ds, \]

\[ A_t^f = \int_0^t f(s)^T A f(s)\,ds, \]

\[ \nu_t^f(B) = \int_0^t \nu(f(s)^{-1}(B))\,ds. \]

It follows that every OU process $Y$ conditioned on $Y_0 = y$ is an additive process. It will have characteristics as above with $f(s) = e^{-sK}$ and $b_t^f$ translated by $e^{-tK}y$.
Invariant Measures, Stationary Processes, Ergodicity: General Theory

We want to investigate invariant measures and stationary solutions for OU processes. First a little general theory.

Let $(T_t, t \geq 0)$ be a general Markov semigroup with transition probabilities $p_t(x, B) = T_t \mathbf{1}_B(x)$, so that $T_t f(x) = \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)$ for $f \in B_b(\mathbb{R}^d)$. We say that a probability measure $\mu$ is an invariant measure for the semigroup if for all $t \geq 0$, $f \in B_b(\mathbb{R}^d)$,

\[ \int_{\mathbb{R}^d} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx). \qquad (0.5) \]
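As a toy illustration (entirely my own, with hypothetical numbers, not from the lecture), the defining identity (0.5) and its set form can be checked on a two-state Markov chain: the transition probabilities $p(x, \cdot)$ are the rows of a stochastic matrix $P$, and an invariant $\mu$ satisfies $\mu P = \mu$.

```python
# Two-state chain: P plays the role of p_t(x, .), mu is its stationary vector.
P = [[0.9, 0.1],
     [0.3, 0.7]]
mu = [0.75, 0.25]  # solves mu P = mu, the discrete form of (0.6)

def Tf(f):
    """The 'semigroup' applied to f: (Pf)(x) = sum_y p(x, y) f(y)."""
    return [sum(P[x][y] * f[y] for y in range(2)) for x in range(2)]

f = [2.0, -1.0]
lhs = sum(mu[x] * Tf(f)[x] for x in range(2))  # int T_t f dmu
rhs = sum(mu[x] * f[x] for x in range(2))      # int f dmu
print(lhs, rhs)  # equal up to floating-point rounding
```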
Equivalently, for all Borel sets $B$,

\[ \int_{\mathbb{R}^d} p_t(x, B)\,\mu(dx) = \mu(B). \qquad (0.6) \]

To see that (0.5) $\Rightarrow$ (0.6), rewrite (0.5) as

\[ \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx), \]

and put $f = \mathbf{1}_B$. For the converse, approximate $f$ by simple functions and take limits.
e.g. A Lévy process doesn't have an invariant probability measure, but Lebesgue measure is invariant in the sense that for $f \in L^1(\mathbb{R}^d)$,

\[ \int_{\mathbb{R}^d} T_t f(x)\,dx = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x + y)\,p_t(dy)\,dx = \int_{\mathbb{R}^d} f(x)\,dx. \]

A process $Z = (Z(t), t \geq 0)$ is (strictly) stationary if for all $n \in \mathbb{N}$, $t_1, \ldots, t_n, h \in \mathbb{R}^+$,

\[ (Z(t_1), \ldots, Z(t_n)) \stackrel{d}{=} (Z(t_1 + h), \ldots, Z(t_n + h)). \]

Theorem. A Markov process $Z$ wherein $\mu$ is the law of $Z(0)$ is stationary if and only if $\mu$ is an invariant measure.
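For the Brownian-driven OU process in $d = 1$ (with a $\sigma$ coefficient), the well-known invariant measure is $N(0, \sigma^2/(2k))$, a fact not derived in this excerpt. The variance bookkeeping behind it is elementary: if $Y(0)$ has that law, the exact transition $Y(t) = e^{-kt}Y(0) + N(0, \sigma^2(1 - e^{-2kt})/(2k))$ returns the same variance, consistent with the theorem above. A quick arithmetic check (my own sketch, arbitrary parameters):

```python
import math

# For dY = -kY dt + sigma dB, if Y(0) ~ N(0, sigma^2/(2k)) then
# Var Y(t) = e^{-2kt} * sigma^2/(2k) + sigma^2 (1 - e^{-2kt})/(2k),
# which collapses to sigma^2/(2k) for every t: the law is unchanged.
k, sigma = 0.7, 1.3
stationary_var = sigma**2 / (2 * k)
vars_t = [math.exp(-2 * k * t) * stationary_var
          + sigma**2 * (1 - math.exp(-2 * k * t)) / (2 * k)
          for t in (0.1, 1.0, 10.0)]
print(vars_t, stationary_var)  # every entry equals sigma^2/(2k)
```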
Proof. If the process is stationary then µ is invariant, since

µ(B) = P(Z(0) ∈ B) = P(Z(t) ∈ B) = ∫_{R^d} p_t(x, B) µ(dx).

For the converse, it's sufficient to prove that E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h))) is independent of h for all f_1, ..., f_n ∈ B_b(R^d). The proof is by induction. In the case n = 1, it's enough to show that

E(f(Z(t))) = E(E(f(Z(t)) | F_0))
= E(T_t f(Z(0)))
= ∫_{R^d} T_t f(x) µ(dx)
= ∫_{R^d} f(x) µ(dx) = E(f(Z(0))).
In general, use

E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h)))
= E(f_1(Z(t_1 + h)) · · · E(f_n(Z(t_n + h)) | F_{t_{n−1}+h}))
= E(f_1(Z(t_1 + h)) · · · T_{t_n − t_{n−1}} f_n(Z(t_{n−1} + h))). □
Let µ be an invariant probability measure for a Markov semigroup (T_t, t ≥ 0). µ is ergodic if

T_t 1_B = 1_B (µ a.s.) ⇒ µ(B) = 0 or µ(B) = 1.

If µ is ergodic then "time averages" = "space averages" for the corresponding stationary Markov process, i.e.

lim_{T→∞} (1/T) ∫_0^T f(Z(s)) ds = ∫_{R^d} f(x) µ(dx) a.s.

Fact: The invariant measures form a convex set, and the ergodic measures are the extreme points of this set. It follows that if an invariant measure is unique then it is ergodic.
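The "time averages = space averages" statement can be seen numerically. A sketch for the Gaussian OU process with f(x) = x² (exact transitions assumed, as before): the time average along one long stationary path should approach the space average ∫ x² µ(dx) = 1/(2k).

```python
import numpy as np

# Time average vs space average along one long stationary Gaussian OU path,
# with f(x) = x^2 and invariant law mu = N(0, 1/(2k)).
rng = np.random.default_rng(2)
k, dt, n_steps = 1.0, 0.01, 500_000      # horizon T = 5000

a = np.exp(-k * dt)
s = np.sqrt((1 - np.exp(-2 * k * dt)) / (2 * k))
z = rng.normal(0.0, np.sqrt(1.0 / (2 * k)))   # Z(0) ~ invariant law

acc = 0.0
for w in rng.normal(0.0, s, size=n_steps):
    z = a * z + w            # exact OU transition over one time step
    acc += z * z * dt        # Riemann sum for int_0^T f(Z(s)) ds

time_avg = acc / (n_steps * dt)
space_avg = 1.0 / (2 * k)    # int x^2 dmu for mu = N(0, 1/(2k))
print(time_avg, space_avg)
```
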
The Self-Decomposable Connection

Recall that a random variable Z is self-decomposable if for each 0 < a < 1 there exists a random variable W_a that is independent of Z such that

Z d= aZ + W_a,

or equivalently ρ_Z = ρ_Z^a ∗ ρ_{W_a}, where ρ_Z^a(B) = ρ_Z(a^{−1}B).

Now suppose that Y is a stationary Ornstein-Uhlenbeck process on R. Then Y0 is self-decomposable with a = e^{−kt} and W_a(t) = ∫_0^t e^{−ks} dX(s), since

Y(t) = e^{−kt} Y0 + ∫_0^t e^{−k(t−s)} dX(s),

and, by the stationary increments of the process X,
Y(t) d= Y0 and ∫_0^t e^{−k(t−s)} dX(s) d= ∫_0^t e^{−ks} dX(s)

⇒ Y0 d= e^{−kt} Y0 + W_a(t).
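In the Gaussian case this decomposition is just a variance identity, which the fragment below checks (a sketch, assuming X = B Brownian motion, so that Var(Y0) = 1/(2k) and Var(W_a(t)) = ∫_0^t e^{−2ks} ds = (1 − e^{−2kt})/(2k)):

```python
import math

# Variance bookkeeping behind Y0 =d e^{-kt} Y0 + W_a(t) in the Gaussian case:
# Var(Y0) = e^{-2kt} Var(Y0) + Var(W_a(t)) must hold for every t >= 0,
# because Y0 and W_a(t) are independent.
k = 0.8
for t in (0.1, 1.0, 5.0):
    var_y0 = 1.0 / (2 * k)
    var_wa = (1 - math.exp(-2 * k * t)) / (2 * k)
    assert abs(var_y0 - (math.exp(-2 * k * t) * var_y0 + var_wa)) < 1e-12
print("variance identity holds for all tested t")
```
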
Now suppose that µ is self-decomposable, more precisely that

µ = µ^{e^{−kt}} ∗ ρ_t,

where ρ_t is the law of W_a(t). Then

∫_R T_t f(x) µ(dx) = ∫_R ∫_R f(e^{−kt}x + y) ρ_t(dy) µ(dx)
= ∫_R ∫_R f(x + y) ρ_t(dy) µ^{e^{−kt}}(dx)
= ∫_R f(x) (µ^{e^{−kt}} ∗ ρ_t)(dx)
= ∫_R f(x) µ(dx).

So µ is an invariant measure.
So we have shown that:

Theorem. The following are equivalent for the O-U process Y:
- Y is stationary.
- The law of Y(0) is an invariant measure.
- The law of Y(0) is self-decomposable (with W_a(t) = ∫_0^t e^{−ks} dX(s)).
We seek some condition on the Lévy process X which ensures that Y is stationary.

Fact: If Y_∞ := ∫_0^∞ e^{−ks} dX(s) exists in distribution then it is self-decomposable.

To see this, observe that (using the stationary increments of X)

∫_0^∞ e^{−ks} dX(s) = ∫_t^∞ e^{−ks} dX(s) + ∫_0^t e^{−ks} dX(s)
d= ∫_0^∞ e^{−k(t+s)} dX(s) + ∫_0^t e^{−ks} dX(s)
= e^{−kt} ∫_0^∞ e^{−ks} dX(s) + ∫_0^t e^{−ks} dX(s).
When does lim_{t→∞} ∫_0^t e^{−ks} dX(s) exist in distribution? Use the Lévy-Itô decomposition

X(t) = bt + M(t) + ∫_{|x|≥1} x N(t, dx).

It is not difficult to see that lim_{t→∞} ∫_0^t e^{−ks} dM(s) exists in the L^2-sense.

Fact: lim_{t→∞} ∫_0^t ∫_{|x|≥1} e^{−ks} x N(ds, dx) exists in distribution if and only if ∫_{|x|≥1} log(1 + |x|) ν(dx) < ∞.
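The log-moment condition can be probed numerically. A sketch (the two Lévy measures below are illustrative choices, not from the lectures): an exponential tail satisfies the condition, while ν(dx) = dx/(x (log x)²) on [e, ∞) is a finite measure whose log-moment diverges, since log(1 + x) ν(dx) dominates dx/(x log x) and the truncated integrals grow like log log R.

```python
import numpy as np

def integrate(f, a, b, n=200_000):
    # Simple midpoint rule; accurate enough for this illustration.
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    return np.sum(f(x)) * (b - a) / n

# Case 1: nu(dx) = e^{-x} dx on [1, inf).  The log-moment is finite
# (the tail of the integrand beyond x = 50 is negligible).
finite_case = integrate(lambda x: np.log1p(x) * np.exp(-x), 1.0, 50.0)

# Case 2: nu(dx) = dx / (x (log x)^2) on [e, inf): a finite Levy measure, but
# log(1+x) nu(dx) dominates dx / (x log x), so the truncated log-moments grow
# like log log R with no finite limit.  Substituting u = log x turns the
# truncated integral into int_1^{log R} log(1 + e^u) / u^2 du.
divergent_case = [integrate(lambda u: np.log1p(np.exp(u)) / u ** 2,
                            1.0, np.log(R))
                  for R in (1e2, 1e4, 1e8, 1e16)]

print("exponential tail:", finite_case)
print("heavy tail truncations:", divergent_case)   # keeps growing with R
```
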
To prove this you need:

1. If (ξ_n, n ∈ N) are i.i.d. then ∑_{n=1}^∞ c^n ξ_n converges a.s. (0 < c < 1) if and only if E(log(1 + |ξ_1|)) < ∞.

2. ∫_0^n ∫_{|x|≥1} e^{−ks} x N(ds, dx) d= ∑_{j=0}^{n−1} e^{−kj} M_j,

where M_j := ∫_j^{j+1} ∫_{|x|≥1} e^{−k(s−j)} x N(ds, dx). Note that (M_j, j ∈ N) are i.i.d.

In this case, Y has characteristics (b_∞^f, A_∞^f, ν_∞^f).

e.g. Brownian motion case: X(t) = B(t), µ ∼ N(0, 1/(2k)).
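The Brownian case is easy to test by direct simulation. A minimal sketch (Euler-Maruyama for the Langevin SDE dY = −kY dt + dB, started far from equilibrium): the long-time marginal should be close to N(0, 1/(2k)).

```python
import numpy as np

# Euler-Maruyama for dY = -k Y dt + dB(t), started far from equilibrium;
# after a long time the marginal law should be close to N(0, 1/(2k)).
rng = np.random.default_rng(3)
k, dt, steps, n_paths = 2.0, 0.002, 5_000, 50_000   # horizon T = 10

y = np.full(n_paths, 5.0)                 # Y(0) = 5 for every path
for _ in range(steps):
    y += -k * y * dt + np.sqrt(dt) * rng.normal(size=n_paths)

print(y.mean(), y.var(), 1.0 / (2 * k))   # mean near 0, var near 1/(2k)
```

The small discretization bias of the Euler scheme (its stationary variance is 1/(2k − k²dt) rather than 1/(2k)) vanishes as dt → 0.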
In fact, if an invariant measure µ exists then it is unique. For suppose that Y is stationary; then

Y(0) d= e^{−kt} Y(0) + ∫_0^t e^{−ks} dX(s).

Now let ρ be the law of Y(0) and Φ_ρ(u) := ∫_R e^{iuy} ρ(dy). Then for all u ∈ R, by independence,

Φ_ρ(u) = Φ_ρ(e^{−kt}u) exp{∫_0^t η(e^{−ks}u) ds},

where η is the Lévy symbol of X. Take limits as t → ∞ to get

Φ_ρ(u) = exp{∫_0^∞ η(e^{−ks}u) ds}.

So ρ is the law of Y_∞.
Example: Let (X(t), t ≥ 0) be a compound Poisson process X(t) = ∑_{i=1}^{N(t)} W_i, where the W_i are i.i.d. exponential with common density f_W(x) = a e^{−ax} 1_{x>0}. Then

η(u) = λa ∫_0^∞ (e^{iux} − 1) e^{−ax} dx = λiu/(a − iu).

You can check (taking k = 1) that Φ_ρ(u) = (1 − ia^{−1}u)^{−λ}, and so ρ has a gamma distribution with shape λ and rate a.
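This gamma law can be checked by simulation. A sketch (k = 1): sample the truncated limit Y_∞ = ∫_0^∞ e^{−s} dX(s) = ∑_i e^{−T_i} W_i, with Poisson(λ) jump times T_i and Exp(a) jump sizes W_i, and compare moments with the gamma law (shape λ, rate a), which has mean λ/a and variance λ/a².

```python
import numpy as np

# Sample the (truncated) limit Y_infty = sum_i e^{-T_i} W_i for the compound
# Poisson process above and compare moments with Gamma(shape=lam, rate=a).
rng = np.random.default_rng(4)
lam, a, horizon, n = 3.0, 2.0, 40.0, 20_000   # e^{-40} makes truncation negligible

samples = np.empty(n)
for i in range(n):
    n_jumps = rng.poisson(lam * horizon)
    # Given their number, Poisson jump times on [0, T] are i.i.d. uniform.
    t = rng.uniform(0.0, horizon, size=n_jumps)
    w = rng.exponential(1.0 / a, size=n_jumps)   # Exp(a) jump sizes
    samples[i] = np.sum(np.exp(-t) * w)

print(samples.mean(), lam / a)         # gamma mean  lam / a
print(samples.var(), lam / a ** 2)     # gamma var   lam / a^2
```
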
In fact, it is possible to go further: given any self-decomposable distribution µ, there exists a stationary Ornstein-Uhlenbeck process Y such that the law of Y(0) is µ. Let's sketch the proof of this, due to Jurek and Vervaat (1983). Let X be a self-decomposable random variable with distribution µ. Then for each t ≥ 0,

X d= e^{−t}X + X_t,

where X and X_t are independent.

The key step is the observation that we can construct an additive process (Z(t), t ≥ 0) such that

Z(t) d= X_t and Z(t + h) − Z(t) d= e^{−t}X_h.
This follows by Kolmogorov's theorem, since

X d= e^{−(t+h)}X + X_{t+h} d= e^{−t}(e^{−h}X + X_h) + X_t
⇒ X_{t+h} d= e^{−t}X_h + X_t.

It follows that Y(t) = ∫_0^t e^s dZ(s) also has independent increments. But Y is a Lévy process since

Y(t + h) − Y(t) = ∫_t^{t+h} e^s dZ(s)
= ∫_0^h e^s e^t dZ(s + t)
d= ∫_0^h e^s e^t e^{−t} dZ(s) = Y(h).
We then find that Z(t) = ∫_0^t e^{−s} dY(s), and so

X d= e^{−t}X + ∫_0^t e^{−s} dY(s)
d= e^{−t}X + ∫_0^t e^{−(t−s)} dY(s),

using the stationary increments of Y, which is extended to a Lévy process on the whole of R.

In the 1990s, Sato showed that µ is self-decomposable if and only if it is the law of W(1), where (W(t), t ≥ 0) is a self-similar additive process. Recall that W is self-similar (with index H) if for all c ≥ 0,

W(ct) d= c^H W(t).

So we can embed self-decomposable distributions into stationary OU processes and self-similar additive processes. Is there a connection?
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 30 / 44
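In the Gaussian case this self-decomposability identity can be sanity-checked numerically: the stationary law of the classical OU equation dX = −X dt + dB is N(0, 1/2), and X decomposes in law as e^{-t}X plus an independent Gaussian remainder. A minimal sketch (the parameter values are illustrative, not from the lecture):

```python
import numpy as np

# Gaussian sanity check of self-decomposability:
# the stationary law of dX = -X dt + dB is N(0, 1/2), and
#   X  =d  e^{-t} X + R_t   with independent R_t ~ N(0, (1 - e^{-2t})/2).
t = 0.8
var_X = 0.5                                    # stationary variance
var_remainder = (1.0 - np.exp(-2.0 * t)) / 2.0
var_decomposed = np.exp(-2.0 * t) * var_X + var_remainder
assert np.isclose(var_decomposed, var_X)       # variances (hence Gaussian laws) agree
```

Since all laws involved are centred Gaussians, matching variances is enough to confirm the identity in distribution.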
To understand the connection between the two "embeddings" of µ we need the Lamperti transform: there is a one-to-one correspondence between self-similar processes (W(t), t ≥ 0) and stationary processes (Z(t), t ≥ 0), given by
\[
W(t) = t^{H}Z(\log t), \qquad \text{or equivalently} \qquad Z(t) = e^{-tH}W(e^{t}).
\]
Indeed, if W is self-similar,
\[
Z(t+h) = e^{-(t+h)H}W(e^{t+h}) \stackrel{d}{=} e^{-tH}e^{-hH}e^{hH}W(e^{t}) = Z(t).
\]
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 31 / 44
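The two formulas are inverse to each other, which can be checked path by path. The sketch below uses a deterministic stand-in path for Z (purely illustrative) with H = 1/2:

```python
import numpy as np

H = 0.5

def Z(t):
    # Stand-in "stationary path" (illustrative only; any function would do)
    return np.cos(t)

def W(t):
    # Lamperti transform: W(t) = t^H Z(log t)
    return t**H * Z(np.log(t))

def Z_back(t):
    # Inverse transform: Z(t) = e^{-tH} W(e^t)
    return np.exp(-t * H) * W(np.exp(t))

ts = np.linspace(-2.0, 2.0, 9)
assert np.allclose(Z_back(ts), Z(ts))   # the round trip recovers Z exactly
```

The round trip works for any path and any H, which is what makes the correspondence one-to-one.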
The next step is due to Jeanblanc, Pitman and Yor (SPA 100, 223 (2002)).
Start with a self-similar additive process (W(t), t ≥ 0). Then we know that W(1) is self-decomposable. There exist two independent, identically distributed Lévy processes (X⁻_t, t ≥ 0) and (X⁺_t, t ≥ 0) such that
\[
X^{-}_{t} = \int_{e^{-t}}^{1} \frac{dW(r)}{r^{H}}, \qquad X^{+}_{t} = \int_{1}^{e^{t}} \frac{dW(r)}{r^{H}}.
\]
Let (Z(t), t ≥ 0) be the stationary Lamperti transform of W. Then it is an Ornstein-Uhlenbeck process and
\[
Z(t) = e^{-tH}W(1) + \int_0^t e^{-(t+s)H}\,dX^{+}_{s}, \qquad
Z(-t) = e^{-tH}W(1) - \int_0^t e^{-(t+s)H}\,dX^{-}_{s}.
\]
In the last part of the lecture we'll briefly look at some recent developments.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 32 / 44
Densities of the OU Process

We've seen that each Y(t) is infinitely divisible, so if the Lévy process X(t) has a Gaussian component then so does Y(t), in which case it has a density by Fourier inversion.
More generally, Priola and Zabczyk (BLMS 41, 41 (2009)) study
\[
dY(t) = AY(t)\,dt + B\,dX(t),
\]
where each Y(t) is R^n-valued while the driving process X(t) is R^d-valued (d ≤ n), so that A is an n × n matrix and B is an n × d matrix.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 33 / 44
Assume:

1. Rank[B, AB, ..., A^{n−1}B] = n, where [B, AB, ..., A^{n−1}B] is the matrix of the linear mapping from R^{nd} to R^n given by
\[
(u_0, u_1, \ldots, u_{n-1}) \to Bu_0 + ABu_1 + \cdots + A^{n-1}Bu_{n-1}.
\]
2. The restriction of the Lévy measure ν to B_r(0) has a density for some r > 0.

Then Y(t) has a density for every t > 0.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 34 / 44
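Condition 1 is the classical Kalman controllability rank condition, and for given matrices it can be checked directly. A short sketch (the example matrices are my own, not from the lecture):

```python
import numpy as np

def controllability_rank(A, B):
    """Rank of the block matrix [B, AB, ..., A^{n-1} B] for n x n A and n x d B."""
    n = A.shape[0]
    blocks = [np.linalg.matrix_power(A, k) @ B for k in range(n)]
    return np.linalg.matrix_rank(np.hstack(blocks))

# Toy example (hypothetical values): noise enters only the second component,
# but A propagates it to the first, so the rank condition still holds.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
assert controllability_rank(A, B) == 2   # condition 1: rank equals n
```

Here B alone has rank 1, so the smoothing needed for a density really does come from the interplay of A and B rather than from B by itself.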
Application - Volatility Modelling

Consider the Black-Scholes model for a stock price
\[
S(t) = S(0)\exp\{\mu t + \sigma B(t)\},
\]
where µ ∈ R is the stock drift and σ > 0 is the volatility. By Itô's formula,
\[
dS(t) = \sigma S(t)\,dB(t) + S(t)\left(\mu + \tfrac{1}{2}\sigma^{2}\right)dt.
\]
In stochastic volatility models the parameter σ² is replaced by a stochastic process (σ²(t), t ≥ 0).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 35 / 44
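The extra ½σ² in the dt-term is the Itô correction, and one consequence is the moment identity E S(t) = S(0) e^{(µ + σ²/2)t}. A quick seeded Monte Carlo check of that identity (all parameter values are illustrative):

```python
import numpy as np

# Monte Carlo check that E S(t) = S(0) exp((mu + sigma^2/2) t)
# for S(t) = S(0) exp(mu t + sigma B(t)), B(t) ~ N(0, t).
mu, sigma, t, S0 = 0.05, 0.2, 2.0, 100.0
rng = np.random.default_rng(1)
B_t = rng.normal(0.0, np.sqrt(t), 2_000_000)     # samples of B(t)
samples = S0 * np.exp(mu * t + sigma * B_t)
expected = S0 * np.exp((mu + sigma**2 / 2.0) * t)
assert abs(samples.mean() / expected - 1.0) < 0.01   # within 1% of the Ito prediction
```

Without the ½σ² correction the predicted mean would be S(0)e^{µt}, which the simulated average visibly exceeds.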
Barndorff-Nielsen and Shephard (JRSS B 63, 167 (2001)) proposed the OU model
\[
d\sigma^{2}(t) = -\lambda\sigma^{2}(t)\,dt + dX(\lambda t),
\]
where λ > 0 and X is a subordinator. Then σ²(t) > 0 (a.s.) since
\[
\sigma^{2}(t) = e^{-\lambda t}\sigma^{2}(0) + \int_0^t e^{-\lambda(t-s)}\,dX(\lambda s)
= e^{-\lambda t}\Big(\sigma^{2}(0) + \sum_{0 \leq u \leq t} e^{\lambda u}\,\Delta X(\lambda u)\Big).
\]
Assume that ∫_1^∞ log(1 + x) ν(dx) < ∞. Then there is a unique invariant measure µ, which is self-decomposable and has characteristic function
\[
\hat{\mu}(u) = \exp\left\{\int_0^{\infty}\left(e^{iux} - 1\right)\frac{k(x)}{x}\,dx\right\},
\]
where k is decreasing.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 36 / 44
Problem. Based on discrete-time observations σ²(0), σ²(∆), ..., σ²((N − 1)∆), find estimates of the parameter λ and of the function k. For a non-parametric approach, see Jongbloed et al., Bernoulli 11, 759 (2005).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 37 / 44
Generalised Ornstein-Uhlenbeck Processes

Let X = (X₁, X₂) be a Lévy process on R². Then each Xᵢ is a real-valued Lévy process. Let Y₀ be independent of X. The generalised Ornstein-Uhlenbeck process is
\[
Y(t) = e^{-X_1(t)}\Big(Y_0 + \int_0^t e^{X_1(s-)}\,dX_2(s)\Big).
\]
The usual OU process is obtained by taking X₁(t) = λt (λ > 0). Necessary and sufficient conditions for stationary solutions were found by Lindner and Maller (SPA 115, 1701 (2005)). Almost sure convergence of ∫_0^t e^{-X₁(s-)} dX₂(s) as t → ∞ is a sufficient condition, but the general story is more complicated.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 38 / 44
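As a sanity check on the variation-of-constants form, take X₁(t) = λt and the deterministic driver X₂(t) = t: the formula should then reduce to the classical OU solution e^{-λt}Y₀ + (1 − e^{-λt})/λ. A numerical sketch (parameter values illustrative):

```python
import numpy as np

lam, Y0, t = 0.7, 2.0, 1.5
m = 200_000
ds = t / m
mids = (np.arange(m) + 0.5) * ds                  # midpoint rule nodes on [0, t]
integral = np.sum(np.exp(lam * mids)) * ds        # int_0^t e^{lam s} ds  (dX2(s) = ds)

Y = np.exp(-lam * t) * (Y0 + integral)            # generalised OU formula, X1(t) = lam t
closed = np.exp(-lam * t) * Y0 + (1.0 - np.exp(-lam * t)) / lam   # classical OU solution

assert abs(Y - closed) < 1e-8                     # the two expressions coincide
```

With a genuinely random X₂ the same reduction holds path by path; the deterministic driver just makes the check exact.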
In fact a necessary and sufficient condition for stationary solutions is the almost sure convergence of ∫_0^t e^{-X₁(s-)} dL(s) as t → ∞, where the one-dimensional Lévy process (L(t), t ≥ 0) is defined by
\[
L(t) := X_2(t) + \sum_{0 \leq s \leq t}\big(e^{-\Delta X_1(s)} - 1\big)\Delta X_2(s) - tA_{1,2},
\]
where A_{1,2} is the off-diagonal entry of the covariance matrix of the Gaussian component of the bivariate Lévy process (X₁, L). For further work on generalised OU processes see Lindner and Sato (AP 37, 250 (2009)).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 39 / 44
References and Further Reading

These lectures have been broadly based on my recent book:

D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, second edition (2009),

and on an earlier course of lectures partly derived from it, which has been separately published as

D. Applebaum, "Lévy processes in Euclidean spaces and groups", in Quantum Independent Increment Processes I: From Classical Probability to Quantum Stochastic Calculus, Springer Lecture Notes in Mathematics, Vol. 1865, M. Schürmann, U. Franz (eds), 1-99 (2005).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 40 / 44
A comprehensive account of the structure and properties of Lévy processes is:

K.-I. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press (1999).

A shorter account, from the point of view of the French school, which concentrates on fluctuation-theory and potential-theory aspects, is

J. Bertoin, Lévy Processes, Cambridge University Press (1996).

From this point of view, you should also look at

A. Kyprianou, Introductory Lectures on Fluctuations of Lévy Processes with Applications, Springer-Verlag (2006).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 41 / 44
For an insight into the wide range of recent work, both theoretical and applied, in which Lévy processes play a role, consult

O. E. Barndorff-Nielsen, T. Mikosch, S. Resnick (eds), Lévy Processes: Theory and Applications, Birkhäuser, Basel (2001).

For stochastic calculus with jumps, the authoritative treatise is

P. Protter, Stochastic Integration and Differential Equations (second edition), Springer-Verlag, Berlin Heidelberg (2003).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 42 / 44
For financial modelling I recommend:

R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman and Hall/CRC (2004),

which is extremely comprehensive and also contains a lot of valuable background material on Lévy processes.

W. Schoutens, Lévy Processes in Finance: Pricing Financial Derivatives, Wiley (2003)

is shorter and aimed at a wider audience than mathematicians and statisticians.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 43 / 44
Thank you for listening.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 44 / 44