Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 4: Stochastic Integration and Itô’s Formula
David Applebaum
School of Mathematics and Statistics, University of Sheffield, UK
8th December 2011

Stochastic Integration

We next give a rather rapid account of stochastic integration in a form suitable for application to Lévy processes. Let X = M + C be a semimartingale. The problem of stochastic integration is to make sense of objects of the form

∫_0^t F(s) dX(s) := ∫_0^t F(s) dM(s) + ∫_0^t F(s) dC(s).

The second integral can be well-defined using the usual Lebesgue-Stieltjes approach. The first one cannot: indeed, if M is a continuous martingale of finite variation, then M is a.s. constant.

Refer to the martingale part of the Lévy-Itô decomposition. Define a “martingale-valued measure” by

M(t, E) = B(t)δ_0(E) + Ñ(t, E − {0}),

for E ∈ B(R^d), where B = (B(t), t ≥ 0) is a one-dimensional Brownian motion and Ñ is the compensated Poisson random measure. The following key properties then hold:

- M((s,t], E) = M(t, E) − M(s, E) is independent of F_s, for 0 ≤ s < t < ∞;
- E(M((s,t], E)) = 0;
- E(M((s,t], E)^2) = ρ((s,t], E), where ρ((s,t], E) = (t − s)(δ_0(E) + ν(E − {0})).
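
As a concrete check of the last property, here is a minimal Monte Carlo sketch in Python (all parameters are illustrative, not from the lectures): it realises M((s,t], E) as a Brownian increment plus a compensated Poisson count of jumps with sizes in a set A bounded away from 0, and compares the sample second moment with ρ((s,t], E).

```python
# Sketch: verify E[M((s,t],E)^2] = rho((s,t],E) = (t-s)(delta_0(E) + nu(E - {0}))
# for d = 1, E = {0} ∪ A with A = [1,2] and nu(A) = 0.7 (hypothetical values).
import numpy as np

rng = np.random.default_rng(0)
s, t = 0.5, 2.0                    # the time interval (s, t]
nu_A = 0.7                         # nu(A) is finite since A is bounded away from 0
n_paths = 200_000

# Brownian part: B(t) - B(s) ~ N(0, t-s), weighted by delta_0(E) = 1 here.
dB = rng.normal(0.0, np.sqrt(t - s), size=n_paths)

# Compensated Poisson part: Ntilde((s,t],A) = N((s,t],A) - (t-s) nu(A),
# where N((s,t],A) ~ Poisson((t-s) nu(A)) counts the jumps with size in A.
counts = rng.poisson((t - s) * nu_A, size=n_paths)
dNtilde = counts - (t - s) * nu_A

M = dB + dNtilde                   # M((s,t], E)
print("sample mean       :", M.mean())              # ≈ 0
print("sample 2nd moment :", (M**2).mean())         # ≈ (t-s)(1 + nu(A))
print("rho((s,t], E)     :", (t - s) * (1 + nu_A))
```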

We’re going to unify the usual stochastic integral with the Poisson integral, by defining

∫_0^t ∫_E F(s, x) M(ds, dx) := ∫_0^t G(s) dB(s) + ∫_0^t ∫_{E−{0}} F(s, x) Ñ(ds, dx),

where G(s) = F(s, 0). Of course, we need some conditions on the class of integrands. Fix E ∈ B(R^d) and 0 < T < ∞, and let P denote the smallest σ-algebra with respect to which all mappings F : [0,T] × E × Ω → R satisfying (1) and (2) below are measurable:

1. For each 0 ≤ t ≤ T, the mapping (x, ω) → F(t, x, ω) is B(E) ⊗ F_t measurable.
2. For each x ∈ E, ω ∈ Ω, the mapping t → F(t, x, ω) is left continuous.

We call P the predictable σ-algebra. A P-measurable mapping G : [0,T] × E × Ω → R is then said to be predictable. The definition clearly extends naturally to the case where [0,T] is replaced by R^+. Note that by (1), if G is predictable then the process t → G(t, x, ·) is adapted, for each x ∈ E. If G satisfies (1) and is left continuous then it is clearly predictable.

Define H^2(T, E) to be the linear space of all equivalence classes of mappings F : [0,T] × E × Ω → R which coincide almost everywhere with respect to ρ × P and which satisfy the following conditions:

- F is predictable;
- ∫_0^T ∫_E E(|F(t, x)|^2) ρ(dt, dx) < ∞.

It can be shown that H^2(T, E) is a real Hilbert space with respect to the inner product

⟨F, G⟩_{T,ρ} = ∫_0^T ∫_E E(F(t, x)G(t, x)) ρ(dt, dx).

Define S(T, E) to be the linear space of all simple processes in H^2(T, E), where F is simple if for some m, n ∈ N there exist 0 ≤ t_1 ≤ t_2 ≤ ··· ≤ t_{m+1} = T and a family of disjoint Borel subsets A_1, A_2, …, A_n of E with each ν(A_i) < ∞, such that

F = ∑_{j=1}^m ∑_{k=1}^n F_{j,k} 1_{(t_j, t_{j+1}]} 1_{A_k},

where each F_{j,k} is a bounded F_{t_j}-measurable random variable. Note that F is left continuous and B(E) ⊗ F_t measurable, hence predictable. An important fact is that

S(T, E) is dense in H^2(T, E).

One of Itô’s greatest achievements was the definition of the stochastic integral I_T(F), for F simple, by separating the “past” from the “future” within the Riemann sum:

I_T(F) = ∑_{j=1}^m ∑_{k=1}^n F_{j,k} M((t_j, t_{j+1}], A_k),    (0.1)

so on each time interval [t_j, t_{j+1}], F_{j,k} encapsulates information obtained by time t_j, while M gives the innovation into the future (t_j, t_{j+1}].
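
A concrete illustration: the following Python sketch (synthetic data; the purely Brownian case n = 1, where M((t_j, t_{j+1}], A_k) reduces to the increment B(t_{j+1}) − B(t_j)) evaluates the Riemann sum (0.1). The essential point is that the integrand is sampled at the left endpoint of each interval.

```python
# Sketch of the Ito Riemann sum (0.1) in the Brownian case, with the
# (unbounded, but F_{t_j}-measurable) illustrative choice F_j = B(t_j).
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 11)                # partition 0 = t_1 < ... < t_{m+1} = T
dB = rng.normal(0.0, np.sqrt(np.diff(t)))    # increments B(t_{j+1}) - B(t_j)
B = np.concatenate([[0.0], np.cumsum(dB)])   # path values B(t_j)

F = B[:-1]                                   # F_j known by time t_j: past only
I_T = np.sum(F * dB)                         # I_T(F) = sum_j F_j (B(t_{j+1}) - B(t_j))
print("I_T(F) =", I_T)
```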

Lemma

For each T ≥ 0, F ∈ S(T, E),

E(I_T(F)) = 0,    E(I_T(F)^2) = ∫_0^T ∫_E E(|F(t, x)|^2) ρ(dt, dx).

Proof. E(I_T(F)) = 0 is a straightforward application of linearity and independence. The second result is quite messy; we lose nothing important by just looking at the Brownian case, with d = 1.

So let F(t) := ∑_{j=1}^m F_j 1_{(t_j, t_{j+1}]}; then I_T(F) = ∑_{j=1}^m F_j (B(t_{j+1}) − B(t_j)), and

I_T(F)^2 = ∑_{j=1}^m ∑_{p=1}^m F_j F_p (B(t_{j+1}) − B(t_j))(B(t_{p+1}) − B(t_p)).

Now fix j and split the second sum into three pieces, corresponding to p < j, p = j and p > j.

When p < j, F_j F_p (B(t_{p+1}) − B(t_p)) ∈ F_{t_j}, which is independent of B(t_{j+1}) − B(t_j), so

E[F_j F_p (B(t_{j+1}) − B(t_j))(B(t_{p+1}) − B(t_p))] = E[F_j F_p (B(t_{p+1}) − B(t_p))] E(B(t_{j+1}) − B(t_j)) = 0.

Exactly the same argument works when p > j.

What remains is the case p = j, and by independence again

E(I_T(F)^2) = ∑_{j=1}^m E(F_j^2) E((B(t_{j+1}) − B(t_j))^2) = ∑_{j=1}^m E(F_j^2)(t_{j+1} − t_j). □

We deduce from Lemma 1 that I_T is a linear isometry from S(T, E) into L^2(Ω, F, P), and hence it extends to an isometric embedding of the whole of H^2(T, E) into L^2(Ω, F, P). We continue to denote this extension by I_T, and we call I_T(F) the (Itô) stochastic integral of F ∈ H^2(T, E). When convenient, we will use the Leibniz notation I_T(F) := ∫_0^T ∫_E F(t, x) M(dt, dx).
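
The isometry is easy to test numerically. Here is a quick Monte Carlo sketch (illustrative choice F_j = ±1 according to the sign of B(t_j), which is bounded and F_{t_j}-measurable, so E(F_j^2) = 1 and the right-hand side is just T):

```python
# Sketch: Monte Carlo check of E[I_T(F)^2] = sum_j E(F_j^2)(t_{j+1} - t_j) = T.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 11)
n_paths = 100_000

dB = rng.normal(0.0, np.sqrt(np.diff(t)), size=(n_paths, len(t) - 1))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

F = np.where(B[:, :-1] >= 0, 1.0, -1.0)   # F_j = ±1, decided by the past at t_j
I_T = np.sum(F * dB, axis=1)

print("E[I_T(F)]   ≈", I_T.mean())        # ≈ 0
print("E[I_T(F)^2] ≈", (I_T**2).mean())   # ≈ T = 1
```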

So for predictable F satisfying ∫_0^T ∫_E E(|F(s, x)|^2) ρ(ds, dx) < ∞, we can find a sequence (F_n, n ∈ N) of simple processes such that

∫_0^T ∫_E F(t, x) M(dt, dx) = lim_{n→∞} ∫_0^T ∫_E F_n(t, x) M(dt, dx).

The limit is taken in the L^2-sense and is independent of the choice of approximating sequence.

The following theorem summarises some useful properties of the stochastic integral.

Theorem

If F, G ∈ H^2(T, E) and α, β ∈ R, then:

1. I_T(αF + βG) = αI_T(F) + βI_T(G).
2. E(I_T(F)) = 0, E(I_T(F)^2) = ∫_0^T ∫_E E(|F(t, x)|^2) ρ(dt, dx).
3. (I_t(F), t ≥ 0) is F_t-adapted.
4. (I_t(F), t ≥ 0) is a square-integrable martingale.

Proof. (1) and (2) are easy. For (3), let (F_n, n ∈ N) be a sequence in S(T, E) converging to F; then each process (I_t(F_n), t ≥ 0) is clearly adapted. Since I_t(F_n) → I_t(F) in L^2 as n → ∞, we can find a subsequence (F_{n_k}, k ∈ N) such that I_t(F_{n_k}) → I_t(F) a.s. as k → ∞, and the required result follows.

(4) Let F ∈ S(T, E) and (without loss of generality) choose 0 < s = t_l < t_{l+1} < t. Then it is easy to see that I_t(F) = I_s(F) + I_{s,t}(F), and hence E_s(I_t(F)) = I_s(F) + E_s(I_{s,t}(F)) by (3). However,

E_s(I_{s,t}(F)) = E_s(∑_{j=l+1}^m ∑_{k=1}^n F_{j,k} M((t_j, t_{j+1}], A_k)) = ∑_{j=l+1}^m ∑_{k=1}^n E_s(F_{j,k}) E(M((t_j, t_{j+1}], A_k)) = 0.

The result now follows by the continuity of E_s in L^2. Indeed, let (F_n, n ∈ N) be a sequence in S(T, E) converging to F; then we have

||E_s(I_t(F)) − E_s(I_t(F_n))||_2 ≤ ||I_t(F) − I_t(F_n)||_2 = ||F − F_n||_{T,ρ} → 0 as n → ∞. □

We can extend the stochastic integral I_T(F) to integrands in P_2(T, E). This is the linear space of all equivalence classes of mappings F : [0,T] × E × Ω → R which coincide almost everywhere with respect to ρ × P, and which satisfy the following conditions:

- F is predictable;
- P(∫_0^T ∫_E |F(t, x)|^2 ρ(dt, dx) < ∞) = 1.

If F ∈ P_2(T, E), then (I_t(F), t ≥ 0) is always a local martingale, but not necessarily a martingale.

Poisson Stochastic Integrals

Let A be an arbitrary Borel set in R^d − {0} which is bounded below, and introduce the compound Poisson process P = (P(t), t ≥ 0), where each P(t) = ∫_A x N(t, dx). Let K be a predictable mapping; then, generalising the earlier Poisson integrals, we define

∫_0^T ∫_A K(t, x) N(dt, dx) = ∑_{0≤u≤T} K(u, ∆P(u)) 1_A(∆P(u)),    (0.2)

as a random finite sum.
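
Since ν(A) < ∞ when A is bounded below, the sum in (0.2) really is finite, and it can be simulated directly from the jump times and jump sizes. A minimal Python sketch (the intensity, the set A = [1, 2] and the integrand K are all hypothetical):

```python
# Sketch of (0.2): evaluate sum_{0<=u<=T} K(u, dP(u)) 1_A(dP(u)) over the jumps
# of a compound Poisson process with jump sizes in A = [1, 2].
import numpy as np

rng = np.random.default_rng(3)
T = 5.0
nu_A = 0.7                                          # nu(A) < ∞: A is bounded below

n_jumps = rng.poisson(nu_A * T)                     # number of jumps on [0, T]
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))  # jump times of P
jump_sizes = rng.uniform(1.0, 2.0, n_jumps)         # jump sizes ΔP(u) ∈ A

def K(t, x):            # a deterministic (hence predictable) integrand, chosen arbitrarily
    return np.exp(-t) * x**2

integral = np.sum(K(jump_times, jump_sizes))        # ∫_0^T ∫_A K(t,x) N(dt,dx)
print("Poisson stochastic integral ≈", integral)
```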

In particular, if H satisfies the square-integrability condition given above, we may then define, for each 1 ≤ i ≤ d,

∫_0^T ∫_A H^i(t, x) Ñ(dt, dx) := ∫_0^T ∫_A H^i(t, x) N(dt, dx) − ∫_0^T ∫_A H^i(t, x) ν(dx) dt.

The definition (0.2) can, in principle, be used to define stochastic integrals for a more general class of integrands than we have been considering. For simplicity, let N = (N(t), t ≥ 0) be a Poisson process of intensity 1 and let f : R → R; then we may define

∫_0^t f(N(s)) dN(s) = ∑_{0≤s≤t} f(N(s−) + ∆N(s)) ∆N(s).
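
Since ∆N(s) = 1 exactly at the jump times and N(s−) + ∆N(s) = k at the k-th jump, the last sum collapses to f(1) + f(2) + ··· + f(N(t)). A short Python sketch (f chosen arbitrarily for illustration):

```python
# Sketch: for a rate-1 Poisson process, ∫_0^t f(N(s)) dN(s) = f(1) + ... + f(N(t)).
import numpy as np

rng = np.random.default_rng(4)
t = 10.0
f = lambda k: k**2                         # illustrative choice of f

N_t = rng.poisson(t)                       # N(t) for a Poisson process of intensity 1
integral = sum(f(k) for k in range(1, N_t + 1))
print("N(t) =", N_t, "; integral =", integral)
```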

Lévy-type stochastic integrals

We take E = B̂ − {0} = {x ∈ R^d : 0 < |x| < 1} throughout this subsection, where B̂ is the open unit ball. We say that an R^d-valued stochastic process Y = (Y(t), t ≥ 0) is a Lévy-type stochastic integral if it can be written in the following form for each 1 ≤ i ≤ d, t ≥ 0:

Y^i(t) = Y^i(0) + ∫_0^t G^i(s) ds + ∫_0^t F^i_j(s) dB^j(s) + ∫_0^t ∫_{|x|<1} H^i(s, x) Ñ(ds, dx) + ∫_0^t ∫_{|x|≥1} K^i(s, x) N(ds, dx),

where, for each 1 ≤ i ≤ d, 1 ≤ j ≤ m, t ≥ 0, we have |G^i|^{1/2}, F^i_j ∈ P_2(T), H^i ∈ P_2(T, E) and K is predictable. Here B is an m-dimensional standard Brownian motion and N is an independent Poisson random measure on R^+ × (R^d − {0}) with compensator Ñ and intensity measure ν, which we will assume is a Lévy measure.

We will assume that the random variable Y(0) is F_0-measurable, and then it is clear that Y is an adapted process. We can often simplify complicated expressions by employing the notation of stochastic differentials to represent Lévy-type stochastic integrals. We then write the last expression as

dY(t) = G(t) dt + F(t) dB(t) + H(t, x) Ñ(dt, dx) + K(t, x) N(dt, dx).

When we want to particularly emphasise the domains of integration with respect to x, we will use an equivalent notation

dY(t) = G(t) dt + F(t) dB(t) + ∫_{|x|<1} H(t, x) Ñ(dt, dx) + ∫_{|x|≥1} K(t, x) N(dt, dx).

Clearly Y is a semimartingale.

Page 96: Koc4(dba)

We will assume that the random variable Y (0) is F0-measurable, andthen it is clear that Y is an adapted process.We can often simplify complicated expressions by employing thenotation of stochastic differentials to represent Lévy-type stochasticintegrals. We then write the last expression as

dY (t) = G(t)dt + F (t)dB(t) + H(t , x)N(dt ,dx)

+ K (t , x)N(dt ,dx).

When we want to particularly emphasise the domains of integrationwith respect to x , we will use an equivalent notation

dY (t) = G(t)dt + F (t)dB(t) +

∫|x |<1

H(t , x)N(dt ,dx)

+

∫|x |≥1

K (t , x)N(dt ,dx).

Clearly Y is a semimartingale.Dave Applebaum (Sheffield UK) Lecture 4 December 2011 21 / 58

Page 97: Koc4(dba)

We will assume that the random variable Y (0) is F0-measurable, andthen it is clear that Y is an adapted process.We can often simplify complicated expressions by employing thenotation of stochastic differentials to represent Lévy-type stochasticintegrals. We then write the last expression as

dY (t) = G(t)dt + F (t)dB(t) + H(t , x)N(dt ,dx)

+ K (t , x)N(dt ,dx).

When we want to particularly emphasise the domains of integrationwith respect to x , we will use an equivalent notation

dY (t) = G(t)dt + F (t)dB(t) +

∫|x |<1

H(t , x)N(dt ,dx)

+

∫|x |≥1

K (t , x)N(dt ,dx).

Clearly Y is a semimartingale.Dave Applebaum (Sheffield UK) Lecture 4 December 2011 21 / 58

Page 98: Koc4(dba)

We will assume that the random variable Y (0) is F0-measurable, andthen it is clear that Y is an adapted process.We can often simplify complicated expressions by employing thenotation of stochastic differentials to represent Lévy-type stochasticintegrals. We then write the last expression as

dY (t) = G(t)dt + F (t)dB(t) + H(t , x)N(dt ,dx)

+ K (t , x)N(dt ,dx).

When we want to particularly emphasise the domains of integrationwith respect to x , we will use an equivalent notation

dY (t) = G(t)dt + F (t)dB(t) +

∫|x |<1

H(t , x)N(dt ,dx)

+

∫|x |≥1

K (t , x)N(dt ,dx).

Clearly Y is a semimartingale.Dave Applebaum (Sheffield UK) Lecture 4 December 2011 21 / 58

Page 99: Koc4(dba)

We will assume that the random variable Y (0) is F0-measurable, andthen it is clear that Y is an adapted process.We can often simplify complicated expressions by employing thenotation of stochastic differentials to represent Lévy-type stochasticintegrals. We then write the last expression as

dY (t) = G(t)dt + F (t)dB(t) + H(t , x)N(dt ,dx)

+ K (t , x)N(dt ,dx).

When we want to particularly emphasise the domains of integrationwith respect to x , we will use an equivalent notation

dY (t) = G(t)dt + F (t)dB(t) +

∫|x |<1

H(t , x)N(dt ,dx)

+

∫|x |≥1

K (t , x)N(dt ,dx).

Clearly Y is a semimartingale.Dave Applebaum (Sheffield UK) Lecture 4 December 2011 21 / 58

Page 100: Koc4(dba)

Let M = (M(t), t ≥ 0) be an adapted process such that MJ ∈ P_2(t, A) whenever J ∈ P_2(t, A) (where A ∈ B(R^d) is arbitrary). For example, it is sufficient to take M to be adapted and left-continuous. For these processes we can define an adapted process Z = (Z(t), t ≥ 0) by the prescription that it have the stochastic differential

dZ(t) = M(t)G(t) dt + M(t)F(t) dB(t) + M(t)H(t, x) Ñ(dt, dx) + M(t)K(t, x) N(dt, dx),

and we will adopt the natural notation

dZ(t) = M(t) dY(t).

Example (Lévy Stochastic Integrals)

Let X be a Lévy process with characteristics (b, a, ν) and Lévy-Itô decomposition

X(t) = bt + B_a(t) + ∫_{|x|<1} x Ñ(t, dx) + ∫_{|x|≥1} x N(t, dx),

for each t ≥ 0. Let L ∈ P_2(t) for all t ≥ 0 and choose each F^i_j = a^i_j L, H^i = K^i = x^i L. Then we can construct processes with the stochastic differential

dY(t) = L(t) dX(t).    (0.3)

We call Y a Lévy stochastic integral. In the case where X has finite variation, the Lévy stochastic integral Y can also be constructed as a Lebesgue-Stieltjes integral, and this coincides (up to a set of measure zero) with the prescription (0.3).
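
A minimal Euler-type sketch of (0.3) in dimension d = m = 1 (every parameter, and the choice L(t) = sin(Y(t−)), is illustrative rather than taken from the lectures): the driving Lévy process is approximated over each small step by drift + Brownian increment + at most one compound Poisson jump, and L is evaluated at the left endpoint so that the integrand stays predictable.

```python
# Sketch: Euler scheme Y(t_{j+1}) = Y(t_j) + L(t_j) (X(t_{j+1}) - X(t_j))
# for dY(t) = L(t) dX(t), X = drift + Brownian motion + compound Poisson.
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 1000
dt = T / n
b, sigma, lam = 0.1, 0.3, 2.0          # drift, diffusion, jump intensity (all made up)

dX = (b * dt
      + sigma * rng.normal(0.0, np.sqrt(dt), n)                # Brownian part
      + rng.normal(0.0, 1.0, n) * (rng.random(n) < lam * dt))  # ≤ 1 jump per step

Y = np.empty(n + 1)
Y[0] = 1.0
for j in range(n):
    L = np.sin(Y[j])                   # L sampled at the left endpoint: predictable
    Y[j + 1] = Y[j] + L * dX[j]
print("Y(T) ≈", Y[-1])
```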

Example: The Ornstein–Uhlenbeck Process (OU Process)

$$Y(t) = e^{-\lambda t} y_0 + \int_0^t e^{-\lambda(t-s)}\,dX(s), \qquad (0.4)$$

where $y_0 \in \mathbb{R}^d$ is fixed, is a Markov process. The condition

$$\int_{|x|>1} \log(1+|x|)\,\nu(dx) < \infty$$

is necessary and sufficient for it to be stationary. There are important applications to volatility modelling in finance which have been developed by Ole Barndorff-Nielsen and Neil Shephard. Intriguingly, every self-decomposable random variable can be naturally embedded in a stationary Lévy-driven OU process. See Lecture 5 for more details.
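As an aside, (0.4) is straightforward to simulate. The sketch below is my own illustration, not from the lectures: $\lambda$, the driving noise and the jump law are assumed, and the scheme uses the one-step recursion $Y(t_{k+1}) \approx e^{-\lambda \Delta t}\,(Y(t_k) + \Delta X_k)$, with positive exponential jumps in the spirit of the Barndorff-Nielsen–Shephard volatility models.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-step recursion for the Levy-driven OU process (0.4); parameters are illustrative.
lam, T, n = 2.0, 10.0, 100_000
dt = T / n
sigma, rate = 0.3, 1.0                           # diffusion scale and jump intensity

dX = (sigma * rng.normal(0.0, np.sqrt(dt), n)
      + np.where(rng.random(n) < rate * dt,      # at most one jump per small interval
                 rng.exponential(0.5, n), 0.0))  # positive jumps, as in volatility models

Y = np.empty(n + 1)
Y[0] = 0.0
for k in range(n):
    Y[k + 1] = np.exp(-lam * dt) * (Y[k] + dX[k])  # decay at rate lam between shocks
```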


Stochastic integration has a plethora of applications, including filtering, stochastic control and infinite-dimensional analysis. Here is some motivation from finance. Suppose that $X(t)$ is the value of a stock at time $t$ and $F(t)$ is the number of stocks owned at time $t$. Assume for now that we buy and sell stocks at discrete times $0 = t_0 < t_1 < \cdots < t_n < t_{n+1} = T$. Then the total value of the portfolio at time $T$ is

$$V(T) = V(0) + \sum_{j=0}^{n} F(t_j)\big(X(t_{j+1}) - X(t_j)\big),$$

and in the limit as the times become infinitesimally close together, we have the stochastic integral

$$V(T) = V(0) + \int_0^T F(s)\,dX(s).$$
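The discrete sum is exactly computable, and it already displays the key constraint of the theory: the holding $F(t_j)$ is fixed at the left endpoint of each interval, so the trader cannot look ahead. A toy computation, with a made-up price path and holdings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete-rebalancing portfolio value; price path and holdings are made up.
n = 250
X = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, n + 1)))   # toy stock price path
F = np.where(np.arange(n) % 2 == 0, 10.0, 5.0)                # shares held on [t_j, t_{j+1})

V0 = 1_000.0
VT = V0 + np.sum(F * np.diff(X))   # F uses the left endpoint: no look-ahead
print(VT)
```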


Stochastic integration against Brownian motion was first developed by Wiener for sure (deterministic) functions. Itô's groundbreaking work in 1944 extended this to random adapted integrands. The generalisation of the integrator to arbitrary martingales was due to Kunita and Watanabe in 1967, and the further step to allow semimartingales was due to P. A. Meyer and the Strasbourg school in the 1970s.


Itô's Formula

We begin with the easy case: Itô's formula for Poisson stochastic integrals of the form

$$W^i(t) = W^i(0) + \int_0^t \int_A K^i(s,x)\,N(ds,dx) \qquad (0.5)$$

for $1 \le i \le d$, where $t \ge 0$, $A$ is bounded below and each $K^i$ is predictable. Itô's formula for such processes takes a particularly simple form.


Lemma

If $W$ is a Poisson stochastic integral of the above form then for each $f \in C(\mathbb{R}^d)$ and each $t \ge 0$, with probability one, we have

$$f(W(t)) - f(W(0)) = \int_0^t \int_A \big[f(W(s-) + K(s,x)) - f(W(s-))\big]\,N(ds,dx).$$


Proof. Let $Y(t) = \int_A x\,N(t,dx)$ and recall that the jump times for $Y$ are defined recursively as $T^A_0 = 0$ and, for each $n \in \mathbb{N}$, $T^A_n = \inf\{t > T^A_{n-1} :\ \Delta Y(t) \in A\}$. We then find that

$$f(W(t)) - f(W(0)) = \sum_{0 \le s \le t} \big[f(W(s)) - f(W(s-))\big]$$
$$= \sum_{n=1}^{\infty} \big[f(W(t \wedge T^A_n)) - f(W(t \wedge T^A_{n-1}))\big]$$
$$= \sum_{n=1}^{\infty} \big[f(W(t \wedge T^A_n-) + K(t \wedge T^A_n, \Delta Y(t \wedge T^A_n))) - f(W(t \wedge T^A_n-))\big]$$
$$= \int_0^t \int_A \big[f(W(s-) + K(s,x)) - f(W(s-))\big]\,N(ds,dx). \qquad \Box$$
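The telescoping at the heart of this proof can be checked path by path. Below is a small sketch (my own illustration; the jump law, the integrand $K$ and the test function $f$ are all assumed): for a pure-jump $W$, both sides of the Lemma reduce to the same finite sum over jump times.

```python
import numpy as np

rng = np.random.default_rng(3)

# Path-by-path check of the Lemma for a scalar pure-jump integral with f(w) = w^2.
f = lambda w: w**2
K = lambda s, x: s * x                       # an illustrative predictable integrand
T, rate = 1.0, 4.0

n_jumps = rng.poisson(rate * T)
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_marks = rng.normal(1.0, 0.2, n_jumps)   # marks lying in a set A bounded below

W, rhs = 0.0, 0.0                            # rhs accumulates the N(ds,dx) integral
for s, x in zip(jump_times, jump_marks):
    rhs += f(W + K(s, x)) - f(W)             # f(W(s-) + K(s,x)) - f(W(s-))
    W += K(s, x)

lhs = f(W) - f(0.0)
assert np.isclose(lhs, rhs)                  # the two sides agree path by path
```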


The celebrated Itô formula for Brownian motion is probably well known to you, so I'll briefly outline the proof. Let $(\mathcal{P}_n, n \in \mathbb{N})$ be a sequence of partitions of the form

$$\mathcal{P}_n = \{0 = t^{(n)}_0 < t^{(n)}_1 < \cdots < t^{(n)}_{m(n)} < t^{(n)}_{m(n)+1} = T\},$$

and suppose that $\lim_{n\to\infty} \delta(\mathcal{P}_n) = 0$, where the mesh is $\delta(\mathcal{P}_n) = \max_{0 \le j \le m(n)} |t^{(n)}_{j+1} - t^{(n)}_j|$. As a preliminary, you need the following:

Lemma

If $W_{kl} \in \mathcal{H}_2(T)$ for each $1 \le k, l \le m$, then

$$L^2\text{-}\lim_{n\to\infty} \sum_{k,l=1}^{m} \sum_{j=0}^{m(n)} W_{kl}(t^{(n)}_j)\big(B^k(t^{(n)}_{j+1}) - B^k(t^{(n)}_j)\big)\big(B^l(t^{(n)}_{j+1}) - B^l(t^{(n)}_j)\big) = \sum_{k=1}^{m} \int_0^T W_{kk}(s)\,ds.$$

The proof is similar to that of Lemma 1, but you will need the Gaussian moment $\mathbb{E}(B(t)^4) = 3t^2$. $\Box$
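The content of this Lemma, informally, is that $(dB)^2$ behaves like $dt$ while off-diagonal products $dB^k\,dB^l$ vanish. A quick one-dimensional numerical illustration (my own sketch; the integrand $W$ is an assumed deterministic function standing in for an $\mathcal{H}_2(T)$ process):

```python
import numpy as np

rng = np.random.default_rng(4)

# Check that sum_j W(t_j) (B(t_{j+1}) - B(t_j))^2 is close to int_0^T W(s) ds.
T, n = 1.0, 200_000
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), n)

W = np.sin(t[:-1]) ** 2 + 1.0              # deterministic stand-in for the integrand
discrete = np.sum(W * dB**2)
exact = 1.5 * T - np.sin(2.0 * T) / 4.0    # int_0^T (sin^2 s + 1) ds in closed form

print(discrete, exact)                     # typically agree to two or three decimals
```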


Now let $M$ be a Brownian integral with drift of the form

$$M^i(t) = \int_0^t F^i_j(s)\,dB^j(s) + \int_0^t G^i(s)\,ds, \qquad (0.6)$$

where each $F^i_j,\ |G^i|^{\frac{1}{2}} \in \mathcal{P}_2(t)$, for all $t \ge 0$, $1 \le i \le d$, $1 \le j \le m$.

For each $1 \le i, j \le d$, we introduce the quadratic variation process $([M^i, M^j](t), t \ge 0)$ by

$$[M^i, M^j](t) = \sum_{k=1}^{m} \int_0^t F^i_k(s) F^j_k(s)\,ds.$$

We will explore quadratic variation in greater depth in the sequel. The following slick method of proving Itô's formula is due to Kunita.


Theorem (Itô's Theorem 1)

If $M = (M(t), t \ge 0)$ is a Brownian integral with drift of the form (0.6), then for all $f \in C^2(\mathbb{R}^d)$, $t \ge 0$, with probability 1, we have

$$f(M(t)) - f(M(0)) = \int_0^t \partial_i f(M(s))\,dM^i(s) + \frac{1}{2} \int_0^t \partial_i \partial_j f(M(s))\,d[M^i, M^j](s).$$
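Before the proof, a concrete instance may help. For $d = 1$, $M = B$ and $f(x) = x^2$ the theorem reduces to $B(t)^2 = 2\int_0^t B(s)\,dB(s) + t$, and a left-endpoint discretisation reproduces this (my own numerical sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Check of Ito's Theorem 1 for M = B and f(x) = x^2: B(T)^2 = 2 int B dB + T.
T, n = 1.0, 500_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

ito_integral = np.sum(B[:-1] * dB)   # left-endpoint (Ito) sums
lhs = B[-1] ** 2
rhs = 2.0 * ito_integral + T         # T is the quadratic-variation correction

print(lhs, rhs)                      # agree up to discretisation error
```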


Proof. Let $(\mathcal{P}_n, n \in \mathbb{N})$ be a sequence of partitions of $[0,t]$ as above. By Taylor's theorem we have, for each such partition (where we suppress the index $n$),

$$f(M(t)) - f(M(0)) = \sum_{k=0}^{m} \big[f(M(t_{k+1})) - f(M(t_k))\big] = J_1(t) + \frac{1}{2} J_2(t),$$

where

$$J_1(t) = \sum_{k=0}^{m} \partial_i f(M(t_k))\big(M^i(t_{k+1}) - M^i(t_k)\big),$$

$$J_2(t) = \sum_{k=0}^{m} \partial_i \partial_j f(N^k_{ij})\big(M^i(t_{k+1}) - M^i(t_k)\big)\big(M^j(t_{k+1}) - M^j(t_k)\big),$$

and where the $N^k_{ij}$'s are each $\mathcal{F}_{t_{k+1}}$-adapted $\mathbb{R}^d$-valued random variables satisfying $|N^k_{ij} - M(t_k)| \le |M(t_{k+1}) - M(t_k)|$.


We write each $J_2(t) = K_1(t) + K_2(t)$, where

$$K_1(t) = \sum_{k=0}^{m} \partial_i \partial_j f(M(t_k))\big(M^i(t_{k+1}) - M^i(t_k)\big)\big(M^j(t_{k+1}) - M^j(t_k)\big),$$

$$K_2(t) = \sum_{k=0}^{m} \big[\partial_i \partial_j f(N^k_{ij}) - \partial_i \partial_j f(M(t_k))\big]\big(M^i(t_{k+1}) - M^i(t_k)\big)\big(M^j(t_{k+1}) - M^j(t_k)\big).$$

Now take limits as $n \to \infty$: $J_1(t)$ converges to the stochastic integral, $K_1(t)$ converges to the quadratic-variation term by the preceding Lemma, and it turns out that $K_2(t) \to 0$ in probability, so the result follows. $\Box$


Itô's formula for general Lévy-type stochastic integrals is obtained essentially by combining the Poisson and Brownian results and taking good care of the compensators for the small jumps. You should be able to guess the right result. To give a precise statement, consider a Lévy-type stochastic integral of the form

$$dY(t) = G(t)\,dt + F(t)\,dB(t) + H(t,x)\,\tilde{N}(dt,dx) + K(t,x)\,N(dt,dx). \qquad (0.7)$$


Theorem (Itô's Theorem 2)

If $Y$ is a Lévy-type stochastic integral of the above form then for each $f \in C^2(\mathbb{R}^d)$, $t \ge 0$, with probability 1, we have

$$f(Y(t)) - f(Y(0)) = \int_0^t \partial_i f(Y(s-))\,dY^i_c(s) + \frac{1}{2} \int_0^t \partial_i \partial_j f(Y(s-))\,d[Y^i_c, Y^j_c](s)$$
$$+ \int_0^t \int_{|x| \ge 1} \big[f(Y(s-) + K(s,x)) - f(Y(s-))\big]\,N(ds,dx)$$
$$+ \int_0^t \int_{|x| < 1} \big[f(Y(s-) + H(s,x)) - f(Y(s-))\big]\,\tilde{N}(ds,dx)$$
$$+ \int_0^t \int_{|x| < 1} \big[f(Y(s-) + H(s,x)) - f(Y(s-)) - H^i(s,x)\,\partial_i f(Y(s-))\big]\,\nu(dx)\,ds.$$


Here $Y_c$ denotes the continuous part of $Y$, defined by

$$Y^i_c(t) = \int_0^t G^i(s)\,ds + \int_0^t F^i_j(s)\,dB^j(s).$$

Tedious but straightforward algebra yields the following form, which is important since it extends to general semimartingales:

Theorem (Itô's Theorem 3)

If $Y$ is a Lévy-type stochastic integral of the above form then for each $f \in C^2(\mathbb{R}^d)$, $t \ge 0$, with probability 1, we have

$$f(Y(t)) - f(Y(0)) = \int_0^t \partial_i f(Y(s-))\,dY^i(s) + \frac{1}{2} \int_0^t \partial_i \partial_j f(Y(s-))\,d[Y^i_c, Y^j_c](s)$$
$$+ \sum_{0 \le s \le t} \big[f(Y(s)) - f(Y(s-)) - \Delta Y^i(s)\,\partial_i f(Y(s-))\big].$$
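A one-dimensional numerical sanity check of this form (my own sketch; all parameter values and laws are assumed): take $Y = \sigma B$ plus a compound Poisson part and $f(y) = y^2$. Then $[Y_c, Y_c](t) = \sigma^2 t$, the jump summand is $(\Delta Y(s))^2$, and the theorem reads $Y(T)^2 = 2\int_0^T Y(s-)\,dY(s) + \sigma^2 T + \sum_{s \le T} (\Delta Y(s))^2$.

```python
import numpy as np

rng = np.random.default_rng(6)

# Check of Ito's Theorem 3 for a scalar jump-diffusion and f(y) = y^2.
T, n, sigma, rate = 1.0, 400_000, 0.4, 3.0
dt = T / n
dY = sigma * rng.normal(0.0, np.sqrt(dt), n)   # continuous-part increments
jumps = np.where(rng.random(n) < rate * dt,    # at most one jump per small step
                 rng.normal(0.0, 0.7, n), 0.0)
dY += jumps

Y = np.concatenate(([0.0], np.cumsum(dY)))
lhs = Y[-1] ** 2
rhs = 2.0 * np.sum(Y[:-1] * dY) + sigma**2 * T + np.sum(jumps**2)

print(lhs, rhs)                                # agree up to discretisation error
```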


Note that a special case of Itô's formula yields the following "classical" chain rule for differentiable functions $f$, when the process $Y$ is of finite variation:

$$f(Y(t)) - f(Y(0)) = \int_0^t \partial_i f(Y(s-))\,dY^i(s) + \sum_{0 \le s \le t} \big[f(Y(s)) - f(Y(s-)) - \Delta Y^i(s)\,\partial_i f(Y(s-))\big].$$

A form of Itô's formula may even be established for fractional Brownian motion, which is not a semimartingale.


Quadratic Variation and Itô's Product Formula

We extend the definition of quadratic variation to the more general case of Lévy-type stochastic integrals $Y = (Y(t), t \ge 0)$. So for each $t \ge 0$ we define a $d \times d$ matrix-valued adapted process $[Y,Y] = ([Y,Y](t), t \ge 0)$ by the following prescription for its $(i,j)$th entry ($1 \le i, j \le d$):

$$[Y^i, Y^j](t) = [Y^i_c, Y^j_c](t) + \sum_{0 \le s \le t} \Delta Y^i(s)\,\Delta Y^j(s). \qquad (0.8)$$

Each $[Y^i, Y^j](t)$ is almost surely finite, and we have

$$[Y^i, Y^j](t) = \sum_{k=1}^{m} \int_0^t F^i_k(s) F^j_k(s)\,ds + \int_0^t \int_{|x|<1} H^i(s,x) H^j(s,x)\,N(ds,dx)$$
$$+ \int_0^t \int_{|x| \ge 1} K^i(s,x) K^j(s,x)\,N(ds,dx), \qquad (0.9)$$

so that we clearly have each $[Y^i, Y^j](t) = [Y^j, Y^i](t)$. Note that the integral over the small jumps is in this case always a.s. finite (why?).
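Numerically, (0.8) says that the realized quadratic variation over a fine grid splits into the continuous bracket plus the sum of squared jumps. A hedged illustration for a scalar jump-diffusion (parameters and laws assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

# Realized quadratic variation of a jump-diffusion vs. the decomposition (0.8).
T, n, sigma, rate = 1.0, 400_000, 0.5, 2.0
dt = T / n
cont = sigma * rng.normal(0.0, np.sqrt(dt), n)
jumps = np.where(rng.random(n) < rate * dt, rng.normal(0.0, 1.0, n), 0.0)
dY = cont + jumps

realized = np.sum(dY**2)                      # sum of squared increments on a fine grid
predicted = sigma**2 * T + np.sum(jumps**2)   # [Y_c, Y_c](T) plus squared jumps

print(realized, predicted)                    # close for a fine mesh
```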


It is easy to show that for each $\alpha, \beta \in \mathbb{R}$ and $1 \le i, j, k \le d$, $t \ge 0$,

$$[\alpha Y^i + \beta Y^j, Y^k](t) = \alpha [Y^i, Y^k](t) + \beta [Y^j, Y^k](t).$$

The importance of $[Y,Y]$ is that it measures the deviation of the stochastic differential of products from the usual Leibniz formula. The following result makes this precise.

Theorem (Itô's Product Formula)

If $Y^1$ and $Y^2$ are real-valued Lévy-type stochastic integrals of the form (0.7), then for all $t \ge 0$, with probability one, we have

$$Y^1(t) Y^2(t) = Y^1(0) Y^2(0) + \int_0^t Y^1(s-)\,dY^2(s) + \int_0^t Y^2(s-)\,dY^1(s) + [Y^1, Y^2](t).$$
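For two correlated Brownian motions $B^1$, $B^2$ with $dB^1\,dB^2 = \rho\,dt$, the product formula predicts $B^1(T)B^2(T) = \int_0^T B^1\,dB^2 + \int_0^T B^2\,dB^1 + \rho T$, which is easy to confirm on a grid (my own sketch; $\rho$ is an assumed parameter):

```python
import numpy as np

rng = np.random.default_rng(8)

# Check of Ito's product formula for two correlated Brownian motions.
T, n, rho = 1.0, 500_000, 0.3
dt = T / n
dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rho * dW1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt), n)
B1 = np.concatenate(([0.0], np.cumsum(dW1)))
B2 = np.concatenate(([0.0], np.cumsum(dW2)))

lhs = B1[-1] * B2[-1]
rhs = np.sum(B1[:-1] * dW2) + np.sum(B2[:-1] * dW1) + rho * T   # bracket is rho*T

print(lhs, rhs)   # agree up to discretisation error
```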


Proof. We consider $Y^1$ and $Y^2$ as components of a vector $Y = (Y^1, Y^2)$ and we take $f$ in Theorem 7 to be the smooth mapping from $\mathbb{R}^2$ to $\mathbb{R}$ given by $f(x_1, x_2) = x_1 x_2$. By Theorem 7 we then obtain, for each $t \ge 0$, with probability one,

$$Y^1(t) Y^2(t) = Y^1(0) Y^2(0) + \int_0^t Y^1(s-)\,dY^2(s) + \int_0^t Y^2(s-)\,dY^1(s) + [Y^1_c, Y^2_c](t)$$
$$+ \sum_{0 \le s \le t} \big[Y^1(s) Y^2(s) - Y^1(s-) Y^2(s-) - (Y^1(s) - Y^1(s-)) Y^2(s-) - (Y^2(s) - Y^2(s-)) Y^1(s-)\big],$$

from which the required result easily follows, since each summand in the final sum is precisely $\Delta Y^1(s)\,\Delta Y^2(s)$. $\Box$


We can learn much about the way our Itô formulae work by writing the product formula in differential form:

$$d(Y^1(t) Y^2(t)) = Y^1(t-)\,dY^2(t) + Y^2(t-)\,dY^1(t) + d[Y^1, Y^2](t).$$

We see that the term $d[Y^1, Y^2](t)$, which is sometimes called an Itô correction, arises as a result of the following formal product relations between differentials:

$$dB^i(t)\,dB^j(t) = \delta^{ij}\,dt; \qquad N(dt,dx)\,N(dt,dy) = N(dt,dx)\,\delta(x-y),$$

for $1 \le i, j \le m$, with all other products of differentials vanishing. If you have little previous experience of this game, these relations are a very valuable guide to intuition.


For completeness, we will give another characterisation of quadratic variation which is sometimes quite useful. We recall the sequence of partitions $(\mathcal{P}_n, n \in \mathbb{N})$, with mesh tending to zero, which were introduced earlier.

Theorem

If $X$ and $Y$ are real-valued Lévy-type stochastic integrals then for each $t \ge 0$ we have

$$[X, Y](t) = \lim_{n\to\infty} \sum_{j=0}^{m(n)} \big(X(t^{(n)}_{j+1}) - X(t^{(n)}_j)\big)\big(Y(t^{(n)}_{j+1}) - Y(t^{(n)}_j)\big),$$

where the limit is taken in probability.


Proof. By polarisation, it is sufficient to consider the case $X = Y$. Using the identity

$$(x - y)^2 = x^2 - y^2 - 2y(x - y)$$

for $x, y \in \mathbb{R}$, we deduce that

$$\sum_{j=0}^{m(n)} \big(X(t^{(n)}_{j+1}) - X(t^{(n)}_j)\big)^2 = \sum_{j=0}^{m(n)} X(t^{(n)}_{j+1})^2 - \sum_{j=0}^{m(n)} X(t^{(n)}_j)^2 - 2 \sum_{j=0}^{m(n)} X(t^{(n)}_j)\big(X(t^{(n)}_{j+1}) - X(t^{(n)}_j)\big),$$

and the required result follows from Itô's product formula. $\Box$


Many of the results of this lecture extend from Lévy-type stochastic integrals to arbitrary semimartingales. In particular, if $F$ is a simple process and $X$ is a semimartingale, we can again use Itô's prescription to define

$$\int_0^t F(s)\,dX(s) = \sum_j F(t_j)\big(X(t_{j+1}) - X(t_j)\big),$$

and then pass to the limit to obtain more general stochastic integrals. Itô's formula can be established in the form given in Theorem 7, and the quadratic variation of semimartingales can be defined as the correction term in the corresponding Itô product formula. Although stochastic calculus for general semimartingales is not the subject of these lectures, we do require one result: the famous Lévy characterisation of Brownian motion.


Theorem (Lévy's characterisation)

Let $M = (M(t), t \ge 0)$ be a continuous centred martingale, adapted to a given filtration $(\mathcal{F}_t, t \ge 0)$. If $[M_i, M_j](t) = a_{ij} t$ for each $t \ge 0$, $1 \le i, j \le d$, where $a = (a_{ij})$ is a positive definite symmetric matrix, then $M$ is an $\mathcal{F}_t$-adapted Brownian motion with covariance $a$.

Proof. Fix $u \in \mathbb{R}^d$ and define the process $(Y_u(t), t \ge 0)$ by $Y_u(t) = e^{i(u, M(t))}$. Then by Itô's formula we obtain

$$dY_u(t) = i u_j Y_u(t)\,dM_j(t) - \frac{1}{2} u_i u_j Y_u(t)\,d[M_i, M_j](t) = i u_j Y_u(t)\,dM_j(t) - \frac{1}{2} (u, au)\,Y_u(t)\,dt.$$


Upon integrating from s to t, we obtain

$$Y_u(t) = Y_u(s) + iu_j\int_s^t Y_u(\tau)\,dM_j(\tau) - \tfrac{1}{2}(u, au)\int_s^t Y_u(\tau)\,d\tau.$$

Now take conditional expectations of both sides with respect to F_s; the stochastic integral is a martingale, so its conditional expectation vanishes, and the conditional Fubini theorem gives

$$E(Y_u(t)\,|\,F_s) = Y_u(s) - \tfrac{1}{2}(u, au)\int_s^t E(Y_u(\tau)\,|\,F_s)\,d\tau.$$

Solving this integral equation yields E(Y_u(t)|F_s) = Y_u(s)e^{-\frac{1}{2}(u,au)(t-s)}, and hence

$$E\big(e^{i(u, M(t)-M(s))}\,\big|\,F_s\big) = e^{-\frac{1}{2}(u,au)(t-s)}.$$

Thus M has increments that are independent of the past and Gaussian with covariance a(t − s), i.e. M is a Brownian motion with covariance a. □
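The theorem can be sanity-checked by simulation (an illustrative Python sketch; the matrix σ and all names are my choices): for M = σW, with W a standard d-dimensional Brownian motion, M is a Brownian motion with covariance a = σσᵀ, and its realised quadratic covariation over a fine grid should be close to a_{ij}t.

import numpy as np

rng = np.random.default_rng(1)
d, n, T = 2, 200_000, 1.0
sigma = np.array([[1.0, 0.0],
                  [0.5, 1.2]])                   # illustrative choice
a = sigma @ sigma.T                              # covariance matrix of the theorem

dW = rng.normal(0.0, np.sqrt(T / n), size=(n, d))
dM = dW @ sigma.T                                # increments of M = sigma W

qv = dM.T @ dM                                   # realised [M_i, M_j](T): sums of increment products
print(qv)                                        # approximately a * T
print(a * T)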


Stochastic Differential Equations

Using Picard iteration, one can show the existence of a unique solution to

$$dY(t) = b(Y(t-))\,dt + \sigma(Y(t-))\,dB(t) + \int_{|x|<c} F(Y(t-), x)\,\tilde{N}(dt, dx) + \int_{|x|\ge c} G(Y(t-), x)\,N(dt, dx), \qquad (0.10)$$

which is a convenient shorthand for the system of SDEs

$$dY^i(t) = b^i(Y(t-))\,dt + \sigma^i_j(Y(t-))\,dB^j(t) + \int_{|x|<c} F^i(Y(t-), x)\,\tilde{N}(dt, dx) + \int_{|x|\ge c} G^i(Y(t-), x)\,N(dt, dx), \qquad (0.11)$$

for each 1 ≤ i ≤ d, with summation over j understood; here $\tilde{N}$ is the compensated Poisson random measure.
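For intuition, here is a minimal Euler scheme for an equation of this type (an illustrative Python sketch, not the Picard construction; the coefficients b, σ, G, the jump rate λ, and the omission of the compensated small-jump term are all simplifying assumptions of mine).

import numpy as np

rng = np.random.default_rng(2)

# Illustrative scalar coefficients: a mean-reverting jump-diffusion.
b = lambda y: -y
sigma = lambda y: 0.3
G = lambda y, x: y * x                           # big-jump coefficient

lam, T, n = 5.0, 1.0, 1_000                      # big-jump intensity, horizon, grid size
dt = T / n
Y = np.empty(n + 1)
Y[0] = 1.0

# Big jumps on [0, T]: Poisson count, i.i.d. sizes, uniform arrival times.
n_jumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0.0, T, n_jumps))
jump_sizes = rng.normal(0.0, 0.2, n_jumps)

k = 0
for i in range(n):
    y = Y[i]
    # diffusion step, coefficients frozen at the left limit Y(t-)
    y += b(y) * dt + sigma(y) * rng.normal(0.0, np.sqrt(dt))
    # add any big jumps falling in this step (small jumps are dropped here)
    while k < n_jumps and jump_times[k] <= (i + 1) * dt:
        y += G(y, jump_sizes[k])
        k += 1
    Y[i + 1] = y

Between jumps this is the usual Euler–Maruyama step; the finitely many jumps with |x| ≥ c are inserted at the steps in which they arrive.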


The simplest conditions under which this holds are:

(1) Lipschitz Condition. There exists K_1 > 0 such that for all y_1, y_2 ∈ R^d,

$$|b(y_1) - b(y_2)|^2 + \|a(y_1, y_1) - 2a(y_1, y_2) + a(y_2, y_2)\| + \int_{|x|<c} |F(y_1, x) - F(y_2, x)|^2\,\nu(dx) \le K_1|y_1 - y_2|^2. \qquad (0.12)$$

(2) Growth Condition. There exists K_2 > 0 such that for all y ∈ R^d,

$$|b(y)|^2 + \|a(y, y)\| + \int_{|x|<c} |F(y, x)|^2\,\nu(dx) \le K_2(1 + |y|^2). \qquad (0.13)$$

(3) Big Jumps Condition. G is jointly measurable and y ↦ G(y, x) is continuous for all |x| ≥ 1.

Here ‖·‖ is the matrix seminorm ‖a‖ = Σ_{i=1}^d |a_{ii}|, and a(x, y) = σ(x)σ(y)^T.
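As a sanity check on (0.12), one can probe the Lipschitz ratio numerically (a hypothetical Python sketch with scalar coefficients of my choosing and the F-term omitted): for linear b and σ the ratio is constant, so a finite K_1 exists.

import numpy as np

rng = np.random.default_rng(3)
b = lambda y: -y
sigma = lambda y: 0.3 * y
a = lambda y1, y2: sigma(y1) * sigma(y2)         # a(x, y) = sigma(x) sigma(y)^T in one dimension

y1, y2 = rng.normal(size=10_000), rng.normal(size=10_000)
lhs = (b(y1) - b(y2))**2 + np.abs(a(y1, y1) - 2 * a(y1, y2) + a(y2, y2))
ratio = lhs / (y1 - y2)**2
print(ratio.max())                               # constant 1.09 here, so K_1 = 1.09 works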


We also impose the standard initial condition Y(0) = Y_0 (a.s.), where Y_0 is independent of (F_t, t > 0). Solutions of SDEs are Markov processes and, in the case where there are no jumps, diffusion processes.

A special case of considerable interest is

$$dY(t) = L(Y(t-))\,dX(t).$$

You can check that the conditions given above boil down to the single requirement that L be globally Lipschitz, in order to get existence and uniqueness.


Stochastic Exponentials

For convenience we take d = 1 and consider the problem of finding an adapted process Z = (Z(t), t ≥ 0) which has a stochastic differential

$$dZ(t) = Z(t-)\,dY(t),$$

where Y is a Lévy-type stochastic integral. The solution of this problem is obtained as follows. We take Z to be the stochastic exponential (sometimes called the Doléans-Dade exponential after its discoverer), which is denoted as E_Y = (E_Y(t), t ≥ 0) and defined as

$$E_Y(t) = \exp\Big\{Y(t) - \tfrac{1}{2}[Y_c, Y_c](t)\Big\}\prod_{0\le s\le t}(1 + \Delta Y(s))e^{-\Delta Y(s)}, \qquad (0.14)$$

for each t ≥ 0, where Y_c denotes the continuous part of Y. We will need the following assumption:

(SE)  inf{ΔY(t), t > 0} > −1  (a.s.).
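To see (0.14) at work, here is an illustrative Python sketch (the grid, the jump law, and all names are my choices; jump sizes are kept above −1 so that (SE) holds): the closed-form value is compared with the Euler discretisation of dZ(t) = Z(t−)dY(t), in which each step multiplies Z by 1 + ΔY.

import numpy as np

rng = np.random.default_rng(4)
T, n, sig = 1.0, 100_000, 0.4
dt = T / n

# Increments of Y: a Brownian part plus five jumps, all of size > -1.
dB = rng.normal(0.0, np.sqrt(dt), n)
dY = sig * dB
jump_idx = rng.choice(n, size=5, replace=False)
jump_sizes = rng.uniform(-0.5, 1.0, 5)
dY[jump_idx] += jump_sizes

# Doleans-Dade formula (0.14); here [Y_c, Y_c](t) = sig^2 t.
E_formula = (np.exp(dY.sum() - 0.5 * sig**2 * T)
             * np.prod((1 + jump_sizes) * np.exp(-jump_sizes)))

# Euler scheme for dZ(t) = Z(t-) dY(t), Z(0) = 1.
Z = 1.0
for inc in dY:
    Z *= 1.0 + inc

print(E_formula, Z)                              # close for fine grids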


Theorem
If Y is a Lévy-type stochastic integral of the form (0.7) and (SE) holds, then each E_Y(t) is almost surely finite.

Proof. We must show that the infinite product in (0.14) converges almost surely. We write

$$\prod_{0\le s\le t}(1 + \Delta Y(s))e^{-\Delta Y(s)} = A(t)\,B(t),$$

where

$$A(t) = \prod_{0\le s\le t,\ |\Delta Y(s)|\ge 1/2}(1 + \Delta Y(s))e^{-\Delta Y(s)} \quad\text{and}\quad B(t) = \prod_{0\le s\le t,\ |\Delta Y(s)|< 1/2}(1 + \Delta Y(s))e^{-\Delta Y(s)}.$$


Now, since Y is càdlàg, #{0 ≤ s ≤ t : |ΔY(s)| ≥ 1/2} < ∞ (a.s.), and so A(t) is a finite product. Using the assumption (SE), we have

$$B(t) = \exp\Big\{\sum_{0\le s\le t,\ |\Delta Y(s)|<1/2}\big[\log(1 + \Delta Y(s)) - \Delta Y(s)\big]\Big\}.$$

We now employ Taylor's theorem to obtain the inequality

$$|\log(1 + y) - y| \le Ky^2,$$

where K > 0, which is valid whenever |y| < 1/2. Hence

$$\Big|\sum_{0\le s\le t,\ |\Delta Y(s)|<1/2}\big[\log(1 + \Delta Y(s)) - \Delta Y(s)\big]\Big| \le K\sum_{0\le s\le t}|\Delta Y(s)|^2\,\mathbf{1}_{\{|\Delta Y(s)|<1/2\}} < \infty \quad \text{a.s.},$$

since the sum of squared jumps is dominated by the quadratic variation [Y, Y](t), and we have our required result. □

Of course, (SE) ensures that E_Y(t) > 0 (a.s.).


The stochastic exponential is, in fact, the unique solution of the stochastic differential equation dZ(t) = Z(t−)dY(t) with initial condition Z(0) = 1 (a.s.). The restriction (SE) can be dropped and the stochastic exponential extended to the case where Y is an arbitrary (real-valued or even complex-valued) càdlàg semimartingale, but the price we have to pay is that E_Y may then take negative values.


The following alternative form of (0.14) is quite useful:

$$E_Y(t) = e^{S_Y(t)},$$

where

$$dS_Y(t) = F(t)\,dB(t) + \Big(G(t) - \tfrac{1}{2}F(t)^2\Big)dt + \int_{|x|\ge 1}\log(1 + K(t, x))\,N(dt, dx) + \int_{|x|<1}\log(1 + H(t, x))\,\tilde{N}(dt, dx) + \int_{|x|<1}\big(\log(1 + H(t, x)) - H(t, x)\big)\,\nu(dx)\,dt.$$


Theorem

$$dE_Y(t) = E_Y(t-)\,dY(t).$$

Proof. We apply Itô's formula to E_Y(t) = e^{S_Y(t)} to obtain, for each t ≥ 0,

$$\begin{aligned}
dE_Y(t) ={}& E_Y(t-)\Big(F(t)\,dB(t) + G(t)\,dt + \int_{|x|<1}\big(\log(1 + H(t, x)) - H(t, x)\big)\,\nu(dx)\,dt\Big)\\
&+ \int_{|x|\ge 1}\big[\exp\{S_Y(t-) + \log(1 + K(t, x))\} - \exp(S_Y(t-))\big]\,N(dt, dx)\\
&+ \int_{|x|<1}\big[\exp\{S_Y(t-) + \log(1 + H(t, x))\} - \exp(S_Y(t-))\big]\,\tilde{N}(dt, dx)\\
&+ \int_{|x|<1}\big[\exp\{S_Y(t-) + \log(1 + H(t, x))\} - \exp(S_Y(t-)) - \log(1 + H(t, x))\exp\{S_Y(t-)\}\big]\,\nu(dx)\,dt
\end{aligned}$$


and so

$$dE_Y(t) = E_Y(t-)\big[F(t)\,dB(t) + G(t)\,dt + K(t, x)\,N(dt, dx) + H(t, x)\,\tilde{N}(dt, dx)\big] = E_Y(t-)\,dY(t),$$

as required. □

Examples

1. If Y(t) = σB(t), where σ > 0 and B = (B(t), t ≥ 0) is a standard Brownian motion, then

$$E_Y(t) = \exp\Big\{\sigma B(t) - \tfrac{1}{2}\sigma^2 t\Big\}.$$

2. If Y = (Y(t), t ≥ 0) is a compound Poisson process, so that each Y(t) = X_1 + \cdots + X_{N(t)}, where (X_n, n ∈ N) are i.i.d. and N is an independent Poisson process, we have

$$E_Y(t) = \prod_{j=1}^{N(t)}(1 + X_j),$$

for each t ≥ 0.
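Example 2 can be read off from (0.14) directly (a small Python check; the rate and jump law are illustrative): a compound Poisson process has no continuous part, so the factor e^{Y(t)} cancels the factors e^{−X_j} exactly, leaving the product.

import numpy as np

rng = np.random.default_rng(5)
lam, T = 3.0, 1.0
N_T = rng.poisson(lam * T)                       # number of jumps by time T
X = rng.uniform(-0.5, 1.0, N_T)                  # i.i.d. jump sizes, all > -1

lhs = np.exp(X.sum()) * np.prod((1 + X) * np.exp(-X))   # formula (0.14), no continuous part
rhs = np.prod(1 + X)                                     # Example 2's closed form
print(lhs, rhs)                                          # equal up to floating point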


If X and Y are Lévy-type stochastic integrals, you can check that

$$E_X(t)\,E_Y(t) = E_{X+Y+[X,Y]}(t),$$

for each t ≥ 0.
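For continuous integrands the identity can be checked in closed form; here is an illustrative Python verification (σ₁, σ₂ are my choices) with X = σ₁B and Y = σ₂B, so that [X, Y](t) = σ₁σ₂t.

import numpy as np

rng = np.random.default_rng(6)
T, s1, s2 = 1.0, 0.3, 0.5
B_T = rng.normal(0.0, np.sqrt(T))                # value of the Brownian path at time T

E = lambda y_T, qv_T: np.exp(y_T - 0.5 * qv_T)   # continuous-case stochastic exponential

lhs = E(s1 * B_T, s1**2 * T) * E(s2 * B_T, s2**2 * T)
# X + Y + [X, Y] has terminal value (s1 + s2) B(T) + s1 s2 T
# and continuous quadratic variation (s1 + s2)^2 T.
rhs = E((s1 + s2) * B_T + s1 * s2 * T, (s1 + s2)**2 * T)
print(lhs, rhs)                                  # agree exactly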
