Enlargements of Filtrations Monique Jeanblanc Jena, June 2010 June 9, 2010




Enlargements of Filtrations

Monique Jeanblanc

Jena, June 2010

June 9, 2010



These notes are not complete, and have to be considered as a preliminary version of a full text. The aim was to give a support to the lectures given by Monique Jeanblanc and Giorgia Callegaro at University El Manar, Tunis, in March 2010, and later by Monique Jeanblanc in Jena, in June 2010.

Sections marked with a ♠ are used only in very specific (but important) cases and can be skipped on a first reading.

A large part of these notes can be found in the book Mathematical Methods in Financial Markets, by Jeanblanc, M., Yor, M. and Chesney, M. (Springer), indicated as [3M]. We thank Springer Verlag for allowing us to reproduce part of this volume.

Many examples related to credit risk can be found in Credit Risk Modeling, by Bielecki, T.R., Jeanblanc, M. and Rutkowski, M., CSFI Lecture Note Series, Osaka University Press, 2009, indicated as [BJR].

I thank all the participants in the Jena workshop for their remarks, questions and comments.


Chapter 1

Theory of Stochastic Processes

1.1 Background

In this section, we recall some facts from the theory of stochastic processes. Proofs can be found, for example, in Dellacherie [18], Dellacherie and Meyer [22] and Rogers and Williams [68].

As usual, we start with a filtered probability space (Ω, F, F, P) where F = (F_t, t ≥ 0) is a given filtration satisfying the usual conditions, and F = F_∞.

1.1.1 Path properties

A process X is continuous if, for almost all ω, the map t → X_t(ω) is continuous. The process is continuous on the right with limits on the left (in short, cadlag, following the French acronym¹) if, for almost all ω, the map t → X_t(ω) is cadlag.

Definition 1.1.1 A process X is increasing if X_0 = 0, X is right-continuous, and X_s ≤ X_t a.s. for s ≤ t.

Definition 1.1.2 Let (Ω, F, F, P) be a filtered probability space. The process X is F-adapted if, for any t ≥ 0, the random variable X_t is F_t-measurable.

The natural filtration F^X of a stochastic process X is the smallest filtration F which satisfies the usual hypotheses and such that X is F-adapted. We shall write in short (with an abuse of notation) F^X_t = σ(X_s, s ≤ t).

Exercise 1.1.3 Write carefully the definition of the filtration F so that the right-continuity assumption is satisfied. Starting from a filtration F⁰ which is not right-continuous, define the smallest right-continuous filtration F which contains F⁰. C

1.1.2 Predictable and optional σ-algebra

If τ and ϑ are two stopping times, the stochastic interval ]]ϑ, τ]] is the set {(ω, t) : ϑ(ω) < t ≤ τ(ω)}. The optional σ-algebra O is the σ-algebra generated on F × B by the stochastic intervals [[τ, ∞[[ where τ is an F-stopping time.

¹In French, continuous on the right is continu à droite, and with limits on the left is admettant des limites à gauche. We shall also use cad for continuous on the right. The use of this acronym comes from P-A. Meyer.




The predictable σ-algebra P is the σ-algebra generated on F × B by the stochastic intervals ]]ϑ, τ]] where ϑ and τ are two F-stopping times such that ϑ ≤ τ.

A process X is said to be F-predictable (resp. F-optional) if the map (ω, t) → Xt(ω) is P-measurable (resp. O-measurable).

Example 1.1.4 An adapted cag process is predictable.

Proposition 1.1.5 Let F be a given filtration (called the reference filtration).

• The optional σ-algebra O is the σ-algebra on R₊ × Ω generated by the cadlag F-adapted processes (considered as mappings on R₊ × Ω).

• The predictable σ-algebra P is the σ-algebra on R₊ × Ω generated by the F-adapted cag (or continuous) processes. The inclusion P ⊂ O holds.

These two σ-algebras P and O are equal if all F-martingales are continuous. In general

O = P ∨ σ(∆M, M describing the set of martingales) .

A process is said to be predictable (resp. optional) if it is measurable with respect to the predictable (resp. optional) σ-field. If X is a predictable (resp. optional) process and T a stopping time, then the stopped process X^T is also predictable (resp. optional). Every process which is cag and adapted is predictable; every process which is cad and adapted is optional. If X is a cadlag adapted process, then (X_{t−}, t ≥ 0) is a predictable process.

A stopping time T is predictable if and only if the process (1_{t<T} = 1 − 1_{T≤t}, t ≥ 0) is predictable, that is, if and only if the stochastic interval [[0, T[[ = {(ω, t) : 0 ≤ t < T(ω)} is predictable. Note that O = P if and only if any stopping time is predictable. A stopping time T is totally inaccessible if P(T = S < ∞) = 0 for all predictable stopping times S. See Dellacherie [18], Dellacherie and Meyer [20] and Elliott [26] for related results.

If τ is an F-stopping time, the σ-algebra of events prior to τ, F_τ, and the σ-algebra of events strictly prior to τ, F_{τ−}, are defined as:

F_τ = {A ∈ F_∞ : A ∩ {τ ≤ t} ∈ F_t, ∀t};

F_{τ−} is the smallest σ-algebra which contains F_0 and all the sets of the form A ∩ {t < τ}, t > 0, for A ∈ F_t.

If X is F-progressively measurable and τ an F-stopping time, then the r.v. X_τ is F_τ-measurable on the set {τ < ∞}.

If τ is a random time (i.e., a non-negative r.v.), the σ-algebras F_τ and F_{τ−} are defined as

F_τ = σ(Y_τ, Y F-optional), F_{τ−} = σ(Y_τ, Y F-predictable).

Exercise 1.1.6 Give a simple example of a predictable process which is not continuous on the left.
Hint: Deterministic functions are predictable. C

1.1.3 Doob-Meyer decomposition ♠

An adapted process X is said to be of class² (D) if the collection {X_T 1_{T<∞}, T a stopping time} is uniformly integrable.

If Z is a non-negative supermartingale of class (D), there exists a unique increasing, integrable and predictable process A such that Z_t = E(A_∞ − A_t|F_t). In particular, any non-negative supermartingale of class (D) can be written as Z_t = M_t − A_t where M is a uniformly integrable martingale.

2Class (D) is in honor of Doob.
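In discrete time, the analogous Doob decomposition can be checked by hand. The following sketch (not part of the notes; the random-walk example is chosen for illustration) verifies exactly that, for a simple random walk S, the submartingale X_n = S_n² decomposes as a martingale M_n = S_n² − n plus the predictable increasing process A_n = n.

```python
import itertools

# Discrete-time Doob decomposition: for a simple random walk S_n,
# E[S_{n+1}^2 | F_n] = S_n^2 + 1, so X_n = S_n^2 decomposes as
# X_n = M_n + A_n with predictable increasing part A_n = n and
# martingale part M_n = S_n^2 - n.  We check the martingale property
# exactly by averaging over the next (fair, independent) step.
N = 5
ok = True
for n in range(N):
    for prefix in itertools.product([-1, 1], repeat=n):
        s = sum(prefix)                      # current position S_n on this atom of F_n
        m_now = s * s - n                    # M_n
        m_next = sum((s + e) ** 2 - (n + 1) for e in (-1, 1)) / 2
        ok = ok and abs(m_next - m_now) < 1e-12
print(ok)   # prints True
```

The continuous-time Doob-Meyer theorem above is the (much harder) analogue of this elementary computation.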



1.1.4 Semi-martingales

We recall that an F-adapted process X is an F-semi-martingale if X = M + A where M is an F-local martingale and A an F-adapted process with finite variation. If there exists a decomposition with a process A which is predictable, the decomposition X = M + A, where M is an F-martingale and A an F-predictable process with finite variation, is unique, and X is called a special semi-martingale. If X is continuous, the process A is continuous.

In general, if G = (G_t, t ≥ 0) is a filtration larger than F = (F_t, t ≥ 0), i.e., F_t ⊂ G_t, ∀t ≥ 0 (we shall write F ⊂ G), it is not true that an F-martingale remains a martingale in the filtration G. It is not even true that F-martingales remain G-semi-martingales.

Example 1.1.7 (a) Let G_t = F_∞. Then, the only F-martingales which are G-martingales are constants.
(b) ♠ An interesting example is Azema's martingale µ, defined as follows. Let B be a Brownian motion and g_t = sup{s ≤ t : B_s = 0}. The process

µ_t = (sgn B_t) √(t − g_t), t ≥ 0

is a martingale in its own filtration. This discontinuous F^µ-martingale is not an F^B-martingale; it is not even an F^B-semi-martingale.

We recall an important (but difficult) result due to Stricker [74].

Proposition 1.1.8 Let F and G be two filtrations such that F ⊂ G. If X is a G-semimartingale which is F-adapted, then it is also an F-semimartingale.

Exercise 1.1.9 Let N be a Poisson process (i.e., a process with stationary and independent increments, such that the law of N_t is a Poisson law with parameter λt). Prove that the process M defined as M_t = N_t − λt is a martingale and that the process M_t² − λt = (N_t − λt)² − λt is also a martingale. Prove that, for any θ ∈ [0, 1],

N_t = θ(N_t − λt) + (1 − θ)N_t + θλt

is a decomposition of the semi-martingale N. Which one has a predictable finite variation process? C
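The two martingale properties in the exercise can be sanity-checked numerically; a Monte Carlo sketch (not part of the notes; intensity and horizon are arbitrary choices):

```python
import numpy as np

# For a Poisson process N with intensity lam, M_t = N_t - lam*t and
# M_t^2 - lam*t both have zero expectation (their value at t = 0),
# since E(N_t) = Var(N_t) = lam*t.
rng = np.random.default_rng(42)
lam, t, n_samples = 2.0, 3.0, 400_000

N_t = rng.poisson(lam * t, size=n_samples)
M_t = N_t - lam * t

mean_M = float(M_t.mean())                      # close to 0
mean_M2 = float((M_t ** 2 - lam * t).mean())    # close to 0
print(mean_M, mean_M2)
```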

Exercise 1.1.10 Prove that τ is an H-stopping time, where H is the natural filtration of H_t = 1_{τ≤t}, and that τ is a G-stopping time, where G = F ∨ H, for any filtration F. C

Exercise 1.1.11 Prove that if M is a G-martingale and is F-adapted, then M is an F-martingale. Prove that, if M is a G-martingale, then the process (E(M_t|F_t), t ≥ 0) is an F-martingale. C

Exercise 1.1.12 Prove that, if G = F ∨ F̃ where F̃ is a filtration independent of F, then any F-martingale remains a G-martingale. Prove that, if F̃ is generated by a Brownian motion W, and if there exists a probability Q equivalent to P such that F̃ is independent of F under Q, then any (P, F)-martingale remains a (P, G)-semimartingale. C

Stochastic Integration

If X = M + A is a semi-martingale and Y a (bounded) predictable process, we denote by Y ⋆ X the stochastic integral

(Y ⋆ X)_t = ∫_0^t Y_s dX_s = ∫_0^t Y_s dM_s + ∫_0^t Y_s dA_s.



Integration by Parts ♠

Any semi-martingale X admits a decomposition as the sum of a martingale M and a bounded variation process. The martingale part admits a decomposition M = M^c + M^d where M^c is continuous and M^d is a discontinuous martingale. The process M^c is denoted in the literature by X^c (even if this notation is misleading!). The optional Ito formula is

f(X_t) = f(X_0) + ∫_0^t f′(X_{s−}) dX_s + (1/2) ∫_0^t f″(X_s) d⟨X^c⟩_s + Σ_{0<s≤t} [f(X_s) − f(X_{s−}) − f′(X_{s−}) ΔX_s].

If U and V are two finite variation processes, the Stieltjes integration by parts formula can be written as follows:

U_t V_t = U_0 V_0 + ∫_{]0,t]} V_s dU_s + ∫_{]0,t]} U_{s−} dV_s    (1.1.1)
        = U_0 V_0 + ∫_{]0,t]} V_{s−} dU_s + ∫_{]0,t]} U_{s−} dV_s + Σ_{s≤t} ΔU_s ΔV_s.

As a partial check, one can verify that the jump process of the left-hand side, i.e., U_t V_t − U_{t−} V_{t−}, is equal to the jump process of the right-hand side, i.e., V_{t−} ΔU_t + U_{t−} ΔV_t + ΔU_t ΔV_t.

The integration by parts formula is

X_t Y_t = X_0 Y_0 + ∫_0^t X_{s−} dY_s + ∫_0^t Y_{s−} dX_s + [X, Y]_t.    (1.1.2)

For bounded variation processes,

[U, V]_t = Σ_{s≤t} ΔU_s ΔV_s.

Let X be a continuous local martingale. The predictable quadratic variation process of X is the continuous increasing process ⟨X⟩ such that X² − ⟨X⟩ is a local martingale. Let X and Y be two continuous local martingales. The predictable covariation process is the continuous finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.

Let X and Y be two local martingales. The covariation process is the finite variation process [X, Y] such that

(i) XY − [X, Y] is a local martingale;
(ii) Δ[X, Y]_t = ΔX_t ΔY_t.

The process [X] = [X, X] is non-decreasing. The predictable covariation process is (if it exists) the predictable finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.

Exercise 1.1.13 Prove that if X and Y are continuous, ⟨X, Y⟩ = [X, Y]. Prove that if M is the compensated martingale of a Poisson process N with intensity λ, then [M] = N and ⟨M⟩_t = λt. C
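The identity [M] = N can be seen on a simulated path: the realized quadratic variation of the compensated process over a fine grid approaches the number of jumps, while ⟨M⟩_t = λt is only its predictable compensator. A sketch (not part of the notes; parameters arbitrary):

```python
import numpy as np

# Realized quadratic variation of M_t = N_t - lam*t over a fine grid:
# the absolutely continuous drift contributes O(mesh), so the sum of
# squared increments approaches the number of jumps N_T (each jump of
# M has size 1, hence contributes ~1 to the sum).
rng = np.random.default_rng(1)
lam, T = 3.0, 10.0

arrivals = np.cumsum(rng.exponential(1 / lam, size=200))  # jump times of N
jump_times = arrivals[arrivals <= T]
N_T = len(jump_times)

grid = np.linspace(0.0, T, 200_001)
N_grid = np.searchsorted(jump_times, grid, side="right")  # N_t on the grid
M_grid = N_grid - lam * grid
realized_qv = float(np.sum(np.diff(M_grid) ** 2))
print(realized_qv, N_T)   # realized_qv close to N_T
```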

1.1.5 Change of probability

Brownian filtration

Let F be a Brownian filtration and L a strictly positive F-martingale with L_0 = 1, and define dQ|_{F_t} = L_t dP|_{F_t}. Then,

B̃_t := B_t − ∫_0^t (1/L_s) d⟨B, L⟩_s



is a (Q, F)-Brownian motion. Then, if M is an F-martingale,

M̃_t := M_t − ∫_0^t (1/L_s) d⟨M, L⟩_s

is a (Q, F)-martingale.

General case ♠

More generally, let F be a filtration and L a strictly positive F-martingale with L_0 = 1, and define dQ|_{F_t} = L_t dP|_{F_t}. Then, if M is an F-martingale,

M̃_t := M_t − ∫_0^t (1/L_s) d[M, L]_s

is a (Q, F)-martingale. If the predictable covariation process ⟨M, L⟩ exists,

M_t − ∫_0^t (1/L_{s−}) d⟨M, L⟩_s

is a (Q, F)-martingale.

1.1.6 Dual Predictable Projections

In this section, after recalling some basic facts about optional and predictable projections, we introduce the concept of a dual predictable projection³, which leads to the fundamental notion of predictable compensators. We recommend the survey paper of Nikeghbali [62].

Definitions

Let X be a bounded (or positive) process, and F a given filtration. The optional projection of X is the unique optional process (o)X which satisfies: for any F-stopping time τ,

E(X_τ 1_{τ<∞}) = E((o)X_τ 1_{τ<∞}).    (1.1.3)

For any F-stopping time τ , let Γ ∈ Fτ and apply the equality (1.1.3) to the stopping time τΓ =τ11Γ +∞11Γc . We get the re-inforced identity:

E(Xτ11τ<∞|Fτ ) = (o)Xτ11τ<∞ .

In particular, if A is an increasing process, then, for s ≤ t:

E((o)A_t − (o)A_s|F_s) = E(A_t − A_s|F_s) ≥ 0.    (1.1.4)

Note that, for any t, E(X_t|F_t) = (o)X_t. However, E(X_t|F_t) is only defined almost surely for each t; thus uncountably many null sets are involved and, a priori, E(X_t|F_t) is not a well-defined process, whereas (o)X takes care of this difficulty.

Comment 1.1.14 Let us comment on the difficulty here. If X is an integrable random variable, the quantity E(X|F_t) is defined a.s., i.e., if X_t and X̂_t are two versions of E(X|F_t), then P(X_t = X̂_t) = 1. This means that, for any fixed t, there exists a negligible set Ω_t such that X_t(ω) = X̂_t(ω) for ω ∉ Ω_t. For processes, we introduce the following definition: the process X is a modification of Y if, for any t, P(X_t = Y_t) = 1. However, one needs a stronger assumption to be able to compare

3See Dellacherie [18] for the notion of dual optional projection.



functionals of the processes. The process X is indistinguishable from (or a version of) Y if {ω : X_t(ω) = Y_t(ω), ∀t} is a measurable set and P(X_t = Y_t, ∀t) = 1. If X and Y are modifications of each other and are a.s. continuous, they are indistinguishable. A difficult, but important result (see Dellacherie [17], p. 73) states: Let X and Y be two optional (resp. predictable) processes. If for every finite stopping time (resp. predictable stopping time) τ, X_τ = Y_τ a.s., then the processes X and Y are indistinguishable.

Likewise, the predictable projection of X is the unique predictable process (p)X such that, for any F-predictable stopping time τ,

E(X_τ 1_{τ<∞}) = E((p)X_τ 1_{τ<∞}).    (1.1.5)

As above, this identity reinforces as

E(X_τ 1_{τ<∞}|F_{τ−}) = (p)X_τ 1_{τ<∞},

for any F-predictable stopping time τ (see Section 1.1 for the definition of Fτ−).

Example 1.1.15 Let ϑ_i, i = 1, 2 be two stopping times such that ϑ_1 ≤ ϑ_2 and Z a bounded r.v. Let X = Z 1_{]]ϑ_1,ϑ_2]]}. Then, (o)X = U 1_{]]ϑ_1,ϑ_2]]} and (p)X = V 1_{]]ϑ_1,ϑ_2]]}, where U (resp. V) is the right-continuous (resp. left-continuous) version of the martingale (E(Z|F_t), t ≥ 0).

Let τ and ϑ be two stopping times such that ϑ ≤ τ and X a positive process. If A is an increasing optional process, then

E(∫_ϑ^τ X_t dA_t) = E(∫_ϑ^τ (o)X_t dA_t).

If A is an increasing predictable process, then, since 1_{]]ϑ,τ]]}(t) is predictable,

E(∫_ϑ^τ X_t dA_t) = E(∫_ϑ^τ (p)X_t dA_t).

The notion of interest in this section is that of dual predictable projection, which we define as follows:

Proposition 1.1.16 Let (A_t, t ≥ 0) be an integrable increasing process (not necessarily F-adapted). There exists a unique integrable F-predictable increasing process (A^(p)_t, t ≥ 0), called the dual predictable projection of A, such that

E(∫_0^∞ Y_s dA_s) = E(∫_0^∞ Y_s dA^(p)_s)

for any positive F-predictable process Y.

In the particular case where A_t = ∫_0^t a_s ds, one has

A^(p)_t = ∫_0^t (p)a_s ds.    (1.1.6)

Proof: See Dellacherie [18], Chapter V, Dellacherie and Meyer [22], Chapter 6, paragraph (73), page 148, or Protter [67], Chapter 3, Section 5. The integrability of (A^(p)_t, t ≥ 0) results from the definition, since for Y = 1 one obtains E(A^(p)_∞) = E(A_∞). ¤

This definition extends to the difference between two integrable (for simplicity) increasing processes. The terminology "dual predictable projection" refers to the fact that it is the random measure dA_t(ω) which is relevant when performing that operation. Note that the predictable projection of an increasing process is not necessarily increasing, whereas its dual predictable projection is.



If X is bounded and A has integrable variation (not necessarily adapted), then

E((X ⋆ A^(p))_∞) = E(((p)X ⋆ A)_∞).

This is equivalent to: for s < t,

E(A_t − A_s|F_s) = E(A^(p)_t − A^(p)_s|F_s).    (1.1.7)

If A is F-adapted (not necessarily predictable), then (A_t − A^(p)_t, t ≥ 0) is an F-martingale. In that case, A^(p) is also called the predictable compensator of A.

Example 1.1.17 If N is a Poisson process and A_t = N_t, then A^(p)_t = λt. If X is a Levy process and f a positive function with compact support which does not contain 0, the predictable compensator of Σ_{s≤t} f(ΔX_s) is t⟨ν, f⟩ = t ∫ f(x) ν(dx).

More generally, from Proposition 1.1.16 and (1.1.7), the process (o)A−A(p) is a martingale.

Proposition 1.1.18 If A is increasing, the process (o)A is a sub-martingale and A^(p) is the predictable increasing process in the Doob-Meyer decomposition of the sub-martingale (o)A.

In a general setting, the predictable projection of an increasing process A is a sub-martingale, whereas the dual predictable projection is an increasing process. The predictable projection and the dual predictable projection of an increasing process A are equal if and only if ((p)A_t, t ≥ 0) is increasing.

It will also be convenient to introduce the following terminology:

Definition 1.1.19 If τ is a random time, we call the F-predictable compensator associated with τ the F-dual predictable projection A^τ of the increasing process 1_{τ≤t}. This dual predictable projection A^τ satisfies

E(Y_τ) = E(∫_0^∞ Y_s dA^τ_s)    (1.1.8)

for any positive, predictable process Y.

It follows that the process (o)H_t − A^τ_t, where (o)H_t = P(τ ≤ t|F_t) is the optional projection of H_t := 1_{τ≤t}, is an F-martingale.
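In the simplest situation, τ exponentially distributed and independent of F (so that the conditioning on F is trivial), the compensator is A^τ_t = P(τ ≤ t) = 1 − e^{−λt}, and (1.1.8) can be checked numerically. A sketch under these assumptions (an illustrative special case, not spelled out in the notes at this point):

```python
import numpy as np

# tau ~ Exp(lam) independent of F: dA^tau_s = lam * exp(-lam*s) ds.
# Check E(Y_tau) = E( integral of Y_s dA^tau_s ) for the deterministic
# (hence predictable) process Y_s = exp(-s); both sides equal lam/(lam+1).
rng = np.random.default_rng(7)
lam = 2.0
tau = rng.exponential(1 / lam, size=500_000)

Y = lambda s: np.exp(-s)
lhs = float(Y(tau).mean())                       # Monte Carlo E(Y_tau)

s = np.linspace(0.0, 40.0, 400_001)
ds = s[1] - s[0]
rhs = float(np.sum(Y(s) * lam * np.exp(-lam * s)) * ds)  # Riemann sum of Y dA^tau
print(lhs, rhs)   # both close to 2/3
```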

Lemma 1.1.20 Assume that the random time τ avoids the F-stopping times, i.e., P(τ = ϑ) = 0 for any F-stopping time ϑ, and that all F-martingales are continuous. Then, the F-dual predictable projection of the process H_t := 1_{τ≤t}, denoted A^τ, is continuous.

Proof: Indeed, if ϑ is a jump time of A^τ, it is an F-stopping time, hence is predictable, and

E(A^τ_ϑ − A^τ_{ϑ−}) = E(1_{τ=ϑ}) = 0;

the continuity of A^τ follows. ¤

Lemma 1.1.21 Assume that the random time τ avoids the F-stopping times (condition (A)), and let a^τ be the dual optional projection of 1_{τ≤t}. Then A^τ = a^τ, and A^τ is continuous. Assume that all F-martingales are continuous (condition (C)). Then a^τ is continuous and consequently A^τ = a^τ. Under conditions (C) and (A), Z_t = P(τ > t|F_t) is continuous.

Proof: Recall that under (C) the predictable and optional sigma-fields are equal. See Dellacherie and Meyer [22] or Nikeghbali [62]. ¤



Lemma 1.1.22 Let τ be a finite random time such that its associated Azema supermartingale Z^τ is continuous. Then τ avoids F-stopping times.

Proof: See Coculescu and Nikeghbali. ¤

Proposition 1.1.23 The Doob-Meyer decomposition of the super-martingale Z^τ_t = P(τ > t|F_t) is

Z^τ_t = E(A^τ_∞|F_t) − A^τ_t = µ^τ_t − A^τ_t,

where µ^τ_t := E(A^τ_∞|F_t).

Proof: From the definition of the dual predictable projection, for any predictable process Y, one has

E(Y_τ) = E(∫_0^∞ Y_u dA^τ_u).

Let t be fixed and F_t ∈ F_t. Then, the process Y_u = F_t 1_{t<u}, u ≥ 0, is F-predictable, hence

E(F_t 1_{t<τ}) = E(F_t (A^τ_∞ − A^τ_t)).

It follows that E(A^τ_∞|F_t) = Z^τ_t + A^τ_t. ¤

Comment 1.1.24 ♠ It can be proved that the martingale

µ^τ_t := E(A^τ_∞|F_t) = A^τ_t + Z^τ_t

is BMO. We recall that a continuous uniformly integrable martingale M belongs to the BMO space if there exists a constant m such that

E(⟨M⟩_∞ − ⟨M⟩_τ |F_τ) ≤ m

for any stopping time τ. It can be proved (see, e.g., Dellacherie and Meyer [22], Chapter VII) that the space BMO is the dual of H¹, the space of martingales such that E(sup_t |M_t|) < ∞.

Example

We now present an example of computation of a dual predictable projection. Let (B_s, s ≥ 0) be an F-Brownian motion starting from 0 and B^(ν)_s = B_s + νs. Let G^(ν) be the filtration generated by the process (|B^(ν)_s|, s ≥ 0) (which coincides with the one generated by ((B^(ν)_s)², s ≥ 0)). We now compute the decomposition of the semi-martingale (B^(ν))² in the filtration G^(ν) and the dual predictable projection (with respect to G^(ν)) of the finite variation process ∫_0^t B^(ν)_s ds.

Ito's lemma provides us with the decomposition of the process (B^(ν))² in the filtration F:

(B^(ν)_t)² = 2 ∫_0^t B^(ν)_s dB_s + 2ν ∫_0^t B^(ν)_s ds + t.    (1.1.9)
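Taking expectations in (1.1.9) kills the stochastic integral and gives E[(B_t + νt)²] = ν²t² + t, which is easy to confirm by simulation (a sketch; parameters are arbitrary choices):

```python
import numpy as np

# E[(B_t + nu*t)^2] = Var(B_t) + (nu*t)^2 = t + nu^2 t^2, matching the
# expectation of the right-hand side of (1.1.9):
#   2*nu * int_0^t E[B_s + nu*s] ds + t = nu^2 t^2 + t.
rng = np.random.default_rng(3)
nu, t, n = 0.5, 2.0, 1_000_000

B_t = rng.normal(0.0, np.sqrt(t), size=n)
lhs = float(((B_t + nu * t) ** 2).mean())
rhs = nu ** 2 * t ** 2 + t
print(lhs, rhs)   # lhs close to rhs = 3.0
```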

To obtain the decomposition in the filtration G^(ν), we remark that

E(e^{νB_s}|F^{|B|}_s) = cosh(νB_s) (= cosh(ν|B_s|)),

which leads, thanks to Girsanov's Theorem, to the equality:

E(B_s + νs|F^{|B|}_s) = E(B_s e^{νB_s}|F^{|B|}_s) / E(e^{νB_s}|F^{|B|}_s) = B_s tanh(νB_s) = ψ(νB_s)/ν,



where ψ(x) = x tanh(x). We now come back to equality (1.1.9). Due to (1.1.6), we have just shown that:

the dual predictable projection of 2ν ∫_0^t B^(ν)_s ds is 2 ∫_0^t ψ(νB^(ν)_s) ds.    (1.1.10)

As a consequence,

(B^(ν)_t)² − 2 ∫_0^t ψ(νB^(ν)_s) ds − t

is a G^(ν)-martingale with increasing process 4 ∫_0^t (B^(ν)_s)² ds. Hence, there exists a G^(ν)-Brownian motion β such that

(B_t + νt)² = 2 ∫_0^t |B_s + νs| dβ_s + 2 ∫_0^t ψ(ν(B_s + νs)) ds + t.    (1.1.11)

Exercise 1.1.25 Let M be a cadlag martingale. Prove that its predictable projection is (M_{t−}, t ≥ 0). C

Exercise 1.1.26 Let X be a measurable process such that E(∫_0^t |X_s| ds) < ∞ and Y_t = ∫_0^t X_s ds. Prove that (o)Y_t − ∫_0^t (o)X_s ds is an F-martingale (RW). C

Exercise 1.1.27 Prove that if X is bounded and Y is predictable, then (p)(Y X) = Y (p)X (RW, p. 349). C

Exercise 1.1.28 Prove that, more generally than (1.1.10), the dual predictable projection of ∫_0^t f(B^(ν)_s) ds is ∫_0^t E(f(B^(ν)_s)|G^(ν)_s) ds and that

E(f(B^(ν)_s)|G^(ν)_s) = [f(B^(ν)_s) e^{νB^(ν)_s} + f(−B^(ν)_s) e^{−νB^(ν)_s}] / (2 cosh(νB^(ν)_s)).

C

Exercise 1.1.29 Prove that, if (α_s, s ≥ 0) is an increasing process and X a positive measurable process, then

(∫_0^· X_s dα_s)^(p)_t = ∫_0^t (p)X_s dα_s.

In particular,

(∫_0^· X_s ds)^(p)_t = ∫_0^t (p)X_s ds.

C

1.1.7 Ito-Kunita-Wentzell formula

We recall here the Ito-Kunita-Wentzell formula (see Kunita [55]). Let F_t(x) be a family of stochastic processes, continuous in (t, x) ∈ R₊ × R^d a.s., and satisfying the following conditions:
(i) for each t > 0, x → F_t(x) is C² from R^d to R;
(ii) for each x, (F_t(x), t ≥ 0) is a continuous semimartingale

dF_t(x) = Σ_{j=1}^n f^j_t(x) dM^j_t,



where the M^j are continuous semimartingales, and the f^j(x) are stochastic processes continuous in (t, x), such that for every s > 0, the map x → f^j_s(x) is C¹, and for every x, f^j(x) is an adapted process. Let X = (X¹, ..., X^d) be a continuous semimartingale. Then

F_t(X_t) = F_0(X_0) + Σ_{j=1}^n ∫_0^t f^j_s(X_s) dM^j_s + Σ_{i=1}^d ∫_0^t (∂F_s/∂x_i)(X_s) dX^i_s
  + Σ_{i=1}^d Σ_{j=1}^n ∫_0^t (∂f^j_s/∂x_i)(X_s) d⟨M^j, X^i⟩_s + (1/2) Σ_{i,k=1}^d ∫_0^t (∂²F_s/∂x_i∂x_k)(X_s) d⟨X^k, X^i⟩_s.

1.2 Some Important Exercises

Exercise 1.2.1 Let B be a Brownian motion, F its natural filtration and M_t = sup_{s≤t} B_s. Prove that, for t < 1,

E(f(M_1)|F_t) = F(1 − t, B_t, M_t)

with

F(s, a, b) = √(2/(πs)) ( f(b) ∫_0^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) exp(−(u − a)²/(2s)) du ).

Hint: Note that

sup_{s≤1} B_s = sup_{s≤t} B_s ∨ sup_{t≤s≤1} B_s = sup_{s≤t} B_s ∨ (M̃_{1−t} + B_t),

where M̃_s = sup_{u≤s} B̃_u for B̃_u = B_{u+t} − B_t. C
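A numerical sketch checking the formula at t = 0 (where B_0 = M_0 = 0): by the reflection principle M_1 has the law of |N(0, 1)|, so F(1, 0, 0) must equal E f(|Z|) for a standard Gaussian Z. The test function f = cos is an arbitrary choice.

```python
import numpy as np

# At t = 0 the claimed formula reduces to
#   E f(M_1) = F(1, 0, 0) = sqrt(2/pi) * int_0^inf f(u) exp(-u^2/2) du,
# i.e. the density of M_1 is that of |N(0,1)| (reflection principle).
rng = np.random.default_rng(5)
f = np.cos

u = np.linspace(0.0, 10.0, 200_001)
du = u[1] - u[0]
formula = float(np.sqrt(2 / np.pi) * np.sum(f(u) * np.exp(-u ** 2 / 2)) * du)

mc = float(f(np.abs(rng.normal(size=1_000_000))).mean())  # Monte Carlo E f(|Z|)
print(formula, mc)   # both close to exp(-1/2) ≈ 0.6065
```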

Exercise 1.2.2 Let ϕ be a C¹ function, B a Brownian motion and M_t = sup_{s≤t} B_s. Prove that the process

ϕ(M_t) − (M_t − B_t)ϕ′(M_t)

is a local martingale.
Hint: Apply Doob's Theorem to the martingale h(M_t)(M_t − B_t) + ∫_{M_t}^∞ h(u) du. C

Exercise 1.2.3 A Useful Lemma: Doob's Maximal Identity. Let M be a positive continuous martingale such that M_0 = x.

(i) Prove that if lim_{t→∞} M_t = 0, then

P(sup M_t > a) = (x/a) ∧ 1,    (1.2.1)

and that sup M_t has the law of x/U, where U is a random variable with a uniform law on [0, 1].

(ii) Conversely, if sup M_t has the law of x/U, show that M_∞ = 0.

Hint: Apply Doob's optional sampling theorem to T_a ∧ t and prove, passing to the limit when t goes to infinity, that

x = E(M_{T_a}) = a P(T_a < ∞) = a P(sup M_t ≥ a). C
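Doob's maximal identity can be illustrated with the positive continuous martingale M_t = x exp(B_t − t/2), which tends to 0 a.s. A Monte Carlo sketch (not part of the notes; the time discretization slightly undershoots the true supremum, so the estimate sits a little below x/a):

```python
import numpy as np

# Simulate M_t = x*exp(B_t - t/2) on a grid and estimate P(sup M > a),
# which by Doob's maximal identity equals (x/a) ∧ 1 = 0.5 here.
rng = np.random.default_rng(11)
x, a = 1.0, 2.0
n_paths, T, dt = 20_000, 20.0, 0.01
n_steps = int(T / dt)

level = np.zeros(n_paths)          # B_t - t/2 along each path
running_max = np.zeros(n_paths)    # its running supremum
for _ in range(n_steps):
    level += rng.normal(0.0, np.sqrt(dt), size=n_paths) - 0.5 * dt
    np.maximum(running_max, level, out=running_max)

sup_M = x * np.exp(running_max)
p_est = float((sup_M > a).mean())
print(p_est)   # close to x/a = 0.5
```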

Exercise 1.2.4 Let F ⊂ G and dQ|_{G_t} = L_t dP|_{G_t}, where L is continuous. Prove that dQ|_{F_t} = ℓ_t dP|_{F_t}, where ℓ_t = E(L_t|F_t). Prove that any (G, Q)-martingale can be written as

M^P_t − ∫_0^t d⟨M^P, L⟩_s / L_s,

where M^P is a (G, P)-martingale. C



Exercise 1.2.5 Prove that, for any (bounded) process a (not necessarily adapted),

M_t := E(∫_0^t a_u du|F_t) − ∫_0^t E(a_u|F_u) du

is an F-martingale. Extend the result to the case ∫_0^· X_s dα_s, where (α_s, s ≥ 0) is an increasing predictable process and X a positive measurable process.
Hint: Compute E(M_t − M_s|F_s) = E(∫_s^t a_u du − ∫_s^t E(a_u|F_u) du | F_s). C




Chapter 2

Enlargement of Filtrations

From the end of the seventies, Barlow, Jeulin and Yor started a systematic study of the problem of enlargement of filtrations: namely, which F-martingales M remain G-semi-martingales and, if it is the case, what is the semi-martingale decomposition of M in G?

There are mainly two kinds of enlargement of filtration:
• Initial enlargement of filtrations: in that case, G_t = F_t ∨ σ(L) where L is a r.v. (or, more generally, G_t = F_t ∨ F̃ where F̃ is a σ-algebra);
• Progressive enlargement of filtrations, where G_t = F_t ∨ H_t with H the natural filtration of H_t = 1_{τ≤t}, where τ is a random time (or, more generally, G_t = F_t ∨ F̃_t where F̃ is another filtration).

Up to now, three lecture notes volumes have been dedicated to this question: Jeulin [46], Jeulin and Yor [50] and Mansuy and Yor [60]. See also the related chapters in the books of Protter [67], Dellacherie, Maisonneuve and Meyer [19] and Yor [80]. Some important papers are Bremaud and Yor [13], Barlow [7], Jacod [41, 40] and Jeulin and Yor [47], and the survey papers of Nikeghbali [62]. See also the theses of Amendinger [1], Ankirchner [4] and Song [72], and the papers of Ankirchner et al. [5] and Yoeurp [78]. Very few studies have been done in the case G_t = F_t ∨ F̃_t. One exception is for F̃_t = σ(J_t), where J_t = inf_{s≥t} X_s when X is a three-dimensional Bessel process.

These results are extensively used in finance to study two specific problems occurring in insider trading: the existence of arbitrage using strategies adapted w.r.t. the large filtration, and the change of price dynamics, when an F-martingale is no longer a G-martingale. They are also a cornerstone for the study of default risk.

An incomplete list of authors concerned with enlargement of filtration in finance for insider trading is: Ankirchner [5, 4], Amendinger [2], Amendinger et al. [3], Baudoin [8], Corcuera et al. [15], Eyraud-Loisel [30], Florens and Fougere [31], Gasbarra et al. [33], Grorud and Pontier [34], Hillairet [36], Imkeller [37], Karatzas and Pikovsky [51], Wu [77] and Kohatsu-Higa and Øksendal [54].

Di Nunno et al. [24], Imkeller [38], Imkeller et al. [39] and Kohatsu-Higa [52, 53] have introduced Malliavin calculus to study the insider trading problem. We shall not discuss this approach here.

Enlargement theory is also used to study asymmetric information, see, e.g., Follmer et al. [32], and progressive enlargement is an important tool for the study of default in the reduced-form approach by Bielecki et al. [10, 11, 12], Elliott et al. [28] and Kusuoka [56]. We start with the case where F-martingales remain G-martingales; in that case, there is a complete characterization of when this property holds. Then, we study a particular example: Brownian and Poisson bridges. In the following step, we study initial enlargement, where G_t = F_t ∨ σ(L) for a random variable L. The goal is to give conditions such that F-martingales remain G-semi-martingales and, in that case, to give the G-semi-martingale decomposition of the F-martingales. Then, we address the same




questions for progressive enlargements, where G_t = F_t ∨ σ(τ ∧ t) for a non-negative random variable τ. Only sufficient conditions are known. The study of a particular and important case, that of last passage times, is presented.

2.1 Immersion of Filtrations

Let F and G be two filtrations such that F ⊂ G. Our aim is to study some conditions which ensure that F-martingales are G-semi-martingales, and one can ask in a first step whether all F-martingales are G-martingales. This last property is equivalent to E(ζ|F_t) = E(ζ|G_t), for any t and any ζ ∈ L¹(F_∞).

Let us study a simple example where G = F ∨ σ(ζ), with ζ ∈ L¹(F_∞) and ζ not F_0-measurable. Obviously E(ζ|G_t) = ζ is a G-martingale and E(ζ|F_t) is an F-martingale. However E(ζ|G_0) ≠ E(ζ|F_0), and some F-martingales are not G-martingales (for example, E(ζ|F_t)).

2.1.1 Definition

The filtration F is said to be immersed in G if any square integrable F-martingale is a G-martingale (Tsirel'son [75], Emery [29]). This is also referred to as the (H) hypothesis by Bremaud and Yor [13], which was defined as:

(H) Every F-square integrable martingale is a G-square integrable martingale.

Proposition 2.1.1 Hypothesis (H) is equivalent to any of the following properties:

(H1) ∀t ≥ 0, the σ-fields F_∞ and G_t are conditionally independent given F_t, i.e., ∀t ≥ 0, ∀G_t ∈ L²(G_t), ∀F ∈ L²(F_∞), E(G_t F|F_t) = E(G_t|F_t) E(F|F_t).

(H2) ∀ t ≥ 0, ∀Gt ∈ L1(Gt), E(Gt|F∞) = E(Gt|Ft).

(H3) ∀ t ≥ 0, ∀F ∈ L1(F∞), E(F |Gt) = E(F |Ft).

In particular, (H) holds if and only if every F-local martingale is a G-local martingale.

Proof:
• (H) ⇒ (H1). Let F ∈ L²(F_∞) and assume that hypothesis (H) is satisfied. This implies that the martingale F_t = E(F|F_t) is a G-martingale such that F_∞ = F, hence F_t = E(F|G_t). It follows that for any t and any G_t ∈ L²(G_t):

E(F G_t|F_t) = E(G_t E(F|G_t)|F_t) = E(G_t E(F|F_t)|F_t) = E(G_t|F_t) E(F|F_t),

which is exactly (H1).
• (H1) ⇒ (H2). Let F ∈ L²(F_∞) and G_t ∈ L²(G_t). Under (H1),

E(F E(G_t|F_t)) = E(E(F|F_t) E(G_t|F_t)) = E(E(F G_t|F_t)) = E(F G_t),

which is (H2).
• (H2) ⇒ (H3). Let F ∈ L²(F_∞) and G_t ∈ L²(G_t). If (H2) holds, then

E(G_t E(F|F_t)) = E(F E(G_t|F_t)) = E(F G_t),

which implies (H3). The general case follows by approximation.
• Obviously (H3) implies (H). ¤


In particular, under (H), if W is an F-Brownian motion, then it is a G-martingale with bracket t, since the bracket does not depend on the filtration. Hence, it is a G-Brownian motion. It is important to note that ∫_0^t ψs dWs is then a local martingale for any G-adapted process ψ satisfying the usual integrability conditions.

A trivial (but useful) example for which (H) is satisfied is G = F ∨ F̃, where F and F̃ are two filtrations such that F̃∞ is independent of F∞.

Exercise 2.1.2 Assume that (H) holds, and that W is an F-Brownian motion. Prove that W is a G-Brownian motion without using the bracket.
Hint: either use the fact that W and (Wt² − t, t ≥ 0) are F-, hence G-martingales, or the fact that, for any λ, exp(λWt − ½λ²t) is an F-, hence a G-martingale. C

Exercise 2.1.3 Prove that, if F is immersed in G, then, for any t, Ft = Gt ∩ F∞. Show that, if τ ∈ F∞, immersion holds if and only if τ is an F-stopping time.
Hint: if A ∈ Gt ∩ F∞, then 1_A = E(1_A|F∞) = E(1_A|Ft). C

2.1.2 Change of probability

Of course, the notion of immersion depends strongly on the probability measure; in particular, it is not stable under a change of probability. See Subsection 2.4.6 for a counterexample. We now study in which setting the immersion property is preserved under a change of probability.

Proposition 2.1.4 Assume that the filtration F is immersed in G under P, and let Q be equivalent to P, with Q|Gt = LtP|Gt, where L is assumed to be F-adapted. Then, F is immersed in G under Q.

Proof: Let N be an (F,Q)-martingale. Then (NtLt, t ≥ 0) is an (F,P)-martingale, and since F is immersed in G under P, (NtLt, t ≥ 0) is a (G,P)-martingale, which implies that N is a (G,Q)-martingale. ¤

Note that, if one defines a change of probability on F with an F-martingale L, one cannot extend this change of probability to G by setting Q|Gt = LtP|Gt, since, in general, L fails to be a G-martingale.

In the next proposition, we do not assume that the Radon-Nikodym density is F-adapted.

Proposition 2.1.5 Assume that F is immersed in G under P, and let Q be equivalent to P with Q|Gt = LtP|Gt, where L is a G-martingale, and define ℓt = E(Lt|Ft). Assume that all F-martingales are continuous and that L is continuous. Then, F is immersed in G under Q if and only if the (G,P)-local martingale

∫_0^t dLs/Ls− − ∫_0^t dℓs/ℓs− =: L(L)t − L(ℓ)t

is orthogonal to the set of all (F,P)-local martingales.

Proof: We prove that any (F,Q)-martingale is a (G,Q)-martingale. Every (F,Q)-martingale M^Q may be written as

M^Q_t = M^P_t − ∫_0^t d〈M^P, ℓ〉s/ℓs ,

where M^P is an (F,P)-martingale. By hypothesis, M^P is a (G,P)-martingale and, from Girsanov's theorem, M^P_t = N^Q_t + ∫_0^t d〈M^P, L〉s/Ls, where N^Q is a (G,Q)-martingale. It follows that

M^Q_t = N^Q_t + ∫_0^t d〈M^P, L〉s/Ls − ∫_0^t d〈M^P, ℓ〉s/ℓs = N^Q_t + ∫_0^t d〈M^P, L(L) − L(ℓ)〉s .


Thus M^Q is a (G,Q)-martingale if and only if 〈M^P, L(L) − L(ℓ)〉 = 0. Here L denotes the stochastic logarithm, the inverse of the stochastic exponential. ¤

Proposition 2.1.6 Let P be a probability measure, and

Q|Gt = LtP|Gt ; Q|Ft = ℓtP|Ft .

Then, hypothesis (H) holds under Q if and only if

∀T, ∀X ≥ 0, X ∈ FT, ∀t < T : E(XLT|Gt)/Lt = E(XℓT|Ft)/ℓt . (2.1.1)

Proof: Note that, for X ∈ FT,

EQ(X|Gt) = (1/Lt) EP(XLT|Gt) , EQ(X|Ft) = (1/ℓt) EP(XℓT|Ft) ,

and that, from the monotone class theorem, (H) holds under Q if and only if, ∀T, ∀X ∈ FT, ∀t ≤ T, one has EQ(X|Gt) = EQ(X|Ft). ¤

Comment 2.1.7 The (H) hypothesis was studied by Brémaud and Yor [13] and Mazziotto and Szpirglas [61], and in a financial setting by Kusuoka [56], Elliott et al. [28] and Jeanblanc and Rutkowski [42, 43].

Exercise 2.1.8 Prove that, if F is immersed in G under P and if Q is a probability equivalent to P, then any (Q,F)-semi-martingale is a (Q,G)-semi-martingale. Let

Q|Gt = LtP|Gt ; Q|Ft = ℓtP|Ft

and let X be a (Q,F)-martingale. Assuming that F is a Brownian filtration and that L is continuous, prove that

Xt + ∫_0^t ( (1/ℓs) d〈X, ℓ〉s − (1/Ls) d〈X, L〉s )

is a (G,Q)-martingale. In the general case, prove that

Xt + ∫_0^t (Ls−/Ls) ( (1/ℓs−) d[X, ℓ]s − (1/Ls−) d[X, L]s )

is a (G,Q)-martingale. See Jeulin and Yor [48]. C

Exercise 2.1.9 Assume that F^(L)_t = Ft ∨ σ(L), where L is a random variable. Find under which conditions on L the immersion property holds. C

Exercise 2.1.10 Construct an example where some F-martingales are G-martingales, but not all F-martingales are G-martingales. C

Exercise 2.1.11 Assume that F ⊂ G and that (H) holds for F and G. Let τ be a G-stopping time. Prove that (H) holds for F and F^τ = F ∨ H, where Ht = σ(τ ∧ t). Let G̃ be such that F ⊂ G̃ ⊂ G. Prove that F is immersed in G̃. C


2.2 Bridges

2.2.1 The Brownian Bridge

Rather than studying ab initio the general problem of initial enlargement, we discuss an interesting example. Let us start with a BM (Bt, t ≥ 0) and its natural filtration F^B. Define a new filtration as F^(B1)_t = F^B_t ∨ σ(B1). In this filtration, the process (Bt, t ≥ 0) is no longer a martingale. It is easy to be convinced of this by looking at the process (E(B1|F^(B1)_t), t ≤ 1): this process is identically equal to B1, not to Bt, hence (Bt, t ≥ 0) is not an F^(B1)-martingale. However, (Bt, t ≥ 0) is an F^(B1)-semi-martingale, as follows from Proposition 2.2.2 below.

Before giving this proposition, we recall some facts on Brownian bridge.

The Brownian bridge (bt, 0 ≤ t ≤ 1) is defined as the conditioned process (Bt, t ≤ 1 | B1 = 0). Note that Bt = (Bt − tB1) + tB1 where, from the Gaussian property, the process (Bt − tB1, t ≤ 1) and the random variable B1 are independent. Hence (bt, 0 ≤ t ≤ 1) law= (Bt − tB1, 0 ≤ t ≤ 1). The Brownian bridge is a Gaussian process, with zero mean and covariance function s(1 − t), s ≤ t. Moreover, it satisfies b0 = b1 = 0.

We can represent the Brownian bridge between 0 and y during the time interval [0, 1] as

(Bt − tB1 + ty ; t ≤ 1) .

More generally, the Brownian bridge between x and y during the time interval [0, T] may be expressed as

( x + Bt − (t/T)BT + (t/T)(y − x) ; t ≤ T ) ,

where (Bt; t ≤ T) is a standard BM starting from 0.

Exercise 2.2.1 Prove that ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds is convergent.
Prove that, for 0 ≤ s < t ≤ 1, E(Bt − Bs | B1 − Bs) = ((t − s)/(1 − s)) (B1 − Bs). C

Decomposition of the BM in the enlarged filtration F(B1)

Proposition 2.2.2 Let F^(B1)_t = ∩_{ε>0} F_{t+ε} ∨ σ(B1). The process

βt = Bt − ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds

is an F^(B1)-martingale, and an F^(B1)-Brownian motion. In other words,

Bt = βt + ∫_0^{t∧1} (B1 − Bs)/(1 − s) ds

is the decomposition of B as an F^(B1)-semi-martingale.

Proof: Note that the definition of F^(B1) is made so that the right-continuity assumption is satisfied. We shall work with F^(B1)_s = Fs ∨ σ(B1) = Fs ∨ σ(B1 − Bs). Then, since Fs is independent of (Bs+h − Bs, h ≥ 0), one has

E(Bt − Bs | F^(B1)_s) = E(Bt − Bs | B1 − Bs) .


For t < 1,

E(∫_s^t (B1 − Bu)/(1 − u) du | F^(B1)_s) = ∫_s^t (1/(1 − u)) E(B1 − Bu | B1 − Bs) du
 = ∫_s^t (1/(1 − u)) (B1 − Bs − E(Bu − Bs | B1 − Bs)) du
 = ∫_s^t (1/(1 − u)) (B1 − Bs − ((u − s)/(1 − s))(B1 − Bs)) du
 = ((B1 − Bs)/(1 − s)) ∫_s^t du = ((t − s)/(1 − s)) (B1 − Bs) .

It follows that

E(βt − βs | F^(B1)_s) = 0 ,

hence β is an F^(B1)-martingale (and an F^(B1)-Brownian motion) (WHY?). ¤
It follows that if M is an F-local martingale such that ∫_0^1 (1/√(1 − s)) d|〈M, B〉|s is finite, then

M̃t = Mt − ∫_0^{t∧1} (B1 − Bs)/(1 − s) d〈M, B〉s

is an F^(B1)-local martingale.
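The decomposition above can be checked numerically. The following Monte-Carlo sketch (step sizes, horizon and sample counts are arbitrary choices, not part of the text) simulates B on [0, 1/2], draws B1, builds βt = Bt − ∫_0^t (B1 − Bs)/(1 − s) ds by a left-point Riemann sum, and verifies that β_{1/2} has variance 1/2 and is uncorrelated with B1, as expected for an F^(B1)-Brownian motion independent of F^(B1)_0:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t_end = 10_000, 500, 0.5
dt = t_end / n_steps
# Brownian paths on [0, t_end] and the terminal value B_1
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
B1 = B[:, -1] + rng.normal(0.0, np.sqrt(1.0 - t_end), size=n_paths)
# left-point Riemann sum of int_0^t (B_1 - B_s)/(1 - s) ds
s = np.linspace(0.0, t_end, n_steps + 1)[:-1]
drift = np.cumsum((B1[:, None] - B[:, :-1]) / (1.0 - s) * dt, axis=1)
beta_end = B[:, -1] - drift[:, -1]
var_beta = beta_end.var()                  # should be close to t_end = 0.5
cov_beta_B1 = np.cov(beta_end, B1)[0, 1]   # should be close to 0
```

The independence of β and B1 (Exercise 2.2.4) is what makes the empirical covariance vanish here.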

Comment 2.2.3 The singularity of (B1 − Bt)/(1 − t) at t = 1, i.e., the fact that (B1 − Bt)/(1 − t) is not square integrable between 0 and 1, prevents a Girsanov change of measure transforming the (P, F^(B1)) semi-martingale B into a (Q, F^(B1)) martingale.

We obtain that the standard Brownian bridge b is a solution of the following stochastic equation (take care about the change of notation):

dbt = −(bt/(1 − t)) dt + dWt , 0 ≤ t < 1 ; b0 = 0 .

The solution of the above equation is bt = (1 − t) ∫_0^t (1/(1 − s)) dWs, which is a Gaussian process with zero mean and covariance s(1 − t), s ≤ t.

Exercise 2.2.4 Using the notation of Proposition 2.2.2, prove that B1 and β are independent. Check that the projection of β on F^B is equal to B.
Hint: The F^(B1) BM β is independent of F^(B1)_0. C

Exercise 2.2.5 Consider the SDE

dXt = −(Xt/(1 − t)) dt + dWt , 0 ≤ t < 1 ; X0 = 0 .

1. Prove that

Xt = (1 − t) ∫_0^t dWs/(1 − s) , 0 ≤ t < 1 .

2. Prove that (Xt, t ≥ 0) is a Gaussian process. Compute its expectation and its covariance.

3. Prove that lim_{t→1} Xt = 0.


C

Exercise 2.2.6 (See Jeulin and Yor [49]) Let Xt = ∫_0^t ϕs dBs, where ϕ is predictable and such that ∫_0^t ϕs² ds < ∞. Prove that the following assertions are equivalent:

1. X is an F^(B1)-semimartingale with decomposition

Xt = ∫_0^t ϕs dβs + ∫_0^{t∧1} ((B1 − Bs)/(1 − s)) ϕs ds

2. ∫_0^1 |ϕs| |B1 − Bs|/(1 − s) ds < ∞

3. ∫_0^1 |ϕs|/√(1 − s) ds < ∞

C

2.2.2 Poisson Bridge

Let N be a Poisson process with constant intensity λ, let F^N_t = σ(Ns, s ≤ t) be its natural filtration, and let T > 0 be a fixed time. The process Mt = Nt − λt is a martingale. Let G*_t = σ(Ns, s ≤ t; NT) be the natural filtration of N enlarged with the terminal value NT of the process N.

Proposition 2.2.7 Assume that λ = 1. The process

ηt = Mt − ∫_0^{t∧T} (MT − Ms)/(T − s) ds

is a G*-martingale with predictable bracket, for t ≤ T,

Λt = ∫_0^t (NT − Ns)/(T − s) ds .

Proof: For 0 < s < t < T,

E(Nt − Ns | G*_s) = E(Nt − Ns | NT − Ns) = ((t − s)/(T − s)) (NT − Ns) ,

where the last equality follows from the fact that, if X and Y are independent with Poisson laws with parameters µ and ν respectively, then

P(X = k | X + Y = n) = (n!/(k!(n − k)!)) α^k (1 − α)^{n−k} , where α = µ/(µ + ν) .

Hence,

E(∫_s^t du (NT − Nu)/(T − u) | G*_s) = ∫_s^t (du/(T − u)) (NT − Ns − E(Nu − Ns | G*_s))
 = ∫_s^t (du/(T − u)) (NT − Ns − ((u − s)/(T − s))(NT − Ns))
 = ∫_s^t (du/(T − s)) (NT − Ns) = ((t − s)/(T − s)) (NT − Ns) .

Therefore,

E(Nt − Ns − ∫_s^t (NT − Nu)/(T − u) du | G*_s) = ((t − s)/(T − s))(NT − Ns) − ((t − s)/(T − s))(NT − Ns) = 0

and the result follows. ¤
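The conditional expectation used in the proof can be illustrated by simulation: since N has independent Poisson increments, given NT − Ns = n each of the n jumps in (s, T] falls in (s, t] with probability (t − s)/(T − s). The sketch below (all parameter values are arbitrary choices) compares the empirical conditional mean of Nt − Ns with ((t − s)/(T − s)) n:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, s, t, T = 1.0, 2.0, 5.0, 10.0
n_samples = 200_000
# independent Poisson increments of N over (s, t] and (t, T]
inc_st = rng.poisson(lam * (t - s), n_samples)   # N_t - N_s
inc_tT = rng.poisson(lam * (T - t), n_samples)   # N_T - N_t
inc_sT = inc_st + inc_tT                          # N_T - N_s
# empirical conditional mean of N_t - N_s given N_T - N_s = n, divided by n
ratios = []
for n in (5, 8, 11):
    sel = inc_sT == n
    ratios.append(inc_st[sel].mean() / n)
# each ratio should be close to (t - s)/(T - s) = 3/8
```

This is exactly the binomial-thinning identity P(X = k | X + Y = n) used in the proof, with α = (t − s)/(T − s).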

Comment 2.2.8 Poisson bridges are studied in Jeulin and Yor [49]. This kind of enlargement of filtration is used for modelling insider trading in Elliott and Jeanblanc [27], Grorud and Pontier [34] and Kohatsu-Higa and Øksendal [54].

Exercise 2.2.9 Prove that, for any enlargement of filtration, the compensated martingale M remains a semi-martingale.
Hint: M has bounded variation. C

Exercise 2.2.10 Prove that any F^N-martingale is a G*-semimartingale. C

Exercise 2.2.11 Prove that

ηt = Nt − ∫_0^{t∧T} (NT − Ns)/(T − s) ds − (t − T)^+ .

Prove that

〈η〉t = ∫_0^{t∧T} (NT − Ns)/(T − s) ds + (t − T)^+ .

Therefore, (ηt, t ≥ 0) is a compensated G*-Poisson process, time-changed by ∫_0^t (NT − Ns)/(T − s) ds, i.e., ηt = M̃(∫_0^t (NT − Ns)/(T − s) ds), where (M̃(t), t ≥ 0) is a compensated Poisson process. C

Exercise 2.2.12 A process X fulfills the harness property if

E( (Xt − Xs)/(t − s) | F_{s0], [T} ) = (XT − Xs0)/(T − s0)

for s0 ≤ s < t ≤ T, where F_{s0], [T} = σ(Xu, u ≤ s0, u ≥ T). Prove that a process with the harness property satisfies

E(Xt | F_{s], [T}) = ((T − t)/(T − s)) Xs + ((t − s)/(T − s)) XT ,

and conversely. Prove that, if X satisfies the harness property, then, for any fixed T,

M^T_t = Xt − ∫_0^t du (XT − Xu)/(T − u) , t < T ,

is an F_{t], [T}-martingale, and conversely. See [3M] for more comments. C

2.2.3 Insider trading

Brownian Bridge

We consider a first example. Let

dSt = St(µ dt + σ dBt) ,

where µ and σ are constants, be the price of a risky asset. Assume that the riskless asset has a constant interest rate r.

The wealth of an agent is

dXt = rXt dt + π̃t(dSt − rSt dt) = rXt dt + πtσXt(dBt + θ dt) , X0 = x ,


where θ = (µ − r)/σ and πt = π̃tSt/Xt is assumed to be an F^B-adapted process. Here π̃ is the number of shares of the risky asset, and π the proportion of wealth invested in the risky asset. It follows that

ln(X^{π,x}_T) = ln x + ∫_0^T (r − ½πs²σ² + θπsσ) ds + ∫_0^T σπs dBs .

Then,

E(ln(X^{π,x}_T)) = ln x + ∫_0^T E(r − ½πs²σ² + θπsσ) ds .

The solution of max E(ln(X^{π,x}_T)) is πs = θ/σ, and

sup E(ln(X^{π,x}_T)) = ln x + T (r + ½θ²) .

Note that, if the coefficients r, µ and σ are F-adapted, the same computation leads to

sup E(ln(X^{π,x}_T)) = ln x + ∫_0^T E(rt + ½θt²) dt ,

where θt = (µt − rt)/σt.

We now enlarge the filtration with S1 (or, equivalently, with B1 (WHY?)). In the enlarged filtration, setting, for t < 1, αt = (B1 − Bt)/(1 − t), the dynamics of S are

dSt = St((µ + σαt) dt + σ dβt) ,

and the dynamics of the wealth are

dXt = rXt dt + πtσXt(dβt + θt dt) , X0 = x ,

with θt = (µ − r)/σ + αt. The solution of max E(ln(X^{π,x}_T)) is πs = θs/σ. Then, for T < 1,

ln(X^{π,x,*}_T) = ln x + ∫_0^T (r + ½θs²) ds + ∫_0^T σπs dβs

E(ln(X^{π,x,*}_T)) = ln x + ∫_0^T (r + ½(θ² + E(αs²) + 2θE(αs))) ds = ln x + (r + ½θ²)T + ½ ∫_0^T E(αs²) ds ,

where we have used the fact that E(αt) = 0 (if the coefficients r, µ and σ are F-adapted, αt is orthogonal to Ft, hence E(αtθt) = 0). Let

V^F(x) = max{E(ln(X^{π,x}_T)) ; π is F-admissible}
V^G(x) = max{E(ln(X^{π,x}_T)) ; π is G-admissible} .

Then V^G(x) = V^F(x) + ½ E(∫_0^T αs² ds) = V^F(x) − ½ ln(1 − T).

If T = 1, the value function is infinite: there is an arbitrage opportunity, and there does not exist an e.m.m. such that the discounted price process (e^{−rt}St, t ≤ 1) is a G-martingale. However, for any ε ∈ ]0, 1], there exists a uniformly integrable G-martingale L defined as

dLt = −((µ − r + σαt)/σ) Lt dβt , t ≤ 1 − ε , L0 = 1 ,

such that, setting dQ|Gt = LtdP|Gt, the process (e^{−rt}St, t ≤ 1 − ε) is a (Q,G)-martingale.

This is the main point in the theory of insider trading, where the knowledge of the terminal value of the underlying asset creates an arbitrage opportunity, which is effective at time 1.

It is important to mention that, in both cases, the wealth of the investor satisfies Xte^{−rt} = x + ∫_0^t π̃s d(Sse^{−rs}). The insider has a larger class of portfolios, and in order to give a meaning to the stochastic integral for processes π̃ which are not adapted with respect to the semi-martingale S, one has to give the decomposition of this semi-martingale in the larger filtration.
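The insider's extra logarithmic utility, ½ E(∫_0^T αs² ds) = −½ ln(1 − T), can be checked by Monte Carlo, using E(αs²) = 1/(1 − s). The following sketch (horizon, grid and sample size are arbitrary choices) estimates the left-hand side from simulated paths:

```python
import numpy as np

rng = np.random.default_rng(2)
T_h, n_steps, n_paths = 0.5, 500, 10_000
dt = T_h / n_steps
# Brownian paths on [0, T_h] and the terminal value B_1
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
B1 = B[:, -1] + rng.normal(0.0, np.sqrt(1.0 - T_h), n_paths)
# alpha_s = (B_1 - B_s)/(1 - s) on the left endpoints of the grid
s = np.linspace(0.0, T_h, n_steps + 1)[:-1]
alpha2 = ((B1[:, None] - B[:, :-1]) / (1.0 - s)) ** 2
gain_mc = 0.5 * alpha2.mean(axis=0).sum() * dt   # ~ 0.5 * int_0^T E(alpha_s^2) ds
gain_exact = -0.5 * np.log(1.0 - T_h)            # = -0.5*ln(1-T)
```

As T approaches 1 the estimated gain blows up, in line with the arbitrage discussed above.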


Exercise 2.2.13 Prove carefully that there does not exist any e.m.m. in the enlarged filtration. Make precise the arbitrage opportunity. C

Poisson Bridge

We suppose that the interest rate is null and that the risky asset has dynamics

dSt = St−(µ dt + σ dWt + φ dMt) ,

where M is the compensated martingale of a standard Poisson process. Let (Xt, t ≥ 0) be the wealth of an un-informed agent whose portfolio is described by (πt), the proportion of wealth invested in the asset S at time t. Then

dXt = πtXt−(µ dt + σ dWt + φ dMt) . (2.2.1)

Then,

Xt = x exp( ∫_0^t πs(µ − φλ) ds + ∫_0^t σπs dWs − ½ ∫_0^t σ²πs² ds + ∫_0^t ln(1 + πsφ) dNs ) .

Assuming that the stochastic integrals with respect to W and M are martingales,

E[ln(XT)] = ln(x) + ∫_0^T E(µπs − ½σ²πs² + λ(ln(1 + φπs) − φπs)) ds .

Our aim is to solve

V(x) = sup_π E(ln(X^{x,π}_T)) .

We can then maximize the quantity under the integral sign for each s and ω. The maximum attainable wealth for the uninformed agent is obtained using the constant strategy π̂ for which

π̂µ + λ[ln(1 + π̂φ) − π̂φ] − ½π̂²σ² = sup_π [ πµ + λ[ln(1 + πφ) − πφ] − ½π²σ² ] .

Hence

π̂ = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ − σ²)² + 4σ²φµ) ) .

The quantity under the square root equals (µφ − φ²λ + σ²)² + 4σ²φ²λ and is non-negative. The sign to be used depends on the sign of quantities related to the parameters; the optimal π̂ is the only root such that 1 + φπ̂ > 0. Solving equation (2.2.1), it can be proved that the optimal wealth is Xt = x(Lt)^{−1}, where

dLt = Lt−( −σπ̂ dWt + (1/(1 + φπ̂) − 1) dMt )

is the Radon-Nikodym density of an equivalent martingale measure. In this incomplete market, we thus obtain the utility equivalent martingale measure defined by Davis [16] and the duality approach (see Kramkov and Schachermayer).
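The closed-form root can be cross-checked against a direct numerical maximization of the concave criterion (a sketch; the parameter values are arbitrary and are not taken from the text):

```python
import numpy as np

mu, sigma, phi, lam = 0.08, 0.3, 0.5, 1.0
# closed-form root (the "+" branch gives 1 + phi*pi_hat > 0 for these parameters)
disc = (mu*phi - phi**2*lam + sigma**2)**2 + 4*sigma**2*phi**2*lam
pi_hat = (mu*phi - phi**2*lam - sigma**2 + np.sqrt(disc)) / (2*sigma**2*phi)

# brute-force maximization of pi*mu + lam*(ln(1+pi*phi) - pi*phi) - pi^2*sigma^2/2
# over the admissible region 1 + pi*phi > 0
grid = np.linspace(-1.0/phi + 1e-6, 5.0, 600_001)
crit = grid*mu + lam*(np.log1p(grid*phi) - grid*phi) - 0.5*grid**2*sigma**2
pi_num = grid[np.argmax(crit)]
```

The grid maximizer and the algebraic root agree to the grid resolution, which also confirms the rewriting of the discriminant stated above.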

We assume that the informed agent knows NT from time 0. Therefore, his wealth evolves according to the dynamics

dX*_t = πtX*_{t−}[ (µ + φ(Λt − λ)) dt + σ dWt + φ dM*_t ] ,

where Λ is given in Proposition 2.2.7. Exactly the same computations as above can be carried out. In fact these only require changing µ to µ + φ(Λt − λ) and the intensity of the jumps from λ to Λt. The optimal portfolio π* is now such that µ − λφ + φΛs/(1 + π*φ) − π*σ² = 0, and is given by

π*_s = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ + σ²)² + 4σ²φ²Λs) ) .


The optimal wealth is X*_t = x(L*_t)^{−1}, where

dL*_t = L*_{t−}( −σπ*_s dWt + (1/(1 + φπ*_s) − 1) dM*_t ) .

Whereas the optimal portfolio of the uninformed agent is constant, the optimal portfolio of the informed agent is time-varying and has a jump as soon as a jump occurs for the prices. The informed agent must maximize at each (s, ω) the quantity

πµ + Λs(ω) ln(1 + πφ) − λπφ − ½π²σ² .

Consequently,

sup_π [ πµ + Λs ln(1 + πφ) − λπφ − ½π²σ² ] ≥ π̂µ + Λs ln(1 + π̂φ) − λπ̂φ − ½π̂²σ² .

Now, E[Λs] = λ, so

sup_π E(ln X*_T) = ln x + sup_π ∫_0^T E(πµ + Λs ln(1 + πφ) − λπφ − ½π²σ²) ds
 ≥ ln x + ∫_0^T (π̂µ + λ ln(1 + π̂φ) − λπ̂φ − ½π̂²σ²) ds = sup_π E(ln XT) .

Therefore, the maximum expected wealth for the informed agent is greater than that of the uninformed agent. This is obvious, because the informed agent can use any strategy available to the uninformed agent.

Exercise 2.2.14 Solve the same problem for power utility function. C

2.3 Initial Enlargement

Let F be a Brownian filtration generated by B. We consider F^(L)_t = Ft ∨ σ(L), where L is a real-valued random variable. More precisely, in order to satisfy the usual hypotheses, redefine

F^(L)_t = ∩_{ε>0} F_{t+ε} ∨ σ(L) .

Proposition 2.3.1 One has
(i) Every F^(L)-optional process Y is of the form Yt(ω) = yt(ω, L(ω)) for some F ⊗ B(R^d)-optional process (yt(ω, u), t ≥ 0).
(ii) Every F^(L)-predictable process Y is of the form Yt(ω) = yt(ω, L(ω)), where (t, ω, u) → yt(ω, u) is a P(F) ⊗ B(R^d)-measurable function.

We shall now simply write yt(L) for yt(ω, L(ω)).

We recall that there exists a family of regular conditional distributions Pt(ω, dx) such that Pt(·, A) is a version of E(1_{L∈A}|Ft) and, for any ω, Pt(ω, ·) is a probability on R.

2.3.1 Jacod’s criterion

In what follows, for y(u) a family of martingales and X a martingale, we shall write 〈y(L), X〉 for 〈y(u), X〉|u=L.


Proposition 2.3.2 (Jacod's Criterion.) Suppose that, for each t < T, Pt(ω, dx) << ν(dx), where ν is the law of L and T is a fixed horizon, T ≤ ∞. Then, every F-semi-martingale (Xt, t < T) is also an F^(L)-semi-martingale. Moreover, if Pt(ω, dx) = pt(ω, x)ν(dx) and if X is an F-martingale, the process

X̂t = Xt − ∫_0^t d〈p·(L), X〉s / ps−(L) , t < T ,

is an F^(L)-martingale. In other words, the decomposition of the F^(L)-semi-martingale X is

Xt = X̂t + ∫_0^t d〈p·(L), X〉s / ps−(L) .

Proof: In a first step, we show that, for any θ, the process p(θ) is an F-martingale. One has to show that, for ζs ∈ bFs and s < t,

E(pt(θ)ζs) = E(ps(θ)ζs) .

This follows from E(E(1_{L∈A}|Ft)ζs) = E(E(1_{L∈A}|Fs)ζs) for any Borel set A.

In a second step, we assume that F-martingales are continuous, and that X and p are square integrable. In that case, 〈p·(L), X〉 exists. Let Fs be a bounded Fs-measurable random variable and let h : R → R be a bounded Borel function. Then the variable Fsh(L) is F^(L)_s-measurable and, if a decomposition of the form Xt = X̂t + ∫_0^t dKu(L) holds, the martingale property of X̂ should imply that E(Fsh(L)(X̂t − X̂s)) = 0, hence

E(Fsh(L)(Xt − Xs)) = E( Fsh(L) ∫_s^t dKu(L) ) .

We can write:

E(Fsh(L)(Xt − Xs)) = E( Fs(Xt − Xs) ∫_{−∞}^{∞} h(θ)pt(θ)ν(dθ) )
 = ∫_R h(θ) E( Fs(Xtpt(θ) − Xsps(θ)) ) ν(dθ)
 = ∫_R h(θ) E( Fs ∫_s^t d〈X, p(θ)〉v ) ν(dθ) ,

where the first equality comes from a conditioning w.r.t. Ft, the second from the martingale property of p(θ), and the third from the fact that both X and p(θ) are square-integrable F-martingales. Moreover:

E( Fsh(L) ∫_s^t dKv(L) ) = E( Fs ∫_R h(θ) ∫_s^t dKv(θ) pt(θ)ν(dθ) )
 = ∫_R h(θ) E( Fs ∫_s^t pv(θ)dKv(θ) ) ν(dθ) ,

where the first equality comes from the definition of p, and the second from the martingale property of p(θ). By equalization of these two quantities, we obtain that it is necessary to have dKu(θ) = d〈X, p(θ)〉u / pu(θ). ¤

The stability of initial times under a change of probability is rather obvious.


Regularity Conditions ♠

One of the major difficulties is to prove the existence of a universal càdlàg martingale version of the family of densities. Fortunately, results of Jacod [40] or Stricker and Yor [73] help us to solve this technical problem. See also [1] for a detailed discussion. We emphasize that these results are the most important part of enlargement of filtration theory.

Jacod ([40], Lemmes 1.8 and 1.10) establishes the existence of a universal càdlàg version of the density process in the following sense: there exists a non-negative function pt(ω, θ), càdlàg in t, optional w.r.t. the filtration F̃ on Ω̃ = Ω × R generated by Ft ⊗ B(R), such that

• for any θ, p·(θ) is an F-martingale; moreover, denoting ζ^θ = inf{t : pt−(θ) = 0}, one has p·(θ) > 0 and p−(θ) > 0 on [0, ζ^θ), and p·(θ) = 0 on [ζ^θ, ∞).

• For any bounded family (Yt(ω, θ), t ≥ 0), measurable w.r.t. P(F) ⊗ B(R), the F-predictable projection of the process Yt(ω, L(ω)) is the process Y^(p)_t = ∫ pt−(θ)Yt(θ)ν(dθ).

• Let m be a continuous F-local martingale. There exists a P(F̃)-measurable function k such that

〈p(θ), m〉 = (k(θ)p(θ)−) ⋆ 〈m〉 , P-a.s. on ∩_n {t ≤ R^θ_n} ,

where R^θ_n = inf{t : pt−(θ) ≤ 1/n}.

• Let m be a local F-martingale. There exist a predictable increasing process A and a P(F̃)-measurable function k such that

〈p(θ), m〉t = ∫_0^t ks(θ)ps−(θ) dAs .

If m is locally square integrable, one can choose A = 〈m〉.

Lemma 2.3.3 The process p(L) does not vanish on [0, T[.

Corollary 2.3.4 Let Z be a random variable taking only a countable number of values. Then every F-semimartingale is an F^(Z)-semimartingale.

Proof: If we denote by

η(dx) = Σ_{k=1}^∞ P(Z = xk) δ_{xk}(dx) ,

where δ_{xk}(dx) is the Dirac measure at xk, the law of Z, then Pt(ω, dx) is absolutely continuous with respect to η, with Radon-Nikodym density

Σ_{k=1}^∞ (P(Z = xk|Ft)/P(Z = xk)) 1_{x=xk} .

Now the result follows from Jacod's theorem. ¤
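For a concrete two-valued illustration (a sketch; the choice Z = 1_{B1>0} is ours, not from the text): here P(Z = 1|Ft) = Φ(Bt/√(1 − t)) and P(Z = 1) = 1/2, so the density above is pt(1) = 2Φ(Bt/√(1 − t)), an F-martingale started at 1. A Monte-Carlo check of E[pt(1)] = 1:

```python
import math
import random

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(3)
t = 0.5
n = 200_000
# sample B_t ~ N(0, t) and average p_t(1) = 2*Phi(B_t/sqrt(1-t))
mean_p = sum(2.0 * Phi(random.gauss(0.0, math.sqrt(t)) / math.sqrt(1.0 - t))
             for _ in range(n)) / n   # should be close to p_0(1) = 1
```

The same check applies to pt(0) = 2(1 − Φ(Bt/√(1 − t))), since the two densities sum to 2 at every (t, ω)-average.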

Exercise 2.3.5 Assume that F is a Brownian filtration. Then, E(∫_0^t d〈p·(L), X〉s/ps−(L) | Ft) is an F-martingale.
Answer: Note that the result is obvious from the decomposition theorem! Indeed, taking expectations w.r.t. Ft on both sides of

X̂t = Xt − ∫_0^t d〈p·(L), X〉s/ps−(L)

leads to

E(X̂t|Ft) = Xt − E( ∫_0^t d〈p·(L), X〉s/ps−(L) | Ft ) ,

and E(X̂t|Ft) is an F-martingale.

Our aim is to check it directly. Writing dpt(u) = pt(u)σt(u)dWt and dXt = xtdWt, we note that P(L ∈ R|Fs) = ∫_R ps(u)ν(du) = 1 implies that

∫_R pt(u)ν(du) = ∫_R p0(u)ν(du) + ∫_0^t dWs ∫_R σs(u)ps(u)ν(du) = 1 + ∫_0^t dWs ∫_R σs(u)ps(u)ν(du) ,

hence ∫_R σs(u)ps(u)ν(du) = 0. The process E(∫_0^t d〈p·(L), X〉s/ps−(L) | Ft) is equal to a martingale plus

∫_0^t E(σs(L)xs|Fs)ds = ∫_0^t ds xs ∫_R σs(u)ps(u)ν(du) = 0 . C

Exercise 2.3.6 Prove that if there exists a probability Q* equivalent to P such that, under Q*, the r.v. L is independent of F∞, then every (P,F)-semi-martingale X is also a (P,F^(L))-semi-martingale. C

2.3.2 Yor’s Method

We follow here Yor [80]. We assume that F is a Brownian filtration. For a bounded Borel function f, let (λt(f), t ≥ 0) be the continuous version of the martingale (E(f(L)|Ft), t ≥ 0). There exists a predictable kernel λt(dx) such that

λt(f) = ∫_R λt(dx)f(x) .

From the predictable representation property applied to the martingale E(f(L)|Ft), there exists a predictable process λ̂(f) such that

λt(f) = E(f(L)) + ∫_0^t λ̂s(f)dBs .

Proposition 2.3.7 We assume that there exists a predictable kernel λ̂t(dx) such that

dt a.s., λ̂t(f) = ∫_R λ̂t(dx)f(x) .

Assume furthermore that, dt × dP a.s., the measure λ̂t(dx) is absolutely continuous with respect to λt(dx):

λ̂t(dx) = ρ(t, x)λt(dx) .

Then, if X is an F-martingale, there exists an F^(L)-martingale X̂ such that

Xt = X̂t + ∫_0^t ρ(s, L)d〈X, B〉s .

Sketch of the proof: Let X be an F-martingale, f a given bounded Borel function and Ft = E(f(L)|Ft). From the hypothesis,

Ft = E(f(L)) + ∫_0^t λ̂s(f)dBs .

Then, for As ∈ Fs, s < t:

E(1_{As}f(L)(Xt − Xs)) = E(1_{As}(FtXt − FsXs)) = E(1_{As}(〈F, X〉t − 〈F, X〉s))
 = E( 1_{As} ∫_s^t d〈X, B〉u λ̂u(f) )
 = E( 1_{As} ∫_s^t d〈X, B〉u ∫_R λu(dx)f(x)ρ(u, x) ) .

Therefore, Vt = ∫_0^t ρ(u, L) d〈X, B〉u satisfies

E(1_{As}f(L)(Xt − Xs)) = E(1_{As}f(L)(Vt − Vs)) .

It follows that, for any Gs ∈ F^(L)_s,

E(1_{Gs}(Xt − Xs)) = E(1_{Gs}(Vt − Vs)) ,

hence (Xt − Vt, t ≥ 0) is an F^(L)-martingale. ¤

Let us write the result of Proposition 2.3.7 in terms of Jacod's criterion. If λt(dx) = pt(x)ν(dx), then

λt(f) = ∫ pt(x)f(x)ν(dx) .

Hence,

d〈λ·(f), B〉t = λ̂t(f)dt = ∫ ν(dx)f(x) dt〈p·(x), B〉t

and therefore

λ̂t(dx)dt = dt〈p·(x), B〉t ν(dx) = (dt〈p·(x), B〉t / pt(x)) λt(dx) .

In the case where λt(dx) = Φ(t, x)dx, with Φ > 0, it is possible to find ψ such that

Φ(t, x) = Φ(0, x) exp( ∫_0^t ψ(s, x)dBs − ½ ∫_0^t ψ²(s, x)ds )

and it follows that λ̂t(dx) = ψ(t, x)λt(dx). Then, if X is an F-martingale of the form Xt = x + ∫_0^t xsdBs, the process (Xt − ∫_0^t ds xs ψ(s, L), t ≥ 0) is an F^(L)-martingale.

2.3.3 Examples

We now give some examples taken from Mansuy and Yor [60] in a Brownian set-up for which we usethe preceding. Here, B is a standard Brownian motion.

I Enlargement with B1. We compare the results obtained in Subsection 2.2.1 and the methodpresented in Subsection 2.3.2. Let L = B1. From the Markov property

E(g(B1)|Ft) = E(g(B1 −Bt + Bt)|Ft) = Fg(Bt, 1− t)

where Fg(y, 1 − t) =∫

g(x)P (1 − t; y, x)dx and P (s; y, x) = 1√2πs

exp(− (x−y)2

2s

). It follows that

λt(dx) = 1√2π(1−t)

exp(− (x−Bt)

2

2(1−t)

)dx. Then

λt(dx) = pt(x)P(B1 ∈ dx) = pt(x)1√2π

e−x2/2

with

pt(x) =1√

(1− t)exp

(− (x−Bt)2

2(1− t)+

x2

2

).


From Itô's formula,

dtpt(x) = pt(x) ((x − Bt)/(1 − t)) dBt .

(This can be considered as a partial check of the martingale property of (pt(x), t ≥ 0).) It follows that d〈p(x), B〉t = pt(x)((x − Bt)/(1 − t))dt, hence

Bt = B̂t + ∫_0^t (B1 − Bs)/(1 − s) ds .

Note that, in the notation of Proposition 2.3.7, one has

λ̂t(dx) = ((x − Bt)/(1 − t)) (1/√(2π(1 − t))) exp( −(x − Bt)²/(2(1 − t)) ) dx .
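A quadrature sketch (the values of t and Bt are arbitrary choices) checks that pt(x) above is indeed a density of the conditional law of B1 given Ft with respect to ν = N(0, 1): pt(x)ν(dx) integrates to 1 and has mean Bt = E(B1|Ft):

```python
import math

t, Bt = 0.4, 0.7   # arbitrary time and observed value of B_t

def p(x):
    """Jacod density p_t(x) for L = B_1 w.r.t. nu = N(0,1)."""
    return math.exp(-(x - Bt)**2 / (2.0*(1.0 - t)) + x**2 / 2.0) / math.sqrt(1.0 - t)

def nu(x):
    """Standard normal density, the law of B_1."""
    return math.exp(-x**2 / 2.0) / math.sqrt(2.0*math.pi)

# left-point Riemann sums on [-8, 8]; p(x)*nu(x) is the N(B_t, 1-t) density
h = 0.001
xs = [i * h for i in range(-8000, 8001)]
mass = sum(p(x) * nu(x) for x in xs) * h       # should be 1
mean = sum(x * p(x) * nu(x) for x in xs) * h   # should be B_t = 0.7
```

Indeed pt(x)ν(x) reduces algebraically to the N(Bt, 1 − t) density, which is the conditional law computed above.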

I Enlargement with M^B = sup_{s≤1} Bs. From Exercise 1.2.1,

E(f(M^B)|Ft) = F(1 − t, Bt, M^B_t)

where M^B_t = sup_{s≤t} Bs, with

F(s, a, b) = √(2/(πs)) ( f(b) ∫_0^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) e^{−(u−a)²/(2s)} du )

and, denoting by δy the Dirac measure at y,

λt(dy) = √(2/(π(1 − t))) [ δ_{M^B_t}(dy) ∫_0^{M^B_t − Bt} exp( −u²/(2(1 − t)) ) du + 1_{y>M^B_t} exp( −(y − Bt)²/(2(1 − t)) ) dy ] .

Hence, by applying Itô's formula,

λ̂t(dy) = √(2/(π(1 − t))) [ δ_{M^B_t}(dy) exp( −(M^B_t − Bt)²/(2(1 − t)) ) + 1_{y>M^B_t} ((y − Bt)/(1 − t)) exp( −(y − Bt)²/(2(1 − t)) ) dy ] .

It follows that

ρ(t, x) = 1_{x>M^B_t} (x − Bt)/(1 − t) + 1_{M^B_t = x} (1/√(1 − t)) (Φ′/Φ)( (x − Bt)/√(1 − t) )

with Φ(x) = √(2/π) ∫_0^x e^{−u²/2} du.

I Enlargement with Z := ∫_0^∞ ϕ(s)dBs, where ∫_0^∞ ϕ²(s)ds < ∞, for a deterministic function ϕ. The above method applies step by step: it is easy to compute λt(dx), since, conditionally on Ft, Z is Gaussian, with mean mt = ∫_0^t ϕ(s)dBs and variance σt² = ∫_t^∞ ϕ²(s)ds. The absolute continuity requirement is satisfied with

ρ(t, x) = ϕ(t) (x − mt)/σt² .

But here, we have to impose an extra integrability condition. For example, if we assume that

∫_0^t |ϕ(s)|/σs ds < ∞ ,

then B is an F^(Z)-semimartingale with canonical decomposition

Bt = B̂t + ∫_0^t ds (ϕ(s)/σs²) ( ∫_s^∞ ϕ(u)dBu ) .

As a particular case, we may take Z = Bt0, for some fixed t0 (i.e., ϕ = 1_{[0,t0]}), and we recover the results for the Brownian bridge. More examples can be found in Jeulin [46] and Mansuy and Yor [60].
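The conditional law used above can be checked by a Monte-Carlo sketch with the (arbitrary) choice ϕ(s) = e^{−s}, truncating the upper integration limit: conditionally on Ft, Z is Gaussian with mean mt and variance σt² = ∫_t^∞ e^{−2s} ds = e^{−2t}/2 for t = 1:

```python
import numpy as np

rng = np.random.default_rng(4)
S_max, n_steps, n_paths = 8.0, 1000, 10_000   # truncate int_0^infty at S_max = 8
dt = S_max / n_steps
s = np.arange(n_steps) * dt                    # left endpoints of the grid
t_idx = 125                                    # corresponds to t = 1.0
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
phi = np.exp(-s)
Z = (phi * dB).sum(axis=1)                     # ~ int_0^infty phi(s) dB_s
m_t = (phi[:t_idx] * dB[:, :t_idx]).sum(axis=1)   # m_t = int_0^t phi(s) dB_s
var_cond = (Z - m_t).var()                     # should be ~ exp(-2)/2 = 0.0677
```

The discretized Z − mt is a sum of increments over (t, S_max], independent of Ft, which is why its variance estimates σt² directly.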

2.3.4 Change of Probability

We assume that Pt, the conditional law of L given Ft, is equivalent to ν for any t > 0, so that the hypotheses of Proposition 2.3.2 hold and that p does not vanish (on the support of ν).

Lemma 2.3.8 The process (1/pt(L), t ≥ 0) is an F^(L)-martingale. Let, for any t, dQ = (1/pt(L))dP on F^(L)_t. Under Q, the r.v. L is independent of F∞. Moreover,

Q|Ft = P|Ft , Q|σ(L) = P|σ(L) .

Proof: Let Zt(L) = 1/pt(L). In a first step, we prove that Z is an F^(L)-martingale. Indeed, E(Zt(L)|F^(L)_s) = Zs(L) if (and only if) E(Zt(L)h(L)As) = E(Zs(L)h(L)As) for any (bounded) Borel function h and any Fs-measurable (bounded) random variable As. From the definition of p, one has

E(Zt(L)h(L)As) = E( ∫_R Zt(x)h(x)pt(x)ν(dx) As ) = E( ∫_R h(x)ν(dx) As ) = ∫_R h(x)ν(dx) E(As) = E(As)E(h(L)) .

The particular case t = s leads to E(Zs(L)h(L)As) = E(h(L))E(As), hence E(Zs(L)h(L)As) = E(Zt(L)h(L)As). Note that, since p0(x) = 1, one has E(1/pt(L)) = 1/p0(L) = 1.
Now, we prove the required independence. From the above,

EQ(h(L)As) = EP(Zs(L)h(L)As) = EP(h(L))EP(As) .

For h = 1 (resp. As = 1), one obtains EQ(As) = EP(As) (resp. EQ(h(L)) = EP(h(L))), and we are done. ¤
This fact appears in Song [72] and plays an important role in Grorud and Pontier [35].
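The measure change can be illustrated by importance sampling (a sketch, with L = B1 and the arbitrary choice t = 1/4; for L = B1 the weight is 1/pt(x) = √(1 − t) exp((x − Bt)²/(2(1 − t)) − x²/2)): reweighting by 1/pt(B1) makes B1 independent of Bt while leaving both marginal laws unchanged:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 0.25, 400_000
Bt = rng.normal(0.0, np.sqrt(t), n)
B1 = Bt + rng.normal(0.0, np.sqrt(1.0 - t), n)
# weight 1/p_t(B_1): Radon-Nikodym density of Q w.r.t. P on F_t^{(L)}
w = np.sqrt(1.0 - t) * np.exp((B1 - Bt)**2 / (2.0*(1.0 - t)) - B1**2 / 2.0)
mean_w = w.mean()                 # E_P[1/p_t(B_1)] = 1
cov_q = np.mean(w * Bt * B1)      # E_Q[B_t B_1] = 0 (independence under Q)
var_q = np.mean(w * B1**2)        # E_Q[B_1^2] = 1 (law of B_1 unchanged)
cov_p = np.mean(Bt * B1)          # E_P[B_t B_1] = t = 0.25 under P
```

The contrast between cov_p ≈ t and cov_q ≈ 0 is the content of the lemma: the two laws coincide on Ft and on σ(L) separately, but not jointly.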

Lemma 2.3.9 Let m be a (P,F)-martingale. The process m^L defined by m^L_t = mt/pt(L) is a (P,F^(L))-martingale and satisfies E(m^L_t|Ft) = mt.

Proof: To establish the martingale property, it suffices to check that, for s < t and A ∈ F^(L)_s, one has E(m^L_t 1_A) = E(m^L_s 1_A), which is equivalent to EQ(mt1_A) = EQ(ms1_A). The last equality follows from the fact that the (F,P)-martingale m is also an (F,Q)-martingale (indeed, P and Q coincide on F), hence an (F^(L),Q)-martingale (by independence of L and F∞ under Q); Bayes' rule then shows that m^L is a (P,F^(L))-martingale. Noting that E(1/pt(L)|Ft) = 1 (take As = 1 and h = 1 in the preceding proof), the equality

E(m^L_t|Ft) = mtE(1/pt(L)|Ft) = mt

ends the proof. ¤
Of course, the converse holds true: if there exists a probability Q equivalent to P such that, under Q, the r.v. L is independent of F∞, then (P,F)-martingales are (P,F^(L))-semi-martingales (WHY?).

Exercise 2.3.10 Prove that (Y_t(L), t ≥ 0) is a (P, F^{(L)})-martingale if and only if, for each x, (Y_t(x)p_t(x), t ≥ 0) is an F-martingale. C


Exercise 2.3.11 Let F be a Brownian filtration. Prove that, if X is a square integrable (P, F^{(L)})-martingale, then there exist a function h and a process ψ such that

X_t = h(L) + ∫_0^t ψ_s(L) dB_s.

C

Equivalent Martingale Measures

We consider a financial market where some assets, with prices S, are traded, as well as a riskless asset S^0. The dynamics of S under the so-called historical probability are assumed to be semimartingales (WHY?). We denote by Q^F the set of probability measures, equivalent to P on F, such that the discounted prices S are F-(local) martingales, where F is the filtration generated by prices. We now enlarge the filtration and consider prices as F^{(L)}-semimartingales (assuming that this is the case). We denote by Q^{(L)} the set of probability measures, equivalent to P on F^{(L)}, such that the discounted prices S are F^{(L)}-(local) martingales.

We assume that there exists a probability Q equivalent to P such that, under Q, the r.v. L is independent of F_∞. Then, Q^{(L)} is equal to the set of probability measures, equivalent to Q on F^{(L)}, such that the discounted prices S are F^{(L)}-(local) martingales. Let P* ∈ Q^F and L its Radon-Nikodym density (i.e., dP*|_{F_t} = L_t dP|_{F_t}). The probability Q* defined as

dQ*|_{F^{(L)}_t} = L_t dQ|_{F^{(L)}_t} = L_t (1/p_t(L)) dP|_{F^{(L)}_t}

belongs to Q^{(L)}. Indeed, S being a (P*, F)-martingale, SL is a (P, F)-martingale, hence SL/p(L) is a (P, F^{(L)})-martingale and S is a (Q*, F^{(L)})-martingale. The probability Q^h defined as

dQ^h|_{F^{(L)}_t} = h(L) L_t (1/p_t(L)) dP|_{F^{(L)}_t},

where E_P(h(L)) = 1, belongs also to Q^{(L)}.

Conversely, if Q* ∈ Q^{(L)}, with Radon-Nikodym density L*, then Q̂ defined as dQ̂|_{F_t} = E(L*_t|F_t) dP|_{F_t} belongs to Q^F. Indeed (if the interest rate is null), SL* is a (P, F^{(L)})-martingale, hence Sℓ* is a (P, F)-martingale, where ℓ*_t = E(L*_t|F_t).

2.3.5 An Example from Shrinkage ♠

Let W = (W_t)_{t≥0} be a Brownian motion defined on the probability space (Ω, G, P), and τ be a random time, independent of W and such that P(τ > t) = e^{−λt}, for all t ≥ 0 and some λ > 0 fixed. We define X = (X_t)_{t≥0} as the process

X_t = x exp((a + b − σ²/2) t − b(t − τ)^+ + σ W_t),

where a, b, σ and x are given strictly positive constants. It is easily shown that the process X solves the stochastic differential equation

dX_t = X_t (a + b 11_{τ>t}) dt + X_t σ dW_t.

Let us take as F = (F_t)_{t≥0} the natural filtration of the process X, that is, F_t = σ(X_s | 0 ≤ s ≤ t) for t ≥ 0 (note that F is smaller than F^W ∨ σ(τ)). From

X_t = x + ∫_0^t X_s (a + b 11_{τ>s}) ds + ∫_0^t X_s σ dW_s

it follows that (from Exercise 1.2.5)


dX_t = X_t (a + b G_t) dt + dmart.

Identifying the brackets, one has dmart = X_t σ dW̄_t, where W̄ is a martingale with bracket t, hence a Brownian motion. It follows that the process X admits the following representation in its own filtration:

dX_t = X_t (a + b G_t) dt + X_t σ dW̄_t.

Here, G = (G_t)_{t≥0} is the Azéma supermartingale given by G_t = P(τ > t | F_t) and the innovation process W̄ = (W̄_t)_{t≥0} defined by

W̄_t = W_t + (b/σ) ∫_0^t (11_{τ>s} − G_s) ds

is a standard F-Brownian motion. It is easily shown, using arguments based on the notion of strong solutions of stochastic differential equations (see, e.g., [58, Chapter IV]), that the natural filtration of W̄ coincides with F. It follows from [58, Chapter IX] that the process G solves the stochastic differential equation

dG_t = −λ G_t dt + (b/σ) G_t (1 − G_t) dW̄_t. (2.3.1)

Observe that the process n = (n_t)_{t≥0} with n_t = e^{λt} G_t admits the representation

dn_t = d(e^{λt} G_t) = (b/σ) e^{λt} G_t (1 − G_t) dW̄_t

and thus n is an F-martingale (to establish the true martingale property, note that the process (G_t(1 − G_t))_{t≥0} is bounded). The equality (2.3.1) provides the (additive) Doob-Meyer decomposition of the supermartingale G, while G_t = (G_t e^{λt}) e^{−λt} gives its multiplicative decomposition. It follows from these decompositions that the F-intensity rate of τ is λ, so that the process M = (M_t)_{t≥0} with M_t = 11_{τ≤t} − λ(t ∧ τ) is a G-martingale.
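A minimal Euler sketch of (2.3.1) (the parameter values λ, b, σ below are illustrative assumptions, not from the text), checking numerically that n_t = e^{λt}G_t keeps the constant mean n_0 = G_0 = 1, as the martingale property requires:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, b, sigma = 0.5, 0.8, 0.3          # illustrative parameters
n_paths, n_steps, T = 50000, 400, 1.0
dt = T / n_steps

G = np.ones(n_paths)                   # G_0 = P(tau > 0) = 1
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    G += -lam * G * dt + (b / sigma) * G * (1.0 - G) * dW
    G = np.clip(G, 0.0, 1.0)           # keep the Euler scheme inside [0, 1]

print(np.exp(lam * T) * G.mean())      # mean of n_T = e^{lam T} G_T, near 1
```

Equivalently, E(G_T) ≈ e^{−λT} = P(τ > T), consistent with the multiplicative decomposition.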

It follows from the definition of the conditional survival probability process G and the fact that (G_t e^{λt})_{t≥0} is a martingale that the expression

P(τ > u | F_t) = E[P(τ > u | F_u) | F_t] = E[G_u e^{λu} | F_t] e^{−λu} = G_t e^{λ(t−u)}

holds for 0 ≤ t < u.

From the standard arguments in [71, Chapter IV, Section 4], resulting from the application of Bayes' formula, we obtain that the conditional survival probability process can be expressed as

P(τ > u | F_t) = 1 − Y_{u∧t}/Y_t + (Z_{u∧t}/Y_t) e^{−λu}

for all t, u ≥ 0. Here, the process Y = (Y_t)_{t≥0} is defined by

Y_t = ∫_0^t Z_s λ e^{−λs} ds + Z_t e^{−λt}, (2.3.2)

and the process Z = (Z_t)_{t≥0} is given by

Z_t = exp( (b/σ²) ( ln(X_t/x) − ((2a + b − σ²)/2) t ) ). (2.3.3)

Moreover, by standard computations, we see that

dZ_t = (b/σ²) Z_t (b G_t dt + σ dW̄_t)


and hence the process 1/Y = (1/Y_t)_{t≥0}, or equivalently (e^{λt} G_t/Z_t)_{t≥0}, admits the representation

d(1/Y_t) = d(e^{λt} G_t/Z_t) = −(b/σ) (G_t/Y_t) dW̄_t

and thus it is an F-local martingale (in fact, an F-martingale since, for any u > t, one has

1/Y_t = (e^{λu}/Z_t) P(τ > u | F_t) ).

Hence, for each u ≥ 0 fixed, it can be checked that the process Z/Y = (Z_t/Y_t)_{t≥0} defines an F-martingale, as it must be (this process being equal to (e^{λt} G_t)_{t≥0}).

We also get the representation

P(τ > u | F_t) = ∫_u^∞ α_t(v) η(dv)

for t, u ≥ 0, and hence the conditional density of τ is given by

α_t(u) = Z_{u∧t}/Y_t,

where η(du) ≡ P(τ ∈ du) = λ e^{−λu} du. In particular, from (2.3.2), we have

∫_0^∞ α_t(u) η(du) = 1

as expected.

2.4 Progressive Enlargement

We now consider a different case of enlargement; more precisely, we assume that τ is a finite random time, i.e., a finite non-negative random variable, and we denote

G_t = F^τ_t := ∩_{ε>0} F_{t+ε} ∨ σ(τ ∧ (t + ε)).

We define the right-continuous process H as

H_t = 11_{τ≤t}.

We denote by H = (H_t, t ≥ 0) its natural filtration. With this notation, G = H ∨ F. Note that τ is an H-stopping time, hence a G-stopping time. (In fact, H is the smallest filtration making τ a stopping time.)

For a general random time τ, it is not true that F-martingales are F^τ-semimartingales. Here is an example: due to the separability of the Brownian filtration, there exists a bounded random variable τ such that F_∞ = σ(τ). Hence F^τ_{τ+t} = F_∞ for every t, so that the F^τ-martingales are constant after τ. Consequently, F-martingales are not F^τ-semimartingales.

We shall denote Z^τ_t = P(τ > t|F_t) and, if there is no ambiguity, we shall also use the notation G_t = P(τ > t|F_t). The process P(τ > t|F_t) is a supermartingale; therefore, it admits a Doob-Meyer decomposition.

Lemma 2.4.1 (i) Let τ be a positive random time and

P(τ > t|F_t) = μ^τ_t − A^τ_t

the Doob-Meyer decomposition of the supermartingale G_t = P(τ > t|F_t) (the process A^τ is the F-predictable compensator of H, see Definition ??). Then, for any F-predictable positive process Y,

E(Y_τ) = E(∫_0^∞ Y_u dA^τ_u).

(ii)

E(Y_τ 11_{t<τ≤T}|F_t) = E(∫_t^T Y_u dA^τ_u | F_t) = −E(∫_t^T Y_u dZ^τ_u | F_t).

Proof: For any càdlàg process Y of the form Y_u = y_s 11_{]s,t]}(u) with y_s ∈ bF_s, one has

E(Y_τ) = E(y_s 11_{]s,t]}(τ)) = E(y_s(A^τ_t − A^τ_s)).

The result follows from the MCT. See also Proposition 1.1.23. ¤

Proposition 2.4.2 Any G_t-measurable random variable admits a decomposition as y_t 11_{t<τ} + y_t(τ) 11_{τ≤t}, where y_t is F_t-measurable and, for any u ≤ t, y_t(u) is F_t-measurable.
Any G-predictable process Y admits a decomposition as Y_t = y_t 11_{t≤τ} + y_t(τ) 11_{τ<t}, where y is F-predictable and, for any u ≤ t, y(u) is F-predictable.
Any G-optional process Y admits a decomposition as Y_t = y_t 11_{t<τ} + y_t(τ) 11_{τ≤t}, where y is F-optional and, for any u ≤ t, y(u) is F-optional.

Proof: See Pham [65]. ¤

Proposition 2.4.3 The process μ^τ is a BMO martingale.

Proof: From the Doob-Meyer decomposition, since G is bounded, μ^τ is a square integrable martingale. ¤

2.4.1 General Facts

A Larger Filtration

We also introduce the filtration G^τ defined as

G^τ_t = {A ∈ F_∞ ∨ σ(τ) : ∃ A_t ∈ F_t, A ∩ {τ > t} = A_t ∩ {τ > t}}.

Note that, on {t < τ}, one has G^τ_t = F^τ_t = F_t, and G^τ_t = F_∞ ∨ σ(τ) on {τ < t}.

Lemma 2.4.4 Let ϑ be a G^τ-stopping time; then there exists an F-stopping time ϑ̂ such that ϑ ∧ τ = ϑ̂ ∧ τ.
Let X ∈ F_∞. A càdlàg version of E(X|G^τ_t) is

(1/P(τ > t|F_t)) 11_{t<τ} E(X 11_{t<τ}|F_t) + X 11_{t≥τ}.

If M_t = E(X|G^τ_t), then the process M_{t−} is

(1/G_{t−}) ᵖ(X 11_{]]0,τ]]}) + X 11_{]]τ,∞]]}.


Conditional Expectations

Proposition 2.4.5 For any G-predictable process Y, there exists an F-predictable process y such that Y_t 11_{t≤τ} = y_t 11_{t≤τ}. Under the condition ∀t, P(τ ≤ t|F_t) < 1, the process (y_t, t ≥ 0) is unique.

Proof: We refer to Dellacherie [23] and Dellacherie et al. [19], page 186. The process y may be recovered as the ratio of the F-predictable projections of Y_t 11_{t≤τ} and 11_{t≤τ}. ¤

Lemma 2.4.6 (Key Lemma) Let X ∈ F_T be an integrable r.v. Then, for any t ≤ T,

E(X 11_{T<τ}|G_t) = 11_{t<τ} E(X G_T|F_t)/G_t.

Proof: On the set {t < τ}, any G_t-measurable random variable is equal to an F_t-measurable random variable; therefore

E(X 11_{T<τ}|G_t) = 11_{t<τ} y_t,

where y_t is F_t-measurable. Taking conditional expectations w.r.t. F_t, we get

y_t = E(X 11_{T<τ} 11_{t<τ}|F_t)/P(t < τ|F_t) = E(X G_T|F_t)/G_t.

(It can be proved that P(t < τ|F_t) does not vanish on the set {t < τ}.) ¤
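A toy Monte Carlo check of the key lemma, under an illustrative model that is not from the text: a hazard rate λ(U) revealed at time 0+, so that F_t = σ(U) for t > 0 and G_s = e^{−λ(U)s}. Integrating both sides of the lemma against 11_{t<τ} g(U) for a test function g, the two resulting expectations must agree:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000
U = rng.uniform(size=n)                 # F_t = sigma(U) for t > 0 (toy filtration)
lam = 0.5 + U                           # hazard rate, known from time 0+
tau = rng.exponential(size=n) / lam     # P(tau > s | U) = exp(-lam * s)
t, T = 1.0, 2.0

f = np.cos                              # X = f(U) is F_T-measurable
g = np.sin                              # test function of F_t

lhs = np.mean(f(U) * (tau > T) * g(U))
# key lemma: E(X 1_{T<tau}|G_t) = 1_{t<tau} E(X G_T|F_t)/G_t, and here
# E(X G_T|F_t)/G_t = f(U) exp(-lam (T - t))
rhs = np.mean((tau > t) * g(U) * f(U) * np.exp(-lam * (T - t)))
print(lhs, rhs)                         # agree up to Monte Carlo error
```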

Exercise 2.4.7 Prove that

{τ > t} ⊂ {Z^τ_t > 0} =: A_t (2.4.1)

(where the inclusion is up to a negligible set).
Hint: P(A^c_t ∩ {τ > t}) = 0. C

2.4.2 Before τ

In this section, we assume that F-martingales are continuous and that τ avoids F-stopping times. It is proved in Yor [79] that if X is an F-martingale, then the processes X_{t∧τ} and X_t(1 − H_t) are F^τ-semimartingales. Furthermore, the decompositions of the F-martingales in the filtration F^τ are known up to time τ (Jeulin and Yor [47]).

Proposition 2.4.8 Every F-martingale M stopped at time τ is an F^τ-semimartingale with canonical decomposition

M_{t∧τ} = M̃_t + ∫_0^{t∧τ} d⟨M, μ^τ⟩_s / Z^τ_{s−},

where M̃ is an F^τ-local martingale.

Proof: Let Y_s be an F^τ_s-measurable random variable. There exists an F_s-measurable random variable y_s such that Y_s 11_{s≤τ} = y_s 11_{s≤τ}; hence, if M is an F-martingale, for s < t,

E(Y_s(M_{t∧τ} − M_{s∧τ})) = E(Y_s 11_{s<τ}(M_{t∧τ} − M_{s∧τ})) = E(y_s 11_{s<τ}(M_{t∧τ} − M_{s∧τ}))
= E(y_s(11_{s<τ≤t}(M_τ − M_s) + 11_{t<τ}(M_t − M_s))).

From the definition of Z^τ (see also Definition ??),

E(y_s 11_{s<τ≤t} M_τ) = −E(y_s ∫_s^t M_u dZ^τ_u).


From the integration by parts formula,

Z^τ_t M_t − Z^τ_s M_s − ∫_s^t M_u dZ^τ_u = ∫_s^t Z^τ_u dM_u + ⟨M, Z^τ⟩_t − ⟨M, Z^τ⟩_s = ∫_s^t Z^τ_u dM_u + ⟨M, μ^τ⟩_t − ⟨M, μ^τ⟩_s.

We have also

E(y_s 11_{s<τ≤t} M_s) = E(y_s M_s (Z^τ_s − Z^τ_t)),
E(y_s 11_{t<τ}(M_t − M_s)) = E(y_s Z^τ_t (M_t − M_s)),

hence, from the martingale property of M,

E(Y_s(M_{t∧τ} − M_{s∧τ})) = E(y_s(⟨M, μ^τ⟩_t − ⟨M, μ^τ⟩_s))
= E(y_s ∫_s^t (d⟨M, μ^τ⟩_u / Z^τ_u) Z^τ_u) = E(y_s ∫_s^t (d⟨M, μ^τ⟩_u / Z^τ_u) E(11_{u<τ}|F_u))
= E(y_s ∫_s^t (d⟨M, μ^τ⟩_u / Z^τ_u) 11_{u<τ}) = E(y_s ∫_{s∧τ}^{t∧τ} d⟨M, μ^τ⟩_u / Z^τ_u).

The result follows. ¤
See Jeulin for the general case.

2.4.3 Basic Results

Proposition 2.4.9 The process

M_t = H_t − ∫_0^{t∧τ} dA^τ_u / G_{u−}

is a G-martingale.
For any bounded G-predictable process Y, the process

Y_τ 11_{τ≤t} − ∫_0^{t∧τ} (Y_s/Z^τ_{s−}) dA^τ_s

is a G-martingale.
The process L_t = (1 − H_t)/G_t is a G-martingale.

Proof: We give the proof in the case where G is continuous. Let s < t. We proceed in two steps, using the Doob-Meyer decomposition of G as G_t = μ^τ_t − A^τ_t.

First step: we prove

E(H_t − H_s|G_s) = 11_{s<τ} (1/G_s) E(A^τ_t − A^τ_s|F_s).

Indeed,

E(H_t − H_s|G_s) = P(s < τ ≤ t|G_s) = 11_{s<τ} (1/G_s) E(G_s − G_t|F_s) = 11_{s<τ} (1/G_s) E(A^τ_t − A^τ_s|F_s).

In a second step, we prove that

E(∫_{s∧τ}^{t∧τ} dA^τ_u/G_u | G_s) = 11_{s<τ} (1/G_s) E(A^τ_t − A^τ_s|F_s).

One has

E(∫_{s∧τ}^{t∧τ} dA^τ_u/G_u | G_s) = 11_{s<τ} E(∫_s^{t∧τ} dA^τ_u/G_u | G_s) = 11_{s<τ} (1/G_s) E(11_{s<τ} ∫_s^{t∧τ} dA^τ_u/G_u | F_s).

Setting K_t = ∫_s^t dA^τ_u/G_u,

E(11_{s<τ} ∫_s^{t∧τ} dA^τ_u/G_u | F_s) = E(11_{s<τ} K_{t∧τ}|F_s) = −E(∫_s^∞ K_{t∧u} dG_u|F_s)
= E(−∫_s^t K_u dG_u + K_t G_t|F_s) = E(∫_s^t G_u dK_u|F_s) = E(∫_s^t dA^τ_u|F_s),

where we have used the integration by parts formula to obtain

−∫_s^t K_u dG_u + K_t G_t = K_s G_t + ∫_s^t G_u dK_u = ∫_s^t G_u dK_u.

Note that, if G_t = N_t D_t = N_t e^{−Λ_t} is the multiplicative decomposition of G, then H_t − Λ_{t∧τ} is a G-martingale. ¤

Lemma 2.4.10 The process λ satisfies

λ_t = lim_{h→0} (1/h) P(t < τ < t + h|F_t)/P(t < τ|F_t).

Proof: The martingale property of M implies that

E(11_{t<τ<t+h}|G_t) − ∫_t^{t+h} E((1 − H_s)λ_s|G_t) ds = 0.

It follows, by projection on F_t, that

P(t < τ < t + h|F_t) = ∫_t^{t+h} λ_s P(s < τ|F_t) ds.

¤
The converse is known as Aven's lemma [6].

Lemma 2.4.11 Let (Ω, G_t, P) be a filtered probability space and N be a counting process. Assume that E(N_t) < ∞ for any t. Let (h_n, n ≥ 1) be a sequence of real numbers converging to 0, and

Y^{(n)}_t = (1/h_n) E(N_{t+h_n} − N_t|G_t).

Assume that there exist non-negative G-adapted processes λ_t and y_t such that

(i) for any t, lim Y^{(n)}_t = λ_t;

(ii) for any t, there exists for almost all ω an n_0 = n_0(t, ω) such that

|Y^{(n)}_s − λ_s(ω)| ≤ y_s(ω), s ≤ t, n ≥ n_0(t, ω);

(iii) ∫_0^t y_s ds < ∞, ∀t.

Then, N_t − ∫_0^t λ_s ds is a G-martingale.

Definition 2.4.12 In the case where dA^τ_u = a_u du, the process λ_t = a_t/G_{t−} is called the F-intensity of τ, the process λ^G_t = 11_{t<τ} λ_t is the G-intensity, and the process

H_t − ∫_0^{t∧τ} λ_s ds = H_t − ∫_0^t (1 − H_s) λ_s ds = H_t − ∫_0^t λ^G_s ds

is a G-martingale.


The intensity process is the basic tool to model default risk.

Exercise 2.4.13 Prove that if X is a (square-integrable) F-martingale, then XL is a G-martingale, where L is defined in Proposition 2.4.9.
Hint: See [BJR] for proofs. C

Exercise 2.4.14 Let τ be a random time with density f, and G(s) = P(τ > s) = ∫_s^∞ f(u) du. Prove that the process M_t = H_t − ∫_0^{t∧τ} (f(s)/G(s)) ds is an H-martingale. C

Restricting the information

Suppose from now on that F̃_t ⊂ F_t, and define the σ-algebra G̃_t = F̃_t ∨ H_t and the associated hazard process Γ̃_t = − ln(G̃_t) with

G̃_t = P(t < τ|F̃_t) = E(G_t|F̃_t).

Let F_t = Z_t + A_t be the F-Doob-Meyer decomposition of the F-submartingale F (where F_t = P(τ ≤ t|F_t)), and assume that A is absolutely continuous with respect to Lebesgue measure: A_t = ∫_0^t a_s ds. The process Ã_t = E(A_t|F̃_t) is an F̃-submartingale and its F̃-Doob-Meyer decomposition is

Ã_t = z_t + α_t.

Hence, setting Z̃_t = E(Z_t|F̃_t), the submartingale

F̃_t = P(t ≥ τ|F̃_t) = E(F_t|F̃_t)

admits an F̃-Doob-Meyer decomposition as

F̃_t = Z̃_t + z_t + α_t,

where Z̃_t + z_t is the martingale part. From Exercise 1.2.5, α_t = ∫_0^t E(a_s|F̃_s) ds. It follows that

H_t − ∫_0^{t∧τ} E(a_s|F̃_s)/(1 − F̃_s) ds

is a G̃-martingale and that the F̃-intensity of τ is equal to E(a_s|F̃_s)/G̃_s, and not, "as one could think", E(a_s/G_s|F̃_s). Note that even if the (H) hypothesis holds between F̃ and F, this proof cannot be simplified, since F is increasing but not F̃-predictable (there is no reason for F̃ to admit an intensity).

This result can be directly proved thanks to the following result of Brémaud (a consequence of Exercise 1.2.5): if H_t − ∫_0^t λ_s ds is a G-martingale, then H_t − ∫_0^t E(λ_s|G̃_s) ds is a G̃-martingale. Since

E(11_{s≤τ} λ_s|G̃_s) = (11_{s≤τ}/G̃_s) E(11_{s≤τ} λ_s|F̃_s) = (11_{s≤τ}/G̃_s) E(G_s λ_s|F̃_s) = (11_{s≤τ}/G̃_s) E(a_s|F̃_s),

it follows that H_t − ∫_0^{t∧τ} E(a_s|F̃_s)/G̃_s ds is a G̃-martingale, and we are done.


Survival Process

Lemma 2.4.15 The supermartingale G admits a multiplicative decomposition G_t = N_t D_t, where D is a decreasing process and N a local martingale.

Proof: In the proof, we assume that G is continuous. Assuming that the supermartingale G = μ − A admits a multiplicative decomposition G_t = D_t N_t, where N is a local martingale and D a decreasing process, we obtain that dG_t = N_t dD_t + D_t dN_t is a Doob-Meyer decomposition of G. Hence,

dN_t = (1/D_t) dμ_t,  dD_t = −D_t (1/G_t) dA_t,

therefore

D_t = exp(−∫_0^t (1/G_s) dA_s).

Conversely, if G admits the Doob-Meyer decomposition μ − A, then G_t = N_t D_t with D_t = e^{−Λ_t}, Λ being the intensity process Λ_t = ∫_0^t (1/G_s) dA_s. ¤
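In the simplest situation where F is trivial, G is deterministic, N ≡ 1, and the lemma reduces to the identity exp(−∫_0^t dA_s/G_s) = G_t. A short numerical check of this identity (the survival function below is an illustrative choice):

```python
import numpy as np

# Deterministic survival function G(t) = exp(-t^2); then A = 1 - G, N = 1,
# and the lemma gives D_t = exp(-int_0^t dA_s/G_s) = G_t.
t_grid = np.linspace(0.0, 2.0, 4001)
G = np.exp(-t_grid ** 2)
dA = -np.diff(G)                                        # increments of A = 1 - G
Lam = np.concatenate(([0.0], np.cumsum(dA / G[:-1])))   # Lambda_t = int_0^t dA/G
D = np.exp(-Lam)
print(np.max(np.abs(D - G)))    # small: D reproduces G up to discretization error
```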

Exercise 2.4.16 We consider, as in the paper of Biagini, Lee and Rheinländer, a mortality bond, a financial instrument with payoff Y = ∫_0^{τ∧T} G_s ds. We assume that G is continuous.

1. Compute, in the case r = 0, the price Y_t of the mortality bond (it will be convenient to introduce m_t = E(∫_0^T G²_s ds|F_t)). Is the process m a (P, F)-martingale? a (P, G)-martingale?

2. Determine α, β and γ so that

dY_t = α_t dM_t + β_t (dm_t − (1/G_t) d⟨m, Z⟩_t) + γ_t (dZ_t − (1/G_t) d⟨Z⟩_t).

3. Determine the price D(t, T) of a defaultable zero-coupon bond with maturity T, i.e. a financial asset with terminal payoff 11_{T<τ}. Give the dynamics of this price.

4. We now assume that F is a Brownian filtration, and that

dS_t = S_t (b dt + σ dW_t).

Explain how one can hedge the mortality bond.

Hint: From the key lemma,

E(∫_0^{τ∧T} G_s ds|G_t) = 11_{t<τ} (1/G_t) E(11_{t<τ} ∫_0^{τ∧T} G_s ds|F_t) + 11_{τ≤t} ∫_0^τ G_s ds,

and from the second part of the key lemma,

I := E(11_{t<τ} ∫_0^{τ∧T} G_s ds|F_t) = E(∫_t^∞ dF_u ∫_0^{u∧T} G_s ds|F_t)
= E(∫_t^T dF_u ∫_0^u G_s ds|F_t) + E(∫_T^∞ dF_u ∫_0^T G_s ds|F_t)
= E(∫_t^T dF_u ∫_0^u G_s ds|F_t) + E(G_T ∫_0^T G_s ds|F_t).

From the integration by parts formula, setting ζ_t = ∫_0^t G_s ds,

G_T ζ_T = G_t ζ_t + ∫_t^T ζ_s dG_s + ∫_t^T G_s dζ_s,


hence

I = G_t ζ_t + E(∫_t^T G²_s ds|F_t) = G_t ζ_t − ∫_0^t G²_s ds + m_t

and

I_t := E(∫_0^{τ∧T} G_s ds|G_t) = 11_{t<τ} (∫_0^t G_s ds + (1/G_t)(m_t − ∫_0^t G²_s ds)) + 11_{τ≤t} ∫_0^τ G_u du
= i_t 11_{t<τ} + 11_{τ≤t} ∫_0^τ G_u du = i_t 11_{t<τ} + ∫_0^t dH_s ∫_0^s G_u du.

Differentiating this expression leads to

dI_t = (∫_0^t G_s ds − i_t) dH_t + (1 − H_t)(1/G²_t)(m_t − ∫_0^t G²_s ds)(−dZ_t + dA_t + (1/G_t) d⟨Z⟩_t)
+ (1 − H_t)((1/G_t) dm_t − (1/G²_t) d⟨m, Z⟩_t).

After some simplifications, and using the fact that dM_t = dH_t − (1 − H_t) dA_t/G_t,

dI_t = −(1/G_t)(m_t − ∫_0^t G²_s ds) dM_t + (1 − H_t)(1/G_t)(dm_t − (1/G_t) d⟨m, Z⟩_t) − (1 − H_t)(1/G²_t)(m_t − ∫_0^t G²_s ds)(dZ_t − (1/G_t) d⟨Z⟩_t).

C

2.4.4 Immersion Setting

Characterization of Immersion

Let us first investigate the case where the (H) hypothesis holds.

Lemma 2.4.17 In the progressive enlargement setting, (H) holds between F and G if and only if one of the following equivalent conditions holds:

(i) ∀(t, s), s ≤ t, P(τ ≤ s|F_∞) = P(τ ≤ s|F_t),
(ii) ∀t, P(τ ≤ t|F_∞) = P(τ ≤ t|F_t). (2.4.2)

Proof: If (ii) holds, then (i) holds too. If (i) holds, F_∞ and σ(t ∧ τ) are conditionally independent given F_t. The property follows. This result can also be found in Dellacherie and Meyer [21]. ¤

Note that, if (H) holds, then (ii) implies that the process P(τ ≤ t|F_t) is increasing.

Exercise 2.4.18 Prove that if H and F are immersed in G, and if any F-martingale is continuous, then τ and F_∞ are independent. C

Exercise 2.4.19 Assume that the immersion property holds and let (y_t(u)) be a family of F-martingales. Prove that, for t > s,

11_{τ≤s} E(y_t(τ)|G_s) = 11_{τ≤s} y_s(τ).

C


Norros’s lemma

Proposition 2.4.20 Assume that G = (G_t)_{t≥0} is continuous and G_∞ = 0, and let Λ be the increasing predictable process such that M_t = H_t − Λ_{t∧τ} is a martingale. Then the following conclusions hold:

(i) the variable Λ_τ has standard exponential law (with parameter 1);

(ii) if F is immersed in G, then the variable Λ_τ is independent of F_∞.

Proof: (i) Consider the process X = (X_t, t ≥ 0) defined by

X_t = (1 + z)^{H_t} e^{−zΛ_{t∧τ}}

for all t ≥ 0 and any z > 0 fixed. Then, applying the integration by parts formula, we get

dX_t = z e^{−zΛ_{t∧τ}} dM_t. (2.4.3)

Hence, by virtue of the assumption that z > 0, it follows from (2.4.3) that X is also a G-martingale, so that

E[(1 + z)^{H_t} e^{−zΛ_{t∧τ}} | G_s] = (1 + z)^{H_s} e^{−zΛ_{s∧τ}} (2.4.4)

holds for all 0 ≤ s ≤ t. In view of the uniform integrability of X, implied by z > 0, we may let t go to infinity in (2.4.4). Setting s equal to zero in (2.4.4), we therefore obtain

E[(1 + z) e^{−zΛ_τ}] = 1.

This means that the Laplace transform of Λ_τ is that of a standard exponential variable, and thus proves the claim.

(ii) Recall that, under the immersion property, G is decreasing and dΛ_t = −dG_t/G_t. Applying the change-of-variable formula, we get

e^{−zΛ_{t∧τ}} = 1 + z ∫_0^t e^{−zΛ_s} (11_{τ>s}/G_s) dG_s (2.4.5)

for all t ≥ 0 and any z > 0 fixed. Then, taking conditional expectations given F_t on both sides of (2.4.5) and applying Fubini's theorem, we obtain from the immersion of F in G that

E[e^{−zΛ_{t∧τ}} | F_t] = 1 + z ∫_0^t E[e^{−zΛ_s} 11_{τ>s}/G_s | F_t] dG_s (2.4.6)
= 1 + z ∫_0^t e^{−zΛ_s} (P(τ > s | F_t)/G_s) dG_s
= 1 + z ∫_0^t e^{−zΛ_s} dG_s

for all t ≥ 0. Hence, using the fact that Λ_t = − ln G_t, we see from (2.4.6) that

E[e^{−zΛ_{t∧τ}} | F_t] = 1 + (z/(1 + z))((G_t)^{1+z} − (G_0)^{1+z})

holds for all t ≥ 0. Letting t go to infinity and using the assumption G_0 = 1, as well as the fact that G_∞ = 0 (P-a.s.), we therefore obtain

E[e^{−zΛ_τ} | F_∞] = 1/(1 + z),

which signifies the desired assertion. ¤
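A Monte Carlo illustration of both assertions in a toy Cox setting (the uniform factor U and the affine hazard below are illustrative assumptions, not from the text): Λ_τ should be standard exponential and should carry no information about F_∞ = σ(U).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200000
U = rng.uniform(size=n)            # F_infty = sigma(U) in this toy model
lam = 1.0 + U                      # constant-in-time hazard, so Lambda_t = lam*t
Theta = rng.exponential(size=n)    # Exp(1) threshold, independent of U
tau = Theta / lam                  # inf{t : lam*t >= Theta}
Lam_tau = lam * tau                # = Theta by construction

print(Lam_tau.mean(), Lam_tau.var())   # ~1 and ~1: standard exponential
print(np.corrcoef(Lam_tau, U)[0, 1])   # ~0: no dependence on F_infty
```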


G-martingales versus F-martingales

Proposition 2.4.21 Assume that F is immersed in G under P. Let Z^G be a G-adapted, P-integrable process given by the formula

Z^G_t = Z_t 11_{τ>t} + Z_t(τ) 11_{τ≤t}, ∀t ∈ R_+, (2.4.7)

where:
(i) the projection of Z^G onto F, which is defined by

Z^F_t := E_P(Z^G_t|F_t) = Z_t P(τ > t|F_t) + E_P(Z_t(τ) 11_{τ≤t}|F_t),

is a (P, F)-martingale;
(ii) for any fixed u ∈ R_+, the process (Z_t(u), t ∈ [u, ∞)) is a (P, F)-martingale.
Then the process Z^G is a (P, G)-martingale.

Proof: Let us take s < t. Then

E_P(Z^G_t|G_s) = E_P(Z_t 11_{τ>t}|G_s) + E_P(Z_t(τ) 11_{s<τ≤t}|G_s) + E_P(Z_t(τ) 11_{τ≤s}|G_s)
= 11_{s<τ} (1/G_s)(E_P(Z_t G_t|F_s) + E_P(Z_t(τ) 11_{s<τ≤t}|F_s)) + E_P(Z_t(τ) 11_{τ≤s}|G_s).

On the one hand,

E_P(Z_t(τ) 11_{τ≤s}|G_s) = 11_{τ≤s} Z_s(τ). (2.4.8)

Indeed, it suffices to prove the previous equality for Z_t(u) = h(u)X_t, where X is an F-martingale. In that case,

E_P(X_t h(τ) 11_{τ≤s}|G_s) = 11_{τ≤s} h(τ) E_P(X_t|G_s) = 11_{τ≤s} h(τ) E_P(X_t|F_s) = 11_{τ≤s} h(τ) X_s = 11_{τ≤s} Z_s(τ).

On the other hand, from (i),

E_P(Z_t G_t + Z_t(τ) 11_{τ≤t}|F_s) = Z_s G_s + E_P(Z_s(τ) 11_{τ≤s}|F_s).

It follows that

E_P(Z^G_t|G_s) = 11_{s<τ} (1/G_s)(Z_s G_s + E_P((Z_s(τ) − Z_t(τ)) 11_{τ≤s}|F_s)) + 11_{τ≤s} Z_s(τ).

It remains to check that

E_P((Z_t(τ) − Z_s(τ)) 11_{τ≤s}|F_s) = 0,

which follows from

E_P(Z_t(τ) 11_{τ≤s}|F_s) = E_P(E_P(Z_t(τ) 11_{τ≤s}|G_s)|F_s) = E_P(Z_s(τ) 11_{τ≤s}|F_s),

where we have used (2.4.8). ¤

Exercise 2.4.22 (A different proof of Norros' result) Suppose that

P(τ ≤ t|F_∞) = 1 − e^{−Γ_t},

where Γ is an arbitrary continuous strictly increasing F-adapted process. Prove, using the inverse of Γ, that the random variable Γ_τ is independent of F_∞, with exponential law of parameter 1.
Hint: Set Θ := Γ_τ. Then {t < Θ} = {t < Γ_τ} = {C_t < τ}, where C is the right inverse of Γ, so that Γ_{C_t} = t. Therefore,

P(Θ > u|F_∞) = e^{−Γ_{C_u}} = e^{−u}.

We have thus established the required properties, namely the probability law of Θ and its independence of the σ-field F_∞. Furthermore, τ = inf{t : Γ_t > Γ_τ} = inf{t : Γ_t > Θ}. C


2.4.5 Cox processes

We present here a particular case where the immersion property holds. We deal here with the basic and important model for credit risk.

Let (Ω, G, P) be a probability space endowed with a filtration F. A nonnegative F-adapted process λ is given. We assume that there exists, on the space (Ω, G, P), a random variable Θ, independent of F_∞, with an exponential law: P(Θ ≥ t) = e^{−t}. We define the default time τ as the first time when the increasing process Λ_t = ∫_0^t λ_s ds is above the random level Θ, i.e.,

τ = inf{t ≥ 0 : Λ_t ≥ Θ}.

In particular, using the increasing property of Λ, one gets {τ > s} = {Λ_s < Θ}. We assume that Λ_t < ∞ for every t and Λ_∞ = ∞; hence τ is a real-valued r.v.

Comment 2.4.23 (i) In order to construct the r.v. Θ, one needs to enlarge the probability space as follows. Let (Ω̂, F̂, P̂) be an auxiliary probability space with a r.v. Θ̂ with exponential law. We introduce the product probability space (Ω*, G*, Q*) = (Ω × Ω̂, F_∞ ⊗ F̂, P ⊗ P̂).
(ii) Another construction for the default time τ is to choose τ = inf{t ≥ 0 : N_{Λ_t} = 1}, where Λ_t = ∫_0^t λ_s ds and N is a Poisson process with intensity 1, independent of the filtration F. This second method is in fact equivalent to the first. Cox processes are used in a great number of studies (see, e.g., [57]).
(iii) We shall see that, in many cases, one can restrict attention to Cox process models.
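A minimal simulation of the Cox construction (the two-state, time-constant intensity is an illustrative assumption, so that Λ_s = λs), checking empirically that P(τ > s|F_t) = e^{−Λ_s}, as established in the next lemma:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400000
# F-measurable intensity: lambda = 0.5 or 2.0 depending on a toy factor V
V = rng.integers(0, 2, size=n)
lam = np.where(V == 0, 0.5, 2.0)
Theta = rng.exponential(size=n)            # independent Exp(1) threshold
tau = Theta / lam                          # inf{t : lam*t >= Theta}
s = 0.7
errs = []
for v, l in ((0, 0.5), (1, 2.0)):
    emp = np.mean(tau[V == v] > s)         # estimate of P(tau > s | F)
    errs.append(abs(emp - np.exp(-l * s))) # compare with e^{-Lambda_s}
print(errs)                                # both near 0
```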

Conditional Expectations

Lemma 2.4.24 The conditional distribution function of τ given the σ-field F_t is, for t ≥ s,

P(τ > s|F_t) = exp(−Λ_s).

Proof: The proof follows from the equality {τ > s} = {Λ_s < Θ}. From the independence assumption and the F_t-measurability of Λ_s for s ≤ t, we obtain

P(τ > s|F_t) = P(Λ_s < Θ|F_t) = exp(−Λ_s).

In particular, we have

P(τ ≤ t|F_t) = P(τ ≤ t|F_∞), (2.4.9)

and, for t ≥ s, P(τ > s|F_t) = P(τ > s|F_s). Let us notice that the process F_t = P(τ ≤ t|F_t) is here an increasing process, as the right-hand side of (2.4.9) is. ¤

Corollary 2.4.25 In the Cox process modeling, the immersion property holds for F and G.

Remark 2.4.26 If the process λ is not non-negative, we get

{τ > s} = {sup_{u≤s} Λ_u < Θ},

hence, for s < t,

P(τ > s|F_t) = exp(−sup_{u≤s} Λ_u).

More generally, some authors define the default time as

τ = inf{t ≥ 0 : X_t ≥ Θ},

where X is a given F-semimartingale. Then, for s ≤ t,

P(τ > s|F_t) = exp(−sup_{u≤s} X_u).


Exercise 2.4.27 Prove that τ is independent of F_∞ if and only if λ is a deterministic function. C

Exercise 2.4.28 Compute P(τ > t|F_t) in the case when Θ is a non-negative random variable, independent of F_∞, with cumulative distribution function F. C

Exercise 2.4.29 Prove that, if P(τ > t|F_t) is continuous and strictly decreasing, then there exists Θ independent of F_∞ such that τ = inf{t : Λ_t > Θ}. C

Exercise 2.4.30 Write the Doob-Meyer and the multiplicative decompositions of G. C

Exercise 2.4.31 Show how one can compute P(τ > t|F_t) when

τ = inf{t : X_t > Θ},

where X is an F-adapted process, not necessarily increasing, and Θ is independent of F_∞. Does the immersion property still hold? Same questions if Θ is not independent of F_∞. C

2.4.6 Successive Enlargements

Proposition 2.4.32 Let τ1 < τ2 a.s., H^i be the filtration generated by the default process H^i_t = 11_{τi≤t}, and G = F ∨ H^1 ∨ H^2. Then, the two following assertions are equivalent:
(i) F is immersed in G;
(ii) F is immersed in F ∨ H^1 and F ∨ H^1 is immersed in G.

Proof: (This result was obtained by Ehlers and Schönbucher [25]; we give here a slightly different proof.) The only fact to check is that if F is immersed in G, then F ∨ H^1 is immersed in G, i.e., that

P(τ2 > t|F_t ∨ H^1_t) = P(τ2 > t|F_∞ ∨ H^1_∞).

This is equivalent to: for any h and any A_∞ ∈ F_∞,

E(A_∞ h(τ1) 11_{τ2>t}) = E(A_∞ h(τ1) P(τ2 > t|F_t ∨ H^1_t)).

We split this equality in two parts. The first equality,

E(A_∞ h(τ1) 11_{τ1>t} 11_{τ2>t}) = E(A_∞ h(τ1) 11_{τ1>t} P(τ2 > t|F_t ∨ H^1_t)),

is obvious since 11_{τ1>t} 11_{τ2>t} = 11_{τ1>t} and 11_{τ1>t} P(τ2 > t|F_t ∨ H^1_t) = 11_{τ1>t}. Since F is immersed in G, one has E(A_∞|G_t) = E(A_∞|F_t), and it follows (WHY?) that E(A_∞|G_t) = E(A_∞|F_t ∨ H^1_t); therefore

E(A_∞ h(τ1) 11_{τ2>t≥τ1}) = E(E(A_∞|G_t) h(τ1) 11_{τ2>t≥τ1})
= E(E(A_∞|F_t ∨ H^1_t) h(τ1) 11_{τ2>t≥τ1})
= E(E(A_∞|F_t ∨ H^1_t) E(h(τ1) 11_{τ2>t≥τ1}|F_t ∨ H^1_t))
= E(A_∞ E(h(τ1) 11_{τ2>t≥τ1}|F_t ∨ H^1_t)). ¤

Lemma 2.4.33 (Norros' Lemma) Let τi, i = 1, ..., n, be n finite-valued random times and G_t = H^1_t ∨ ... ∨ H^n_t. Assume that

(i) P(τi = τj) = 0, ∀i ≠ j;

(ii) there exist continuous processes Λ^i such that M^i_t = H^i_t − Λ^i_{t∧τi} are G-martingales.

Then, the r.v.'s Λ^i_{τi} are independent with standard exponential law.


Proof: For any µi > −1, the processes L^i_t = (1 + µi)^{H^i_t} e^{−µi Λ^i_{t∧τi}}, solutions of

dL^i_t = L^i_{t−} µi dM^i_t,

are uniformly integrable martingales. Moreover, these martingales have no common jumps, and are orthogonal. Hence E(∏_i (1 + µi) e^{−µi Λ^i_{τi}}) = 1, which implies

E(∏_i e^{−µi Λ^i_{τi}}) = ∏_i (1 + µi)^{−1},

hence the independence property. ¤

As an application, let us study the particular case of a Poisson process. If τ1 and τ2 are the two first jump times of a Poisson process with intensity λ, we have

G(t, s) = e^{−λt} for s < t,  G(t, s) = e^{−λs}(1 + λ(s − t)) for s > t,

with partial derivatives

∂1G(t, s) = −λ e^{−λt} for t > s,  ∂1G(t, s) = −λ e^{−λs} for s > t,
∂2G(t, s) = 0 for t > s,  ∂2G(t, s) = −λ² e^{−λs}(s − t) for s > t,

and

h(t, s) = 1 for t > s,  h(t, s) = (s − t)/s for s > t,  ∂1h(t, s) = 0 for t > s,  ∂1h(t, s) = −1/s for s > t,
k(t, s) = 0 for t > s,  k(t, s) = 1 − e^{−λ(s−t)} for s > t,  ∂2k(t, s) = 0 for t > s,  ∂2k(t, s) = λ e^{−λ(s−t)} for s > t.

Then one obtains U1 = τ1 and U2 = τ2 − τ1.
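Both branches of G(t, s) can be checked by direct simulation, using the fact that the inter-arrival times of a Poisson process are independent Exp(λ) variables (which is what Norros' lemma asserts here, the compensators being λτ1 and λ(τ2 − τ1)); a sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n = 1.3, 400000
E1 = rng.exponential(1.0 / lam, n)   # first inter-arrival time = tau_1
E2 = rng.exponential(1.0 / lam, n)   # second inter-arrival time = tau_2 - tau_1
tau1, tau2 = E1, E1 + E2

checks = []
t, s = 1.0, 0.4                      # case s < t: G(t,s) = e^{-lam t}
checks.append(abs(np.mean((tau1 > t) & (tau2 > s)) - np.exp(-lam * t)))
t, s = 0.4, 1.0                      # case s > t: G(t,s) = e^{-lam s}(1 + lam(s-t))
checks.append(abs(np.mean((tau1 > t) & (tau2 > s))
                  - np.exp(-lam * s) * (1.0 + lam * (s - t))))
print(checks)                        # both near 0
```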

Martingales associated with a default time in the filtration generated by two default processes

We present the computation of the martingales associated with the times τi in different filtrations. In particular, we shall obtain the computation of the intensities in various filtrations.

We have established that, if F is a given reference filtration and if G_t = P(τ > t|F_t) is the Azéma supermartingale admitting a Doob-Meyer decomposition G_t = Z_t − ∫_0^t a_s ds, then the process

H_t − ∫_0^{t∧τ} (a_s/G_s) ds

is a G-martingale, where G = F ∨ H and H_t = σ(t ∧ τ). We now consider the case where two default times τi, i = 1, 2, are given. We assume that

G(t, s) := P(τ1 > t, τ2 > s)

is twice differentiable and does not vanish. We denote by H^i_t = σ(H^i_s, s ≤ t) the filtration generated by the default process H^i, and by H = H^1 ∨ H^2 the filtration generated by the two default processes. Our aim is to give the compensator of H^1 in the filtration H^1 and in the filtration H.

We apply the general result to the case F = H^2 and H = H^1. Let

G^{1|2}_t = P(τ1 > t|H^2_t)


be the Azéma supermartingale of $\tau_1$ in the filtration $\mathbb H^2$, with Doob-Meyer decomposition $G^{1|2}_t = Z^{1|2}_t - \int_0^t a^{1|2}_s\,ds$, where $Z^{1|2}$ is an $\mathbb H^2$-martingale. Then, the process
\[
H^1_t - \int_0^{t\wedge\tau_1}\frac{a^{1|2}_s}{G^{1|2}_s}\,ds
\]
is a $\mathbb G$-martingale. The process $A^{1|2}_t = \int_0^{t\wedge\tau_1}\frac{a^{1|2}_s}{G^{1|2}_s}\,ds$ is the $\mathbb G$-adapted compensator of $H^1$. The same methodology can be applied for the compensator of $H^2$.

Proposition 2.4.34 The process $M^{1,\mathbb G}$ defined as
\[
M^{1,\mathbb G}_t := H^1_t + \int_0^{t\wedge\tau_1\wedge\tau_2}\frac{\partial_1 G(s,s)}{G(s,s)}\,ds + \int_{t\wedge\tau_1\wedge\tau_2}^{t\wedge\tau_1}\frac{f(s,\tau_2)}{\partial_2 G(s,\tau_2)}\,ds
\]
is a $\mathbb G$-martingale.

Proof: Some easy computation enables us to write
\[
G^{1|2}_t = P(\tau_1>t|\mathcal H^2_t) = H^2_t\,P(\tau_1>t|\tau_2) + (1-H^2_t)\,\frac{P(\tau_1>t,\tau_2>t)}{P(\tau_2>t)}
= H^2_t\,h(t,\tau_2) + (1-H^2_t)\,\psi(t) \tag{2.4.10}
\]
where
\[
h(t,v) = \frac{\partial_2 G(t,v)}{\partial_2 G(0,v)};\qquad \psi(t) = G(t,t)/G(0,t).
\]
The function $t\mapsto\psi(t)$ and the process $t\mapsto h(t,\tau_2)$ are continuous and of finite variation, hence the integration by parts rule leads to
\[
\begin{aligned}
dG^{1|2}_t &= h(t,\tau_2)\,dH^2_t + H^2_t\,\partial_1 h(t,\tau_2)\,dt + (1-H^2_t)\psi'(t)\,dt - \psi(t)\,dH^2_t\\
&= \big(h(t,\tau_2)-\psi(t)\big)\,dH^2_t + \big(H^2_t\,\partial_1 h(t,\tau_2) + (1-H^2_t)\psi'(t)\big)\,dt\\
&= \Big(\frac{\partial_2 G(t,\tau_2)}{\partial_2 G(0,\tau_2)} - \frac{G(t,t)}{G(0,t)}\Big)dH^2_t + \big(H^2_t\,\partial_1 h(t,\tau_2) + (1-H^2_t)\psi'(t)\big)\,dt.
\end{aligned}
\]
From the computation of the Stieltjes integral, we can write
\[
\int_0^T \Big(\frac{G(t,t)}{G(0,t)} - \frac{\partial_2 G(t,\tau_2)}{\partial_2 G(0,\tau_2)}\Big)dH^2_t
= \Big(\frac{G(\tau_2,\tau_2)}{G(0,\tau_2)} - \frac{\partial_2 G(\tau_2,\tau_2)}{\partial_2 G(0,\tau_2)}\Big)\mathbb 1_{\tau_2\le T}
= \int_0^T \Big(\frac{G(t,t)}{G(0,t)} - \frac{\partial_2 G(t,t)}{\partial_2 G(0,t)}\Big)dH^2_t
\]
and substitute it in the expression of $dG^{1|2}$:
\[
dG^{1|2}_t = \Big(\frac{\partial_2 G(t,t)}{\partial_2 G(0,t)} - \frac{G(t,t)}{G(0,t)}\Big)dH^2_t + \big(H^2_t\,\partial_1 h(t,\tau_2) + (1-H^2_t)\psi'(t)\big)\,dt.
\]
We now use that
\[
dH^2_t = dM^2_t - (1-H^2_t)\,\frac{\partial_2 G(0,t)}{G(0,t)}\,dt,
\]
where $M^2$ is an $\mathbb H^2$-martingale, and we get the $\mathbb H^2$-Doob-Meyer decomposition of $G^{1|2}$:
\[
dG^{1|2}_t = \Big(\frac{\partial_2 G(t,t)}{\partial_2 G(0,t)} - \frac{G(t,t)}{G(0,t)}\Big)dM^2_t + (1-H^2_t)\Big(\frac{G(t,t)}{G(0,t)} - \frac{\partial_2 G(t,t)}{\partial_2 G(0,t)}\Big)\frac{\partial_2 G(0,t)}{G(0,t)}\,dt + \big(H^2_t\,\partial_1 h(t,\tau_2) + (1-H^2_t)\psi'(t)\big)\,dt
\]


and from
\[
\psi'(t) = \Big(\frac{\partial_2 G(t,t)}{\partial_2 G(0,t)} - \frac{G(t,t)}{G(0,t)}\Big)\frac{\partial_2 G(0,t)}{G(0,t)} + \frac{\partial_1 G(t,t)}{G(0,t)}
\]
we conclude
\[
dG^{1|2}_t = \Big(\frac{\partial_2 G(t,t)}{\partial_2 G(0,t)} - \frac{G(t,t)}{G(0,t)}\Big)dM^2_t + \Big(H^2_t\,\partial_1 h(t,\tau_2) + (1-H^2_t)\,\frac{\partial_1 G(t,t)}{G(0,t)}\Big)dt.
\]

From (2.4.10), the process $G^{1|2}$ has a single jump, of size $\frac{\partial_2 G(t,t)}{\partial_2 G(0,t)} - \frac{G(t,t)}{G(0,t)}$. From (2.4.10),
\[
G^{1|2}_t = \frac{G(t,t)}{G(0,t)} = \psi(t)
\]
on the set $\{\tau_2>t\}$, and its bounded variation part is $\psi'(t)$. The hazard process has a non-null martingale part, except if $\frac{G(t,t)}{G(0,t)} = \frac{\partial_2 G(t,t)}{\partial_2 G(0,t)}$ (this is the case if the defaults are independent). Hence, the (H) hypothesis is not satisfied in a general setting between $\mathbb H^i$ and $\mathbb G$. It follows that the intensity of $\tau_1$ in the $\mathbb G$-filtration is $\frac{\partial_1 G(s,s)}{G(s,s)}$ on the set $\{t<\tau_1\wedge\tau_2\}$ and $\frac{\partial_1 h(s,\tau_2)}{h(s,\tau_2)}$ on the set $\{\tau_2<t<\tau_1\}$. It can be proved that the intensity of $\tau_1\wedge\tau_2$ is
\[
\frac{\partial_1 G(s,s)}{G(s,s)} + \frac{\partial_2 G(s,s)}{G(s,s)} = \frac{g(s)}{G(s,s)},
\]
where $g(s)=\partial_1 G(s,s)+\partial_2 G(s,s)$ is the derivative of $s\mapsto G(s,s)$.

Exercise 2.4.35 Prove that $\mathbb H^i$, $i=1,2$ is immersed in $\mathbb H^1\vee\mathbb H^2$ if and only if $\tau_1$ and $\tau_2$ are independent. C

Kusuoka counter-example

Kusuoka [56] presents a counter-example to the stability of the (H) hypothesis under a change of probability, based on two independent random times $\tau_1$ and $\tau_2$ given on some probability space $(\Omega,\mathcal G,P)$ and admitting a density w.r.t. Lebesgue's measure. The process $M^1_t = H^1_t - \int_0^{t\wedge\tau_1}\lambda_1(u)\,du$, where $H^1_t=\mathbb 1_{t\ge\tau_1}$ and $\lambda_i$ is the deterministic intensity function of $\tau_i$ under $P$, is a $(P,\mathbb H^1)$- and a $(P,\mathbb G)$-martingale. (Recall that $\lambda_i(s)\,ds = \frac{P(\tau_i\in ds)}{P(\tau_i>s)}$.) Let us set $dQ|_{\mathcal G_t} = L_t\,dP|_{\mathcal G_t}$, where
\[
L_t = 1 + \int_0^t L_{u-}\kappa_u\,dM^1_u
\]
for some $\mathbb G$-predictable process $\kappa$ satisfying $\kappa>-1$ (WHY?), where $\mathbb G=\mathbb H^1\vee\mathbb H^2$. We set $\mathbb F=\mathbb H^1$ and $\mathbb H=\mathbb H^2$. Manifestly, the immersion hypothesis holds under $P$. Let
\[
\widetilde M^1_t = H^1_t - \int_0^{t\wedge\tau_1}\lambda^Q_1(u)\,du,\qquad
\widehat M^1_t = H^1_t - \int_0^{t\wedge\tau_1}\lambda_1(u)(1+\kappa_u)\,du,
\]
where $\lambda^Q_1(u)\,du = Q(\tau_1\in du)/Q(\tau_1>u)$ is deterministic. It is easy to see that, under $Q$, the process $\widetilde M^1$ is a $(Q,\mathbb H^1)$-martingale and $\widehat M^1$ is a $(Q,\mathbb G)$-martingale. The process $\widetilde M^1$ is not a $(Q,\mathbb G)$-martingale (WHY?), hence immersion does not hold under $Q$.

Exercise 2.4.36 Compute $Q(\tau_1>t\,|\,\mathcal H^2_t)$. C


2.4.7 Ordered times

Assume that $\tau_i$, $i=1,\dots,n$ are $n$ random times admitting a density process with respect to a reference filtration $\mathbb F$. Let $\sigma_i$, $i=1,\dots,n$ be the sequence of ordered random times and $\mathbb G^{(k)} = \mathbb F\vee\mathbb H^{(1)}\vee\cdots\vee\mathbb H^{(k)}$, where $\mathbb H^{(i)} = (\mathcal H^{(i)}_t = \sigma(t\wedge\sigma_i),\,t\ge0)$. The $\mathbb G^{(k)}$-intensity of $\sigma_k$ is the positive $\mathbb G^{(k)}$-adapted process $\lambda^k$ such that $(M^{(k)}_t := \mathbb 1_{\sigma_k\le t} - \int_0^t\lambda^k_s\,ds,\,t\ge0)$ is a $\mathbb G^{(k)}$-martingale. The $\mathbb G^{(k)}$-martingale $M^{(k)}$ is stopped at $\sigma_k$ and the $\mathbb G^{(k)}$-intensity of $\sigma_k$ satisfies $\lambda^k_t=0$ on $\{t\ge\sigma_k\}$. The following lemma shows that the $\mathbb G^{(k)}$-intensity of $\sigma_k$ coincides with its $\mathbb G^{(n)}$-intensity.

Lemma 2.4.37 For any $k$, a $\mathbb G^{(k)}$-martingale stopped at $\sigma_k$ is a $\mathbb G^{(n)}$-martingale.

Proof: Let $Y$ be a $\mathbb G^{(k)}$-martingale stopped at $\sigma_k$, i.e. $Y_t=Y_{t\wedge\sigma_k}$ for any $t$. Then it is also a $(\mathcal G^{(k)}_{t\wedge\sigma_k})_{t\ge0}$-martingale. Since $\mathcal G^{(k)}_{t\wedge\sigma_k}=\mathcal G^{(n)}_{t\wedge\sigma_k}$, we shall prove that the $(\mathcal G^{(n)}_{t\wedge\sigma_k})_{t\ge0}$-martingale $Y$ is also a $\mathbb G^{(n)}$-martingale.
Let $A\in\mathcal G^{(n)}_t$. For any $T>t$, we have $E[Y_T\mathbb 1_A] = E[Y_T\mathbb 1_{A\cap\{\sigma_k\ge t\}}] + E[Y_T\mathbb 1_{A\cap\{\sigma_k<t\}}]$. On the one hand, since $A\cap\{\sigma_k\ge t\}\in\mathcal G^{(n)}_{\sigma_k\wedge t}$ and $Y$ is a $(\mathcal G^{(n)}_{t\wedge\sigma_k})_{t\ge0}$-martingale, we have $E[Y_T\mathbb 1_{A\cap\{\sigma_k\ge t\}}] = E[Y_t\mathbb 1_{A\cap\{\sigma_k\ge t\}}]$. On the other hand, since $Y$ is stopped at $\sigma_k$, $E[Y_T\mathbb 1_{A\cap\{\sigma_k<t\}}] = E[Y_t\mathbb 1_{A\cap\{\sigma_k<t\}}]$. Combining the two terms yields $E[Y_T\mathbb 1_A]=E[Y_t\mathbb 1_A]$. Hence $Y$ is a $\mathbb G^{(n)}$-martingale. ¤

The following is a familiar result in the literature.

Proposition 2.4.38 Assume that the $\mathbb G^{(k)}$-intensity $\lambda^k$ of $\sigma_k$ exists for all $k=1,\dots,n$. Then the intensity $\lambda^L$ of the loss process $L_t=\sum_{k=1}^n\mathbb 1_{\sigma_k\le t}$ is the sum of the intensities of the $\sigma_k$, i.e.
\[
\lambda^L = \sum_{k=1}^n \lambda^k,\quad \text{a.s.} \tag{2.4.11}
\]

Proof: Since $(\mathbb 1_{\sigma_k\le t}-\int_0^t\lambda^k_s\,ds,\,t\ge0)$ is a $\mathbb G^{(k)}$-martingale stopped at $\sigma_k$, it is a $\mathbb G^{(n)}$-martingale. Taking the sum, $(L_t-\int_0^t\sum_{k=1}^n\lambda^k_s\,ds,\,t\ge0)$ is a $\mathbb G^{(n)}$-martingale. So $\lambda^L_t=\sum_{k=1}^n\lambda^k_t$ for all $t\ge0$. ¤
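For instance, with $n$ independent exponential defaults with constant parameters $\lambda_k$ (an assumption made only for this illustrative sketch, not the general density setting), the loss intensity before the first default is $\sum_k\lambda_k$, so $\sigma_1=\min_k\tau_k$ is exponential with that parameter:

```python
import numpy as np

# With independent Exp(lam_k) default times, the loss intensity before the
# first default is sum(lam), so sigma_1 = min_k tau_k ~ Exp(sum(lam)).
rng = np.random.default_rng(4)
lam = np.array([0.5, 1.0, 2.0])                 # arbitrary illustrative values
tau = rng.exponential(1 / lam, size=(400_000, 3))
sigma1 = tau.min(axis=1)                         # first ordered default time

assert abs(np.mean(sigma1) - 1 / lam.sum()) < 2e-3          # mean = 1/3.5
assert abs(np.mean(sigma1 > 0.3) - np.exp(-lam.sum() * 0.3)) < 3e-3
```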

2.4.8 Pseudo-stopping Times ♠

As we have mentioned, if (H) holds, the process $(Z^\tau_t,\,t\ge0)$ is a decreasing process. The converse is not true. The decreasing property of $Z^\tau$ is closely related to the definition of pseudo-stopping times, a notion developed from D. Williams' example (see Example 2.4.41 below).

Definition 2.4.39 A random time $\tau$ is a pseudo-stopping time if, for any bounded $\mathbb F$-martingale $m$, $E(m_\tau)=m_0$.

Proposition 2.4.40 The random time $\tau$ is a pseudo-stopping time if and only if one of the following equivalent properties holds:

(a) for any local $\mathbb F$-martingale $m$, the process $(m_{t\wedge\tau},\,t\ge0)$ is a local $\mathbb F^\tau$-martingale;

(b) $A^\tau_\infty=1$;

(c) $\mu^\tau_t=1$, $\forall t\ge0$;

(d) the process $Z^\tau$ is a decreasing $\mathbb F$-predictable process.


Proof: The implication (d) $\Rightarrow$ (a) is a consequence of Jeulin's lemma. The implication (a) $\Rightarrow$ (b) follows from the properties of the compensator $A^\tau$: indeed,
\[
E(m_\tau) = E\Big(\int_0^\infty m_u\,dA^\tau_u\Big) = E(m_\infty A^\tau_\infty) = m_0
\]
implies that $A^\tau_\infty=1$. For the remaining implications, we refer to Nikeghbali and Yor [63]. ¤

Example 2.4.41 The first example of a pseudo-stopping time was given by Williams [76]. Let $B$ be a Brownian motion and define the stopping time $T_1=\inf\{t : B_t=1\}$ and the random time $\vartheta=\sup\{t<T_1 : B_t=0\}$. Set
\[
\tau = \sup\{s<\vartheta : B_s=M^B_s\}
\]
where $M^B_s$ is the running maximum of the Brownian motion. Then, $\tau$ is a pseudo-stopping time.
Note that $E(B_\tau)$ is not equal to 0; this illustrates the fact that we cannot take any martingale in Definition 2.4.39. The martingale $(B_{t\wedge T_1},\,t\ge0)$ is neither bounded, nor uniformly integrable. In fact, since the maximum $M^B_\vartheta$ ($=B_\tau$) is uniformly distributed on $[0,1]$, one has $E(B_\tau)=1/2$.

2.4.9 Honest Times

There exists an interesting class of random times $\tau$ such that $\mathbb F$-martingales are $\mathbb F^\tau$-semi-martingales.

Definition 2.4.42 A random time $\tau$ is honest if $\tau$ is equal to an $\mathcal F_t$-measurable random variable on $\{\tau<t\}$.

Example 2.4.43 (i) Let $t$ be fixed, $g_t=\sup\{s<t : B_s=0\}$ and set $\tau=g_1$. Then, $g_1=g_t$ on $\{g_1<t\}$.
(ii) Let $X$ be an adapted continuous process, $X^*=\sup_s X_s$ and $X^*_t=\sup_{s\le t}X_s$. The random time
\[
\tau = \inf\{s : X_s = X^*\}
\]
is honest. Indeed, on the set $\{\tau<t\}$, one has $\tau=\inf\{s\le t : X_s=X^*_t\}$.

Proposition 2.4.44 (Jeulin [46]) A random time $\tau$ is honest if it is the end of a predictable set, i.e., $\tau(\omega)=\sup\{t : (t,\omega)\in\Gamma\}$, where $\Gamma$ is an $\mathbb F$-predictable set.

In particular, an honest time is $\mathcal F_\infty$-measurable. If $X$ is a transient diffusion, the last passage time $\Lambda_a$ (see Proposition 3.1.1) is honest.

Lemma 2.4.45 A process $Y$ is $\mathbb F^\tau$-predictable if and only if there exist two $\mathbb F$-predictable processes $y$ and $\hat y$ such that
\[
Y_t = y_t\mathbb 1_{t\le\tau} + \hat y_t\mathbb 1_{t>\tau}.
\]
Let $X\in L^1$. Then a càdlàg version of the martingale $X_t=E[X|\mathcal F^\tau_t]$ is given by:
\[
X_t = \frac{1}{Z^\tau_t}E[X\mathbb 1_{t<\tau}|\mathcal F_t]\,\mathbb 1_{t<\tau} + \frac{1}{1-Z^\tau_t}E[X\mathbb 1_{t\ge\tau}|\mathcal F_t]\,\mathbb 1_{t\ge\tau}.
\]
Every $\mathbb F^\tau$-optional process decomposes as
\[
L\mathbb 1_{[0,\tau[} + J\mathbb 1_{[\tau]} + K\mathbb 1_{]\tau,\infty[},
\]
where $L$ and $K$ are $\mathbb F$-optional processes and where $J$ is an $\mathbb F$-progressively measurable process.


See Jeulin [46] for a proof.

Lemma 2.4.46 (Azéma) Let $\tau$ be an honest time which avoids $\mathbb F$-stopping times. Then:
(i) $A^\tau_\infty$ has an exponential law with parameter 1;
(ii) the measure $dA^\tau_t$ is carried by $\{t : Z^\tau_t=1\}$;
(iii) $\tau = \sup\{t : Z^\tau_t=1\}$;
(iv) $A^\tau_\infty = A^\tau_\tau$.

Proposition 2.4.47 Let $\tau$ be honest. We assume that $\tau$ avoids stopping times. Then, if $M$ is an $\mathbb F$-local martingale, there exists an $\mathbb F^\tau$-local martingale $\widetilde M$ such that
\[
M_t = \widetilde M_t + \int_0^{t\wedge\tau}\frac{d\langle M,\mu^\tau\rangle_s}{Z^\tau_{s-}} - \int_\tau^{\tau\vee t}\frac{d\langle M,\mu^\tau\rangle_s}{1-Z^\tau_{s-}}.
\]

Proof: Let $M$ be an $\mathbb F$-martingale which belongs to $H^1$ and $G_s\in\mathcal F^\tau_s$. We define a $\mathbb G$-predictable process $Y$ as $Y_u=\mathbb 1_{G_s}\mathbb 1_{]s,t]}(u)$. For $s<t$, one has, using the decomposition $Y_u=y_u\mathbb 1_{u\le\tau}+\hat y_u\mathbb 1_{u>\tau}$ of $\mathbb G$-predictable processes:
\[
E(\mathbb 1_{G_s}(M_t-M_s)) = E\Big(\int_0^\infty Y_u\,dM_u\Big) = E\Big(\int_0^\tau y_u\,dM_u\Big) + E\Big(\int_\tau^\infty \hat y_u\,dM_u\Big).
\]
Noting that $\int_0^t \hat y_u\,dM_u$ is a martingale yields $E\big(\int_0^\infty \hat y_u\,dM_u\big)=0$, hence
\[
E(\mathbb 1_{G_s}(M_t-M_s)) = E\Big(\int_0^\tau (y_u-\hat y_u)\,dM_u\Big) = E\Big(\int_0^\infty dA^\tau_v\int_0^v (y_u-\hat y_u)\,dM_u\Big).
\]
By integration by parts, setting $N_t=\int_0^t(y_u-\hat y_u)\,dM_u$, we get
\[
E(\mathbb 1_{G_s}(M_t-M_s)) = E(N_\infty A^\tau_\infty) = E(N_\infty \mu^\tau_\infty) = E\Big(\int_0^\infty (y_u-\hat y_u)\,d\langle M,\mu^\tau\rangle_u\Big).
\]
Now, it remains to note that
\[
\begin{aligned}
E\Big(\int_0^\infty Y_u\Big(\frac{d\langle M,\mu^\tau\rangle_u}{Z_{u-}}\mathbb 1_{u\le\tau} - \frac{d\langle M,\mu^\tau\rangle_u}{1-Z_{u-}}\mathbb 1_{u>\tau}\Big)\Big)
&= E\Big(\int_0^\infty \Big(y_u\,\frac{d\langle M,\mu^\tau\rangle_u}{Z_{u-}}\mathbb 1_{u\le\tau} - \hat y_u\,\frac{d\langle M,\mu^\tau\rangle_u}{1-Z_{u-}}\mathbb 1_{u>\tau}\Big)\Big)\\
&= E\Big(\int_0^\infty \big(y_u\,d\langle M,\mu^\tau\rangle_u - \hat y_u\,d\langle M,\mu^\tau\rangle_u\big)\Big)\\
&= E\Big(\int_0^\infty (y_u-\hat y_u)\,d\langle M,\mu^\tau\rangle_u\Big),
\end{aligned}
\]
where the second equality uses that the predictable projection of $\mathbb 1_{u\le\tau}$ is $Z_{u-}$ and that of $\mathbb 1_{u>\tau}$ is $1-Z_{u-}$, to conclude the result in the case $M\in H^1$. The general result follows by localization. ¤

Example 2.4.48 Let $W$ be a Brownian motion, and $\tau=g_1$, the last time when the Brownian motion reaches 0 before time 1, i.e., $\tau=\sup\{t\le1 : W_t=0\}$. Using the computation of $Z^{g_1}$ in the following Subsection 3.3 and applying Proposition 2.4.47, we obtain the decomposition of the Brownian motion in the enlarged filtration:
\[
W_t = \widetilde W_t - \int_0^t \mathbb 1_{[0,\tau]}(s)\,\frac{\Phi'}{1-\Phi}\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\frac{\operatorname{sgn}(W_s)}{\sqrt{1-s}}\,ds + \mathbb 1_{\tau\le t}\,\operatorname{sgn}(W_1)\int_\tau^t \frac{\Phi'}{\Phi}\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\frac{ds}{\sqrt{1-s}},
\]
where $\Phi(x)=\sqrt{2/\pi}\int_0^x \exp(-u^2/2)\,du$.

Comment 2.4.49 Note that the random time $\tau$ presented in Subsection 3.4 is not the end of a predictable set, hence is not honest. However, $\mathbb F$-martingales are semi-martingales in the progressively enlarged filtration: it suffices to note that $\mathbb F$-martingales are semi-martingales in the filtration initially enlarged with $W_1$.

Exercise 2.4.50 Prove that any $\mathbb F$-stopping time is honest. C

Exercise 2.4.51 Prove that
\[
E\Big(\int_0^{t\wedge\tau}\frac{d\langle M,\mu^\tau\rangle_s}{Z^\tau_{s-}} - \int_\tau^{\tau\vee t}\frac{d\langle M,\mu^\tau\rangle_s}{1-Z^\tau_{s-}}\,\Big|\,\mathcal F_t\Big)
\]
is an $\mathbb F$-martingale. C

2.4.10 An example (Nikeghbali and Yor) ♠

This section is a part of [64].

Definition 2.4.52 An $\mathbb F$-local martingale $N$ belongs to the class $(C_0)$ if it is strictly positive, with no positive jumps, and $\lim_{t\to\infty}N_t=0$.

Let $N$ be a local martingale which belongs to the class $(C_0)$, with $N_0=x$. Let $S_t=\sup_{s\le t}N_s$. We consider:
\[
g = \sup\{t\ge0 : N_t=S_\infty\} = \sup\{t\ge0 : S_t-N_t=0\}. \tag{2.4.12}
\]

Lemma 2.4.53 (Doob's maximal identity) For any $a>0$, we have:
\[
P(S_\infty>a) = \Big(\frac{x}{a}\Big)\wedge 1. \tag{2.4.13}
\]
In particular, $\dfrac{x}{S_\infty}$ is a uniform random variable on $(0,1)$.

For any stopping time $\vartheta$, denoting $S^\vartheta=\sup_{u\ge\vartheta}N_u$:
\[
P\big(S^\vartheta>a\,\big|\,\mathcal F_\vartheta\big) = \Big(\frac{N_\vartheta}{a}\Big)\wedge 1. \tag{2.4.14}
\]
Hence $\dfrac{N_\vartheta}{S^\vartheta}$ is also a uniform random variable on $(0,1)$, independent of $\mathcal F_\vartheta$.

Proof: The first part was done in Exercise 1.2.3. The second part is an application of the first one to the martingale $(N_{\vartheta+t},\,t\ge0)$ and the filtration $(\mathcal F_{\vartheta+t},\,t\ge0)$. ¤

Without loss of generality, we restrict our attention to the case $x=1$.
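A Monte Carlo illustration of (2.4.13) (a sketch with assumed discretisation parameters, not part of the notes): take $N_t=\exp(B_t-t/2)$, which belongs to the class $(C_0)$ with $N_0=1$ (cf. Exercise 2.4.61). Then $P(S_\infty>a)$ should equal $(1/a)\wedge1$ and $1/S_\infty$ should be approximately uniform on $(0,1)$; the truncation of the horizon and the time discretisation introduce a small bias, hence the loose tolerances.

```python
import numpy as np

# Monte Carlo check of Doob's maximal identity for N_t = exp(B_t - t/2).
# S_inf = sup_t N_t is approximated by the running maximum on [0, T].
rng = np.random.default_rng(1)
n_paths, T, dt = 10_000, 30.0, 0.002
n_steps = int(T / dt)

x = np.zeros(n_paths)          # current value of B_t - t/2
m = np.zeros(n_paths)          # running maximum of B_t - t/2
for _ in range(n_steps):
    x += rng.normal(-0.5 * dt, np.sqrt(dt), n_paths)
    np.maximum(m, x, out=m)

S_inf = np.exp(m)              # sup of N_t = exp(sup of B_t - t/2)

# P(S_inf > a) should be (1/a) ∧ 1, and 1/S_inf should be uniform on (0,1)
assert abs(np.mean(S_inf > 2.0) - 0.5) < 0.03
assert abs(np.mean(1.0 / S_inf) - 0.5) < 0.03
```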

Proposition 2.4.54 The submartingale $G_t := P(g>t\,|\,\mathcal F_t)$ admits the multiplicative decomposition $G_t=\dfrac{N_t}{S_t}$, $t\ge0$.

Proof: We have the following equalities between sets:
\[
\{g>t\} = \{\exists\,u>t : S_u=N_u\} = \{\exists\,u>t : S_t\le N_u\} = \Big\{\sup_{u\ge t}N_u\ge S_t\Big\} = \{S^t\ge S_t\},
\]
where $S^t=\sup_{u\ge t}N_u$. Hence, from (2.4.14), we get $P(g>t\,|\,\mathcal F_t)=\dfrac{N_t}{S_t}$. ¤


Proposition 2.4.55 Let $N$ be a local martingale such that its supremum process $S$ is continuous (this is the case if $N$ is in the class $(C_0)$). Let $f$ be a locally bounded Borel function and define $F(x)=\int_0^x f(y)\,dy$. Then, $X_t := F(S_t)-f(S_t)(S_t-N_t)$ is a local martingale and:
\[
F(S_t)-f(S_t)(S_t-N_t) = F(S_0) + \int_0^t f(S_s)\,dN_s. \tag{2.4.15}
\]

Proof: In the case of a Brownian motion (i.e., $N=B$), this was done in Exercise 1.2.2. In the case of continuous martingales, if $F$ is $C^2$,
\[
F(S_t)-f(S_t)(S_t-N_t) = F(S_t) - \int_0^t f(S_s)\,dS_s + \int_0^t f(S_s)\,dN_s - \int_0^t (S_s-N_s)f'(S_s)\,dS_s.
\]
The last integral is null, because $dS$ is carried by $\{S-N=0\}$, and $\int_0^t f(S_s)\,dS_s = F(S_t)-F(S_0)$.

For the general case, we refer the reader to [64]. ¤

Let us define the new filtration $\mathcal F^{\sigma(S_\infty)}_t$ (the initial enlargement of $\mathcal F_t$ with $\sigma(S_\infty)$). Since $g=\inf\{t : N_t=S_\infty\}$, the random variable $g$ is an $\mathbb F^{\sigma(S_\infty)}$-stopping time. Consequently:
\[
\mathcal F^g_t \subset \mathcal F^{\sigma(S_\infty)}_t.
\]

Proposition 2.4.56 For any Borel bounded or positive function $f$, we have:
\[
E(f(S_\infty)|\mathcal F_t) = f(S_t)\Big(1-\frac{N_t}{S_t}\Big) + \int_0^{N_t/S_t} f\Big(\frac{N_t}{x}\Big)\,dx.
\]

Proof: In the following, $U$ is a random variable which follows the standard uniform law and which is independent of $\mathcal F_t$. Writing $S_\infty=S_t\vee S^t$ and, from (2.4.14), $S^t=N_t/U$:
\[
\begin{aligned}
E(f(S_\infty)|\mathcal F_t) &= E\big(f(S_t\vee S^t)\,|\,\mathcal F_t\big)\\
&= E\big(f(S_t)\mathbb 1_{S_t\ge S^t}|\mathcal F_t\big) + E\big(f(S^t)\mathbb 1_{S_t<S^t}|\mathcal F_t\big)\\
&= f(S_t)\,P\Big(U\ge\frac{N_t}{S_t}\,\Big|\,\mathcal F_t\Big) + E\Big(f\Big(\frac{N_t}{U}\Big)\mathbb 1_{U<N_t/S_t}\,\Big|\,\mathcal F_t\Big)\\
&= f(S_t)\Big(1-\frac{N_t}{S_t}\Big) + \int_0^{N_t/S_t} f\Big(\frac{N_t}{x}\Big)\,dx.
\end{aligned}
\]
¤

We now show that $E(f(S_\infty)|\mathcal F_t)$ is of the form (2.4.15). A straightforward change of variable in the last integral gives:
\[
\begin{aligned}
E(f(S_\infty)|\mathcal F_t) &= f(S_t)\Big(1-\frac{N_t}{S_t}\Big) + N_t\int_{S_t}^\infty \frac{f(y)}{y^2}\,dy\\
&= S_t\int_{S_t}^\infty \frac{f(y)}{y^2}\,dy - (S_t-N_t)\Big(\int_{S_t}^\infty \frac{f(y)}{y^2}\,dy - \frac{f(S_t)}{S_t}\Big).
\end{aligned}
\]
Hence,
\[
E(f(S_\infty)|\mathcal F_t) = H(S_t) - h(S_t)(S_t-N_t),
\]


with
\[
H(x) = x\int_x^\infty \frac{f(y)}{y^2}\,dy,
\]
and
\[
h(x) = h_f(x) := \int_x^\infty \frac{f(y)}{y^2}\,dy - \frac{f(x)}{x} = \int_x^\infty \frac{dy}{y^2}\,\big(f(y)-f(x)\big).
\]
Moreover, from the Azéma-Yor type formula (2.4.15) (note that $H'=h$), we have the following representation of $E(f(S_\infty)|\mathcal F_t)$ as a stochastic integral:
\[
E(f(S_\infty)|\mathcal F_t) = E(f(S_\infty)) + \int_0^t h(S_s)\,dN_s.
\]
Moreover, there exist two families of random measures $(\lambda_t(dx))_{t\ge0}$ and $(\hat\lambda_t(dx))_{t\ge0}$, with
\[
\lambda_t(dx) = \Big(1-\frac{N_t}{S_t}\Big)\delta_{S_t}(dx) + N_t\,\mathbb 1_{x>S_t}\frac{dx}{x^2},\qquad
\hat\lambda_t(dx) = -\frac{1}{S_t}\,\delta_{S_t}(dx) + \mathbb 1_{x>S_t}\frac{dx}{x^2},
\]
such that
\[
E(f(S_\infty)|\mathcal F_t) = \lambda_t(f) = \int \lambda_t(dx)\,f(x),\qquad h(S_t) = \hat\lambda_t(f) = \int \hat\lambda_t(dx)\,f(x).
\]
Finally, we notice that there is an absolute continuity relationship between $\hat\lambda_t(dx)$ and $\lambda_t(dx)$; more precisely,
\[
\hat\lambda_t(dx) = \lambda_t(dx)\,\rho(x,t),
\]
with
\[
\rho(x,t) = \frac{-1}{S_t-N_t}\,\mathbb 1_{S_t=x} + \frac{1}{N_t}\,\mathbb 1_{S_t<x}.
\]

Theorem 2.4.57 Let $N$ be a local martingale in the class $(C_0)$ (recall $N_0=1$). Then, the pair of filtrations $\big(\mathbb F,\mathbb F^{\sigma(S_\infty)}\big)$ satisfies the $(H')$ hypothesis and every $\mathbb F$-local martingale $X$ is an $\mathbb F^{\sigma(S_\infty)}$-semimartingale with canonical decomposition:
\[
X_t = \widetilde X_t + \int_0^t \mathbb 1_{g>s}\frac{d\langle X,N\rangle_s}{N_{s-}} - \int_0^t \mathbb 1_{g\le s}\frac{d\langle X,N\rangle_s}{S_\infty-N_{s-}},
\]
where $\widetilde X$ is an $\mathbb F^{\sigma(S_\infty)}$-local martingale.

Proof: We can first assume that $X$ is in $H^1$; the general case follows by localization. Let $K_s$ be an $\mathcal F_s$-measurable set, and take $t>s$. Then, for any bounded test function $f$, $\lambda_t(f)$ is a bounded martingale, hence in BMO, and we have:
\[
\begin{aligned}
E(\mathbb 1_{K_s}f(S_\infty)(X_t-X_s)) &= E\big(\mathbb 1_{K_s}(\lambda_t(f)X_t-\lambda_s(f)X_s)\big) = E\big(\mathbb 1_{K_s}(\langle\lambda(f),X\rangle_t-\langle\lambda(f),X\rangle_s)\big)\\
&= E\Big(\mathbb 1_{K_s}\int_s^t \hat\lambda_u(f)\,d\langle X,N\rangle_u\Big)\\
&= E\Big(\mathbb 1_{K_s}\int_s^t \int\lambda_u(dx)\,\rho(x,u)f(x)\,d\langle X,N\rangle_u\Big)\\
&= E\Big(\mathbb 1_{K_s}f(S_\infty)\int_s^t \rho(S_\infty,u)\,d\langle X,N\rangle_u\Big).
\end{aligned}
\]


But we also have:
\[
\rho(S_\infty,t) = \frac{-1}{S_t-N_t}\,\mathbb 1_{S_t=S_\infty} + \frac{1}{N_t}\,\mathbb 1_{S_t<S_\infty}.
\]
It now suffices to use the fact that $S$ is constant after $g$ and that $g$ is the first time when $S_\infty=S_t$, or in other words:
\[
\mathbb 1_{S_\infty>S_t} = \mathbb 1_{g>t},\qquad \mathbb 1_{S_\infty=S_t} = \mathbb 1_{g\le t}. \qquad ¤
\]

Corollary 2.4.58 The pair of filtrations $(\mathbb F,\mathbb F^g)$ satisfies the $(H')$ hypothesis. Moreover, every $\mathbb F$-local martingale $X$ decomposes as:
\[
X_t = \widetilde X_t + \int_0^t \mathbb 1_{g>s}\frac{d\langle X,N\rangle_s}{N_s} - \int_0^t \mathbb 1_{g\le s}\frac{d\langle X,N\rangle_s}{S_\infty-N_s},
\]
where $\widetilde X$ is an $\mathbb F^g$-local martingale.

Proof: Let $X$ be an $\mathbb F$-martingale which is in $H^1$; the general case follows by localization. From Theorem 2.4.57,
\[
X_t = \widetilde X_t + \int_0^t \mathbb 1_{g>s}\frac{d\langle X,N\rangle_s}{N_s} - \int_0^t \mathbb 1_{g\le s}\frac{d\langle X,N\rangle_s}{S_\infty-N_s},
\]
where $\widetilde X$ denotes an $\mathbb F^{\sigma(S_\infty)}$-martingale. Thus $\widetilde X$, which is equal to
\[
X_t - \Big(\int_0^t \mathbb 1_{g>s}\frac{d\langle X,N\rangle_s}{N_s} - \int_0^t \mathbb 1_{g\le s}\frac{d\langle X,N\rangle_s}{S_\infty-N_s}\Big),
\]
is $\mathbb F^g$-adapted (recall that $\mathcal F^g_t\subset\mathcal F^{\sigma(S_\infty)}_t$), and hence it is an $\mathbb F^g$-martingale. ¤

These results extend to honest times:

Theorem 2.4.59 Let $L$ be an honest time. Then, under the conditions (CA), the supermartingale $Z_t=P(L>t\,|\,\mathcal F_t)$ admits the following additive and multiplicative representations: there exists a continuous and nonnegative local martingale $N$, with $N_0=1$ and $\lim_{t\to\infty}N_t=0$, such that:
\[
Z_t = P(L>t\,|\,\mathcal F_t) = \frac{N_t}{S_t},\qquad Z_t = M_t - A_t,
\]
where these two representations are related as follows:
\[
N_t = \exp\Big(\int_0^t \frac{dM_s}{Z_s} - \frac12\int_0^t \frac{d\langle M\rangle_s}{Z_s^2}\Big),\qquad S_t = \exp(A_t);
\]
\[
M_t = 1 + \int_0^t \frac{dN_s}{S_s} = E(\log S_\infty\,|\,\mathcal F_t),\qquad A_t = \log S_t.
\]

Remark 2.4.60 From the Doob-Meyer (additive) decomposition of $Z$, we have $1-Z_t = (1-M_t) + \log S_t$. From Skorokhod's reflection lemma (see [3M]) we deduce that
\[
\log S_t = \sup_{s\le t}(M_s-1).
\]

Exercise 2.4.61 Prove that $\exp\big(\lambda B_t - \frac{\lambda^2}{2}t\big)$ belongs to $(C_0)$. C


2.5 Initial Times

We assume that $\tau$ is a random time satisfying Jacod's criterion (see Proposition 2.3.2):
\[
P(\tau>\theta|\mathcal F_t) = \int_\theta^\infty p_t(u)\,\nu(du)
\]
where $\nu$ is the law of $\tau$. Moreover, in this section, we assume that $p$ is strictly positive. We call initial times the random times that enjoy this property.

2.5.1 Progressive enlargement with initial times

As seen in Proposition 2.3.2, if $\tau$ is an initial time and $X$ is an $\mathbb F$-martingale, then $X$ is an $\mathbb F^{(\tau)}=\mathbb F\vee\sigma(\tau)$-semimartingale with decomposition
\[
X_t = \widetilde X_t + \int_0^t \frac{d\langle X,p(\theta)\rangle_u}{p_{u-}(\theta)}\Big|_{\theta=\tau}
\]
where $\widetilde X$ is an $\mathbb F\vee\sigma(\tau)$-martingale.

Since $X$ is an $\mathbb F^{(\tau)}$-semimartingale and is $\mathbb G=\mathbb F^\tau$-adapted, it is a $\mathbb G$-semimartingale. We are now looking for its representation. By projection on $\mathbb G$, it follows that
\[
X_t = E(\widetilde X_t|\mathcal G_t) + E\Big(\int_0^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}\,\Big|\,\mathcal G_t\Big).
\]
The process $E(\widetilde X_t|\mathcal G_t)$ is a $\mathbb G$-martingale.

Proposition 2.5.1 Let $X$ be an $\mathbb F$-martingale. Then, it is a $\mathbb G$-semimartingale with decomposition
\[
X_t = \widehat X_t + \int_0^{t\wedge\tau}\frac{d\langle X,G\rangle_s}{G_{s-}} + \int_{t\wedge\tau}^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}
\]
where $\widehat X$ is a $\mathbb G$-martingale.

Proof: We give the proof in the case where $\mathbb F$ is a Brownian filtration. Then, for any $\theta$, there exist two predictable processes $\gamma(\theta)$ and $x$ such that
\[
dX_t = x_t\,dW_t,\qquad d_tp_t(\theta) = p_t(\theta)\gamma_t(\theta)\,dW_t
\]
and $d\langle X,p(\theta)\rangle_t = x_tp_t(\theta)\gamma_t(\theta)\,dt$. It follows that $\int_0^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\big|_{\theta=\tau} = \int_0^t x_s\gamma_s(\tau)\,ds$. We write
\[
E\Big(\int_0^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}\,\Big|\,\mathcal G_t\Big)
= E\Big(\int_0^{t\wedge\tau} x_s\gamma_s(\tau)\,ds\,\Big|\,\mathcal G_t\Big) + \int_{t\wedge\tau}^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}.
\]
From Exercise 1.2.5,
\[
E\Big(\int_0^{t\wedge\tau} x_s\gamma_s(\tau)\,ds\,\Big|\,\mathcal G_t\Big) = m_t + \int_0^t x_s\,E(\mathbb 1_{s<\tau}\gamma_s(\tau)|\mathcal G_s)\,ds
\]
where $m$ is a $\mathbb G$-martingale. From the key lemma,
\[
E(\mathbb 1_{s<\tau}\gamma_s(\tau)|\mathcal G_s) = \mathbb 1_{s<\tau}\frac{1}{G_s}E(\mathbb 1_{s<\tau}\gamma_s(\tau)|\mathcal F_s) = \mathbb 1_{s<\tau}\frac{1}{G_s}E\Big(\int_s^\infty \gamma_s(u)p_s(u)\,\nu(du)\,\Big|\,\mathcal F_s\Big) = \mathbb 1_{s<\tau}\frac{1}{G_s}\int_s^\infty \gamma_s(u)p_s(u)\,\nu(du).
\]
Now, we compute $d\langle X,G\rangle$. In order to compute this bracket, one needs the martingale part of $G$. From $G_t=\int_t^\infty p_t(u)\,\nu(du)$, one deduces that
\[
dG_t = \Big(\int_t^\infty \gamma_t(u)p_t(u)\,\nu(du)\Big)dW_t - p_t(t)\,\nu(dt).
\]
The last property can be obtained from the Itô-Kunita-Wentzell formula (this can also be checked by hand: indeed, it is straightforward to check that $G_t+\int_0^t p_s(s)\,\nu(ds)$ is a martingale). It follows that $d\langle X,G\rangle_t = x_t\big(\int_t^\infty \gamma_t(u)p_t(u)\,\nu(du)\big)dt$, hence the result. ¤

Proposition 2.5.2 A càdlàg process $Y^{\mathbb G}$ is a $\mathbb G$-martingale if (and only if) there exist an $\mathbb F$-adapted càdlàg process $y$ and an $\mathcal F_t\otimes\mathcal B(\mathbb R^+)$-optional process $y_t(\cdot)$ such that
\[
Y^{\mathbb G}_t = y_t\mathbb 1_{\tau>t} + y_t(\tau)\mathbb 1_{\tau\le t}
\]
and that:
• $E(Y^{\mathbb G}_t|\mathcal F_t)$ is an $\mathbb F$-local martingale;
• for any $\theta$, $(y_t(\theta)p_t(\theta),\,t\ge\theta)$ is an $\mathbb F$-martingale.

Proof: We prove only the 'if' part. Let $Q$ be defined as $dQ|_{\mathcal F^{(\tau)}_t} = \frac{1}{p_t(\tau)}\,dP|_{\mathcal F^{(\tau)}_t}$. It follows that $dQ|_{\mathcal G_t}=\frac{1}{L_t}\,dP|_{\mathcal G_t}$ (or, equivalently, $dP|_{\mathcal G_t}=L_t\,dQ|_{\mathcal G_t}$, where $L_t=E_Q(p_t(\tau)|\mathcal G_t)$). The process $Y$ is a $(P,\mathbb G)$-martingale if and only if $YL$ is a $(Q,\mathbb G)$-martingale, where
\[
L_t = E_Q(p_t(\tau)|\mathcal G_t) = \mathbb 1_{t<\tau}\,\ell_t + \mathbb 1_{\tau\le t}\,p_t(\tau)
\]
(with $\ell_t=\frac{1}{Q(t<\tau)}E_Q(\mathbb 1_{t<\tau}p_t(\tau)|\mathcal F_t)$), so that $Y_tL_t = \mathbb 1_{t<\tau}y_t\ell_t + \mathbb 1_{\tau\le t}p_t(\tau)y_t(\tau)$. Under $Q$, $\tau$ and $\mathcal F_\infty$ are independent, and obviously $\mathbb F$ is immersed in $\mathbb G$. From Proposition 2.4.21, the process $YL$ is a $(\mathbb G,Q)$-martingale if $(p_t(u)y_t(u),\,t\ge u)$ is, for any $u$, an $(\mathbb F,Q)$-martingale and if $E_Q(Y_tL_t|\mathcal F_t)$ is an $(\mathbb F,Q)$-martingale. We now note that, from the Bayes rule, $E_Q(Y_tL_t|\mathcal F_t)=E_Q(L_t|\mathcal F_t)\,E_P(Y_t|\mathcal F_t)$ and that
\[
E_Q(L_t|\mathcal F_t) = E_Q(p_t(\tau)|\mathcal F_t) = \int_0^\infty p_t(u)\,\nu(du) = 1,
\]
therefore $E_Q(Y_tL_t|\mathcal F_t)=E_P(Y_t|\mathcal F_t)$. Note that
\[
E_P(Y^{\mathbb G}_t|\mathcal F_t) = y_tG_t + \int_0^t y_t(s)p_t(s)\,\nu(ds),\quad t\ge0,
\]
and that
\[
\int_0^t y_t(s)p_t(s)\,\nu(ds) - \int_0^t y_s(s)p_s(s)\,\nu(ds),\quad t\ge0,
\]
is a martingale (WHY?), so that the condition "$E_P(Y_t|\mathcal F_t)$ is a martingale" is equivalent to
\[
y_tG_t + \int_0^t y_s(s)p_s(s)\,\nu(ds),\quad t\ge0,
\]
being a martingale. ¤

Theorem 2.5.3 Let $Z^{\mathbb G}_t = z_t\mathbb 1_{\tau>t} + z_t(\tau)\mathbb 1_{\tau\le t}$ be a positive $\mathbb G$-martingale with $Z^{\mathbb G}_0=1$ and let $Z^{\mathbb F}_t = z_tG_t + \int_0^t z_t(u)p_t(u)\,\nu(du)$ be its $\mathbb F$-projection.
Let $Q$ be the probability measure defined on $\mathcal G_t$ by $dQ=Z^{\mathbb G}_t\,dP$. Then, for $t\ge\theta$, $p^Q_t(\theta) = p_t(\theta)\dfrac{z_t(\theta)}{Z^{\mathbb F}_t}$, and:

(i) the $Q$-conditional survival process is $G^Q_t = \dfrac{G_tz_t}{Z^{\mathbb F}_t}$;

(ii) the $(\mathbb F,Q)$-intensity process is $\lambda^{\mathbb F,Q}_t = \lambda^{\mathbb F}_t\dfrac{z_t(t)}{z_{t-}}$, $dt$-a.s.;

(iii) $L^{\mathbb F,Q}$ is the $(\mathbb F,Q)$-local martingale
\[
L^{\mathbb F,Q}_t = L^{\mathbb F}_t\,\frac{z_t}{Z^{\mathbb F}_t}\exp\Big(\int_0^t \big(\lambda^{\mathbb F,Q}_s-\lambda^{\mathbb F}_s\big)\,ds\Big).
\]

Proof: From the change of probability,
\[
Q(\tau>\theta|\mathcal F_t) = \frac{1}{E_P(Z^{\mathbb G}_t|\mathcal F_t)}E_P(Z^{\mathbb G}_t\mathbb 1_{\tau>\theta}|\mathcal F_t) = \frac{1}{Z^{\mathbb F}_t}E_P(z_t(\tau)\mathbb 1_{\tau>\theta}|\mathcal F_t) = \frac{1}{Z^{\mathbb F}_t}\int_\theta^\infty z_t(u)p_t(u)\,\nu(du).
\]
The form of the survival process follows immediately from the fact that
\[
z_tG_t + \int_t^\infty z_t(u)p_t(u)\,\nu(du)
\]
is a martingale (or by an immediate application of the Bayes formula). The form of the intensity is obvious. The form of $L$ is obtained from the Bayes formula. ¤

Exercise 2.5.4 Prove that the change of probability measure generated by the two processes
\[
z_t = (L^{\mathbb F}_t)^{-1},\qquad z_t(\theta) = \frac{p_\theta(\theta)}{p_t(\theta)}
\]
provides a model where the immersion property holds true, and where the intensity process does not change. C

Exercise 2.5.5 Check that
\[
E\Big(\int_0^{t\wedge\tau}\frac{d\langle X,G\rangle_s}{G_{s-}} + \int_{t\wedge\tau}^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}\,\Big|\,\mathcal F_t\Big)
\]
is an $\mathbb F$-martingale. Check also that
\[
E\Big(\int_0^t \frac{d\langle X,p(\theta)\rangle_s}{p_{s-}(\theta)}\Big|_{\theta=\tau}\,\Big|\,\mathcal G_t\Big)
\]
is a $\mathbb G$-martingale. C

Exercise 2.5.6 Let $\lambda$ be a positive $\mathbb F$-adapted process, $\Lambda_t=\int_0^t\lambda_s\,ds$, and let $\Theta$ be a strictly positive random variable such that there exists a family $\gamma_t(u)$ with $P(\Theta>\theta|\mathcal F_t)=\int_\theta^\infty\gamma_t(u)\,du$. Let $\tau=\inf\{t>0:\Lambda_t\ge\Theta\}$.
Prove that the density of $\tau$ is given by
\[
p_t(\theta) = \lambda_\theta\gamma_t(\Lambda_\theta)\ \text{ if } t\ge\theta\qquad\text{and}\qquad p_t(\theta) = E[\lambda_\theta\gamma_\theta(\Lambda_\theta)|\mathcal F_t]\ \text{ if } t<\theta.
\]
Conversely, if we are given a density $p$, prove that it is possible to construct a threshold $\Theta$ such that $\tau$ has $p$ as density. C

Application: Defaultable Zero-Coupon Bonds

A defaultable zero-coupon bond with maturity $T$ associated with the default time $\tau$ is an asset which pays one monetary unit at time $T$ if (and only if) the default has not occurred before $T$. We assume that $P$ is the pricing measure. By definition, the risk-neutral price under $P$ of the $T$-maturity defaultable zero-coupon bond with zero recovery equals, for every $t\in[0,T]$,
\[
D(t,T) := P(\tau>T\,|\,\mathcal G_t) = \mathbb 1_{\tau>t}\,\frac{P(\tau>T\,|\,\mathcal F_t)}{G_t} = \mathbb 1_{\tau>t}\,\frac{E_P\big(N_Te^{-\Lambda_T}\,|\,\mathcal F_t\big)}{G_t} \tag{2.5.1}
\]


where $N$ is the martingale part in the multiplicative decomposition $G_t=N_te^{-\Lambda_t}$ of $G$. Using (2.5.1), we obtain
\[
D(t,T) := P(\tau>T\,|\,\mathcal G_t) = \mathbb 1_{\tau>t}\,\frac{1}{N_te^{-\Lambda_t}}\,E_P\big(N_Te^{-\Lambda_T}\,|\,\mathcal F_t\big).
\]

However, using a change of probability, one can get rid of the martingale part $N$, assuming that there exists $p$ such that
\[
P(\tau>\theta|\mathcal F_t) = \int_\theta^\infty p_t(u)\,du.
\]
Let $P^*$ be defined as
\[
dP^*|_{\mathcal G_t} = Z^*_t\,dP|_{\mathcal G_t}
\]
where $Z^*$ is the $(P,\mathbb G)$-martingale defined as
\[
Z^*_t = \mathbb 1_{t<\tau} + \mathbb 1_{t\ge\tau}\,\lambda_\tau e^{-\Lambda_\tau}\,\frac{N_t}{p_t(\tau)}.
\]
Note that $dP^*|_{\mathcal F_t} = N_t\,dP|_{\mathcal F_t}$ and that $P^*$ and $P$ coincide on $\mathcal G_\tau$. Indeed,
\[
E_P(Z^*_t|\mathcal F_t) = G_t + \int_0^t \lambda_u e^{-\Lambda_u}\,\frac{N_t}{p_t(u)}\,p_t(u)\,du = N_te^{-\Lambda_t} + N_t\int_0^t \lambda_u e^{-\Lambda_u}\,du = N_te^{-\Lambda_t} + N_t\big(1-e^{-\Lambda_t}\big) = N_t.
\]
Then, for $t>\theta$,
\[
\begin{aligned}
P^*(\theta<\tau|\mathcal F_t) &= \frac{1}{N_t}E_P(Z^*_t\mathbb 1_{\theta<\tau}|\mathcal F_t) = \frac{1}{N_t}E_P\Big(\mathbb 1_{t<\tau} + \mathbb 1_{t\ge\tau>\theta}\,\lambda_\tau e^{-\Lambda_\tau}\,\frac{N_t}{p_t(\tau)}\,\Big|\,\mathcal F_t\Big)\\
&= \frac{1}{N_t}\Big(N_te^{-\Lambda_t} + \int_\theta^t \lambda_u e^{-\Lambda_u}\,\frac{N_t}{p_t(u)}\,p_t(u)\,du\Big)\\
&= \frac{1}{N_t}\Big(N_te^{-\Lambda_t} + N_t\big(e^{-\Lambda_\theta}-e^{-\Lambda_t}\big)\Big) = e^{-\Lambda_\theta},
\end{aligned}
\]
which proves that immersion holds true under $P^*$, and that the intensity of $\tau$ is the same under $P$ and $P^*$. It follows that
\[
E_P(X\mathbb 1_{T<\tau}|\mathcal G_t) = E^*(X\mathbb 1_{T<\tau}|\mathcal G_t) = \mathbb 1_{t<\tau}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(e^{-\Lambda_T}X\,|\,\mathcal F_t\big).
\]
Note that, even if the intensity is the same under $P$ and $P^*$, its dynamics under $P^*$ will involve a change of driving process, since $P$ and $P^*$ do not coincide on $\mathcal F_\infty$.

Let us now study the pricing of a recovery. Let $Z$ be an $\mathbb F$-predictable bounded process. Then
\[
\begin{aligned}
E_P(Z_\tau\mathbb 1_{t<\tau\le T}|\mathcal G_t) &= \mathbb 1_{t<\tau}\,\frac{1}{G_t}\,E_P\Big(-\int_t^T Z_u\,dG_u\,\Big|\,\mathcal F_t\Big)\\
&= \mathbb 1_{t<\tau}\,\frac{1}{G_t}\,E_P\Big(\int_t^T Z_uN_u\lambda_u e^{-\Lambda_u}\,du\,\Big|\,\mathcal F_t\Big)\\
&= E^*(Z_\tau\mathbb 1_{t<\tau\le T}|\mathcal G_t)\\
&= \mathbb 1_{t<\tau}\,\frac{1}{e^{-\Lambda_t}}\,E^*\Big(\int_t^T Z_u\lambda_u e^{-\Lambda_u}\,du\,\Big|\,\mathcal F_t\Big).
\end{aligned}
\]


The problem is more difficult for the pricing of a recovery paid at maturity, i.e., for $X\in\mathcal F_T$:
\[
\begin{aligned}
E_P(X\mathbb 1_{\tau<T}|\mathcal G_t) &= E_P(X|\mathcal G_t) - E_P(X\mathbb 1_{\tau>T}|\mathcal G_t) = E_P(X|\mathcal G_t) - \mathbb 1_{\tau>t}\,\frac{1}{N_te^{-\Lambda_t}}\,E_P\big(XN_Te^{-\Lambda_T}\,|\,\mathcal F_t\big)\\
&= E_P(X|\mathcal G_t) - \mathbb 1_{\tau>t}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\,|\,\mathcal F_t\big).
\end{aligned}
\]
Since immersion holds true under $P^*$,
\[
E^*(X\mathbb 1_{\tau<T}|\mathcal G_t) = E^*(X|\mathcal G_t) - \mathbb 1_{\tau>t}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\,|\,\mathcal F_t\big)
= E^*(X|\mathcal F_t) - \mathbb 1_{\tau>t}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\,|\,\mathcal F_t\big).
\]
If both quantities $E_P(X\mathbb 1_{\tau<T}|\mathcal G_t)$ and $E^*(X\mathbb 1_{\tau<T}|\mathcal G_t)$ were the same, this would imply that $E_P(X|\mathcal G_t)=E^*(X|\mathcal F_t)$, which is impossible: this would lead to $E_P(X|\mathcal G_t)=E_P(X|\mathcal F_t)$, i.e., immersion would hold under $P$. Hence, the non-immersion property is important when evaluating a recovery paid at maturity ($P^*$ and $P$ do not coincide on $\mathcal F_\infty$).

2.5.2 Survival Process ♠

Proposition 2.5.7 Let $(\Omega,\mathbb F,P)$ be a given filtered probability space. Assume that $N$ is a $(P,\mathbb F)$-local martingale and $\Lambda$ an $\mathbb F$-adapted increasing process such that $0<N_te^{-\Lambda_t}<1$ for $t>0$ and $N_0=1=e^{-\Lambda_0}$. Then, there exists (on an extended probability space) a probability $Q$ which satisfies $Q|_{\mathcal F_t}=P|_{\mathcal F_t}$ and a random time $\tau$ such that $Q(\tau>t|\mathcal F_t)=N_te^{-\Lambda_t}$.

Proof: We give the proof when $\mathbb F$ is a Brownian filtration and $\Lambda_t=\int_0^t\lambda_u\,du$. For the general case, see [45, 44]. We shall construct, using a change of probability, a conditional probability $G_t(u)$ which admits the given survival process (i.e. $G_t(t)=G_t=N_te^{-\Lambda_t}$). From the conditional probability, one can deduce a density process, hence one can construct a random time admitting $G_t(u)$ as conditional probability. We are looking for conditional probabilities with a particular form (the idea is linked with the results obtained in Subsection 2.3.5). Let us start with a model in which $P(\tau>t|\mathcal F_t)=e^{-\Lambda_t}$, where $\Lambda_t=\int_0^t\lambda_s\,ds$, and let $N$ be an $\mathbb F$-local martingale such that $0\le N_te^{-\Lambda_t}\le 1$.
The goal is to prove that there exists a $\mathbb G$-martingale $L$ such that, setting $dQ=L\,dP$:

(i) $Q|_{\mathcal F_\infty}=P|_{\mathcal F_\infty}$;
(ii) $Q(\tau>t|\mathcal F_t)=N_te^{-\Lambda_t}$.

The $\mathbb G$-adapted process $L$,
\[
L_t = \ell_t\mathbb 1_{t<\tau} + \ell_t(\tau)\mathbb 1_{\tau\le t},
\]
is a martingale if, for any $u$, $(\ell_t(u),\,t\ge u)$ is a martingale and if $E(L_t|\mathcal F_t)$ is an $\mathbb F$-martingale. Then, (i) is satisfied if
\[
1 = E(L_t|\mathcal F_t) = \ell_te^{-\Lambda_t} + \int_0^t \ell_t(u)\lambda_ue^{-\Lambda_u}\,du
\]
and (ii) implies that $\ell=N$ and $\ell_t(t)=\ell_t$. We are now reduced to finding a family of martingales $(\ell_t(u),\,t\ge u)$ such that
\[
\ell_u(u)=N_u,\qquad 1 = N_te^{-\Lambda_t} + \int_0^t \ell_t(u)\lambda_ue^{-\Lambda_u}\,du.
\]
We restrict our attention to families $\ell$ of the form
\[
\ell_t(u) = X_tY_u,\quad t\ge u,
\]
where $X$ is a martingale such that
\[
X_tY_t=N_t,\qquad 1 = N_te^{-\Lambda_t} + X_t\int_0^t Y_u\lambda_ue^{-\Lambda_u}\,du.
\]


It is easy to show that
\[
Y_t = Y_0 + \int_0^t e^{\Lambda_u}\,d\Big(\frac{1}{X_u}\Big).
\]
In a Brownian filtration, there exists a process $\nu$ such that $dN_t=\nu_tN_t\,dW_t$, and the positive martingale $X$ is of the form $dX_t=x_tX_t\,dW_t$. Then, using the fact that integration by parts implies
\[
d(X_tY_t) = Y_t\,dX_t - e^{\Lambda_t}\frac{1}{X_t}\,dX_t = x_t\big(X_tY_t-e^{\Lambda_t}\big)\,dW_t = dN_t,
\]
we are led to choose
\[
x_t = \frac{\nu_tG_t}{G_t-1}. \qquad ¤
\]

2.5.3 Generalized Cox construction

In this subsection, we generalize the Cox construction in a setting where the threshold Θ is no moreindependent of F∞. Instead, we make the assumption that Θ admits a non trivial density withrespect to F. Clearly, the H-hypothesis is no longer satisfied and we shall explore the density in thiscase.

Let λ be a strictly positive F-adapted process, and Λt =∫ t

0λsds. Let Θ be a strictly positive

random variable whose conditional distribution w.r.t. F admits a density w.r.t. the Lebesguemeasure, i.e., there exists a family of Ft ⊗ B(R+)-measurable functions γt(u) such that P(Θ >θ|Ft) =

∫∞θ

γt(u)du. Let τ = inft > 0 : Λt ≥ Θ. We say that a random time τ constructed insuch a setting is given by a generalized Cox construction.

Proposition 2.5.8 Let $\tau$ be given by a generalized Cox construction. Then $\tau$ admits the density
\[ p_t(\theta) = \lambda_\theta\,\gamma_t(\Lambda_\theta) \ \text{ if } t \ge \theta, \qquad p_t(\theta) = E[\lambda_\theta\,\gamma_\theta(\Lambda_\theta)\,|\,\mathcal{F}_t] \ \text{ if } t < \theta. \qquad (2.5.2) \]

Proof: By definition, and by the fact that $\Lambda$ is strictly increasing and absolutely continuous, we have, for $t \ge \theta$,
\[ P(\tau > \theta|\mathcal{F}_t) = P(\Theta > \Lambda_\theta|\mathcal{F}_t) = \int_{\Lambda_\theta}^\infty \gamma_t(u)\,du = \int_\theta^\infty \gamma_t(\Lambda_u)\,d\Lambda_u = \int_\theta^\infty \gamma_t(\Lambda_u)\,\lambda_u\,du, \]
which implies $p_t(\theta) = \lambda_\theta\gamma_t(\Lambda_\theta)$. The martingale property of $p$ gives the whole density. $\square$
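The density formula (2.5.2) can be sanity-checked numerically in the classical Cox case, where $\Theta$ is independent of $\mathbf{F}$ and $\gamma_t(u) = e^{-u}$: with a constant (hence deterministic) intensity, the formula reduces to $p(\theta) = \lambda e^{-\lambda\theta}$, i.e. $\tau$ is exponential. A minimal Monte Carlo sketch (illustrative, not part of the notes; `lam` is an arbitrary sample parameter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Cox case: Theta ~ Exp(1) independent of F, lambda_t = lam constant,
# so Lambda_t = lam * t and tau = inf{t : lam * t >= Theta} = Theta / lam.
# Formula (2.5.2) then gives p(theta) = lam * exp(-lam * theta): tau ~ Exp(lam).
lam = 2.0
theta = rng.exponential(1.0, size=200_000)   # the threshold Theta
tau = theta / lam

tau_mean = tau.mean()             # should be close to 1/lam
tau_tail = (tau > 0.5).mean()     # should be close to exp(-lam * 0.5)
```

This only exercises the degenerate independent case; the point of the generalized construction is precisely that $\gamma_t(u)$ may depend on $\mathcal{F}_t$.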

Conversely, if we are given a density $p$, and hence the associated process $\Lambda_t = \int_0^t \lambda_s\,ds$ with $\lambda_s = p_s(s)/G_s$, then it is possible to construct a random time $\tau$ by a generalized Cox construction, that is, to find a threshold $\Theta$ such that $\tau$ has $p$ as density. We denote by $\Lambda^{-1}$ the inverse of the strictly increasing process $\Lambda$.

Proposition 2.5.9 Let $\tau$ be a random time with density $(p_t(\theta), t \ge 0)$, $\theta \ge 0$. We let $\Lambda_t = \int_0^t \frac{p_s(s)}{G_s}\,ds$ and $\Theta = \Lambda_\tau$. Then $\Theta$ has a density $\gamma$ with respect to $\mathbf{F}$, given by
\[ \gamma_t(\theta) = E\Big[\, p_{t \vee \Lambda_\theta^{-1}}\big(\Lambda_\theta^{-1}\big)\,\frac{1}{\lambda_{\Lambda_\theta^{-1}}}\,\Big|\,\mathcal{F}_t \Big]. \qquad (2.5.3) \]
Obviously $\tau = \inf\{t > 0 : \Lambda_t \ge \Theta\}$. In particular, if the H-hypothesis holds, then $\Theta$ follows the exponential distribution with parameter $1$ and is independent of $\mathbf{F}$.

Proof: We set $\Theta = \Lambda_\tau$ and compute the density of $\Theta$ with respect to $\mathbf{F}$:
\[ P(\Theta > \theta|\mathcal{F}_t) = P(\Lambda_\tau > \theta|\mathcal{F}_t) = P(\tau > t, \Lambda_\tau > \theta|\mathcal{F}_t) + P(\tau \le t, \Lambda_\tau > \theta|\mathcal{F}_t) \]
\[ = E\Big[-\int_t^\infty 1\!\!1_{\Lambda_u > \theta}\,dG_u\,\Big|\,\mathcal{F}_t\Big] + \int_0^t 1\!\!1_{\Lambda_u > \theta}\,p_t(u)\,du \]
\[ = E\Big[\int_t^\infty 1\!\!1_{\Lambda_u > \theta}\,p_u(u)\,du\,\Big|\,\mathcal{F}_t\Big] + \int_0^t 1\!\!1_{\Lambda_u > \theta}\,p_t(u)\,du \qquad (2.5.4) \]


where the last equality comes from the fact that $\big(G_t + \int_0^t p_u(u)\,du,\ t \ge 0\big)$ is an $\mathbf{F}$-martingale. Note that, since the process $\Lambda$ is continuous and strictly increasing, so is its inverse. Hence

\[ E\Big[\int_\theta^\infty p_{\Lambda_s^{-1} \vee t}\big(\Lambda_s^{-1}\big)\,\frac{1}{\lambda_{\Lambda_s^{-1}}}\,ds\,\Big|\,\mathcal{F}_t\Big] = E\Big[\int_{\Lambda_\theta^{-1}}^\infty p_{s \vee t}(s)\,\frac{1}{\lambda_s}\,d\Lambda_s\,\Big|\,\mathcal{F}_t\Big] \]
\[ = E\Big[\int_0^\infty 1\!\!1_{s > \Lambda_\theta^{-1}}\,p_{s \vee t}(s)\,ds\,\Big|\,\mathcal{F}_t\Big] = E\Big[\int_0^\infty 1\!\!1_{\Lambda_s > \theta}\,p_{s \vee t}(s)\,ds\,\Big|\,\mathcal{F}_t\Big], \]
which equals $P(\Theta > \theta|\mathcal{F}_t)$ by comparison with (2.5.4). So the density of $\Theta$ with respect to $\mathbf{F}$ is given by (2.5.3). In the particular case where the H-hypothesis holds,

\[ p_t(u) = p_u(u) = \lambda_u G_u = \lambda_u e^{-\Lambda_u} \quad \text{for } t \ge u, \]
and (2.5.4) equals
\[ P(\Theta > \theta|\mathcal{F}_t) = E\Big[\int_0^\infty 1\!\!1_{\Lambda_u > \theta}\,p_u(u)\,du\,\Big|\,\mathcal{F}_t\Big] = E\Big[\int_0^\infty 1\!\!1_{\Lambda_u > \theta}\,\lambda_u e^{-\Lambda_u}\,du\,\Big|\,\mathcal{F}_t\Big] = E\Big[\int_0^\infty 1\!\!1_{x > \theta}\,e^{-x}\,dx\,\Big|\,\mathcal{F}_t\Big] = e^{-\theta}, \]
which completes the proof. $\square$
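The conclusion of Proposition 2.5.9 in the H-hypothesis case — that $\Theta = \Lambda_\tau$ is standard exponential — can be illustrated in the simplest toy setting of a trivial filtration and a deterministic hazard (a sketch, not from the notes; the Weibull choice is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy H-hypothesis case: trivial filtration, deterministic hazard.
# tau Weibull with survival G(t) = exp(-t**k), so Lambda(t) = t**k and
# Theta = Lambda_tau = tau**k should be Exp(1).
k = 3.0
tau = rng.weibull(k, size=200_000)
theta = tau ** k

theta_mean = theta.mean()            # E[Exp(1)] = 1
theta_tail = (theta > 1.0).mean()    # P(Exp(1) > 1) = exp(-1)
```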

2.5.4 Carthagese Filtrations

In Carthage, one can find three levels of different civilisations. This has led us to make use of three levels of filtrations, together with a change of measure. We consider the three filtrations
\[ \mathbf{F} \subset \mathbf{G} = \mathbf{F} \vee \mathbf{H} \subset \mathbf{F}^{(\tau)} = \mathbf{F} \vee \sigma(\tau), \]
and we assume that the $\mathbf{F}$-conditional law of $\tau$ is equivalent to the law of $\tau$. This framework will allow us to give new (and simple) proofs of some results on enlargement of filtrations. We refer the reader to [14] for details.

Semi-martingale properties

We denote by $Q$ the probability measure defined on $\mathcal{F}_t^{(\tau)}$ as
\[ dQ|_{\mathcal{F}_t^{(\tau)}} = \frac{1}{p_t(\tau)}\,dP|_{\mathcal{F}_t^{(\tau)}} = L_t\,dP|_{\mathcal{F}_t^{(\tau)}}, \]
where $L_t = 1/p_t(\tau)$ is a $(P, \mathbf{F}^{(\tau)})$-martingale. Hence
\[ dQ|_{\mathcal{G}_t} = \ell_t\,dP|_{\mathcal{G}_t}, \]
where $\ell$ is the $(P, \mathbf{G})$-martingale
\[ \ell_t = E_P(L_t|\mathcal{G}_t) = 1\!\!1_{t < \tau}\,\frac{1}{G_t}\int_t^\infty \nu(du) + 1\!\!1_{\tau \le t}\,\frac{1}{p_t(\tau)}, \]
where $G_t = P(\tau > t|\mathcal{F}_t) = \int_t^\infty p_t(u)\,\nu(du)$, and, obviously,
\[ dP|_{\mathcal{F}_t^{(\tau)}} = p_t(\tau)\,dQ|_{\mathcal{F}_t^{(\tau)}} = \Lambda_t\,dQ|_{\mathcal{F}_t^{(\tau)}}, \]
where $\Lambda = 1/L$ is a $(Q, \mathbf{F}^{(\tau)})$-martingale, and
\[ dP|_{\mathcal{G}_t} = E_Q(p_t(\tau)|\mathcal{G}_t)\,dQ|_{\mathcal{G}_t} = \lambda_t\,dQ|_{\mathcal{G}_t} \]


with
\[ \lambda_t = E_Q(p_t(\tau)|\mathcal{G}_t) = 1\!\!1_{t < \tau}\,\frac{1}{G(t)}\int_t^\infty p_t(u)\,\nu(du) + 1\!\!1_{\tau \le t}\,p_t(\tau) = 1\!\!1_{t < \tau}\,\frac{G_t}{G(t)} + 1\!\!1_{\tau \le t}\,p_t(\tau) = \frac{1}{\ell_t} \]
a $(Q, \mathbf{G})$-martingale, where
\[ G(t) = Q(\tau > t|\mathcal{F}_t) = Q(\tau > t) = P(\tau > t) = \int_t^\infty \nu(du). \]
Assume now that $\nu$ is absolutely continuous with respect to the Lebesgue measure, $\nu(du) = \nu(u)\,du$ (with a slight abuse of notation). We know, from the general study, that $M^P_t := H_t - \int_0^{t \wedge \tau}\lambda^P_s\,ds$ with $\lambda^P_t = \frac{p_t(t)\nu(t)}{G_t}$ is a $(P, \mathbf{G})$-martingale, and that $M^Q_t := H_t - \int_0^{t \wedge \tau}\lambda^Q_s\,ds$ with $\lambda^Q_t = \frac{\nu(t)}{G(t)}$ is a $(Q, \mathbf{G})$-martingale.

The predictable increasing process $A$ such that $H_t - A_{t \wedge \tau}$ is a $(Q, \mathbf{F}^{(\tau)})$-martingale is $H$ itself (indeed, $\tau$ is $\mathbf{F}^{(\tau)}$-predictable).

Let now $m$ be a $(P, \mathbf{F})$-martingale. Then, it is a $(Q, \mathbf{F})$-martingale, hence a $(Q, \mathbf{G})$- and a $(Q, \mathbf{F}^{(\tau)})$-martingale. It follows that $m\Lambda$ is a $(P, \mathbf{F}^{(\tau)})$-martingale and $m\lambda$ a $(P, \mathbf{G})$-martingale. Writing $m = m\Lambda/\Lambda$ and $m = m\lambda/\lambda$, and making use of the integration by parts formula, leads to the decomposition of $m$ as an $\mathbf{F}^{(\tau)}$- and as a $\mathbf{G}$-semi-martingale.

Representation theorem

Using the same methodology, one can check that, if there exists a martingale $\mu$ (possibly multidimensional) enjoying the predictable representation property in $(P, \mathbf{F})$, then the pair $(\widehat{\mu}, M)$ possesses the predictable representation property in $(P, \mathbf{G})$, where $\widehat{\mu}$ is the martingale part of the $\mathbf{G}$-semi-martingale $\mu$.


Chapter 3

Last Passage Times ♠

We now present the study of the law (and the conditional law) of some last passage times for diffusion processes. In this section, $W$ is a standard Brownian motion and $\mathbf{F}$ is its natural filtration. These random times have been studied in Jeanblanc and Rutkowski [42] as theoretical examples of default times, in Imkeller [37] as examples of insider private information and, from a purely mathematical point of view, in Pitman and Yor [66] and Salminen [69].

We now show that, in a diffusion setup, the Doob-Meyer decomposition of the Azema supermartingale may be computed explicitly for some random times $\tau$.

3.1 Last Passage Time of a Transient Diffusion

Proposition 3.1.1 Let $X$ be a transient homogeneous diffusion such that $X_t \to +\infty$ when $t \to \infty$, let $s$ be a scale function such that $s(+\infty) = 0$ (hence $s(x) < 0$ for $x \in \mathbb{R}$), and let $\Lambda_y = \sup\{t : X_t = y\}$ be the last time that $X$ hits $y$. Then,
\[ P_x(\Lambda_y > t\,|\,\mathcal{F}_t) = \frac{s(X_t)}{s(y)} \wedge 1\,. \]

Proof: We follow Pitman and Yor [66] and Yor [80], p. 48, and use the fact that, under the hypotheses of the proposition, one can choose a scale function such that $s(x) < 0$ and $s(+\infty) = 0$ (see Sharpe [70]). Observe that
\[ P_x\big(\Lambda_y > t\,|\,\mathcal{F}_t\big) = P_x\Big(\inf_{u \ge t} X_u < y\,\Big|\,\mathcal{F}_t\Big) = P_x\Big(\sup_{u \ge t}(-s(X_u)) > -s(y)\,\Big|\,\mathcal{F}_t\Big) \]
\[ = P_{X_t}\Big(\sup_{u \ge 0}(-s(X_u)) > -s(y)\Big) = \frac{s(X_t)}{s(y)} \wedge 1, \]
where we have used the Markov property of $X$ and the fact that, if $M$ is a continuous local martingale with $M_0 = 1$, $M_t \ge 0$ and $\lim_{t \to \infty} M_t = 0$, then
\[ \sup_{t \ge 0} M_t \overset{\text{law}}{=} \frac{1}{U}\,, \]
where $U$ has a uniform law on $[0,1]$ (see Exercise 1.2.3). $\square$

Lemma 3.1.2 The $\mathbf{F}^X$-predictable compensator $A$ associated with the random time $\Lambda_y$ is the process defined as $A_t = -\frac{1}{2s(y)}L_t^{s(y)}(Y)$, where $L^{s(y)}(Y)$ is the local time process at level $s(y)$ of the continuous martingale $Y = s(X)$.


Proof: From $x \wedge y = x - (x-y)^+$, Proposition 3.1.1 and Tanaka's formula, it follows that
\[ \frac{s(X_t)}{s(y)} \wedge 1 = M_t + \frac{1}{2s(y)}L_t^{s(y)}(Y) = M_t + \frac{1}{s(y)}\,\ell_t^y(X), \]
where $M$ is a martingale. The required result is then easily obtained. $\square$

In what follows, $p_t^{(m)}(x,y)$ denotes the transition density of $X$ with respect to its speed measure $m$.

We deduce the law of the last passage time:
\[ P_x(\Lambda_y > t) = \Big(\frac{s(x)}{s(y)} \wedge 1\Big) + \frac{1}{s(y)}\,E_x\big(\ell_t^y(X)\big) = \Big(\frac{s(x)}{s(y)} \wedge 1\Big) + \frac{1}{s(y)}\int_0^t du\; p_u^{(m)}(x,y)\,. \]

Hence, for $x < y$,
\[ P_x(\Lambda_y \in dt) = -\frac{dt}{s(y)}\,p_t^{(m)}(x,y) = -\frac{dt}{s(y)}\,\frac{p_t(x,y)}{m(y)} = -\frac{\sigma^2(y)\,s'(y)}{2s(y)}\,p_t(x,y)\,dt\,. \qquad (3.1.1) \]
For $x > y$, we have to add a mass at point $0$ equal to
\[ 1 - \Big(\frac{s(x)}{s(y)} \wedge 1\Big) = 1 - \frac{s(x)}{s(y)} = P_x(T_y = \infty)\,. \]

Example 3.1.3 Last Passage Time for a Transient Bessel Process: For a Bessel process of dimension $\delta > 2$ and index $\nu$ (see [3M], Chapter 6), starting from $0$,
\[ P_0^\delta(\Lambda_a < t) = P_0^\delta\Big(\inf_{u \ge t} R_u > a\Big) = P_0^\delta\Big(\sup_{u \ge t} R_u^{-2\nu} < a^{-2\nu}\Big) \]
\[ = P_0^\delta\Big(\frac{R_t^{-2\nu}}{U} < a^{-2\nu}\Big) = P_0^\delta\big(a^{2\nu} < U R_t^{2\nu}\big) = P_0^\delta\Big(\frac{a^2}{R_1^2\,U^{1/\nu}} < t\Big). \]
Thus, the r.v. $\Lambda_a = \frac{a^2}{R_1^2\,U^{1/\nu}}$ is distributed as $\frac{a^2}{2\gamma(\nu+1)\beta_{\nu,1}} \overset{\text{law}}{=} \frac{a^2}{2\gamma(\nu)}$, where $\gamma(\nu)$ is a gamma variable with parameter $\nu$:
\[ P(\gamma(\nu) \in dt) = 1\!\!1_{t \ge 0}\,\frac{t^{\nu-1}e^{-t}}{\Gamma(\nu)}\,dt\,. \]
Hence,
\[ P_0^\delta(\Lambda_a \in dt) = 1\!\!1_{t \ge 0}\,\frac{1}{t\,\Gamma(\nu)}\Big(\frac{a^2}{2t}\Big)^{\!\nu} e^{-a^2/(2t)}\,dt\,. \qquad (3.1.2) \]

We might also find this result directly from the general formula (3.1.1).
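The identity $\Lambda_a \overset{\text{law}}{=} a^2/(2\gamma(\nu))$ rests on two standard facts: $U^{1/\nu}$ is $\beta_{\nu,1}$-distributed, and the beta-gamma algebra gives $\gamma(\nu+1)\,\beta_{\nu,1} \overset{\text{law}}{=} \gamma(\nu)$. A quick moment check of that product identity (illustrative sketch; `nu` is an arbitrary sample parameter):

```python
import numpy as np

rng = np.random.default_rng(2)

# Beta-gamma algebra behind Example 3.1.3: gamma(nu + 1) * beta(nu, 1) has the
# law of gamma(nu), so that Lambda_a = a^2 / (2 * gamma(nu)) in law.
# We compare the first two moments with those of gamma(nu).
nu, n = 1.5, 400_000
prod = rng.gamma(nu + 1.0, size=n) * rng.beta(nu, 1.0, size=n)

m1 = prod.mean()            # gamma(nu) has mean nu
m2 = (prod ** 2).mean()     # and second moment nu * (nu + 1)
```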

Proposition 3.1.4 For $H$ a positive predictable process,
\[ E_x\big(H_{\Lambda_y}\,|\,\Lambda_y = t\big) = E_x(H_t\,|\,X_t = y) \]
and, for $y > x$,
\[ E_x\big(H_{\Lambda_y}\big) = \int_0^\infty P_x(\Lambda_y \in dt)\,E_x(H_t\,|\,X_t = y)\,. \]
In the case $x > y$,
\[ E_x\big(H_{\Lambda_y}\big) = H_0\Big(1 - \frac{s(x)}{s(y)}\Big) + \int_0^\infty P_x(\Lambda_y \in dt)\,E_x(H_t\,|\,X_t = y)\,. \]


Proof: We have shown in Proposition 3.1.1 that
\[ P_x(\Lambda_y > t\,|\,\mathcal{F}_t) = \frac{s(X_t)}{s(y)} \wedge 1\,. \]

From Ito-Tanaka's formula,
\[ \frac{s(X_t)}{s(y)} \wedge 1 = \frac{s(x)}{s(y)} \wedge 1 + \int_0^t 1\!\!1_{X_u > y}\,\frac{ds(X_u)}{s(y)} + \frac{1}{2s(y)}\,L_t^{s(y)}(s(X))\,. \]
It follows, using Lemma 2.4.1, that
\[ E_x\big(H_{\Lambda_y}\big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty H_u\;d_uL_u^{s(y)}(s(X))\Big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty E_x(H_u\,|\,X_u = y)\;d_uL_u^{s(y)}(s(X))\Big). \]
Therefore, replacing $H_u$ by $H_u g(u)$, we get
\[ E_x\big(H_{\Lambda_y}g(\Lambda_y)\big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty g(u)\,E_x(H_u\,|\,X_u = y)\;d_uL_u^{s(y)}(s(X))\Big). \qquad (3.1.3) \]
Consequently, from (3.1.3), we obtain
\[ P_x(\Lambda_y \in du) = -\frac{1}{2s(y)}\,d_u\,E_x\big(L_u^{s(y)}(s(X))\big), \qquad E_x\big(H_{\Lambda_y}\,|\,\Lambda_y = t\big) = E_x(H_t\,|\,X_t = y)\,. \]
$\square$

Exercise 3.1.5 Let $X$ be a Brownian motion with positive drift $\nu$, and $\Lambda_y^{(\nu)}$ its last passage time at level $y$. Prove that
\[ P_x\big(\Lambda_y^{(\nu)} \in dt\big) = \frac{\nu}{\sqrt{2\pi t}}\,\exp\Big(-\frac{1}{2t}(x - y + \nu t)^2\Big)\,dt\,, \]
and
\[ P_x\big(\Lambda_y^{(\nu)} = 0\big) = \begin{cases} 1 - e^{-2\nu(x-y)} & \text{for } x > y,\\ 0 & \text{for } x < y\,. \end{cases} \]
Prove, using time inversion, that, for $x = 0$,
\[ \Lambda_y^{(\nu)} \overset{\text{law}}{=} \frac{1}{T_\nu^{(y)}}\,, \quad \text{where } T_a^{(b)} = \inf\{t : B_t + bt = a\}\,. \]
See Madan et al. [59].
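Since, for $x = 0 < y$, there is no mass at $0$, the stated density must integrate to one over $(0,\infty)$. A small numerical check of this normalization (illustrative; the grid and parameters `nu`, `y` are arbitrary choices):

```python
import numpy as np

# Last-passage density of Exercise 3.1.5 with x = 0 < y (no atom at 0):
# f(t) = nu / sqrt(2 pi t) * exp(-(nu t - y)^2 / (2 t)) should integrate to 1.
nu, y = 1.0, 1.0
t = np.linspace(1e-6, 60.0, 600_000)
f = nu / np.sqrt(2.0 * np.pi * t) * np.exp(-(nu * t - y) ** 2 / (2.0 * t))

# trapezoidal rule (written out to avoid version-dependent numpy helpers)
total = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
```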

3.2 Last Passage Time Before Hitting a Level

Let $X_t = x + \sigma W_t$, where the initial value $x$ is positive and $\sigma$ is a positive constant. We consider, for $0 < a < x$, the last passage time at the level $a$ before hitting the level $0$, given as
\[ g_{T_0}^a(X) = \sup\{t \le T_0 : X_t = a\}, \quad \text{where } T_0 = T_0(X) = \inf\{t \ge 0 : X_t = 0\}\,. \]


(In a financial setting, $T_0$ can be interpreted as the time of bankruptcy.) Then, setting $\alpha = (a-x)/\sigma$, $T_{-x/\sigma}(W) = \inf\{t : W_t = -x/\sigma\}$ and $d_t^\alpha(W) = \inf\{s \ge t : W_s = \alpha\}$, we have, on the set $\{t < T_{-x/\sigma}(W)\}$,
\[ P_x\big(g_{T_0}^a(X) \le t\,|\,\mathcal{F}_t\big) = P\big(d_t^\alpha(W) > T_{-x/\sigma}(W)\,|\,\mathcal{F}_t\big). \]
It is easy to prove that
\[ P\big(d_t^\alpha(W) > T_{-x/\sigma}(W)\,|\,\mathcal{F}_t\big) = \Psi\big(W_{t \wedge T_{-x/\sigma}(W)},\, \alpha,\, -x/\sigma\big), \]
where, for $a > b$, the function $\Psi(\cdot, a, b) : \mathbb{R} \to \mathbb{R}$ equals
\[ \Psi(y, a, b) = P_y\big(T_a(W) > T_b(W)\big) = \begin{cases} (a-y)/(a-b) & \text{for } b < y < a,\\ 0 & \text{for } a \le y,\\ 1 & \text{for } y \le b\,. \end{cases} \]
(See Proposition ?? for the computation of $\Psi$.) Consequently, on the set $\{T_0(X) > t\}$, we have
\[ P_x\big(g_{T_0}^a(X) \le t\,|\,\mathcal{F}_t\big) = \frac{(\alpha - W_{t \wedge T_0})^+}{a/\sigma} = \frac{(\alpha - W_t)^+}{a/\sigma} = \frac{(a - X_t)^+}{a}\,. \qquad (3.2.1) \]
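The two-sided hitting probability $(a-y)/(a-b)$ is the classical gambler's-ruin formula; a symmetric simple random walk has exactly the Brownian hitting probabilities on integer levels, which gives a cheap Monte Carlo check (illustrative sketch; levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Gambler's ruin check: starting from y0 with b_lvl < y0 < a_lvl, the
# probability of hitting b_lvl before a_lvl is (a_lvl - y0) / (a_lvl - b_lvl).
b_lvl, y0, a_lvl = 0, 3, 10
n_paths, max_steps = 10_000, 2_000
steps = rng.integers(0, 2, size=(n_paths, max_steps), dtype=np.int8) * 2 - 1
paths = y0 + np.cumsum(steps, axis=1, dtype=np.int16)

hit_b = paths <= b_lvl
hit_a = paths >= a_lvl
# argmax returns the index of the first True; non-exiting paths get a sentinel
t_b = np.where(hit_b.any(axis=1), hit_b.argmax(axis=1), max_steps + 1)
t_a = np.where(hit_a.any(axis=1), hit_a.argmax(axis=1), max_steps + 1)

psi_mc = float((t_b < t_a).mean())
psi_th = (a_lvl - y0) / (a_lvl - b_lvl)   # = 0.7
```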

As a consequence, applying Tanaka's formula to (3.2.1), we obtain the following result.

Lemma 3.2.1 Let $X_t = x + \sigma W_t$, where $\sigma > 0$. The $\mathbf{F}$-predictable compensator associated with the random time $g_{T_0}^a(X)$ is the process $A$ defined as $A_t = \frac{\sigma}{2a}\,L^\alpha_{t \wedge T_{-x/\sigma}(W)}(W)$, where $L^\alpha(W)$ is the local time of the Brownian motion $W$ at level $\alpha = (a-x)/\sigma$.

3.3 Last Passage Time Before Maturity

In this subsection, we study the last passage time at level $a$ of a diffusion process $X$ before the fixed horizon (maturity) $T$. We start with the case where $X = W$ is a Brownian motion starting from $0$ and where the level $a$ is null:
\[ g_T = \sup\{t \le T : W_t = 0\}\,. \]

Lemma 3.3.1 The $\mathbf{F}$-predictable compensator associated with the random time $g_T$ equals
\[ A_t = \sqrt{\frac{2}{\pi}}\int_0^{t \wedge T}\frac{dL_s}{\sqrt{T-s}}\,, \]
where $L$ is the local time at level $0$ of the Brownian motion $W$.

Proof: It suffices to give the proof for $T = 1$, and we work with $t < 1$. Let $G$ be a standard Gaussian variable. Then
\[ P\Big(\frac{a^2}{G^2} > 1 - t\Big) = \Phi\Big(\frac{|a|}{\sqrt{1-t}}\Big), \]
where $\Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x \exp\big(-\frac{u^2}{2}\big)\,du$. For $t < 1$, the set $\{g_1 \le t\}$ is equal to $\{d_t > 1\}$. It follows (see [3M]) that
\[ P(g_1 \le t\,|\,\mathcal{F}_t) = \Phi\Big(\frac{|W_t|}{\sqrt{1-t}}\Big). \]

Then, the Ito-Tanaka formula combined with the identity
\[ x\Phi'(x) + \Phi''(x) = 0 \]
leads to
\[ P(g_1 \le t\,|\,\mathcal{F}_t) = \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,d\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) + \frac{1}{2}\int_0^t \frac{ds}{1-s}\,\Phi''\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) \]
\[ = \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\mathrm{sgn}(W_s)}{\sqrt{1-s}}\,dW_s + \int_0^t \frac{dL_s}{\sqrt{1-s}}\,\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) \]
\[ = \int_0^t \Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\mathrm{sgn}(W_s)}{\sqrt{1-s}}\,dW_s + \sqrt{\frac{2}{\pi}}\int_0^t \frac{dL_s}{\sqrt{1-s}}\,, \]
where the last equality holds because $dL_s$ is carried by $\{s : W_s = 0\}$ and $\Phi'(0) = \sqrt{2/\pi}$. It follows that the $\mathbf{F}$-predictable compensator associated with $g_1$ is
\[ A_t = \sqrt{\frac{2}{\pi}}\int_0^t \frac{dL_s}{\sqrt{1-s}}\,, \quad (t < 1)\,. \]
$\square$
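Taking expectations in $P(g_1 \le t\,|\,\mathcal{F}_t) = \Phi(|W_t|/\sqrt{1-t})$, with $W_t \sim N(0,t)$, must reproduce Levy's arcsine law $P(g_1 \le t) = \frac{2}{\pi}\arcsin\sqrt{t}$; note that $\Phi(x) = \operatorname{erf}(x/\sqrt{2})$. An illustrative Monte Carlo check (not part of the notes):

```python
import math
import numpy as np

rng = np.random.default_rng(4)

# E[Phi(|W_t| / sqrt(1-t))] with W_t ~ N(0, t) should match the arcsine law
# P(g_1 <= t) = (2/pi) * arcsin(sqrt(t)); here Phi(x) = erf(x / sqrt(2)).
t = 0.3
w = rng.normal(0.0, math.sqrt(t), size=200_000)
z = np.abs(w) / math.sqrt(2.0 * (1.0 - t))
p_mc = float(np.mean([math.erf(v) for v in z]))
p_th = (2.0 / math.pi) * math.asin(math.sqrt(t))
```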

These results can be extended to the last time before $T$ when the Brownian motion reaches the level $\alpha$, i.e., $g_T^\alpha = \sup\{t \le T : W_t = \alpha\}$, where we set $\sup(\emptyset) = T$. The predictable compensator associated with $g_T^\alpha$ is
\[ A_t = \sqrt{\frac{2}{\pi}}\int_0^{t \wedge T}\frac{dL_s^\alpha}{\sqrt{T-s}}\,, \]
where $L^\alpha$ is the local time of $W$ at level $\alpha$.

We now study the case where $X_t = x + \mu t + \sigma W_t$, with constant coefficients $\mu$ and $\sigma > 0$. Let
\[ g_1^a(X) = \sup\{t \le 1 : X_t = a\} = \sup\{t \le 1 : \nu t + W_t = \alpha\}, \]
where $\nu = \mu/\sigma$ and $\alpha = (a-x)/\sigma$. Setting
\[ V_t = \alpha - \nu t - W_t = (a - X_t)/\sigma\,, \]
we obtain, using standard computations (see [3M]),
\[ P\big(g_1^a(X) \le t\,|\,\mathcal{F}_t\big) = \big(1 - e^{\nu V_t}H(\nu, |V_t|, 1-t)\big)\,1\!\!1_{T_0(V) \le t}\,, \]
where
\[ H(\nu, y, s) = e^{-\nu y}\,N\Big(\frac{\nu s - y}{\sqrt{s}}\Big) + e^{\nu y}\,N\Big(\frac{-\nu s - y}{\sqrt{s}}\Big)\,. \]

Using Ito's lemma, we obtain the decomposition of $1 - e^{\nu V_t}H(\nu, |V_t|, 1-t)$ as a semi-martingale $M_t + C_t$. We note that $C$ increases only on the set $\{t : X_t = a\}$. Indeed, setting $g = g_1^a(X)$, for any predictable process $H$, one has
\[ E(H_g) = E\Big(\int_0^\infty dC_s\,H_s\Big), \]
hence, since $X_g = a$,
\[ 0 = E\big(1\!\!1_{X_g \ne a}\big) = E\Big(\int_0^\infty dC_s\,1\!\!1_{X_s \ne a}\Big). \]
Therefore, $dC_t = \kappa_t\,dL_t^a(X)$ and, since $L^a$ increases only at points such that $X_t = a$ (i.e., $V_t = 0$), one has
\[ \kappa_t = H_x'(\nu, 0, 1-t)\,. \]


The martingale part is given by $dM_t = m_t\,dW_t$, where
\[ m_t = e^{\nu V_t}\big(\nu H(\nu, |V_t|, 1-t) - \mathrm{sgn}(V_t)\,H_x'(\nu, |V_t|, 1-t)\big)\,. \]
Therefore, the predictable compensator associated with $g_1^a(X)$ is
\[ \int_0^t \frac{H_x'(\nu, 0, 1-s)}{e^{\nu V_s}H(\nu, 0, 1-s)}\,dL_s^a\,. \]

Exercise 3.3.2 The aim of this exercise is to compute, for $t < T < 1$, the quantity $E(h(W_T)1\!\!1_{T < g_1}\,|\,\mathcal{F}_t)$, which is the price of the claim $h(S_T)$ with the barrier condition $1\!\!1_{T < g_1}$.

Prove that
\[ E\big(h(W_T)1\!\!1_{T < g_1}\,|\,\mathcal{F}_t\big) = E(h(W_T)\,|\,\mathcal{F}_t) - E\Big(h(W_T)\,\Phi\Big(\frac{|W_T|}{\sqrt{1-T}}\Big)\,\Big|\,\mathcal{F}_t\Big), \]
where
\[ \Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x \exp\Big(-\frac{u^2}{2}\Big)\,du\,. \]
Define $k(w) = h(w)\Phi(|w|/\sqrt{1-T})$. Prove that $E(k(W_T)\,|\,\mathcal{F}_t) = \widehat{k}(t, W_t)$, where
\[ \widehat{k}(t, a) = E\big(k(W_{T-t} + a)\big) = \frac{1}{\sqrt{2\pi(T-t)}}\int_\mathbb{R} h(u)\,\Phi\Big(\frac{|u|}{\sqrt{1-T}}\Big)\exp\Big(-\frac{(u-a)^2}{2(T-t)}\Big)\,du\,. \]

3.4 Absolutely Continuous Compensator

From the preceding computations, the reader might think that the $\mathbf{F}$-predictable compensator is always singular with respect to the Lebesgue measure. This is not the case, as we now show. We are indebted to Michel Emery for this example.

Let $W$ be a Brownian motion and let $\tau = \sup\{t \le 1 : W_1 - 2W_t = 0\}$, that is, the last time before $1$ when the Brownian motion is equal to half of its terminal value at time $1$. Then,
\[ \{\tau \le t\} = \Big\{\inf_{t \le s \le 1} 2W_s \ge W_1 \ge 0\Big\} \cup \Big\{\sup_{t \le s \le 1} 2W_s \le W_1 \le 0\Big\}\,. \]

The quantity
\[ P(\tau \le t, W_1 \ge 0\,|\,\mathcal{F}_t) = P\Big(\inf_{t \le s \le 1} 2W_s \ge W_1 \ge 0\,\Big|\,\mathcal{F}_t\Big) \]
can be evaluated using the equalities
\[ \Big\{\inf_{t \le s \le 1} W_s \ge \frac{W_1}{2} \ge 0\Big\} = \Big\{\inf_{t \le s \le 1}(W_s - W_t) \ge \frac{W_1}{2} - W_t \ge -W_t\Big\} = \Big\{\inf_{0 \le u \le 1-t}\widetilde{W}_u \ge \frac{\widetilde{W}_{1-t}}{2} - \frac{W_t}{2} \ge -W_t\Big\}, \]
where $(\widetilde{W}_u = W_{t+u} - W_t,\ u \ge 0)$ is a Brownian motion independent of $\mathcal{F}_t$. It follows that
\[ P\Big(\inf_{t \le s \le 1} W_s \ge \frac{W_1}{2} \ge 0\,\Big|\,\mathcal{F}_t\Big) = \Psi(1-t, W_t)\,, \]


where, by time reversal of the Brownian motion and scaling,
\[ \Psi(s, x) = P\Big(\inf_{0 \le u \le s} W_u \ge \frac{W_s}{2} - \frac{x}{2} \ge -x\Big) = P\big(2M_s - W_s \le x,\ W_s \le x\big) = P\Big(2M_1 - W_1 \le \frac{x}{\sqrt{s}},\ W_1 \le \frac{x}{\sqrt{s}}\Big). \]
The same kind of computation leads to
\[ P\Big(\sup_{t \le s \le 1} 2W_s \le W_1 \le 0\,\Big|\,\mathcal{F}_t\Big) = \Psi(1-t, -W_t)\,. \]
The quantity $\Psi(s, x)$ can now be computed from the joint law of the maximum and of the process at time $1$; however, we prefer to use Pitman's theorem (see [3M]): let $U$ be a r.v. uniformly distributed on $[-1, +1]$, independent of $R_1 := 2M_1 - W_1$; then
\[ P(2M_1 - W_1 \le y,\ W_1 \le y) = P(R_1 \le y,\ UR_1 \le y) = \frac{1}{2}\int_{-1}^1 P(R_1 \le y,\ uR_1 \le y)\,du\,. \]
For $y > 0$,
\[ \frac{1}{2}\int_{-1}^1 P(R_1 \le y,\ uR_1 \le y)\,du = \frac{1}{2}\int_{-1}^1 P(R_1 \le y)\,du = P(R_1 \le y)\,. \]
For $y < 0$,
\[ \int_{-1}^1 P(R_1 \le y,\ uR_1 \le y)\,du = 0\,. \]
Therefore,
\[ P(\tau \le t\,|\,\mathcal{F}_t) = \Psi(1-t, W_t) + \Psi(1-t, -W_t) = \rho\Big(\frac{|W_t|}{\sqrt{1-t}}\Big), \]
where
\[ \rho(y) = P(R_1 \le y) = \sqrt{\frac{2}{\pi}}\int_0^y x^2 e^{-x^2/2}\,dx\,. \]
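The function $\rho$ is the Maxwell distribution: by Pitman's theorem, $R_1 = 2M_1 - W_1$ is the value at time $1$ of a three-dimensional Bessel process, with density $\sqrt{2/\pi}\,y^2e^{-y^2/2}$ and mean $2\sqrt{2/\pi} \approx 1.596$. A crude path-simulation check (illustrative; the discrete grid biases the maximum slightly downward):

```python
import numpy as np

rng = np.random.default_rng(5)

# R_1 = 2 * max_{s <= 1} W_s - W_1 should follow the Maxwell law
# rho'(y) = sqrt(2/pi) * y^2 * exp(-y^2 / 2), whose mean is 2 * sqrt(2/pi).
n_paths, n_steps = 10_000, 1_000
incs = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
w = np.cumsum(incs, axis=1)
r = 2.0 * np.maximum(w.max(axis=1), 0.0) - w[:, -1]   # the max includes W_0 = 0

r_mean = float(r.mean())
r_mean_th = 2.0 * np.sqrt(2.0 / np.pi)
```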

Then $Z_t = P(\tau > t\,|\,\mathcal{F}_t) = 1 - \rho\big(\frac{|W_t|}{\sqrt{1-t}}\big)$. We can now apply Tanaka's formula to the function $\rho$. Noting that $\rho'(0) = 0$, the contribution to the Doob-Meyer decomposition of $Z$ of the local time of $W$ at level $0$ is $0$. Furthermore, the increasing process $A$ of the Doob-Meyer decomposition of $Z$ is given by
\[ dA_t = \Big(\frac{1}{2}\,\rho''\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\frac{1}{1-t} + \frac{1}{2}\,\rho'\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\frac{|W_t|}{\sqrt{(1-t)^3}}\Big)\,dt = \sqrt{\frac{2}{\pi}}\,\frac{1}{1-t}\,\frac{|W_t|}{\sqrt{1-t}}\,e^{-W_t^2/(2(1-t))}\,dt\,. \]

We note that $A$ may be obtained as the dual predictable projection on the Brownian filtration of the process $A_s^{(W_1)}$, $s \le 1$, where $(A_s^{(x)}, s \le 1)$ is the compensator of $\tau$ under the law of the Brownian bridge $P^{(1)}_{0 \to x}$.

3.5 Time When the Supremum is Reached

Let $W$ be a Brownian motion, $M_t = \sup_{s \le t} W_s$, and let $\tau$ be the time at which the supremum over the interval $[0,1]$ is reached, i.e.,
\[ \tau = \inf\{t \le 1 : W_t = M_1\} = \sup\{t \le 1 : M_t - W_t = 0\}\,. \]


Let us denote by $\zeta$ the positive continuous semimartingale
\[ \zeta_t = \frac{M_t - W_t}{\sqrt{1-t}}\,, \quad t < 1\,. \]
Let $F_t = P(\tau \le t\,|\,\mathcal{F}_t)$. Since $F_t = \Phi(\zeta_t)$, where $\Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x \exp\big(-\frac{u^2}{2}\big)\,du$ (see the Exercise in Chapter 4 of [3M]), using Ito's formula we obtain the canonical decomposition of $F$ as follows:
\[ F_t = \int_0^t \Phi'(\zeta_u)\,d\zeta_u + \frac{1}{2}\int_0^t \Phi''(\zeta_u)\,\frac{du}{1-u} \overset{\text{(i)}}{=} -\int_0^t \Phi'(\zeta_u)\,\frac{dW_u}{\sqrt{1-u}} + \sqrt{\frac{2}{\pi}}\int_0^t \frac{dM_u}{\sqrt{1-u}} \overset{\text{(ii)}}{=} U_t + A_t\,, \]
where $U_t = -\int_0^t \Phi'(\zeta_u)\frac{dW_u}{\sqrt{1-u}}$ is a martingale and $A$ is a predictable increasing process. To obtain (i), we have used that $x\Phi' + \Phi'' = 0$; to obtain (ii), we have used that $\Phi'(0) = \sqrt{2/\pi}$ and also that the process $M$ increases only on the set
\[ \{u \in [0,t] : M_u = W_u\} = \{u \in [0,t] : \zeta_u = 0\}\,. \]
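Taking expectations in $F_t = \Phi(\zeta_t)$ recovers the arcsine law for the argmax time: $P(\tau \le t) = \frac{2}{\pi}\arcsin\sqrt{t}$, so in particular $P(\tau \le 1/2) = 1/2$ and $P(\tau \le 1/4) = 1/3$. An illustrative path-simulation check (not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(6)

# The time of the maximum of W over [0, 1] follows the arcsine law:
# P(tau <= t) = (2/pi) * arcsin(sqrt(t)), so P(tau <= 1/2) = 1/2 and
# P(tau <= 1/4) = (2/pi) * arcsin(1/2) = 1/3.
n_paths, n_steps = 10_000, 1_000
incs = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incs, axis=1)], axis=1)
tau_mc = w.argmax(axis=1) / n_steps   # argmax time on the discrete grid

p_half = float((tau_mc <= 0.5).mean())
p_quarter = float((tau_mc <= 0.25).mean())
```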

3.6 Last Passage Times for Particular Martingales

We now study the Azema supermartingale associated with a random time $L$ which is a last passage time, or the end of a predictable set $\Gamma$, i.e.,
\[ L(\omega) = \sup\{t : (t, \omega) \in \Gamma\}\,. \]

Proposition 3.6.1 Let $L$ be the end of a predictable set. Assume that all the $\mathbf{F}$-martingales are continuous and that $L$ avoids the $\mathbf{F}$-stopping times. Then, there exists a continuous and nonnegative local martingale $N$, with $N_0 = 1$ and $\lim_{t \to \infty} N_t = 0$, such that
\[ Z_t = P(L > t\,|\,\mathcal{F}_t) = \frac{N_t}{\Sigma_t}\,, \]
where $\Sigma_t = \sup_{s \le t} N_s$. The Doob-Meyer decomposition of $Z$ is
\[ Z_t = m_t - A_t \]
and the following relations hold:
\[ N_t = \exp\Big(\int_0^t \frac{dm_s}{Z_s} - \frac{1}{2}\int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big), \qquad \Sigma_t = \exp(A_t), \qquad m_t = 1 + \int_0^t \frac{dN_s}{\Sigma_s} = E(\ln \Sigma_\infty\,|\,\mathcal{F}_t)\,. \]

Proof: As recalled previously, the Doob-Meyer decomposition of $Z$ reads $Z_t = m_t - A_t$ with $m$ and $A$ continuous, and $dA_t$ is carried by $\{t : Z_t = 1\}$. Then, for $t < T_0 := \inf\{t : Z_t = 0\}$,
\[ -\ln Z_t = -\Big(\int_0^t \frac{dm_s}{Z_s} - \frac{1}{2}\int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big) + A_t\,. \]
From Skorokhod's reflection lemma, we deduce that
\[ A_t = \sup_{u \le t}\Big(\int_0^u \frac{dm_s}{Z_s} - \frac{1}{2}\int_0^u \frac{d\langle m\rangle_s}{Z_s^2}\Big)\,. \]
Introducing the local martingale $N$ defined by
\[ N_t = \exp\Big(\int_0^t \frac{dm_s}{Z_s} - \frac{1}{2}\int_0^t \frac{d\langle m\rangle_s}{Z_s^2}\Big), \]
it follows that
\[ Z_t = \frac{N_t}{\Sigma_t} \]
and
\[ \Sigma_t = \sup_{u \le t} N_u = \exp\Big(\sup_{u \le t}\Big(\int_0^u \frac{dm_s}{Z_s} - \frac{1}{2}\int_0^u \frac{d\langle m\rangle_s}{Z_s^2}\Big)\Big) = e^{A_t}\,. \]
$\square$
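Behind this multiplicative decomposition is Doob's maximal identity $\sup N \overset{\text{law}}{=} 1/U$ (Proposition 3.1.1's lemma), so $\ln\Sigma_\infty$ is Exp(1), consistent with $m_0 = E(\ln\Sigma_\infty) = 1$. An illustrative simulation with the concrete choice $N_t = \exp(B_t - t/2)$, for which $\sup_t(B_t - t/2)$ is exactly Exp(1):

```python
import numpy as np

rng = np.random.default_rng(7)

# For N_t = exp(B_t - t/2) (N_0 = 1, N_t -> 0), ln Sigma_infty =
# sup_t (B_t - t/2) is Exp(1), i.e. Doob's maximal identity sup N = 1/U.
n_paths, n_steps, dt = 5_000, 10_000, 0.005     # horizon T = 50
x = np.zeros(n_paths)           # running value of B_t - t/2
ln_sigma = np.zeros(n_paths)    # running supremum
for _ in range(n_steps):
    x += rng.normal(0.0, np.sqrt(dt), size=n_paths) - 0.5 * dt
    np.maximum(ln_sigma, x, out=ln_sigma)

ls_mean = float(ln_sigma.mean())        # E[Exp(1)] = 1
ls_tail = float((ln_sigma > 1.0).mean())   # P(Exp(1) > 1) = exp(-1)
```

The discrete grid slightly underestimates the supremum, so the estimates sit a little below their continuous-time targets.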

The following three exercises are taken from the work of Bentata and Yor [9].

Exercise 3.6.2 Let $M$ be a positive martingale such that $M_0 = 1$ and $\lim_{t \to \infty} M_t = 0$. Let $a \in [0,1[$ and define $G_a = \sup\{t : M_t = a\}$. Prove that
\[ P(G_a \le t\,|\,\mathcal{F}_t) = \Big(1 - \frac{M_t}{a}\Big)^{\!+}\,. \]
Assume that, for every $t > 0$, the law of the r.v. $M_t$ admits a density $(m_t(x), x \ge 0)$ such that $(t,x) \mapsto m_t(x)$ may be chosen continuous on $(0,\infty)^2$, that $d\langle M\rangle_t = \sigma_t^2\,dt$, and that there exists a jointly continuous function $(t,x) \mapsto \theta_t(x) = E(\sigma_t^2\,|\,M_t = x)$ on $(0,\infty)^2$. Prove that
\[ P(G_a \in dt) = \Big(1 - \frac{M_0}{a}\Big)^{\!+}\delta_0(dt) + 1\!\!1_{t > 0}\,\frac{1}{2a}\,\theta_t(a)\,m_t(a)\,dt\,. \]
Hint: use Tanaka's formula to prove that the result is equivalent to $d_t\,E(L_t^a(M)) = \theta_t(a)\,m_t(a)\,dt$, where $L^a(M)$ is the Tanaka-Meyer local time.
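As a concrete (hypothetical) instance, take $M_t = \exp(B_t - t/2)$: then $dM = M\,dB$, so $\sigma_t = M_t$, $\theta_t(x) = x^2$, and $m_t$ is the lognormal density ($\log M_t \sim N(-t/2, t)$). The formula reduces to $P(G_a \in dt) = \frac{a}{2}m_t(a)\,dt$ with no atom at $0$ (since $M_0 = 1 > a$), so its total mass should be $1$ — a check one can run numerically:

```python
import numpy as np

# Exercise 3.6.2 specialized to M_t = exp(B_t - t/2): theta_t(x) = x^2 and
# m_t is the lognormal density of M_t, so P(G_a in dt) = (a/2) * m_t(a) dt.
# With M_0 = 1 > a there is no atom at 0; the density must integrate to 1.
a = 0.5
t = np.linspace(1e-4, 400.0, 1_000_000)
m_t = np.exp(-(np.log(a) + t / 2.0) ** 2 / (2.0 * t)) / (a * np.sqrt(2.0 * np.pi * t))
f = (a / 2.0) * m_t

# trapezoidal rule, written out explicitly
total = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
```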

Exercise 3.6.3 Let $B$ be a Brownian motion and
\[ T_a^{(\nu)} = \inf\{t : B_t + \nu t = a\}, \qquad G_a^{(\nu)} = \sup\{t : B_t + \nu t = a\}\,. \]
Prove that
\[ \big(T_a^{(\nu)}, G_a^{(\nu)}\big) \overset{\text{law}}{=} \Big(\frac{1}{G_\nu^{(a)}}, \frac{1}{T_\nu^{(a)}}\Big)\,. \]
Give the law of the pair $\big(T_a^{(\nu)}, G_a^{(\nu)}\big)$.
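The marginal identity $T_a^{(\nu)} \overset{\text{law}}{=} 1/G_\nu^{(a)}$ can be checked directly on the closed-form densities: $T_a^{(\nu)}$ is inverse Gaussian, $G_\nu^{(a)}$ has the last-passage density of Exercise 3.1.5 (with $x = 0$, drift $a$, level $\nu$), and the change of variable $t \mapsto 1/t$ maps one density onto the other. A numerical comparison on a grid (illustrative; parameters are arbitrary):

```python
import numpy as np

# T_a^{(nu)} has the inverse Gaussian density a/sqrt(2 pi t^3) *
# exp(-(a - nu t)^2 / (2 t)); G_nu^{(a)} has the last-passage density
# a/sqrt(2 pi s) * exp(-(nu - a s)^2 / (2 s)).  The density of 1/G_nu^{(a)}
# at t is g(1/t) / t^2, which should coincide with the inverse Gaussian one.
a_lvl, nu = 1.3, 0.7
t = np.linspace(0.05, 30.0, 5_000)

ig = a_lvl / np.sqrt(2.0 * np.pi * t ** 3) * np.exp(-(a_lvl - nu * t) ** 2 / (2.0 * t))
s = 1.0 / t
g = a_lvl / np.sqrt(2.0 * np.pi * s) * np.exp(-(nu - a_lvl * s) ** 2 / (2.0 * s))
inv_g = g / t ** 2

max_gap = float(np.max(np.abs(ig - inv_g)))
```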

Exercise 3.6.4 Let $X$ be a transient diffusion such that
\[ P_x(T_0 < \infty) = 0, \quad x > 0; \qquad P_x\big(\lim_{t \to \infty} X_t = \infty\big) = 1, \quad x > 0, \]
and denote by $s$ the scale function satisfying $s(0^+) = -\infty$, $s(\infty) = 0$. Prove that, for all $x, t > 0$,
\[ P_x(G_y \in dt) = \frac{-1}{2s(y)}\,p_t^{(m)}(x,y)\,dt\,, \]
where $p^{(m)}$ is the transition density with respect to the speed measure $m$.


Bibliography

[1] J. Amendinger. Initial enlargement of filtrations and additional information in financial mar-kets. PhD thesis, Technischen Universitat Berlin, 1999.

[2] J. Amendinger. Martingale representation theorems for initially enlarged filtrations. StochasticProcesses and their Appl., 89:101–116, 2000.

[3] J. Amendinger, P. Imkeller, and M. Schweizer. Additional logarithmic utility of an insider.Stochastic Processes and their Appl., 75:263–286, 1998.

[4] S. Ankirchner. Information and Semimartingales. PhD thesis, Humboldt Universitat Berlin,2005.

[5] S. Ankirchner, S. Dereich, and P. Imkeller. Enlargement of filtrations and continuous Girsanov-type embeddings. In C. Donati-Martin, M. Emery, A. Rouault, and Ch. Stricker, editors,Seminaire de Probabilites XL, volume 1899 of Lecture Notes in Mathematics. Springer-Verlag,2007.

[6] T. Aven. A theorem for determining the compensator of a counting process. ScandinavianJournal of Statistics, 12:69–72, 1985.

[7] M.T. Barlow. Study of filtration expanded to include an honest time. Z. Wahr. Verw. Gebiete,44:307–323, 1978.

[8] F. Baudoin. Modeling anticipations on financial markets. In Paris-Princeton Lecture on math-ematical Finance 2002, volume 1814 of Lecture Notes in Mathematics, pages 43–92. Springer-Verlag, 2003.

[9] A. Bentata and M. Yor. From Black-Scholes and Dupire formulae to last passage times of localmartingales. Preprint, 2008.

[10] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Stochastic methods in credit risk modelling,valuation and hedging. In M. Frittelli and W. Rungaldier, editors, CIME-EMS Summer Schoolon Stochastic Methods in Finance, Bressanone, volume 1856 of Lecture Notes in Mathematics.Springer, 2004.

[11] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Market completeness under constrained trad-ing. In A.N. Shiryaev, M.R. Grossinho, P.E. Oliveira, and M.L. Esquivel, editors, StochasticFinance, Proceedings of International Lisbon Conference, pages 83–107. Springer, 2005.

[12] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Completeness of a reduced-form credit riskmodel with discontinuous asset prices. Stochastic Models, 22:661–687, 2006.

[13] P. Bremaud and M. Yor. Changes of filtration and of probability measures. Z. Wahr. Verw.Gebiete, 45:269–295, 1978.

[14] G. Callegaro, M Jeanblanc, and B. Zargari. Carthagese Filtrations. Preprint, 2010.


[15] J.M. Corcuera, P. Imkeller, A. Kohatsu-Higa, and D. Nualart. Additional utility of insiderswith imperfect dynamical information. Finance and Stochastics, 8:437–450, 2004.

[16] M.H.A. Davis. Option Pricing in Incomplete Markets. In M.A.H. Dempster and S.R. Pliska,editors, Mathematics of Derivative Securities, Publication of the Newton Institute, pages 216–227. Cambridge University Press, 1997.

[17] C. Dellacherie. Un exemple de la theorie generale des processus. In P-A. Meyer, editor,Seminaire de Probabilites IV, volume 124 of Lecture Notes in Mathematics, pages 60–70.Springer-Verlag, 1970.

[18] C. Dellacherie. Capacites et processus stochastiques, volume 67 of Ergebnisse. Springer, 1972.

[19] C. Dellacherie, B. Maisonneuve, and P-A. Meyer. Probabilites et Potentiel, chapitres XVII-XXIV, Processus de Markov (fin). Complements de calcul stochastique. Hermann, Paris, 1992.

[20] C. Dellacherie and P-A. Meyer. Probabilites et Potentiel, chapitres I-IV. Hermann, Paris, 1975.English translation: Probabilities and Potentiel, chapters I-IV, North-Holland, (1982).

[21] C. Dellacherie and P-A. Meyer. A propos du travail de Yor sur les grossissements des tribus.In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Seminaire de Probabilites XII, volume 649of Lecture Notes in Mathematics, pages 69–78. Springer-Verlag, 1978.

[22] C. Dellacherie and P-A. Meyer. Probabilites et Potentiel, chapitres V-VIII. Hermann, Paris,1980. English translation : Probabilities and Potentiel, chapters V-VIII, North-Holland, (1982).

[23] D.M. Delong. Crossing probabilities for a square root boundary for a Bessel process. Comm.Statist A-Theory Methods, 10:2197–2213, 1981.

[24] G. Di Nunno, T. Meyer-Brandis, B. Øksendal, and F. Proske. Optimal portfolio for an insider in a market driven by Levy processes. Quantitative Finance, 6:83–94, 2006.

[25] P. Ehlers and Ph. Schonbucher. Background filtrations and canonical loss processes for top-downmodels of portfolio credit risk. Finance and Stochastics, 13:79–103, 2009.

[26] R.J. Elliott. Stochastic Calculus and Applications. Springer, Berlin, 1982.

[27] R.J. Elliott and M. Jeanblanc. Incomplete markets and informed agents. Mathematical Methodof Operations Research, 50:475–492, 1998.

[28] R.J. Elliott, M. Jeanblanc, and M. Yor. On models of default risk. Math. Finance, 10:179–196,2000.

[29] M. Emery. Espaces probabilises filtres : de la theorie de Vershik au mouvement Brownien,via des idees de Tsirelson. In Seminaire Bourbaki, 53ieme annee, volume 282, pages 63–83.Asterisque, 2002.

[30] A. Eyraud-Loisel. BSDE with enlarged filtration. option hedging of an insider trader in afinancial market with jumps. Stochastic Processes and their Appl., 115:1745–1763, 2005.

[31] J-P. Florens and D. Fougere. Noncausality in continuous time. Econometrica, 64:1195–1212,1996.

[32] H. Follmer, C-T Wu, and M. Yor. Canonical decomposition of linear transformations of twoindependent Brownian motions motivated by models of insider trading. Stochastic Processesand their Appl., 84:137–164, 1999.

[33] D. Gasbarra, E. Valkeila, and L. Vostrikova. Enlargement of filtration and additional information in pricing models: a Bayesian approach. In Yu. Kabanov, R. Liptser, and J. Stoyanov, editors, From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift, pages 257–286, 2006.


[34] A. Grorud and M. Pontier. Insider trading in a continuous time market model. InternationalJournal of Theoretical and Applied Finance, 1:331–347, 1998.

[35] A. Grorud and M. Pontier. Asymmetrical information and incomplete markets. InternationalJournal of Theoretical and Applied Finance, 4:285–302, 2001.

[36] C. Hillairet. Comparison of insiders’ optimal strategies depending on the type of side-information. Stochastic Processes and their Appl., 115:1603–1627, 2005.

[37] P. Imkeller. Random times at which insiders can have free lunches. Stochastics and StochasticsReports, 74:465–487, 2002.

[38] P. Imkeller. Malliavin's calculus in insider models: additional utility and free lunches. Mathematical Finance, 13:153–169, 2003.

[39] P. Imkeller, M. Pontier, and F. Weisz. Free lunch and arbitrage possibilities in a financialmarket model with an insider. Stochastic Processes and their Appl., 92:103–130, 2001.

[40] J. Jacod. Calcul stochastique et Problemes de martingales, volume 714 of Lecture Notes inMathematics. Springer-Verlag, Berlin, 1979.

[41] J. Jacod. Grossissement initial, hypothese H ′ et theoreme de Girsanov. In Seminaire de CalculStochastique 1982-83, volume 1118 of Lecture Notes in Mathematics. Springer-Verlag, 1987.

[42] M. Jeanblanc and M. Rutkowski. Modeling default risk: an overview. In Mathematical Fi-nance: Theory and Practice, Fudan University, pages 171–269. Modern Mathematics Series,High Education press. Beijing, 2000.

[43] M. Jeanblanc and M. Rutkowski. Modeling default risk: Mathematical tools. Fixed Incomeand Credit risk modeling and Management, New York University, Stern School of Business,Statistics and Operations Research Department, Workshop, 2000.

[44] M. Jeanblanc and S. Song. Default times with given survival probability and their F-martingaledecomposition formula. Working Paper, 2010.

[45] M. Jeanblanc and S. Song. Explicit model of default time with given survival process. WorkingPaper, 2010.

[46] Th. Jeulin. Semi-martingales et grossissement de filtration, volume 833 of Lecture Notes inMathematics. Springer-Verlag, 1980.

[47] Th. Jeulin and M. Yor. Grossissement d’une filtration et semi-martingales : formules explicites.In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Seminaire de Probabilites XII, volume 649of Lecture Notes in Mathematics, pages 78–97. Springer-Verlag, 1978.

[48] Th. Jeulin and M. Yor. Nouveaux resultats sur le grossissement des tribus. Ann. Scient. Ec.Norm. Sup., 11:429–443, 1978.

[49] Th. Jeulin and M. Yor. Inegalite de Hardy, semimartingales et faux-amis. In P-A. Meyer, editor,Seminaire de Probabilites XIII, volume 721 of Lecture Notes in Mathematics, pages 332–359.Springer-Verlag, 1979.

[50] Th. Jeulin and M Yor, editors. Grossissements de filtrations: exemples et applications, volume1118 of Lecture Notes in Mathematics. Springer-Verlag, 1985.

[51] I. Karatzas and I. Pikovsky. Anticipative portfolio optimization. Adv. Appl. Prob., 28:1095–1122, 1996.

[52] A. Kohatsu-Higa. Enlargement of filtrations and models for insider trading. In J. Akahori,S. Ogawa, and S. Watanabe, editors, Stochastic Processes and Applications to MathematicalFinance, pages 151–166. World Scientific, 2004.


[53] A. Kohatsu-Higa. Enlargement of filtration. In Paris-Princeton Lecture on Mathematical Fi-nance, 2004, volume 1919 of Lecture Notes in Mathematics, pages 103–172. Springer-Verlag,2007.

[54] A. Kohatsu-Higa and B. Øksendal. Enlargement of filtration and insider trading. Preprint,2004.

[55] H. Kunita. Some extensions of Ito’s formula. In J. Azema and M. Yor, editors, Seminaire deProbabilites XV, volume 850 of Lecture Notes in Mathematics, pages 118–141. Springer-Verlag,1981.

[56] S. Kusuoka. A remark on default risk models. Adv. Math. Econ., 1:69–82, 1999.

[57] D. Lando. On Cox processes and credit risky securities. Review of Derivatives Research, 2:99–120, 1998.

[58] R.S.R. Liptser and A.N. Shiryaev. Statistics of Random Processes. Springer, 2nd printing 2001,1977.

[59] D. Madan, B. Roynette, and M. Yor. An alternative expression for the Black-Scholes formulain terms of Brownian first and last passage times. Preprint, Universite Paris 6, 2008.

[60] R. Mansuy and M. Yor. Random Times and (Enlargement of) Filtrations in a Brownian Setting,volume 1873 of Lectures Notes in Mathematics. Springer, 2006.

[61] G. Mazziotto and J. Szpirglas. Modèle général de filtrage non linéaire et équations différentielles stochastiques associées. Ann. Inst. H. Poincaré, 15:147–173, 1979.

[62] A. Nikeghbali. An essay on the general theory of stochastic processes. Probability Surveys, 3:345–412, 2006.

[63] A. Nikeghbali and M. Yor. A definition and some properties of pseudo-stopping times. The Annals of Probability, 33:1804–1824, 2005.

[64] A. Nikeghbali and M. Yor. Doob's maximal identity, multiplicative decompositions and enlargements of filtrations. In D. Burkholder, editor, Joseph Doob: A Collection of Mathematical Articles in his Memory, volume 50, pages 791–814. Illinois Journal of Mathematics, 2007.

[65] H. Pham. Stochastic control under progressive enlargement of filtrations and applications to multiple defaults risk management. Preprint, 2009.

[66] J.W. Pitman and M. Yor. Bessel processes and infinitely divisible laws. In D. Williams, editor, Stochastic Integrals, LMS Durham Symposium, volume 851 of Lecture Notes in Mathematics, pages 285–370. Springer, 1980.

[67] Ph. Protter. Stochastic Integration and Differential Equations. Springer, Berlin, second edition, 2005.

[68] L.C.G. Rogers and D. Williams. Diffusions, Markov Processes and Martingales, Vol. 1: Foundations. Cambridge University Press, Cambridge, second edition, 2000.

[69] P. Salminen. On last exit decomposition of linear diffusions. Studia Sci. Math. Hungar., 33:251–262, 1997.

[70] M.J. Sharpe. Some transformations of diffusion by time reversal. The Annals of Probability, 8:1157–1162, 1980.

[71] A.N. Shiryaev. Optimal Stopping Rules. Springer, Berlin, 1978.

[72] S. Song. Grossissement de filtration et problèmes connexes. Thesis, Université Paris IV, 1997.

[73] C. Stricker and M. Yor. Calcul stochastique dépendant d'un paramètre. Z. Wahrsch. Verw. Gebiete, 45(2):109–133, 1978.


[74] Ch. Stricker. Quasi-martingales, martingales locales, semimartingales et filtration naturelle. Z. Wahrsch. Verw. Gebiete, 39:55–63, 1977.

[75] B. Tsirel'son. Within and beyond the reach of Brownian innovations. In Proceedings of the International Congress of Mathematicians, pages 311–320. Documenta Mathematica, 1998.

[76] D. Williams. A non-stopping time with the optional-stopping property. Bull. London Math. Soc., 34:610–612, 2002.

[77] C-T. Wu. Construction of Brownian motions in enlarged filtrations and their role in mathematical models of insider trading. Thèse de doctorat, Humboldt-Universität Berlin, 1999.

[78] Ch. Yoeurp. Grossissement de filtration et théorème de Girsanov généralisé. In Th. Jeulin and M. Yor, editors, Grossissements de filtrations: exemples et applications, volume 1118 of Lecture Notes in Mathematics. Springer, 1985.

[79] M. Yor. Grossissement d'une filtration et semi-martingales : théorèmes généraux. In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649 of Lecture Notes in Mathematics, pages 61–69. Springer-Verlag, 1978.

[80] M. Yor. Some Aspects of Brownian Motion, Part II: Some Recent Martingale Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser, Basel, 1997.


Index

(H) Hypothesis, 16

Brownian
    bridge, 19

Class D, 4

Compensator
    predictable, 9

Decomposition
    Doob-Meyer, 55, 72
    multiplicative, 55, 72

Doob-Meyer decomposition, 4

Dual predictable projection, 69

Filtration
    natural, 3

First time
    when the supremum is reached, 71

Gaussian
    process, 19

Honest time, 50

Immersion, 16

Initial time, 56

Integration by parts
    for Stieltjes, 6

Last passage time, 65
    before hitting a level, 67
    before maturity, 68
    of a transient diffusion, 65

Multiplicative decomposition, 55, 72

Norros lemma, 42, 45

Predictable
    compensator, 9

Process
    adapted, 3
    increasing, 3

Projection
    dual predictable, 8
    optional, 7
    predictable, 8

Pseudo stopping time, 49

Random time
    honest, 50

Running maximum, 50

Semi-martingale, 5

Special semi-martingale, 5
