Proc. Indian Acad. Sci. (Math. Sci.) Vol. 124, No. 3, August 2014, pp. 457–469. © Indian Academy of Sciences
On quadratic variation of martingales
RAJEEVA L KARANDIKAR and B V RAO
Chennai Mathematical Institute, H1, SIPCOT IT Park, Siruseri 603 103, India
E-mail: [email protected]; [email protected]
MS received 10 February 2013; revised 21 May 2013
Abstract. We give a construction of an explicit mapping
$$\Psi : D([0,\infty),\mathbb{R}) \to D([0,\infty),\mathbb{R}),$$
where $D([0,\infty),\mathbb{R})$ denotes the class of real valued r.c.l.l. functions on $[0,\infty)$, such that for a locally square integrable martingale $(M_t)$ with r.c.l.l. paths,
$$\Psi(M_\cdot(\omega)) = A_\cdot(\omega)$$
gives the quadratic variation process (written usually as $[M,M]_t$) of $(M_t)$. We also show that this process $(A_t)$ is the unique increasing process $(B_t)$ such that $M_t^2 - B_t$ is a local martingale, $B_0 = 0$ and
$$P\big((\Delta B)_t = [(\Delta M)_t]^2,\ 0 < t < \infty\big) = 1.$$
Apart from elementary properties of martingales, the only result used is Doob's maximal inequality. This result can be the starting point of the development of the stochastic integral with respect to r.c.l.l. martingales.
Keywords. Doob–Meyer decomposition; martingales; quadratic variation.
Mathematics Subject Classification. 60G44, 60G17, 60H05.
1. Introduction
The Doob–Meyer decomposition of the square of a (square integrable) martingale was a landmark result that led to the development of stochastic integration with respect to a martingale (as outlined by Doob). Meyer [7, 8] showed that there exists a unique natural increasing process $(A_t)$ such that $A_0 = 0$ and $M_t^2 - A_t$ is a martingale. This decomposition plays a central role in the theory of stochastic integration [9, 10]. There have been several attempts, including in recent times, at giving simpler proofs of the Doob–Meyer decomposition for pedagogical reasons (see [1, 2, 11]). In every exposition of stochastic integration for semimartingales (which may have jumps), the Doob–Meyer decomposition remains the first important step. The unusual definition of the natural increasing process due to Meyer led to further study, and the natural increasing process was later characterized as a predictable increasing process.
Let $(M_t)$ be a square integrable martingale with respect to a filtration $(\mathcal{F}_t)$. The unique predictable increasing process $(A_t)$ such that $A_0 = 0$ and $M_t^2 - A_t$ is a martingale has been called the compensator of $M_t^2$ and also the predictable quadratic variation of the martingale $M$, and has subsequently been denoted by $\langle M,M\rangle_t$.
For a simple predictable process $f$ given by
$$f_t(\omega) = \sum_{j=0}^{m-1} a_j(\omega)\,1_{(s_j,s_{j+1}]}(t),$$
where $a_j$ is an $\mathcal{F}_{s_j}$ measurable bounded r.v. for $0 \le j < m$ and $0 = s_0 < s_1 < \cdots < s_m$, $m \ge 1$, the stochastic integral can be defined as
$$X_t = \int_0^t f\,dM = \sum_{j=0}^{m-1} a_j\,(M_{t\wedge s_{j+1}} - M_{t\wedge s_j}).$$
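To make the definition concrete, here is a minimal numerical sketch (ours, not from the paper) of the simple-process integral: with the martingale path sampled on a grid, $X_t$ is just the finite sum above. All function and variable names here are our own.

```python
import numpy as np

def simple_integral(t_grid, M, s, a):
    """X_t = sum_j a_j (M_{t ^ s_{j+1}} - M_{t ^ s_j}) for a simple
    predictable integrand f = sum_j a_j 1_{(s_j, s_{j+1}]}.
    t_grid: increasing sample times; M: path values on t_grid;
    s: partition s_0 < ... < s_m; a: coefficients a_0, ..., a_{m-1}."""
    def path_value(u):
        # value of the sampled path at the last grid point <= u
        return M[np.searchsorted(t_grid, u, side="right") - 1]
    X = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        X[k] = sum(a[j] * (path_value(min(t, s[j + 1])) - path_value(min(t, s[j])))
                   for j in range(len(a)))
    return X
```

For the deterministic path $M_t = t$ with $s = (0,1,2)$ and $a = (1,2)$, this gives $X_2 = 1\cdot 1 + 2\cdot 1 = 3$, as expected from the sum.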
It is easy to check that $(X_t)$ is a martingale and, using that $M_t^2 - \langle M,M\rangle_t$ is a martingale, one checks that $X_t^2 - B_t$ is a martingale, where
$$B_t = \sum_{j=0}^{m-1} a_j^2\,\big(\langle M,M\rangle_{t\wedge s_{j+1}} - \langle M,M\rangle_{t\wedge s_j}\big) = \int_0^t f_s^2\,d\langle M,M\rangle_s.$$
Using Doob's maximal inequality, one can deduce
$$E\left(\sup_{0\le s\le t}\left|\int_0^s f\,dM\right|^2\right) \le 4\,E\left(\int_0^t f_s^2\,d\langle M,M\rangle_s\right). \quad (1.1)$$
The process $\langle M,M\rangle_t$ can be shown to be the limit in probability of
$$A_t^n = \sum_{j=0}^{\infty} E\big((M_{t\wedge t_{j+1}^n} - M_{t\wedge t_j^n})^2 \mid \mathcal{F}_{t_j^n}\big),$$
where $0 = t_0^n < t_1^n < \cdots < t_m^n < \cdots$ is a sequence of partitions of $[0,\infty)$ such that $t_j^n \uparrow \infty$ as $j \to \infty$ for each $n$ and $\delta_n = \sup_j |t_{j+1}^n - t_j^n| \to 0$ as $n \to \infty$.
Meyer [9] introduced $[M,M]_t$ via $\langle M^c,M^c\rangle$ and the jumps $\Delta M_s$ (where $M^c$ is the continuous martingale part of $M$) and showed that it is the limit in probability of
$$B_t^n = \sum_{j=0}^{\infty} (M_{t\wedge t_{j+1}^n} - M_{t\wedge t_j^n})^2.$$
He also showed that for a continuous martingale $M$, $[M,M]_t = \langle M,M\rangle_t$.

For a continuous local martingale $M$, it was shown in Karandikar [3] that for a suitably chosen sequence of random partitions $\{\tau_i^n : i \ge 0\}$, the quadratic variation
$$Q_t^n = \sum_{j=0}^{\infty} (M_{t\wedge\tau_{j+1}^n} - M_{t\wedge\tau_j^n})^2 \quad (1.2)$$
converges almost surely to $\langle M,M\rangle_t$. The random partitions were defined as follows: $\tau_0^n = 0$ and, for each $n$, $\{\tau_i^n : i \ge 1\}$ is defined inductively by
$$\tau_{i+1}^n = \inf\{t \ge \tau_i^n : |M_t - M_{\tau_i^n}| \ge 2^{-n}\}.$$
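The scheme in (1.2) is easy to simulate. The sketch below (our illustration; function names are our own) builds the stopping times $\tau_i^n$ along a discretely sampled Brownian path by scanning for the first deviation of size $2^{-n}$, and checks that $Q_1^n$ comes out close to $\langle B,B\rangle_1 = 1$.

```python
import numpy as np

def random_partition_qv(path, eps):
    """Q^n along the stopping times tau_0 = 0,
    tau_{i+1} = first sample index after tau_i with |path - path[tau_i]| >= eps."""
    taus = [0]
    i = 0
    while True:
        hits = np.nonzero(np.abs(path[i + 1:] - path[i]) >= eps)[0]
        if len(hits) == 0:
            break
        i = i + 1 + hits[0]
        taus.append(i)
    return sum((path[b] - path[a]) ** 2 for a, b in zip(taus, taus[1:]))

# Brownian path on [0, 1] with 100,000 steps
rng = np.random.default_rng(0)
dt = 1e-5
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 100_000))])
qv = random_partition_qv(B, 2.0 ** -5)  # Q^n_1 with n = 5
```

On a discrete grid the increments slightly overshoot the $2^{-n}$ threshold, so `qv` carries a small upward bias that vanishes as the grid is refined.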
The proof relied on the theory of stochastic integration. Subsequently, in Karandikar [4], the formula was derived using only Doob's maximal inequality. Thus this could be the starting point for the development of stochastic calculus for continuous semimartingales without bringing in any results from the general theory of processes (see [5]).
The almost sure convergence of $Q_t^n$ to $\langle M,M\rangle_t$ also gives a pathwise formula for the quadratic variation of a continuous local martingale. It also directly shows that for a continuous local martingale $M$, the process $\langle M,M\rangle_t$ does not depend upon the underlying filtration, nor does it depend upon the underlying probability measure (see [6]). Indeed, in [6], a pathwise formula for $[M,M]_t$ when $M$ is an r.c.l.l. martingale was obtained, but the proof depended upon the theory of stochastic integration.
In this article, we show (once again using only Doob's maximal inequality) that for any square integrable r.c.l.l. martingale $(M_t)$, the processes $Q_t^n$ defined by
$$Q_t^n = \sum_{j=0}^{\infty} (M_{t\wedge\tau_{j+1}^n} - M_{t\wedge\tau_j^n})^2$$
converge almost surely, uniformly on $t \in [0,T]$ for all $T < \infty$, to the quadratic variation $[M,M]_t$, and that $X_t = M_t^2 - [M,M]_t$ is a martingale. Once we have shown the existence of $[M,M]_t$, one can get an estimate analogous to (1.1) for simple predictable processes $f$:
$$E\left(\sup_{0\le s\le t}\left|\int_0^s f\,dM\right|^2\right) \le 4\,E\left(\int_0^t f_s^2\,d[M,M]_s\right) \quad (1.3)$$
and this, instead of (1.1), could be used as a starting point for developing stochastic calculus for locally square integrable martingales, bypassing completely the Doob–Meyer decomposition. This approach is being taken in a book under preparation.
We give an explicit construction of a mapping $\Psi$ on the set of r.c.l.l. functions on $[0,\infty)$ such that for an r.c.l.l. martingale $M$,
$$\Psi(M_\cdot(\omega)) = [M,M]_\cdot(\omega)$$
yields the quadratic variation of $M$.
2. The quadratic variation map
Let $D([0,\infty),\mathbb{R})$ denote the space of r.c.l.l. functions on $[0,\infty)$. For $\varphi \in D([0,\infty),\mathbb{R})$, $\varphi(t-)$ denotes the left limit at $t$ (for $t > 0$), $\varphi(0-) = 0$ and $\Delta\varphi(t) = \varphi(t) - \varphi(t-)$. We will now define the quadratic variation $\Psi(\varphi)$ of a function $\varphi \in D([0,\infty),\mathbb{R})$.
For each $n \ge 1$, let $\{t_i^n(\varphi) : i \ge 1\}$ be defined inductively as follows: $t_0^n(\varphi) = 0$ and, having defined $t_i^n(\varphi)$, let
$$t_{i+1}^n(\varphi) = \inf\big\{t > t_i^n(\varphi) : |\varphi(t) - \varphi(t_i^n(\varphi))| \ge 2^{-n} \text{ or } |\varphi(t-) - \varphi(t_i^n(\varphi))| \ge 2^{-n}\big\}.$$
Note that for each $\varphi \in D([0,\infty),\mathbb{R})$ and for $n \ge 1$, $t_i^n(\varphi) \uparrow \infty$ as $i \to \infty$ (if $\lim_i t_i^n(\varphi) = t < \infty$, then the function cannot have a left limit at $t$). Let
$$\Psi_n(\varphi)(t) = \sum_{i=0}^{\infty} \big(\varphi(t_{i+1}^n(\varphi)\wedge t) - \varphi(t_i^n(\varphi)\wedge t)\big)^2.$$
Since $t_i^n(\varphi)$ increases to infinity, for each $\varphi$ and $t$ fixed, the infinite sum appearing above is essentially a finite sum and hence $\Psi_n(\varphi)$ is itself an r.c.l.l. function. The space $D = D([0,\infty),\mathbb{R})$ is equipped with the topology of uniform convergence on compact subsets (abbreviated as ucc). Let $\tilde{D}$ denote the set of $\varphi \in D$ such that $\Psi_n(\varphi)$ converges in the ucc topology, and let
$$\Psi(\varphi) = \begin{cases} \lim_n \Psi_n(\varphi), & \text{if } \varphi \in \tilde{D},\\ 0, & \text{if } \varphi \notin \tilde{D}. \end{cases}$$
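For a path observed on a finite grid, $\Psi_n$ can be computed directly. The sketch below (our own; all names hypothetical) treats the previous sample as the left limit, so a single threshold test covers both conditions in the definition of $t_{i+1}^n(\varphi)$; for a single-jump path it picks up exactly the square of the jump.

```python
import numpy as np

def psi_n(path, n):
    """Psi_n(phi)(T) for a path sampled on a grid: the t^n_i are the successive
    first indices where the path moves at least 2^-n away from its value at the
    previous t^n_i; the last increment is closed at the final sample T."""
    eps = 2.0 ** (-n)
    ticks = [0]
    i = 0
    while True:
        hits = np.nonzero(np.abs(path[i + 1:] - path[i]) >= eps)[0]
        if len(hits) == 0:
            break
        i = i + 1 + hits[0]
        ticks.append(i)
    ticks.append(len(path) - 1)
    return sum((path[b] - path[a]) ** 2 for a, b in zip(ticks, ticks[1:]))

# a single jump of size 2 at t = 0.5: Psi_n returns (jump)^2 = 4
grid = np.linspace(0.0, 1.0, 101)
jump_path = np.where(grid >= 0.5, 2.0, 0.0)
```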
Here are some basic properties of the quadratic variation map $\Psi$.
Lemma 2.1. For $\varphi \in \tilde{D}$,
(i) $\Psi(\varphi)$ is an increasing function,
(ii) $\Delta\Psi(\varphi)(t) = (\Delta\varphi(t))^2$ for all $t \in (0,\infty)$,
(iii) $\sum_{s\le t}(\Delta\varphi(s))^2 < \infty$ for all $t \in (0,\infty)$,
(iv) with $\Delta\Psi(\varphi)(t) = \Psi(\varphi)(t) - \Psi(\varphi)(t-) \ge 0$, $\Delta\Psi(\varphi)(t) = 0$ implies that $\Delta\varphi(t) = 0$.
On the other hand, let $t > 0$ be a discontinuity point for $\varphi$. Let us note that by the definition of $t_j^n(\varphi)$,
$$|\varphi(u) - \varphi(v)| \le 2\cdot 2^{-n} \quad \forall\, u,v \in [t_j^n(\varphi), t_{j+1}^n(\varphi)). \quad (2.5)$$
Thus, for $n$ such that $2\cdot 2^{-n} < |\Delta\varphi(t)|$, $t$ must be equal to $t_k^n(\varphi)$ for some $k \ge 1$, since (2.5) implies $|\Delta\varphi(v)| \le 2\cdot 2^{-n}$ for any $v \in \bigcup_j (t_j^n(\varphi), t_{j+1}^n(\varphi))$. Let $s_n = t_{k-1}^n(\varphi)$, where $t = t_k^n(\varphi)$, and let $s = \liminf_n s_n$. We will prove that
$$\lim_n \varphi(s_n) = \varphi(t-), \qquad \lim_n \Psi_n(\varphi)(s_n) = \Psi(\varphi)(t-). \quad (2.6)$$
If $s = t$, then $s_n < t$ for all $n \ge 1$ implies $s_n \uparrow t$, and (2.6) follows from the uniform convergence of $\Psi_n(\varphi)$ to $\Psi(\varphi)$ on $[0,t]$.
If $s < t$, using (2.5) it follows that $|\varphi(u) - \varphi(v)| = 0$ for $u, v \in (s,t)$, and hence the function $\varphi(u)$ is constant on the interval $(s,t)$, which implies that $s_n \to s$. Also, $\varphi(s) = \varphi(t-)$ and $\Psi(\varphi)(s) = \Psi(\varphi)(t-)$. So if $\varphi$ is continuous at $s$, once again uniform convergence of $\Psi_n(\varphi)$ to $\Psi(\varphi)$ on $[0,t]$ shows that (2.6) is valid in this case too.
It remains to consider the case $s < t$ and $|\Delta\varphi(s)| = \delta > 0$. In this case, for $n$ such that $2\cdot 2^{-n} < \delta$, $s_n = s$, and uniform convergence of $\Psi_n(\varphi)$ to $\Psi(\varphi)$ on $[0,t]$ shows that (2.6) is true in this case as well.
We have, for large $n$,
$$\Psi_n(\varphi)(t) = \Psi_n(\varphi)(s_n) + (\varphi(s_n) - \varphi(t))^2 \quad (2.7)$$
and hence (2.6) yields
$$\Psi(\varphi)(t) = \Psi(\varphi)(t-) + [\Delta\varphi(t)]^2,$$
completing the proof of (ii).

(iii) follows from (i) and (ii) since, for an increasing function that is non-negative at zero, the sum of jumps up to $t$ is at most equal to its value at $t$:
$$0 \le \sum_{s\le t} (\Delta\varphi(s))^2 = \sum_{s\le t} \Delta\Psi(\varphi)(s) \le \Psi(\varphi)(t) < \infty.$$
Proof. For (i), it can be checked from the definition that
$$\Psi_n(\varphi)(t\wedge s_k) = \Psi_n(\varphi^k)(t) \quad \forall\, t. \quad (2.9)$$
Since $\varphi^k \in \tilde{D}$, it follows that $\Psi_n(\varphi)(t)$ converges uniformly on $[0,s_k]$ for every $k$ and hence, using (2.9), we conclude that $\varphi \in \tilde{D}$ and that (2.8) holds.
For (ii), note that $\varphi$ being a continuous function,
$$\big|\varphi(t_{i+1}^n(\varphi)\wedge t) - \varphi(t_i^n(\varphi)\wedge t)\big| \le 2^{-n}$$
for all $i, n$, and hence we have
$$\Psi_n(\varphi)(t) = \sum_{i=0}^{\infty} \big(\varphi(t_{i+1}^n(\varphi)\wedge t) - \varphi(t_i^n(\varphi)\wedge t)\big)^2 \le 2^{-n} \sum_{i=0}^{\infty} \big|\varphi(t_{i+1}^n(\varphi)\wedge t) - \varphi(t_i^n(\varphi)\wedge t)\big| \le 2^{-n}\,\mathrm{Var}_{[0,T]}(\varphi).$$
This shows that $\Psi(\varphi)(t) = 0$ for $t \in [0,T]$.
3. Quadratic variation of a martingale

The next lemma connects the quadratic variation map $\Psi$ and r.c.l.l. martingales.
Lemma 3.3. Let $(\Omega,\mathcal{F},P)$ be a complete probability space and let $(\mathcal{F}_t)$ be a filtration with $\mathcal{F}_0$ containing all null sets in $\mathcal{F}$.
Let $(N_t,\mathcal{F}_t)$ be an r.c.l.l. martingale such that $E(N_t^2) < \infty$ for all $t > 0$. Suppose there is a constant $C < \infty$ such that, with
$$\sigma = \inf\{t \ge 0 : |N_t| \ge C \text{ or } |N_{t-}| \ge C\},$$
one has
$$N_t = N_{t\wedge\sigma}.$$
Let
$$[N,N]_t(\omega) = [\Psi(N_\cdot(\omega))](t).$$
Then $([N,N]_t)$ is an $(\mathcal{F}_t)$ adapted r.c.l.l. increasing process such that $X_t := N_t^2 - [N,N]_t$ is also a martingale.
Proof. Let $\Psi_n(\varphi)$ and $t_i^n(\varphi)$ be as in the previous section. Let
$$A_t^n(\omega) = \Psi_n(N_\cdot(\omega))(t), \quad \tau_i^n(\omega) = t_i^n(N_\cdot(\omega)), \quad Y_t^n(\omega) = N_t^2(\omega) - N_0^2(\omega) - A_t^n(\omega). \quad (3.10)$$
It is easy to see that $\{\tau_i^n : i \ge 1\}$ are stopping times (for each $n$, by induction on $i$) and that
$$A_t^n = \sum_{i=0}^{\infty} (N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t})^2.$$
Further, for each $n$, $\tau_i^n(\omega)$ increases to $\infty$ as $i \to \infty$. We will first prove that for each $n$, $(Y_t^n)$ is an $(\mathcal{F}_t)$ martingale. Using the identity $b^2 - a^2 - (b-a)^2 = 2a(b-a)$, we can write
$$Y_t^n = N_t^2 - N_0^2 - \sum_{i=0}^{\infty} (N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t})^2 = \sum_{i=0}^{\infty} (N_{\tau_{i+1}^n\wedge t}^2 - N_{\tau_i^n\wedge t}^2) - \sum_{i=0}^{\infty} (N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t})^2 = 2\sum_{i=0}^{\infty} N_{\tau_i^n\wedge t}(N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t}).$$
Let us define
$$X_t^{n,i} = N_{\tau_i^n\wedge t}(N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t}).$$
Then
$$Y_t^n = 2\sum_{i=0}^{\infty} X_t^{n,i}. \quad (3.11)$$
Noting that for $s < \sigma$, $|N_s| \le C$ and for $s \ge \sigma$, $N_s = N_\sigma$, it follows that
$$|N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t}| > 0 \text{ implies that } |N_{\tau_i^n\wedge t}| \le C.$$
Thus, writing $\zeta_C(x) = \max\{\min\{x,C\},-C\}$ ($x$ truncated at $C$), we have
$$X_t^{n,i} = \zeta_C(N_{\tau_i^n\wedge t})(N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t}) \quad (3.12)$$
and hence $X_t^{n,i}$ is a martingale. Using the fact that $X_t^{n,i}$ is $\mathcal{F}_{\tau_{i+1}^n}$ measurable and that $E(X_t^{n,i}\mid\mathcal{F}_{\tau_i^n}) = 0$, it follows that for $i \ne j$,
$$E[X_t^{n,i}\,X_t^{n,j}] = 0. \quad (3.13)$$
Also, using (3.12) and the fact that $N$ is a martingale, we have
$$E(X_t^{n,i})^2 \le C^2\,E\{N_{\tau_{i+1}^n\wedge t} - N_{\tau_i^n\wedge t}\}^2 = C^2\,E\{N_{\tau_{i+1}^n\wedge t}^2 - N_{\tau_i^n\wedge t}^2\}. \quad (3.14)$$
Using (3.13) and (3.14), it follows that for $s \le r$,
$$E\left(\sum_{i=s}^{r} X_t^{n,i}\right)^2 \le C^2\,E\{N_{\tau_{r+1}^n\wedge t}^2 - N_{\tau_s^n\wedge t}^2\}. \quad (3.15)$$
Since $\tau_i^n$ increases to $\infty$ as $i$ tends to infinity, $E(N_{\tau_s^n\wedge t}^2)$ and $E(N_{\tau_{r+1}^n\wedge t}^2)$ both tend to $E[N_t^2]$ as $r, s$ tend to $\infty$, and hence $\sum_{i=1}^{r} X_t^{n,i}$ converges in $L^2(P)$. In view of (3.11), one has
$$2\sum_{i=1}^{r} X_t^{n,i} \to Y_t^n \text{ in } L^2(P) \text{ as } r \to \infty$$
and hence $(Y_t^n)$ is an $(\mathcal{F}_t)$-martingale for each $n \ge 1$.

For $n \ge 1$, define a process $(N^n)$ by
$$N_t^n = N_{\tau_i^n} \quad \text{if } \tau_i^n \le t < \tau_{i+1}^n.$$
Observe that by the choice of $\{\tau_i^n : i \ge 1\}$, one has
$$|N_t - N_t^n| \le 2^{-n} \text{ for all } t. \quad (3.16)$$
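The approximation (3.16) is easy to check numerically on a sampled path. The sketch below (ours; names are hypothetical) holds the value at the last oscillation time until the next one and verifies the uniform $2^{-n}$ bound.

```python
import numpy as np

def hold_at_oscillation_times(path, n):
    """N^n: freeze the path at its value at the last time it moved by >= 2^-n
    (the sampled analogue of N^n_t = N_{tau^n_i} for tau^n_i <= t < tau^n_{i+1})."""
    eps = 2.0 ** (-n)
    out = np.empty_like(path)
    anchor = 0
    for k in range(len(path)):
        if abs(path[k] - path[anchor]) >= eps:
            anchor = k          # a new tau^n_i: the approximation jumps here
        out[k] = path[anchor]
    return out

rng = np.random.default_rng(1)
path = np.cumsum(rng.normal(0.0, 0.01, 10_000))
approx = hold_at_oscillation_times(path, 4)
# sup_t |N_t - N^n_t| <= 2^-n, as in (3.16)
```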
For now, let us fix $n$. For each $\omega$, let us define
$$E(\omega) = \{\tau_i^n(\omega) : i \ge 1\} \cup \{\tau_i^{n+1}(\omega) : i \ge 1\}. \quad (3.17)$$
It may be noted that for $\omega$ such that $t \mapsto N_t(\omega)$ is continuous, each $\tau_j^n(\omega)$ is necessarily equal to $\tau_i^{n+1}(\omega)$ for some $i$, but this need not be the case when $t \mapsto N_t(\omega)$ has jumps. Let $\theta_0(\omega) = 0$ and, for $j \ge 0$, let
$$\theta_{j+1}(\omega) = \inf\{s > \theta_j(\omega) : s \in E(\omega)\}.$$
It can be verified that
$$\{\theta_i : i \ge 1\} = \{\tau_i^n : i \ge 1\} \cup \{\tau_i^{n+1} : i \ge 1\}. \quad (3.18)$$
To see that each $\theta_i$ is a stopping time, fix $i \ge 1$, $t < \infty$. Let $A_{kj} = \{(\tau_k^n\wedge t) = (\tau_j^{n+1}\wedge t)\}$. Since $\tau_k^n$, $\tau_j^{n+1}$ are stopping times, $A_{kj} \in \mathcal{F}_t$ for all $k, j$. It is not difficult to see that
$$\{\theta_i \le t\} = \bigcup_{k=0}^{i}\big(\{\tau_{i-k}^n \le t\} \cap B_k\big),$$
where $B_0 = \Omega$ and, for $1 \le k \le i$,
$$B_k = \dots$$
Using (3.18) and the fact that $N_t^n = N_{\tau_j^n}$ for $\tau_j^n \le t < \tau_{j+1}^n$, one can write $Y^n$ and $Y^{n+1}$ as
$$Y_t^n = \sum_{j=0}^{\infty} 2N_{\theta_j\wedge t}^n\{N_{\theta_{j+1}\wedge t} - N_{\theta_j\wedge t}\},$$
$$Y_t^{n+1} = \sum_{j=0}^{\infty} 2N_{\theta_j\wedge t}^{n+1}\{N_{\theta_{j+1}\wedge t} - N_{\theta_j\wedge t}\}.$$
Hence
$$Y_t^{n+1} - Y_t^n = 2\sum_{j=0}^{\infty} Z_t^{n,j}, \quad (3.19)$$
where
$$Z_t^{n,j} = (N_{\theta_j\wedge t}^{n+1} - N_{\theta_j\wedge t}^n)(N_{\theta_{j+1}\wedge t} - N_{\theta_j\wedge t}).$$
Also, using (3.16), one has
$$|N_t^{n+1} - N_t^n| \le |N_t^{n+1} - N_t| + |N_t - N_t^n| \le 2^{-(n+1)} + 2^{-n} \le 2\cdot 2^{-n} \quad (3.20)$$
and hence (using that $(N_s)$ is a martingale), one has
$$E[(Z_t^{n,j})^2] \le 4\cdot 2^{-2n}\,E[(N_{\theta_{j+1}\wedge t} - N_{\theta_j\wedge t})^2] = 4\cdot 2^{-2n}\,E[(N_{\theta_{j+1}\wedge t})^2 - (N_{\theta_j\wedge t})^2]. \quad (3.21)$$
It is easy to see that $E(Z_t^{n,j}\mid\mathcal{F}_{\theta_j}) = 0$ and $Z_t^{n,j}$ is $\mathcal{F}_{\theta_{j+1}}$ measurable. It then follows that for $i \ne j$,
$$E[Z_t^{n,j}\,Z_t^{n,i}] = 0$$
and hence (using (3.21))
$$E(Y_t^{n+1} - Y_t^n)^2 = 4E\left(\sum_{j=0}^{\infty} Z_t^{n,j}\right)^2 = 4E\left(\sum_{j=0}^{\infty} (Z_t^{n,j})^2\right) \le 16\cdot 2^{-2n}\sum_{j=0}^{\infty} E[(N_{\theta_{j+1}\wedge t})^2 - (N_{\theta_j\wedge t})^2] \le 16\cdot 2^{-2n}\,E[(N_t)^2]. \quad (3.22)$$
Thus, recalling that $Y_t^{n+1}$, $Y_t^n$ are martingales, it follows that $Y_t^{n+1} - Y_t^n$ is also a martingale, and thus, invoking Doob's maximal inequality, one has (using (3.22))
$$E[\sup_{s\le T}|Y_s^{n+1} - Y_s^n|^2] \le 4E(Y_T^{n+1} - Y_T^n)^2 \le 64\cdot 2^{-2n}\,E[N_T^2]. \quad (3.23)$$
Thus, for each $n \ge 1$ (writing $\|X\|_2$ for the $L^2(P)$ norm: $\|X\|_2 = \sqrt{E[X^2]}$),
$$\big\|\sup_{s\le T}|Y_s^{n+1} - Y_s^n|\big\|_2 \le 8\cdot 2^{-n}\,\|N_T\|_2. \quad (3.24)$$
It follows that
$$\xi = \sum_{n=1}^{\infty} \sup_{s\le T}|Y_s^{n+1} - Y_s^n| < \infty \quad \text{a.s.}$$
(as $\|\xi\|_2 < \infty$ by (3.24)). Hence $(Y_s^n)$ converges uniformly in $s \in [0,T]$ for every $T$, a.s., to an r.c.l.l. process, say $(Y_s)$. As a result, $(A_s^n)$ also converges uniformly in $s \in [0,T]$ for every $T < \infty$, a.s., to, say, $(A_s)$, with $Y_t = N_t^2 - N_0^2 - A_t$. Further, (3.24) also implies that for each $s$, convergence of $Y_s^n$ to $Y_s$ is also in $L^2$ and thus $(Y_t)$ is a martingale.
Since $A_s^n$ converges uniformly in $s \in [0,T]$ for all $T < \infty$ a.s., it follows that
$$P(\omega : N_\cdot(\omega) \in \tilde{D}) = 1$$
and $A_t = [N,N]_t$. We have already proven that $Y_t = N_t^2 - N_0^2 - [N,N]_t$ is a martingale. This completes the proof.
We are now in a position to prove an analogue of the Doob–Meyer decomposition theorem for the square of an r.c.l.l. locally square integrable martingale.
Theorem 3.4. Let $(\Omega,\mathcal{F},P)$ be a complete probability space and let $(\mathcal{F}_t)$ be a filtration with $\mathcal{F}_0$ containing all null sets in $\mathcal{F}$. Let $(M_t,\mathcal{F}_t)$ be an r.c.l.l. locally square integrable martingale, i.e., there exist stopping times $\sigma_n$ increasing to $\infty$ such that for each $n$, $M_t^n = M_{t\wedge\sigma_n}$ is a martingale with $E[(M_{t\wedge\sigma_n})^2] < \infty$ for all $t, n$.
Let
$$[M,M]_t(\omega) = [\Psi(M_\cdot(\omega))](t).$$
Then
(i) $([M,M]_t)$ is an $(\mathcal{F}_t)$ adapted r.c.l.l. increasing process such that $X_t = M_t^2 - [M,M]_t$ is also a local martingale.
(ii) $P(\Delta[M,M]_t = (\Delta M_t)^2\ \forall\, t > 0) = 1$.
(iii) If $(B_t)$ is an r.c.l.l. adapted increasing process such that $B_0 = 0$,
$$P(\Delta B_t = (\Delta M_t)^2\ \forall\, t > 0) = 1$$
and $V_t = M_t^2 - B_t$ is a local martingale, then $P(B_t = [M,M]_t\ \forall\, t) = 1$.
(iv) If $M$ is a martingale and $E(M_t^2) < \infty$ for all $t$, then $E([M,M]_t) < \infty$ for all $t$ and $X_t = M_t^2 - [M,M]_t$ is a martingale.
(v) If $E([M,M]_t) < \infty$ for all $t$, then $E(M_t^2) < \infty$ for all $t$, $(M_t)$ is a martingale and $X_t = M_t^2 - [M,M]_t$ is a martingale.

Proof. For $k \ge 1$, let $\theta_k$ be the stopping time defined by
$$\theta_k = \inf\{t > 0 : |M_t| \ge k\} \wedge \sigma_k \wedge k.$$
Then $M_t^k = M_{t\wedge\theta_k}$ is a martingale satisfying the conditions of Lemma 3.3 with $C = k$ and $\sigma = \theta_k$, and hence $X_t^k = (M_t^k)^2 - [M^k,M^k]_t$ is a martingale, where $[M^k,M^k]_t = \Psi(M^k_\cdot(\omega))(t)$. Also,
$$P(\{\omega : M^k_\cdot(\omega) \in \tilde{D}\}) = 1, \quad k \ge 1. \quad (3.25)$$
Since $M_t^k = M_{t\wedge\theta_k}$, it follows from Lemma 2.2 that
$$P(\{\omega : M_\cdot(\omega) \in \tilde{D}\}) = 1 \quad (3.26)$$
and
$$P(\{\omega : [M^k,M^k]_t(\omega) = [M,M]_{t\wedge\theta_k(\omega)}(\omega)\}) = 1.$$
It follows that $X_{t\wedge\theta_k} = X_t^k$ a.s., and since $X^k$ is a martingale for all $k$, it follows that $X_t$ is a local martingale. This completes the proof of part (i). Part (ii) follows from Lemma 2.1.

For (iii), note that
$$U_t = [M,M]_t - B_t$$
is a continuous process and, recalling that $X_t = M_t^2 - [M,M]_t$ and $V_t = M_t^2 - B_t$ are local martingales, it follows that $U_t = V_t - X_t$ is also a local martingale with $U_0 = 0$. By part (i) above, $W_t = U_t^2 - [U,U]_t$ is a local martingale. On the other hand, $U_t$, being a difference of two increasing functions, has bounded variation, $\mathrm{Var}_T(U) < \infty$. Since $U$ is continuous, by Lemma 2.2,
$$[U,U]_t = 0 \quad \forall\, t.$$
Hence $W_t = U_t^2$ is a local martingale. Now if $\theta_k$ are stopping times increasing to $\infty$ such that $W_{t\wedge\theta_k}$ is a martingale for $k \ge 1$, then we have
$$E[W_{t\wedge\theta_k}] = E[U_{t\wedge\theta_k}^2] = E[U_0^2] = 0$$
and hence $U_{t\wedge\theta_k} = 0$ a.s. for each $k$. This yields $U_t = 0$ a.s. for every $t$. This completes the proof of (iii).
For (iv), we have proven in (i) that $X_t = M_t^2 - [M,M]_t$ is a local martingale. Let $\theta_k$ be stopping times increasing to $\infty$ such that $X_t^k = X_{t\wedge\theta_k}$ are martingales. Hence, $E[X_t^k] = 0$, or
$$E([M,M]_{t\wedge\theta_k}) = E(M_{t\wedge\theta_k}^2) - E(M_0^2). \quad (3.27)$$
Invoking Doob's maximal inequality, we have
$$E\Big[\sup_{s\le t}|M_s^2|\Big] \le 4E[M_t^2] < \infty. \quad (3.28)$$
Now, $M_{t\wedge\theta_k}^2$ are dominated by the integrable function $\sup_{s\le t}|M_s^2|$ and hence $M_{t\wedge\theta_k}^2$ converges to $M_t^2$ in $L^1$. By the monotone convergence theorem,
$$E([M,M]_{t\wedge\theta_k}) \uparrow E([M,M]_t). \quad (3.29)$$
Thus from (3.27),
$$E([M,M]_t) = E[M_t^2] - E[M_0^2]$$
and the convergence in (3.29) is in $L^1$. It follows that $X_t^k$ converges to $X_t$ in $L^1(P)$ and hence $(X_t)$ is a martingale.
For (v), let $\theta_k$ be as in part (iv). One has, using (3.27) and that $[M,M]_t$ is increasing,
$$E[M_{t\wedge\theta_k}^2] = E[M_0^2] + E([M,M]_{t\wedge\theta_k}) \le E[M_0^2] + E([M,M]_t).$$
Now using Fatou's lemma, one gets
$$E[M_t^2] \le E[M_0^2] + E([M,M]_t) < \infty.$$
Now we can invoke part (iv) to complete the proof.
It may be noted that we have not assumed that the underlying filtration is right-continuous.
Remark. We conclude with an observation that we cannot get a mapping
$$\Phi : D([0,\infty),\mathbb{R}) \to D([0,\infty),\mathbb{R})$$
such that for any martingale $M$,
$$\Phi(M_\cdot(\omega)) = \langle M,M\rangle_\cdot(\omega). \quad (3.30)$$
Let $\Omega = \mathbb{R}$ and $\mathcal{F} = \mathcal{B}(\mathbb{R})$, the Borel $\sigma$-field on $\mathbb{R}$. Let $X(\omega) = \omega$ and let $P_\sigma$ denote the probability measure on $(\Omega,\mathcal{F})$ such that $X$ has Gaussian distribution with mean $0$ and variance $\sigma^2$. Let $M$ be defined by
$$M_t(\omega) = X(\omega)\,1_{[1,\infty)}(t).$$
It is easy to see that $M$ is a martingale with respect to its canonical filtration and
$$[M,M]_t(\omega) = X^2(\omega)\,1_{[1,\infty)}(t)$$
while
$$\langle M,M\rangle_t(\omega) = \sigma^2\,1_{[1,\infty)}(t).$$
Thus if a mapping $\Phi$ satisfying (3.30) exists, then
$$\Phi(M_\cdot(\omega))_t = \sigma^2\,1_{[1,\infty)}(t) \quad \text{a.s. } P_\sigma. \quad (3.31)$$
On the other hand, it is well known and easy to see that for any $0 < \sigma_1, \sigma_2 < \infty$, the probability measures $P_{\sigma_1}$ and $P_{\sigma_2}$ are mutually absolutely continuous, contradicting (3.31).

Thus for a martingale $M$, while the $\omega$-path of the quadratic variation process $[M,M]_\cdot(\omega)$ can be computed using only the path $M_\cdot(\omega)$ of the martingale, the path of the predictable quadratic variation $\langle M,M\rangle_\cdot(\omega)$ cannot be computed or expressed in a similar fashion.
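The dichotomy in the remark can be seen numerically (our illustration, not from the paper): for the single-jump path $M_t(\omega) = X(\omega)1_{[1,\infty)}(t)$, the sum of squared increments of the sampled path recovers $X(\omega)^2$ with no reference to $\sigma$, whereas $\langle M,M\rangle_t = \sigma^2 1_{[1,\infty)}(t)$ requires knowing the law of $X$.

```python
import numpy as np

grid = np.linspace(0.0, 2.0, 201)
for sigma in (0.5, 2.0):
    X = sigma * 1.7                 # one fixed draw standing in for a N(0, sigma^2) sample
    path = X * (grid >= 1.0)        # M_t(omega) = X(omega) 1_{[1, inf)}(t)
    qv = float(np.sum(np.diff(path) ** 2))   # pathwise quadratic variation on [0, 2]
    assert np.isclose(qv, X ** 2)   # [M, M]_2 = X^2: computable from the path alone
# <M, M>_2 = sigma^2, by contrast, depends on the law of X, not on one path
```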
References
[1] Bass R F, The Doob–Meyer decomposition revisited, Canad. Math. Bull. 39 (1996) 138–150
[2] Beiglböck M, Schachermayer W and Veliyev B, A short proof of the Doob–Meyer theorem, Stoch. Process. Appl. 122 (2012) 1204–1209
[3] Karandikar R L, Pathwise solution of stochastic differential equations, Sankhyā A 43 (1981) 121–132
[4] Karandikar R L, On quadratic variation process of a continuous martingale, Ill. J. Math. 27 (1983) 178–181
[5] Karandikar R L, Stochastic integration w.r.t. continuous local martingales, Stoch. Process. Appl. 15 (1983) 203–209
[6] Karandikar R L, On pathwise stochastic integration, Stoch. Process. Appl. 57 (1995) 11–18
[7] Meyer P A, A decomposition theorem for supermartingales, Ill. J. Math. 6 (1962) 193–205
[8] Meyer P A, Decomposition of supermartingales: the uniqueness theorem, Ill. J. Math. 7 (1963) 1–17
[9] Meyer P A, Intégrales stochastiques, I–IV, Séminaire de Probabilités I, Lecture Notes in Math. 39 (1967) (Springer: Berlin) pp. 72–162
[10] Meyer P A, Un cours sur les intégrales stochastiques, Séminaire de Probabilités X, Lecture Notes in Math. 511 (Springer: Berlin) pp. 245–400
[11] Rao K M, On decomposition theorems of Meyer, Math. Scand. 24 (1969) 66–78