Stochastic Calculus: A Practical Introduction


Probability and Stochastics Series


Linear Stochastic Control Systems, Guanrong Chen, Goong Chen, and Shih-Hsu Hsu

Advances in Queueing: Theory, Methods, and Open Problems, Jewgeni H. Dshalalow

Stochastic Calculus: A Practical Introduction, Richard Durrett

Chaos Expansions, Multiple Wiener-Itô Integrals and Applications, Christian Houdré and Victor Perez-Abreu

White Noise Distribution Theory, Hui-Hsiung Kuo

Topics in Contemporary Probability, J. Laurie Snell

Probability and Stochastics Series


A Practical Introduction

Richard Durrett

CRC Press

Boca Raton New York London Tokyo



Preface


This book is the reincarnation of my first book, Brownian Motion and Martingales in Analysis, which was published by Wadsworth in 1984. For more than a decade I have used Chapters 1, 2, 8, and 9 of that book to give "reading courses" to graduate students who have completed the first year graduate probability course and were interested in learning more about processes that move continuously in space and time. Taking the advice from biology that "form follows function," I have taken the material on stochastic integration, stochastic differential equations, Brownian motion and its relation to partial differential equations to be the core of this book (Chapters 1-5). To this I have added other practically important topics: one dimensional diffusions, semigroups and generators, Harris chains, and weak convergence. I have struggled with this material for almost twenty years. I now think that I understand most of it, so to help you master it in less time, I have tried to explain it as simply and clearly as I can.

My students' motivations for learning this material have been diverse: some have wanted to apply ideas from probability to analysis or differential geometry, others have gone on to do research on diffusion processes or stochastic partial differential equations, some have been interested in applications of these ideas to finance, or to problems in operations research. My motivation for writing this book, like that for Probability: Theory and Examples, was to simplify my life as a teacher by bringing together in one place useful material that is scattered in a variety of sources.

An old joke says that "if you copy from one book that is plagiarism, but if you copy from ten books that is scholarship." From that viewpoint this is a scholarly book. Its main contributors for the various subjects are (a) stochastic integration and differential equations: Chung and Williams (1990), Ikeda and Watanabe (1981), Karatzas and Shreve (1991), Protter (1990), Revuz and Yor (1991), Rogers and Williams (1987), Stroock and Varadhan (1979); (b) partial differential equations: Folland (1976), Friedman (1964), (1975), Port and Stone (1978), Chung and Zhao (1995); (c) one dimensional diffusions: Karlin and Taylor (1981); (d) semigroups and generators: Dynkin (1965), Revuz and Yor (1991); (e) weak convergence: Billingsley (1968), Ethier and Kurtz (1986), Stroock and Varadhan (1979). If you bought all those books you would spend more than $1000, but for a fraction of that cost you can have this book, the intellectual equivalent of the Ginsu knife.

Tim Pletscher: Acquiring Editor
Dawn Boyd: Cover Designer
Susie Carlisle: Marketing Manager
Arline Massey: Associate Marketing Manager
Paul Gottelver: Assistant Managing Editor, EDP
Kevin Luong: Pre-press
Sheri Schwndm: Manufacturing Assistant

Library of Congress Cataloging-in-Publication Data

Catalog record is available from the Library of Congress

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

CRC Press, Inc.'s consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press for such copying.

Direct all inquiries to CRC Press, Inc., 2000 Corporate Blvd., N.W., Boca Raton, Florida 33431.

© 1996 by CRC Press, Inc.

No claim to original U.S. Government works
International Standard Book Number 0-8493-8071-5
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper


Shutting off the laugh-track and turning on the violins, the road from this book's first publication in 1984 to its rebirth in 1996 has been a long and winding one. In the second half of the 80's I accumulated an embarrassingly long list of typos from the first edition. Some time at the beginning of the 90's I talked to the editor who brought my first three books into the world, John Kimmel, about preparing a second edition. However, after the work was done, the second edition was personally killed by Bill Roberts, the President of Brooks/Cole. At the end of 1992 I entered into a contract with Wayne Yuhasz at CRC Press to produce this book. In the first few months of 1993, June Meyerman typed most of the book into TeX. In the Fall Semester of 1993 I taught a course from this material and began to organize it into the current form. By the summer of 1994 I thought I was almost done. At this point I had the good (and bad) fortune of having Nora Guertler, a student from Lyon, visit for two months. When she was through making an average of six corrections per page, it was clear that the book was far from finished.

During the 1994-95 academic year most of my time was devoted to preparing the second edition of my first year graduate textbook Probability: Theory and Examples. After that experience my brain cells could not bear to work on another book for another several months, but toward the end of 1995 they decided "it is now or never." The delightful Cornell tradition of a long winter break, which for me stretched from early December to late January, provided just enough time to finally finish the book.

I am grateful to my students who have read various versions of this book and also made numerous comments: Don Allers, Hassan Allouba, Robert Battig, Marty Hill, Min-jeong Kang, Susan Lee, Gang Ma, and Nikhil Shah. Earlier in the process, before I started writing, Heike Dengler, David Lando and I spent a semester reading Protter (1990) and Jacod and Shiryaev (1987), an enterprise which contributed greatly to my education.

The ancient history of the revision process has unfortunately been lost. At the time of the proposed second edition, I transferred a number of lists of typos to my copy of the book, but I have no record of the people who supplied the lists. I remember getting a number of corrections from Mike Brennan and Ruth Williams, and it is impossible to forget the story of Robin Pemantle, who took Brownian Motion, Martingales and Analysis as his only math book for a year-long trek through the South Seas and later showed me his fully annotated copy. However, I must apologize to others whose contributions were recorded but whose names were lost. Flame me at rtd1@cornell.edu and I'll have something to say about you in the next edition.

About the Author


Rick Durrett received his Ph.D. in Operations Research from Stanford University in 1976. He taught in the Mathematics Department at UCLA for nine years before becoming a Professor of Mathematics at Cornell University. He was a Sloan Fellow 1981-83, Guggenheim Fellow 1988-89, and spoke at the International Congress of Math in Kyoto 1990.


Durrett is the author of a graduate textbook, Probability: Theory and Examples, and an undergraduate one, The Essentials of Probability. He has written almost 100 papers, many with co-authors, and seen 19 students complete their Ph.D.'s under his direction. His recent research focuses on the applications of stochastic spatial models to various problems in biology.


Rick Durrett


Stochastic Calculus: A Practical Introduction

1. Brownian Motion
1.1 Definition and Construction 1
1.2 Markov Property, Blumenthal's 0-1 Law 7
1.3 Stopping Times, Strong Markov Property 18
1.4 First Formulas 26

2. Stochastic Integration
2.1 Integrands: Predictable Processes 33
2.2 Integrators: Continuous Local Martingales 37
2.3 Variance and Covariance Processes 42
2.4 Integration w.r.t. Bounded Martingales 52
2.5 The Kunita-Watanabe Inequality 59
2.6 Integration w.r.t. Local Martingales 63
2.7 Change of Variables, Itô's Formula 68
2.8 Integration w.r.t. Semimartingales 70
2.9 Associative Law 74
2.10 Functions of Several Semimartingales 76
Chapter Summary 79
2.11 Meyer-Tanaka Formula, Local Time 82
2.12 Girsanov's Formula 90

3. Brownian Motion, II
3.1 Recurrence and Transience 95
3.2 Occupation Times 100
3.3 Exit Times 105
3.4 Change of Time, Lévy's Theorem 111
3.5 Burkholder Davis Gundy Inequalities 116
3.6 Martingales Adapted to Brownian Filtrations 119

4. Partial Differential Equations
A. Parabolic Equations
4.1 The Heat Equation 126
4.2 The Inhomogeneous Equation 130
4.3 The Feynman-Kac Formula 137
B. Elliptic Equations
4.4 The Dirichlet Problem 143
4.5 Poisson's Equation 151
4.6 The Schrödinger Equation 156
C. Applications to Brownian Motion
4.7 Exit Distributions for the Ball 164
4.8 Occupation Times for the Ball 167
4.9 Laplace Transforms, Arcsine Law 170

5. Stochastic Differential Equations
5.1 Examples 177
5.2 Itô's Approach 183
5.3 Extension 190
5.4 Weak Solutions 196
5.5 Change of Measure 202
5.6 Change of Time 207

6. One Dimensional Diffusions
6.1 Construction 211
6.2 Feller's Test 214
6.3 Recurrence and Transience 219
6.4 Green's Functions 222
6.5 Boundary Behavior 229
6.6 Applications to Higher Dimensions 234

7. Diffusions as Markov Processes
7.1 Semigroups and Generators 245
7.2 Examples 250
7.3 Transition Probabilities 255
7.4 Harris Chains 258
7.5 Convergence Theorems 268

8. Weak Convergence
8.1 In Metric Spaces 271
8.2 Prokhorov's Theorems 276
8.3 The Space C 282
8.4 Skorohod's Existence Theorem for SDE 285
8.5 Donsker's Theorem 287
8.6 The Space D 293
8.7 Convergence to Diffusions 296
8.8 Examples 305

Solutions to Exercises 311
References 335
Index 339


1 Brownian Motion

1.1. Definition and Construction

In this section we will define Brownian motion and construct it. Like the birth of a child, the wait is somewhat painful, but afterward we will be able to have fun with our new arrival. We begin by reducing the definition of a $d$-dimensional Brownian motion with a general starting point to that of a one dimensional Brownian motion starting at 0. The next two statements, (1.1) and (1.2), are part of the definition.

(1.1) Translation invariance. $\{B_t - B_0, t \ge 0\}$ is independent of $B_0$ and has the same distribution as a Brownian motion with $B_0 = 0$.

(1.2) Independence of coordinates. If $B_t = (B_t^1, \ldots, B_t^d)$ is a $d$-dimensional Brownian motion starting at 0, then $B_t^1, \ldots, B_t^d$ are independent one dimensional Brownian motions starting at 0.

So we can define a one dimensional Brownian motion starting at 0 to be a process $B_t$, $t \ge 0$, taking values in $\mathbf{R}$ that has the following properties:

(a) If $t_0 < t_1 < \cdots < t_n$ then $B(t_0), B(t_1) - B(t_0), \ldots, B(t_n) - B(t_{n-1})$ are independent.

(b) If $s, t \ge 0$ then

$$P(B(s+t) - B(s) \in A) = \int_A (2\pi t)^{-1/2} \exp(-x^2/2t)\, dx$$

(c) With probability one, $B_0 = 0$ and $t \to B_t$ is continuous.

(a) says that $B_t$ has independent increments. (b) says that the increment $B(s+t) - B(s)$ has a normal distribution with mean 0 and variance $t$. (c) is self-explanatory. The reader should note that above we have sometimes


written $B_s$ and sometimes written $B(s)$, a practice we will continue in what follows.

An immediate consequence of the definition that will be useful many times is:

(1.3) Scaling relation. If $B_0 = 0$ then for any $t > 0$,

$$\{B_{st}, s \ge 0\} \stackrel{d}{=} \{t^{1/2} B_s, s \ge 0\}$$

To be precise, the two families of random variables have the same finite dimensional distributions, i.e., if $s_1 < \cdots < s_n$ then

$$(B_{s_1 t}, \ldots, B_{s_n t}) \stackrel{d}{=} (t^{1/2} B_{s_1}, \ldots, t^{1/2} B_{s_n})$$

In view of (1.2) it suffices to prove this for one dimensional Brownian motion. To check this when $n = 1$, we note that $t^{1/2}$ times a normal with mean 0 and variance $s$ is a normal with mean 0 and variance $st$. The result for $n > 1$ follows from independent increments.

A second equivalent definition of one dimensional Brownian motion starting from $B_0 = 0$, which we will occasionally find useful, is that $B_t$, $t \ge 0$, is a real valued process satisfying:

(a$'$) $B_t$ is a Gaussian process (i.e., all its finite dimensional distributions are multivariate normal),

(b$'$) $EB_s = 0$, $EB_s B_t = s \wedge t = \min(s, t)$,

(c$'$) With probability one, $t \to B_t$ is continuous.

It is easy to see that (a) and (b) imply (a$'$). To get (b$'$) from (a) and (b) suppose $s < t$ and write

$$EB_s B_t = E(B_s(B_t - B_s)) + EB_s^2 = EB_s \, E(B_t - B_s) + s = s$$

The converse is even easier. (a$'$) and (b$'$) specify the finite dimensional distributions of $B_t$, which by the last calculation must agree with the ones defined in (a) and (b).

The first question that must be addressed in any treatment of Brownian motion is, "Is there a process with these properties?" The answer is "Yes," of course, or this book would not exist. For pedagogical reasons we will pursue an approach that leads to a dead end and then retreat a little to rectify the difficulty. In view of our definition we can restrict our attention to a one dimensional Brownian motion starting from a fixed $x \in \mathbf{R}$. We could take $x = 0$ but will not for reasons that become clear in the remark after (1.5).

For each $0 < t_1 < \cdots < t_n$ define a measure on $\mathbf{R}^n$ by

$$\mu_{t_1,\ldots,t_n}(A_1 \times \cdots \times A_n) = \int_{A_1 \times \cdots \times A_n} dx_1 \cdots dx_n \prod_{m=1}^{n} p_{t_m - t_{m-1}}(x_{m-1}, x_m)$$

where $x_0 = x$, $t_0 = 0$,

$$p_t(x, y) = (2\pi t)^{-1/2} \exp(-(y - x)^2/2t)$$

and the $A_i$ are Borel subsets of $\mathbf{R}$. From the formula above it is easy to see that for fixed $x$ the family $\mu$ is a consistent set of finite dimensional distributions (f.d.d.'s), that is, if $\{s_1, \ldots, s_{n-1}\} \subset \{t_1, \ldots, t_n\}$ and $t_j \notin \{s_1, \ldots, s_{n-1}\}$ then

$$\mu_{s_1,\ldots,s_{n-1}}(A_1 \times \cdots \times A_{n-1}) = \mu_{t_1,\ldots,t_n}(A_1 \times \cdots \times A_{j-1} \times \mathbf{R} \times A_j \times \cdots \times A_{n-1})$$

This is clear when $j = n$. To check the equality when $1 \le j < n$, it is enough to show that

$$\int p_{t_j - t_{j-1}}(x, y)\, p_{t_{j+1} - t_j}(y, z)\, dy = p_{t_{j+1} - t_{j-1}}(x, z)$$

By translation invariance, we can without loss of generality assume $x = 0$, but all this says in that case is that the sum of independent normals with mean 0 and variances $t_j - t_{j-1}$ and $t_{j+1} - t_j$ has a normal distribution with mean 0 and variance $t_{j+1} - t_{j-1}$. With the consistency of f.d.d.'s verified we get our first construction of Brownian motion:

(1.4) Theorem. Let $\Omega_o = \{$functions $\omega : [0,\infty) \to \mathbf{R}\}$ and $\mathcal{F}_o$ be the $\sigma$-field generated by the finite dimensional sets $\{\omega : \omega(t_i) \in A_i$ for $1 \le i \le n\}$ where the $A_i$ are Borel sets. For each $x \in \mathbf{R}$, there is a unique probability measure $\nu_x$ on $(\Omega_o, \mathcal{F}_o)$ so that $\nu_x\{\omega : \omega(0) = x\} = 1$ and when $0 < t_1 < \cdots < t_n$

$$(*) \qquad \nu_x\{\omega : \omega(t_i) \in A_i\} = \mu_{t_1,\ldots,t_n}(A_1 \times \cdots \times A_n)$$

This follows from a generalization of Kolmogorov's extension theorem. We will not bother with the details since at this point we are at the dead end referred to above. If $C = \{\omega : t \to \omega(t)$ is continuous$\}$ then $C \notin \mathcal{F}_o$, that is, $C$ is not a measurable set. The easiest way of proving $C \notin \mathcal{F}_o$ is to do

Exercise 1.1. $A \in \mathcal{F}_o$ if and only if there is a sequence of times $t_1, t_2, \ldots \in [0,\infty)$ and a $B \in \mathcal{R}^{\{1,2,\ldots\}}$ (the infinite product $\sigma$-field $\mathcal{R} \times \mathcal{R} \times \cdots$) so that $A = \{\omega : (\omega(t_1), \omega(t_2), \ldots) \in B\}$. In words, all events in $\mathcal{F}_o$ depend on only countably many coordinates.
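Properties (a) and (b) of the definition translate directly into a simulation recipe: build a path from independent mean zero normal increments. The sketch below is my own illustration, not part of the text; the helper name `simulate_bm` and all parameter choices are assumptions.

```python
import math
import random

def simulate_bm(n_steps, dt, rng):
    """Simulate a one dimensional Brownian path at times k*dt, k = 0..n_steps.

    Property (a): increments over disjoint intervals are independent draws.
    Property (b): an increment over an interval of length dt is N(0, dt).
    Property (c): the path starts at 0.
    """
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

rng = random.Random(42)
# Empirical mean and variance of B(1) over many simulated paths;
# (b) predicts mean 0 and variance 1, and Monte Carlo comes close.
samples = [simulate_bm(16, 1 / 16, rng)[-1] for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
```

Summing 16 increments of variance 1/16 gives a draw that is exactly N(0, 1) in distribution, so the empirical mean and variance here only reflect Monte Carlo error.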


The above problem is easy to solve. Let $\mathbf{Q}_2 = \{m2^{-n} : m, n \ge 0\}$ be the dyadic rationals. If $\Omega_q = \{\omega : \mathbf{Q}_2 \to \mathbf{R}\}$ and $\mathcal{F}_q$ is the $\sigma$-field generated by the finite dimensional sets, then enumerating the rationals $q_1, q_2, \ldots$ and applying Kolmogorov's extension theorem shows that we can construct a probability $\nu_x$ on $(\Omega_q, \mathcal{F}_q)$ so that $\nu_x\{\omega : \omega(0) = x\} = 1$ and $(*)$ in (1.4) holds when the $t_i \in \mathbf{Q}_2$. To extend the definition from $\mathbf{Q}_2$ to $[0,\infty)$ we will show:

(1.5) Theorem. Let $T < \infty$ and $x \in \mathbf{R}$. $\nu_x$ assigns probability one to paths $\omega : \mathbf{Q}_2 \to \mathbf{R}$ that are uniformly continuous on $\mathbf{Q}_2 \cap [0,T]$.

Remark. It will take quite a bit of work to prove (1.5). Before taking on that task, we will attend to the last measure theoretic detail: we tidy things up by moving our probability measures to $(C, \mathcal{C})$, where $C = \{$continuous $\omega : [0,\infty) \to \mathbf{R}\}$ and $\mathcal{C}$ is the $\sigma$-field generated by the coordinate maps $t \to \omega(t)$. To do this, we observe that the map $\psi$ that takes a uniformly continuous point in $\Omega_q$ to its extension in $C$ is measurable, and we set

$$P_x = \nu_x \circ \psi^{-1}$$

Our construction guarantees that $B_t(\omega) = \omega_t$ has the right finite dimensional distributions for $t \in \mathbf{Q}_2$. Continuity of paths and a simple limiting argument shows that this is true when $t \in [0,\infty)$.

As mentioned earlier, the generalization to $d > 1$ is straightforward: the coordinates are independent. In this generality $C = \{$continuous $\omega : [0,\infty) \to \mathbf{R}^d\}$ and $\mathcal{C}$ is the $\sigma$-field generated by the coordinate maps $t \to \omega(t)$. The reader should note that the result of our construction is one set of random variables $B_t(\omega) = \omega(t)$, and a family of probability measures $P_x$, $x \in \mathbf{R}^d$, so that under $P_x$, $B_t$ is a Brownian motion with $P_x(B_0 = x) = 1$. It is enough to construct the Brownian motion starting from an initial point $x$, since if we want a Brownian motion starting from an initial measure $\mu$ (i.e., have $P_\mu(B_0 \in A) = \mu(A)$) we simply set

$$P_\mu(A) = \int \mu(dx)\, P_x(A)$$
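The identity that made the finite dimensional distributions consistent, $\int p_s(x,y)\,p_t(y,z)\,dy = p_{s+t}(x,z)$, can also be checked numerically with a Riemann sum. This is an illustrative sketch; the helper names and the choice of grid are mine.

```python
import math

def p(t, x, y):
    # Brownian transition density p_t(x, y) = (2*pi*t)**(-1/2) * exp(-(y-x)**2 / 2t)
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def convolve(s, t, x, z, lo=-20.0, hi=20.0, n=4000):
    # Midpoint-rule approximation of the integral over y of p_s(x, y) * p_t(y, z).
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        y = lo + (i + 0.5) * h
        total += p(s, x, y) * p(t, y, z)
    return total * h

lhs = convolve(0.5, 1.5, 0.0, 1.0)  # integral of p_{0.5}(0, y) p_{1.5}(y, 1) dy
rhs = p(2.0, 0.0, 1.0)              # p_{s+t}(x, z): an N(0, 2) density at 1
```

The two values agree up to quadrature error, which is exactly the statement that independent normal increments with variances $s$ and $t$ sum to one with variance $s + t$.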

Proof of (1.5) By (1.1) and (1.3), we can without loss of generality suppose $B_0 = 0$ and prove the result for $T = 1$. In this case, part (b) of the definition and the scaling relation (1.3) imply

$$E_0(|B_t - B_s|^4) = E_0|B_{t-s}|^4 = C(t - s)^2$$

where $C = E|B_1|^4 < \infty$. From the last observation we get the desired uniform continuity by using a result due to Kolmogorov. In this proof, we do not use the independent increments property of Brownian motion; the only thing we use is the moment condition. In Section 2.11 we will need this result when the $X_t$ take values in a space $S$ with metric $\rho$; so, we will go ahead and prove it in that generality.
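For a standard normal $Z$ we have $EZ^4 = 3$, so the constant above is $C = 3$ and the moment bound reads $E_0|B_t - B_s|^4 = 3(t-s)^2$. The simulation below is my own hedged sketch of this fact (the function name and sample sizes are my choices, not the book's).

```python
import math
import random

def fourth_moment(dt, n_samples, rng):
    # The increment B_{s+dt} - B_s is N(0, dt), so E|increment|^4 = 3 * dt**2.
    return sum(rng.gauss(0.0, math.sqrt(dt)) ** 4 for _ in range(n_samples)) / n_samples

rng = random.Random(7)
est = fourth_moment(0.25, 200000, rng)
exact = 3 * 0.25 ** 2  # C * (t - s)**2 with C = E|B_1|^4 = 3
```

With 200,000 samples the estimate lands within a fraction of a percent of the exact value 0.1875.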

(1.6) Theorem. Suppose that $E\rho(X_s, X_t)^\beta \le K|t - s|^{1+\alpha}$ where $\alpha, \beta > 0$. If $\gamma < \alpha/\beta$ then with probability one there is a constant $C(\omega)$ so that

$$\rho(X_q, X_r) \le C|q - r|^\gamma \quad \text{for all } q, r \in \mathbf{Q}_2 \cap [0,1]$$

Proof Let $\gamma < \alpha/\beta$, $\eta > 0$, $I_n = \{(i,j) : 0 \le i < j \le 2^n, 0 < j - i \le 2^{n\eta}\}$ and $G_n = \{\rho(X(i2^{-n}), X(j2^{-n})) \le ((j-i)2^{-n})^\gamma$ for all $(i,j) \in I_n\}$. Since $\lambda^\beta P(|Y| > \lambda) \le E|Y|^\beta$ we have

$$P(G_n^c) \le \sum_{(i,j) \in I_n} ((j-i)2^{-n})^{-\gamma\beta}\, E\rho(X(i2^{-n}), X(j2^{-n}))^\beta \le \sum_{(i,j) \in I_n} K((j-i)2^{-n})^{1+\alpha-\gamma\beta}$$

by our assumption. Now the number of $(i,j) \in I_n$ is $\le 2^n 2^{n\eta}$, so

$$P(G_n^c) \le K \cdot 2^n 2^{n\eta} \cdot (2^{n\eta}2^{-n})^{1+\alpha-\gamma\beta} = K2^{-n\lambda}$$

where $\lambda = (1-\eta)(1+\alpha-\gamma\beta) - (1+\eta)$. Since $\gamma < \alpha/\beta$, we can pick $\eta$ small enough so that $\lambda > 0$. To complete the proof now we will show

(1.7) Lemma. Let $A = 3 \cdot 2^{(1-\eta)\gamma}/(1 - 2^{-\gamma})$. On $H_N = \cap_{n \ge N} G_n$ we have

$$\rho(X_q, X_r) \le A|q - r|^\gamma \quad \text{for } q, r \in \mathbf{Q}_2 \cap [0,1] \text{ with } |q - r| \le 2^{-(1-\eta)N}$$

(1.6) follows easily from (1.7):

$$P(H_N^c) \le \sum_{n=N}^{\infty} P(G_n^c) \le K\sum_{n \ge N} 2^{-n\lambda} = \frac{K2^{-N\lambda}}{1 - 2^{-\lambda}}$$

This shows $\rho(X_q, X_r) \le A|q - r|^\gamma$ for $|q - r| \le \delta(\omega)$ and implies that we have $\rho(X_q, X_r) \le C(\omega)|q - r|^\gamma$ for $q, r \in [0,1]$.

Proof of (1.7) Let $q, r \in \mathbf{Q}_2 \cap [0,1]$ with $0 < r - q < 2^{-(1-\eta)N}$. Pick $m \ge N$ so that

$$2^{-(m+1)(1-\eta)} \le r - q < 2^{-m(1-\eta)}$$


and write

$$r = j2^{-m} + 2^{-r(1)} + \cdots + 2^{-r(k)}$$
$$q = i2^{-m} - 2^{-q(1)} - \cdots - 2^{-q(l)}$$

where $m < r(1) < \cdots < r(k)$ and $m < q(1) < \cdots < q(l)$. Now $0 \le r - q < 2^{-m(1-\eta)}$, so $j - i \le 2^{m\eta}$ and it follows that on $H_N$

(a) $\rho(X(i2^{-m}), X(j2^{-m})) \le ((j-i)2^{-m})^\gamma \le (2^{m\eta}2^{-m})^\gamma$

On $H_N$ it follows from the triangle inequality that

(b) $\rho(X_q, X(i2^{-m})) \le \sum_{n=1}^{l} (2^{-q(n)})^\gamma \le \sum_{n > m} 2^{-n\gamma} \le C_\gamma 2^{-m\gamma}$

where $C_\gamma = 1/(1 - 2^{-\gamma}) > 1$. Repeating the last computation shows

(c) $\rho(X_r, X(j2^{-m})) \le C_\gamma 2^{-m\gamma}$

Combining (a)-(c) gives

$$\rho(X_q, X_r) \le 3C_\gamma 2^{-m\gamma(1-\eta)} \le 3C_\gamma 2^{(1-\eta)\gamma}|r - q|^\gamma$$

since $2^{-m(1-\eta)} \le 2^{1-\eta}|r - q|$. This completes the proof of (1.7) and hence of (1.6) and (1.5). $\square$

The scaling relation (1.3) implies

$$E|B_t - B_s|^{2m} = C_m|t - s|^m \quad \text{where } C_m = E|B_1|^{2m}$$

So using (1.6) with $\beta = 2m$ and $\alpha = m - 1$ and then letting $m \to \infty$ gives:

(1.8) Theorem. Brownian paths are Hölder continuous with exponent $\gamma$ for any $\gamma < 1/2$.

It is easy to show:

(1.9) Theorem. With probability one, Brownian paths are not Lipschitz continuous (and hence not differentiable) at any point.

Proof Let $\Lambda_n = \{\omega :$ there is an $s \in [0,1]$ so that $|B_t - B_s| \le C|t - s|$ when $|t - s| \le 3/n\}$. For $1 \le k \le n - 2$ let

$$Y_{k,n} = \max\left\{\left|B\left(\frac{k+j}{n}\right) - B\left(\frac{k+j-1}{n}\right)\right| : j = 0, 1, 2\right\}$$
$$G_n = \{\text{at least one } Y_{k,n} \text{ is } \le 5C/n\}$$

We claim that $\Lambda_n \subset G_n$. To prove this we consider $s = 1$, which we claim is the worst possibility. In this case we want to conclude that $Y_{n-2,n} \le 5C/n$. For this we observe that $j = 0$ is the worst case, but even then

$$|B((n-3)/n) - B((n-2)/n)| \le |B((n-3)/n) - B_1| + |B_1 - B((n-2)/n)| \le C\frac{3}{n} + C\frac{2}{n}$$

Using $\Lambda_n \subset G_n$ and the scaling relation (1.3) gives

$$P(\Lambda_n) \le P(G_n) \le nP(|B_{1/n}| \le 5C/n)^3 = nP(|B_1| \le 5C/n^{1/2})^3 \le n\left(\frac{10C}{n^{1/2}}(2\pi)^{-1/2}\right)^3$$

since $\exp(-x^2/2) \le 1$. Letting $n \to \infty$ shows $P(\Lambda_n) \to 0$. Noticing that $n \to \Lambda_n$ is increasing shows $P(\Lambda_n) = 0$ for all $n$ and completes the proof. $\square$

Exercise 1.2. Show by considering $k$ increments instead of 3 that if $\gamma > 1/2 + 1/k$ then with probability 1, Brownian paths are not Hölder continuous with exponent $\gamma$ at any point of $(0,1)$.

The next result is more evidence that $B_t - B_s$ behaves like $(t - s)^{1/2}$.

Exercise 1.3. Let $\Delta_{m,n} = B(tm2^{-n}) - B(t(m-1)2^{-n})$. Compute

$$E\left(\sum_{m \le 2^n} \Delta_{m,n}^2 - t\right)^2$$

and use the Borel-Cantelli lemma to conclude that $\sum_{m \le 2^n} \Delta_{m,n}^2 \to t$ a.s. as $n \to \infty$.

Remark. The last result is true if we consider a sequence of partitions $\Pi_1 \subset \Pi_2 \subset \cdots$ with mesh $\to 0$. See Freedman (1970), p. 42-46. The true quadratic variation, defined as the sup over all partitions, is $\infty$ for Brownian motion.
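The convergence in Exercise 1.3 is easy to observe numerically: summing squared increments over the dyadic partition of $[0, t]$ gives a value close to $t$. The sketch below is my own illustration, not from the text; the helper name and parameters are assumptions.

```python
import math
import random

def dyadic_quadratic_variation(t, n, rng):
    # Sum of 2**n squared Brownian increments over the partition {m * t * 2**-n}.
    # Each squared increment has mean t * 2**-n, so the sum has mean t, and its
    # variance, 2 * t**2 * 2**-n, vanishes as n grows.
    dt = t * 2.0 ** (-n)
    return sum(rng.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(2 ** n))

rng = random.Random(1)
qv = dyadic_quadratic_variation(1.0, 14, rng)  # concentrates near t = 1
```

For $n = 14$ the standard deviation is about $\sqrt{2/2^{14}} \approx 0.011$, so the computed sum sits within a few hundredths of 1.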

1.2. Markov Property, Blumenthal's 0-1 Law

Intuitively, the Markov property says

"given the present state, $B_s$, any other information about what happened before time $s$ is irrelevant for predicting what happens after time $s$."


Since Brownian motion is translation invariant, the Markov property can be more simply stated as:

"If $s \ge 0$ then $B_{t+s} - B_s$, $t \ge 0$ is a Brownian motion that is independent of what happened before time $s$."

This should not be surprising: the independent increments property ((a) in the definition) implies that if $s_1 \le s_2 \le \cdots \le s_m = s$ and $0 < t_1 < \cdots < t_n$ then

$$(B_{t_1+s} - B_s, \ldots, B_{t_n+s} - B_s) \text{ is independent of } (B_{s_1}, \ldots, B_{s_m})$$

The major obstacle in proving the Markov property for Brownian motion then is to introduce the measure theory necessary to state and prove the result. The first step in doing this is to explain what we mean by "what happened before time $s$." The technical name for what we are about to define is a filtration, a fancy term for an increasing collection of $\sigma$-fields $\mathcal{F}_s$, i.e., if $s < t$ then $\mathcal{F}_s \subset \mathcal{F}_t$. Since we want $B_s \in \mathcal{F}_s$, i.e., $B_s$ is measurable with respect to $\mathcal{F}_s$, the first thing that comes to mind is

$$\mathcal{F}_s^o = \sigma(B_r : r \le s)$$

For technical reasons, it is convenient to replace $\mathcal{F}_s^o$ by

$$\mathcal{F}_s^+ = \cap_{t > s}\, \mathcal{F}_t^o$$

The fields $\mathcal{F}_s^+$ are nicer because they are right continuous. That is,

$$\cap_{t > s}\, \mathcal{F}_t^+ = \cap_{t > s}\left(\cap_{u > t}\, \mathcal{F}_u^o\right) = \cap_{u > s}\, \mathcal{F}_u^o = \mathcal{F}_s^+$$

In words, the $\mathcal{F}_s^+$ allow us an "infinitesimal peek at the future," i.e., $A \in \mathcal{F}_s^+$ if it is in $\mathcal{F}_{s+\epsilon}^o$ for any $\epsilon > 0$. If $B_t$ is a Brownian motion and $f$ is any measurable function with $f(u) > u$ when $u > 0$, the random variable

$$\limsup_{t \downarrow s}\, (B_t - B_s)/f(t - s)$$

is measurable with respect to $\mathcal{F}_s^+$ but not $\mathcal{F}_s^o$. Exercises 2.9 and 2.10 consider what happens when we take $f(u) = \sqrt{u}$ and $f(u) = \sqrt{u \log\log(1/u)}$. However, as we will see in (2.6), there are no interesting examples of sets that are in $\mathcal{F}_s^+$ but not in $\mathcal{F}_s^o$. The two $\sigma$-fields are the same (up to null sets).

To state the Markov property we need some notation. First, recall that we have a family of measures $P_x$, $x \in \mathbf{R}^d$, on $(C, \mathcal{C})$ so that under $P_x$, $B_t(\omega) = \omega(t)$ is a Brownian motion with $B_0 = x$. In order to define "what happens after time $s$," it is convenient to define the shift transformations $\theta_s : C \to C$, for $s \ge 0$, by

$$(\theta_s\omega)(t) = \omega(s + t) \quad \text{for } t \ge 0$$

In words, we cut off the part of the path before time $s$ and then shift the path so that time $s$ becomes time 0. To prepare for the next result, we note that if $Y : C \to \mathbf{R}$ is $\mathcal{C}$ measurable then $Y \circ \theta_s$ is a function of the future after time $s$. To see this, consider the simple example $Y(\omega) = f(\omega(t))$. In this case

$$Y \circ \theta_s = f(\theta_s\omega(t)) = f(\omega(s + t)) = f(B_{s+t}(\omega))$$

Likewise, if $Y(\omega) = f(\omega(t_1), \ldots, \omega(t_n))$ then

$$Y \circ \theta_s = f(B_{s+t_1}, \ldots, B_{s+t_n})$$

(2.1) The Markov property. If $s \ge 0$ and $Y$ is bounded and $\mathcal{C}$ measurable then for all $x \in \mathbf{R}^d$

$$E_x(Y \circ \theta_s \,|\, \mathcal{F}_s^+) = E_{B_s}Y$$

where the right hand side is $\varphi(y) = E_yY$ evaluated at $y = B_s(\omega)$.

Explanation. In words, this says that the conditional expectation of $Y \circ \theta_s$ given $\mathcal{F}_s^+$ is just the expected value of $Y$ for a Brownian motion starting at $B_s$. To explain why this implies "given the present state $B_s$, any other information about what happened before time $s$ is irrelevant for predicting what happens after time $s$," we begin by recalling (see (1.1) in Chapter 5 of Durrett (1995)) that if $\mathcal{G} \subset \mathcal{F}$ and $E(Z|\mathcal{F}) \in \mathcal{G}$ then $E(Z|\mathcal{G}) = E(Z|\mathcal{F})$. Applying this with $\mathcal{F} = \mathcal{F}_s^+$ and $\mathcal{G} = \sigma(B_s)$ we have

$$E_x(Y \circ \theta_s \,|\, \mathcal{F}_s^+) = E_x(Y \circ \theta_s \,|\, B_s)$$

If we recall that $X = E(Z|\mathcal{F})$ is our best guess at $Z$ given the information in $\mathcal{F}$ (in the sense of minimizing $E(Z - X)^2$ over $X \in \mathcal{F}$, see (1.4) in Chapter 4 of Durrett (1995)) then we see that our best guess at $Y \circ \theta_s$ given $B_s$ is the same as our best guess given $\mathcal{F}_s^+$, i.e., any other information in $\mathcal{F}_s^+$ is irrelevant for predicting $Y \circ \theta_s$.

Proof By the definition of conditional expectation, what we need to show is

$$(\text{MP}) \qquad E_x(Y \circ \theta_s ; A) = E_x(E_{B_s}Y ; A) \quad \text{for all } A \in \mathcal{F}_s^+$$


We will begin by proving the result for a special clmss of F's and a special clssof W's. Then we will use the 'm

- theorem and the monotone clmss theorem toextend to the gerteral cmse. Suppose

'i'(-)= lf.Iy,,ttmlllKmsn

where 0 < :1 < . . . < ri and the fm : Rd -+ R are bndd d fiisurable. Lt0 < < 1, let 0 < sl . . . < sk S s + , and let

.,1.

= (tJ : g(s/) E Aj , 1 i j i kwhere Aj E 'Itd for 1 K j K k. We will call these .A's the flnlte dlmensionalor f.d. sets in F* .a+!

Prom the definztioz of Brownian motion it follows that if 0 = 'utl < 'tzt <then the joint dezisity of Bu ,

. . . Buzj is given by. . .

'l-lz

z

z.!%lfu, = g1, . . . Bue = pz) = puf-uf-, (:w-1,yij

f=1where yz = z. From this it follows that

Section 1.2 Markov Property Blumenthal's 0-1 rw 11

(MP-1) E.Y o %; .A) = E=pB,+ , #); .A) for a11f.d. sets .Ain FXsTo extend the clmss of .A's to al1 of Fo ?,, we usea+

(2.2) The g' - theorem. Let .4 be a collection of subsets of fl that containsfl and is closed under intersection (i.e.,if ., B e Jt then ,4

n B e.4). Let f be

a collection of sets that satisfy

(i) if.4,

B E f and . D B then W - B E

(ii) If d,z e and dn 1 .A, then . E .

If .4 C then the J-feld generated by .4, t7'(.4) d .

Proof See (2.1)in the Appendix of Durrey,t (1995).(MP-2) E.Y o #,;

.4)

= Eg,pBaw;, , #);-)

for all . G Lh..

. . '' ''.

'' . . ' ... .. ..('

. . .. .. .

ikoof Fix F and 1et be the collection of sets X fpr whith the desired equality

i? tiu. A simple subtraction shows that (i)in (1,2)hold, While the zootnejt

jtnvergence theoremshows that (ii)holds. If wr let t be t pllction pf nitedimensional sets in Sso..s then we hve shw / c: F,%s (i.4) dwhih is the desired conclusion. (:I

Our next step is

(MP-3) ErY o #a; .A) = EzpBs , 0); d) for a1t W E Fi

Proof Since F_s^+ ⊂ F°_{s+h} for any h > 0, all we need to do is let h → 0 in (MP-2). It is easy to see that

E_x ∏_{i=1}^n g_i(B_{u_i}) = ∫ dy_1 p_{u_1}(x, y_1) g_1(y_1) ⋯ ∫ dy_n p_{u_n − u_{n−1}}(y_{n−1}, y_n) g_n(y_n)

Applying this result with the u_i given by s_1, ..., s_k, s + h, s + t_1, ..., s + t_n and the obvious choices for the g_i we have

E_x(Y ∘ θ_s ; A) = E_x( ∏_{j=1}^k 1_{A_j}(B_{s_j}) · ∏_{m=1}^n f_m(B_{s+t_m}) )

= ∫_{A_1} dz_1 p_{s_1}(x, z_1) ⋯ ∫_{A_k} dz_k p_{s_k − s_{k−1}}(z_{k−1}, z_k) ∫ dy p_{s+h−s_k}(z_k, y) φ(y, h)

where

φ(y, h) = ∫ dy_1 p_{t_1 − h}(y, y_1) f_1(y_1) ⋯ ∫ dy_n p_{t_n − t_{n−1}}(y_{n−1}, y_n) f_n(y_n)

If we write φ(y, h) = ∫ dy_1 p_{t_1 − h}(y, y_1) ψ(y_1), where

ψ(y_1) = f_1(y_1) ∫ dy_2 p_{t_2 − t_1}(y_1, y_2) f_2(y_2) ⋯ ∫ dy_n p_{t_n − t_{n−1}}(y_{n−1}, y_n) f_n(y_n)

is bounded and measurable, then the dominated convergence theorem shows that if h → 0 and x_h → x then

φ(x_h, h) = ∫ dy_1 p_{t_1 − h}(x_h, y_1) ψ(y_1) → φ(x, 0)

Using (MP-2) and the bounded convergence theorem now gives the desired result. □
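All of the displays above reduce to integrals against the transition density, in one dimension p_t(x, y) = (2πt)^{−1/2} exp(−(y − x)²/2t), and the key fact being used is the semigroup (Chapman-Kolmogorov) property ∫ p_s(x, y) p_t(y, z) dy = p_{s+t}(x, z). The following numerical check is not from the book; it is a sketch that verifies the identity on a grid:

```python
import math

def p(t, x, y):
    # one dimensional Brownian transition density p_t(x, y)
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

# numerically verify the semigroup property: int p_s(x,y) p_t(y,z) dy = p_{s+t}(x,z)
s, t, x, z = 0.7, 1.3, 0.2, -0.5
h, L = 0.01, 12.0
grid = [-L + i * h for i in range(int(2 * L / h) + 1)]
integral = h * sum(p(s, x, y) * p(t, y, z) for y in grid)
print(abs(integral - p(s + t, x, z)) < 1e-6)  # → True
```

The grid spacing and truncation range here are arbitrary choices; the Gaussian tails beyond |y| = 12 are negligible.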

Chapter 1 Brownian Motion

(MP-3) shows that (MP) holds for

Y = ∏_{m=1}^n f_m(ω(t_m))

when the f_m are bounded and measurable. To extend to general bounded measurable Y we will use

(2.3) Monotone class theorem. Let A be a collection of subsets of Ω that contains Ω and is closed under intersection (i.e., if A, B ∈ A then A ∩ B ∈ A). Let H be a vector space of real valued functions on Ω satisfying

(i) if A ∈ A, then 1_A ∈ H.

(ii) if 0 ≤ f_n ∈ H and f_n ↑ f, a bounded function, then f ∈ H.

Then H contains all the bounded functions on Ω that are measurable with respect to σ(A).

Proof See (1.4) of Chapter 5 of Durrett (1995). □

To get (2.1) from (2.3), fix A ∈ F_s^+ and let H = the collection of bounded functions Y for which (MP) holds. H clearly is a vector space satisfying (ii). Let A be the collection of sets of the form {ω : ω(t_j) ∈ A_j, 1 ≤ j ≤ n} where the A_j are Borel sets. The special case treated above shows that if A ∈ A then 1_A ∈ H. This shows (i) holds and the desired conclusion follows from (2.3). □

'.' . E . ..

The next seven exercises give tyjical applications of the Markov proprtPerhas the simplest example is

Exercise 2.1. Let 0 < s < . If.f

: Rd -+ R is bounded and measurable

.E'.(J(ft)l?,) = En.1Bt-s4

Exercise 2.2. Take fz) = z and f(z) = zizj with # j ip Exercise 2.1 tolude that Bi and BgiBi are martingales if i # j.COnC j :

The next two exercises prepare for calculations in Section 1.4.

Exercise 2.3. Let T_0 = inf{s > 0 : B_s = 0} and let R = inf{t > 1 : B_t = 0}. R is for right or return. Use the Markov property at time 1 to get

(2.4)   P_0(R > 1 + t) = ∫ p_1(0, y) P_y(T_0 > t) dy

Exercise 2.4. Let T_0 = inf{s > 0 : B_s = 0} and let L = sup{t ≤ 1 : B_t = 0}. L is for left or last. Use the Markov property at time 0 < t < 1 to get (2.5).


Exercise 2.5. Let G be an open set and let T = inf{t : B_t ∉ G}. Let K be a closed subset of G and suppose that for all x ∈ K we have P_x(T > 1, B_1 ∈ K) ≥ a. Show that for all integers n ≥ 1 and x ∈ K we have P_x(T > n, B_n ∈ K) ≥ a^n.

The next two exercises prepare for calculations in Chapter 4.

Exercise 2.6. Let 0 < s < t. If h : R × R^d → R is bounded and measurable then

E_x( ∫_0^t h(r, B_r) dr | F_s ) = ∫_0^s h(r, B_r) dr + E_{B_s} ∫_0^{t−s} h(s + u, B_u) du

Exercise 2.7. Let 0 < s < t. If f : R^d → R and h : R × R^d → R are bounded and measurable then

E_x( f(B_t) exp( −∫_0^t h(r, B_r) dr ) | F_s )

= exp( −∫_0^s h(r, B_r) dr ) · E_{B_s}( f(B_{t−s}) exp( −∫_0^{t−s} h(s + u, B_u) du ) )

The reader will see many other applications of the Markov property below, so we turn our attention now to a "triviality" that has surprising consequences. Since

E_x(Y ∘ θ_s | F_s^+) = E_{B_s} Y ∈ F_s°

it follows (see, e.g., (1.1) in Chapter 5 of Durrett (1995)) that

E_x(Y ∘ θ_s | F_s^+) = E_x(Y ∘ θ_s | F_s°)

From the last equation it is a short step to

(2.6) Theorem. If Z is bounded and measurable with respect to C then for all s ≥ 0 and x ∈ R^d,

E_x(Z | F_s^+) = E_x(Z | F_s°)

(2.5)   P_0(L ≤ t) = ∫ p_t(0, y) P_y(T_0 > 1 − t) dy

Proof By the monotone class theorem, (2.3), it suffices to prove the result when

Z = ∏_{m=1}^n f_m(ω(t_m))

and the f_m are bounded and measurable. In this case Z = X(Y ∘ θ_s) where X ∈ F_s° and Y ∈ C, so using a property of conditional expectation (see, e.g., (1.3) in Chapter 4 of Durrett (1995)) and the Markov property (2.1) gives

E_x(Z | F_s^+) = X E_x(Y ∘ θ_s | F_s^+) = X E_{B_s} Y ∈ F_s°

and the proof is complete. □

If we let Z ∈ F_s^+ then (2.6) implies Z = E_x(Z | F_s°) ∈ F_s°, so the two σ-fields are the same up to null sets. At first glance this conclusion is not exciting. The fun starts when we take s = 0 in (2.6) to get

(2.7) Blumenthal's 0-1 law. If A ∈ F_0^+ then for all x ∈ R^d, P_x(A) ∈ {0, 1}.

Proof Using (i) the fact that A ∈ F_0^+, (ii) (2.6), (iii) the fact that F_0° = σ(B_0) is trivial under P_x, and (iv) that if G is trivial then E(X | G) = EX, gives

1_A = E_x(1_A | F_0^+) = E_x(1_A | F_0°) = P_x(A)   P_x a.s.

This shows that the indicator function 1_A is a.s. equal to the number P_x(A), and it follows that P_x(A) ∈ {0, 1}. □

In words, the last result says that the germ field, F_0^+, is trivial. This result is very useful in studying the local behavior of Brownian paths. Until further notice we will restrict our attention to one dimensional Brownian motion.

(2.8) Theorem. If τ = inf{t ≥ 0 : B_t > 0} then P_0(τ = 0) = 1.

Proof P_0(τ ≤ t) ≥ P_0(B_t > 0) = 1/2, since the normal distribution is symmetric about 0. Letting t ↓ 0 we conclude

P_0(τ = 0) = lim_{t↓0} P_0(τ ≤ t) ≥ 1/2

so it follows from (2.7) that P_0(τ = 0) = 1. □

Since Brownian motion must hit (0, ∞) immediately starting from 0, it must also hit (−∞, 0) immediately. Since t → B_t is continuous, this forces:

(2.9) Theorem. If T_0 = inf{t > 0 : B_t = 0} then P_0(T_0 = 0) = 1.

Combining (2.8) and (2.9) with the Markov property you can prove

Exercise 2.8. If a < b then with probability one there is a local maximum of B_t in (a, b). Since with probability one this holds for all rational a < b, the local maxima of Brownian motion are dense.

Another typical application of (2.7) is

Exercise 2.9. Let f(t) be a function with f(t) > 0 for all t > 0. Use (2.7) to conclude that limsup_{t↓0} B(t)/f(t) = c, P_0 a.s., where c ∈ [0, ∞] is a constant.

In the next exercise we will see that c = ∞ when f(t) = t^{1/2}. The law of the iterated logarithm (see Section 7.9 of Durrett (1995)) shows that c = 2^{1/2} when f(t) = (t log log(1/t))^{1/2}.

Exercise 2.10. Show that limsup_{t↓0} B(t)/t^{1/2} = ∞, P_0 a.s., so with probability one Brownian paths are not Hölder continuous of order 1/2 at 0.

Remark. Let H_γ(ω) be the set of times at which the path ω ∈ C is Hölder continuous of order γ. (1.6) shows that P(H_γ = [0, ∞)) = 1 for γ < 1/2. Exercise 1.2 shows that P(H_γ = ∅) = 1 for γ > 1/2. The last exercise shows P(t ∈ H_{1/2}) = 0 for each t, but B. Davis (1983) has shown P(H_{1/2} ≠ ∅) = 1.

Comic Relief. There is a wonderful way of expressing the complexity of Brownian paths that I learned from Wilfrid Kendall:

"If you run Brownian motion in two dimensions for a positive amount of time, it will write your name."

Of course, on top of your name it will write everybody else's name, as well as all the works of Shakespeare, several pornographic novels, and a lot of nonsense. Thinking of the function g as our signature we can make a precise statement as follows:

(2.10) Theorem. Let g : [0, 1] → R^d be a continuous function with g(0) = 0, let ε > 0 and let t_n ↓ 0. Then P_0 almost surely,

sup_{0≤s≤1} | B(s t_n)/√t_n − g(s) | < ε   for infinitely many n

Proof In view of Blumenthal's 0-1 law, (2.7), and the scaling relation (1.3), we can prove this if we can show that

(*)   P_0( sup_{0≤s≤1} |B(s) − g(s)| < ε ) > 0   for any ε > 0


This was easy for me to believe, but not so easy for me to prove when students asked me to fill in the details. The first step is to treat the dead man's signature g(x) ≡ 0.

Exercise 2.11. Show that if ε > 0 and t < ∞ then

P_0( sup_{0≤s≤t} |B_s| < ε ) > 0

In doing this you may find Exercise 2.5 helpful. In (5.4) of Chapter 5 we will get the general result from this one by change of measure. Can the reader find a simple direct proof of (*)?

With our discussion of Blumenthal's 0-1 law complete, the distinction between F_s^+ and F_s° is no longer important, so we will make one final improvement in our σ-fields and remove the superscripts. Let

N_x = {A : A ⊂ B with P_x(B) = 0}

F_s^x = σ(F_s° ∪ N_x)

F_s = ∩_x F_s^x

N_x are the null sets and F_s^x are the completed σ-fields for P_x. Since we do not want the filtration to depend on the initial state, we take the intersection of all the completed σ-fields. This technicality will be mentioned at one point in the next section but can otherwise be ignored.

(2.7) concerns the behavior of B_t as t → 0. By using a trick we will use this result to get information about the behavior as t → ∞.

(2.11) Theorem. If B_t is a Brownian motion starting at 0 then so is the process defined by X_0 = 0 and X_t = t B(1/t) for t > 0.

Proof By (1.2) it suffices to prove the result in one dimension. We begin by observing that the strong law of large numbers implies X_t → 0 as t → 0, so X has continuous paths and we only have to check that X has the right f.d.d.'s. By the second definition of Brownian motion, it suffices to show that (i) if 0 < t_1 < ... < t_n then (X(t_1), ..., X(t_n)) has a multivariate normal distribution with mean 0 (which is obvious) and (ii) if s < t then

E(X_s X_t) = st E(B(1/s) B(1/t)) = st (1/t) = s   □

(2.11) allows us to relate the behavior of B_t as t → ∞ to the behavior as t → 0. Combining this idea with Blumenthal's 0-1 law leads to a very useful result. Let

F_t' = σ(B_s : s ≥ t) = the future after time t

T = ∩_t F_t' = the tail σ-field

(2.12) Theorem. If A ∈ T then either P_x(A) ≡ 0 or P_x(A) ≡ 1.

Remark. Notice that this is stronger than the conclusion of Blumenthal's 0-1 law (2.7). The examples A = {ω : ω(0) ∈ B} show that for A in the germ σ-field F_0^+ the value of P_x(A) may depend on x.

Proof Since the tail σ-field of B is the same as the germ σ-field for X, it follows that P_0(A) ∈ {0, 1}. To improve this to the conclusion given, observe that A ∈ F_1', so 1_A can be written as 1_B ∘ θ_1. Applying the Markov property, (2.1), gives

P_x(A) = E_x(1_B ∘ θ_1) = E_x(E_x(1_B ∘ θ_1 | F_1)) = E_x E_{B_1} 1_B

= ∫ (2π)^{−d/2} exp(−|y − x|²/2) P_y(B) dy

Taking x = 0 we see that if P_0(A) = 0 then P_y(B) = 0 for a.e. y with respect to Lebesgue measure, and using the formula again shows P_x(A) = 0 for all x. To handle the case P_0(A) = 1, observe that A^c ∈ T and P_0(A^c) = 0, so the last result implies P_x(A^c) = 0 for all x. □
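The covariance computation in the proof of (2.11) can be mechanized: since Cov(B(u), B(v)) = min(u, v), the process X_t = t B(1/t) has Cov(X_s, X_t) = st · min(1/s, 1/t) = min(s, t), which is exactly the Brownian covariance. A small sketch (not from the book) checking this identity:

```python
def cov_X(s, t):
    # Cov(X_s, X_t) for X_t = t B(1/t): s * t * Cov(B(1/s), B(1/t)) = s * t * min(1/s, 1/t)
    return s * t * min(1.0 / s, 1.0 / t)

pairs = [(0.5, 2.0), (1.0, 3.0), (0.25, 0.75), (2.0, 2.0)]
ok = all(abs(cov_X(s, t) - min(s, t)) < 1e-12 for s, t in pairs)
print(ok)  # → True
```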

The next result is a typical application of (2.12). The argument here is a close relative of the one for (2.8).

(2.13) Theorem. Let B be a one dimensional Brownian motion and let A = ∩_n {B_t = 0 for some t ≥ n}. Then P_x(A) = 1 for all x.

In words, one dimensional Brownian motion is recurrent. It will return to 0 "infinitely often," i.e., there is a sequence of times t_n ↑ ∞ so that B_{t_n} = 0. We have to be careful with the interpretation of the phrase in quotes since starting from 0, B_t will return to 0 infinitely many times by time ε > 0.

Proof We begin by noting that under P_x, B_t/√t has a normal distribution with mean x/√t and variance 1, so if we use χ to denote a standard normal,

P_x(B_t < 0) = P_x(B_t/√t < 0) = P(χ < −x/√t)


and lim_{t→∞} P_x(B_t < 0) = 1/2. If we let T_0 = inf{t : B_t = 0} then the last result and the fact that Brownian paths are continuous imply that for all x > 0

P_x(T_0 < ∞) ≥ 1/2

Symmetry implies that the last result holds for x < 0, while (2.9) (or the Markov property) covers the last case x = 0.

Combining the fact that P_x(T_0 < ∞) ≥ 1/2 for all x with the Markov property shows that

P_x(B_t = 0 for some t ≥ n) = E_x( P_{B_n}(T_0 < ∞) ) ≥ 1/2

Letting n → ∞ it follows that P_x(A) ≥ 1/2. But A ∈ T, so (2.12) implies P_x(B_t = 0 i.o.) ≡ 1. □
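The first step of the proof, that P_x(B_t < 0) = Φ(−x/√t) → 1/2 as t → ∞, is easy to see numerically. A sketch (not from the book), with the starting point x = 5 fixed:

```python
import math

def prob_below_zero(x, t):
    # P_x(B_t < 0) = Phi(-x / sqrt(t)), since B_t is N(x, t) under P_x
    return 0.5 * (1 + math.erf(-x / math.sqrt(2 * t)))

vals = [prob_below_zero(5.0, t) for t in (1, 100, 10000, 10 ** 8)]
print([round(v, 4) for v in vals])  # → [0.0, 0.3085, 0.4801, 0.4998]
```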

1.3. Stopping Times, Strong Markov Property

We call a random variable S taking values in [0, ∞] a stopping time if for all t ≥ 0, {S < t} ∈ F_t. To bring this definition to life, think of B_t as giving the price of a stock and S as the time we choose to sell it. Then the decision to sell before time t should be measurable with respect to the information known at time t.

In the last definition we have made a choice between {S < t} and {S ≤ t}. This makes a big difference in discrete time but none in continuous time (for a right continuous filtration F_t):

If {S ≤ t} ∈ F_t then {S < t} = ∪_n {S ≤ t − 1/n} ∈ F_t.

If {S < t} ∈ F_t then {S ≤ t} = ∩_n {S < t + 1/n} ∈ F_t.

The first conclusion requires only that t → F_t is increasing. The second relies on the fact that t → F_t is right continuous. (3.2) and (3.3) below show that when checking that something is a stopping time it is nice to know that the two definitions are equivalent.

(3.1) Theorem. If G is an open set and T = inf{t ≥ 0 : B_t ∈ G} then T is a stopping time.

Proof Since G is open and t → B_t is continuous, {T < t} = ∪_{q<t} {B_q ∈ G}, where the union is over all rational q, so {T < t} ∈ F_t. Here we need to use the rationals so we end up with a countable union. □

(3.2) Theorem. If T_n is a sequence of stopping times and T_n ↓ T then T is a stopping time.

Proof {T < t} = ∪_n {T_n < t}. □

(3.3) Theorem. If T_n is a sequence of stopping times and T_n ↑ T then T is a stopping time.

Proof {T ≤ t} = ∩_n {T_n ≤ t}. □

(3.4) Theorem. If K is a closed set and T = inf{t ≥ 0 : B_t ∈ K} then T is a stopping time.

Proof Let D(x, r) = {y : |x − y| < r}, let G_n = ∪ {D(x, 1/n) : x ∈ K}, and let T_n = inf{t ≥ 0 : B_t ∈ G_n}. Since G_n is open, it follows from (3.1) that T_n is a stopping time. I claim that as n ↑ ∞, T_n ↑ T. To prove this notice that T ≥ T_n for all n, so lim T_n ≤ T. To prove T ≤ lim T_n we can suppose that T_n ↑ t < ∞. Since B(T_n) is in the closure of G_n for all n and B(T_n) → B(t), it follows that B(t) ∈ K and T ≤ t. □

Remark. As the reader might guess, the hitting time of a Borel set A, T_A = inf{t : B_t ∈ A}, is a stopping time. However, this turns out to be a difficult result to prove and is not true unless the filtration is completed as we did at the end of the last section. Hunt was the first to prove this. The reader can find a discussion of this result in Section 10 of Chapter 1 of Blumenthal and Getoor (1968) or in Chapter 3 of Dellacherie and Meyer (1978). We will not worry about that result here since (3.1) and (3.4), or in a pinch the next result, will be adequate for all the hitting times we will consider.

Exercise 3.1. Suppose A is an F_σ, i.e., a countable union of closed sets. Show that T_A = inf{t : B_t ∈ A} is a stopping time.

Exercise 3.2. Let S be a stopping time and let S_n = ([2^n S] + 1)/2^n where [x] = the largest integer ≤ x. That is,

S_n = (m + 1) 2^{−n}   if   m 2^{−n} ≤ S < (m + 1) 2^{−n}

In words, we stop at the first time of the form k 2^{−n} after S (i.e., > S). From the verbal description it should be clear that S_n is a stopping time. Prove that it is.

Exercise 3.3. If S and T are stopping times, then S ∧ T = min{S, T}, S ∨ T = max{S, T}, and S + T are also stopping times. In particular, if t ≥ 0, then S ∧ t, S ∨ t, and S + t are stopping times.


Exercise 3.4. Let T_n be a sequence of stopping times. Show that

sup_n T_n, inf_n T_n, limsup_n T_n, liminf_n T_n are stopping times
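The dyadic discretization S_n = ([2^n S] + 1)/2^n of Exercise 3.2 is worth seeing concretely: each S_n is the first dyadic rational of level n strictly after S, so S_n > S and S_n ↓ S. A sketch (not from the book):

```python
import math

def dyadic_upper(s, n):
    # S_n = ([2^n s] + 1) / 2^n, where [x] is the largest integer <= x
    return (math.floor(2 ** n * s) + 1) / 2 ** n

s = 0.637
approx = [dyadic_upper(s, n) for n in range(1, 13)]
strictly_above = all(a > s for a in approx)
decreasing = all(a >= b for a, b in zip(approx, approx[1:]))
converges = approx[-1] - s < 2 ** -12
print(strictly_above, decreasing, converges)  # → True True True
```

The strict inequality S_n > S is what makes the proof of the strong Markov property below work: conditioning at the dyadic time S_n uses the ordinary Markov property, and S_n ↓ S lets one pass to the limit.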

Our next goal is to state and prove the strong Markov property. To do this, we need to generalize two definitions from Section 1.2. Given a nonnegative random variable S(ω) we define the random shift θ_S, which cuts off the part of ω before S(ω) and then shifts the path so that time S(ω) becomes time 0:

(θ_S ω)(t) = ω(S(ω) + t)  on {S < ∞};   = Δ  on {S = ∞}

Here Δ is an extra point we add to C to cover the case in which the whole path gets shifted away. Some authors like to adopt the convention that all functions have f(Δ) = 0 to take care of the second case. However, we will usually explicitly restrict our attention to {S < ∞} so that the second half of the definition will not come into play.

The second quantity, F_S, "the information known at time S," is a little more subtle. We could have defined

F_s = ∩_{ε>0} σ(B_{(s+ε)∧t}, t ≥ 0)

so by analogy we could set

F_S = ∩_{ε>0} σ(B_{(S+ε)∧t}, t ≥ 0)

The definition we will now give is less transparent but easier to work with.

F_S = {A : A ∩ {S ≤ t} ∈ F_t for all t ≥ 0}

In words, this makes the reasonable demand that the part of A that lies in {S ≤ t} should be measurable with respect to the information available at time t. Again we have made a choice between ≤ t and < t, but as in the case of stopping times, this makes no difference and it is useful to know that the two definitions are equivalent.

Exercise 3.5. When F_t is right continuous, the definition of F_S is unchanged if we replace {S ≤ t} by {S < t}.

For practice with the definition of F_S do


Exercise 3.6. Let S be a stopping time and let A ∈ F_S. Show that

R = S on A,   R = ∞ on A^c

is a stopping time.

Exercise 3.7. Let S and T be stopping times.

(i) {S < t}, {S > t}, {S = t} are in F_S.

(ii) {S < T}, {S > T}, and {S = T} are in F_S (and in F_T).

Two properties of F_S that will be useful below are:

(3.5) Theorem. If S ≤ T are stopping times then F_S ⊂ F_T.

Proof If A ∈ F_S then A ∩ {T ≤ t} = (A ∩ {S ≤ t}) ∩ {T ≤ t} ∈ F_t. □

(3.6) Theorem. If T_n ↓ T are stopping times then F_T = ∩_n F(T_n).

Proof (3.5) implies F(T_n) ⊃ F_T for all n. To prove the other inclusion, let A ∈ ∩_n F(T_n). Since A ∩ {T_n < t} ∈ F_t and T_n ↓ T, it follows that A ∩ {T < t} ∈ F_t, so A ∈ F_T. □

The last result and Exercises 3.2 and 3.7 allow us to prove something that is obvious from the verbal definition.

Exercise 3.8. B_S ∈ F_S, i.e., the value of B_S is measurable with respect to the information known at time S! To prove this let S_n = ([2^n S] + 1)/2^n be the stopping times defined in Exercise 3.2. Show B(S_n) ∈ F(S_n), then let n → ∞ and use (3.6).

The next result goes in the opposite direction from (3.6). Here F_n ↑ F means that the F_n are increasing and F = σ(∪_n F_n).

Exercise 3.9. Let S < ∞ be a stopping time and suppose that the stopping times T_n ↑ ∞ as n ↑ ∞. Show that F(S ∧ T_n) ↑ F_S as n ↑ ∞.

We are now ready to state the strong Markov property, which says that the Markov property holds at stopping times.

(3.7) Strong Markov property. Let (s, ω) → Y_s(ω) be bounded and R × C measurable. If S is a stopping time, then for all x ∈ R^d

E_x(Y_S ∘ θ_S | F_S) = E_{B(S)} Y_S   on {S < ∞}


where the right-hand side is φ(y, t) = E_y Y_t evaluated at y = B(S), t = S.

Remark. In most applications the function that we apply to the shifted path will not depend on s, but this flexibility is important in Example 3.3. The verbal description of this equation is much like that of the ordinary Markov property: "the conditional expectation of Y_S ∘ θ_S given F_S is just the expected value of Y_s for a Brownian motion starting at B_S."

Proof We first prove the result under the assumption that there is a sequence of times t_n ↑ ∞ so that P_x(S < ∞) = Σ_n P_x(S = t_n). In this case we simply break things down according to the value of S, apply the Markov property, and put the pieces back together. If we let Z_n = Y_{t_n} and A ∈ F_S then

E_x(Y_S ∘ θ_S ; A ∩ {S < ∞}) = Σ_{n=1}^∞ E_x(Z_n ∘ θ_{t_n} ; A ∩ {S = t_n})

Now if A ∈ F_S, then A ∩ {S = t_n} = (A ∩ {S ≤ t_n}) − (A ∩ {S ≤ t_{n−1}}) ∈ F(t_n), so it follows from the Markov property that the above sum is

Σ_{n=1}^∞ E_x( E_{B(t_n)} Z_n ; A ∩ {S = t_n} ) = E_x( E_{B(S)} Y_S ; A ∩ {S < ∞} )

To prove the result in general we let S_n = ([2^n S] + 1)/2^n, where [x] = the largest integer ≤ x. In Exercise 3.2 you showed that S_n is a stopping time. To be able to let n → ∞ we restrict our attention to Y's of the form

(+)   Y_s(ω) = f_0(s) ∏_{m=1}^n f_m(ω(t_m))

where 0 < t_1 < ... < t_n and f_0, ..., f_n are real valued, bounded and continuous. If f is bounded and continuous then the dominated convergence theorem implies that

x → ∫ dy p_t(x, y) f(y)

is continuous. From this and induction it follows that

φ(x, s) = E_x Y_s = f_0(s) ∫ dy_1 p_{t_1}(x, y_1) f_1(y_1) ⋯ ∫ dy_n p_{t_n − t_{n−1}}(y_{n−1}, y_n) f_n(y_n)

is bounded and continuous.

Having assembled the necessary ingredients we can now complete the proof. Let A ∈ F_S. Since S ≤ S_n, (3.5) implies A ∈ F(S_n). Applying the special case of (3.7) proved above to S_n and observing that {S_n < ∞} = {S < ∞} gives

E_x(Y_{S_n} ∘ θ_{S_n} ; A ∩ {S < ∞}) = E_x( φ(B(S_n), S_n) ; A ∩ {S < ∞} )

Now as n → ∞, S_n ↓ S, B(S_n) → B(S), φ(B(S_n), S_n) → φ(B(S), S), and Y_{S_n} ∘ θ_{S_n} → Y_S ∘ θ_S, so the bounded convergence theorem implies that (3.7) holds when Y has the form given in (+).

To complete the proof now we use the monotone class theorem, (2.3). Let H be the collection of bounded functions for which (3.7) holds. Clearly H is a vector space that satisfies (ii): if Y_n ∈ H are nonnegative and increase to a bounded Y, then Y ∈ H. To check (i) now, let A be the collection of sets of the form {ω : ω(t_j) ∈ G_j, 1 ≤ j ≤ n} where the G_j are open sets. If G is open, the function 1_G is an increasing limit of the continuous functions f_k(x) = min(1, k dist(x, G^c)), where dist(x, G^c) is the distance from x to the complement of G, so if A ∈ A then 1_A ∈ H. This shows (i) holds and the desired conclusion follows from (2.3). □

Example 3.1. Zeros of Brownian motion. Consider one dimensional Brownian motion, let R_t = inf{u > t : B_u = 0} and let T_0 = inf{u > 0 : B_u = 0}. Now (2.13) implies P_x(R_t < ∞) = 1, so B(R_t) = 0, and the strong Markov property and (2.9) imply

P_x(T_0 ∘ θ_{R_t} > 0 | F_{R_t}) = 0

Taking the expected value of the last equation we see that

P_x(T_0 ∘ θ_{R_t} > 0 for some rational t) = 0

From this it follows that, with probability one, if a point u ∈ Z(ω) ≡ {t : B_t(ω) = 0} is isolated on the left (i.e., there is a rational t < u so that (t, u) ∩ Z(ω) = ∅), then it is a decreasing limit of points in Z(ω). This shows that the closed set Z(ω) has no isolated points and hence must be uncountable. For the last step see Hewitt and Stromberg (1969), page 72.

If we let |Z(ω)| denote the Lebesgue measure of Z(ω) then Fubini's theorem implies

E_x |Z(ω)| = ∫_0^∞ P_x(B_t = 0) dt = 0

So Z(ω) is a set of measure zero.

Example 3.2. Let G be an open set, x ∈ G, let T = inf{t : B_t ∉ G}, and suppose P_x(T < ∞) = 1. Let A ⊂ ∂G, the boundary of G, and let u(x) = P_x(B_T ∈ A). I claim that if we let δ > 0 be chosen so that D(x, δ) = {y : |y − x| < δ} ⊂ G and let S = inf{t ≥ 0 : B_t ∉ D(x, δ)}, then

u(x) = E_x u(B_S)

Intuition Since D(x, δ) ⊂ G, B_t cannot exit G without first exiting D(x, δ) at B_S. When B_S = y, the probability of exiting G in A is u(y), independent of how B got to y.

Proof To prove the desired formula, we will apply the strong Markov property, (3.7), to

Y = 1_{(B_T ∈ A)}

To check that this leads to the right formula, we observe that since D(x, δ) ⊂ G, we have B_T ∘ θ_S = B_T and 1_{(B_T ∈ A)} ∘ θ_S = 1_{(B_T ∈ A)}. In words, ω and the shifted path θ_S ω must exit G at the same place.

Since S ≤ T and we have supposed P_x(T < ∞) = 1, it follows that P_x(S < ∞) = 1. Using the strong Markov property now gives

E_x(1_{(B_T ∈ A)} ∘ θ_S | F_S) = E_{B(S)} 1_{(B_T ∈ A)} = u(B_S)

Using the identity 1_{(B_T ∈ A)} ∘ θ_S = 1_{(B_T ∈ A)} and taking expected values of the previous display, we have

u(x) = E_x 1_{(B_T ∈ A)} = E_x(1_{(B_T ∈ A)} ∘ θ_S) = E_x( E_x(1_{(B_T ∈ A)} ∘ θ_S | F_S) ) = E_x u(B_S)

Exercise 3.10. Let G, T, D(x, δ) and S be as above, but now suppose that E_y T < ∞ for all y ∈ G. Let g be a bounded function and let u(x) = E_x ∫_0^T g(B_s) ds. Show that for x ∈ G

u(x) = E_x( ∫_0^S g(B_s) ds + u(B_S) )

Our third application shows why we want to allow the function Y that we apply to the shifted path to depend on the stopping time S.

Example 3.3. Reflection principle. Let B_t be a one dimensional Brownian motion, let a > 0 and let T_a = inf{t : B_t = a}. Then

(3.8)   P_0(T_a < t) = 2 P_0(B_t > a)

Intuitive proof We observe that if B_s hits a at some time s < t, then the strong Markov property implies that B_t − B(T_a) is independent of what happened before time T_a. The symmetry of the normal distribution and P_0(B_t = a) = 0 then imply

(3.9)   P_0(T_a < t, B_t > a) = (1/2) P_0(T_a < t)

Multiplying by 2, then using {B_t > a} ⊂ {T_a < t}, we have

P_0(T_a < t) = 2 P_0(T_a < t, B_t > a) = 2 P_0(B_t > a)

Proof To make the intuitive proof rigorous we only have to prove (3.9). To extract this from the strong Markov property (3.7), we let

Y_s(ω) = 1 if s < t and ω(t − s) > a;   Y_s(ω) = 0 otherwise.

We do this so that if we let S = inf{s < t : B_s = a}, with inf ∅ = ∞, then

Y_S ∘ θ_S = 1_{(B_t > a)}   on {S < ∞} = {T_a < t}

If we let φ(y, s) = E_y Y_s, the strong Markov property implies

E_0(Y_S ∘ θ_S | F_S) = φ(B_S, S)   on {S < ∞} = {T_a < t}

Now B_S = a on {S < ∞} and φ(a, s) = 1/2 if s < t, so taking expected values,

P_0(T_a < t, B_t > a) = E_0(1/2 ; T_a < t)

which proves (3.9). □

Exercise 3.11. Generalize the proof of (3.9) to conclude that if u < v ≤ a then

(3.10)   P_0(T_a < t, u < B_t < v) = P_0(2a − v < B_t < 2a − u)
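The reflection principle (3.8) is easy to test by simulation. The sketch below (not from the book) discretizes [0, 1] into small normal increments and compares the empirical frequency of {max_{s≤1} B_s > a} with 2 P_0(B_1 > a); discretization makes the empirical value a slight underestimate, since the continuous path can cross a between grid points.

```python
import math
import random

random.seed(1)

def running_max(t=1.0, steps=300):
    # crude discretized Brownian path; returns the maximum over the grid
    dt = t / steps
    b = m = 0.0
    for _ in range(steps):
        b += random.gauss(0.0, math.sqrt(dt))
        m = max(m, b)
    return m

a, n = 0.5, 3000
est = sum(1 for _ in range(n) if running_max() > a) / n
pred = math.erfc(a / math.sqrt(2.0))  # 2 P0(B_1 > a), by (3.8)
print(abs(est - pred) < 0.08)
```

The number of paths and steps are arbitrary; with 300 steps the discretization bias is a few percent, well inside the tolerance used here.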


To explain our interest in (3.10), let M_t = max_{0≤s≤t} B_s to rewrite it as

P_0(M_t > a, u < B_t < v) = P_0(2a − v < B_t < 2a − u)

Letting the interval (u, v) shrink to x we see that

P_0(M_t > a, B_t = x) = P_0(B_t = 2a − x) = (2πt)^{−1/2} e^{−(2a−x)²/2t}

Differentiating with respect to a now we get the joint density

(3.11)   P_0(M_t = a, B_t = x) = (2(2a − x)/√(2πt³)) e^{−(2a−x)²/2t}
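As a sanity check (a sketch, not from the book), the joint density (3.11) should integrate to 1 over its support {a > max(x, 0)}; a crude midpoint rule confirms this for t = 1:

```python
import math

def joint_density(a, x, t=1.0):
    # (3.11): P0(M_t = a, B_t = x) = 2(2a - x) / sqrt(2 pi t^3) * exp(-(2a - x)^2 / (2t))
    return 2 * (2 * a - x) / math.sqrt(2 * math.pi * t ** 3) * math.exp(-((2 * a - x) ** 2) / (2 * t))

h = 0.02
total = 0.0
for i in range(300):              # a ranges over (0, 6)
    a = (i + 0.5) * h
    for j in range(300):          # x ranges over (a - 6, a)
        x = a - (j + 0.5) * h
        total += joint_density(a, x) * h * h
print(abs(total - 1.0) < 0.01)  # → True
```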

1.4. First Formulas

We have two interrelated aims in this section: the first is to understand the behavior of the hitting times T_a = inf{t : B_t = a} for a one dimensional Brownian motion B; the second is to study the behavior of Brownian motion in the upper half space H = {x ∈ R^d : x_d > 0}. We begin with T_a and observe that the reflection principle (3.8) implies

P_0(T_a < t) = 2 P_0(B_t > a) = 2 ∫_a^∞ (2πt)^{−1/2} exp(−x²/2t) dx

Here, and until further notice, we are dealing with a one dimensional Brownian motion. To find the probability density of T_a, we change variables x = t^{1/2} a/s^{1/2}, dx = −t^{1/2} a/2s^{3/2} ds to get

(4.1)   P_0(T_a < t) = 2 ∫_t^0 (2πt)^{−1/2} exp(−a²/2s) (−t^{1/2} a/2s^{3/2}) ds

= ∫_0^t (2πs³)^{−1/2} a exp(−a²/2s) ds
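Formula (4.1) can be checked numerically (a sketch, not from the book): integrating the density a(2πs³)^{−1/2} e^{−a²/2s} over (0, t) should reproduce 2 P_0(B_t > a) = erfc(a/√(2t)).

```python
import math

def hitting_density(a, s):
    # (4.1): the density of T_a at s is a (2 pi s^3)^{-1/2} exp(-a^2 / (2s))
    return a / math.sqrt(2 * math.pi * s ** 3) * math.exp(-a * a / (2 * s))

a, t = 1.0, 2.0
h = 1e-4
cdf = sum(hitting_density(a, (i + 0.5) * h) * h for i in range(int(t / h)))
tail = math.erfc(a / math.sqrt(2 * t))  # 2 P0(B_t > a)
print(abs(cdf - tail) < 1e-4)  # → True
```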

Using the last formula we can compute the distribution of L = sup{t ≤ 1 : B_t = 0} and R = inf{t ≥ 1 : B_t = 0}, completing work we started in Exercises 2.3 and 2.4. By (2.5) if 0 < s < 1 then

P_0(L ≤ s) = ∫ p_s(0, x) P_x(T_0 > 1 − s) dx

= 2 ∫_0^∞ (2πs)^{−1/2} e^{−x²/2s} ∫_{1−s}^∞ x (2πr³)^{−1/2} e^{−x²/2r} dr dx

= (1/π) ∫_{1−s}^∞ (sr³)^{−1/2} ∫_0^∞ x e^{−x²(r+s)/2rs} dx dr

= (1/π) ∫_{1−s}^∞ (sr)^{−1/2} s/(r + s) dr

since the inner integral is rs/(r + s). Our next step is to let t = s/(r + s) to convert the integral over r ∈ (1 − s, ∞) into one over t ∈ (0, s). Then dt = −s/(r + s)² dr and t(1 − t) = sr/(r + s)², so (sr)^{−1/2} s/(r + s) dr = −(t(1 − t))^{−1/2} dt, and changing variables gives

P_0(L ≤ s) = (1/π) ∫_0^s (t(1 − t))^{−1/2} dt

Changing variables again with t = u² we get

P_0(L ≤ s) = (2/π) ∫_0^{√s} (1 − u²)^{−1/2} du = (2/π) arcsin(√s)

The reader should note that, contrary to intuition, the density function of L = the last 0 before time 1,

P_0(L = t) = (1/π)(t(1 − t))^{−1/2}   for 0 < t < 1

is symmetric about 1/2 and blows up near 0 and 1. This is one of two arcsine laws associated with Brownian motion. We will encounter the other one in Section 4.9.

The computation for R is much easier and is left to the reader.

Exercise 4.1. Show that the probability density for R is given by

P_0(R = 1 + t) = 1/(π t^{1/2} (1 + t))   for t ≥ 0

Notation. In the last two displays and in what follows we will often write P(T = t) = f(t) as shorthand for "T has density function f(t)."

As our next application of (4.1) we will compute the distribution of B_τ, where τ = inf{t : B_t ∉ H}, H = {x ∈ R^d : x_d > 0}, and, of course, B_t is a d dimensional Brownian motion. Since the exit time depends only on the last coordinate, it is independent of the first d − 1 coordinates, and we can compute the distribution of B_τ by writing, for x, θ ∈ R^{d−1} and y > 0,

P_{(x,y)}(B_τ = (θ, 0)) = ∫_0^∞ ds P_y(τ = s) (2πs)^{−(d−1)/2} e^{−|x−θ|²/2s}

= ∫_0^∞ ds (y/(2π)^{d/2}) s^{−(d+2)/2} e^{−(|x−θ|² + y²)/2s}



by (4.1) with a = y. Changing variables s = (|x−θ|² + y²)/2t gives

$$P_{(x,y)}(B_\tau = (\theta,0)) = \int_0^\infty \frac{y}{(2\pi)^{d/2}} \left(\frac{2t}{|x-\theta|^2+y^2}\right)^{(d+2)/2} e^{-t}\, \frac{|x-\theta|^2+y^2}{2t^2}\, dt$$

so we have

$$(4.3)\qquad P_{(x,y)}(B_\tau = (\theta,0)) = \frac{\Gamma(d/2)}{\pi^{d/2}}\, \frac{y}{(|x-\theta|^2+y^2)^{d/2}}$$

where Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt is the usual gamma function.

When d = 2 and x = 0, probabilists should recognize this as a Cauchy distribution. At first glance, the fact that B_τ has a Cauchy distribution might be surprising, but a moment's thought reveals that this must be true. To explain this, we begin by looking at the behavior of the hitting times {T_a, a ≥ 0} as the level a varies.


(4.4) Theorem. Under P_0, {T_a, a ≥ 0} has stationary independent increments.

Proof  The first step is to notice that if 0 < a < b then

$$T_b \circ \theta_{T_a} = T_b - T_a$$

so if f is bounded and measurable, the strong Markov property (3.7) and translation invariance imply

$$E_0(f(T_b - T_a) \mid \mathcal{F}_{T_a}) = E_0(f(T_b) \circ \theta_{T_a} \mid \mathcal{F}_{T_a}) = E_a f(T_b) = E_0 f(T_{b-a})$$

The desired result now follows from the next lemma, which will be useful later.

(4.5) Lemma. Suppose Z_a is adapted to G_a and that for each bounded measurable function f

$$E(f(Z_b - Z_a) \mid \mathcal{G}_a) = E f(Z_{b-a})$$

Then Z_a has stationary independent increments.

Proof  Let t_0 < t_1 < … < t_n and let f_i, 1 ≤ i ≤ n, be bounded and measurable. Conditioning on F_{t_{n−1}} and using the hypothesis, we have

$$E\prod_{i=1}^n f_i(Z_{t_i} - Z_{t_{i-1}}) = E\left(\prod_{i=1}^{n-1} f_i(Z_{t_i} - Z_{t_{i-1}}) \cdot E\bigl(f_n(Z_{t_n} - Z_{t_{n-1}}) \mid \mathcal{F}_{t_{n-1}}\bigr)\right) = E\left(\prod_{i=1}^{n-1} f_i(Z_{t_i} - Z_{t_{i-1}})\right) \cdot E f_n(Z_{t_n - t_{n-1}})$$

By induction it follows that

$$E\prod_{i=1}^n f_i(Z_{t_i} - Z_{t_{i-1}}) = \prod_{i=1}^n E f_i(Z_{t_i - t_{i-1}})$$

which implies the desired conclusion. □

The scaling relation (1.2) implies

$$(4.6)\qquad T_a \overset{d}{=} a^2 T_1$$

so consulting Section 2.7 of Durrett (1995), we see that T_a has a stable law with index α = 1/2 and skewness parameter κ = 1 (since T_a ≥ 0).

To explain the appearance of the Cauchy distribution in the hitting locations, let τ_a = inf{t : B¹_t = a} (where B¹_t is the first component of a two-dimensional Brownian motion (B¹_t, B²_t)) and observe that another application of the strong Markov property implies

(4.7) Theorem. Under P_0, {C_a = B²(τ_a), a ≥ 0} has stationary independent increments.

Exercise 4.2. Prove (4.7).

The scaling relation (1.3) and an obvious symmetry imply

$$(4.8)\qquad C_a \overset{d}{=} aC_1, \qquad C_a \overset{d}{=} -C_a$$

so again consulting Section 2.7 of Durrett (1995), we can conclude that C_a has the symmetric stable distribution with index α = 1 and hence must be Cauchy.

To give a direct derivation of the last fact, let φ_s(t) = E(exp(itC_s)). Then (4.7) and (4.8) imply

$$\varphi_{r+s}(t) = \varphi_r(t)\varphi_s(t), \qquad \varphi_s(t) = \varphi_1(st), \qquad \varphi_s(-t) = \varphi_s(t)$$

The fact that C_s =^d −C_s implies that φ_s(t) is real. Since t → φ_1(t) is continuous, the second equation implies that s → φ_s(t) is continuous, and a simple argument (see Exercise 4.3) shows that for each t, φ_s(t) = exp(c_t s). The last two equations imply that c_t = tc_1 and c_{−t} = c_t, so c_t = −κ|t| for some κ ≥ 0, and C_s has a Cauchy distribution. Since the arguments above apply equally well to (B¹_t, cB²_t), where c is any constant, we cannot determine κ with this argument.



Exercise 4.3. Suppose φ(s) is real valued, continuous, and satisfies φ(s)φ(t) = φ(s+t), φ(0) = 1. Let ψ(s) = ln φ(s) to get the additive equation ψ(s) + ψ(t) = ψ(s+t). Use the equation to conclude that ψ(m2^{−n}) = m2^{−n} ψ(1) for all integers m, n ≥ 0, and then use continuity to extend this to ψ(t) = tψ(1).

Exercise 4.4. Adapt the argument for φ_s(t) = E(exp(itC_s)) to show that

$$E_0(\exp(-\lambda T_a)) = \exp(-a\kappa\sqrt{\lambda})$$

for some constant κ.

The representation of the Cauchy process given above allows us to see that its sample paths a → T_a and s → C_s are very bad.

Exercise 4.5. If u < v, then P_0(a → T_a is discontinuous in (u,v)) = 1.

Exercise 4.6. If u < v, then P_0(s → C_s is discontinuous in (u,v)) = 1.

Hint. By independent increments, the probabilities in Exercises 4.5 and 4.6 only depend on v − u, but then scaling implies that they do not depend on the size of the interval.

The discussion above has focused on how B_t leaves H. The rest of the section is devoted to studying where B_t goes before it leaves H. We begin with the case d = 1.

(4.9) Theorem. If x, y > 0, then P_x(B_t = y, T_0 > t) = p_t(x,y) − p_t(x,−y), where

$$p_t(x,y) = (2\pi t)^{-1/2} e^{-(y-x)^2/2t}$$

Proof  The proof is a simple extension of the argument we used in Section 1.3 to prove that P_0(T_a ≤ t) = 2P_0(B_t ≥ a). Let f ≥ 0 with f(x) = 0 when x ≤ 0. Clearly,

$$E_x(f(B_t); T_0 > t) = E_x f(B_t) - E_x(f(B_t); T_0 \le t)$$

If we let f̃(x) = f(−x), then it follows from the strong Markov property and the symmetry of Brownian motion that

$$E_x(f(B_t); T_0 \le t) = E_x\bigl(E_0 f(B_{t-T_0}); T_0 \le t\bigr) = E_x\bigl(E_0 \tilde f(B_{t-T_0}); T_0 \le t\bigr) = E_x(\tilde f(B_t); T_0 \le t) = E_x \tilde f(B_t)$$

since f̃(y) = 0 for y ≥ 0. Combining this with the first equality shows

$$E_x(f(B_t); T_0 > t) = E_x f(B_t) - E_x \tilde f(B_t) = \int \bigl(p_t(x,y) - p_t(x,-y)\bigr) f(y)\, dy \qquad \square$$

The last formula generalizes easily to d ≥ 2.

(4.10) Theorem. Let τ = inf{t : B_t^d = 0}. If x, y ∈ H, then

$$P_x(B_t = y, \tau > t) = p_t(x,y) - p_t(x,\bar y)$$

where ȳ = (y_1, …, y_{d−1}, −y_d).

Exercise 4.7. Prove (4.10).
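A quick numerical consistency check on (4.9), added by us and not in the original: integrating its right-hand side over y > 0 must reproduce the survival probability P_x(T_0 > t) = 2Φ(x/√t) − 1 = erf(x/√(2t)) that follows from the reflection argument of Section 1.3.

```python
import math

def p(t, x, y):
    """Transition density p_t(x, y) of one-dimensional Brownian motion."""
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def survival_via_49(t, x, h=1e-3, ymax=40.0):
    """P_x(T_0 > t) by integrating the density in (4.9) over y > 0 (midpoint rule)."""
    n = int(ymax / h)
    return sum((p(t, x, (k + 0.5) * h) - p(t, x, -(k + 0.5) * h)) * h for k in range(n))

def survival_exact(t, x):
    """P_x(T_0 > t) = 2*Phi(x/sqrt(t)) - 1 from the reflection principle."""
    return math.erf(x / math.sqrt(2 * t))

err = max(abs(survival_via_49(t, x) - survival_exact(t, x))
          for t in (0.5, 1.0, 4.0) for x in (0.5, 1.0, 2.0))
print(f"max discrepancy: {err:.2e}")
```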



2  Stochastic Integration

In this chapter we will define our stochastic integral I_t = ∫_0^t H_s dX_s. To motivate the developments, think of X_s as being the price of a stock at time s and H_s as the number of shares we hold, which may be negative (selling short). The integral I_t then represents the net profits at time t, relative to our wealth at time 0. To check this, note that the infinitesimal rate of change of the integral is dI_t = H_t dX_t, i.e., the rate of change of the stock times the number of shares we hold.

In the first section we will introduce the integrands H, the "predictable processes," a mathematical version of the notion that the number of shares held must be based on the past behavior of the stock and not on its future performance. In the second section we will introduce our integrators X, the "continuous local martingales." Intuitively, martingales are fair games, while the "local" refers to the fact that we reduce the integrability requirements to admit a wider class of examples. We restrict our attention to the case of martingales with continuous paths t → X_t to have a simpler theory.

2.1. Integrands: Predictable Processes

To motivate the class of integrands we consider, we will discuss integration w.r.t. discrete time martingales. Here, we will assume that the reader is familiar with the basics of martingale theory, as taught for example in Chapter 4 of Durrett (1995). However, we will occasionally present results whose proofs can be found there.

Let X_n, n ≥ 0, be a martingale w.r.t. F_n. If H_n, n ≥ 1, is any process, we can define

$$(H \cdot X)_n = \sum_{m=1}^n H_m (X_m - X_{m-1})$$

To motivate the last formula and the restrictions we are about to place on the H_m, we will consider a concrete example. Let ξ_1, ξ_2, … be independent with P(ξ_i = 1) = P(ξ_i = −1) = 1/2, and let X_n = ξ_1 + … + ξ_n. X_n is the symmetric simple random walk and is a martingale with respect to F_n = σ(ξ_1, …, ξ_n).



If we consider a person flipping a fair coin and betting $1 on heads each time, then X_n gives their net winnings at time n. Suppose now that the person bets an amount H_m on heads at time m (with H_m < 0 interpreted as a bet of −H_m on tails). I claim that (H · X)_n gives her net winnings at time n. To check this, note that if H_m > 0 our gambler wins her bet at time m and increases her fortune by H_m if and only if X_m − X_{m−1} = 1.

The gambling interpretation of the stochastic integral suggests that it is natural to let the amount bet at time n depend on the outcomes of the first n − 1 flips but not on the flip we are betting on, or on later flips. A process H_n that has H_n ∈ F_{n−1} for all n ≥ 1 (here F_0 = {∅, Ω}, the trivial σ-field) is said to be predictable, since its value at time n can be predicted (with certainty) at time n − 1. The next result shows that we cannot make money by gambling on a fair game.

(1.1) Theorem. Let X_n be a martingale. If H_n is predictable and each H_n is bounded, then (H · X)_n is a martingale.

Proof  It is easy to check that (H · X)_n ∈ F_n. The boundedness of the H_n implies E|(H · X)_n| < ∞ for each n. With this established, we can compute conditional expectations to conclude

$$E((H \cdot X)_{n+1} \mid \mathcal{F}_n) = (H \cdot X)_n + E(H_{n+1}(X_{n+1} - X_n) \mid \mathcal{F}_n) = (H \cdot X)_n + H_{n+1} E(X_{n+1} - X_n \mid \mathcal{F}_n) = (H \cdot X)_n$$

since H_{n+1} ∈ F_n and E(X_{n+1} − X_n | F_n) = 0. □

The last theorem can be interpreted as: you can't make money by gambling on a fair game. This conclusion does not hold if we only assume that H_n is optional, that is, H_n ∈ F_n, since then we can base our bet on the outcome of the coin we are betting on.

Example 1.1. If X_n is the symmetric simple random walk considered above and H_n = ξ_n, then

$$(H \cdot X)_n = \sum_{m=1}^n \xi_m \cdot \xi_m = n$$

since ξ_m² = 1. □
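The definition of (H · X)_n and Example 1.1 are easy to code. In the sketch below (the function and variable names are ours) the optional bet H_m = ξ_m wins every step, while a predictable bet built from the previous flip keeps the game fair.

```python
import random

random.seed(3)

def discrete_integral(H, X):
    """(H.X)_n = sum_{m=1}^n H_m (X_m - X_{m-1}); X has n+1 entries, H has n."""
    return sum(h * (X[m + 1] - X[m]) for m, h in enumerate(H))

n = 50
xi = [random.choice((-1, 1)) for _ in range(n)]   # fair coin flips
X = [0]
for x in xi:
    X.append(X[-1] + x)                           # symmetric simple random walk

# Example 1.1: the optional bet H_m = xi_m gives (H.X)_n = n on every path.
assert discrete_integral(xi, X) == n

# A predictable bet H_m = xi_{m-1} (with H_1 = 1) has mean-zero winnings.
H_pred = [1] + xi[:-1]
print("predictable winnings on this path:", discrete_integral(H_pred, X))
```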

In continuous time, we still want the metatheorem "you can't make money gambling on a fair game" to hold, i.e., we want our integrals to be martingales. However, since the present (t) and past (< t) are not separated, the definition of the class of allowable integrands is more subtle. We will begin by considering a simple example that indicates one problem that must be dealt with.


Example 1.2. Let (Ω, F, P) be a probability space on which there are defined a random variable T with P(T ≤ t) = t for 0 ≤ t ≤ 1 and an independent random variable ξ with P(ξ = 1) = P(ξ = −1) = 1/2. Let

$$X_t = \begin{cases} 0 & t < T \\ \xi & t \ge T \end{cases}$$

and let F_t = σ(X_s : s ≤ t). In words, we wait until time T and then flip a coin. X_t is a martingale with respect to F_t. However, if we define the stochastic integral I_t = ∫_0^t X_s dX_s to be the ordinary Lebesgue-Stieltjes integral, then

$$I_1 = \int_0^1 X_s\, dX_s = \xi \cdot \xi = \xi^2 = 1$$

To check this, note that the measure dX_s corresponds to a mass of size ξ at T and hence the integral is ξ times the value there. Noting now that I_0 = 0 while I_1 = 1, we see that I_t is not a martingale.

The problem with the last example is the same as the problem with the one in discrete time: our bet can depend on the outcome of the event we are betting on. Again there is a gambling interpretation that illustrates what is wrong. Consider the game of roulette. After the wheel is spun and the ball is rolled, people can bet at any time before (<) the ball comes to rest, but not after (≥). One way of guaranteeing that our bet be made strictly before T, i.e., a sufficient condition, is to require that the amount of money we have bet at time t is left continuous. This implies, for instance, that we cannot react instantaneously to take advantage of a jump in the process we are betting on.

The simplest left continuous integrand we can imagine is made by picking a < b and C ∈ F_a, and setting

$$(*)\qquad H(s,\omega) = C(\omega)\, 1_{(a,b]}(s)$$

In words, we buy C(ω) shares of stock at time a based on our knowledge at that time, i.e., C ∈ F_a. We hold them until time b and then sell them all. Clearly, the examples in (∗) should be allowable integrands; one should be able to add two or more of these, and take limits. To encompass the possibilities in the previous sentence, we let Π be the smallest σ-field containing all sets of the form (a, b] × A where A ∈ F_a. The Π we have just defined is called the predictable σ-field. We will demand that our integrands H are measurable w.r.t. Π.

In the previous paragraph, we took the "bottom-up" approach to the definition of Π. That is, we started with some simple examples and then extended the class of integrands by taking sums and limits. For the rest of the section, we will take a "top-down" approach. We will start with some natural requirements



and then add more until we arrive at Π. The descending path is more confusing since it involves four definitions that only differ in subtle ways. However, the reader need not study this material in detail. We will only use the predictable σ-field. The other definitions are included only to allow the reader to make the connection with other treatments and to indirectly make the point that the measure theoretic questions associated with continuous time processes can be quite difficult.

Let F_t be a right continuous filtration. The first and most intuitive concept of "depending on the past behavior of the stock and not on the future performance" is the following:

H(s, ω) is said to be adapted if for each s we have H_s ∈ F_s.

We encountered this notion in our discussion of the Markov property in Chapter 1. In words, it says that the value at time s can be determined from the information we have at time s. The definition above, while intuitive, is not strong enough. We are dealing with a function defined on a product space [0,∞) × Ω, so we need to worry about measurability as a function of the two variables.

H is said to be progressively measurable if for each t the mapping (s, ω) → H(s, ω) from [0,t] × Ω to R is B([0,t]) × F_t measurable.

This is a reasonable definition. However, the "modern" approach is to use a slightly different definition that gives us a slightly smaller σ-field.

Let Λ be the σ-field of subsets of [0,∞) × Ω that is generated by the adapted processes that are right continuous and have left limits, i.e., the smallest σ-field which makes all of these processes measurable. A process H is said to be optional if H(s, ω) is measurable w.r.t. Λ.

According to Dellacherie and Meyer (1978), page 122, the optional σ-field is contained in the progressive σ-field, and in the case of the natural filtration of Brownian motion the inclusion is strict. The subtle distinction between the last two σ-fields is not important for us. The only purpose here for the last two definitions is to prepare for the next one.

Let Π′ be the σ-field of subsets of [0,∞) × Ω that is generated by the left continuous adapted processes. A process H is said to be predictable if H(s, ω) is measurable w.r.t. Π′.

As we will now show, the new definition of predictable is the same as the old one. We have added the ′ only to make the next statement and proof possible.


(1.2) Theorem. Π = Π′.

Proof  Since all the processes used to define Π are left continuous, we have Π ⊂ Π′. To argue the other inclusion, let H(s, ω) be adapted and left continuous, and let H^n(s, ω) = H(m2^{−n}, ω) for m2^{−n} < s ≤ (m+1)2^{−n}. Clearly H^n ∈ Π. Further, since H is left continuous, H^n(s, ω) → H(s, ω) as n → ∞. □

The distinction between the optional and predictable σ-fields is not important for Brownian motion, since in that case the two σ-fields coincide. Our last fact is that, in general, Π ⊂ Λ.

Exercise 1.1. Show that if H(s, ω) = 1_{(a,b]}(s) 1_A(ω) where A ∈ F_a, then H is the limit of a sequence of optional processes; therefore, H is optional and Π ⊂ Λ.

2.2. Integrators: Continuous Local Martingales

In Section 2.1 we described the class of integrands that we will consider: the predictable processes. In this section, we will describe our integrators: continuous local martingales. Continuous, of course, means that for almost every ω, the sample path s → X_s(ω) is continuous. To define local martingale, we need some notation. If T is a nonnegative random variable and Y_t is any process, we define

$$Y_t^T = \begin{cases} Y_{T \wedge t} & \text{on } \{T > 0\} \\ 0 & \text{on } \{T = 0\} \end{cases}$$

(2.1) Definition. X_t is said to be a local martingale (w.r.t. {F_t, t ≥ 0}) if there are stopping times T_n ↑ ∞ so that X_t^{T_n} is a martingale (w.r.t. {F_{t∧T_n}, t ≥ 0}). The stopping times T_n are said to reduce X.

We need to set X_t^T ≡ 0 on {T = 0} to deal with the fact that X_0 need not be integrable. In most of our concrete examples X_0 is a constant, and we can take T_1 > 0 a.s. However, the more general definition is convenient in a number of situations: for example, (ii) below and the definition of the variance process in (3.1).

In the same way we can define local submartingale, locally bounded, locally of bounded variation, etc. In general, we say that a process Y is locally A if there is a sequence of stopping times T_n ↑ ∞ so that the stopped process Y^{T_n} has property A. (Of course, strictly speaking this means we should say "locally a submartingale," but we will continue to use the other term.)



Why local martingales?

There are several reasons for working with local martingales rather than with martingales.

(i) This frees us from worrying about integrability. For example, let X_t be a martingale with continuous paths, and let φ be a convex function. Then φ(X_t) is always a local submartingale (see Exercise 2.3). However, we can conclude that φ(X_t) is a submartingale only if we know E|φ(X_t)| < ∞ for each t, a fact that may be either difficult to check or false in some cases.

(ii) If X_t is defined only on a random time interval [0,τ) with τ < ∞, then the concept of martingale is meaningless, since for large t, X_t is not defined on the whole space. However, it is trivial to define a local martingale: there are stopping times T_n ↑ τ so that X_t^{T_n} is a martingale.

(iii) Since most of our theorems will be proved by introducing stopping times to reduce the problem to a question about nice martingales, the proofs are no harder for local martingales defined on a random time interval than for ordinary martingales.

Reason (iii) is more than just a feeling. There is a construction that makes it almost a theorem. Let X_t be a local martingale defined on [0,τ) and let T_n ↑ τ be a sequence of stopping times that reduces X. Let T_0 = 0, suppose T_1 > 0 a.s., and for k ≥ 1 let

$$\gamma(t) = \begin{cases} t - (k-1) & T_{k-1} + (k-1) \le t \le T_k + (k-1) \\ T_k & T_k + (k-1) \le t \le T_k + k \end{cases}$$

To understand this definition it is useful to write it out for k = 1, 2, 3:

$$\gamma(t) = \begin{cases} t & \text{on } [0, T_1] \\ T_1 & \text{on } [T_1, T_1+1] \\ t-1 & \text{on } [T_1+1, T_2+1] \\ T_2 & \text{on } [T_2+1, T_2+2] \\ t-2 & \text{on } [T_2+2, T_3+2] \\ T_3 & \text{on } [T_3+2, T_3+3] \end{cases}$$

In words, the time change expands [0,τ) onto [0,∞) by waiting one unit of time each time a T_k is encountered. Of course, strictly speaking, γ compresses [0,∞) onto [0,τ), and this is what allows X_{γ(t)} to be defined for all t ≥ 0. The reason for our fascination with the time change can be explained by:

(2.2) Theorem. (X_{γ(t)}, F_{γ(t)}), t ≥ 0, is a martingale.

First we need to show:

(2.3) The Optional Stopping Theorem. Let X be a continuous local martingale. If S ≤ T are stopping times and X_{T∧t} is a uniformly integrable martingale, then E(X_T | F_S) = X_S.

Proof  (7.4) in Chapter 4 of Durrett (1995) shows that if L ≤ M are stopping times and Y_n is a uniformly integrable martingale w.r.t. G_n, then

$$E(Y_M \mid \mathcal{G}_L) = Y_L$$

To extend the result from discrete to continuous time, let S_n = ([2^n S] + 1)/2^n. Applying the discrete time result to the uniformly integrable martingale Y_m = X_{T∧m2^{−n}} with L = 2^n S_n and M = ∞, we see that

$$E(X_T \mid \mathcal{F}_{S_n}) = X_{S_n \wedge T}$$

Letting n → ∞ and using the dominated convergence theorem for conditional expectations ((5.9) in Chapter 4 of Durrett (1995)), the result follows. □

Proof of (2.2)  Let n = [t] + 1. Since γ(t) ≤ T_n ∧ n, using the optional stopping theorem, (2.3), gives X_{γ(t)} = E(X_{T_n∧n} | F_{γ(t)}). Taking conditional expectation with respect to F_{γ(s)}, we get

$$E(X_{\gamma(t)} \mid \mathcal{F}_{\gamma(s)}) = E(X_{T_n \wedge n} \mid \mathcal{F}_{\gamma(s)}) = X_{\gamma(s)}$$

proving the desired result. □

The next result is an example of the simplifications that come from assuming local martingales are continuous.

(2.4) Theorem. If X is a continuous local martingale, we can always take the sequence which reduces X to be T_n = inf{t : |X_t| > n} or any other sequence T_n′ ≤ T_n that has T_n′ ↑ ∞ as n ↑ ∞.

Proof  Let S_n be a sequence that reduces X. If s < t, then applying the optional stopping theorem to X^{S_n} at the times s ∧ T_m′ and t ∧ T_m′ gives

$$E\bigl(X(t \wedge T_m' \wedge S_n) 1_{(S_n > 0)} \mid \mathcal{F}(s \wedge T_m' \wedge S_n)\bigr) = X(s \wedge T_m' \wedge S_n) 1_{(S_n > 0)}$$



Multiplying by 1_{(T_m′ > 0)} ∈ F_0 ⊂ F(s ∧ T_m′ ∧ S_n), we get

$$E\bigl(X(t \wedge T_m' \wedge S_n) 1_{(S_n > 0,\, T_m' > 0)} \mid \mathcal{F}(s \wedge T_m' \wedge S_n)\bigr) = X(s \wedge T_m' \wedge S_n) 1_{(S_n > 0,\, T_m' > 0)}$$

As n ↑ ∞, F(s ∧ T_m′ ∧ S_n) ↑ F(s ∧ T_m′),

$$X(t \wedge T_m' \wedge S_n) 1_{(S_n > 0,\, T_m' > 0)} \to X(t \wedge T_m') 1_{(T_m' > 0)}$$

and |X(t ∧ T_m′ ∧ S_n)| 1_{(S_n > 0, T_m′ > 0)} ≤ m, so it follows from the dominated convergence theorem for conditional expectations (see (5.9) in Chapter 4 of Durrett (1995)) that

$$E\bigl(X(t \wedge T_m') 1_{(T_m' > 0)} \mid \mathcal{F}(s \wedge T_m')\bigr) = X(s \wedge T_m') 1_{(T_m' > 0)}$$

proving the desired result. □

In our definition of local martingale in (2.1), we assumed that X_t^{T_n} is a martingale w.r.t. {F_{t∧T_n}, t ≥ 0}. We did this to make the proof of (2.4) easier. The next exercise shows that we get the same definition if we assume X_t^{T_n} is a martingale w.r.t. {F_t, t ≥ 0}.

Exercise 2.1. Let S be a stopping time. Then X_t^S is a martingale w.r.t. {F_{t∧S}, t ≥ 0} if and only if it is a martingale w.r.t. {F_t, t ≥ 0}.

In the first version of this book I said:

"You should think of a local martingale as something that would be a martingale if it had E|X_t| < ∞."

As many people have pointed out to me, there is a famous example which shows that this is WRONG.

Example 2.1. Let B_t be a Brownian motion in dimension d ≥ 3. In Section 3.1 we will see that X_t = 1/|B_t|^{d−2} is a local martingale. Now

$$E_x|X_t|^p = \int (2\pi t)^{-d/2} e^{-|y-x|^2/2t}\, |y|^{-(d-2)p}\, dy < \infty$$

if and only if p < d/(d−2), since (i) there is no trouble with the convergence for large y, and (ii) ignoring (2πt)^{−d/2} e^{−|y−x|²/2t}, which is bounded near 0, and changing to polar coordinates, we have

$$\int_0^1 r^{-(d-2)p}\, r^{d-1}\, dr < \infty$$

if and only if −(d−2)p + (d−1) > −1, i.e., p < d/(d−2).

To see that X_t is not a martingale, we begin by noting that (B_t − x) =^d t^{1/2}(B_1 − x) under P_x, so as t → ∞, |B_t| → ∞ and X_t → 0 in probability. To show that E_x X_t → 0 we observe that for any R < ∞

$$E_x X_t \le (2\pi t)^{-d/2} \int_{|y| \le R} |y|^{-(d-2)}\, dy + R^{-(d-2)}$$

The integral is convergent, so limsup_{t→∞} E_x X_t ≤ R^{−(d−2)}. Since R is arbitrary, E_x X_t → 0. Since E_x X_0 > 0, it follows that E_x X_t is not constant and hence X_t is not a martingale. □

An even worse counterexample is provided by

Exercise 2.2. In Section 3.1 we will see that if B_t is a two-dimensional Brownian motion, then X_t = log|B_t| is a local martingale. Show that E|X_t|^p < ∞ for any p < ∞ but lim_{t→∞} E X_t = ∞, so X_t is not a martingale.

The last example may leave the reader with the impression that no amount of integrability will guarantee that a local martingale is an honest martingale. However, as the next very useful result shows, this is a false impression.

(2.5) Theorem. If X_t is a local submartingale and

$$E\Bigl(\sup_{0 \le s \le t} |X_s|\Bigr) < \infty$$

for each t, then X_t is a submartingale.

Proof  Clearly our assumption implies E|X_t| < ∞ for each t, so all we have to check is that E(X_t | F_s) ≥ X_s. To do this, we note that if T_n is a sequence of stopping times that reduces X, then

$$E(X_{t \wedge T_n} \mid \mathcal{F}_s) \ge X_{s \wedge T_n}$$

We then let n → ∞ and apply the dominated convergence theorem for conditional expectations. See (5.9) in Chapter 4 of Durrett (1995). □

As usual, multiplying by −1 shows that the last result holds for supermartingales, and once it is true for super- and sub- it is true for martingales (by applying the two other results). The following special case will be important:

(2.6) Corollary. A bounded local martingale is a martingale.


Here and in what follows, when we say that a process X_t is bounded we mean there is a constant M so that with probability 1, |X_t| ≤ M for all t ≥ 0.

In some circumstances we will need a convergence theorem for local martingales. The following simple result is sufficient for our needs.

(2.7) Theorem. Let X_t be a local martingale on [0,τ). If

$$E\Bigl(\sup_{0 \le t < \tau} |X_t|\Bigr) < \infty$$

then X_τ = lim_{t↑τ} X_t exists a.s. and E X_τ = E X_0.

Proof  Let γ be the time change of (2.2) and let Y_t = X_{γ(t)}. Y_t is a martingale. The upcrossing inequality and the proof of the martingale convergence theorem given in Section 4.2 of Durrett (1995) generalize in a straightforward way to continuous time and allow us to conclude that Y_∞ = lim_{t→∞} Y_t exists a.s., and hence X_τ = lim_{t↑τ} X_t exists a.s. To prove the last conclusion, we then use the a.s. convergence and the dominated convergence theorem to conclude E X_τ = E Y_∞ = E Y_0 = E X_0. □

Exercise 2.3. Let X_t be a continuous local martingale and let φ be a convex function. Then φ(X_t) is a local submartingale.

Exercise 2.4. Let X_t be a continuous local martingale and let R < ∞ be a stopping time. Then Y_t = X_{R+t} is a local martingale with respect to G_t = F_{R+t}.

Exercise 2.5. Show that if X_t is a continuous local martingale with X_t ≥ 0 and E X_0 < ∞, then X_t is a supermartingale.

2.3. Variance and Covariance Processes

The next definition may look mysterious, but it is very important.

(3.1) Theorem. If X_t is a continuous local martingale, then we define the variance process ⟨X⟩_t to be the unique continuous predictable increasing process A_t that has A_0 = 0 and makes X_t² − A_t a local martingale.

To warm up for the proof of (3.1), we will prove a result in discrete time.

(3.2) Theorem. Suppose X_n is a martingale with E X_n² < ∞ for all n. Then there is a unique predictable process A_n with A_0 = 0 so that X_n² − A_n is a martingale. Furthermore, n → A_n is increasing.


Proof  Let A_0 = 0 and define for n ≥ 1

$$A_n = A_{n-1} + E(X_n^2 \mid \mathcal{F}_{n-1}) - X_{n-1}^2$$

From the definition, it is immediate that A_n ∈ F_{n−1}, A_n is increasing (since X_n² is a submartingale), and

$$E(X_n^2 - A_n \mid \mathcal{F}_{n-1}) = E(X_n^2 \mid \mathcal{F}_{n-1}) - A_n = X_{n-1}^2 - A_{n-1}$$

so X_n² − A_n has the desired properties. To see that A is unique, observe that if B_n is another process with the desired properties, then A_n − B_n is a martingale and A_n − B_n ∈ F_{n−1}. Therefore

$$A_n - B_n = E(A_n - B_n \mid \mathcal{F}_{n-1}) = A_{n-1} - B_{n-1}$$

and it follows by induction that A_n − B_n = A_0 − B_0 = 0. □
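For a concrete instance of (3.2), take X_n = Σ_{m≤n} c_m ξ_m with deterministic weights c_m and independent fair signs ξ_m. Then E(X_n²|F_{n−1}) − X_{n−1}² = c_n², so the formula in the proof gives A_n = Σ_{m≤n} c_m², and X_n² − A_n has constant expectation 0, i.e., E X_n² = A_n. A Monte Carlo sketch with arbitrarily chosen weights (our addition):

```python
import random

random.seed(2)

c = [0.5, 2.0, 1.0, 3.0, 1.5]                                # deterministic weights
A = [sum(w * w for w in c[:n]) for n in range(len(c) + 1)]   # A_n = sum_{m<=n} c_m^2

n_paths = 100_000
mean_sq = [0.0] * (len(c) + 1)
for _ in range(n_paths):
    X = 0.0
    for n, w in enumerate(c, start=1):
        X += w * random.choice((-1, 1))
        mean_sq[n] += X * X / n_paths

# X_n^2 - A_n is a martingale started at 0, so E X_n^2 = A_n for every n.
errs = [abs(mean_sq[n] - A[n]) for n in range(1, len(c) + 1)]
print("E X_n^2 vs A_n errors:", [round(e, 3) for e in errs])
```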

The key to the uniqueness may be summarized as: "any predictable discrete time martingale is constant." Since Brownian motion is a predictable martingale, the last statement fails in continuous time and we need another assumption.

(3.3) Theorem. Any continuous local martingale X_t that is predictable and locally of bounded variation is constant (in time).

Proof  By subtracting X_0, we may suppose that X_0 = 0 and prove that X_t is identically 0. Let V_t be the variation of X on [0,t]. We leave it to the reader to check:

Exercise 3.1. If X is continuous and locally of bounded variation, then t → V_t is continuous.

Define S = inf{s : V_s > K}. Since t ≤ S implies |X_t| ≤ K, (2.4) implies that the stopped process M_t = X(t ∧ S) is a bounded martingale. Now if s < t, using well known properties of conditional expectation (see e.g., (1.3) and other results in Chapter 4 of Durrett (1995)) we have

$$(3.4)\qquad E((M_t - M_s)^2 \mid \mathcal{F}_s) = E(M_t^2 \mid \mathcal{F}_s) - 2M_s E(M_t \mid \mathcal{F}_s) + M_s^2 = E(M_t^2 - M_s^2 \mid \mathcal{F}_s)$$

an equation that we will refer to as the orthogonality of martingale increments, since the key to its validity is the fact that E(M_s(M_t − M_s) | F_s) = 0 (and hence E(M_s(M_t − M_s)) = 0).


Letting 0 = t_0 < t_1 < … < t_n = t be a subdivision of [0,t], the orthogonality of martingale increments, the inequality Σ_m a_m² ≤ (Σ_m |a_m|) sup_m |a_m|, and the fact that V_t ≤ K imply

$$E M_t^2 = E \sum_{m=1}^n \bigl(M_{t_m}^2 - M_{t_{m-1}}^2\bigr) = E \sum_{m=1}^n (M_{t_m} - M_{t_{m-1}})^2 \le E\Bigl(V_t \sup_m |M_{t_m} - M_{t_{m-1}}|\Bigr) \le K\, E \sup_m |M_{t_m} - M_{t_{m-1}}|$$

If we take a sequence of partitions Δ_n = {0 = t_0^n < t_1^n < … < t_{k_n}^n = t} in which the mesh |Δ_n| = sup_m |t_m^n − t_{m−1}^n| → 0, then continuity implies sup_m |M_{t_m^n} − M_{t_{m−1}^n}| → 0 a.s. Since the sup_m is ≤ 2K, the bounded convergence theorem implies E sup_m |M_{t_m^n} − M_{t_{m−1}^n}| → 0. This shows E M_t² = 0, so M_t = 0 a.s. In the last conclusion t is arbitrary, so with probability one we have M_t = 0 for all rational t. The desired result now follows from continuity. □


With (3.3) established, we can do the

Proof of uniqueness in (3.1)  Suppose A_t and A_t′ are two processes with the desired properties. Then A_t − A_t′ is a continuous local martingale that is locally of bounded variation and hence must be constant. Since A_0 = A_0′ = 0, it follows that A_t = A_t′ for all t. □

Proof of existence in (3.1) when X_t is a bounded martingale  The existence of A is considerably more complicated than its uniqueness. To inspire the reader for the battle to come, we would like to point out that (i) we are about to prove the special case that we need of the celebrated Doob-Meyer decomposition, and (ii) the construction will provide some useful information, e.g., (7.6). Though (3.1) and (3.8) are quite important, the details of their proofs are not, so the squeamish reader can skip ahead to the boldfaced "End of Existence Proof" five pages ahead.

Given a partition Δ = {0 = t_0 < t_1 < t_2 < …} with lim_k t_k = ∞, we let k(t) = sup{k : t_k < t} be the index of the last point before time t. (To avoid confusion later, we emphasize now that k(t) is a number, not a random variable.) We next define, for a process X, an approximate quadratic variation by

$$Q_t^\Delta(X) = \sum_{k=1}^{k(t)} (X_{t_k} - X_{t_{k-1}})^2 + (X_t - X_{t_{k(t)}})^2$$

If the reader recalls what we are trying to define (see (3.1)), then the reason for the definition is explained by

(a) Lemma. If X_t is a bounded continuous martingale, then X_t² − Q_t^Δ(X) is a martingale.

Proof  To prove this, we first note that reasoning as in the proof of (3.4), one can conclude that if r < s < t then

$$E((X_t - X_r)^2 \mid \mathcal{F}_s) = E((X_t - X_s)^2 \mid \mathcal{F}_s) + 2(X_s - X_r) E(X_t - X_s \mid \mathcal{F}_s) + (X_s - X_r)^2 = E((X_t - X_s)^2 \mid \mathcal{F}_s) + (X_s - X_r)^2$$

since E(X_t − X_s | F_s) = 0. Writing Q_t^Δ = Q_s^Δ + (Q_t^Δ − Q_s^Δ) and working out the difference,

$$Q_t^\Delta - Q_s^\Delta = \sum_{k=1}^{k(t)} (X_{t_k} - X_{t_{k-1}})^2 + (X_t - X_{t_{k(t)}})^2 - \sum_{k=1}^{k(s)} (X_{t_k} - X_{t_{k-1}})^2 - (X_s - X_{t_{k(s)}})^2$$

$$= (X_{t_{k(s)+1}} - X_{t_{k(s)}})^2 - (X_s - X_{t_{k(s)}})^2 + \sum_{k=k(s)+2}^{k(t)} (X_{t_k} - X_{t_{k-1}})^2 + (X_t - X_{t_{k(t)}})^2$$

The next step is to take conditional expectation with respect to F_s and use the first formula with r = t_{k(s)} and t = t_{k(s)+1}. To neaten up the result, define a sequence u_i, k(s) ≤ i ≤ k(t)+1, by letting u_{k(s)} = s, u_i = t_i for k(s) < i ≤ k(t), and u_{k(t)+1} = t. Then

$$E(X_t^2 - Q_t^\Delta(X) \mid \mathcal{F}_s) = E(X_t^2 \mid \mathcal{F}_s) - Q_s^\Delta(X) - E\left(\sum_{i=k(s)+1}^{k(t)+1} (X_{u_i} - X_{u_{i-1}})^2 \,\Bigm|\, \mathcal{F}_s\right)$$

$$= E(X_t^2 \mid \mathcal{F}_s) - Q_s^\Delta(X) - E\left(\sum_{i=k(s)+1}^{k(t)+1} (X_{u_i}^2 - X_{u_{i-1}}^2) \,\Bigm|\, \mathcal{F}_s\right) = X_s^2 - Q_s^\Delta(X)$$


where the second equality follows from (3.4). □
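For Brownian motion the limit this machinery produces is the familiar one: each cell contributes E(B_{t_k} − B_{t_{k−1}})² = t_k − t_{k−1}, so Q_r^Δ(B) concentrates at r as the mesh shrinks (on a uniform n-cell partition, Var Q_r^Δ = 2r²/n). A simulation sketch, our addition with arbitrary mesh sizes:

```python
import math
import random

random.seed(4)
r = 1.0

def q_r(n_steps):
    """Q_r^Delta(B) for Brownian motion on the uniform partition of [0, r] with n_steps cells."""
    dt = r / n_steps
    return sum(random.gauss(0.0, math.sqrt(dt)) ** 2 for _ in range(n_steps))

means, spreads = [], []
for n_steps in (10, 100, 1000):
    draws = [q_r(n_steps) for _ in range(200)]
    means.append(sum(draws) / len(draws))
    spreads.append(sum((d - r) ** 2 for d in draws) / len(draws))  # estimates 2*r*r/n_steps
    print(f"n = {n_steps:4d}: mean Q = {means[-1]:.3f}, E(Q - r)^2 = {spreads[-1]:.4f}")
```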

. sj.. .. .. .g j

. . . . ' . . . .

Looking at (a)and noticing that Q)(.X) is incremsing except for the lastterm, the reader can probably leap to $he conclusion that we will conssxuct theprocess . desired in (3.1)by taking a sequence of partitions with mesh goingto 0 and proving that the limit exists. Lemma (c)is the heart (ofdarkness)of the prpof. To prepare for its proof we note that using (a),taking expectedvalue and using (3.4)gives ,

(b)

To ispire you for the proof of (c),Fe propise that once (c)is established therest of the proof of (3.1)is routine.

Eqtxt - Q)(-Y)IF,) = EX3 - .Y.71.'&)

=ztxt - xalzlsa)


Summing the squares gives' h aa' 2Qr (Q ) S Qr (.X')suptlapx.z + Xsg - 2.X,(.))

kwherejkj = suptj : tj %skl. By the Cauchy-schwarz inequality

a ' a hh' 2 l/2 4Eov * (Q ))K LEQ. (X) J E suplAlk..z + X'k - 2A)J(,))

Whenever IAI+ I'l -+ 0 the second factor gos to 0, since X is bounded andti it is enough to provecon nuous, so

lf IX 1K M for a11 thn EQhL'Xj2 S 12M4(e) r.

Th valu 12 is not ifnprtant but it does mak it clear that the bond doesot Edepend

on r, A, or L'. '

Proof of (e)  If ΔΔ' is the partition 0 = s_0 < s_1 < ... < s_n = r, then

    Q_r^{ΔΔ'}(X)² = ( Σ_{m=1}^n (X_{s_m} − X_{s_{m−1}})² )²

(e.1)    = Σ_{m=1}^n (X_{s_m} − X_{s_{m−1}})⁴
         + 2 Σ_{m=1}^{n−1} (X_{s_m} − X_{s_{m−1}})² ( Q_r^{ΔΔ'}(X) − Q_{s_m}^{ΔΔ'}(X) )

To bound the first term on the right-hand side we note that if |X_t| ≤ M for all t, then some arithmetic and (3.4) imply

(e.2)    E Σ_{m=1}^n (X_{s_m} − X_{s_{m−1}})⁴ ≤ (2M)² E Σ_{m=1}^n (X_{s_m} − X_{s_{m−1}})²
         = 4M² E Σ_{m=1}^n ( X_{s_m}² − X_{s_{m−1}}² ) = 4M² E( X_r² − X_0² ) ≤ 4M⁴

To bound the second term we note that (X_{s_m} − X_{s_{m−1}})² ∈ F_{s_m}, then use (b) and |X_r − X_{s_m}| ≤ 2M to get

(e.3)    E( (X_{s_m} − X_{s_{m−1}})² ( Q_r^{ΔΔ'}(X) − Q_{s_m}^{ΔΔ'}(X) ) | F_{s_m} )
         = (X_{s_m} − X_{s_{m−1}})² E( Q_r^{ΔΔ'}(X) − Q_{s_m}^{ΔΔ'}(X) | F_{s_m} )
         = (X_{s_m} − X_{s_{m−1}})² E( (X_r − X_{s_m})² | F_{s_m} )
         ≤ (2M)² (X_{s_m} − X_{s_{m−1}})²

(c) Lemma. Let X_t be a bounded continuous martingale. Fix r > 0 and let Δ_n be a sequence of partitions 0 = t_0^n < t_1^n < ... < t_{k_n}^n = r of [0, r] with mesh |Δ_n| = sup_k |t_k^n − t_{k−1}^n| → 0. Then Q_r^{Δ_n}(X) converges to a limit in L².

Proof  If Δ and Δ' are two partitions, we call ΔΔ' the partition obtained by taking all the points in Δ and in Δ'. If we apply (a) twice and take differences, we see that Y_t = Q_t^Δ(X) − Q_t^{Δ'}(X) is a martingale. By definition Y_t² − Q_t^{ΔΔ'}(Y) is a martingale, so

    E( Q_r^Δ(X) − Q_r^{Δ'}(X) )² = E Y_r² = E Q_r^{ΔΔ'}(Y)

For reasons that will become clear in a moment, we will drop the argument from Q_t^Δ when it is X, and we will drop the r when we want to refer to the process t → Q_t^Δ. Since

(3.5)    (a + b)² ≤ 2a² + 2b²  for any real numbers a and b

(since 2a² + 2b² − (a + b)² = (a − b)² ≥ 0), we have

    Q_r^{ΔΔ'}(Y) ≤ 2 Q_r^{ΔΔ'}(Q^Δ) + 2 Q_r^{ΔΔ'}(Q^{Δ'})

Combining the last two results about Q we see that it is enough to prove that

(d)  if |Δ| + |Δ'| → 0 then E Q_r^{ΔΔ'}(Q^Δ) → 0.

To do this let s_k ∈ ΔΔ' and t_j ∈ Δ so that t_j ≤ s_k < s_{k+1} ≤ t_{j+1}. Recalling the definition of Q_s^Δ(X) and doing a little arithmetic we have

    Q_{s_{k+1}}^Δ − Q_{s_k}^Δ = (X_{s_{k+1}} − X_{t_j})² − (X_{s_k} − X_{t_j})²
        = (X_{s_{k+1}} − X_{s_k})² + 2 (X_{s_{k+1}} − X_{s_k})(X_{s_k} − X_{t_j})
        = (X_{s_{k+1}} − X_{s_k})(X_{s_{k+1}} + X_{s_k} − 2X_{t_j})


Taking expected value in (e.3), summing over m, and using (3.4) as before, we get

(e.4)    E Σ_{m=1}^{n−1} (X_{s_m} − X_{s_{m−1}})² ( Q_r^{ΔΔ'}(X) − Q_{s_m}^{ΔΔ'}(X) )
         ≤ 4M² E Σ_{m=1}^{n−1} ( X_{s_m}² − X_{s_{m−1}}² ) ≤ 4M² E X_r² ≤ 4M⁴

Plugging this bound and (e.2) into (e.1) proves (e), which completes the proof of (d) and hence of (c). □

It only remains to put the pieces together. Let Δ_n be the partition with points k2^{−n}r, 0 ≤ k ≤ 2^n. Since Q_t^{Δ_m} − Q_t^{Δ_n} is a martingale (by (a)), using the L² maximal inequality (see e.g., (4.3) in Chapter 4 of Durrett (1995)) gives

(f)    E sup_{t ≤ r} | Q_t^{Δ_m} − Q_t^{Δ_n} |² ≤ 4 E | Q_r^{Δ_m} − Q_r^{Δ_n} |²

From this and (c) it follows that

(g)  There is a subsequence so that Q^{Δ_{n(k)}} converges to a limit A_t uniformly on [0, r].


The last detail for the case in which X is a bounded martingale is to check that X_t² − A_t is a martingale. This follows from the next result (with p = 2).

(3.6) Lemma. Suppose that for each n, Z_t^n is a martingale with respect to F_t, and that for each t, Z_t^n → Z_t in L^p where p ≥ 1. Then Z_t is a martingale.

Proof  The martingale property implies that if s < t then E( Z_t^n | F_s ) = Z_s^n. The right-hand side converges to Z_s in L^p. To check that the left-hand side converges to E( Z_t | F_s ), we note that linearity of conditional expectation and Jensen's inequality for conditional expectation (see e.g., (1.1a) and (1.1d) in Chapter 4 of Durrett (1995)) imply

    E | E( Z_t^n | F_s ) − E( Z_t | F_s ) |^p = E | E( Z_t^n − Z_t | F_s ) |^p
        ≤ E E( | Z_t^n − Z_t |^p | F_s ) = E | Z_t^n − Z_t |^p → 0

Taking limits in L^p it follows that E( Z_t | F_s ) = Z_s and the proof is complete. □

Proof of existence in (3.1) when X is a local martingale  Our first step in extending the result to local martingales is to prove a result about the quadratic variation of a stopped martingale. Once one remembers the definition of X^T given at the beginning of Section 2.2, the next result should be obvious: the quadratic variation does not increase after the process stops.

(3.7) Lemma. Let X be a bounded martingale, and T be a stopping time. Then ⟨X^T⟩ = ⟨X⟩^T.

Proof  By the optional stopping theorem (X^T)² − ⟨X⟩^T is a local martingale, so the result follows from uniqueness. □

Let T_n be a sequence of stopping times increasing to ∞ so that

    X_t^n = X_{T_n ∧ t} 1_{(T_n > 0)}  is a bounded martingale

By the previous result there is a unique continuous predictable increasing process A^n so that (X_t^n)² − A_t^n is a martingale. By (3.7), A_t^n = A_t^{n+1} for t ≤ T_n, so we can unambiguously define ⟨X⟩_t = A_t^n for t ≤ T_n. Clearly, ⟨X⟩ is continuous, predictable, and increasing. The definition implies X_{T_n ∧ t}² 1_{(T_n > 0)} − ⟨X⟩_{T_n ∧ t} is a martingale, so X_t² − ⟨X⟩_t is a local martingale. □

End of Existence Proof

Proof  Since Q_r^{Δ_n} converges in L², it is a Cauchy sequence in L², and we can pick an increasing sequence n(k) so that for m ≥ n(k)

    E | Q_r^{Δ_m} − Q_r^{Δ_{n(k)}} |² ≤ 2^{−k}

Using (f) and Chebyshev's inequality it follows that

    P( sup_{t ≤ r} | Q_t^{Δ_{n(k+1)}} − Q_t^{Δ_{n(k)}} | > 1/k² ) ≤ k⁴ 2^{−k}

Since the right-hand side is summable, the Borel–Cantelli lemma implies the inequality on the right only fails finitely many times, and the desired result follows. □

By taking subsequences of subsequences and using a diagonal argument we can get uniform convergence on [0, N] for all N. Since each Q^{Δ_n} is continuous, A is as well. Because of the last term (X_t − X_{t_{k(t)}})², Q_t^{Δ_n} is not increasing. However, if m ≥ n then k → Q^{Δ_m}_{k2^{−n}r} is increasing, so k → A_{k2^{−n}r} is increasing. Since n is arbitrary and t → A_t is continuous, it follows that t → A_t is increasing. Finally, we have the following extension of (c) that will be useful later.


(3.8) Theorem. Let X_t be a continuous local martingale. For every t and sequence of subdivisions Δ_n of [0, t] with mesh |Δ_n| → 0 we have

    sup_{s ≤ t} | Q_s^{Δ_n}(X) − ⟨X⟩_s | → 0  in probability
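The convergence in (3.8) is easy to see empirically for Brownian motion, where ⟨B⟩_t = t. The following sketch (not from the book; plain Python with a fine partition of [0, 1]) sums the squared increments of a simulated path:

```python
import random, math

# Empirical check of (3.8) for Brownian motion on [0, 1]: the sum of squared
# increments over a fine partition should be close to <B>_1 = 1.
random.seed(0)
n = 20000
dt = 1.0 / n
qv = 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(dt))  # Brownian increment over one cell
    qv += db * db
print(qv)  # close to 1.0
```

The fluctuation of the sum around t is of order (2t|Δ_n|)^{1/2}, so refining the partition visibly tightens the approximation.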


Proof  Let δ, ε > 0. We can find a stopping time S so that X^S is a bounded martingale and P(S ≤ t) ≤ δ. It is clear that Q_s^Δ(X) and Q_s^Δ(X^S) coincide on [0, S]. From the definition and (3.7) it follows that ⟨X⟩ and ⟨X^S⟩ are equal on [0, S]. So we have

    P( sup_{s ≤ t} | Q_s^Δ(X) − ⟨X⟩_s | > ε ) ≤ δ + P( sup_{s ≤ t} | Q_s^Δ(X^S) − ⟨X^S⟩_s | > ε )

Since X^S is a bounded martingale, (c) implies that the last term goes to 0 as |Δ| → 0. □

(3.9) Definition. If X and Y are two continuous local martingales, we let

    ⟨X, Y⟩_t = (1/4)( ⟨X + Y⟩_t − ⟨X − Y⟩_t )

Remark. If X and Y are random variables with mean zero,

    cov(X, Y) = EXY = (1/4)( E(X + Y)² − E(X − Y)² ) = (1/4)( var(X + Y) − var(X − Y) )

so it is natural, I think, to call ⟨X, Y⟩ the covariance of X and Y.

Given the definitions of ⟨X⟩_t and ⟨X, Y⟩_t, the reader should not be surprised that if for a given partition Δ = {0 = t_0 < t_1 < t_2 ...} with lim_n t_n = ∞ we let k(t) = sup{k : t_k < t} and define

    Q_t^Δ(X, Y) = Σ_{k=1}^{k(t)} (X_{t_k} − X_{t_{k−1}})(Y_{t_k} − Y_{t_{k−1}}) + (X_t − X_{t_{k(t)}})(Y_t − Y_{t_{k(t)}})

then we have

(3.10) Theorem. Let X and Y be continuous local martingales. For every t and sequence of partitions Δ_n of [0, t] with mesh |Δ_n| → 0 we have

    sup_{s ≤ t} | Q_s^{Δ_n}(X, Y) − ⟨X, Y⟩_s | → 0  in probability

Proof  Since

    Q_t^{Δ_n}(X, Y) = (1/4)( Q_t^{Δ_n}(X + Y) − Q_t^{Δ_n}(X − Y) )

this follows immediately from the definition of ⟨X, Y⟩ and (3.8). □

Our next result is useful for computing ⟨X, Y⟩_t.

(3.11) Theorem. Suppose X_t and Y_t are continuous local martingales. ⟨X, Y⟩_t is the unique continuous predictable process A_t that is locally of bounded variation, has A_0 = 0, and makes X_t Y_t − A_t a local martingale.

Proof  From the definition, it is easy to see that

    X_t Y_t − ⟨X, Y⟩_t = (1/4){ (X_t + Y_t)² − ⟨X + Y⟩_t } − (1/4){ (X_t − Y_t)² − ⟨X − Y⟩_t }

is a local martingale. To prove uniqueness, observe that if A_t and A'_t are two processes with the desired properties, then A_t − A'_t = (X_t Y_t − A'_t) − (X_t Y_t − A_t) is a continuous local martingale that is locally of bounded variation and hence ≡ 0 by (3.3). □

From (3.11) we get several useful properties of the covariance. In what follows, X, Y, and Z are continuous local martingales. First, since XY = YX,

    ⟨X, Y⟩_t = ⟨Y, X⟩_t

Exercise 3.2. ⟨X + Y, Z⟩_t = ⟨X, Z⟩_t + ⟨Y, Z⟩_t.

Exercise 3.3. ⟨X − Y, Z⟩_t = ⟨X, Z⟩_t − ⟨Y, Z⟩_t.

Exercise 3.4. If a, b are real numbers then ⟨aX, bY⟩_t = ab⟨X, Y⟩_t. Taking a = b, X = Y, and noting ⟨X, X⟩_t = ⟨X⟩_t, it follows that ⟨aX⟩_t = a²⟨X⟩_t.

Exercise 3.5. ⟨X^T, Y^T⟩ = ⟨X, Y^T⟩ = ⟨X, Y⟩^T

Since we are lazy we will wait until Section 2.5 to prove this. We invite the reader to derive this directly from the definitions. The next two exercises prepare for the third, which will be important in what follows:
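The polarization identity in (3.9) and the cross-variation sums of (3.10) can be illustrated numerically. The sketch below (not from the book) builds two correlated Brownian motions with ⟨X, Y⟩_t = ρt and compares Q(X, Y) with (1/4)(Q(X+Y) − Q(X−Y)); the two agree exactly increment by increment, and both approximate ρt:

```python
import random, math

# X = B1 and Y = rho*B1 + sqrt(1-rho^2)*B2 are Brownian motions with
# <X, Y>_t = rho * t. Compare the cross-variation sum with polarization.
random.seed(1)
n, rho = 20000, 0.6
dt = 1.0 / n
q_xy = q_plus = q_minus = 0.0
for _ in range(n):
    db1 = random.gauss(0.0, math.sqrt(dt))
    db2 = random.gauss(0.0, math.sqrt(dt))
    dx = db1
    dy = rho * db1 + math.sqrt(1 - rho * rho) * db2
    q_xy += dx * dy                 # cross-variation increment
    q_plus += (dx + dy) ** 2        # quadratic variation of X + Y
    q_minus += (dx - dy) ** 2       # quadratic variation of X - Y
print(q_xy, 0.25 * (q_plus - q_minus))  # both close to rho = 0.6
```

Since (1/4){(dx+dy)² − (dx−dy)²} = dx·dy for each increment, the polarization sum reproduces the cross-variation sum exactly, mirroring the proof of (3.10).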


Exercise 3.6. If X_t is a bounded martingale, then X_t² − ⟨X⟩_t is a uniformly integrable martingale.

Exercise 3.7. If X is a bounded martingale and S is a stopping time, then Y_t = X_{S+t} − X_S is a martingale with respect to F_{S+t}, and ⟨Y⟩_t = ⟨X⟩_{S+t} − ⟨X⟩_S.

Exercise 3.8. If S ≤ T are stopping times and ⟨X⟩_S = ⟨X⟩_T, then X is constant on [S, T].

Exercise 3.9. Conversely, if S ≤ T are stopping times and X is constant on [S, T], then ⟨X⟩_S = ⟨X⟩_T.

2.4. Integration w.r.t. Bounded Martingales

In this section we will explain how to integrate predictable processes w.r.t. bounded continuous martingales. Once we establish the Kunita–Watanabe inequality in Section 2.5, we will be able to treat a general continuous local martingale in Section 2.6.

As in the case of the Lebesgue integral, (i) we will begin with the simplest functions and then take limits to get the general case, and (ii) the sequence of steps used in defining the integral will also be useful in proving results later on.

Step 1: Basic Integrands

We say H(s, ω) is a basic predictable process if H(s, ω) = 1_{(a,b]}(s) C(ω) where C ∈ F_a. Let Π_0 = the set of basic predictable processes. If H = 1_{(a,b]} C and X is continuous, then it is clear that we should define

    ∫ H_s dX_s = C(ω)( X_b(ω) − X_a(ω) )

Here we restrict our attention to integrating over [0, ∞), since once we know how to do that we can define the integral over [0, t] by

    (H·X)_t ≡ ∫₀ᵗ H_s dX_s = ∫ H_s 1_{[0,t]}(s) dX_s

To extend our integral we will need to keep proving versions of the next three results. Under suitable assumptions on H, K, X, and Y:

(a)  (H·X)_t is a continuous local martingale
(b)  ((H + K)·X)_t = (H·X)_t + (K·X)_t,  (H·(X + Y))_t = (H·X)_t + (H·Y)_t
(c)  ⟨H·X, K·Y⟩_t = ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

In this step we will only prove a version of (a). The other two results will make their first appearance in Step 2. Before plunging into the details, recall that at the end of Section 2.2 we defined H to be bounded if there is an M so that with probability one |H_t| ≤ M for all t ≥ 0.

(4.1.a) Theorem. If X is a continuous martingale and H ∈ bΠ_0 = {H ∈ Π_0 : H is bounded}, then (H·X)_t is a continuous martingale.

Proof

    (H·X)_t = 0,                0 ≤ t ≤ a
    (H·X)_t = C( X_t − X_a ),   a ≤ t ≤ b
    (H·X)_t = C( X_b − X_a ),   b ≤ t < ∞

From this it is clear that (H·X)_t is continuous, (H·X)_t ∈ F_t, and E|(H·X)_t| < ∞. Since (H·X)_t is constant for t ∉ [a, b], we can check the martingale property by considering only a ≤ s < t ≤ b. (See Exercise 4.1 below.) In this case,

    E( (H·X)_t | F_s ) − (H·X)_s = E( (H·X)_t − (H·X)_s | F_s )
        = E( C( X_t − X_s ) | F_s )
        = C E( X_t − X_s | F_s ) = 0

where the last two equalities follow from C ∈ F_a and E( X_t | F_s ) = X_s. □

Exercise 4.1. If Y_t is constant for t ∉ [a, b] and E( Y_t | F_s ) = Y_s when a ≤ s < t ≤ b, then Y_t is a martingale.

Step 2: Simple Integrands

We say H(s, ω) is a simple predictable process, and write H ∈ Π_1, if H can be written as the sum of a finite number of basic predictable processes. It is easy to see that if H ∈ Π_1 then H can be written as

    H(s, ω) = Σ_{i=1}^m 1_{(t_{i−1}, t_i]}(s) C_i(ω)

where t_0 < t_1 < ... < t_m and C_i ∈ F_{t_{i−1}}. In this case we let

    ∫ H_s dX_s = Σ_{i=1}^m C_i ( X_{t_i} − X_{t_{i−1}} )
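The integral of a simple integrand can be coded directly. A minimal sketch (not from the book; here the C_i are constants, whereas in the text they are F_{t_{i−1}}-measurable random variables), restricted to [0, t] as in the display above:

```python
# Integral of a simple process H = sum_i C_i 1_{(t_{i-1}, t_i]} against a path
# X, up to time t: (H.X)_t = sum_i C_i (X_{t_i ^ t} - X_{t_{i-1} ^ t}).
def simple_integral(times, coeffs, X, t):
    """times = [t_0, ..., t_m]; coeffs[i] multiplies (t_i, t_{i+1}];
    X is a function of time; returns (H.X)_t."""
    total = 0.0
    for i, c in enumerate(coeffs):
        lo = min(times[i], t)
        hi = min(times[i + 1], t)
        total += c * (X(hi) - X(lo))
    return total

# Deterministic check with X(s) = s: the result is the Lebesgue integral of H.
val = simple_integral([0.0, 1.0, 2.0], [3.0, 5.0], lambda s: s, 1.5)
print(val)  # 3*1 + 5*0.5 = 5.5
```

Capping both interval endpoints at t is exactly the 1_{[0,t]} truncation in the definition of (H·X)_t.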


The representation in the definition of H is not unique, since one may subdivide the intervals into two or more pieces, but it is easy to see that the right-hand side does not depend on the representation of H chosen.

(4.2.b) Theorem. Suppose X and Y are continuous martingales. If H, K ∈ Π_1, then

    ((H + K)·X)_t = (H·X)_t + (K·X)_t
    (H·(X + Y))_t = (H·X)_t + (H·Y)_t

Proof  Let H¹ = H and H² = K. By subdividing the intervals in the definitions, we can suppose H_s^j = Σ_{i=1}^m 1_{(t_{i−1}, t_i]}(s) C_i^j for j = 1, 2. In this case it is easy to see that each side of the first identity is equal to

    Σ_{i=1}^m ( C_i¹ + C_i² )( X_{t_i} − X_{t_{i−1}} )

and each side of the second is equal to

    Σ_{i=1}^m C_i¹{ ( X_{t_i} − X_{t_{i−1}} ) + ( Y_{t_i} − Y_{t_{i−1}} ) }    □

Since the sum of a finite number of continuous martingales is a continuous martingale, it follows from (4.1.a) and (4.2.b) that we have

(4.2.a) Theorem. If X is a continuous martingale and H ∈ bΠ_1 = {H ∈ Π_1 : H is bounded}, then (H·X)_t is a continuous martingale.

Turning to our third result,

(4.2.c) Theorem. If X and Y are bounded continuous martingales and H, K ∈ bΠ_1, then

    ⟨H·X, K·Y⟩_t = ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

Consequently, E( (H·X)_t (K·Y)_t ) = E ∫₀ᵗ H_s K_s d⟨X, Y⟩_s and

    E (H·X)_t² = E ∫₀ᵗ H_s² d⟨X⟩_s

Remark. Here and in what follows, integrals with respect to ⟨X, Y⟩_s are Lebesgue–Stieltjes integrals. That is, since s → ⟨X, Y⟩_s is locally of bounded variation, it defines a σ-finite signed measure, so we fix an ω and then integrate with respect to that measure. In the case under consideration the integrand is piecewise constant, so the integral is trivial.

Proof  To prove these results it suffices to show that

    Z_t = (H·X)_t (K·Y)_t − ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

is a martingale, for then the first result follows from (3.11), the second by taking expected values, and the third formula follows from the second by taking K = H and X = Y. To prove Z_t is a martingale we begin by noting that

    ((H¹ + H²)·X)_t (K·Y)_t = (H¹·X)_t (K·Y)_t + (H²·X)_t (K·Y)_t

    ∫₀ᵗ ( H_s¹ + H_s² ) K_s d⟨X, Y⟩_s = ∫₀ᵗ H_s¹ K_s d⟨X, Y⟩_s + ∫₀ᵗ H_s² K_s d⟨X, Y⟩_s

The last two observations imply that if the result holds for (H¹, K) and (H², K), it holds for (H¹ + H², K). Similarly, if the result holds for (H, K¹) and (H, K²), it holds for (H, K¹ + K²). In view of the results in the last paragraph, we can now prove the result by establishing it first in the case H = 1_{(a,b]} C, K = 1_{(c,d]} D, and we can furthermore assume that (i) b ≤ c or (ii) a = c, b = d.

Case 1. In this case ∫₀ᵗ H_s K_s d⟨X, Y⟩_s ≡ 0, so we need to show that (H·X)_t (K·Y)_t is a martingale. To prove this we observe that if J = C( X_b − X_a ) D 1_{(c,d]} then (H·X)_t (K·Y)_t = (J·Y)_t, which is a martingale by (4.1.a).

Case 2. In this case

    Z_s = 0,                                                          s ≤ a
    Z_s = CD{ (X_s − X_a)(Y_s − Y_a) − ( ⟨X, Y⟩_s − ⟨X, Y⟩_a ) },     a ≤ s ≤ b
    Z_s = CD{ (X_b − X_a)(Y_b − Y_a) − ( ⟨X, Y⟩_b − ⟨X, Y⟩_a ) },     s ≥ b

so it suffices to check the martingale property for a ≤ s ≤ t ≤ b. To do this we note

    Z_t − Z_s = CD( X_t Y_t − X_s Y_s − X_a( Y_t − Y_s ) − Y_a( X_t − X_s ) − ( ⟨X, Y⟩_t − ⟨X, Y⟩_s ) )

Taking conditional expectation and noting X_a, Y_a ∈ F_s and E( Y_t − Y_s | F_s ) = 0 = E( X_t − X_s | F_s ), we have

    E( Z_t − Z_s | F_s ) = CD E( X_t Y_t − ⟨X, Y⟩_t − ( X_s Y_s − ⟨X, Y⟩_s ) | F_s ) = 0

This completes the proof of Case 2 and finishes the proof of (4.2.c). □


Since ||H^n − H^m||_X → 0 as m, n → ∞, (H^n·X) is a Cauchy sequence in M², and by (4.6) must converge to a limit in M², which we define to be the integral (H·X). To see that the limit is independent of the sequence of approximations chosen, suppose ||H^n − H||_X → 0 and ||K^n − H||_X → 0 with H^n·X → Y and K^n·X → Z. Form a third approximating sequence by setting L^n = H^n if n is odd, L^n = K^n if n is even. ||L^n − H||_X → 0, so we have L^n·X → W. Looking at the even and odd subsequences of L^n, it follows that Y = Z = W, so the limit is unique. In words, the last argument shows that if the limit exists for ANY sequence of approximations, it must be unique. Note that as a corollary of the construction we get

(4.3.a) Theorem. If X is a bounded continuous martingale and H ∈ Π₂(X), then H·X ∈ M² and is continuous.

Proof  The fact that H·X ∈ M² is automatic from the definition. If H^n ∈ bΠ₁ have ||H^n − H||_X → 0, then (4.2.a) implies (H^n·X)_t is continuous. Since ||(H^n·X) − (H·X)||₂ → 0, using Chebyshev and the L² maximal inequality we get that

    P( sup_t | (H^n·X)_t − (H·X)_t | > ε ) ≤ ε^{−2} E sup_t | (H^n·X)_t − (H·X)_t |²
        ≤ ε^{−2} · 4 || (H^n·X) − (H·X) ||₂² → 0

Since this holds for any ε > 0, we say "(H^n·X)_t converges uniformly to (H·X)_t in probability." By passing to a subsequence we can have

    sup_t | (H^n·X)_t − (H·X)_t | → 0  a.s.

and it follows that t → (H·X)_t is continuous. □

(4.3.b) Theorem. If X is a bounded continuous martingale, and H, K ∈ Π₂(X), then H + K ∈ Π₂(X) and

    ((H + K)·X)_t = (H·X)_t + (K·X)_t

Proof  The triangle inequality for the norm ||·||_X implies that H + K ∈ Π₂(X). Let H^n, K^n ∈ bΠ₁ with ||H^n − H||_X → 0 and ||K^n − K||_X → 0. The triangle inequality implies ||(H^n + K^n) − (H + K)||_X → 0. (4.2.b) implies

    ((H^n + K^n)·X)_t = (H^n·X)_t + (K^n·X)_t

Now let n → ∞ and use the fact that (G^n·X)_t → (G·X)_t in M² when ||G^n − G||_X → 0, for G = H, K, H + K. □


The proofs of the remaining two equalities,

    (H·(X + Y))_t = (H·X)_t + (H·Y)_t
    ⟨H·X, K·Y⟩_t = ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

will be given in the next section. To develop the results there we will need to extend the isometry property from bΠ₁ to Π₂(X). To do this it is useful to recall some of your undergraduate analysis.

Exercise 4.4. If ||·|| is a norm and ||x_n − x|| → 0, then ||x_n|| → ||x||.

Exercise 4.5. If X is a bounded martingale and H ∈ Π₂(X), then ||H·X||₂ = ||H||_X.

2.5. The Kunita–Watanabe Inequality

In this section we will prove an inequality due to Kunita and Watanabe and apply this result to extend the formula given in Section 2.4 for the covariance of two stochastic integrals.

(5.1) Theorem. If X and Y are local martingales and H and K are two measurable processes, then almost surely

    ∫₀^∞ |H_s| |K_s| |d⟨X, Y⟩_s| ≤ ( ∫₀^∞ H_s² d⟨X⟩_s )^{1/2} ( ∫₀^∞ K_s² d⟨Y⟩_s )^{1/2}

where |d⟨X, Y⟩_s| stands for dV_s, where V_s is the total variation of r → ⟨X, Y⟩_r on [0, s].

Remarks. (i) If X = Y and d⟨X⟩_s = ds, then d⟨Y⟩_s = |d⟨X, Y⟩_s| = ds and (5.1) reduces to the Cauchy–Schwarz inequality. (ii) Notice that H and K are not assumed to be predictable. We assume only that H(s, ω) and K(s, ω) are measurable with respect to R × F, where R is the Borel subsets of [0, ∞). We can work in this level of generality since the notion of martingale does not enter into the proof after the first step.
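In discrete form, (5.1) is two applications of the Cauchy–Schwarz inequality (this is Step 2 of the proof below). A numerical sketch (not from the book): pick arbitrary h_i, k_i, nonnegative bracket increments b_i, c_i, and cross increments m_i subject only to the Step 1 constraint |m_i| ≤ (b_i c_i)^{1/2}, and check the bound:

```python
import random, math

# Discrete analogue of Kunita-Watanabe: with |m_i| <= sqrt(b_i c_i),
# |sum h_i k_i m_i| <= (sum h_i^2 b_i)^(1/2) (sum k_i^2 c_i)^(1/2).
random.seed(2)
N = 50
h = [random.uniform(-1, 1) for _ in range(N)]
k = [random.uniform(-1, 1) for _ in range(N)]
b = [random.random() for _ in range(N)]   # increments of <X>
c = [random.random() for _ in range(N)]   # increments of <Y>
m = [random.uniform(-1, 1) * math.sqrt(bi * ci) for bi, ci in zip(b, c)]
lhs = abs(sum(hi * ki * mi for hi, ki, mi in zip(h, k, m)))
rhs = math.sqrt(sum(hi * hi * bi for hi, bi in zip(h, b))) * \
      math.sqrt(sum(ki * ki * ci for ki, ci in zip(k, c)))
print(lhs <= rhs)  # True for any choice of the data above
```

The inequality holds deterministically here: bound each |m_i| by (b_i c_i)^{1/2}, then apply Cauchy–Schwarz to the resulting sum, exactly as in Step 2.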

Proof  Step 1: Observe that if s ≤ t then ⟨X + θY, X + θY⟩_t ≥ ⟨X + θY, X + θY⟩_s, since brackets are increasing. If we let ⟨M, N⟩_s^t = ⟨M, N⟩_t − ⟨M, N⟩_s, then

    0 ≤ ⟨X + θY, X + θY⟩_t − ⟨X + θY, X + θY⟩_s = ⟨X, X⟩_s^t + 2θ ⟨X, Y⟩_s^t + θ² ⟨Y, Y⟩_s^t


for all s, t, and ω. If we fix s and t and throw away a countable number of sets of measure 0, then with probability one the last inequality will hold for all rational θ. Now a quadratic aθ² + bθ + c that is nonnegative at all the rationals and not identically 0 has at most one real root (i.e., b² − 4ac ≤ 0), so we have

    ( ⟨X, Y⟩_s^t )² ≤ ⟨X, X⟩_s^t ⟨Y, Y⟩_s^t

Step 2: Let 0 = t_0 < t_1 < ... < t_n be an increasing sequence of times, let h_i, k_i, 1 ≤ i ≤ n, be random variables, and define simple measurable processes

    H(s, ω) = Σ_{i=1}^n h_i(ω) 1_{(t_{i−1}, t_i]}(s)
    K(s, ω) = Σ_{i=1}^n k_i(ω) 1_{(t_{i−1}, t_i]}(s)

From the definition of the integral, the result of Step 1, and the Cauchy–Schwarz inequality, it follows that

    | ∫₀^∞ H_s K_s d⟨X, Y⟩_s | ≤ Σ_{i=1}^n |h_i| |k_i| | ⟨X, Y⟩_{t_{i−1}}^{t_i} |
        ≤ Σ_{i=1}^n |h_i| |k_i| ( ⟨X, X⟩_{t_{i−1}}^{t_i} ⟨Y, Y⟩_{t_{i−1}}^{t_i} )^{1/2}
        ≤ ( Σ_{i=1}^n h_i² ⟨X, X⟩_{t_{i−1}}^{t_i} )^{1/2} ( Σ_{i=1}^n k_i² ⟨Y, Y⟩_{t_{i−1}}^{t_i} )^{1/2}

proving that for simple measurable processes

(5.2)    | ∫₀^∞ H_s K_s d⟨X, Y⟩_s | ≤ ( ∫₀^∞ H_s² d⟨X⟩_s )^{1/2} ( ∫₀^∞ K_s² d⟨Y⟩_s )^{1/2}

which is (5.1) with the absolute value outside the integral.

Step 3: Let M be a large number and let T = inf{ t : ⟨X⟩_t or ⟨Y⟩_t ≥ M }. Truncating, we can suppose H_s = K_s = 0 for s ≥ T and |H_s|, |K_s| ≤ M for s ≤ T. Having restricted our attention to [0, T], ⟨X⟩ and ⟨Y⟩ are finite measures, so using the bounded convergence theorem we see that (5.2) holds for bounded measurable processes. To improve (5.2) to (5.1) (and complete the proof), let J_s be a measurable process taking values in {−1, 1} such that

    J_s d⟨X, Y⟩_s = | d⟨X, Y⟩_s |

To see that this exists, note that J_s is the Radon–Nikodym derivative of d⟨X, Y⟩_s with respect to |d⟨X, Y⟩_s|. Now apply (5.2) to H̃_s = |H_s| and K̃_s = J_s |K_s|. □

With (5.1) established, we are now ready to take care of unfinished business from Section 4.

(5.3) Theorem. If H ∈ Π₂(X) ∩ Π₂(Y), then H ∈ Π₂(X + Y) and

    (H·(X + Y))_t = (H·X)_t + (H·Y)_t

Proof  First note that Exercise 3.2 implies

    ⟨X + Y⟩_t = ⟨X⟩_t + ⟨Y⟩_t + 2⟨X, Y⟩_t

and the Kunita–Watanabe inequality, (5.1), implies

    | ⟨X, Y⟩_t | ≤ ( ⟨X⟩_t ⟨Y⟩_t )^{1/2} ≤ ( ⟨X⟩_t + ⟨Y⟩_t ) / 2

since 2ab ≤ a² + b², i.e., 0 ≤ (a − b)². So we have

    ⟨X + Y⟩_t ≤ 2( ⟨X⟩_t + ⟨Y⟩_t )

(which should remind the reader of (x + y)² ≤ 2(x² + y²)), and it follows that H ∈ Π₂(X + Y). To prove the formula now, we note that by (4.5) we can find H^n in bΠ₁ so that ||H^n − H||_Z → 0 for Z = X, Y, X + Y. (4.2.b) implies that

    (H^n·(X + Y))_t = (H^n·X)_t + (H^n·Y)_t

Now let n → ∞ and use (H^n·Z)_t → (H·Z)_t for Z = X, Y, X + Y. □

(5.4) Theorem. If X, Y are bounded continuous martingales, H ∈ Π₂(X), and K ∈ Π₂(Y), then

    ⟨H·X, K·Y⟩_t = ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

Proof  By (3.11), it suffices to show that

(†)    Z_t = (H·X)_t (K·Y)_t − ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

is a martingale. Let H^n and K^n be sequences of elements of bΠ₁ that converge to H and K in Π₂(X) and Π₂(Y), respectively, and let Z_t^n be the quantity that results when H^n and K^n replace H and K in (†). By (4.2.c), Z_t^n is a


martingale. By (3.6), we can complete the proof by showing Z_t^n → Z_t in L¹. The triangle inequality and (4.3.b) imply that

    E sup_t | (H^n·X)_t (K^n·Y)_t − (H·X)_t (K·Y)_t |
        ≤ E sup_t | ((H^n − H)·X)_t (K^n·Y)_t | + E sup_t | (H·X)_t ((K^n − K)·Y)_t |

To bound the first term, we use (i) the sup of the product is smaller than the product of the sups, (ii) the Cauchy–Schwarz inequality, (iii) the L² maximal inequality, and (iv) the isometry property in Exercise 4.5:

    E{ sup_t | ((H^n − H)·X)_t | · sup_t | (K^n·Y)_t | }
        ≤ ( E sup_t | ((H^n − H)·X)_t |² · E sup_t | (K^n·Y)_t |² )^{1/2}
        ≤ 4 || (H^n − H)·X ||₂ || K^n·Y ||₂ = 4 || H^n − H ||_X || K^n ||_Y → 0

as n → ∞, since ||H^n − H||_X → 0 and ||K^n||_Y → ||K||_Y < ∞. A similar estimate shows

    E sup_t | (H·X)_t ((K^n − K)·Y)_t | → 0

so we have shown (H^n·X)_t (K^n·Y)_t → (H·X)_t (K·Y)_t in L¹.

To estimate the second term in Z_t^n − Z_t, we again begin by putting the absolute values inside, replacing the measure by its variation, and then using the triangle inequality:

    | ∫₀ᵗ H_s^n K_s^n d⟨X, Y⟩_s − ∫₀ᵗ H_s K_s d⟨X, Y⟩_s |
        ≤ ∫₀ᵗ | H_s^n K_s^n − H_s K_s | | d⟨X, Y⟩_s |
        ≤ ∫₀ᵗ | H_s^n − H_s | | K_s^n | | d⟨X, Y⟩_s | + ∫₀ᵗ | H_s | | K_s^n − K_s | | d⟨X, Y⟩_s |

Using the Kunita–Watanabe inequality, (5.1), the first term is

    ≤ ( ∫₀ᵗ | H_s^n − H_s |² d⟨X⟩_s )^{1/2} ( ∫₀ᵗ | K_s^n |² d⟨Y⟩_s )^{1/2}

Taking expected values and using the Cauchy–Schwarz inequality,

    E ∫₀ᵗ | H_s^n − H_s | | K_s^n | | d⟨X, Y⟩_s |
        ≤ ( E ∫₀ᵗ | H_s^n − H_s |² d⟨X⟩_s )^{1/2} ( E ∫₀ᵗ | K_s^n |² d⟨Y⟩_s )^{1/2} → 0

as n → ∞, since ||H^n − H||_X → 0 and ||K^n||_Y → ||K||_Y < ∞. Combining this with a similar estimate on ∫₀ᵗ |H_s| |K_s^n − K_s| |d⟨X, Y⟩_s|, it follows that

    ∫₀ᵗ H_s^n K_s^n d⟨X, Y⟩_s → ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

in L¹. We have now shown Z_t^n → Z_t in L¹, and the desired result follows from (3.6). □

Remark. In some treatments (see e.g., Revuz and Yor (1991), p. 130), if M ∈ M² and H ∈ Π₂(M), H·M is DEFINED to be the unique L ∈ M² with L_0 = 0 so that

    ⟨L, N⟩ = H·⟨M, N⟩  for all N ∈ M²

Exercise 5.1. Prove the uniqueness in the last claim.

(5.4) allows us to do easily some things that we would have had to work to prove in Section 2.3. Let X and Y be continuous local martingales, and suppose X_0 = Y_0 = 0. The last assumption entails no loss of generality; see Exercise 5.2.

Exercise 5.2. ⟨X, Y⟩ = ⟨X − X_0, Y − Y_0⟩.

Exercise 5.3. X_t Y_t − ⟨X, Y⟩_t is a local martingale.

2.6. Integration w.r.t. Local Martingales

In this section we will extend our integral so that the integrators can be continuous local martingales and so that the integrands are in

    Π₃(X) = { H : ∫₀ᵗ H_s² d⟨X⟩_s < ∞ a.s. for all t ≥ 0 }


We will argue in Section 3.4 that this is the largest possible class of integrands. To extend the integral from bounded martingales to local martingales, we begin by proving an obvious fact.

(6.1) Theorem. Suppose X is a bounded continuous martingale, H, K ∈ Π₂(X), and H_s = K_s for s ≤ T, where T is a stopping time. Then (H·X)_s = (K·X)_s for s ≤ T.

Proof  Write H_s = H_s¹ + H_s² and K_s = K_s¹ + K_s², where

    H_s¹ = H_s 1_{(s ≤ T)} = K_s 1_{(s ≤ T)} = K_s¹

Clearly, (H¹·X)_t = (K¹·X)_t for all t. Since H_s² = K_s² = 0 for s ≤ T, (5.4) implies

    ⟨H²·X⟩_s = ⟨K²·X⟩_s = 0  for s ≤ T

and it follows from Exercise 3.8 that (H²·X)_t = (K²·X)_t = 0 for t ≤ T. Combining this with the first result and using (4.3.b) gives the desired conclusion. □

Exercise 6.1. Extend the proof of (6.1) to show that if X is a bounded continuous martingale, H, K ∈ Π₂(X), and H_s = K_s for S ≤ s ≤ T, where S, T are stopping times, then (H·X)_t − (H·X)_S = (K·X)_t − (K·X)_S for S ≤ t ≤ T.

To extend the integral now, let S_n be a sequence of stopping times with

    S_n ≤ T_n = inf{ t : |X_t| > n or ∫₀ᵗ H_s² d⟨X⟩_s > n }

and S_n ↑ ∞. Let H_s^n = H_s 1_{(s ≤ S_n)}, and observe that if m ≤ n, (6.1) implies that (H^m·X)_t = (H^n·X)_t for t ≤ S_m, so we can define H·X by setting (H·X)_t = (H^n·X)_t for t ≤ S_n.

To complete the definition we have to show that if R_n and S_n are two sequences of stopping times ≤ T_n with R_n ↑ ∞ and S_n ↑ ∞, then we end up with the same (H·X)_t. Let H^T be the stopped version of the process defined at the beginning of Section 2.2, let Q_n = R_n ∧ S_n, and note that (6.1) implies

    ( H^{R_n}·X )_s = ( H^{S_n}·X )_s  for s ≤ Q_n

Since Q_n ↑ ∞, it follows that (H·X)_t is independent of the sequence of stopping times chosen. The uniqueness result and (6.1) imply

(6.2) Theorem. If X is a continuous local martingale and H ∈ Π₃(X), then

    ( H 1_{[0,T]} )·X = (H·X)^T = H·(X^T) = ( H 1_{[0,T]} )·(X^T)

In words, if we set the integrand = 0 after time T, or stop the martingale at time T, or do both, then this just stops the integral at T.

Our next task is to generalize our abc's to integrands in Π₃:

(6.3) Theorem. If X is a continuous local martingale and H ∈ Π₃(X), then (H·X)_t is a continuous local martingale.

Proof  By stopping at T_n = inf{ t : ∫₀ᵗ H_s² d⟨X⟩_s > n or |X_t| > n }, it suffices to show that if X is a bounded martingale and H ∈ Π₂(X), then (H·X)_t is a continuous martingale, but this follows from (4.3.a). □

(6.4) Theorem. Let X and Y be continuous local martingales. If H, K ∈ Π₃(X), then H + K ∈ Π₃(X) and

    ((H + K)·X)_t = (H·X)_t + (K·X)_t

If H ∈ Π₃(X) ∩ Π₃(Y), then H ∈ Π₃(X + Y) and

    (H·(X + Y))_t = (H·X)_t + (H·Y)_t

Proof  To prove the first formula, we note that the triangle inequality for the norm ||·||_X implies H + K ∈ Π₃(X). Stopping at

    T_n = inf{ t : |X_t|, ∫₀ᵗ H_s² d⟨X⟩_s, or ∫₀ᵗ K_s² d⟨X⟩_s > n }

reduces the result to (4.3.b). For the second formula, we note that the argument in (5.3) shows that H ∈ Π₃(X + Y), and by stopping we can reduce the result to (5.3). □

After seeing the last two proofs, the reader can undoubtedly improve (5.4) to

(6.5) Theorem. If X, Y are continuous local martingales, H ∈ Π₃(X), and K ∈ Π₃(Y), then

    ⟨H·X, K·Y⟩_t = ∫₀ᵗ H_s K_s d⟨X, Y⟩_s

Using Exercise 3.2, we can generalize (6.5) to sums of stochastic integrals.

(6.6) Theorem. Let X̄ = Σ_{i=1}^n H^i·X^i and Ȳ = Σ_{j=1}^m K^j·Y^j, where the X^i and Y^j are continuous local martingales. If H^i ∈ Π₃(X^i) and K^j ∈ Π₃(Y^j), then

    ⟨X̄, Ȳ⟩_t = Σ_i Σ_j ∫₀ᵗ H_s^i K_s^j d⟨X^i, Y^j⟩_s
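Formula (6.5) can be seen numerically. The sketch below (not from the book) takes X = Y = B, H_s = s, and K_s = 1, so that ⟨H·B, K·B⟩_1 = ∫₀¹ s ds = 1/2, and accumulates the product of the increments of the two integrals over a fine grid:

```python
import random, math

# Cross-variation of (H.B) and (K.B) with H_s = s, K_s = 1 on [0, 1]:
# by (6.5) it should be close to int_0^1 s ds = 0.5.
random.seed(5)
n = 20000
dt = 1.0 / n
cross = 0.0
for i in range(n):
    s = i * dt                      # left endpoint of the interval
    db = random.gauss(0.0, math.sqrt(dt))
    d_hb = s * db                   # increment of (H.B)
    d_kb = db                       # increment of (K.B)
    cross += d_hb * d_kb
print(cross)  # close to 0.5
```

Each product of increments is H_s K_s (dB)², so the sum is a Riemann–Stieltjes approximation to ∫ H_s K_s d⟨B⟩_s, exactly the right-hand side of (6.5).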


We leave it to the reader to make the final extension of the isometry property:

Exercise 6.2. If X is a continuous local martingale and H ∈ Π₂(X), then H·X ∈ M² and ||H·X||₂ = ||H||_X.

We will conclude this section by computing some stochastic integrals. The first formula should be obvious, but for completeness must be derived from the definitions.

Exercise 6.3. Let X be a continuous local martingale. Let S ≤ T < ∞ be stopping times, let C(ω) be bounded with C(ω) ∈ F_S, and define H_s = C for S < s ≤ T and 0 otherwise. Then H ∈ Π₃(X) and

    ∫ H_s dX_s = C( X_T − X_S )

For the rest of this section, Δ_n = { 0 = t_0^n < t_1^n < ... < t_{k_n}^n = t } is a sequence of partitions of [0, t] with mesh |Δ_n| = sup_i |t_i^n − t_{i−1}^n| → 0. Our theme here is that if the integrand H_s is continuous, then the integral is a limit of Riemann sums, provided we evaluate the integrand at the left endpoint of the intervals.

(6.7) Theorem. If X is a continuous local martingale and t → H_t is continuous, then as n → ∞

    Σ_i H(t_i^n)( X(t_{i+1}^n) − X(t_i^n) ) → ∫₀ᵗ H_s dX_s  in probability

Proof  First note that H ∈ Π₃(X), so the integral exists. By stopping at T = inf{ s : |X_s|, ⟨X⟩_s, or |H_s| > M } and making H constant after time T to preserve continuity, we can suppose ⟨X⟩_t ≤ M and |H_s| ≤ M for all s. Let H_s^n = H(t_i^n) for s ∈ (t_i^n, t_{i+1}^n], H_s^n = H_t for s > t. Since s → H_s is continuous on [0, t], we have for each ω that

    sup_s | H_s^n − H_s | → 0

The desired result now follows from the following lemma, which will be useful later.

(6.8) Lemma. Let X_t be a continuous local martingale with ⟨X⟩_t ≤ M for all t. If H^n is a sequence of predictable processes with |H_s^n| ≤ M for all s, ω, and with sup_s |H_s^n − H_s| → 0 in probability, then (H^n·X) → (H·X) in M².

Proof  Exercise 6.2 (or 4.4) implies

    || (H^n·X) − (H·X) ||₂² = || H^n − H ||_X² ≤ M E( sup_s | H_s^n − H_s |² ) → 0

as n → ∞ by the bounded convergence theorem. □

The rest of the section is devoted to investigating what happens when we evaluate at the right end point or the center of each interval. For simplicity we will only consider the special case in which H_s = 2X_s.

Exercise 6.4. If X is a continuous local martingale, then

    ∫₀ᵗ 2X_s dX_s = X_t² − X_0² − ⟨X⟩_t

Exercise 6.5. Show that if X is a continuous local martingale and we evaluate at the right end point, then

    Σ_i 2X(t_{i+1}^n)( X(t_{i+1}^n) − X(t_i^n) ) → ∫₀ᵗ 2X_s dX_s + 2⟨X⟩_t = X_t² − X_0² + ⟨X⟩_t

Comparing the last two exercises, you might jump to the conclusion that if we evaluate at the midpoint we get an integral that performs like the one in calculus. However, we will not ask you to prove this in general.

Exercise 6.6. Show that if B_t is a one dimensional Brownian motion starting at 0, then

    Σ_{k=0}^{2^n − 1} 2B( (k + 1/2) 2^{−n} t )( B( (k + 1) 2^{−n} t ) − B( k 2^{−n} t ) )

converges in probability to ∫₀ᵗ 2B_s dB_s + t = B_t².

This approach to integration, called the Stratonovich integral, is more convenient than Itô's approach in certain circumstances (e.g., diffusion processes on manifolds), but we will not discuss it further here. Changing topics, we have one more consequence of (6.7) that we will need in Chapter 5.

Exercise 6.7. Suppose h : [0, ∞) → R is continuous. Show that ∫₀ᵗ h_s dB_s has a normal distribution with mean 0 and variance ∫₀ᵗ h_s² ds.
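The left/right/midpoint trichotomy of Exercises 6.4–6.6 shows up clearly in simulation. A sketch (not from the book; Brownian motion on [0, 1], with the partition using every other point of a finer grid so that midpoints are available):

```python
import random, math

# Riemann sums for int_0^1 2 B_s dB_s, evaluating at the left endpoint (Ito),
# right endpoint, and midpoint. The predicted limits are B_1^2 - 1,
# B_1^2 + 1, and B_1^2 respectively.
random.seed(3)
n = 2 ** 15                     # fine grid; coarse partition uses even indices
dt = 1.0 / n
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))
left = right = mid = 0.0
for k in range(0, n, 2):        # coarse interval [k*dt, (k+2)*dt]
    db = B[k + 2] - B[k]
    left += 2 * B[k] * db
    right += 2 * B[k + 2] * db
    mid += 2 * B[k + 1] * db    # B at the interval's midpoint
b2 = B[n] ** 2
print(left - (b2 - 1), right - (b2 + 1), mid - b2)  # all near 0
```

The left and right sums differ by exactly twice the quadratic variation of the partition, which is why their limits straddle the midpoint (Stratonovich) answer B_1².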


2.7. Change of Varlables, It's Formula

This section is devoted to a proof of our first version of Itô's formula, (7.1). In Section 2.10 we will prove a bigger and better version, (10.2), with a slicker proof, but the mundane proof given here has the advantage of explaining where the term with $f''$ comes from.

(7.1) Theorem. Suppose $X$ is a continuous local martingale and $f$ has two continuous derivatives. Then with probability 1, for all $t \ge 0$,

$$f(X_t) - f(X_0) = \int_0^t f'(X_s)\,dX_s + \frac12 \int_0^t f''(X_s)\,d\langle X\rangle_s$$

Remark. If $A_s$ is a continuous function that is locally of bounded variation and $f$ has a continuous derivative, then (see Exercise 7.2 below)

$$(7.2)\qquad f(A_t) - f(A_0) = \int_0^t f'(A_s)\,dA_s$$

As the reader will see in the proof of (7.1), the second term comes from the fact that local martingale paths have quadratic variation $\langle X\rangle_t$, while the 1/2 in front of it comes from expanding $f$ in a Taylor series.

Proof. By stopping at $T_M = \inf\{t : |X_t| \text{ or } \langle X\rangle_t \ge M\}$, it suffices to prove the result when $|X_t|$ and $\langle X\rangle_t \le M$. From calculus we know that for any $a$ and $b$ there is a $c(a,b)$ in between $a$ and $b$ such that

$$(7.3)\qquad f(b) - f(a) = (b - a) f'(a) + \frac12 (b - a)^2 f''(c(a,b))$$

Let $t$ be a fixed positive number. Consider a sequence $\Delta_n$ of partitions $0 = t^n_0 < t^n_1 < \cdots < t^n_{k_n} = t$ with mesh $|\Delta_n| \to 0$. From (7.3) it follows that

$$(7.4)\qquad f(X_t) - f(X_0) = \sum_i \big\{ f(X_{t^n_{i+1}}) - f(X_{t^n_i}) \big\} = \sum_i f'(X_{t^n_i})(X_{t^n_{i+1}} - X_{t^n_i}) + \frac12 \sum_i f''(c^n_i)(X_{t^n_{i+1}} - X_{t^n_i})^2$$

where $c^n_i = c(X_{t^n_i}, X_{t^n_{i+1}})$. Comparing (7.4) with (7.1), it becomes clear that we want to show

$$(7.5)\qquad \sum_i f'(X_{t^n_i})(X_{t^n_{i+1}} - X_{t^n_i}) \to \int_0^t f'(X_s)\,dX_s$$

$$(7.6)\qquad \frac12 \sum_i f''(c^n_i)(X_{t^n_{i+1}} - X_{t^n_i})^2 \to \frac12 \int_0^t f''(X_s)\,d\langle X\rangle_s$$

in probability as $n \to \infty$. (7.5) follows from (6.8). To prove (7.6), we let $G^n_s = f''(c(X_{t^n_i}, X_{t^n_{i+1}}))$ when $s \in (t^n_i, t^n_{i+1}]$, $G^n_s = f''(X_t)$ for $s \ge t$, and let

$$A^n_s = \sum_{i\,:\,t^n_{i+1} \le s} (X_{t^n_{i+1}} - X_{t^n_i})^2$$

so that

$$\frac12 \sum_i f''(c^n_i)(X_{t^n_{i+1}} - X_{t^n_i})^2 = \frac12 \int_0^t G^n_s\,dA^n_s$$

and what we want to show is

$$(7.7)\qquad \int_0^t G^n_s\,dA^n_s \to \int_0^t f''(X_s)\,d\langle X\rangle_s \quad \text{in probability}$$

To do this we begin by observing that the uniform continuity of $f''$ implies that as $n \to \infty$ we have $G^n_s \to f''(X_{s \wedge t})$ uniformly in $s$, while (3.8) implies that $A^n_{s \wedge t}$ converges in probability to $\langle X\rangle_{s \wedge t}$. Now by taking subsequences we can suppose that with probability 1, $A^n_{s \wedge t}$ converges weakly to $\langle X\rangle_{s \wedge t}$. In other words, if we fix $\omega$ and regard $s \to A^n_{s \wedge t}$ and $s \to \langle X\rangle_{s \wedge t}$ as distribution functions, then the associated measures converge weakly. Having done this, we can fix $\omega$ and deduce (7.7) from the following simple result:

(7.8) Lemma. If (i) measures $\mu_n$ on $[0,t]$ converge weakly to $\mu_\infty$, a finite measure, and (ii) $g_n$ is a sequence of functions with $|g_n| \le M$ that have the property that whenever $s_n \in [0,t] \to s$ we have $g_n(s_n) \to g(s)$, then as $n \to \infty$

$$\int g_n\,d\mu_n \to \int g\,d\mu_\infty$$

Proof. By letting $\mu'_n(\cdot) = \mu_n(\cdot)/\mu_n([0,t])$, we can assume that all the $\mu_n$ are probability measures. A standard construction (see (2.1) in Chapter 2 of Durrett (1995)) shows that there is a sequence of random variables $X_n$ with distribution $\mu_n$ so that $X_n \to X_\infty$ a.s. as $n \to \infty$. The convergence of $g_n$ to $g$


implies $g_n(X_n) \to g(X_\infty)$, so the result follows from the bounded convergence theorem. □

(7.8) is the last piece in the proof of (7.1). Tracing back through the proof, we see that (7.8) implies (7.7), which in turn completes the proof of (7.6). So adding (7.5) and using (7.4) gives that for each $t$,

$$f(X_t) - f(X_0) = \int_0^t f'(X_s)\,dX_s + \frac12 \int_0^t f''(X_s)\,d\langle X\rangle_s$$

Since each side of the formula is a continuous function of $t$, it follows that with probability 1 the equality holds for all $t \ge 0$, the statement made in (7.1). □
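As a sanity check on (7.1) (ours, not the book's), one can discretize the formula along a simulated Brownian path with $f(x) = x^3$, where $f'(x) = 3x^2$, $f''(x) = 6x$, and $\langle B\rangle_s = s$, so (7.1) reads $B_t^3 = 3\int_0^t B_s^2\,dB_s + 3\int_0^t B_s\,ds$.

```python
import numpy as np

# Sketch: discretize (7.1) for f(x) = x**3 and X = Brownian motion:
#   B_t**3 = 3 int_0^t B_s**2 dB_s + 3 int_0^t B_s ds
rng = np.random.default_rng(1)
t, n = 1.0, 2 ** 15
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))
Bl = B[:-1]                                    # left endpoints B_{t_i}

stoch_int = float(np.sum(3.0 * Bl ** 2 * dB))  # left-endpoint sums -> Ito integral
drift_int = float(np.sum(3.0 * Bl * dt))       # = (1/2) int 6 B_s ds
lhs = float(B[-1] ** 3)
```

The left-endpoint evaluation is essential: it is what makes the sums converge to the Itô integral rather than to a Stratonovich-type limit.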

(7.9) Remark. In Chapter 3 we will need to use Itô's formula for complex valued $f$. The extension is trivial: write $f = u + iv$, apply Itô's formula to $u$ and $v$, multiply the $v$ formula by $i$, and add the two.

Exercise 7.1. Let $A_t$ be a continuous process with total variation $|A|_t \le M$ for all $t$. If $H^n$ is a sequence of measurable processes with $|H^n(s,\omega)| \le M$ for all $s$, $\omega$, and $\sup_s |H^n_s - H_s| \to 0$, then $\sup_t |((H^n - H) \cdot A)_t| \to 0$.

Exercise 7.2. Use the fact that $f(b) - f(a) = f'(c(a,b))(b - a)$ for some $c(a,b)$ between $a$ and $b$, and Exercise 7.1, to prove (7.2).

2.8. Integration w.r.t. Semimartingales.

$X$ is said to be a continuous semimartingale if $X_t$ can be written as $M_t + A_t$, where $M_t$ is a continuous local martingale and $A_t$ is a continuous adapted process that is locally of bounded variation. A nice feature of continuous semimartingales that is unheard of for their more general counterparts is

(8.1) Theorem. Let $X_t$ be a continuous semimartingale. If the (continuous) processes $M_t$ and $A_t$ are chosen so that $A_0 = 0$, then the decomposition $X_t = M_t + A_t$ is unique.

Proof. If $M'_t + A'_t$ is another decomposition, then $A_t - A'_t = M'_t - M_t$ is a continuous local martingale that is locally of bounded variation, so by (3.3) $A_t - A'_t$ is constant and hence $\equiv 0$. □

Remark. In what follows we will sometimes write "Let $X_t = M_t + A_t$ be a continuous semimartingale" to specify that the decomposition of $X_t$ consists of the local martingale $M_t$ and the process $A_t$ that is locally of bounded variation.


Exercise 8.1. Show that if $X_t = M_t + A_t$ and $X'_t = M'_t + A'_t$ are continuous semimartingales, then $X_t + X'_t = (M_t + M'_t) + (A_t + A'_t)$ is a continuous semimartingale.

In this section we will extend the class of integrators for our stochastic integral from continuous local martingales to continuous semimartingales. There are three reasons for doing this.

(i) If $X_t$ is a continuous local martingale and $f$ is $C^2$, then Itô's formula shows us that $f(X_t)$ is always a semimartingale, but it is not a local martingale unless $f''(x) = 0$ for all $x$. In the next section we will prove a generalization of Itô's formula which implies that if $X_t$ is a continuous semimartingale and $f$ is $C^2$, then $f(X_t)$ is again a semimartingale.

(ii) It can be argued that any "reasonable integrator" is a semimartingale. To explain this, we begin by defining an easy integrand to be a process of the form

$$H_s = \sum_{i=1}^n H_i\,1_{(T_i, T_{i+1}]}(s)$$

where $0 = T_0 \le T_1 \le \cdots \le T_{n+1}$ are stopping times, and the $H_i \in \mathcal{F}_{T_i}$ have $|H_i| < \infty$ a.s. Let $\Pi_{e,t}$ be the collection of bounded easy predictable processes that vanish on $(t, \infty)$, equipped with the uniform norm

$$\|H\|_u = \sup_{s,\omega} |H_s(\omega)|$$

For easy integrands we define the integral as

$$(H \cdot X)_t = \sum_{i=1}^n H_i\,(X_{T_{i+1}} - X_{T_i})$$

Finally, let $L^0$ be the collection of all random variables topologized by convergence in probability, which comes from the metric $\|X\|_0 = E(|X|/(1 + |X|))$. See Exercise 6.4 in Chapter 1 of Durrett (1995).
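A minimal sketch of the elementary integral (the function and variable names here are ours, not the book's): for an easy integrand the integral is just a finite sum, and for a deterministic path of bounded variation it reduces to a Riemann-Stieltjes sum.

```python
def easy_integral(t, times, values, path):
    """(H . X)_t = sum_i H_i (X_{T_{i+1} ^ t} - X_{T_i ^ t}) for the easy
    integrand H_s = sum_i H_i 1_{(T_i, T_{i+1}]}(s).

    times: nondecreasing T_0 <= ... <= T_{n+1}; values: H_0, ..., H_n;
    path: a function s -> X_s (deterministic here, for illustration).
    """
    total = 0.0
    for T_i, T_next, H_i in zip(times[:-1], times[1:], values):
        total += H_i * (path(min(T_next, t)) - path(min(T_i, t)))
    return total

# With X_s = s**2 and H = 3 on (0, 1], -1 on (1, 2]:
val_full = easy_integral(2.0, [0.0, 1.0, 2.0], [3.0, -1.0], lambda s: s ** 2)
val_half = easy_integral(1.5, [0.0, 1.0, 2.0], [3.0, -1.0], lambda s: s ** 2)
# val_full = 3*(1 - 0) + (-1)*(4 - 1) = 0.0
# val_half = 3*(1 - 0) + (-1)*(2.25 - 1) = 1.75
```

In the stochastic setting the $T_i$ are stopping times and the $H_i$ random, but the integral itself has exactly this finite-sum form path by path.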

A result proved independently by Bichteler and Dellacherie states:

(8.2) Theorem. If $H \to (H \cdot X)_t$ is continuous from $\Pi_{e,t} \to L^0$ for all $t$, then $X$ is a semimartingale.

Since any extension of the integral from $\Pi_{e,t}$ will involve taking limits, we need our integral to be a continuous function on the simple integrands (in some sense). We have placed a very strong topology on the domain and a weak topology on the range, so this is a fairly weak requirement. Protter (1990) takes (8.2) as the definition of semimartingale. In his approach one gets some very deep results without much effort, but then one must sweat to prove that a semimartingale (defined as a good integrator) is a semimartingale (as defined above).

(iii) Last but not least, it is easy to extend the integral from local martingales to semimartingales if we replace $\Pi_2(X)$ by a slightly smaller class of integrands that does not depend on $X$.

Getting back to business, let $X_t = M_t + A_t$ be a continuous semimartingale. We say $H \in \Pi_1$, the set of locally bounded predictable processes, if there is a sequence of stopping times $T_n \uparrow \infty$ so that $|H(s,\omega)| \le n$ for $s \le T_n$. If $H \in \Pi_1$ we can define

$$(H \cdot A)_t = \int_0^t H_s\,dA_s$$

as a Lebesgue-Stieltjes integral (which exists for a.e. $\omega$). To integrate with respect to the local martingale $M_t$, we note that

$$\int_0^{T_n} H_s^2\,d\langle M\rangle_s \le n^2 \langle M\rangle_{T_n} < \infty$$

and $T_n \to \infty$. So $\Pi_1 \subset \Pi_2(M)$, we can define $(H \cdot M)_t$, and let

$$(H \cdot X)_t = (H \cdot M)_t + (H \cdot A)_t$$

By the uniqueness of the decomposition this is an unambiguous definition. From the definition above, it follows immediately that we have

(8.3) Theorem. If $X$ is a continuous semimartingale and $H \in \Pi_1$, then $(H \cdot X)_t$ is a continuous semimartingale.

The second of our abc's is also easy. We name it in honor of the similar relationship between addition and multiplication.

(8.4) Distributive Laws. Suppose $X$ and $Y$ are continuous semimartingales, and $H, K \in \Pi_1$. Then

$$((H + K) \cdot X)_t = (H \cdot X)_t + (K \cdot X)_t$$

$$(H \cdot (X + Y))_t = (H \cdot X)_t + (H \cdot Y)_t$$

Proof. This follows easily from the definition of the integral with respect to a semimartingale, the result for local martingales given in (6.4), and the fact that the results are true when $X$ and $Y$ are locally of bounded variation. In proving the second result we also need Exercise 8.1. □

The rest of this section is devoted to defining the covariance for semimartingales and proving the third of our abc's.

(8.5) Definition. If $X = M + A$ and $X' = M' + A'$ are continuous semimartingales, we define the covariance $\langle X, X'\rangle_t = \langle M, M'\rangle_t$.

To explain this we recall the approximate quadratic variation $Q_t^{\Delta}(X)$ defined in Section 2.3, and note

(8.6) Theorem. Suppose $X = M + A$ and $X' = M' + A'$ are continuous semimartingales. If $\Delta_n$ is a sequence of partitions of $[0,t]$ with mesh $|\Delta_n| \to 0$, then

$$Q_t^{\Delta_n}(X, X') \to \langle M, M'\rangle_t \quad \text{in probability}$$

Proof. Since

$$Q_t^{\Delta_n}(X, X') = Q_t^{\Delta_n}(M, M') + Q_t^{\Delta_n}(A, M') + Q_t^{\Delta_n}(M, A') + Q_t^{\Delta_n}(A, A')$$

it suffices to show that if $Y$ is a continuous process and $A$ is continuous and locally of bounded variation, then $Q_t^{\Delta_n}(Y, A) \to 0$ almost surely. To do this we note that

$$|Q_t^{\Delta_n}(Y, A)| \le |A|_t \cdot \sup_i |Y_{t^n_{i+1}} - Y_{t^n_i}|$$

As $n \to \infty$ the second factor $\to 0$ almost surely, while the first one has $|A|_t < \infty$, and the desired result follows. □
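Both halves of this proof can be seen numerically (an illustrative sketch, not from the book): along a discretized path, the approximate quadratic variation of a Brownian motion is close to $t$, while its cross term with a smooth (hence locally bounded variation) function is close to 0.

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 2 ** 15
dt = t / n
grid = np.linspace(0.0, t, n + 1)
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))
A = np.sin(grid)                    # continuous, locally of bounded variation

dB, dA = np.diff(B), np.diff(A)
Q_BB = float(np.sum(dB * dB))       # -> <B>_t = t
Q_BA = float(np.sum(dB * dA))       # -> 0, the cross term in the proof
```

The bound in the proof is visible here: $|Q(B, A)|$ is at most the total variation of $A$ times the largest Brownian increment, which shrinks with the mesh.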

(8.7) Theorem. Suppose $X^1, \ldots, X^n$ and $Y^1, \ldots, Y^m$ are continuous semimartingales and $H^i, K^j \in \Pi_1$. If $X = \sum_{i=1}^n H^i \cdot X^i$ and $Y = \sum_{j=1}^m K^j \cdot Y^j$, then

$$\langle X, Y\rangle_t = \sum_{i,j} \int_0^t H^i_s K^j_s\,d\langle X^i, Y^j\rangle_s$$

Proof. Let $M^i$ and $N^j$ be the local martingale parts of $X^i$ and $Y^j$, and let $M = \sum_{i=1}^n H^i \cdot M^i$ and $N = \sum_{j=1}^m K^j \cdot N^j$. It follows from our definition that $\langle X, Y\rangle_t = \langle M, N\rangle_t$, so using (6.6) and then $\langle M^i, N^j\rangle_t = \langle X^i, Y^j\rangle_t$ gives the desired result. □
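For a single integral against Brownian motion, (8.7) specializes to $\langle H \cdot B\rangle_t = \int_0^t H_s^2\,ds$. A quick numerical sketch (ours, with an arbitrary deterministic integrand):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 2 ** 15
dt = t / n
s = np.linspace(0.0, t, n + 1)[:-1]
dB = rng.normal(0.0, np.sqrt(dt), size=n)

H = np.cos(s)                   # a deterministic locally bounded integrand
dY = H * dB                     # increments of Y = H . B

qv_Y = float(np.sum(dY * dY))           # approximate <Y>_t
rhs = float(np.sum(H ** 2 * dt))        # int_0^t H_s^2 d<B>_s, with <B>_s = s
```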


2.9. Associative Law

Let $X_t$ be a continuous semimartingale. In some computations it is useful to write the integral relationship

$$Y_t = \int_0^t H_s\,dX_s$$

as the formal equation

$$dY_t = H_t\,dX_t$$

where the $dY_t$ and $dX_t$ are fictitious objects known as "stochastic differentials." A good example is the derivation of the following formula, which (for obvious reasons) we call the associative law:

$$(\star)\qquad H \cdot (K \cdot X) = (HK) \cdot X$$

"Proof" Using stochastic differentials, $d(H \cdot (K \cdot X))_t = H_t\,d(K \cdot X)_t$, and observing $d(K \cdot X)_t = K_t\,dX_t$ gives

$$d(H \cdot (K \cdot X))_t = H_t\,d(K \cdot X)_t = H_t K_t\,dX_t = d((HK) \cdot X)_t$$

The above proof is not rigorous, but the computation is useful because it tells us what the answer should be. Once we know the answer, it is routine to verify it by checking that it holds for basic predictable processes and then following the extension process we used for defining the integral to conclude that it holds in general.

(9.1) Lemma. Let $X$ be any process. If $H, K \in \Pi_1$, then $(\star)$ holds.

Proof. Since the formula is linear in $H$ and in $K$ separately, we can without loss of generality suppose $H = 1_{(a,b]}C$ and $K = 1_{(c,d]}D$, and further that either (i) $b \le c$ or (ii) $a = c$, $b = d$. In case (i), both sides of the equation are $\equiv 0$ and hence equal. In case (ii),

$$(K \cdot X)_t = \begin{cases} 0 & 0 \le t \le a \\ D(X_t - X_a) & a \le t \le b \\ D(X_b - X_a) & b \le t < \infty \end{cases}$$

$$(H \cdot (K \cdot X))_t = \begin{cases} 0 & 0 \le t \le a \\ CD(X_t - X_a) & a \le t \le b \\ CD(X_b - X_a) & b \le t < \infty \end{cases}$$

and it follows that $(H \cdot (K \cdot X))_t = ((HK) \cdot X)_t$. □

To extend to more general integrands, we will take limits. First, however, we need a technical result.

(9.2) Lemma. Suppose $\mu$ is a signed measure with finite total variation, $h$ and $k$ are bounded measurable functions, and let $\nu(ds) = h_s\,\mu(ds)$. Then

$$\int_0^t k_s\,\nu(ds) = \int_0^t k_s h_s\,\mu(ds)$$

Proof of (9.2). The Radon-Nikodym derivative $d\nu/d\mu = h$, so the desired identity is just the well known fact that

$$\int_0^t k_s\,d\nu = \int_0^t k_s \frac{d\nu}{d\mu}\,d\mu$$

See, for example, Exercise 8.7 in the Appendix of Durrett (1995).

To simplify the extension to $\Pi_2(X)$, we will take the limit for $K$ and then for $H$.

(9.3) Lemma. Let $X$ be a continuous local martingale. If $H$ is a bounded predictable process and $K \in \Pi_2(X)$, then $(\star)$ holds.

Proof. Let $K^n \in \Pi_1$ be such that $K^n \to K$ in $\Pi_2(X)$. Since $H$ is bounded, $HK^n \to HK$ in $\Pi_2(X)$, and it follows from Exercise 6.2 that $(HK^n) \cdot X \to (HK) \cdot X$ in $\mathcal{M}^2$. To deal with the left side of $(\star)$, we observe that using (6.4) twice, Exercise 6.2, (6.5) with (9.2), and the fact that $H$ is bounded, we have

$$\|H \cdot (K \cdot X) - H \cdot (K^n \cdot X)\|_2^2 = \|H \cdot ((K - K^n) \cdot X)\|_2^2 = E\int_0^\infty H_s^2 (K_s - K^n_s)^2\,d\langle X\rangle_s \le C\,\|K - K^n\|_X^2$$

so as $n \to \infty$, $H \cdot (K^n \cdot X) \to H \cdot (K \cdot X)$ in $\mathcal{M}^2$ and the result follows. □

(9.4) Lemma. Let $X$ be a continuous local martingale. If $H \in \Pi_2(K \cdot X)$ and $K \in \Pi_2(X)$, then $(\star)$ holds.


Proof. Let $H^n \in \Pi_1$ be such that $H^n \to H$ in $\Pi_2(K \cdot X)$. (6.4) and Exercise 6.2 imply

$$\|H^n \cdot (K \cdot X) - H \cdot (K \cdot X)\|_2 = \|(H^n - H) \cdot (K \cdot X)\|_2 = \|H^n - H\|_{K \cdot X} \to 0$$

so $H^n \cdot (K \cdot X) \to H \cdot (K \cdot X)$ in $\mathcal{M}^2$. To deal with the right side of $(\star)$, we observe that using the definition of the norm and then (6.5) with (9.2),

$$\|H^n K - HK\|_X^2 = E\int_0^\infty (H^n_s - H_s)^2 K_s^2\,d\langle X\rangle_s = \|H^n - H\|_{K \cdot X}^2 \to 0$$

so applying Exercise 6.2 we have $(H^n K) \cdot X \to (HK) \cdot X$ in $\mathcal{M}^2$, and the desired result follows. □

(9.5) Lemma. Let $X$ be a continuous local martingale. If $K \in \Pi_3(X)$ and $H \in \Pi_3(K \cdot X)$, then $(\star)$ holds.

Proof. Let $T_n = \inf\{t : \int_0^t K_s^2\,d\langle X\rangle_s \text{ or } \int_0^t H_s^2\,d\langle K \cdot X\rangle_s > n\}$. Let $H^n_s = H_s 1_{(s \le T_n)}$ and $K^n_s = K_s 1_{(s \le T_n)}$. (9.4) implies that

$$H^n \cdot (K^n \cdot X) = (H^n K^n) \cdot X$$

Using (6.2) now, it follows that $(H \cdot (K \cdot X))_t = ((HK) \cdot X)_t$ for $t \le T_n$, and letting $n \to \infty$ gives the desired result. □

(9.6) Associative Law. Suppose $X$ is a continuous semimartingale. If $H, K \in \Pi_1$, then $H \cdot (K \cdot X) = (HK) \cdot X$.

Proof. In view of (9.5) and the definition of the integral, it suffices to prove the result when $X_t = A_t$ is continuous adapted and locally of bounded variation. However, by stopping at the first time $|H_s|$, $|K_s|$, or the variation of $A$ on $[0,s]$ exceeds $n$, the statement reduces to something covered by (9.2). □

2.10. Functions of Several Semimartingales

In this section we will prove a version of Itô's formula for functions of several semimartingales. The key to the proof is the following:

(10.1) Integration by Parts. If $X$ and $Y$ are continuous semimartingales, then

$$X_t Y_t - X_0 Y_0 = \int_0^t X_s\,dY_s + \int_0^t Y_s\,dX_s + \langle X, Y\rangle_t$$


Proof. Let $\Delta_n = \{t^n_i\}$ be a sequence of subdivisions of $[0,t]$ with mesh $|\Delta_n|$ going to 0. We can write

$$X_t Y_t - X_0 Y_0 = \sum_i (X_{t^n_{i+1}} - X_{t^n_i})(Y_{t^n_{i+1}} - Y_{t^n_i}) + \sum_i (X_{t^n_{i+1}} - X_{t^n_i}) Y_{t^n_i} + \sum_i X_{t^n_i} (Y_{t^n_{i+1}} - Y_{t^n_i})$$

(8.6) implies that

$$\sum_i (X_{t^n_{i+1}} - X_{t^n_i})(Y_{t^n_{i+1}} - Y_{t^n_i}) \to \langle X, Y\rangle_t$$

in probability, while application of (6.7) to each of the last two sums implies that they converge to $\int_0^t Y_s\,dX_s$ and to $\int_0^t X_s\,dY_s$. □

When $Y$ is locally of bounded variation, then $\langle X, Y\rangle_t \equiv 0$ and this reduces to the ordinary integration by parts formula of Lebesgue-Stieltjes integration:

$$\int_0^t Y_s\,dX_s = X_t Y_t - X_0 Y_0 - \int_0^t X_s\,dY_s$$

Note that the right-hand side makes sense in a path-by-path sense.
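A numerical illustration of (10.1) (ours, not the book's): for two independent Brownian motions the discrete product rule is an exact algebraic identity, and the cross term, which approximates $\langle X, Y\rangle_t$, is close to 0 because the increments are independent.

```python
import numpy as np

rng = np.random.default_rng(4)
t, n = 1.0, 2 ** 15
dt = t / n
dX = rng.normal(0.0, np.sqrt(dt), size=n)
dY = rng.normal(0.0, np.sqrt(dt), size=n)      # independent of dX
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

sum_XdY = float(np.sum(X[:-1] * dY))
sum_YdX = float(np.sum(Y[:-1] * dX))
cross = float(np.sum(dX * dY))                  # approximates <X, Y>_t = 0

lhs = float(X[-1] * Y[-1] - X[0] * Y[0])
rhs = sum_XdY + sum_YdX + cross                 # exact at the discrete level
```

Replacing $Y$ by $X$ makes the cross term converge to $\langle X\rangle_t = t$ instead, which is exactly the extra term distinguishing (10.1) from the classical product rule.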

We are now ready to prove the following generalization of (7.1):

(10.2) Itô's formula. Let $f : \mathbf{R}^d \to \mathbf{R}$ and $0 \le c \le d$. If $X^1_t, \ldots, X^c_t$ are continuous semimartingales and $X^{c+1}, \ldots, X^d$ are locally of bounded variation, then

$$f(X_t) - f(X_0) = \sum_{i=1}^d \int_0^t D_i f(X_s)\,dX^i_s + \frac12 \sum_{1 \le i,j \le c} \int_0^t D_{ij} f(X_s)\,d\langle X^i, X^j\rangle_s$$

provided the derivatives $D_i f$, $1 \le i \le d$, and $D_{ij} f$, $1 \le i, j \le c$, exist and are continuous.

'

Remark. At this point saving a derivative or two may seem like pinchingpenniesbut Whenwe get to Chapter t and apply this result to utt, Bt) we willbe very happy to not have to check that the partial derivatives lf and ut.wexist. .Proof By stopping, it suces to prove th result when lax'tf1,(.X)t %M fora11 , t. Since any function / satisfying the lzypotheses can be approximated by


polynomials $g_n$ in such a way that $g_n$, $D_i g_n$, $1 \le i \le d$, and $D_{ij} g_n$, $1 \le i, j \le c$, converge to $f$, $D_i f$, and $D_{ij} f$ uniformly on $[-M, M]^d$, it suffices to prove the result when $f$ is a polynomial, for then (6.8) and Exercise 7.2 allow us to pass from the result for $g_n$ to that for $f$.

To prove the result for a polynomial, it suffices by linearity to prove the result when $f$ is a monomial $x^{k(1)} x^{k(2)} \cdots x^{k(n)}$, where $k(1), k(2), \ldots, k(n) \in \{1, \ldots, d\}$. (The reader should note that $k(1), \ldots, k(n)$ are superscripts, not powers; e.g., our monomial might be $x^1 x^2 x^1 x^1 x^2$.)

If $n = 1$ and $k(1) = i$, then (10.2) says that

$$X^i_t - X^i_0 = \int_0^t 1\,dX^i_s$$

which is trivially true. To prove the result for a general monomial, we use induction. Let $Y_t = \prod_{m=1}^n X^{k(m)}_t$ be a monomial for which (10.2) holds, and let $Z_t = X^{k(n+1)}_t$. Applying (10.1) gives

$$Y_t Z_t - Y_0 Z_0 = \int_0^t Y_s\,dZ_s + \int_0^t Z_s\,dY_s + \langle Y, Z\rangle_t$$

Applying (10.2) to $Y$ gives

$$(10.3)\qquad Y_t - Y_0 = \sum_{i \le n} \int_0^t \prod_{m \le n,\,m \ne i} X^{k(m)}_s\,dX^{k(i)}_s + \frac12 \sum_{\substack{i,j \le n \\ i \ne j}} \int_0^t \prod_{m \le n,\,m \ne i,j} X^{k(m)}_s\,d\langle X^{k(i)}, X^{k(j)}\rangle_s$$

so using the associative law (9.6),

$$\int_0^t Z_s\,dY_s = \sum_{i \le n} \int_0^t Z_s \prod_{m \le n,\,m \ne i} X^{k(m)}_s\,dX^{k(i)}_s + \frac12 \sum_{\substack{i,j \le n \\ i \ne j}} \int_0^t Z_s \prod_{m \le n,\,m \ne i,j} X^{k(m)}_s\,d\langle X^{k(i)}, X^{k(j)}\rangle_s$$

By definition,

$$\int_0^t Y_s\,dZ_s = \int_0^t \prod_{m=1}^n X^{k(m)}_s\,dX^{k(n+1)}_s$$

To evaluate the third term $\langle Y, Z\rangle_t$, we observe that by (10.3) and the formula for the covariance of stochastic integrals, (8.7),

$$\langle Y, Z\rangle_t = \sum_{i \le n} \int_0^t \prod_{m \le n,\,m \ne i} X^{k(m)}_s\,d\langle X^{k(i)}, X^{k(n+1)}\rangle_s$$

Adding the last three equalities gives

$$Y_t Z_t - Y_0 Z_0 = \sum_{i \le n+1} \int_0^t \prod_{m \le n+1,\,m \ne i} X^{k(m)}_s\,dX^{k(i)}_s + \frac12 \sum_{\substack{i,j \le n+1 \\ i \ne j}} \int_0^t \prod_{m \le n+1,\,m \ne i,j} X^{k(m)}_s\,d\langle X^{k(i)}, X^{k(j)}\rangle_s$$

(Notice that for each $i \le n$ in the sum for $\langle Y, Z\rangle_t$, there are two terms, $i$ with $j = n+1$ and $i = n+1$ with $j = i$, in the last sum.) The proof is complete. □

Chapter Summary

With the completion of the proof of the multivariate form of Itô's formula, we have enough stochastic calculus to carry us through most of the applications in the book, so we pause for a moment to recall what we've just done.

In Section 2.1 we introduced the integrands $H_s$, the predictable processes, and in Section 2.2 the (first) integrators $X_t$, the continuous local martingales. In Section 2.3 we defined the variance process $\langle X\rangle_t$ associated with a local martingale, and more generally the covariance $\langle X, Y\rangle_t$ of two local martingales. Theorem (3.11) tells us that the covariance could have been defined as the unique continuous predictable process $A_t$ that is locally of bounded variation, has $A_0 = 0$, and makes $X_t Y_t - A_t$ a martingale. The next result gives a pathwise interpretation of $\langle X, Y\rangle_t$:

(3.10) Quadratic Variation. Suppose $X$ and $Y$ are continuous local martingales. If $\Delta_n$ is a sequence of partitions of $[0,t]$ with mesh $|\Delta_n| \to 0$, then

$$\sum_i (X_{t^n_{i+1}} - X_{t^n_i})(Y_{t^n_{i+1}} - Y_{t^n_i}) \to \langle X, Y\rangle_t \quad \text{in probability}$$

Taking $X = Y$ we see that $\langle X\rangle_t$ is the "quadratic variation" of the path up to time $t$.


The developments described in the previous paragraph are used only in (9.3) of Chapter 4 and in Example 2.1 of Chapter 5. The material in Section 2.12 concerning Girsanov's formula is not used until Section 5.5. We would like to suggest that the reader proceed now to the beginning of Chapter 3.

2.11. The Meyer-Tanaka Formula, Local Time

In this section we will prove an extension of Itô's formula due to Meyer (1976) that has its roots in work of Tanaka (1963); see (11.4). Loosely speaking, it says that Itô's formula is valid if $f$ is a difference of two convex functions. Since such functions are differentiable except at a countable set and have a second derivative that is a nice signed measure, this is a modest generalization. However, this is the best one can do in general: if $B_t$ is a standard Brownian motion and $X_t = f(B_t)$ is a semimartingale, then $f$ must be the difference of two convex functions. See Çinlar, Jacod, Protter, and Sharpe (1980). The developments in this section follow those in Sections 4 and 5 of Chapter IV of Protter (1990), but things are simpler here since we have no jumps. Our first step is

(11.1) Theorem. Let $f : \mathbf{R} \to \mathbf{R}$ be convex and let $X$ be a continuous semimartingale. Then $f(X_t)$ is a semimartingale and

$$f(X_t) - f(X_0) = \int_0^t f'(X_s)\,dX_s + A^f_t$$

Here $f'(x) = \lim_{h \downarrow 0} (f(x) - f(x - h))/h$ is the left derivative, which exists at all $x$ by convexity, and $A^f_t$ is a continuous adapted increasing process.

Proof. Let $X_t = M_t + A_t$ be the decomposition of the semimartingale. By stopping we can suppose without loss of generality that $|X_t| \le N$, $|A|_t \le N$, and $\langle M\rangle_t \le N$ for all $t$. Let $g$ be a $C^\infty$ function with compact support in $(-\infty, 0)$ and having $\int g(s)\,ds = 1$. Let

$$(a)\qquad f_n(x) = \int f(x + y)\,n g(ny)\,dy$$

Then $f_n$ is convex and $C^\infty$. Using Itô's formula we have

$$(b)\qquad f_n(X_t) - f_n(X_0) = \int_0^t f'_n(X_s)\,dX_s + \frac12 \int_0^t f''_n(X_s)\,d\langle X\rangle_s$$

Our strategy for the proof will be to let $n \to \infty$ in (b) to get the desired formula. Since a convex function is Lipschitz continuous on each bounded interval, it is easy to see that $f_n(x) \to f(x)$ uniformly on compact sets. Since $|X_t| \le N$, it follows that

$$(c)\qquad f_n(X_t) \to f(X_t) \quad \text{uniformly in } t$$

To deal with the first term on the right, we differentiate (a) to get

$$(d)\qquad f'_n(x) = \int f'(x + y)\,n g(ny)\,dy$$

Since $g$ has compact support in $(-\infty, 0)$ and $f'$ is left continuous, $f'_n(x) \to f'(x)$ as $n \to \infty$. Now if $I^n_t = \int_0^t f'_n(X_s)\,dM_s$ and $I_t = \int_0^t f'(X_s)\,dM_s$, then the $L^2$ maximal inequality for martingales and the isometry property of stochastic integrals (Exercise 6.2) imply

$$E\big(\sup_t (I^n_t - I_t)^2\big) \le 4\,E\int_0^\infty \big(f'_n(X_s) - f'(X_s)\big)^2\,d\langle M\rangle_s \to 0$$

by the bounded convergence theorem. (Recall $\langle M\rangle_\infty \le N$.) By passing to a subsequence we can improve the last conclusion to

$$(e)\qquad \sup_t (I^n_t - I_t)^2 \to 0 \quad \text{almost surely}$$

Let $J^n_t = \int_0^t f'_n(X_s)\,dA_s$ and $J_t = \int_0^t f'(X_s)\,dA_s$. In this case it follows from the bounded convergence theorem that for almost every $\omega$

$$(f)\qquad \sup_t |J^n_t - J_t| \le \int_0^\infty |f'_n(X_s) - f'(X_s)|\,d|A|_s \to 0$$

To take the limit of the third and final term $K^n_t = \frac12 \int_0^t f''_n(X_s)\,d\langle X\rangle_s$, we note that (b) implies

$$(g)\qquad K^n_t = f_n(X_t) - f_n(X_0) - \int_0^t f'_n(X_s)\,dX_s$$

Combining (c), (e), and (f), we see that almost surely the right-hand side of (g) converges to a limit uniformly in $t$, so the left-hand side does also. Since $K^n_t$ is continuous, adapted, and increasing, the limit is also. □

If we fix $X$, then for a given $f$ we call $A^f_t$ the increasing process associated with $f$. The increasing process associated with $f(x) = |x - a|$ is called the local

time at $a$, and is denoted by $L^a_t$. To prove the first property that justifies this name, (11.3), we need the following preliminary result.

(11.2) Lemma. Let $h_1(x) = (x - a)^+$ and $h_2(x) = (x - a)^-$, and let $A^{h_i}$ be the increasing process associated with $h_i$. Then $A^{h_1}_t = A^{h_2}_t = L^a_t/2$.

Proof. We begin by observing $h_1 + h_2 = |x - a|$, so $A^{h_1}_t + A^{h_2}_t = L^a_t$. Our second claim is that since $h_1 - h_2 = x - a$ we have $A^{h_1}_t - A^{h_2}_t = 0$. To see this, let $g(x) = x - a$, which has $g'(x) = 1$, and note $X_t - X_0 = \int_0^t 1\,dX_s$, so the associated $A_t$ must be $\equiv 0$. □

(11.3) Theorem. $L^a_t$ only increases when $X_t = a$. To be precise, if we let $\mu_a$ be the measure with distribution function $t \to L^a_t$, then $\mu_a$ is supported by $\{t : X_t = a\}$.

Proof. Intuitively, $|X_t - a|$ is a local martingale except when $X_t = a$, so $L^a_t$ is constant when $X_t \ne a$. To prove (11.3), however, it is easier to use the alternative definitions in (11.2). Let $S < T$ be stopping times so that $(S, T] \subset \{t : X_t < a\}$. Applying (11.1) to $f(x) = (x - a)^+$ we have

$$(X_T - a)^+ - (X_S - a)^+ = \int_S^T 1_{(X_s > a)}\,dX_s + \frac12 (L^a_T - L^a_S)$$

The left-hand side and the integral on the right-hand side vanish, so $L^a_T = L^a_S$. Since this holds when $S \equiv q$ with $q$ any rational and $T = \inf\{t > S : X_t \ge a - 1/n\}$ for any $n$, it follows that $\mu_a(\{t : X_t < a\}) = 0$. A similar argument using $(x - a)^-$ instead of $(x - a)^+$ shows $\mu_a(\{t : X_t > a\}) = 0$, and the proof is complete. □

We are now ready for our main result.

(11.4) Meyer-Tanaka formula. Let $X$ be a continuous semimartingale. Let $f$ be the difference of two convex functions, let $f'$ be the left derivative of $f$, and suppose $f'' = \mu$ in the sense of distributions (i.e., $\mu$ has distribution function $f'$). Then

$$f(X_t) - f(X_0) = \int_0^t f'(X_s)\,dX_s + \frac12 \int \mu(da)\,L^a_t$$

Proof. Since $f$ is the difference of two convex functions and the formula is linear in $f$, we can suppose without loss of generality that $f$ is convex. By stopping we can suppose $|X_t| \le N$ and $\langle X\rangle_t \le N$ for all $t$. Having done this, we can let

$$g(x) = \frac12 \int_{-N}^N \mu(da)\,|x - a|$$

Since $(f - g)'' = 0$ on $[-N, N]$, it follows that $f(x) - g(x) = \alpha + \beta x$ for $|x| \le N$. Since the result is trivial for linear functions, it suffices now to prove the result for $g$, which is almost trivial. Differentiating the definition of $g$, we have

$$g'(x) = \frac12 \int_{-N}^N \mu(da)\,\mathrm{sign}(x - a)$$

Starting with the definition of the local time

$$|X_t - a| - |X_0 - a| = \int_0^t \mathrm{sign}(X_s - a)\,dX_s + L^a_t$$

and then integrating $\frac12 \int_{-N}^N \mu(da)$ gives

$$g(X_t) - g(X_0) = \frac12 \int_{-N}^N \mu(da) \int_0^t \mathrm{sign}(X_s - a)\,dX_s + \frac12 \int_{-N}^N \mu(da)\,L^a_t$$

(11.5) and (11.6) below will justify interchanging the two integrals in the first term on the right-hand side to give

$$g(X_t) - g(X_0) = \int_0^t g'(X_s)\,dX_s + \frac12 \int_{-N}^N \mu(da)\,L^a_t$$

Since $L^a_t = 0$ for $|a| > N$ (recall $|X_t| \le N$ and use (11.3)), this proves the result for $g$ and completes the proof of (11.4). □

For the next two results, let $S$ be a set, let $\mathcal{S}$ be a $\sigma$-field of subsets of $S$, and let $\mu$ be a finite measure on $(S, \mathcal{S})$. For our purposes it would be enough to take $S = \mathbf{R}$, but it is just as easy to treat a general set. The first step in justifying the interchange of the order of integration is to deal with a measurability issue: if we have a family $H^a_t(\omega)$ of integrands for $a \in S$, then we can define the integrals $\int_0^t H^a_s\,dX_s$ to be a measurable function of $(a, t, \omega)$.

(11.5) Lemma. Let $X$ be a continuous semimartingale with $X_0 = 0$, and let $H^a_s(\omega) = H(a, s, \omega)$ be bounded and $\mathcal{S} \times \Pi$ measurable. Then there is a

$Z(a, t, \omega)$, measurable with respect to $\mathcal{S} \times \Pi$, such that for $\mu$-almost every $a$, $t \to Z(a, t, \omega)$ is a continuous version of $\int_0^t H^a_s\,dX_s$.

Proof. Let $X_t = M_t + A_t$ be the decomposition of the semimartingale. By stopping we can suppose that $|A|_t \le N$, $|M_t| \le N$, and $\langle X\rangle_t = \langle M\rangle_t \le N$ for all $t$.

Let $\mathcal{H}$ be the collection of bounded $H \in \mathcal{S} \times \Pi$ for which the conclusion holds. We will check the assumptions of the Monotone Class Theorem, (2.3) in Chapter 1. Clearly $\mathcal{H}$ is a vector space. To check (i) of the MCT, suppose $H(a, t, \omega) = f(a) K(t, \omega)$ where $K(t, \omega) \in \Pi$ and $f(a) \in b\mathcal{S}$. In this case

$$\int_0^t H^a_s\,dX_s = f(a) \int_0^t K_s\,dX_s$$

so $H \in \mathcal{H}$. Taking $f$ and $K$ to be indicator functions of sets in $\mathcal{S}$ and $\Pi$ respectively, we have checked (i) of the MCT with $\mathcal{A} = \{A \times B : A \in \mathcal{S}, B \in \Pi\}$.

To check (ii) of the MCT, let $0 \le H^n \in \mathcal{H}$ with $H^n \uparrow H$ and $H$ bounded. The $L^2$ maximal inequality for martingales and the isometry property of the stochastic integral imply

$$E \sup_t |(H^{n,a} \cdot M)_t - (H^a \cdot M)_t|^2 \le 4 \sup_t E|(H^{n,a} \cdot M)_t - (H^a \cdot M)_t|^2 = 4\,E\int (H^{n,a}_s - H^a_s)^2\,d\langle M\rangle_s \to 0$$

By passing to a subsequence we have, for $\mu$-almost every $a$,

$$\sup_t |(H^{n_k,a} \cdot M)_t - (H^a \cdot M)_t| \to 0$$
To deal with the bounded variation part, we note that

$$\sup_t |(H^{n,a} \cdot A)_t - (H^a \cdot A)_t| \le \int_0^\infty |H^{n,a}_s - H^a_s|\,d|A|_s \to 0$$

by the bounded convergence theorem; indeed, for the special case $X_t = A_t$ the conclusion is just Fubini's theorem from measure theory. This completes the proof of (11.5). □

(11.6) Lemma. Let $X$, $H$, and $Z$ be as in (11.5). Then with probability 1, for all $t$,

$$\int_S Z(a, t, \omega)\,\mu(da) = (\bar{H} \cdot X)_t \qquad \text{where } \bar{H}_s(\omega) = \int_S H^a_s(\omega)\,\mu(da)$$

Proof. Let $X_t = M_t + A_t$ be the decomposition of the semimartingale. By stopping we can suppose that $|A|_t \le N$, $|M_t| \le N$, and $\langle X\rangle_t = \langle M\rangle_t \le N$ for all $t$. As noted in the proof of (11.5), the result in the special case $X_t = A_t$ is just Fubini's theorem from measure theory, so for the rest of the proof we will suppose $X_t = M_t$.

Let $\mathcal{H}$ be the collection of bounded $H \in \mathcal{S} \times \Pi$ for which the conclusion holds. Again, we will check the assumptions of the Monotone Class Theorem, (2.3) in Chapter 1. Clearly $\mathcal{H}$ is a vector space. To check (i) of the MCT, suppose $H(a, t, \omega) = f(a) K(t, \omega)$ where $K(t, \omega) \in \Pi$ and $f(a) \in b\mathcal{S}$. In this case $Z^a_t = f(a)(K \cdot X)_t$, so

$$\int_S Z^a_t\,\mu(da) = \Big( \int_S f(a)\,\mu(da) \Big)(K \cdot X)_t = (\bar{H} \cdot X)_t$$

so $H \in \mathcal{H}$. Taking $f$ and $K$ to be indicator functions of sets in $\mathcal{S}$ and $\Pi$ respectively, we have checked (i) of the MCT with $\mathcal{A} = \{A \times B : A \in \mathcal{S}, B \in \Pi\}$.

To check (ii) of the MCT, let $0 \le H^n \in \mathcal{H}$ with $H^n \uparrow H$ and $H$ bounded. Letting $\|\mu\|$ be the total mass of $\mu$, applying Jensen's inequality to the probability measure $\mu(\cdot)/\|\mu\|$, and then multiplying each side by $\|\mu\|^2$, we have

$$E\Big( \sup_t \Big| \int_S (Z^{n,a}_t - Z^a_t)\,\mu(da) \Big|^2 \Big) \le \|\mu\|\,E\Big( \sup_t \int_S |Z^{n,a}_t - Z^a_t|^2\,\mu(da) \Big)$$

Using Fubini's theorem, the $L^2$ maximal inequality, and the isometry property, the right-hand side is

$$\le \|\mu\| \int_S E \sup_t |Z^{n,a}_t - Z^a_t|^2\,\mu(da) \le 4\|\mu\| \int_S E\int_0^\infty (H^{n,a}_s - H^a_s)^2\,d\langle X\rangle_s\,\mu(da) \to 0$$

by using the bounded convergence theorem three times.


Taking $\bar{H}^n_t = \int_S H^{n,a}_t\,\mu(da)$ and passing to a subsequence $n_k$, we have that with probability one $(\bar{H}^{n_k} \cdot X)_t = \int_S Z^{n_k,a}_t\,\mu(da)$ converges uniformly to $\int_S Z^a_t\,\mu(da)$. The last detail is to check that $\bar{H}^n \cdot X$ converges to $\bar{H} \cdot X$ in $\mathcal{M}^2$, for then it follows that $\int_S Z^a_t\,\mu(da) = (\bar{H} \cdot X)_t$. In view of the isometry property, we can complete the last detail by showing that $\|\bar{H}^n - \bar{H}\|_X \to 0$, but this is routine:

$$E\int_0^\infty \Big( \int_S H^{n,a}_s\,\mu(da) - \int_S H^a_s\,\mu(da) \Big)^2\,d\langle X\rangle_s \le \|\mu\|\,E\int_0^\infty \int_S (H^{n,a}_s - H^a_s)^2\,\mu(da)\,d\langle X\rangle_s \to 0$$

by using the bounded convergence theorem three times. □
We can now complete the identi:cation of L3 ms the local time at c.

(11.7) Theorem. Let $X$ be a continuous semimartingale with local time $L^a_t$. If $g$ is a bounded Borel measurable function, then

$$\int_{-\infty}^{\infty} g(a)\,L^a_t\,da = \int_0^t g(X_s)\,d\langle X\rangle_s$$

Proof. Suppose first that $g$ is continuous, and let $f \in C^2$ with $f'' = g$. In this case, comparing (11.4) with Itô's formula proves the identity in question. Since the identity holds for any continuous function, using the Monotone Class Theorem proves the result for a bounded measurable $g$. □

When $X$ is a Brownian motion, the identity in (11.7) becomes

$$\int_{-\infty}^{\infty} g(a)\,L^a_t\,da = \int_0^t g(B_s)\,ds$$

From this we see that $L^a_t$ can be thought of as the time spent at $a$ up to time $t$, or, to be precise, it is a density function for the occupation time measure $\mu_t(A) = \int_0^t 1_A(X_s)\,ds$.

There are many ways of defining local time for Brownian motion. The one we have given above is famous for making it easy to prove that there is a version in which $L^a_t$ is a jointly continuous function of $a$ and $t$. It is somewhat remarkable that this property is true for a continuous local martingale, and it is not any more difficult to prove the result in this generality.

(11.8) Theorem. Let $X$ be a continuous local martingale. There is a version of the $L^a_t$ in which $(a, t) \to L^a_t$ is continuous.

Example 11.1. (11.8) is not true for $X_t = |B_t|$, which is a semimartingale by (11.1). Introducing subscripts to indicate the process we are referring to, we clearly have $L^a_{|B|}(t) = L^a_B(t) + L^{-a}_B(t)$ for $a > 0$ and $L^a_{|B|}(t) = 0$ for $a < 0$. Since $L^0_B(t) \not\equiv 0$, it follows that $a \to L^a_{|B|}(t)$ is discontinuous at $a = 0$.

One can complain that this example is trivial since 0 is an end point of the set of possible values. A related example with 0 in the middle is provided by skew Brownian motion. Start with $|B_t|$ and flip coins with probability $p > 1/2$ of $+$ and $1 - p$ of $-$ to assign signs to excursions of $B_t$, i.e., maximal time intervals $(a, b)$ on which $|B_t| > 0$. If we call the resulting process $Y_t$, then $L^a_Y(t) \to 2p\,L^0(t)$ as $a \downarrow 0$ and $L^a_Y(t) \to 2(1-p)\,L^0(t)$ as $a \uparrow 0$. For details see Walsh (1978).

Proof. As usual, by stopping we can suppose that $|X_t| \le N$ and $\langle X\rangle_t \le N$ for all $t$. By (11.2),

$$\frac12 L^a_t = (X_t - a)^+ - (X_0 - a)^+ - \int_0^t 1_{(X_s > a)}\,dX_s$$

It is clear that $(a, t) \to (X_t - a)^+ - (X_0 - a)^+$ is continuous, so we only have to investigate the joint continuity of the stochastic integral

$$I^a_t = \int_0^t 1_{(X_s > a)}\,dX_s$$

Fix a time $T < \infty$ and regard $a \to I^a$ as a mapping from $\mathbf{R}$ to $C([0,T], \mathbf{R})$, the real valued functions continuous on $[0,T]$, which we equip with the sup norm $\|f\| = \sup_{0 \le s \le T} |f(s)|$. In view of Kolmogorov's continuity theorem ((1.6) in Chapter 1), it suffices to show that for some $\alpha, \beta > 0$ we have

$$E\|I^b - I^a\|^\beta \le C|b - a|^{1+\alpha}$$

Suppose without loss of generality that $a < b$. One of the Burkholder-Davis-Gundy inequalities ((5.1) in Chapter 3) implies that

$$E\|I^b - I^a\|^4 \le C\,E\Big( \int_0^T 1_{(a < X_s \le b)}\,d\langle X\rangle_s \Big)^2$$

To bound the right-hand side, we note that using (11.7) and then the Cauchy-Schwarz inequality and Fubini's theorem,

$$E\Big( \int_0^T 1_{(a < X_s \le b)}\,d\langle X\rangle_s \Big)^2 = E\Big( \int_a^b L^x_T\,dx \Big)^2 \le (b - a)\,E\int_a^b (L^x_T)^2\,dx = (b - a)\int_a^b E(L^x_T)^2\,dx \le (b - a)^2 \sup_x E(L^x_T)^2$$

Page 51: Stochastic Calculus

90 Chapter 2 Stochastic Integration

To bound the sup, we recall the definition of L_T^x:

L_T^x = |X_T − x| − |X_0 − x| − ∫_0^T sgn(X_s − x) dX_s

which with the trivial inequalities (a + b)^2 ≤ 2a^2 + 2b^2 and ||y − x| − |z − x|| ≤ |y − z| implies that

Section 2.12 Girsanov's Formula 91

This shows αY is a martingale/P. If Y is a local martingale/Q then there is a sequence of stopping times T_n ↑ ∞ so that Y^{T_n} is a martingale/Q and hence α_t Y_{t∧T_n} is a martingale/P. The optional stopping theorem implies that α_{t∧T_n} Y_{t∧T_n} is a martingale/P and it follows that α_t Y_t is a local martingale/P.

To prove the converse, observe that (a) interchanging the roles of P and Q and applying the last result shows that if β_t = dP_t/dQ_t and Z_t is a (local) martingale/P, then β_t Z_t is a (local) martingale/Q, and (b) β_t = α_t^{−1}, so letting Z_t = α_t Y_t we have the desired result. □

Since 1 is a martingale/Q we have

(12.2) Corollary. α_t = dQ_t/dP_t is a martingale/P.

There is a converse of (12.2) which will be useful in constructing examples.

(12.3) Lemma. Given α_t a nonnegative martingale/P there is a unique locally equivalent probability measure Q so that dQ_t/dP_t = α_t.

Proof The last equation in the lemma defines the restriction of Q to F_t for any t. To see that this defines a unique measure on C, let t_1 < t_2 < … < t_n, let V_i be Borel subsets of R^d, and let B = {ω : ω(t_i) ∈ V_i for 1 ≤ i ≤ n}. Define the finite dimensional distributions of a measure Q by setting

Q(B) = ∫_B α_t dP

whenever t ≥ t_n. The martingale property of α_t implies that the finite dimensional distributions are consistent, so we have defined a unique measure on (C, C). □

We are ready to prove the main result of the section.

(12.4) Girsanov's formula. If X is a local martingale/P and we let A_t = ∫_0^t α_s^{−1} d⟨α, X⟩_s, then X_t − A_t is a local martingale/Q.

Proof Although the formula for A looks a little strange, it is easy to see that it must be the right answer. If we suppose that A is locally of b.v. and has A_0 = 0, then integrating by parts, i.e., using (10.1), and noting ⟨α, A⟩_t = 0 since A has bounded variation, gives

(12.5) α_t(X_t − A_t) − α_0 X_0 = ∫_0^t (X_s − A_s) dα_s + ∫_0^t α_s dX_s − ∫_0^t α_s dA_s + ⟨α, X⟩_t

E (L_T^x)^2 ≤ 2E (X_T − X_0)^2 + 2E (∫_0^T sgn(X_s − x) dX_s)^2 ≤ 2(2N)^2 + 2E ⟨X⟩_T ≤ 8N^2 + 2N

by the isometry property and the bounds we have imposed on |X| and ⟨X⟩_t by stopping. This gives us what we need to check the inequality in Kolmogorov's continuity theorem and completes the proof. □

Remark. The proof above almost works for semimartingales X_t = M_t + A_t. If we define I_t^a = ∫_0^t 1_(X_s > a) dM_s and J_t^a = ∫_0^t 1_(X_s > a) dA_s then the argument above shows that (a, t) → I_t^a is continuous, so we will get the desired conclusion if we adopt assumptions that imply (a, t) → J_t^a is continuous.

2.12. Girsanov's Formula

In this section, we will show that the collection of semimartingales and the definition of the stochastic integral are not affected by a locally equivalent change of measure. For concreteness, we will work on the canonical probability space (C, C), with F_t the filtration generated by the coordinate maps X_t(ω) = ω(t).

Two measures Q and P defined on a filtration F_t are said to be locally equivalent if for each t their restrictions to F_t, Q_t and P_t, are equivalent, i.e., mutually absolutely continuous. In this case we let α_t = dQ_t/dP_t. The reasons for our interest in this quantity will become clear as the story unfolds.

(12.1) Lemma. X is a (local) martingale/Q if and only if α_t X_t is a (local) martingale/P.

Proof The parentheses are meant to indicate that the statement is true if the two locals are removed. Let Y be a martingale/Q, s < t and A ∈ F_s. Note that if Z ∈ F_t, then (i) ∫_A Z dP = ∫_A Z dP_t, and (ii) ∫_A Z α_t dP_t = ∫_A Z dQ_t. So by (i), (ii), the fact that Y is a martingale/Q, (ii), and (i) we have

∫_A α_t Y_t dP = ∫_A α_t Y_t dP_t = ∫_A Y_t dQ_t = ∫_A Y_s dQ_s = ∫_A α_s Y_s dP_s = ∫_A α_s Y_s dP


92 Chapter 2 Stochastic Integration

At this point, we need the assumption made in Section 2.2 that our filtration only admits continuous martingales, so there is a continuous version of α_t to which we can apply our integration-by-parts formula.

Since α_t and X_t are local martingales/P, the first two terms on the right in (12.5) are local martingales/P. In view of (12.1), if we want X_t − A_t to be a local martingale/Q, we need to choose A so that the sum of the third and fourth terms ≡ 0, that is,


(12.6) Theorem. The quadratic variation ⟨X⟩_t and hence the covariance ⟨X, Y⟩_t is the same under P and Q.

Proof The second conclusion follows from the first and (3.9). To prove the first we recall (8.6) shows that if Δ_n = {0 = t_0^n < t_1^n < … < t_{k(n)}^n = t} is a sequence of partitions of [0, t] with mesh |Δ_n| → 0 then

Σ_m (X(t_{m+1}^n) − X(t_m^n))^2 → ⟨X⟩_t

∫_0^t α_s dA_s = ⟨α, X⟩_t

From the last equation and the associative law (9.6), it is clear that it is necessary and sufficient that

A_t = ∫_0^t α_s^{−1} d⟨α, X⟩_s

The last detail remaining is to prove that the integral that defines A_t exists. Let T_n = inf{t : α_t < n^{−1}}. If t ≤ T_n, then

∫_0^t α_s^{−1} d|⟨α, X⟩|_s ≤ n ∫_0^t d|⟨α, X⟩|_s < ∞

by the Kunita-Watanabe inequality. So if T = lim_{n→∞} T_n then A_t is well defined for t ≤ T. The optional stopping theorem implies that E α_{t∧T_n} = E α_t, so noting α_{t∧T_n} = α_t on {T_n > t} we have

E(α_t; T_n ≤ t) = E(α_{T_n}; T_n ≤ t)

Since α_t ≥ 0 is continuous and T_n ≤ T, it follows that

E(α_t; T ≤ t) ≤ E(α_t; T_n ≤ t) = E(α_{T_n}; T_n ≤ t) ≤ 1/n

Letting n → ∞, we see that α_t = 0 a.s. on {T ≤ t}, so

Q_t(T ≤ t) = E(α_t; T ≤ t) = 0

in probability for any semimartingale/P. Since convergence in probability is not affected by a locally equivalent change of measure, the desired conclusion follows. □

(12.7) Theorem. If H ∈ ℓΠ then (H · X)_t is the same under P and Q.

Proof The value is clearly the same for simple integrands. Let M and N be the local martingale parts and A and B be the locally bounded variation parts of X under P and Q respectively. Note that (12.6) implies ⟨M⟩_t = ⟨N⟩_t. Let

T_n = inf{t : ⟨M⟩_t, |A|_t or |B|_t ≥ n}

If H ∈ ℓΠ and H_t = 0 for t ≥ T_n then by (4.5) we can find simple H^m so that

||H^m − H||_⟨M⟩, ||H^m − H||_⟨N⟩ → 0 and ∫ |H_s^m − H_s| d(|A| + |B|)_s → 0

These conditions allow us to pass to the limit in the equality

(H^m · (M + A))_t = (H^m · (N + B))_t

(the left-hand side being computed under P and the right under Q) to conclude that for any H ∈ ℓΠ the integrals under P and Q agree up to time T_n. Since n is arbitrary and T_n → ∞ the proof is complete. □

But P_t is equivalent to Q_t, so 0 = Q_t(T ≤ t) = P_t(T ≤ t). Since t is arbitrary, P(T < ∞) = 0, and the proof is complete. □

(12.4) shows that the collection of semimartingales is not affected by change of measure. Our next goal is to show that if X is a semimartingale/P, Q is locally equivalent to P, and H ∈ ℓΠ, then the integral (H · X)_t is the same under P and Q. The first step is


3 Brownian Motion, II

In this chapter we will use Itô's formula to deepen our understanding of Brownian motion or, more generally, continuous local martingales.

3.1. Recurrence and Transience

If B_t is a d-dimensional Brownian motion and B_t^i is the ith component, then B_t^i is a martingale with ⟨B^i⟩_t = t. If i ≠ j, Exercise 2.2 in Chapter 1 tells us that B_t^i B_t^j is a martingale, so (3.11) in Chapter 2 implies ⟨B^i, B^j⟩_t = 0. Using this

information in Itô's formula we see that if f : R^d → R is C^2 then

f(B_t) − f(B_0) = Σ_i ∫_0^t D_i f(B_s) dB_s^i + (1/2) Σ_i ∫_0^t D_ii f(B_s) ds

Writing ∇f = (D_1 f, …, D_d f) for the gradient of f, and Δf = Σ_i D_ii f for the Laplacian of f, we can write the last equation more neatly as

(1.1) f(B_t) − f(B_0) = ∫_0^t ∇f(B_s) · dB_s + (1/2) ∫_0^t Δf(B_s) ds

Here, the dot in the first term stands for the inner product of two vectors, and the precise meaning of that term is given in the previous equation.

Functions with Δf = 0 are called harmonic. (1.1) shows that if we compose a harmonic function with Brownian motion the result is a local martingale. The next result (and judicious choices of harmonic functions) is the key to deriving properties of Brownian motion from Itô's formula.

(1.2) Theorem. Let G be a bounded open set and τ = inf{t : B_t ∉ G}. If f ∈ C^2 and Δf = 0 in G, and f is continuous on the closure Ḡ of G, then for x ∈ G we have f(x) = E_x f(B_τ).

Proof Our first step is to prove that P_x(τ < ∞) = 1. Let K = sup{|x − y| : x, y ∈ G} be the diameter of G. If x ∈ G and |B_1 − x| > K then B_1 ∉ G and τ < 1. Thus

P_x(τ < 1) ≥ P_x(|B_1 − x| > K) = P_0(|B_1| > K) = ε > 0


96 Chapter 3 Brownian Motion, II

This shows sup_x P_x(τ ≥ k) ≤ (1 − ε)^k holds when k = 1. To prove the last result by induction on k we observe that if p(x, y) is the transition probability for Brownian motion, the Markov property implies

P_x(τ ≥ k) ≤ ∫_G p(x, y) P_y(τ ≥ k − 1) dy ≤ (1 − ε)^{k−1} P_x(B_1 ∈ G) ≤ (1 − ε)^k

where the last two inequalities follow from the induction assumption and the fact that |B_1 − x| > K implies B_1 ∉ G. The last result implies that P_x(τ < ∞) = 1 for all x ∈ G and moreover that

(1.3) sup_x E_x τ^p < ∞ for all 0 < p < ∞

Section 3.1 Recurrence and Transience 97

(1.5) Theorem. For all x and y, P_x(T_y < ∞) = 1.

Proof Since P_x(T_y < ∞) = P_{x−y}(T_0 < ∞), it suffices to prove the result when y = 0. A little reflection (pun intended) shows we can also suppose x > 0. Now using (1.4), P_x(T_0 < T_{Mx}) = (M − 1)/M, and the right-hand side approaches 1 as M → ∞. □

It is trivial to improve (1.5) to conclude that

(1.6) Theorem. For any s < ∞, P_x(B_t = y for some t ≥ s) = 1.

Proof By the Markov property,

P_x(B_t = y for some t ≥ s) = E_x(P_{B_s}(T_y < ∞)) = 1 □

The conclusion of (1.6) implies (argue by contradiction) that for any y, with probability 1 there is a sequence of times t_n ↑ ∞ (which will depend on the outcome ω) so that B_{t_n} = y, a conclusion we will hereafter abbreviate as "B_t = y infinitely often" or "B_t = y i.o." In the terminology of the theory of Markov processes, what we have shown is that one-dimensional Brownian motion is recurrent.

Exercise 1.2 Use (1.6) to conclude

When Δf = 0 in G, (1.1) implies that f(B_t) is a local martingale on [0, τ). We have assumed that f is continuous on Ḡ and G is bounded, so f is bounded and if we apply the time change γ defined in Section 2.2, X_t = f(B_{γ(t)}) is a bounded martingale (with respect to G_t = F_{γ(t)}). Being a bounded martingale, X_t converges almost surely to a limit X_∞ which has X_t = E_x(X_∞ | G_t) and hence E_x X_0 = E_x X_∞. Since τ < ∞ and f is continuous on Ḡ, X_∞ = f(B_τ). Taking t = 0 it follows that f(x) = E_x X_0 = E_x X_∞ = E_x f(B_τ). □

E_x τ = ∫_0^∞ P_x(τ > t) dt

In the rest of this section we will use (1.2) to prove some results concerning the range of Brownian motion {B_t : t ≥ 0}. We start with the one-dimensional case.

(1.4) Theorem. Let a < x < b and T = inf{t : B_t ∉ (a, b)}. Then

P_x(T_b < T_a) = (x − a)/(b − a),  P_x(T_a < T_b) = (b − x)/(b − a)

Proof f(x) = (b − x)/(b − a) has f'' = 0 in (a, b), is continuous on [a, b], and has f(a) = 1, f(b) = 0, so (1.2) implies f(x) = E_x f(B_T) = P_x(T_a < T_b). □
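As a quick numerical sanity check (not from the text), the same optional stopping argument applies to a simple random walk, where the exit probability is exactly x/b for barriers at 0 and b. A minimal Monte Carlo sketch; the parameters x = 3, b = 10 and the path count are arbitrary choices:

```python
import random

# Simple random walk from x with absorbing barriers at 0 and b:
# P(hit b before 0) = x/b, the random-walk analogue of (1.4).
random.seed(42)

def hits_b_before_0(x, b):
    while 0 < x < b:
        x += 1 if random.random() < 0.5 else -1
    return x == b

n = 20000
x, b = 3, 10
est = sum(hits_b_before_0(x, b) for _ in range(n)) / n
print(est)  # theory: x/b = 0.3
```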

Exercise 1.1 Deduce the last result by noting that B_t is a martingale and using the optional stopping theorem at time T.

Let T_x = inf{t : B_t = x}. From (1.4), it follows immediately that

limsup_{t→∞} B_t = ∞,  liminf_{t→∞} B_t = −∞

In order to study Brownian motion in d ≥ 2, we need to find some appropriate harmonic functions. In view of the spherical symmetry of Brownian motion, an obvious way to do this is to let φ(x) = f(|x|^2) and try to pick f : R → R so that Δφ = 0. We use |x|^2 = x_1^2 + … + x_d^2 rather than |x| since it is easier to differentiate:

D_i f(|x|^2) = f'(|x|^2) 2x_i
D_ii f(|x|^2) = f''(|x|^2) 4x_i^2 + 2f'(|x|^2)

Therefore, for Δφ = 0 we need

0 = Σ_i (f''(|x|^2) 4x_i^2 + 2f'(|x|^2)) = 4|x|^2 f''(|x|^2) + 2d f'(|x|^2)


98 Chapter 3 Brownian Motion, II

Letting y = |x|^2, we can write the above as 4y f''(y) + 2d f'(y) = 0 or, if y > 0,


for all ε > 0, so P_0(B_t = 0 for some t > 0) = 0, and thanks to our definition of S_0 = inf{t > 0 : B_t = 0}, we have

(1.10) P_x(S_0 < ∞) = 0 for all x

Thus, in d ≥ 2 Brownian motion will not hit 0 at a positive time even if it starts there.

Exercise 1.3 Use the continuity of the Brownian path and P_x(S_0 = ∞) = 1 to conclude that if x ≠ 0 then P_x(S_r ↑ ∞ as r ↓ 0) = 1.

For d ≥ 3, formula (1.7) says

(1.11) P_x(S_r < S_R) = (R^{2−d} − |x|^{2−d}) / (R^{2−d} − r^{2−d})

There is no point in fixing R and letting r → 0 here. The fact that two-dimensional Brownian motion does not hit points implies that three-dimensional Brownian motion does not hit points and indeed will not hit the line {x : x_1 = x_2 = 0}. If we fix r and let R → ∞ in (1.11) we get

(1.12) P_x(S_r < ∞) = (r/|x|)^{d−2} < 1 if |x| > r

From the last result it follows easily that for d ≥ 3, Brownian motion is transient, i.e. it does not return infinitely often to any bounded set.
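The annulus formula is easy to test numerically. A Monte Carlo sketch (not from the text; the step size, radii r = 1, R = 4, starting point |x| = 2, and path count are arbitrary choices): in d = 3, (1.11) gives P(S_1 < S_4) = (1/4 − 1/2)/(1/4 − 1) = 1/3, and letting R → ∞ recovers (1.12).

```python
import numpy as np

# Discretized 3-d Brownian motion started on the sphere |x| = 2; estimate
# the probability of reaching radius 1 before radius 4.
rng = np.random.default_rng(9)
n, dt = 2000, 2e-3
pos = np.zeros((n, 3))
pos[:, 0] = 2.0
alive = np.ones(n, dtype=bool)
hit_inner = np.zeros(n, dtype=bool)
while alive.any():
    k = int(alive.sum())
    pos[alive] += rng.normal(scale=np.sqrt(dt), size=(k, 3))
    rad = np.linalg.norm(pos, axis=1)
    hit_inner |= alive & (rad <= 1.0)
    alive &= (rad > 1.0) & (rad < 4.0)
est = hit_inner.mean()
print(est)  # theory: 1/3
```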

(1.13) Theorem. As t → ∞, |B_t| → ∞ a.s.

Proof Let A_n = {|B_t| > n^{1/2} for all t ≥ S_n} and note that S_n < ∞ by (1.3). The strong Markov property implies

P_x(A_n^c) = E_x(P_{B(S_n)}(S_{n^{1/2}} < ∞)) = (n^{1/2}/n)^{d−2} → 0

as n → ∞. Now limsup A_n = ∩_{N≥1} ∪_{n≥N} A_n has

P(limsup A_n) ≥ limsup P(A_n) = 1

so infinitely often the Brownian path never returns to {x : |x| ≤ n^{1/2}} after time S_n, and this implies the desired result. □

Dvoretzky and Erdős (1951) have proved the following result about how fast Brownian motion goes to ∞ in d ≥ 3.

f''(y) = −(d/2y) f'(y)

Taking f'(y) = C y^{−d/2} guarantees Δφ = 0 for x ≠ 0, so by choosing C appropriately we can let

φ(x) = log |x|   d = 2
φ(x) = |x|^{2−d} d ≥ 3

We are now ready to use (1.2) in d ≥ 2. Let S_r = inf{t : |B_t| = r} and suppose r < |x| < R. Since φ has Δφ = 0 in G = {x : r < |x| < R}, and φ is continuous on Ḡ, (1.2) implies

φ(x) = E_x φ(B_τ) = φ(r) P_x(S_r < S_R) + φ(R)(1 − P_x(S_r < S_R))

where φ(r) is short for the value of φ(x) on {x : |x| = r}. Solving now gives

(1.7) P_x(S_r < S_R) = (φ(R) − φ(x)) / (φ(R) − φ(r))

In d = 2, the last formula says

(1.8) P_x(S_r < S_R) = (log R − log |x|) / (log R − log r)

If we fix r and let R → ∞ in (1.8), the right-hand side goes to 1. So

P_x(S_r < ∞) = 1 for any x and any r > 0
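The two-dimensional formula can be checked the same way. A Monte Carlo sketch (not from the text; step size, radii r = 1, R = 4, starting point |x| = 2, and path count are arbitrary choices): (1.8) gives P(S_1 < S_4) = (log 4 − log 2)/(log 4 − log 1) = 1/2.

```python
import numpy as np

# Discretized 2-d Brownian motion started on the circle |x| = 2; estimate
# the probability of reaching radius 1 before radius 4.
rng = np.random.default_rng(0)
n, dt = 2000, 2e-3
pos = np.zeros((n, 2))
pos[:, 0] = 2.0
alive = np.ones(n, dtype=bool)
hit_inner = np.zeros(n, dtype=bool)
while alive.any():
    k = int(alive.sum())
    pos[alive] += rng.normal(scale=np.sqrt(dt), size=(k, 2))
    rad = np.linalg.norm(pos, axis=1)
    hit_inner |= alive & (rad <= 1.0)
    alive &= (rad > 1.0) & (rad < 4.0)
est = hit_inner.mean()
print(est)  # theory: 0.5
```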

and repeating the proof of (1.6) shows that

(1.9) Theorem. Two-dimensional Brownian motion is recurrent in the sense that if G is any open set, then P_x(B_t ∈ G i.o.) = 1.

If we fix R, let r → 0 in (1.8), and let S_0 = inf{t > 0 : B_t = 0}, then for x ≠ 0

P_x(S_0 < S_R) ≤ lim_{r→0} P_x(S_r < S_R) = 0

Since this holds for all R and since the continuity of Brownian paths implies S_R ↑ ∞ as R ↑ ∞, we have P_x(S_0 < ∞) = 0 for all x ≠ 0. To extend the last result to x = 0 we note that the Markov property implies

P_0(B_t = 0 for some t ≥ ε) = E_0(P_{B_ε}(T_0 < ∞)) = 0


100 Chapter 3 Brownian Motion, II

(1.14) Theorem. Suppose g(t) is positive and decreasing. Then

P_0(|B_t| ≤ g(t)√t i.o. as t ↑ ∞) = 1 or 0

according as ∫^∞ g(t)^{d−2} t^{−1} dt = ∞ or < ∞.

Here the absence of the lower limit implies that we are only concerned with the behavior of the integral "near ∞." A little calculus shows that

∫^∞ t^{−1} (log t)^{−α} dt = ∞ or < ∞

according as α ≤ 1 or α > 1, so B_t goes to ∞ faster than √t/(log t)^{α/(d−2)} for any α > 1. Note that in view of the Brownian scaling relationship B_t =_d t^{1/2} B_1 we could not sensibly expect escape at a faster rate than √t. The last result shows that the escape rate is not much slower.

Review. At this point, we have derived the basic facts about the recurrence and transience of Brownian motion. What we have found is that

(i) P_x(|B_t| < 1 for some t ≥ 0) ≡ 1 if and only if d ≤ 2
(ii) P_x(B_t = 0 for some t > 0) ≡ 0 in d ≥ 2.

The reader should observe that these facts can be traced to properties of what we have called φ, the (unique up to linear transformations) spherically symmetric function that has Δφ(x) = 0 for all x ≠ 0, that is

φ(x) = |x|        d = 1
φ(x) = log |x|    d = 2
φ(x) = |x|^{2−d}  d ≥ 3

and the features relevant for (i) and (ii) above are

(i) φ(x) → ∞ as |x| → ∞ if and only if d ≤ 2
(ii) |φ(x)| → ∞ as x → 0 in d ≥ 2.

3.2. Occupation Times

Let D = B(0, r) = {y : |y| < r} be the ball of radius r centered at 0. In Section 3.1, we learned that B_t will return to D i.o. in d ≤ 2 but not in d ≥ 3. In this

Section 3.2 Occupation Times 101

section we will investigate the occupation time ∫_0^∞ 1_D(B_t) dt and show that for any x

(2.1) P_x(∫_0^∞ 1_D(B_t) dt = ∞) = 1 in d ≤ 2

(2.2) E_x ∫_0^∞ 1_D(B_t) dt < ∞ in d ≥ 3

Proof of (2.1) Let T_0 = 0 and G = B(0, 2r). For k ≥ 1 let

S_k = inf{t > T_{k−1} : B_t ∈ D},  T_k = inf{t > S_k : B_t ∉ G}

Writing T for T_1 and using the strong Markov property, we get for k ≥ 1

P_x(∫_{S_k}^{T_k} 1_D(B_t) dt ≥ s | F_{S_k}) = P_{B(S_k)}(∫_0^T 1_D(B_t) dt ≥ s)

From this and (4.5) in Chapter 1 it follows that

∫_{S_k}^{T_k} 1_D(B_t) dt are i.i.d.

Since these random variables have positive mean, it follows from the strong law of large numbers that

∫_0^∞ 1_D(B_t) dt ≥ lim_{K→∞} Σ_{k=1}^K ∫_{S_k}^{T_k} 1_D(B_t) dt = ∞ a.s.

proving the desired result.

Proof of (2.2) If f is a nonnegative function, then Fubini's theorem implies

E_x ∫_0^∞ f(B_t) dt = ∫_0^∞ E_x f(B_t) dt = ∫_0^∞ ∫ p_t(x, y) f(y) dy dt
= ∫ (∫_0^∞ p_t(x, y) dt) f(y) dy

where p_t(x, y) = (2πt)^{−d/2} e^{−|x−y|^2/2t} is the transition density for Brownian motion. As t → ∞, p_t(x, y) ~ (2πt)^{−d/2}, so if d ≤ 2 then ∫_0^∞ p_t(x, y) dt = ∞.


102 Chapter 3 Brownian Motion, II

When d ≥ 3, changing variables t = |x − y|^2/2s gives


where the c_t are constants we will choose to make the integral converge (at least when x ≠ y). To see why this modified definition might be useful, note that if ∫ f(y) dy = 0 then (assuming we can use Fubini's theorem)

(2.3) ∫_0^∞ p_t(x, y) dt = ∫_0^∞ (2πt)^{−d/2} e^{−|x−y|^2/2t} dt
= ∫_0^∞ (s/(π|x − y|^2))^{d/2} e^{−s} (|x − y|^2/2s^2) ds
= (|x − y|^{2−d}/2π^{d/2}) ∫_0^∞ s^{(d/2)−2} e^{−s} ds
= (Γ((d/2) − 1)/2π^{d/2}) |x − y|^{2−d}

∫ G(x, y) f(y) dy = ∫_0^∞ E_x f(B_t) dt

When d = 1, we let c_t = p_t(0, 0). With this choice,

G(x, y) = (1/√(2π)) ∫_0^∞ (e^{−(y−x)^2/2t} − 1) t^{−1/2} dt

and the integral converges, since the integrand is ≤ 0 and ~ −(y − x)^2/2t^{3/2}

as t → ∞. Changing variables t = (y − x)^2/2u gives (2.5) below. Here Γ(α) = ∫_0^∞ s^{α−1} e^{−s} ds is the usual gamma function. If we define

G(x, y) = ∫_0^∞ p_t(x, y) dt

then in d ≥ 3, G(x, y) < ∞ for x ≠ y, and

(2.4) E_x ∫_0^∞ f(B_t) dt = ∫ G(x, y) f(y) dy

To complete the proof of (2.2) now we observe that taking f = 1_D with D = B(0, r) and changing to polar coordinates gives

∫_D G(0, y) dy = ∫_0^r c_d s^{d−1} C s^{2−d} ds = (c_d C/2) r^2 < ∞

-y.r2

< xG(0,p) p = s d 2D 0

* pttz, v)dt(?(z, y4 = j, ,

z./-.f(s,)a

- /c(z,,).f(,)oJ 0

(2.5)

t) 2 1/2 a1 -u

j.)'u

-(!/

- z)G4, p) = (e -

a z du,/-2';r

x (y - z ) 2'u

X N1p- zl -.ds ,,-a/ztu=- e2W c c

x x.sj

zjy - z j .s u ju= - ds eW c a 2Ip- zl =

,/a=

- ds e-'s- =-jp

- zlW 0

Tp extend the last conclusion to z # 0 observe that applying the strong Markov

propertyat the exit time .r from B, lzI)for a Bzownian motion starting at 0

tl using the rotational symmetry we have' '

''

an

sinceco co 1ds :- , s- 1/ 2 = dv :- r2/ 27-2= - 1-2. 2z. = y-

. a 2()The computation is almost the same for d = 2. The only thing that changes

is the choice of at . If we try c = pt(0, 0) again, then for z y the integrand. '

. ''

ew-f-1

as t -+ 0 and the integral diverges, so we let at = pt(0, d1) where

el = (1,0). With this choice of cf , we get

(r G) T (

j is (#,)ds = .E j lp m)ds K Eo j lo (fa)ds1 t:o

aC(z yj F= (e-lT-#I /2f - e-1/2t) j-lj, zg )

We call G(z, yj the potential kez-nel, because G(., yj is tht electrostatic

potentialof a unit charge at p. See Chapter 3 of Port and Stone (1978)for more

on this. In d K 2, j') pf (z,#) dt H (X) so we have to take another approach todennea useful G:

Gz' p) = (pt(z,p) - ct) dt0

(2.6)

j x jzt=

- e-' ds f-ldj21 c jz-pja/af

x l/aa1 1=

- ds e-3 t- dt21 () jx-yja/z,1 =

- 1=

-zg ds c-4 (- logtlz - pj2)) = logtlz - p1)


104 Chapter 3 Brownian Motion, II

To sum up, the potential kernels are given by

(2.7) G(x, y) = (Γ((d/2) − 1)/2π^{d/2}) |x − y|^{2−d}   d ≥ 3
      G(x, y) = −(1/π) log(|x − y|)                     d = 2
      G(x, y) = −|x − y|                                d = 1

The reader should note that in each case G(x, y) = C φ(|x − y|) where φ is the harmonic function we used in Section 3.1. This is, of course, no accident. x → G(x, 0) is obviously spherically symmetric and, as we will see, satisfies ΔG(x, 0) = 0 for x ≠ 0, so the results above imply G(x, 0) = A + B φ(|x|).

The formulas above correspond to A = 0, which is nice and simple. But what about the weird looking B's? What is special about them? The answer is simple: they are chosen to make (1/2)ΔG(x, 0) = −δ_0 (a point mass at 0) in the distributional sense. It is easy to see that this happens in d = 1. In that case φ(x) = |x|, so

φ'(x) = 1 for x > 0,  −1 for x < 0

and if B = −1 then (Bφ)'' = −2δ_0. More sophisticated readers can check that this is also true in d ≥ 2. (See F. John (1982), pages 96-97.)

To explain why we want to define the potential kernels, we return to Brownian motion in the half space, first considered in Section 1.4. Let H = {y : y_d > 0}, τ = inf{t : B_t ∉ H}, and let

ȳ = (y_1, …, y_{d−1}, −y_d)

be the reflection of y through the plane {y ∈ R^d : y_d = 0}.

(2.8) Theorem. If x ∈ H, f ≥ 0 has compact support, and {x : f(x) > 0} ⊂ H, then

E_x ∫_0^τ f(B_t) dt = ∫ G(x, y) f(y) dy − ∫ G(x, ȳ) f(y) dy

Proof Using Fubini's theorem, which is justified since f ≥ 0, then using (4.9) from Chapter 1 and the fact that {x : f(x) > 0} ⊂ H, we have

E_x ∫_0^τ f(B_t) dt = ∫_0^∞ E_x(f(B_t); τ > t) dt
= ∫_0^∞ ∫_H (p_t(x, y) − p_t(x, ȳ)) f(y) dy dt
= ∫_0^∞ ∫_H (p_t(x, y) − c_t) f(y) dy dt − ∫_0^∞ ∫_H (p_t(x, ȳ) − c_t) f(y) dy dt
= ∫ G(x, y) f(y) dy − ∫ G(x, ȳ) f(y) dy

Section 3.3 Exit Times 105

The compact support of f and the formulas for G imply that ∫ |G(x, y) f(y)| dy and ∫ |G(x, ȳ) f(y)| dy are finite, so the last two equalities are valid. □

The proof given above simplifies considerably in the case d ≥ 3; however, part of the point of the proof above is that, with the definition we have chosen for G in the recurrent case, the formulas and proofs can be the same for all d.

Let G_H(x, y) = G(x, y) − G(x, ȳ). We think of G_H(x, y) as the "expected occupation time (density) at y for a Brownian motion starting at x and killed when it leaves H." The rationale for this interpretation is that (for suitable f)

E_x ∫_0^τ f(B_t) dt = ∫ G_H(x, y) f(y) dy

With this interpretation for G_H introduced, we invite the reader to pause for a minute and imagine what y → G_H(x, y) looks like in one dimension. If you don't already know the answer, you will probably not guess the behavior as y → ∞. So much for small talk. The computation is easier than guessing the answer:

G(x, y) = −|x − y|

so G_H(x, y) = −|x − y| + |x + y|. Separating things into cases, we see that

G_H(x, y) = −(x − y) + (x + y) = 2y  when 0 < y < x
G_H(x, y) = −(y − x) + (x + y) = 2x  when x < y

so we can write

(2.9) G_H(x, y) = 2(x ∧ y) for all x, y > 0

It is somewhat surprising that y → G_H(x, y) is constant = 2x for y ≥ x, that is, all points y > x have the same expected occupation time!
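A random-walk analogue makes (2.9) easy to check numerically (not from the text): for simple random walk on the nonnegative integers killed at 0, the expected number of visits to y starting from x is exactly 2(x ∧ y), mirroring G_H. A sketch with the arbitrary choices x = 3, y = 2, a step cap, and a modest path count:

```python
import random

# Count visits to y before absorption at 0 for simple random walk from x.
# Truncating at max_steps introduces only a tiny bias, since visits after
# a long time are rare.
random.seed(7)

def visits_before_absorption(x, y, max_steps=40000):
    count, pos = 0, x
    for _ in range(max_steps):
        if pos == 0:          # killed at 0
            break
        if pos == y:
            count += 1
        pos += 1 if random.random() < 0.5 else -1
    return count

n = 2000
avg = sum(visits_before_absorption(3, 2) for _ in range(n)) / n
print(avg)  # theory: 2 * min(3, 2) = 4
```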

3.3. Exit Times

In this section we investigate the moments of the exit times τ = inf{t : B_t ∉ G} for various open sets. We begin with G = {x : |x| < r}, in which case τ = S_r in the notation of Section 3.1.

(3.1) Theorem. If |x| ≤ r then E_x S_r = (r^2 − |x|^2)/d.

Proof The key is the observation that

|B_t|^2 − dt = Σ_{i=1}^d ((B_t^i)^2 − t)


106 Chapter 3 Brownian Motion, II

being the sum of d martingales, is a martingale. Using the optional stopping theorem at the bounded stopping time S_r ∧ t we have

|x|^2 = E_x(|B(S_r ∧ t)|^2 − (S_r ∧ t)d)

(1.3) tells us that E_x S_r < ∞, and we have |B(S_r ∧ t)|^2 ≤ r^2, so letting t → ∞ and using the dominated convergence theorem gives |x|^2 = E_x(r^2 − S_r d), which implies the desired result. □

Exercise 3.1 Let a, b > 0 and T = inf{t : B_t ∉ (−a, b)}. Show E_0 T = ab.
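The formula in (3.1) is easy to test by simulation. A Monte Carlo sketch (not from the text; dt and the path count are arbitrary choices): in d = 2, starting at 0 with r = 1, the theorem gives E_0 S_1 = (1 − 0)/2 = 0.5. Discrete time steps overshoot the circle slightly, so only rough agreement is expected.

```python
import numpy as np

# Estimate the mean exit time of 2-d Brownian motion from the unit disk.
rng = np.random.default_rng(1)
n, dt, r = 5000, 5e-4, 1.0
pos = np.zeros((n, 2))
t = np.zeros(n)
alive = np.ones(n, dtype=bool)
while alive.any():
    k = int(alive.sum())
    pos[alive] += rng.normal(scale=np.sqrt(dt), size=(k, 2))
    t[alive] += dt
    alive &= np.linalg.norm(pos, axis=1) < r
est = t.mean()
print(est)  # theory: 0.5
```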

To get more formulas like (3.1) we need more martingales. Applying Itô's formula, (10.2) in Chapter 2, with X_t^1 = X_t, a continuous local martingale, and X_t^2 = ⟨X⟩_t, we obtain

(3.2) f(X_t, ⟨X⟩_t) − f(X_0, 0) = ∫_0^t D_1 f(X_s, ⟨X⟩_s) dX_s
+ ∫_0^t D_2 f(X_s, ⟨X⟩_s) d⟨X⟩_s
+ (1/2) ∫_0^t D_11 f(X_s, ⟨X⟩_s) d⟨X⟩_s

From (3.2) we see that if ((1/2)D_11 + D_2) f = 0, then f(X_t, ⟨X⟩_t) is a local martingale. Examples of such functions are

f(x, y) = x,  x^2 − y,  x^3 − 3xy,  x^4 − 6x^2 y + 3y^2, …

or, to expose the pattern,

f_n(x, y) = Σ_{0 ≤ m ≤ [n/2]} c_{n,m} x^{n−2m} y^m

where [n/2] denotes the largest integer ≤ n/2, c_{n,0} = 1, and for 0 ≤ m < [n/2] we pick

(1/2) c_{n,m} (n − 2m)(n − 2m − 1) = −(m + 1) c_{n,m+1}

so that (1/2)D_11 of the mth term is cancelled by D_2 of the (m+1)th. (D_2 of the [n/2]th term is 0.)

The first two of our functions give us nothing new (X_t and X_t^2 − ⟨X⟩_t are local martingales), but after that we get some new local martingales:

X_t^3 − 3X_t ⟨X⟩_t,  X_t^4 − 6X_t^2 ⟨X⟩_t + 3⟨X⟩_t^2

These local martingales are useful for computing expectations for one dimensional Brownian motion.

(3.3) Theorem. Let τ_a = inf{t : |B_t| ≥ a}. Then (i) E_0 τ_a = a^2, (ii) E_0 τ_a^2 = 5a^4/3.

The dependence of the moments on a is easy to explain: the Brownian scaling relationship B_{at} =_d a^{1/2} B_t implies that τ_a/a^2 =_d τ_1.

Proof (i) follows from (3.1), so we will only prove (ii). To do this let X_t = B_t^4 − 6B_t^2 t + 3t^2 and let T_n ≤ n be stopping times so that T_n ↑ ∞ and X_{t∧T_n} is a martingale. Since T_n ≤ n the optional stopping theorem implies

0 = E_0(B(τ_a ∧ T_n)^4 − 6B(τ_a ∧ T_n)^2 (τ_a ∧ T_n) + 3(τ_a ∧ T_n)^2)

Now |B(τ_a ∧ T_n)| ≤ a, so using (1.3) and the dominated convergence theorem we can let n → ∞ to conclude 0 = a^4 − 6a^2 E_0 τ_a + 3E_0 τ_a^2. Using (i) and rearranging gives (ii). □

Exercise 3.2 Find a, b, c so that B_t^6 − aB_t^4 t + bB_t^2 t^2 − ct^3 is a (local) martingale, and use this to compute E_0 τ_a^3.
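A random-walk check of (3.3)(i) is immediate (not from the text): for simple random walk started at 0, the exit time T of (−a, a) satisfies E T = a^2 exactly, by the same optional stopping argument applied to S_n^2 − n. The choices a = 8 and the path count are arbitrary:

```python
import random

# Average exit time of simple random walk from (-a, a).
random.seed(3)

def exit_time(a):
    pos, steps = 0, 0
    while abs(pos) < a:
        pos += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps

a, n = 8, 4000
avg = sum(exit_time(a) for _ in range(n)) / n
print(avg)  # theory: a^2 = 64
```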

Our next result is a special case of the Burkholder-Davis-Gundy inequalities, (5.1), but is needed to prove (4.4), which is a key step in our proof of (5.1).

(3.4) Theorem. If X_t is a continuous local martingale with X_0 = 0 then

E sup_{s≤t} X_s^4 ≤ 361 E⟨X⟩_t^2

Proof First suppose that |X_t| and ⟨X⟩_t are ≤ M for all t. In this case (2.5) in Chapter 2 implies X^4 − 6X^2⟨X⟩ + 3⟨X⟩^2 is a martingale, so its expectation is 0. Rearranging and using the Cauchy-Schwarz inequality

E X_t^4 ≤ 6E(X_t^2 ⟨X⟩_t) ≤ 6(E X_t^4)^{1/2} (E⟨X⟩_t^2)^{1/2}

Using the L^4 maximal inequality ((4.3) in Chapter 4 of Durrett (1995)) and the fact that (4/3)^4 ≤ 3.1605 < 19/6 we have

E sup_{s≤t} X_s^4 ≤ (4/3)^4 E X_t^4 ≤ 19 (E sup_{s≤t} X_s^4)^{1/2} (E⟨X⟩_t^2)^{1/2}

Since |X_s| ≤ M for all s, we can divide each side by (E sup_{s≤t} X_s^4)^{1/2} and then square to get

E sup_{s≤t} X_s^4 ≤ 361 E⟨X⟩_t^2

The last inequality holds for a general local martingale if we replace t by T_n = inf{s : |X_s| or ⟨X⟩_s ≥ n}. Using that conclusion, letting n → ∞, and using the monotone convergence theorem, we have the desired result. □

If we notice that f(x, y) = exp(x − y/2) satisfies ((1/2)D_11 + D_2) f = 0, then we get another useful result.

(3.5) The Exponential Local Martingale. If X is a continuous local martingale, then E(X)_t = exp(X_t − (1/2)⟨X⟩_t) is a local martingale.

If we let Y_t = exp(X_t − (1/2)⟨X⟩_t), then (3.2) says that

(3.6) Y_t − Y_0 = ∫_0^t Y_s dX_s

or, in stochastic differential notation, dY_s = Y_s dX_s. This property gives Y the right to be called the martingale exponential of X. Compare with the case of the ordinary differential equation

f'(t) = f(t) a(t),  f(0) = 1

which for a given continuous function a(t) has unique solution f(t) = exp(A_t), where A_t = ∫_0^t a(s) ds. It is possible to prove (under suitable assumptions) that Y is the only solution of (3.6). See Doléans-Dade (1970) for details.
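As a quick numerical illustration (not from the text) of the martingale property in the simplest case: with X_t = θB_t we have E exp(θB_t − θ^2 t/2) = 1 for every t, since B_t =_d √t Z with Z standard normal. The values of θ, t, and the sample size below are arbitrary choices:

```python
import numpy as np

# Sample mean of exp(theta*B_t - theta^2*t/2) should be close to 1.
rng = np.random.default_rng(5)
theta, t = 0.7, 2.0
z = rng.normal(size=200000)
vals = np.exp(theta * np.sqrt(t) * z - theta**2 * t / 2)
print(vals.mean())  # theory: 1.0
```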

The exponential local martingale will play an important role in Section 5.3. Then (and now) it will be useful to know when the exponential local martingale is a martingale. The next result is not very sophisticated (see (1.14) and (1.15) in Chapter VIII of Revuz and Yor (1991) for better results) but is enough for our purposes.

(3.7) Theorem. Suppose X is a continuous local martingale with ⟨X⟩_t ≤ Mt and X_0 = 0. Then Y_t = exp(X_t − (1/2)⟨X⟩_t) is a martingale.

Proof Let Z_t = exp(2X_t − (1/2)⟨2X⟩_t), which is a local martingale by (3.5). Now

Y_t^2 = exp(2X_t − ⟨X⟩_t) = Z_t exp(⟨X⟩_t)


So if T_n is a sequence of times that reduces Z, the L^2 maximal inequality applied to Y_{t∧T_n} gives

E sup_{s≤t} Y_{s∧T_n}^2 ≤ 4E Y_{t∧T_n}^2 ≤ 4e^{Mt} E Z_{t∧T_n} = 4e^{Mt}

Letting n ↑ ∞ and using the monotone convergence theorem we have

E sup_{s≤t} Y_s^2 ≤ 4e^{Mt},  and  (E sup_{s≤t} |Y_s|)^2 ≤ E sup_{s≤t} Y_s^2

by Jensen's inequality. Using (2.5) in Chapter 2 now, we see that Y_t is a martingale. □

Remark. It follows from the last proof that if X is a continuous local martingale with X_0 = 0 and ⟨X⟩_t ≤ M for all t, then Y_t = exp(X_t − (1/2)⟨X⟩_t) is a martingale in M^2.

Letting θ ∈ R and setting X_t = θB_t in (3.6), where B_t is a one dimensional Brownian motion, gives us a family of martingales exp(θB_t − θ^2 t/2). These martingales are useful for computing the distribution of hitting times associated with Brownian motion.

(3.8) Theorem. Let T_a = inf{t : B_t = a}. Then for a > 0 and λ ≥ 0

E_0 exp(−λT_a) = e^{−a√(2λ)}

Remark. If you are good at inverting Laplace transforms, you can use (3.8) to prove (4.1) in Chapter 1:

P_0(T_a ≤ t) = ∫_0^t (2πs^3)^{−1/2} a e^{−a^2/2s} ds

Proof P_0(T_a < ∞) = 1 by (1.5). Let Y_t = exp(θB_t − θ^2 t/2) and let S_n ≤ n be stopping times so that S_n ↑ ∞ and Y_{t∧S_n} is a martingale. Since S_n ≤ n, the optional stopping theorem implies

1 = E_0 exp(θ B(T_a ∧ S_n) − θ^2 (T_a ∧ S_n)/2)

If θ ≥ 0 the right-hand side is ≤ exp(θa), so letting n → ∞ and using the bounded convergence theorem we have 1 = E_0 exp(θa − θ^2 T_a/2). Taking θ = √(2λ) now gives the desired result. □
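The Laplace transform in (3.8) can be checked by simulation (not from the text). With a = 1 and λ = 1/2 the theorem gives E_0 exp(−λT_a) = e^{−1}; paths that have not hit a by the time horizon contribute at most e^{−λ·horizon}, which is negligible here. The values of dt, the horizon, and the path count are arbitrary choices, and discretization biases the estimate slightly:

```python
import numpy as np

# Estimate E exp(-lam * T_a) for a discretized Brownian path.
rng = np.random.default_rng(2)
a, lam = 1.0, 0.5
n, dt, steps = 5000, 1e-3, 10000       # horizon = 10 time units
pos = np.zeros(n)
hit_time = np.full(n, np.inf)
for i in range(1, steps + 1):
    pos += rng.normal(scale=np.sqrt(dt), size=n)
    newly = np.isinf(hit_time) & (pos >= a)
    hit_time[newly] = i * dt
est = np.exp(-lam * hit_time).mean()   # exp(-inf) = 0 for unhit paths
print(est)  # theory: exp(-1) ~ 0.368
```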


110 Chapter 3 Brownian Motion, II

(3.9) Theorem. Let τ_a = inf{t : |B_t| ≥ a}. Then for a > 0 and λ ≥ 0,

E_0 exp(−λτ_a) = 2e^{−a√(2λ)} / (1 + e^{−2a√(2λ)})

Proof Let ψ_a(λ) = E_0 exp(−λT_a). Applying the strong Markov property at time τ_a (and dropping the subscript a to make the formula easier to typeset) gives

E_0 exp(−λT_a) = E_0(exp(−λτ); B_τ = a) + E_0(exp(−λτ) ψ_{2a}(λ); B_τ = −a)

Symmetry dictates that (τ, B_τ) =_d (τ, −B_τ). Since B_τ ∈ {−a, a} it follows that τ and B_τ are independent, and we have

ψ_a(λ) = (1/2)(1 + ψ_{2a}(λ)) E exp(−λτ)

Using the expression for ψ_a(λ) given in (3.8) now gives the desired result. □

Another consequence of (3.8) is a bit of calculus that will come in handy in Section 7.2.

(3.10) Theorem. If θ > 0 and z > 0 then

∫_0^∞ (2πt)^{−1/2} exp(−θ^2 t/2 − z^2/2t) dt = (1/θ) e^{−θz}

Proof Changing variables t = 1/2s, dt = −ds/2s^2, the integral above becomes

∫_0^∞ (s/π)^{1/2} exp(−θ^2/4s − z^2 s) ds/(2s^2)
= (1/θ) ∫_0^∞ (2πs^3)^{−1/2} (θ/√2) exp(−(θ/√2)^2/2s) e^{−z^2 s} ds

Using (3.8) now with a = θ/√2 and λ = z^2, and consulting the remark after (3.8) for the density function of T_a, we see that the last expression is equal to (1/θ) e^{−θz}, the right-hand side of (3.10). □

The exponential martingale can also be used to study a one dimensional Brownian motion B_t plus drift. Let Z_t = σB_t + μt where σ > 0 and μ is real. X_t = Z_t − μt is a martingale with ⟨X⟩_t = σ^2 t, so (3.7) implies that

exp(θ(Z_t − μt) − θ^2 σ^2 t/2)

is a martingale. If θ = −2μ/σ^2 then −θμ − θ^2 σ^2/2 = 0 and exp(−(2μ/σ^2) Z_t) is a local martingale. Repeating the proof of (3.8) one gets

Exercise 3.3 Let T_{−a} = inf{t : Z_t = −a}. If a, μ > 0 then

P(T_{−a} < ∞) = exp(−2aμ/σ^2)
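This hitting probability is easy to test numerically (not from the text). With a = μ = σ = 1 the formula gives P(T_{−1} < ∞) = e^{−2} ≈ 0.135; since Z_t/t → μ > 0, a path that has not dipped to −1 by a large horizon almost surely never will. The values of dt, the horizon, and the path count are arbitrary choices, and discretization biases the estimate slightly downward:

```python
import numpy as np

# Fraction of drifting paths Z_t = B_t + t that ever reach -1.
rng = np.random.default_rng(4)
n, dt, steps = 4000, 2e-3, 15000       # horizon = 30 time units
pos = np.zeros(n)
hit = np.zeros(n, dtype=bool)
for _ in range(steps):
    pos += rng.normal(scale=np.sqrt(dt), size=n) + dt   # sigma = mu = 1
    hit |= pos <= -1.0
est = hit.mean()
print(est)  # theory: exp(-2) ~ 0.135
```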

3.4. Change of Time, Lévy's Theorem

In this section we will prove Lévy's characterization of Brownian motion (4.1) and use it to show that every continuous local martingale is a time change of Brownian motion.

(4.1) Theorem. If X is a continuous local martingale with X_0 = 0 and ⟨X⟩_t = t, then X_t is a one dimensional Brownian motion.

Proof By (4.5) in Chapter 1 it suffices to show

(4.2) Lemma. For any s and t, X_{s+t} − X_s is independent of F_s and has a normal distribution with mean 0 and variance t.

Proof of (4.2) Applying the complex version of Itô's formula, (7.9) in Chapter 2, to Y_u = X_{s+u} − X_s and f(x) = e^{iθx}, we get

e^{iθY_t} − 1 = iθ ∫₀^t e^{iθY_u} dY_u − (θ²/2) ∫₀^t e^{iθY_u} du

Let F′_u = F_{s+u} and let A ∈ F_s = F′₀. The first term on the right, which we will denote by K_t, is a local martingale with respect to F′_u. To get rid of that term, let T_n be a sequence of stopping times that reduces K, replace t by t ∧ T_n, and integrate over A. The definition of conditional expectation implies

E(K_{t∧T_n}; A) = E( E(K_{t∧T_n} | F′₀); A ) = E(K₀; A) = 0

since K₀ = 0. So we have

E(e^{iθY_{t∧T_n}}; A) − P(A) = −(θ²/2) E( ∫₀^{t∧T_n} e^{iθY_u} du ; A )

112 Chapter 3 Brownian Motion, II

Since |e^{iθY}| = 1, letting n → ∞ and using the bounded convergence theorem gives

E(e^{iθY_t}; A) − P(A) = −(θ²/2) E( ∫₀^t e^{iθY_u} du ; A ) = −(θ²/2) ∫₀^t E(e^{iθY_u}; A) du

by Fubini's theorem. (The integrand is bounded and the two measures are finite.) Writing j(u) = E(e^{iθY_u}; A), the last equality says

j(t) − P(A) = −(θ²/2) ∫₀^t j(u) du

Since we know that |j(s)| ≤ 1, it follows that |j(t) − j(u)| ≤ |t − u| θ²/2, so j is continuous and we can differentiate the last equation to conclude j is differentiable with

j′(t) = −(θ²/2) j(t)

Together with j(0) = P(A), this shows that j(t) = P(A) e^{−θ²t/2}, or

E(e^{iθY_t}; A) = e^{−θ²t/2} P(A)

Since this holds for all A ∈ F_s, it follows that

(4.3) E(e^{iθY_t} | F_s) = e^{−θ²t/2}

or in words, the conditional characteristic function of Y_t is that of the normal distribution with mean 0 and variance t.

To get from this to (4.2) we first take expected values of both sides to conclude that Y_t has a normal distribution with mean 0 and variance t. The fact that the conditional characteristic function is a constant suggests that Y_t is independent of F_s. To turn this intuition into a proof, let g be a C^∞ function with compact support, and let

ĝ(θ) = ∫ e^{iθx} g(x) dx

be its Fourier transform. We have assumed more than enough to conclude that ĝ is integrable and hence

g(x) = (1/2π) ∫ e^{−iθx} ĝ(θ) dθ

Multiplying each side of (4.3) by ĝ(−θ) and integrating, we see that E(g(Y_t) | F_s) is constant and hence

E(g(Y_t) | F_s) = E g(Y_t)

A monotone class argument now shows that the last conclusion is true for any bounded measurable g. Taking g = 1_B and integrating the last equality over A ∈ F_s, we have

P(A) P(Y_t ∈ B) = ∫_A E(1_B(Y_t) | F_s) dP = P(A, Y_t ∈ B)

by the definition of conditional expectation, and we have proved the desired independence. □

Exercise 4.1. Suppose X_t^i, 1 ≤ i ≤ d, are continuous local martingales with X₀^i = 0 and

⟨X^i, X^j⟩_t = t if i = j, and 0 otherwise.

Then X_t = (X_t¹, . . . , X_t^d) is a d-dimensional Brownian motion.

An immediate consequence of (4.1)is:

(4.4) Theorem. Every continuous local martingale X_t with X₀ = 0 and having ⟨X⟩_∞ = ∞ is a time change of Brownian motion. To be precise, if we let γ(u) = inf{t : ⟨X⟩_t > u}, then B_u = X_{γ(u)} is a Brownian motion and X_t = B_{⟨X⟩_t}.

Proof Since γ(⟨X⟩_t) = t, the second equality is an immediate consequence of the first. To prove the first, we note that Exercise 3.8 of Chapter 2 implies that u → B_u is continuous, so it suffices to show that B_u and B_u² − u are local martingales.

(4.5) Lemma. B_u, F_{γ(u)}, u ≥ 0 is a local martingale.

Proof of (4.5) Let T_n = inf{t : |X_t| > n}. The optional stopping theorem implies that if u < v then

E( X_{γ(v)∧T_n} | F_{γ(u)} ) = X_{γ(u)∧T_n}

where we have used Exercise 2.1 in Chapter 2 to replace F_{γ(u)∧T_n} by F_{γ(u)}. To let n → ∞ we observe that the L² maximal inequality, the fact that X²_{γ(v)∧T_n∧t} − ⟨X⟩_{γ(v)∧T_n∧t} is a martingale, and the definition of γ(v) imply

E sup_t X²_{γ(v)∧T_n∧t} ≤ 4 sup_t E X²_{γ(v)∧T_n∧t} = 4 sup_t E ⟨X⟩_{γ(v)∧T_n∧t} ≤ 4v


The last result and the dominated convergence theorem imply that as n → ∞, X_{γ(t)∧T_n} → X_{γ(t)} in L² for t = u, v. Since conditional expectation is a contraction in L², it follows that E( X_{γ(v)∧T_n} | F_{γ(u)} ) → E( X_{γ(v)} | F_{γ(u)} ) in L², and the proof is complete. □

To complete the proof of (4.4) now, it remains to show

(4.6) Lemma. B_u² − u, F_{γ(u)}, u ≥ 0 is a local martingale.

Proof of (4.6) As in the proof of (4.5), the optional stopping theorem implies that if u < v then

E( X²_{γ(v)∧T_n} − ⟨X⟩_{γ(v)∧T_n} | F_{γ(u)} ) = X²_{γ(u)∧T_n} − ⟨X⟩_{γ(u)∧T_n}

To let n → ∞ we observe that using (a + b)² ≤ 2a² + 2b², (3.4), and the definition of γ(v),

E sup_t | X²_{γ(v)∧T_n∧t} − ⟨X⟩_{γ(v)∧T_n∧t} |² ≤ 2 E sup_t X⁴_{γ(v)∧T_n∧t} + 2 E ⟨X⟩²_{γ(v)} ≤ C E ⟨X⟩²_{γ(v)} ≤ C v²

The proof can now be completed as in (4.5) by using the dominated convergence theorem, and the fact that conditional expectation is a contraction in L². □
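A simulation sketch of the time change in (4.4) (the grid sizes and the integrand H_s = 1 + s are illustrative choices, not from the text): take X_t = ∫₀^t (1 + s) dB_s, so ⟨X⟩_t = ∫₀^t (1 + s)² ds is deterministic, and evaluate X at γ(1) = inf{t : ⟨X⟩_t > 1}; by (4.4) the result should look like B_1, i.e., standard normal:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_paths = 1e-3, 1.0, 10_000
t = np.arange(0, T, dt)
H = 1.0 + t                                   # integrand H_s = 1 + s

dB = np.sqrt(dt) * rng.standard_normal((n_paths, t.size))
X = np.cumsum(H * dB, axis=1)                 # X_t = int_0^t H_s dB_s
qv = np.cumsum(H**2 * dt)                     # <X>_t, deterministic here

k = np.searchsorted(qv, 1.0)                  # grid index of gamma(1)
samples = X[:, k]                             # = B_1 by (4.4)

print(samples.mean(), samples.var())          # should be near 0 and 1
```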

Our next goal is to extend (4.4) to X_t with P(⟨X⟩_∞ < ∞) > 0. In this case X_{γ(u)} is a Brownian motion run for an amount of time ⟨X⟩_∞. The first step in making this precise is to prove

(4.7) Lemma. lim_{t→∞} X_t exists almost surely on {⟨X⟩_∞ < ∞}.

Proof Let T_n = inf{t : ⟨X⟩_t ≥ n}. (3.7) in Chapter 2 implies that

E X²_{t∧T_n} = E ⟨X⟩_{t∧T_n} ≤ n

Using this with Exercise 4.8 in Chapter 2, we get that X_{t∧T_n} ∈ M², so lim_{t→∞} X_{t∧T_n} exists almost surely and in L². The last statement shows lim_{t→∞} X_t exists almost surely on {T_n = ∞} ⊃ {⟨X⟩_∞ < n}. Letting n → ∞ now gives (4.7). □

To prove the promised extension of (4.4) now, let γ(u) = inf{t : ⟨X⟩_t > u} when u < ⟨X⟩_∞, let X_∞ = lim_{t→∞} X_t on {⟨X⟩_∞ < ∞}, let B_t be a Brownian motion which is independent of {X_t, t ≥ 0}, and let

Y_u = X_{γ(u)}               for u < ⟨X⟩_∞
Y_u = X_∞ + B_{u − ⟨X⟩_∞}    for u ≥ ⟨X⟩_∞

(4.8) Theorem. Y is a Brownian motion.

Proof By (4.1) it suffices to show that Y_u and Y_u² − u are local martingales with respect to the filtration σ(Y_t : t ≤ u). This holds on [0, ⟨X⟩_∞) for reasons indicated in the proof of (4.4). It holds on [⟨X⟩_∞, ∞) because B is a Brownian motion independent of X. □

The reason for our interest in (4.8) is that it leads to a converse of (4.7).

(4.9) Theorem. The following sets are equal almost surely:

C = {lim_{t→∞} X_t exists}    B = {sup_t |X_t| < ∞}
A = {⟨X⟩_∞ < ∞}               B⁺ = {sup_t X_t < ∞}

Proof Clearly, C ⊂ B ⊂ B⁺. In Section 3.1 we showed that Brownian motion has limsup_{t→∞} B_t = ∞. This and (4.8) imply that B⁺ ⊂ A. Finally, (4.7) shows that A ⊂ C. □

The result in (4.8) can be used to justify the assertion we made at the beginning of Section 2.6 that Π₂(X) is the largest possible class of integrands. Suppose H ∈ Π and let T = sup{t : ∫₀^t H_s² d⟨X⟩_s < ∞}. (H · X)_t can be defined for t < T and has

⟨H · X⟩_t = ∫₀^t H_s² d⟨X⟩_s

so on {⟨H · X⟩_T = ∞} ∩ {T < ∞} we have

limsup_{t↑T} (H · X)_t = ∞      liminf_{t↑T} (H · X)_t = −∞

and there is no reasonable way to continue to define (H · X)_t for t ≥ T.

Convergence is not the only property of local martingales that can be studied using (4.9). Almost any almost-sure property concerning the Brownian path can be translated into a corresponding result for local martingales. This immediately gives us a number of theorems about the behavior of paths of local martingales. We will state only:

(4.10) Law of the Iterated Logarithm. Let L(t) = (2t log log t)^{1/2} for t ≥ e. Then on {⟨X⟩_∞ = ∞},

limsup_{t→∞} X_t / L(⟨X⟩_t) = 1   a.s.


To prove (b) we interchange the roles of B(τ ∧ t) and (τ ∧ t)^{1/2} in the first set of definitions and let

S₁ = inf{t : (τ ∧ t)^{1/2} > δ}
S₂ = inf{t : (τ ∧ t)^{1/2} > Nδ}
T = inf{t : |B(τ ∧ t)| > δ}

Reversing the roles of B_s and s in the proof, it is easy to check that

P(τ^{1/2} > Nδ, B*_τ ≤ δ) ≤ P( (τ ∧ S₂ ∧ T) − (τ ∧ S₁ ∧ T) ≥ (N² − 1)δ² )
≤ (N² − 1)^{−1} δ^{−2} E{ (τ ∧ S₂ ∧ T) − (τ ∧ S₁ ∧ T) }

and using the stopping time result mentioned in the proof of (a) it follows that the above is

= (N² − 1)^{−1} δ^{−2} E{ B(τ ∧ S₂ ∧ T)² − B(τ ∧ S₁ ∧ T)² }
≤ (N² − 1)^{−1} δ^{−2} (2δ)² P(S₁ < ∞)
= 4 (N² − 1)^{−1} P(τ^{1/2} > δ)

since |B(τ ∧ T)| ≤ 2δ, proving (b). □

It will take one more lemma to extract (5.2) from (5.3). First, we need a definition. A function φ is said to be moderately increasing if φ is a nondecreasing function with φ(0) = 0 and if there is a constant K so that φ(2λ) ≤ K φ(λ). It is easy to see that φ(λ) = λ^p is moderately increasing (K = 2^p) but φ(λ) = e^{λx} − 1 is not for any x > 0. To complete the proof of (5.2) now it suffices to show (take N = 2 in (5.3))

(5.4) Lemma. If X, Y ≥ 0 satisfy P(X > 2λ, Y ≤ δλ) ≤ ε_δ P(X > λ) for all λ ≥ 0 and φ is a moderately increasing function, then there is a constant C that only depends on the growth rate K so that

Eφ(X) ≤ C Eφ(Y)

Proof It is enough to prove the result for bounded φ, for if the result holds for φ ∧ n for all n ≥ 1, it also holds for φ. Now φ is the distribution function of a measure on (0, ∞) that has

φ(λ) = ∫₀^λ dφ(t) = ∫ 1_{(t<λ)} dφ(t)

Replacing λ by a nonnegative random variable Z, taking expectations, and using Fubini's theorem gives

(5.5) Eφ(Z) = ∫₀^∞ P(Z > λ) dφ(λ)

From our assumption it follows that

P(X > 2λ) ≤ P(X > 2λ, Y ≤ δλ) + P(X > 2λ, Y > δλ)
≤ ε_δ P(X > λ) + P(Y > δλ)

Integrating dφ(λ) and using (5.5) with Z = X/2, X, Y/δ,

Eφ(X/2) ≤ ε_δ Eφ(X) + Eφ(Y/δ)

Pick δ so that Kε_δ < 1 and then pick N ≥ 0 so that 2^N > δ^{−1}. From the growth condition and the monotonicity of φ, it follows that

Eφ(Y/δ) ≤ K^N Eφ(Y)

Combining this with the previous inequality and using the growth condition again gives

Eφ(X) ≤ K Eφ(X/2) ≤ Kε_δ Eφ(X) + K^{N+1} Eφ(Y)

Solving for Eφ(X) now gives

Eφ(X) ≤ ( K^{N+1} / (1 − Kε_δ) ) Eφ(Y) □

87,381 Revisited. To compare with (3.4), suppose p = 4. In this case K = 2⁴ = 16. Taking δ = 1/8 to make 1 − Kε_δ = 3/4 and N = 3, we get a constant which is 16⁴ · 4/3 = 87,381.333 . . .

Of course we really don't care about the value, just that positive finite constants 0 < c, C < ∞ exist in (5.1).
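The arithmetic in the 87,381 remark can be checked directly (a sketch of the constant K^{N+1}/(1 − Kε_δ) with the values quoted there):

```python
K = 2**4                    # growth constant for phi(lambda) = lambda^4
N = 3                       # 2**N = 8 = 1/delta with delta = 1/8
C = K**(N + 1) / (3 / 4)    # with 1 - K*eps = 3/4 as in the remark
print(C)                    # 87381.333...
```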

3.6. Martingales Adapted to Brownian Filtrations

Let {F_t, t ≥ 0} be the filtration generated by a d-dimensional Brownian motion B_t with B₀ = 0, defined on some probability space (Ω, F, P). In this section we will show (i) all local martingales adapted to {F_t, t ≥ 0} are continuous and (ii) every random variable X ∈ L²(Ω, B_∞, P) can be written as a stochastic integral.

(6.1) Theorem. All local martingales adapted to {F_t, t ≥ 0} are continuous.

Proof Let X_t be a local martingale adapted to {F_t, t ≥ 0} and let T_n ≤ n be a sequence of stopping times that reduces X. It suffices to show that for each n, X(t ∧ T_n) is continuous, or in other words, it is enough to show that the result holds for martingales of the form Y_t = E(Y | F_t) where Y ∈ B_n. We build up to this level of generality in three steps.

Step 1. Let Y = f(B_n) where f is a bounded continuous function. If t ≥ n, then Y_t = f(B_n), so t → Y_t is trivially continuous for t > n. If t < n, the Markov property implies Y_t = E(Y | F_t) = h(n − t, B_t) where

h(s, x) = ∫ (2πs)^{−d/2} e^{−|y−x|²/2s} f(y) dy

It is easy to see that h(s, x) is a continuous function on (0, ∞) × R^d, so Y_t is continuous for t < n. To check continuity at t = n, observe that changing variables y = x + z s^{1/2}, dy = s^{d/2} dz gives

h(s, x) = ∫ (2π)^{−d/2} e^{−|z|²/2} f(x + z s^{1/2}) dz

so the dominated convergence theorem implies that as t ↑ n, h(n − t, B_t) → f(B_n).

Step 2. Let Y = f₁(B_{t₁}) f₂(B_{t₂}) where t₁ < t₂ ≤ n and f₁, f₂ are bounded and continuous. If t ≥ t₁, then

Y_t = f₁(B_{t₁}) E( f₂(B_{t₂}) | F_t )

so the argument from step 1 implies that Y_t is continuous on [t₁, ∞). On the other hand, if t ≤ t₁, then Y_t = E(Y_{t₁} | F_t) and

Y_{t₁} = f₁(B_{t₁}) E( f₂(B_{t₂}) | F_{t₁} ) = f₁(B_{t₁}) g(B_{t₁})

where

g(x) = ∫ (2π(t₂ − t₁))^{−d/2} e^{−|y−x|²/2(t₂−t₁)} f₂(y) dy

is a bounded continuous function, so

Y_t = E( f₁(B_{t₁}) g(B_{t₁}) | F_t )   for t ≤ t₁

and it follows from step 1 that Y_t is continuous on [0, t₁]. Repeating the argument above and using induction, it follows that the result holds if Y = f₁(B_{t₁}) · · · f_k(B_{t_k}) where t₁ < t₂ < . . . < t_k ≤ n and f₁, . . . , f_k are bounded continuous functions.

Step 3. Let Y ∈ B_n with E|Y| < ∞. It follows from a standard application of the monotone class theorem that for any ε > 0, there is a random variable X^ε of the form considered in step 2 that has E|X^ε − Y| < ε. Now

| E(X^ε | F_t) − E(Y | F_t) | ≤ E( |X^ε − Y| | F_t )

and if we let Z_t = E( |X^ε − Y| | F_t ), it follows from Doob's inequality (see e.g., (4.2) in Chapter 4 of Durrett (1995)) that

P( sup_{t≤n} Z_t > ε^{1/2} ) ≤ ε^{−1/2} E Z_n = ε^{−1/2} E|X^ε − Y| < ε^{1/2}

Now X_t^ε = E(X^ε | F_t) is continuous, so letting ε → 0 we see that for a.e. ω, t → Y_t(ω) is a uniform limit of continuous functions, so Y_t is continuous. □

Remark. I would like to thank Michael Sharpe for telling me about the proof given above.

We now turn to our second goal. Let B_t = (B_t¹, . . . , B_t^d) with B₀ = 0 and let {F_t, t ≥ 0} be the filtration generated by the coordinates.

(6.2) Theorem. For any X ∈ L²(Ω, B_∞, P) there are unique H^i ∈ Π₂(B^i) with

X = EX + Σ_{i=1}^d ∫₀^∞ H_s^i dB_s^i

Proof We follow Section V.3 of Revuz and Yor (1991). The uniqueness is immediate since, by the isometry property, Exercise 6.2 of Chapter 2, there is only one way to write the zero random variable, i.e., using H^i(s, ω) ≡ 0. To prove the existence of the H^i, the first step is to reduce to one dimension by proving

(6.3) Lemma. Let X ∈ L²(Ω, B_∞, P) with EX = 0, and let X^i = E(X | F_∞^i). Then X = X¹ + · · · + X^d.

Proof Geometrically, see (1.4) in Chapter 4 of Durrett (1995), X^i is the projection of X onto L_i², the subspace of mean zero random variables measurable


with respect to F_∞^i. If Y ∈ L_i² and Z ∈ L_j², then Y and Z are independent, so EYZ = EY · EZ = 0. The last result shows that the spaces L_i² and L_j² are orthogonal. Since L_1², . . . , L_d² together span all the elements of L² with mean 0, (6.3) follows. □

Proof in one dimension Let I be the set of integrands which can be written as Σ_{i=1}^n θ_i 1_{(s_{i−1}, s_i]} where the θ_i and s_i are (nonrandom) real numbers and 0 = s₀ < s₁ < · · · < s_n. For any integrand H ∈ I ⊂ bΠ₂(B), we can define the stochastic integral Y = ∫₀^∞ H_s dB_s and use the isometry property, Exercise 6.2 of Chapter 2, to conclude Y ∈ M². Using (3.7) and the remark after its proof, we can further define the martingale exponential of the integral, Z_t = ℰ(Y)_t, and conclude that Z_t ∈ M². Itô's formula (3.6) implies that Z_t − Z₀ = ∫₀^t Z_s dY_s. Recalling Y_t = ∫₀^t H_s dB_s and using the associative law from Chapter 2, we have

Z_t − Z₀ = ∫₀^t Z_s H_s dB_s

Y₀ = 0 and ⟨Y⟩₀ = 0, so Z₀ = 1. Since Z_t ∈ M² we have EZ_t = 1 = Z₀ and it follows that

Z_t = EZ_t + ∫₀^∞ Z_s H_s 1_{[0,t]}(s) dB_s

i.e., the desired representation holds for each of the random variables in the set

Z = { ℰ(H · B)_t : H ∈ I, t ≥ 0 }

To complete the proof of (6.2) now, it suffices to show

(6.4) Lemma. If W ∈ L² has E(ZW) = 0 for all Z ∈ Z, then W ≡ 0.

(6.5) Lemma. Let G be the collection of L² random variables X that can be written as EX + ∫₀^∞ H_s dB_s. Then G is a closed subspace of L².

Proof of (6.4) We will show that the measure W dP (i.e., the measure μ with dμ/dP = W) is the zero measure. To do this, it suffices to show that W dP is the zero measure on the σ-field σ(B_{t₁}, . . . , B_{t_n}) for any finite sequence 0 = t₀ < t₁ < t₂ < · · · < t_n. Let θ_j, 1 ≤ j ≤ n, be real numbers and z be a complex number. The function

φ(z) = E( W exp( z Σ_{j=1}^n θ_j (B_{t_j} − B_{t_{j−1}}) ) )

is easily seen to be analytic in C, i.e., φ(z) is represented by an absolutely convergent power series. By the assumed orthogonality, we have φ(z) = 0 for real z, so complex variable theory tells us that φ must vanish identically. In particular φ(z) = 0 when z = i. The last equality implies that the image of the measure W dP under the map

ω → ( B_{t₁}(ω) − B_{t₀}(ω), . . . , B_{t_n}(ω) − B_{t_{n−1}}(ω) )

is the zero measure since its Fourier transform vanishes. This shows that W dP vanishes on σ(B_{t₁} − B_{t₀}, . . . , B_{t_n} − B_{t_{n−1}}) = σ(B_{t₁}, . . . , B_{t_n}) and the proof of (6.4) is complete. □

Proof of (6.5) It is clear that G is a subspace. Let X_n ∈ G with

(6.6) X_n = EX_n + ∫₀^∞ H_s^n dB_s

and suppose that X_n → X in L². Using an elementary fact about the variance, then (6.6), and the isometry property of the stochastic integral, Exercise 6.2 in Chapter 2, we have

E(X_n − X_m)² − (EX_n − EX_m)² = E( (X_n − X_m) − E(X_n − X_m) )² = E( ∫₀^∞ (H_s^n − H_s^m) dB_s )² = ‖H^n − H^m‖²

Since X_n → X in L² implies E(X_n − X_m)² → 0 and EX_n → EX, it follows that ‖H^n − H^m‖ → 0. The completeness of Π₂(B), see the remark at the beginning of Step 3 in Section 2.4, implies that there is a predictable H so that ‖H^n − H‖_B → 0. Another use of the isometry property implies that H^n · B converges to H · B in M². Taking limits in (6.6) gives the representation for X. This completes the proof of (6.5) and thus of (6.2). □
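A concrete one dimensional instance of the representation in (6.2) (an illustrative sketch, not from the text): for X = B_1², Itô's formula gives B_1² = 1 + ∫₀^1 2B_s dB_s, so EX = 1 and H_s = 2B_s. A discrete Itô sum recovers X − EX up to an error whose mean square is 2 dt:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_paths = 1e-3, 5000
n = int(1.0 / dt)

dB = np.sqrt(dt) * rng.standard_normal((n_paths, n))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B at left endpoints

X = B[:, -1]**2                                  # X = B_1^2, EX = 1
ito_sum = np.sum(2 * B_prev * dB, axis=1)        # approximates int_0^1 2 B_s dB_s

err = X - 1.0 - ito_sum                          # equals sum(dB_i^2) - 1 exactly
print(np.mean(err**2))                           # ~ 2*dt, vanishing as dt -> 0
```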


4. Partial Differential Equations

Parabolic Equations

In the first third of this chapter, we will show how Brownian motion can be used to construct (classical) solutions of the following equations:

u_t = ½Δu

u_t = ½Δu + g

u_t = ½Δu + cu

in (0, ∞) × R^d subject to the boundary condition: u is continuous at each point of {0} × R^d and u(0, x) = f(x) for x ∈ R^d. Here,

Δu = ∂²u/∂x₁² + · · · + ∂²u/∂x_d²

and by a classical solution, we mean one that has enough derivatives for the equation to make sense. That is, u ∈ C^{1,2}, the functions that have one continuous derivative with respect to t and two with respect to each of the x_i. The continuity of u in the boundary condition is needed to establish a connection between the equation, which holds in (0, ∞) × R^d, and u(0, x) = f(x), which holds on {0} × R^d. Note that the boundary condition cannot possibly hold unless f : R^d → R is continuous.

We will see that the solutions to these equations are (under suitable assumptions) given by

v(t, x) = E_x f(B_t)

v(t, x) = E_x ( f(B_t) + ∫₀^t g(t − s, B_s) ds )

v(t, x) = E_x ( f(B_t) exp( ∫₀^t c(t − s, B_s) ds ) )


In words, the solutions may be described as follows:

(i) To solve the heat equation, run a Brownian motion and let u(t, x) = E_x f(B_t).

(ii) To introduce a term g(x), add the integral of g along the path.

(iii) To introduce cu, multiply f(B_t) by m_t = exp( ∫₀^t c(t − s, B_s) ds ) before taking expected values. Here, we think of the Brownian particle as having mass 1 at time 0 and changing mass according to m′_s = c(t − s, B_s) m_s, and when we take expected values, we take the particle's mass into account. An alternative interpretation when c ≤ 0 is that exp( ∫₀^t c(t − s, B_s) ds ) is the probability the particle survives until time t, or −c(r, x) gives the killing rate when the particle is at x at time r.

In the first three sections of this chapter, we will say more about why the expressions we have written above solve the indicated equations. In order to bring out the similarities and differences between these equations and their elliptic counterparts discussed in Sections 4.4 to 4.6, we have adopted a rather robotic style. Formulas (m.2) through (m.6) and their proofs have been developed in parallel in the first six sections, and at the end of most sections we discuss what happens when something becomes unbounded.
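Before turning to the proofs, here is a small Monte Carlo sketch of recipe (i) (the test function f(x) = cos x is an assumed illustration, not from the text): the exact solution of the heat equation with this initial data is u(t, x) = e^{−t/2} cos x, and averaging f(B_t) reproduces it:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.cos
t, x, n = 0.8, 0.4, 400_000

# recipe (i): run a Brownian motion started at x and average f(B_t)
B_t = x + np.sqrt(t) * rng.standard_normal(n)
u_mc = f(B_t).mean()

u_exact = np.exp(-t / 2) * np.cos(x)   # solves u_t = (1/2) u_xx, u(0,x) = cos x
print(u_mc, u_exact)
```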

;g j y uajou yzySectim Y.1 Te ea q

.X'0= - s and Xi = B jbr 1 < < d gives> > 3.

...- .-

(t - s B.j - u(0, Bz) =-w t - r, #r) dN ,

()

+ Tnt -

r,#r . dBr0 ,

'

1 s

+ - A'tIt - r, Br) dr2 c .

To check this note that duho =-dr

and .Xr0has bounded variation, while the.

X%vwith 1 K %d are indpedent Brownian motions, so

i xJ) . tr 1 % = j S d(X ) r 0 otherwise. . ' E

. . ' ''

(1.2) follows easily from the It's formula equation sihce -.'tzz + JLu = 0 andthe second term on the right-hand side is ' local martingale. E1

' :. . E

.' . . . .

Our next step is to prove a uniqueness theorem.

(t.3)Theprem. If there is a soltioh of (1.1)tht i oudd the it must be( ' . ' 'i

..: ..

..

.'

. . .. . .. 'E. . . . . ' .. . .

r(@,z) > EcfBt)

Here H means that the last equation deqnes r. We will always use n for ageneric soltztion of the equation and w for our special solution.j. ... . . : .

'

.. . . .

. . .. . . .'

..'' ( ' . .. . .. .

Proof If we now assume that 'u is bounded, then M,, 0 K s < t, is a boundedmartingale. The martingale convergence theorem implies that

Mt H 1imM, exists a.s.,1

s>tisfles (1.1b),this limtt must be JBt ). Since M, ip pntformly integrable,lf vit follows that

utt, z) = ETMZ = EZMt = 1?(t, z) I::l. .. . '.

:'

.

Now that (1.3)has told us what the solution must be, the next logical stepis to flnd conditions under which v is a solution. It is (andalways will be) easyto show that if 'v is smooth enough then it is a clmssical solution.

14) Theorem. supposeJ is bounded. If %' e c1'2then it satisqes (1.1a).( .

4.1. The Heat Equation

ln this section, we will consider the following equation:

(1.1a)'?z

e (71,2 and 'uf

= lal'tz in (0,x) x Rd.

(1.1b) 'tz is continupus at each point of (0Jx Rd and u(0, z) = /'(z).

This equation derives its name, the heat equation, from the fact tht if theunits of measurement are chosen suitably then the solution ult, z) gives the

dtemperature at the point z E R at time t when the tempratpre proflle attime 0 is given by J(z). E

rhe frst step in solving (1.1),ms it will be six times below, is to flnd alocal martingale.

(1.2)Theorem. lf 'tz satis:es (1.1a),then Ma = utt-s, B,) is a local martingale

on (0,8).

Proof Applying lt's formula, (10.2)in Chapter 2, to ulzo, . . . ,zdl with

Page 70: Stochastic Calculus

128 Chapter 4 Partial DiRrential Equations

Proof The Markov property implies, see Exercise 2.1 in Chapter 1, that

>Q(T(ft)l.T-,) = fb.(T(#t-,)) = 'J(t - s, #,)

The left-hand side is a martingale, so the right-hand side is also. If 1) e C1,2,then repeating te calculation in the proof of (1.2)shows that

Section d.1 Te ffeat Equation 129

Writing D = 0l0zi and Dt = 0l0t,herepttz, y) = (2,rt)-d/2c-I.-,Ia/2t5V .

little calculus gives

-(zf - yi)Dfpttz, y4 = pttz, p)

t(z - !n)2 - tDipttz, p) =

a,t(z, p)

t(z - ytzj - yj )Diilhz, p) =

apftz, p)

t-(f/2 Iz- p12

Dpttz, p) = + apttz, p)

t 2t

lf 1 is bounded, then it is easy to see that for a = i, , or t

j ID.pt(z,p) J(g)Idy k cxl

and is continuous in Rd, so (1.6)follows from the next restlt on diferentiatingJmder the integral sign. This result is lenjthy to state but short to prove since

we mssume everything we need for the proof to work. Nonethless w will :eethat this result is useful.

(1.7) Lemma. Let (S, S, m) be a σ-finite measure space, and g : S → R be measurable. Suppose that for x ∈ G, an open subset of R^d, and some h_0 > 0, we have:

(a) u(x) = ∫_S K(x,y) g(y) m(dy)

where K and ∂K/∂x_i : G × S → R are measurable functions with

(b) K(x* + h e_i, y) − K(x*, y) = ∫_0^h (∂K/∂x_i)(x* + θ e_i, y) dθ for |h| ≤ h_0 and y ∈ S

(c) u_i(x) = ∫_S (∂K/∂x_i)(x,y) g(y) m(dy) is continuous at x*

and (d) ∫_S ∫_{−h_0}^{h_0} |(∂K/∂x_i)(x* + θ e_i, y) g(y)| dθ m(dy) < ∞.

Then ∂u/∂x_i exists at x* and equals u_i(x*).

Proof Using the definition of u in (a), then (b) and Fubini's theorem, which is justified for |h| ≤ h_0 by (d), we have

u(x* + h e_i) − u(x*) = ∫_S ( K(x* + h e_i, y) − K(x*, y) ) g(y) m(dy)
 = ∫_0^h ∫_S (∂K/∂x_i)(x* + θ e_i, y) g(y) m(dy) dθ

v(t − s, B_s) − v(t, B_0) = ∫_0^s (−v_t + (1/2)Δv)(t − r, B_r) dr + a local martingale

The left-hand side is a local martingale, so the integral on the right-hand side is also. However, the integral is continuous and locally of bounded variation, so by (3.3) in Chapter 2 it must be ≡ 0 almost surely. Since v_t and Δv are continuous, it follows that −v_t + (1/2)Δv ≡ 0. For if it were ≠ 0 at some point (t,x), then it would be ≠ 0 on an open neighborhood of that point, and hence with positive probability the integral would not be ≡ 0, a contradiction. □

It is easy to give conditions that imply that v satisfies (1.1b). In order to keep the exposition simple, we first consider the situation when f is bounded.

(1.5) Theorem. If f is bounded and continuous, then v satisfies (1.1b).

Proof B_t − B_0 =_d t^{1/2} N, where N has a normal distribution with mean 0 and variance 1, so if t_n → 0 and x_n → x, the bounded convergence theorem implies that

The final step in showing that v is a solution is to find conditions that guarantee that it is smooth. In this case, the computations are not very difficult.

(1.6) Theorem. If f is bounded, then v ∈ C^{1,2} and hence satisfies (1.1a).

Proof By definition,

v(t,x) = E_x f(B_t) = ∫ p_t(x,y) f(y) dy



Dividing by h and letting h → 0, the desired result follows from (c). □

Later we will need a result about differentiating sums. Taking S = Z with S = all subsets of S and m = counting measure in (1.7), then setting g ≡ 1 and f_n(x) = K(x,n), gives the following. Note that in (a) and (c) of (1.7) it is implicit that the integrals exist, so here in (a) and (c) we assume that the sums converge absolutely.

(1.8) Lemma. Suppose that for x ∈ G, an open subset of R^d, and some h_0 > 0, we have:

(a) u(x) = Σ_n f_n(x)

where f_n and ∂f_n/∂x_i : G → R, n ∈ Z, are measurable functions with

(b) f_n(x* + h e_i) − f_n(x*) = ∫_0^h (∂f_n/∂x_i)(x* + θ e_i) dθ for |h| ≤ h_0 and n ∈ Z

(c) u_i(x) = Σ_n (∂f_n/∂x_i)(x) is continuous at x*

and (d) Σ_n ∫_{−h_0}^{h_0} |(∂f_n/∂x_i)(x* + θ e_i)| dθ < ∞.

Then ∂u/∂x_i exists at x* and equals u_i(x*).


Unbounded f. For some applications, the assumption that f is bounded is too restrictive. To see what type of unbounded f we can allow, we observe that, at the bare minimum, we need E_x|f(B_t)| < ∞ for all t. Since

E_x|f(B_t)| = ∫ (2πt)^{−d/2} e^{−|x−y|²/2t} |f(y)| dy

a condition that guarantees this for locally bounded f is

(*) |x|^{−2} log^+ |f(x)| → 0 as |x| → ∞

Replacing the bounded convergence theorem in (1.5) and (1.6) by the dominated convergence theorem, it is not hard to show:

(1.9) Theorem. If f is continuous and satisfies (*), then v satisfies (1.1).
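As a quick numerical illustration of the representation v(t,x) = E_x f(B_t) (a Monte Carlo sketch, not from the text), take d = 1 and f(x) = cos x, for which the expectation has a closed form: E cos(x + t^{1/2} N) = e^{−t/2} cos x, and e^{−t/2} cos x indeed solves v_t = (1/2)v_xx with v(0,x) = cos x.

```python
import math, random

def v_mc(t, x, f, n=200_000, seed=1):
    """Monte Carlo estimate of v(t,x) = E_x f(B_t), with B_t = x + sqrt(t)*N(0,1)."""
    rng = random.Random(seed)
    return sum(f(x + math.sqrt(t) * rng.gauss(0.0, 1.0)) for _ in range(n)) / n

t, x = 0.5, 0.3
approx = v_mc(t, x, math.cos)
exact = math.exp(-t/2) * math.cos(x)   # E cos(x + sqrt(t) N) = exp(-t/2) cos(x)
print(approx, exact)
assert abs(approx - exact) < 0.01
```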

4.2. The Inhomogeneous Equation

In this section, we will consider what happens when we add a function g(t,x) to the equation we considered in the last section. That is, we will study

(2.1a) u ∈ C^{1,2} and u_t = (1/2)Δu + g in (0,∞) × R^d

which proves (2.2), since −u_t + (1/2)Δu = −g and the second term on the right-hand side is a local martingale. □

Again the next step is a uniqueness result.

(2.3) Theorem. Suppose g is bounded. If there is a solution of (2.1) that is bounded on [0,T] × R^d for any T < ∞, it must be


(2.1b) u is continuous at each point of {0} × R^d and u(0,x) = f(x).

We observed in Section 4.1 that (2.1b) cannot hold unless f is continuous. Here g = u_t − (1/2)Δu, so the equation in (2.1a) cannot hold with u ∈ C^{1,2} unless g(t,x) is continuous.

Our first step in treating the new equation is to observe that if u_1 is a solution of the equation with initial data f and g = 0, which we studied in the last section, and u_2 is a solution of the equation with f = 0 and inhomogeneity g, then u_1 + u_2 is a solution of the equation with data f and g, so we can restrict our attention to the case f ≡ 0.

Having made this simplification, we will now study the equation above by following the procedure used in the last section. The first step is to find an associated local martingale.

(2.2) Theorem. If u satisfies (2.1a), then

M_s = u(t − s, B_s) + ∫_0^s g(t − r, B_r) dr

is a local martingale on [0,t).

Proof Applying Itô's formula as in the proof of (1.2) gives

u(t − s, B_s) − u(t, B_0) = ∫_0^s (−u_t + (1/2)Δu)(t − r, B_r) dr + ∫_0^s ∇u(t − r, B_r) · dB_r

v(t,x) ≡ E_x ∫_0^t g(t − s, B_s) ds

Proof Under the assumptions on g and u, M_s, 0 ≤ s < t, defined in (2.2) is a bounded martingale and u(0,x) ≡ 0, so

M_t ≡ lim_{s↑t} M_s = ∫_0^t g(t − s, B_s) ds
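The formula in (2.3) can likewise be checked by simulation. The sketch below (an illustration, not from the text) discretizes the time integral along simulated Brownian paths; for g(s,y) = cos y the answer is known in closed form, since E cos(x + B_s) = e^{−s/2} cos x gives v(t,x) = 2(1 − e^{−t/2}) cos x.

```python
import math, random

def v_inhomog_mc(t, x, g, n_paths=10_000, n_steps=100, seed=3):
    """Monte Carlo sketch of v(t,x) = E_x ∫_0^t g(t-s, B_s) ds,
    using a left-endpoint Riemann sum along exactly simulated BM increments."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        b, acc = x, 0.0
        for k in range(n_steps):
            s = k * dt
            acc += g(t - s, b) * dt          # left-endpoint Riemann sum
            b += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += acc
    return total / n_paths

t, x = 0.5, 0.3
approx = v_inhomog_mc(t, x, lambda s, y: math.cos(y))
exact = 2 * (1 - math.exp(-t/2)) * math.cos(x)
print(approx, exact)
assert abs(approx - exact) < 0.02
```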



Since M_s is uniformly integrable,

u(t,x) = E_x M_0 = E_x M_t = v(t,x)

Again, it is easy to show that if v is smooth enough it is a solution.

(2.4) Theorem. Suppose g is bounded and continuous. If v ∈ C^{1,2}, then it satisfies (2.1a) in (0,∞) × R^d.

Proof Using the Markov property, see Exercise 2.6 in Chapter 1, gives

E_x ( ∫_0^t g(t − r, B_r) dr | F_s ) = ∫_0^s g(t − r, B_r) dr + v(t − s, B_s)

The left-hand side is a martingale, so the right-hand side is also. If v ∈ C^{1,2}, then repeating the calculation in the proof of (2.2) shows that

v(t − s, B_s) − v(t, B_0) + ∫_0^s g(t − r, B_r) dr
 = ∫_0^s (−v_t + (1/2)Δv + g)(t − r, B_r) dr + a local martingale

The left-hand side is a local martingale, so the integral on the right-hand side is also. Since the integral is continuous and locally of bounded variation, (3.3) in Chapter 2 implies it must be ≡ 0 almost surely. Again this implies that −v_t + (1/2)Δv + g ≡ 0, for our assumptions imply that this quantity is continuous, and if it were ≠ 0 at some point (t,x) we would have a contradiction. □


The next step is to give a condition that guarantees that v satisfies (2.1b). As in the last section, we will begin by considering what happens when everything is bounded.

(2.5) Theorem. If g is bounded, then v satisfies (2.1b).

Proof If |g| ≤ M, then as t → 0

|v(t,x)| ≤ E_x ∫_0^t |g(t − s, B_s)| ds ≤ Mt → 0


The final step in showing that v is a solution is to check that v ∈ C^{1,2}. Since the calculations necessary to establish these properties are quite tedious, we will content ourselves to state what the results are and do enough of the proofs to indicate why they are true. If you get dazed and confused and bail out before the end, the

Take home message is: it is not enough to assume g is continuous to have v ∈ C^{1,2}. We must assume g to be Hölder continuous locally in t. That is, for any N < ∞ there are constants C, δ ∈ (0,∞), which may depend on N, such that |g(t,x) − g(t,y)| ≤ C|x − y|^δ whenever t ≤ N.

The reason for this assumption can be found in the proof of (2.6c) and again in (2.6d).

The first step in showing v ∈ C^{1,2} is to assume g is bounded and use Fubini's theorem to conclude

v(t,x) = ∫_0^t ds ∫ p_s(x,y) g(t − s, y) dy

where p_s(x,y) = (2πs)^{−d/2} e^{−|y−x|²/2s}. The expression we have just written for v above is what Friedman (1964) would call a volume potential and would write

V(x,t) = ∫_{T_0}^t ∫_D Z(x,t; ξ,τ) g(ξ,τ) dξ dτ

To translate between notations, set T_0 = 0, D = R^d, Z(x,t; ξ,τ) = p_{t−τ}(x,ξ) and change variables s = t − τ, y = ξ. Because of their importance for the parametrix method, the differentiability properties of volume potentials are well known. The results we will state are just Theorems 2 to 5 in Chapter 1 of Friedman (1964), so the reader who is interested in knowing the whole story can find the missing details there.

(2.6a) Theorem. If g is a bounded measurable function, then v(t,x) is continuous on [0,∞) × R^d.

Proof Use the bounded convergence theorem. □

(2.6b) Theorem. There is a constant C so that if |g| ≤ M is measurable, then the partial derivatives D_i v = ∂v/∂x_i satisfy |D_i v| ≤ CMt^{1/2}, are continuous, and are given by

D_i v = ∫_0^t ∫ D_i p_s(x,y) g(t − s, y) dy ds



Proof Using the formula for D_i p_s from (1.6), the right-hand side is

−∫_0^t ∫ ((x_i − y_i)/s) (2πs)^{−d/2} e^{−|x−y|²/2s} g(t − s, y) dy ds
 = −∫_0^t (1/s) E_x ( (x_i − B_s^i) g(t − s, B_s) ) ds

Although the last formula looks suspicious because we are integrating s^{−1} near 0, everything is really all right. If |g| ≤ M, then

|E_x ( (x_i − B_s^i) g(t − s, B_s) )| ≤ M E_x|x_i − B_s^i| = CMs^{1/2}

so we have

∫_0^t (ds/s) |E_x ( (x_i − B_s^i) g(t − s, B_s) )| ≤ 2CMt^{1/2} < ∞

Using our result on differentiating under the integral sign, (1.7), it follows that the partial derivatives D_i v exist, are continuous, and have the indicated form. □

Things get even worse when we take second derivatives.

(2.6c) Theorem. Suppose that g is bounded and Hölder continuous locally in t. Then the partial derivatives D_ij v = ∂²v/∂x_i∂x_j are continuous, and

D_ij v = ∫_0^t ∫ D_ij p_s(x,y) g(t − s, y) dy ds

Proof Consulting (1.6) for the formula for D_ii p_s, and supposing for simplicity that j = i, the right-hand side is

∫_0^t ∫ (2πs)^{−d/2} (((x_i − y_i)² − s)/s²) e^{−|x−y|²/2s} g(t − s, y) dy ds
 = ∫_0^t E_x ( (((x_i − B_s^i)² − s)/s²) g(t − s, B_s) ) ds

This time, however, E_x|(x_i − B_s^i)² − s| = s E|N² − 1|, so the integrand can be of order s^{−1} and the integral need not converge absolutely. We can overcome this problem if g is Hölder continuous at x, because the fact that E_x(x_i − B_s^i)² = s allows us to write

E_x ( (((x_i − B_s^i)² − s)/s²) g(t − s, B_s) )
 = E_x ( (((x_i − B_s^i)² − s)/s²) ( g(t − s, B_s) − g(t − s, x) ) )

Using the Hölder continuity of g, one can show the above is ≤ Cs^{−1+δ/2}, so its integral from s = 0 to t converges absolutely, and with a little work (2.6c) follows. (See Friedman (1964), pages 10–12, for more details.) □

The last detail now is:

(2.6d) Theorem. Let g be as in (2.6c). Then ∂v/∂t exists, and

(∂v/∂t)(t,x) = g(t,x) + ∫_0^t ∫ (∂/∂t) p_{t−r}(x,y) g(r,y) dy dr

Proof To take the derivative w.r.t. t, we rewrite v as

v(t,x) = ∫_0^t ∫ p_{t−r}(x,y) g(r,y) dy dr

Differentiating the right-hand side w.r.t. t gives two terms. Differentiating the upper limit of the integral gives g(t,x). Differentiating the integrand and using the formula for D_t from (1.6) gives

∫_0^t ∫ ( −(d/2) (2π)^{−d/2} (t − r)^{−(d+2)/2} exp(−|x − y|²/2(t − r))
 + (2π(t − r))^{−d/2} (|x − y|²/2(t − r)²) exp(−|x − y|²/2(t − r)) ) g(r,y) dy dr
 = −∫_0^t (d/2(t − r)) E_x g(r, B_{t−r}) dr + ∫_0^t (1/2(t − r)²) E_x ( |x − B_{t−r}|² g(r, B_{t−r}) ) dr

In the second integral, we can use the fact that E_x|x − B_{t−r}|² = d(t − r) to cancel one of the (t − r)'s and make the second expression like the first, but even if we do this,

∫_0^t dr/(t − r) = ∞



This is the difficulty that we experienced in the proof of (2.6c), and the remedy is the same: we can save the day if g is Hölder continuous locally in t. For further details, see pages 12–13 of Friedman (1964). □

Unbounded g. To see what type of unbounded g can be allowed, we will restrict our attention to the temporally homogeneous case g(t,x) = g(x). At the bare minimum, we need


source can be consulted for a wealth of information about these spaces. We will content ourselves here with an

Example 2.1. Let g(x) = φ(|x|), where φ(r) = r^{−p} for r ≤ 1 and 0 for r > 1. We will now show that

(2.7) Theorem. g ∈ K_d if p < 2.

The special case p = 1 is the Coulomb potential, which is important in physics.

Proof If α < 1, then changing to polar coordinates and noticing the integral is largest when x = 0, we have

∫_{|x−y|≤α} w(|x − y|) |g(y)| dy ≤ ∫_0^α C_d r^{d−1} w(r) φ(r) dr

If we replace φ(r) by r^{−p} and w(r) by r^{−(d−2)}, which holds in d ≠ 2, then the above

= C_d ∫_0^α r^{1−p} dr → 0

when p < 2. In d = 2 when p < 2 we get

= c_d ∫_0^α r^{1−p} log(1/r) dr → 0 □
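The computation in the proof comes down to the convergence of ∫_0^α r^{1−p} dr near 0. A small sympy check (an illustration, not part of the text) confirms the dichotomy at p = 2:

```python
import sympy as sp

r, alpha, eps = sp.symbols('r alpha eps', positive=True)

# p = 3/2 < 2: the integral near 0 converges and tends to 0 with alpha
I = sp.integrate(r**(1 - sp.Rational(3, 2)), (r, 0, alpha))   # = 2*sqrt(alpha)
assert sp.limit(I, alpha, 0, '+') == 0

# p = 2: the integrand is r^(-1), and the integral from eps to 1 blows up
I2 = sp.integrate(r**(-1), (r, eps, 1))                       # = -log(eps)
assert sp.limit(I2, eps, 0, '+') == sp.oo
print("p < 2 is integrable at 0; p = 2 is not")
```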

∫_0^t E_x|g(B_s)| ds < ∞ and E_x ∫_0^t g(B_s) ds → 0

If we put absolute values inside the integral and strengthen the last result to uniform convergence for x ∈ R^d, then we get a definition that is essentially due to Kato (1973). A function g is said to be in K_d if

(*) lim_{t↓0} sup_x E_x ∫_0^t |g(B_s)| ds = 0

By Fubini's theorem, we can write the above as

lim_{t↓0} sup_x ∫ G_t(x,y) |g(y)| dy = 0

where

Remark. We will have more to say about these spaces at the end of the next section. For the developments there, we will also need the space K_d^{loc}, which is defined in the obvious way: f ∈ K_d^{loc} if for every R < ∞, the spatially truncated function f·1_{(|x|≤R)} ∈ K_d.

4.3. The Feynman-Kac Formula

In this section, we will consider what happens when we add cu to the right-hand side of the heat equation. That is, we will study

(3.1a) u ∈ C^{1,2} and u_t = (1/2)Δu + cu in (0,∞) × R^d.

(3.1b) u is continuous at each point of {0} × R^d and u(0,x) = f(x).

If c(t,x) ≤ 0, then this equation describes heat flow with cooling. That is, u(t,x) gives the temperature at the point x ∈ R^d at time t, when the heat at x at time t dissipates at the rate −c(t,x).

G_t(x,y) = ∫_0^t (2πs)^{−d/2} e^{−|x−y|²/2s} ds

By considering the asymptotic behavior of G_t(x,y) as t → 0, we can cast this condition in a more analytical form as

(**) lim_{α↓0} sup_x ∫_{|x−y|≤α} w(|x − y|) |g(y)| dy = 0

where

w(r) = r^{−(d−2)}  for d ≥ 3,  −log r  for d = 2,  r  for d = 1

The equivalence of (*) and (**) is Theorem 4.5 of Aizenman and Simon (1982) or Theorem 3.6 of Chung and Zhao (1995). Section 3.1 of the latter



The first step, as usual, is to find a local martingale.

(3.2) Theorem. Let c_s = ∫_0^s c(t − r, B_r) dr. If u satisfies (3.1a), then

M_s = u(t − s, B_s) exp(c_s)

is a local martingale on [0,t).

Proof Applying Itô's formula with X_s^0 = t − s, X_s^i = B_s^i for 1 ≤ i ≤ d, and X_s^{d+1} = c_s gives

u(t − s, B_s) exp(c_s) − u(t, B_0)
 = ∫_0^s −u_t(t − r, B_r) exp(c_r) dr + ∫_0^s exp(c_r) ∇u(t − r, B_r) · dB_r
 + ∫_0^s (1/2)Δu(t − r, B_r) exp(c_r) dr + ∫_0^s u(t − r, B_r) exp(c_r) dc_r

since we have

⟨X^i, X^j⟩_t = t if 1 ≤ i = j ≤ d, and 0 otherwise

Using dc_r = c(t − r, B_r) dr and rearranging, the right-hand side is

∫_0^s (−u_t + cu + (1/2)Δu)(t − r, B_r) exp(c_r) dr + a local martingale □

The next step, again, is a uniqueness result.

(3.3) Theorem. Suppose that c is bounded. If there is a solution of (3.1) that is bounded on [0,T] × R^d for any T < ∞, it must be

v(t,x) = E_x ( f(B_t) exp(c_t) )

Proof Under our assumptions on c and u, M_s, 0 ≤ s < t, is a bounded martingale and M_t ≡ lim_{s↑t} M_s = f(B_t) exp(c_t). Since M is uniformly integrable, it follows that

u(t,x) = E_x M_0 = E_x M_t = v(t,x)


As before, it is easy to show that if v is smooth enough it is a solution.

(3.4) Theorem. Suppose f is bounded and that c is bounded and continuous. If v ∈ C^{1,2}, then it satisfies (3.1a).

Proof The Markov property implies, see Exercise 2.7 in Chapter 1 and take h(r,x) = c(t − r, x), that

E_x ( f(B_t) exp(c_t) | F_s ) = exp(c_s) E_{B_s} ( f(B_{t−s}) exp(c_{t−s}) )
 = exp(c_s) v(t − s, B_s)

The left-hand side is a martingale, so the right-hand side is also. If v ∈ C^{1,2}, then repeating the calculation in the proof of (3.2) shows that

v(t − s, B_s) exp(c_s) − v(t, B_0)
 = ∫_0^s (−v_t + cv + (1/2)Δv)(t − r, B_r) exp(c_r) dr + a local martingale

The left-hand side is a local martingale, so the integral on the right-hand side is also. Since the integral is continuous and locally of bounded variation, (3.3) in Chapter 2 implies that it must be ≡ 0 almost surely. Again this implies that (−v_t + cv + (1/2)Δv)(t − r, B_r) exp(c_r) ≡ 0, for our assumptions imply this quantity is continuous, and if it were ≠ 0 at some point we would have a contradiction. □

The next step is to give a condition that guarantees that v satisfies (3.1b). As before, we begin by considering what happens when everything is bounded.

(3.5) Theorem. If c is bounded and f is bounded and continuous, then v satisfies (3.1b).

Proof If |c| ≤ M, then e^{−Mt} ≤ exp(c_t) ≤ e^{Mt}, so exp(c_t) → 1 as t → 0. Letting ‖f‖_∞ = sup_x |f(x)|, this result implies

|E_x exp(c_t) f(B_t) − E_x f(B_t)| ≤ ‖f‖_∞ E_x |exp(c_t) − 1| → 0

(1.5) implies that (t,x) → E_x f(B_t) is continuous at each point of {0} × R^d and the desired result follows. □

This brings us to the problem of determining when v is smooth enough to be a solution.
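It may help to see the Feynman-Kac representation v(t,x) = E_x( f(B_t) exp(∫_0^t c ds) ) in action before turning to smoothness. The sketch below (not from the text; the exponent's integral is discretized along simulated paths) checks the case of constant cooling c ≡ −λ, where v(t,x) = e^{−λt} e^{−t/2} cos x solves u_t = (1/2)u_xx − λu with u(0,x) = cos x.

```python
import math, random

def feynman_kac_mc(t, x, f, c, n_paths=20_000, n_steps=100, seed=4):
    """Monte Carlo sketch of v(t,x) = E_x[ f(B_t) exp(∫_0^t c(B_s) ds) ]."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        b, acc = x, 0.0
        for _ in range(n_steps):
            acc += c(b) * dt                 # discretized exponent
            b += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += f(b) * math.exp(acc)
    return total / n_paths

lam, t, x = 1.0, 0.5, 0.3
approx = feynman_kac_mc(t, x, math.cos, lambda y: -lam)
exact = math.exp(-lam*t) * math.exp(-t/2) * math.cos(x)
print(approx, exact)
assert abs(approx - exact) < 0.02
```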



(3.6) Theorem. Suppose that f is bounded and Hölder continuous. If c is bounded and Hölder continuous locally in t, then v ∈ C^{1,2} and, hence, satisfies (3.1a).

Proof To solve the problem in this case, we use a trick to reduce our result to the previous case. We begin by observing that

c_t = ∫_0^t c(t − r, B_r) dr

is continuous and locally of bounded variation. So Itô's formula, (10.2) in Chapter 2, implies that if h ∈ C¹ then

h(c_t) − h(c_0) = ∫_0^t h′(c_s) dc_s

Taking h(x) = e^{−x} we have

exp(−c_t) − 1 = −∫_0^t exp(−c_s) c(t − s, B_s) ds

Multiplying by −exp(c_t) gives

exp(c_t) − 1 = ∫_0^t c(t − s, B_s) exp(c_t − c_s) ds

Plugging in the definitions of c_t and c_s we have

exp ( ∫_0^t c(t − r, B_r) dr ) = 1 + ∫_0^t c(t − s, B_s) exp ( ∫_s^t c(t − r, B_r) dr ) ds

Multiplying by f(B_t), taking expected values, and using Fubini's theorem, which is justified since everything is bounded, gives

v(t,x) = E_x f(B_t) + ∫_0^t E_x ( c(t − s, B_s) exp ( ∫_s^t c(t − r, B_r) dr ) f(B_t) ) ds

Conditioning on F_s, noticing c(t − s, B_s) ∈ F_s, and using the Markov property as in Exercise 2.7 of Chapter 1,

E_x ( c(t − s, B_s) exp ( ∫_s^t c(t − r, B_r) dr ) f(B_t) | F_s ) = c(t − s, B_s) v(t − s, B_s)


Taking the expected value of the last equation and plugging into the previous one, we have

v(t,x) = E_x f(B_t) + ∫_0^t E_x ( c(t − s, B_s) v(t − s, B_s) ) ds ≡ v_1(t,x) + v_2(t,x)

The first term on the right, v_1(t,x), is C^{1,2} by (1.6). The second term, v_2(t,x), is of the form considered in the last section with g(r,x) = c(r,x) v(r,x). If we start with the trivial observation that if c and f are bounded, then v is bounded on [0,N] × R^d, and apply (2.6b), we see that

|v_2(t,x) − v_2(t,y)| ≤ C_N |x − y| whenever t ≤ N

To get a similar estimate for v_1, let B̂_t be a Brownian motion starting at 0 and observe that since f is Hölder continuous

|v_1(t,x) − v_1(t,y)| = |E f(x + B̂_t) − E f(y + B̂_t)|
 ≤ E |f(x + B̂_t) − f(y + B̂_t)| ≤ C|x − y|^δ

Combining the last two estimates, we see that v(r,x) is Hölder continuous locally in t. Since c and v are bounded and we have supposed that c(r,x) is Hölder continuous locally in t, the triangle inequality implies g(r,x) is Hölder continuous locally in t. Using (2.6c) and (2.6d) now, we have v_2(t,x) ∈ C^{1,2}. □

Unbounded c. As in the last section, we can generalize the results above to unbounded c's, but for simplicity we restrict our attention to the temporally homogeneous case c(t,x) = c(x). Given the formula above, which expresses v as a volume potential, it is perhaps not too surprising that the appropriate assumption is c ∈ K_d. The key to working in this generality is what Simon (1982) calls

(3.7) Khasminskii's Lemma. Let g ≥ 0 be a function on R^d with

α = sup_x E_x ∫_0^t g(B_s) ds < 1

Then

sup_x E_x exp ( ∫_0^t g(B_s) ds ) ≤ (1 − α)^{−1}

Proof The Markov property and the nonnegativity of g imply that

sup_x E_x ∫⋯∫_{0<s_1<⋯<s_n<t} ds_1 ⋯ ds_n g(B_{s_1}) ⋯ g(B_{s_n}) ≤ α^n



Since the integrand is symmetric in (s_1, …, s_n), we have

∫⋯∫_{0<s_1<⋯<s_n<t} ds_1 ⋯ ds_n g(B_{s_1}) ⋯ g(B_{s_n})
 = (1/n!) ∫_0^t ⋯ ∫_0^t ds_1 ⋯ ds_n g(B_{s_1}) ⋯ g(B_{s_n})
 = (1/n!) ( ∫_0^t g(B_s) ds )^n

Summing on n now gives the desired formula. □
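For constant g ≡ γ the two sides of Khasminskii's lemma can be computed exactly: α = γt and E_x exp(∫_0^t g(B_s) ds) = e^{γt}, so the bound reduces to the series comparison e^α = Σ α^n/n! ≤ Σ α^n = (1 − α)^{−1}. A quick check of this (an illustration, not from the text):

```python
import math

# For constant g ≡ gamma: alpha = gamma*t and E exp(∫ g) = e^(gamma*t), so
# Khasminskii's bound becomes e^alpha <= 1/(1 - alpha) for alpha < 1,
# i.e. sum alpha^n/n! <= sum alpha^n term by term.
for k in range(1, 100):
    alpha = k / 100.0
    assert math.exp(alpha) <= 1.0 / (1.0 - alpha) + 1e-12
print("Khasminskii bound verified for alpha in (0, 1)")
```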

From the last result, it should be clear why assuming c ∈ K_d is natural in this context. This condition guarantees that

sup_x E_x ∫_0^t |c(B_s)| ds → 0

and, hence, that

sup_x E_x exp ( ∫_0^t |c(B_s)| ds ) → 1

With these two results in hand, we can proceed with developing the theory much as we did in the case of bounded coefficients.

B. Elliptic Equations

In the next three sections of this chapter, we show how Brownian motion can be used to construct classical solutions of the following equations:

0 = (1/2)Δu

0 = (1/2)Δu + g

0 = (1/2)Δu + cu

in an open set G, subject to the boundary condition: at each point of ∂G, u is continuous and u = f.

We will see that the solutions to these equations are (under suitable assumptions) given by

v(x) = E_x f(B_τ)

v(x) = E_x ( f(B_τ) + ∫_0^τ g(B_s) ds )

v(x) = E_x ( f(B_τ) exp ( ∫_0^τ c(B_s) ds ) )

where τ = inf{t > 0 : B_t ∉ G}. To see the similarities to the solutions given for the equations in Part A of this chapter, think of those solutions in terms of the space-time Brownian motion B̂_s = (t − s, B_s) run until time τ̂ = inf{s : B̂_s ∉ (0,∞) × R^d}. Of course, τ̂ ≡ t.

4.4. The Dirichlet Problem

In this section, we will consider the Dirichlet problem. That is, we will study

(4.1a) u ∈ C² and Δu = 0 in G.

(4.1b) At each point of ∂G, u is continuous and u = f.

To see what this means, note that if we let û(t,x) = u(x), then û satisfies the heat equation û_t = (1/2)Δû. Thus, u is an equilibrium temperature distribution when ∂G is held at a fixed temperature profile f.

As in the first three sections, the first step in solving (4.1) is to find a local martingale.

(4.2) Theorem. Let τ = inf{t > 0 : B_t ∉ G}. If u satisfies (4.1a), then M_t = u(B_t) is a local martingale on [0,τ).

Proof Applying Itô's formula gives

u(B_t) − u(B_0) = ∫_0^t ∇u(B_s) · dB_s + (1/2) ∫_0^t Δu(B_s) ds

for t < τ. This proves (4.2), since Δu = 0 in G and the first term is a local martingale on [0,τ). □

The second step is a uniqueness theorem. For this we introduce

(†) P_x(τ < ∞) = 1 for all x

which (4.7b) below will show is necessary for uniqueness. Note that (1.3) in Chapter 3 implies bounded sets G satisfy (†).

(4.3) Theorem. Suppose G satisfies (†). If there is a solution of (4.1) that is bounded, it must be

v(x) ≡ E_x f(B_τ)



Proof If u is bounded and satisfies (4.1), then M_s, 0 ≤ s < τ, is a bounded local martingale. Using (2.7) in Chapter 2, (†), and (4.1b), we have

M_τ ≡ lim_{s↑τ} M_s = f(B_τ)

so

u(x) = E_x M_0 = E_x M_τ = v(x) □
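The representation v(x) = E_x f(B_τ) can be sampled exactly when G is the unit disc in d = 2: by conformal invariance of Brownian motion, the Möbius automorphism T(w) = (w + z)/(1 + z̄w), which sends 0 to z, carries the uniform exit distribution seen from 0 to the exit distribution seen from z. The sketch below (an illustration, not from the text) uses this to recover the harmonic function x² − y²:

```python
import cmath, math, random

def dirichlet_mc(f, z, n=100_000, seed=2):
    """Estimate v(z) = E_z f(B_tau) for the unit disc by sampling the exit
    (harmonic) measure exactly: push a uniform point on the circle through
    the Mobius map T(w) = (w + z)/(1 + conj(z)*w), which fixes the disc and
    sends 0 to z."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w = cmath.exp(1j * rng.uniform(0.0, 2 * math.pi))
        b = (w + z) / (1 + z.conjugate() * w)   # exit point on the unit circle
        total += f(b)
    return total / n

# f(b) = Re(b^2) = x^2 - y^2 on the circle is the boundary value of the
# harmonic function u(x,y) = x^2 - y^2, which should be recovered at z.
z = complex(0.3, 0.2)
approx = dirichlet_mc(lambda b: (b * b).real, z)
exact = z.real**2 - z.imag**2
print(approx, exact)
assert abs(approx - exact) < 0.02
```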

(†) is not needed for the existence of solutions, but to drop (†) we need to modify the definition to take the possibility of τ = ∞ into account. Let

v(x) ≡ E_x ( f(B_τ) 1_{(τ<∞)} )

As in the first three sections, it is easy to show:

(4.4) Theorem. Let G be any open set and suppose f is bounded. If v ∈ C², then it satisfies (4.1a).

Proof The Markov property implies that on {τ > s}

E_x ( f(B_τ) 1_{(τ<∞)} | F_s ) = v(B_s)

The left-hand side is a local martingale on [0,τ), so v(B_s) is also. If v ∈ C², then repeating the calculation in the proof of (4.2) shows that, for s ∈ [0,τ),


Proof The first step is to show

(4.5a) Lemma. If t > 0, then x → P_x(τ ≤ t) is lower semicontinuous. That is, if x_n → x, then

liminf_{n→∞} P_{x_n}(τ ≤ t) ≥ P_x(τ ≤ t)

Proof By the Markov property

P_x(B_s ∈ G^c for some s ∈ [ε, t]) = ∫ p_ε(x,y) P_y(τ ≤ t − ε) dy

Since y → P_y(τ ≤ t − ε) is bounded and measurable and

p_ε(x,y) = (2πε)^{−d/2} e^{−|x−y|²/2ε}

it follows from the dominated convergence theorem that

x → P_x(B_s ∈ G^c for some s ∈ [ε, t])

is continuous for each ε > 0. Letting ε ↓ 0 shows that x → P_x(τ ≤ t) is an increasing limit of continuous functions and, hence, lower semicontinuous. □

If y is regular for G and t > 0, then P_y(τ ≤ t) = 1, so it follows from (4.5a) that if x_n → y, then

liminf_{n→∞} P_{x_n}(τ ≤ t) = 1 for all t > 0

With this established, it is easy to complete the proof. Since f is bounded and continuous, it suffices to show that if D(y,δ) = {x : |x − y| < δ}, then

(4.5b) Lemma. If y is regular for G and x_n → y, then for all δ > 0

P_{x_n}(τ < ∞, B_τ ∈ D(y,δ)) → 1

Proof Let ε > 0 and pick t so small that

v(B_s) − v(B_0) = ∫_0^s (1/2)Δv(B_r) dr + a local martingale

The left-hand side is a local martingale on [0,τ), so the integral on the right-hand side is also. However, the integral is continuous and locally of bounded variation, so by (3.3) in Chapter 2 it must be ≡ 0. Since Δv is continuous in G, it follows that Δv ≡ 0 in G. For if Δv ≠ 0 at some point we would have a contradiction. □

Up to this point, everything has been the same as in Section 4.1. Differences appear when we consider the boundary condition (4.1b), since it is no longer sufficient for f to be bounded and continuous. The open set G must satisfy a regularity condition.

Definition. A point y ∈ ∂G is said to be a regular point if P_y(τ = 0) = 1.

(4.5) Theorem. Let G be any open set. Suppose f is bounded and continuous and y is a regular point of ∂G. If x_n ∈ G and x_n → y, then v(x_n) → f(y).

P_0 ( sup_{0≤s≤t} |B_s| > δ/2 ) < ε

Since P_{x_n}(τ ≤ t) → 1 as x_n → y, it follows from the choices above that

liminf_{n→∞} P_{x_n}(τ < ∞, B_τ ∈ D(y,δ)) ≥ liminf_{n→∞} P_{x_n} ( τ ≤ t, sup_{0≤s≤t} |B_s − x_n| ≤ δ/2 )
 ≥ liminf_{n→∞} P_{x_n}(τ ≤ t) − P_0 ( sup_{0≤s≤t} |B_s| > δ/2 ) > 1 − ε



Since ε was arbitrary, this proves (4.5b) and, hence, (4.5). □

(4.5) shows that if every point of ∂G is regular, then v(x) will satisfy the boundary condition (4.1b) for any bounded continuous f. The next two exercises develop a converse to this result. The first identifies a trivial case.

Exercise 4.1. If G ⊂ R is open, then each point of ∂G is regular.

Exercise 4.2. Let G be an open set in R^d with d ≥ 2 and let y ∈ ∂G have P_y(τ = 0) < 1. Let f be a continuous function with f(y) = 1 and f(x) < 1 for all x ≠ y. Show that there is a sequence of points x_n → y such that liminf_{n→∞} v(x_n) < 1.

From the discussion above, we see that for v to satisfy (4.1b) it is sufficient (and almost necessary) that each point of ∂G is a regular point. This situation raises two questions:

Do irregular points exist?

What are sufficient conditions for a point to be regular?

In order to answer the first question we will give two examples.

Example 4.1. Punctured Disc. Let d ≥ 2 and let G ≡ D − {0}, where D = {x : |x| < 1}. If we let T_0 = inf{t > 0 : B_t = 0}, then P_0(T_0 = ∞) = 1 by (1.10) in Chapter 3, so 0 is not a regular point of ∂G. One can get an example with a bigger boundary in d ≥ 3 by looking at G = D − K, where K = {x : x_1 = x_2 = 0}. In d = 3 this is a ball minus a line through the origin.

Example 4.2. Lebesgue's Thorn. Let d ≥ 3 and let


immediately that if we let T_n = inf{t : B_t ∈ [2^{−n}, 2^{−n+1}] × [−c_n, c_n]^{d−1}} and pick c_n small enough, then P_0(T_n < ∞) ≤ 3^{−n}. Now Σ_{n≥1} 3^{−n} = 3^{−1}(3/2) = 1/2, so if we let τ = inf{t > 0 : B_t ∉ G} and σ = inf{t > 0 : B_t ∉ (−1,1)^d}, then we have

P_0(τ < σ) ≤ Σ_{n=1}^∞ P_0(T_n < ∞) ≤ 1/2

Thus P_0(τ > 0) ≥ P_0(τ = σ) ≥ 1/2 and 0 is an irregular point. □

The last two examples show that if G^c is too small near y, then y may be irregular. The next result shows that if G^c is not too small near y, then y is regular.

(4.5c) Cone Condition. If there is a cone V having vertex y and an r > 0 such that V ∩ D(y,r) ⊂ G^c, then y is a regular point.

Proof The first thing to do is to define a cone with vertex y, pointing in direction v, with opening a, as follows:

V(y, v, a) = {x : x = y + θ(v + z), where θ ∈ (0,∞), z ⊥ v, and |z| < a}

Now that we have defined a cone, the rest is easy. Since the normal distribution is spherically symmetric,

P_y(B_t ∈ V(y, v, a)) = ε_a > 0

where ε_a is a constant that depends only on the opening a. Let r > 0 be such that V(y, v, a) ∩ D(y,r) ⊂ G^c. The continuity of Brownian paths implies

lim_{t→0} P_y ( sup_{0≤s≤t} |B_s − y| > r ) = 0

where the n = ∞ term is the single point {0}.

Claim. If c_n ↓ 0 sufficiently fast, then 0 is not a regular point of ∂G.

Proof P_0((B_t^2, B_t^3) = (0,0) for some t > 0) = 0, so with probability 1, a Brownian motion B_t starting at 0 will not hit

I_n = {x : x_1 ∈ [2^{−n}, 2^{−n+1}], x_2 = x_3 = ⋯ = x_d = 0}

Since B_t is transient in d ≥ 3, it follows that for a.e. ω the distance between {B_s : 0 ≤ s < ∞} and I_n is positive. From the last observation, it follows

Combining the last two results with a trivial inequality we have

ε_a ≤ liminf_{t↓0} P_y(B_t ∈ G^c) ≤ liminf_{t↓0} P_y(τ ≤ t) ≤ P_y(τ = 0)

and it follows from Blumenthal's zero-one law that P_y(τ = 0) = 1. □

(4.5c) is sufficient for most cases.

Exercise 4.3. Let G = {x : g(x) < 0}, where g is a C¹ function with ∇g(y) ≠ 0 for each y ∈ ∂G. Show that each point of ∂G is regular.



However, in a pinch the following generalization can be useful.

Exercise 4.4. Define a flat cone to be the intersection of V(y, v, a) with a (d − 1)-dimensional hyperplane that contains the line {y + θv : θ ∈ R}. Show that (4.5c) remains true if "cone" is replaced by "flat cone."

When d = 2 this says that if we can find a line segment ending at z that lies in the complement, then z is regular. For example, this implies that each point of the boundary of the slit disc G = D − {x : x_1 ≥ 0, x_2 = 0} is regular.

Having completed our discussion of the boundary condition, we now turn our attention to determining when v is smooth. As in Section 4.1, this is true under minimal assumptions on f.

(4.6) Theorem. Let G be any open set. If f is bounded, then v ∈ C^∞ and, hence, satisfies (4.1a).

Proof Let x ∈ G and pick δ > 0 so that D(x,δ) ⊂ G. If we let σ = inf{t : B_t ∉ D(x,δ)}, then the strong Markov property implies that (for more details see Example 3.2 in Chapter 1)


Note that we use |y − x|² = Σ_i (y_i − x_i)², rather than |y − x|, which is not smooth at 0. Making a simple change of variables, then looking at things in polar coordinates, and using the averaging property, we have

h̄(x) = ∫_{D(0,δ)} ψ(|z|²) h(x + z) dz
 = C ∫_0^δ dr r^{d−1} ψ(r²) ∫_{∂D(0,r)} h(x + y) π_r(dy)
 = C ∫_0^δ dr r^{d−1} ψ(r²) h(x)

So h̄ is a constant times h, and hence h ∈ C^∞.

To conclude now that Δh ≡ 0, we note that the multivariate version of Taylor's theorem implies that if |y − x| ≤ r then

h(y) = h(x) + Σ_i (y_i − x_i) D_i h(x) + (1/2) Σ_{i,j} (y_i − x_i)(y_j − x_j) D_ij h(x) + ε(y,x)

where |ε(y,x)| ≤ C|y − x|³. Integrating over D(x,r) w.r.t. dy/|D(0,r)| and using the averaging property, we have

h(x) = h(x) + 0 + C_2 Δh(x) r² + O(r³)

Subtracting h(x) from each side, dividing by r², and letting r → 0, it follows that Δh(x) = 0. □

Unbounded G. As in the previous three sections, our last topic is to discuss what happens when something becomes unbounded. This time we will focus on G and ignore f. Combining (4.3), (4.5), and (4.6) we have

(4.7a) Theorem. Suppose that f is bounded and continuous and that each point of ∂G is regular. If for all x ∈ G, P_x(τ < ∞) = 1, then v is the unique bounded solution of (4.1).

where π is surface measure on ∂D(x,δ) normalized to be a probability measure. The last result is the "averaging property" of harmonic functions. Now a simple analytical argument takes over to show v ∈ C^∞.

(4.6a) Lemma. Let G be an open set and h be a bounded function which has the averaging property

h(x) = ∫_{∂D(x,δ)} h(y) π(dy)

when x ∈ G and δ < r(x), where r(x) > 0. Then h ∈ C^∞ and Δh = 0 in G.

Proof Let ψ be a nonnegative infinitely differentiable function that vanishes on [δ², ∞) but is not ≡ 0. By repeatedly applying (1.7) it is routine to show that

h̄(x) = ∫_{D(x,δ)} ψ(|y − x|²) h(y) dy ∈ C^∞

To see that there might be other unbounded solutions consider G = (0,co),J(0) = 0, and note that u(z) = cz is a solution. Conversely, we have

(4.7b) Theorem. Suppose that f is bounded and continuous and that eachpoint of OG is regular. If for some z 6 G, P.('r < x) < 1, then the solution of(4.1) is not unique.


150 Chapter 4 Partial Differential Equations

Proof. Since $h(x) = P_x(\tau = \infty)$ has the averaging property given in (4.6a), it is $C^\infty$ and has $\Delta h = 0$ in $G$. Since each point $y \in \partial G$ is regular, a trivial comparison and (4.5a) imply

$$\limsup_{x \to y} P_x(\tau = \infty) \le \limsup_{x \to y} P_x(\tau > 1) = 0$$

The last two observations show that $h$ is a solution of (4.1) with $f \equiv 0$, which completes the proof. □

By working a little harder, we can show that adding $b\,P_x(\tau = \infty)$ to $v(x)$ is the only way to produce new bounded solutions.
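The averaging property at the heart of (4.6a) is easy to check numerically. Below is a minimal sketch (our illustration, not from the book) in which the harmonic function $h(x,y) = x^2 - y^2$ is averaged over a circle by quadrature; the center point, radius, and choice of $h$ are arbitrary.

```python
import math

# Quadrature check of the averaging property in (4.6a), using the harmonic
# function h(x, y) = x^2 - y^2; the center, radius, and h are our choices.
def h(x, y):
    return x * x - y * y

def sphere_average(f, cx, cy, r, n=4096):
    # average of f over the circle of radius r about (cx, cy), i.e. the
    # integral against normalized surface measure pi(dy)
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        total += f(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / n

center_value = h(1.3, -0.7)
avg = sphere_average(h, 1.3, -0.7, 0.5)  # should match center_value
```

For a non-harmonic function (say $x^2 + y^2$) the two values would differ by $r^2$-order terms, which is exactly the mechanism used in the Taylor-expansion step of the proof.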

(4.7c) Theorem. Suppose that $f$ is bounded and continuous and that each point of $\partial G$ is regular. If $u$ is bounded and satisfies (4.1) in $G$, then there is a constant $b$ such that

$$u(x) = E_x\big(f(B_\tau); \tau < \infty\big) + b\,P_x(\tau = \infty)$$

We will warm up for this by proving the following special case, in which $G = \mathbf{R}^d$.

(4.7d) Theorem. If $u$ is bounded and harmonic in $\mathbf{R}^d$, then $u$ is constant.

Proof. (4.2) above and (2.6) in Chapter 2 imply that $u(B_t)$ is a bounded martingale. So the martingale convergence theorem implies that as $t \to \infty$, $u(B_t) \to U_\infty$. Since $U_\infty$ is measurable with respect to the tail $\sigma$-field, it follows from (2.12) in Chapter 1 that $P_x(a < U_\infty < b)$ is either $\equiv 0$ or $\equiv 1$ for any $a < b$. The last result implies that there is a constant $c$ independent of $x$ so that $P_x(U_\infty = c) = 1$. Taking expected values, it follows that $u(x) = E_x U_\infty = c$. □

Proof of (4.7c). From the proof of (4.7d) we see that $u(B_t)$ is a bounded local martingale on $[0,\tau)$, so as $t \uparrow \tau$, $u(B_t) \to U_\infty = \lim_{t\uparrow\tau} u(B_t)$. On $\{\tau < \infty\}$ we have $U_\infty = f(B_\tau)$, so what we need to show is that there is a constant $c$, independent of the starting point $B_0$, so that $U_\infty = c$ on $\{\tau = \infty\}$. Intuitively, this is a consequence of the triviality of the asymptotic $\sigma$-field, but the fact that $0 < P_x(\tau = \infty) < 1$ makes it difficult to extract this from (2.12) in Chapter 1.

To get around the difficulty identified in the previous paragraph, we will extend $u$ to the whole space. The two steps in doing this are:

(a) Let $h(x) = u(x) - E_x(f(B_\tau); \tau < \infty)$. (4.6) and (4.5) imply that $h$ is bounded and satisfies (4.1) with boundary function $f \equiv 0$.

(b) Let $M = \|h\|_\infty$ and look at

$$w(x) = \begin{cases} h(x) + M\,P_x(\tau = \infty) & x \in G \\ 0 & x \in G^c \end{cases}$$


To complete the proof now, we will show in four steps that $w(x) = \rho\,P_x(\tau = \infty)$, from which the desired result follows immediately.

(i) When restricted to $G$, $w$ satisfies (4.1) with boundary function $f \equiv 0$.

The proof of (4.7b) implies that $M\,P_x(\tau = \infty)$ satisfies (4.1) with boundary function $f \equiv 0$. Combining this with (a), the desired result follows.

(ii) $w \ge 0$.

To do this we use the optional stopping theorem on the martingale $h(B_t)$ at time $\tau \wedge t$ and note $h(B_\tau) = 0$ to get

$$h(x) = E_x\big(h(B_t); \tau > t\big) \ge -M\,P_x(\tau > t)$$

Letting $t \to \infty$ proves $h(x) \ge -M\,P_x(\tau = \infty)$.

(iii) $w(B_t)$ is a submartingale.

Because of the Markov property, it is enough to show that for all $x$ and $t$ we have $w(x) \le E_x w(B_t)$. Since $w \ge 0$, this is trivial if $x \notin G$. To prove this for $x \in G$, we note that (i) implies $W_t = w(B_t)$ is a bounded local martingale on $[0,\tau)$ and $W_\tau = 0$ on $\{\tau < \infty\}$, so using the optional stopping theorem

$$w(x) = E_x\big(W_{t\wedge\tau}\big) = E_x\big(w(B_t); \tau > t\big) \le E_x\big(w(B_t)\big)$$

(iv) There is a constant $\rho$ so that $w(x) = \rho\,P_x(\tau = \infty)$.

Since $w(B_t)$ is a bounded submartingale, it follows that as $t \to \infty$, $w(B_t)$ converges to a limit $W_\infty$. The argument in (4.7d) implies there is a constant $\rho$ so that $P_x(W_\infty = \rho) = 1$ for all $x$. Letting $t \to \infty$ in

$$w(x) = E_x\big(w(B_t); \tau > t\big)$$

and using the bounded convergence theorem gives (iv). □
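The representation $v(x) = E_x f(B_\tau)$ that underlies this whole section can be illustrated by simulation. The following sketch (our illustration; the boundary values $5.0$ and $2.0$, the mesh, and the sample count are arbitrary choices) approximates one-dimensional Brownian motion on $G = (-1,1)$ by a simple random walk and compares the Monte Carlo estimate with the exact exit probability $P_x(B_\tau = 1) = (1+x)/2$.

```python
import random

# Monte Carlo sketch of v(x) = E_x f(B_tau) for G = (-1, 1) in d = 1.
# Brownian motion is approximated by a simple random walk on a grid of
# mesh 0.1; the boundary values f(1) = 5.0, f(-1) = 2.0 and all numeric
# parameters are arbitrary choices made for this illustration.
random.seed(7)
MESH = 10          # grid points -MESH..MESH represent [-1, 1]

def exit_value(k):
    # walk on integers until hitting -MESH or MESH, then evaluate f
    while -MESH < k < MESH:
        k += 1 if random.random() < 0.5 else -1
    return 5.0 if k == MESH else 2.0

start = 2          # grid point 2 corresponds to x = 0.2
n = 20000
estimate = sum(exit_value(start) for _ in range(n)) / n
# exact: P_x(exit at 1) = (1 + x)/2 = 0.6, so v(0.2) = 0.6*5 + 0.4*2 = 3.8
```

The integer grid keeps the absorbing barriers exact; the only error left is Monte Carlo noise.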

4.5. Poisson's Equation

In this section, we will see what happens when we add a function of $x$ to the equation considered in the last section. That is, we will study:

(5.1a) $u \in C^2$ and $\tfrac12\Delta u = -g$ in $G$.

(5.1b) At each point of $\partial G$, $u$ is continuous and $u = 0$.

As in Section 4.2, we can add a solution of (4.1) to replace $u = 0$ in (5.1b) by $u = f$. As always, the first step in solving (5.1) is to find a local martingale.



(5.2) Theorem. Suppose that $u$ satisfies (5.1a). Then

$$M_t = u(B_t) + \int_0^t g(B_s)\, ds$$

is a local martingale on $[0,\tau)$.

Proof. Applying Itô's formula as we did in the last section gives

$$u(B_t) - u(B_0) = \int_0^t \nabla u(B_s)\cdot dB_s + \frac12\int_0^t \Delta u(B_s)\, ds$$

for $t < \tau$. This proves (5.2), since $\tfrac12\Delta u = -g$ and the first term on the right-hand side is a local martingale on $[0,\tau)$. □

The next step is to prove a uniqueness result.

(5.3) Theorem. Suppose that $G$ and $g$ are bounded. If there is a solution of (5.1) that is bounded, it must be

$$w(x) \equiv E_x\int_0^\tau g(B_t)\, dt$$

Proof. If $u$ satisfies (5.1a), then $M_t$ defined in (5.2) is a local martingale on $[0,\tau)$. If $G$ is bounded, then (1.8) in Chapter 3 implies $E_x\tau < \infty$ for all $x \in G$. If $u$ and $g$ are bounded, then for $t < \tau$,

$$|M_t| \le \|u\|_\infty + \tau\|g\|_\infty$$

Since the right-hand side is integrable, (2.7) in Chapter 2 and (5.1b) imply

$$M_\tau \equiv \lim_{t\uparrow\tau} M_t = \int_0^\tau g(B_t)\, dt$$

and

$$u(x) = E_x M_0 = E_x M_\tau = w(x)$$

□

As usual, it is easy to show:

(5.4) Theorem. Suppose that $G$ is bounded and $g$ is continuous. If $w \in C^2$, then it satisfies (5.1a).

Proof. The Markov property implies that on $\{\tau > s\}$,

$$E_x\left(\int_0^\tau g(B_t)\, dt\,\Big|\,\mathcal{F}_s\right) = \int_0^s g(B_t)\, dt + w(B_s)$$

The left-hand side is a local martingale on $[0,\tau)$, so the right-hand side is also. If $w \in C^2$, then repeating the calculation in the proof of (5.2) shows that for $s \in [0,\tau)$,

$$\int_0^s g(B_r)\, dr + w(B_s) = w(B_0) + \int_0^s \big(\tfrac12\Delta w + g\big)(B_r)\, dr + \text{a local martingale}$$

The left-hand side is a local martingale on $[0,\tau)$, so the integral on the right-hand side is also. However, the integral is continuous and locally of bounded variation, so by (3.3) in Chapter 2 it must be $\equiv 0$. Since $\tfrac12\Delta w + g$ is continuous in $G$, it follows that it is $\equiv 0$ in $G$, for if it were $\ne 0$ at some point, then we would have a contradiction. □

After the extensive discussion in the last section, the conditions needed to guarantee that the boundary conditions hold should come as no surprise.

(5.5) Theorem. Suppose that $G$ and $g$ are bounded. Let $y$ be a regular point of $\partial G$. If $x_n \in G$ and $x_n \to y$, then $w(x_n) \to 0$.

Proof. We begin by observing:

(i) It follows from (4.5a) that if $\epsilon > 0$, then $P_{x_n}(\tau > \epsilon) \to 0$.

(ii) If $G$ is bounded, then (1.3) in Chapter 3 implies $C = \sup_x E_x\tau < \infty$ and, hence, $\|w\|_\infty \le C\|g\|_\infty < \infty$.

Let $\epsilon > 0$. Beginning with some elementary inequalities and then using the Markov property, we have

$$|w(x_n)| \le E_{x_n}\int_0^\epsilon |g(B_s)|\, ds + E_{x_n}\left(\left|\int_\epsilon^\tau g(B_s)\, ds\right|;\ \tau > \epsilon\right) \le \epsilon\|g\|_\infty + E_{x_n}\big(|w(B_\epsilon)|;\ \tau > \epsilon\big) \le \epsilon\|g\|_\infty + \|w\|_\infty\, P_{x_n}(\tau > \epsilon)$$

Letting $n \to \infty$ and using (i) and (ii) proves (5.5), since $\epsilon$ is arbitrary. □

Last, but not least, we come to the question of smoothness. For these developments we will assume that $g$ is defined on $\mathbf{R}^d$, not just in $G$, and has compact support. Recall we are supposing $G$ is bounded, and notice that the values of $g$ on $G^c$ are irrelevant for (5.1), so there will be no loss of generality if we later want to suppose that $\int g(x)\, dx = 0$. We begin with the case $d \ge 3$, because in this case (2.2) in Chapter 3 implies

$$w_1(x) = E_x\int_0^\infty |g(B_t)|\, dt < \infty$$



and, moreover, is a bounded function of $x$. This means we can define

$$w(x) = E_x\int_0^\infty g(B_t)\, dt$$

use the strong Markov property to conclude

$$w(x) = E_x\int_0^\tau g(B_t)\, dt + E_x w(B_\tau)$$

and change notation to get

$$(*)\qquad v(x) = w(x) - E_x w(B_\tau)$$

(4.6) tells us that the second term is $C^\infty$ in $G$, so one need only prove that $w$ is, a task that is made simple by the fact that (2.4) and (2.3) in Chapter 3 give the following explicit formula:

$$w(x) = C_d\int |x-y|^{2-d}\, g(y)\, dy$$

The first derivative is easy.

(5.6a) Theorem. If $g$ is bounded and has compact support, then $w$ is $C^1$, and there is a constant $C$ which only depends on $d$ so that

$$|D_i w(x)| \le C\|g\|_\infty \int_{\{y\,:\,g(y)\ne 0\}} \frac{dy}{|x-y|^{d-1}} < \infty$$

Proof. We will content ourselves to show that the expression obtained by differentiating under the integral sign converges, and leave it to the reader to apply (1.7) to make the argument rigorous. Now

$$D_i\,|x-y|^p = D_i\left(\sum_j (x_j - y_j)^2\right)^{p/2} = p\,|x-y|^{p-2}(x_i - y_i)$$

so taking $p = 2-d$ we have

$$D_i\,|x-y|^{2-d} = (2-d)\,\frac{x_i - y_i}{|x-y|^d}$$

So, differentiating under the integral sign,

$$D_i w(x) = C_d\,(2-d)\int \frac{x_i - y_i}{|x-y|^d}\, g(y)\, dy$$

The integral on the right-hand side is convergent, since

$$\left|\frac{x_i - y_i}{|x-y|^d}\right| \le \frac{1}{|x-y|^{d-1}} \quad\text{and}\quad \int_{\{y\,:\,g(y)\ne 0\}} \frac{dy}{|x-y|^{d-1}} < \infty$$

□

As in Section 4.2, trouble starts when we consider second derivatives. If $i \ne j$, then

$$D_{ij}\,|x-y|^{2-d} = (2-d)(-d)\,|x-y|^{-d-2}(x_i - y_i)(x_j - y_j)$$

In this case, the estimate used above leads to

$$\big|D_{ij}\,|x-y|^{2-d}\big| \le C\,|x-y|^{-d}$$

which is (just barely) not locally integrable. As in Section 4.2, if $g$ is Hölder continuous of order $\alpha$, we can get an extra $|x-y|^\alpha$ to save the day. The details are tedious, so we will content ourselves to state the result.

(5.6b) Theorem. If $g$ is Hölder continuous, then $w$ is $C^2$.

The reader can find a proof either in Port and Stone (1978), pages 116-117, or in Gilbarg and Trudinger (1977), pages 53-55. Combining $(*)$ with (5.6b) gives

(5.6) Theorem. Suppose that $G$ is bounded. If $g$ is Hölder continuous, then $v \in C^2$ and hence satisfies (5.1a).

The last result settles the question of smoothness in $d \ge 3$. To extend the result to $d \le 2$, we need to find a substitute for $(*)$. To do this, we let

$$w(x) = \int G(x,y)\, g(y)\, dy$$

where $G$ is the potential kernel defined in (2.7) of Chapter 3, that is,

$$G(x,y) = \begin{cases} -|x-y| & d = 1 \\ -\frac{1}{\pi}\log|x-y| & d = 2 \end{cases}$$

$G$ was defined as $\lim_{t\to\infty}\big(\int_0^t p_s(x,y)\, ds - a_t\big)$, where the $a_t$ were chosen to make the limit converge. So if $\int g\, dx = 0$, we see that

$$\int G(x,y)\, g(y)\, dy = \lim_{T\to\infty} E_x\int_0^T g(B_t)\, dt$$

Using this interpretation of $w$, we can easily show that $(*)$ holds, so again our problem is reduced to proving that $w$ is $C^2$, which is a problem in calculus. Once all the computations are done, we find that (5.6) holds in $d \le 2$, and that in $d = 1$ it is sufficient to assume that $g$ is continuous. The reader can find



details for $d = 1$ in (4.5) of Chapter 6, and for $d = 2$ in either of the sources cited above.
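The probabilistic meaning of (5.3), $w(x) = E_x\int_0^\tau g(B_t)\,dt$, can also be checked by simulation. In the sketch below (our illustration; the step size and sample count are arbitrary) we take $G = (-1,1)$ and $g \equiv 1$, so that $w(x) = E_x\tau = 1 - x^2$.

```python
import random

# Monte Carlo sketch of w(x) = E_x int_0^tau g(B_t) dt from (5.3) with
# G = (-1, 1) and g = 1, so w(x) = E_x tau = 1 - x^2.  The walk uses
# spatial step s = 0.1 and time step s^2; all parameters are our choices.
random.seed(11)
MESH, S = 10, 0.1   # grid -MESH..MESH represents [-1, 1]

def exit_time(k):
    steps = 0
    while -MESH < k < MESH:
        k += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps * S * S

start = 5            # x = 0.5
n = 20000
estimate = sum(exit_time(start) for _ in range(n)) / n
# exact for Brownian motion: E_{0.5} tau = 1 - 0.25 = 0.75
```

The $s^2$ time scaling is what makes the random walk converge to Brownian motion, so the empirical mean exit time matches $1 - x^2$ up to Monte Carlo noise.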

4.6. The Schrödinger Equation

In this section, we will consider what happens when we add $cu$ to the left-hand side of the equation considered in Section 4.4. That is, we will study

(6.1a) $u \in C^2$ and $\tfrac12\Delta u + cu = 0$ in $G$.

(6.1b) At each point of $\partial G$, $u$ is continuous and $u = f$.

(6.2) Theorem. Suppose that $u$ satisfies (6.1a), and let $c_t = \int_0^t c(B_s)\, ds$. Then

$$M_t = u(B_t)\exp(c_t)$$

is a local martingale on $[0,\tau)$.

Proof. Applying Itô's formula gives

$$u(B_t)\exp(c_t) - u(B_0) = \int_0^t \exp(c_s)\nabla u(B_s)\cdot dB_s + \int_0^t u(B_s)\exp(c_s)\, dc_s + \frac12\int_0^t \Delta u(B_s)\exp(c_s)\, ds$$

for $t < \tau$. This proves (6.2), since $dc_s = c(B_s)\, ds$, $\tfrac12\Delta u + cu = 0$, and the first term on the right-hand side is a local martingale on $[0,\tau)$. □

At this point, the reader might expect that the next step, as it has been five times before, is to assume that everything is bounded and conclude that if there is a solution of (6.1) that is bounded, it must be

$$v(x) \equiv E_x\big(f(B_\tau)\exp(c_\tau)\big)$$

We will not do this, however, because the following simple example shows that this result is false.

Example 6.1. Let $d = 1$, $G = (-a,a)$, $c \equiv \gamma$, and $f \equiv 1$. The equation we are considering is

$$\tfrac12 u'' + \gamma u = 0 \qquad u(a) = u(-a) = 1$$

The general solution is $A\cos(bx) + B\sin(bx)$, where $b = \sqrt{2\gamma}$. So if we want the boundary condition to be satisfied, we must have

$$1 = A\cos ba + B\sin ba$$

$$1 = A\cos(-ba) + B\sin(-ba) = A\cos ba - B\sin ba$$

Adding the two equations and then subtracting them, it follows that

$$2 = 2A\cos ba \qquad 0 = 2B\sin ba$$

From this we see that $B = 0$ always works, and we may or may not be able to solve for $A$:

If $\cos ba = 0$, then there is no solution.

If $\cos ba \ne 0$, then $u(x) = \cos bx/\cos ba$ is a solution.

We will see later (in Example 9.1) that if $ba < \pi/2$, then

$$v(x) = \cos bx/\cos ba$$

However, this cannot possibly hold for $ba > \pi/2$, since $v(x) \ge 0$ while the right-hand side is $\le 0$ for some values of $x$. □

We will see below (again in Example 9.1) that the trouble with the last example is that if $ba \ge \pi/2$, then $c \equiv \gamma$ is too large, or, to be precise, if we let

$$w(x) = E_x\exp\left(\int_0^\tau c(B_s)\, ds\right)$$

then $w \equiv \infty$ in $(-a,a)$. The rest of this section is devoted to showing that if $w \not\equiv \infty$, then "everything is fine." The development will require several stages. The first step is to show

(6.3a) Lemma. Let $\gamma > 0$. There is a $\mu > 0$ so that if $H$ is an open set with Lebesgue measure $|H| \le \mu$ and $\tau_H = \inf\{t > 0 : B_t \notin H\}$, then

$$\sup_x E_x\big(\exp(\gamma\tau_H)\big) \le 2$$

Proof. Pick $\eta > 0$ so that $e^{\gamma\eta} \le 4/3$. Clearly,

$$P_x(\tau_H > \eta) \le \int_H \frac{1}{(2\pi\eta)^{d/2}}\, e^{-|x-y|^2/2\eta}\, dy \le \frac{|H|}{(2\pi\eta)^{d/2}} \le \frac14$$

if we pick $\mu$ so that $\mu\,(2\pi\eta)^{-d/2} \le 1/4$. Using the Markov property as in the proof of (1.2) in Chapter 3, we can conclude that

$$P_x(\tau_H > k\eta) = E_x\big(P_{B_\eta}(\tau_H > (k-1)\eta);\ \tau_H > \eta\big) \le \frac14\,\sup_y P_y(\tau_H > (k-1)\eta)$$

So it follows by induction that for all integers $k \ge 0$ we have

$$\sup_x P_x(\tau_H > k\eta) \le 4^{-k}$$

Since $e^{\gamma t}$ is increasing, and $e^{\gamma\eta} \le 4/3$ by assumption, we have

$$E_x\exp(\gamma\tau_H) \le \sum_{k=1}^\infty E_x\big(\exp(\gamma\tau_H);\ (k-1)\eta < \tau_H \le k\eta\big) \le \sum_{k=1}^\infty \left(\frac43\right)^k 4^{-(k-1)} = 2$$

Careful readers may have noticed that we left $\tau_H = 0$ out of the expected value. However, by the Blumenthal 0-1 law, either $P_x(\tau_H = 0) = 0$, in which case our computation is valid, or $P_x(\tau_H = 0) = 1$, in which case $E_x\exp(\gamma\tau_H) = 1$. □

Let $c^* = \sup_x |c(x)|$. By (6.3a), we can pick $r_0$ so small that if $T_r = \inf\{t : |B_t - B_0| > r\}$ and $r \le r_0$, then $E_x\exp(c^* T_r) \le 2$ for all $x$.

(6.3b) Lemma. Let $2\delta \le r_0$. If $D(x,2\delta) \subset G$ and $y \in D(x,\delta)$, then

$$w(y) \le 2^{d+2}\, w(x)$$

The reason for our interest in this is that it shows $w(x) < \infty$ implies $w(y) < \infty$ for $y \in D(x,\delta)$.

Proof. If $D(y,r) \subset G$ and $r \le r_0$, then the strong Markov property implies

$$w(y) = E_y\big(\exp(c_{T_r})\, w(B_{T_r})\big) \le E_y\big(\exp(c^* T_r)\big)\int w(z)\,\pi(dz) \le 2\int w(z)\,\pi(dz)$$

where $\pi$ is surface measure on $\partial D(y,r)$ normalized to be a probability measure; here we have used the fact that the exit time $T_r$ and the exit location $B_{T_r}$ are independent. If $\delta \le r_0$ and $D(y,\delta) \subset G$, multiplying the last inequality by $c_d\, r^{d-1}$, where $c_d$ is the surface area of $\{z : |z| = 1\}$, and integrating from 0 to $\delta$ gives

$$\frac{c_d\,\delta^d}{d}\, w(y) \le 2\int_{D(y,\delta)} w(z)\, dz$$

Rearranging, we have

$$(+)\qquad w(y) \le 2\,\frac{C_0}{\delta^d}\int_{D(y,\delta)} w(z)\, dz$$

where $C_0 = d/c_d$ is a constant that depends only on $d$.

Repeating the first argument with $y = x$ and using the fact that $c_{T_r} \ge -c^* T_r$ gives

$$w(x) = E_x\big(\exp(c_{T_r})\, w(B_{T_r})\big) \ge E_x\big(\exp(-c^* T_r)\big)\int w(z)\,\pi(dz)$$

Since $1/x$ is convex, Jensen's inequality implies

$$E_x\exp(-c^* T_r) \ge 1/E_x\exp(c^* T_r) \ge 1/2$$

Combining the last two displays, multiplying by $c_d\, r^{d-1}$, and integrating from 0 to $2\delta$, we get the lower bound

$$w(x) \ge 2^{-1}\,\frac{C_0}{(2\delta)^d}\int_{D(x,2\delta)} w(z)\, dz$$

Rearranging and using $D(x,2\delta) \supset D(y,\delta)$ and $w \ge 0$, we have

$$(++)\qquad w(x) \ge 2^{-1}\,\frac{C_0}{(2\delta)^d}\int_{D(y,\delta)} w(z)\, dz$$

Combining $(++)$ and $(+)$, it follows that

$$w(x) \ge 2^{-1}\, 2^{-d}\,\frac{C_0}{\delta^d}\int_{D(y,\delta)} w(z)\, dz \ge 2^{-(d+2)}\, w(y)$$

□

(6.3b) and a simple covering argument lead to



(6.3c) Theorem. Let $G$ be a connected open set. If $w \not\equiv \infty$, then

$$w(x) < \infty \quad\text{for all } x \in G$$

Proof. From (6.3b), we see that if $w(x) < \infty$, $2\delta \le r_0$, and $D(x,2\delta) \subset G$, then $w < \infty$ on $D(x,\delta)$. From this result, it follows that $G_0 = \{x : w(x) < \infty\}$ is an open subset of $G$. To argue now that $G_0$ is also closed (when considered as a subset of $G$), we observe that if $x_n \in G_0$ with $x_n \to y \in G$, we can pick $\delta$ with $2\delta \le r_0$ and $D(y,3\delta) \subset G$; then for $n$ sufficiently large, $y \in D(x_n,\delta)$ and $D(x_n,2\delta) \subset G$, so $w(y) < \infty$. Since $G$ is connected, $G_0 = G$. □

Before we proceed to the uniqueness result, we will need to strengthen the last conclusion.

(6.3d) Theorem. Let $G$ be a connected open set with finite Lebesgue measure, $|G| < \infty$. If $w \not\equiv \infty$, then

$$\sup_x w(x) < \infty$$

Proof. Let $K \subset G$ be compact so that $|G - K| \le \mu$, the constant in (6.3a) for $\gamma = c^*$. For each $x \in K$ we can pick $\delta_x$ so that $2\delta_x \le r_0$ and $D(x, 2\delta_x) \subset G$. The open sets $D(x,\delta_x)$ cover $K$, so there is a finite subcover $D(x_i,\delta_i)$, $1 \le i \le I$. Clearly,

$$\sup_i w(x_i) < \infty$$

Since (6.3b) implies $w(y) \le 2^{d+2}\, w(x_i)$ for $y \in D(x_i,\delta_i)$,

$$M = \sup_K w(y) < \infty$$

If $y \in H = G - K$, then $E_y(\exp(c^*\tau_H)) \le 2$ by (6.3a), so using the strong Markov property

$$w(y) = E_y\big(\exp(c_{\tau_H});\ \tau_H = \tau\big) + E_y\big(\exp(c_{\tau_H})\, w(B_{\tau_H});\ B_{\tau_H} \in K\big) \le 2 + 2M$$

□

With (6.3d) established, we are now more than ready to prove our uniqueness result. To simplify the statements of the results that follow, we will now list the assumptions that we will make for the rest of the section.

(A1) $G$ is a bounded connected open set.

(A2) $f$ and $c$ are bounded and continuous.

(A3) $w \not\equiv \infty$.

(6.3) Theorem. If there is a solution of (6.1) that is bounded, it must be

$$v(x) = E_x\big(f(B_\tau)\exp(c_\tau)\big)$$

Proof. (6.2) implies that $M_s = u(B_s)\exp(c_s)$ is a local martingale on $[0,\tau)$. Since $f$, $c$, and $u$ are bounded, letting $s \uparrow \tau \wedge t$ and using the bounded convergence theorem gives

$$u(x) = E_x\big(f(B_\tau)\exp(c_\tau);\ \tau \le t\big) + E_x\big(u(B_t)\exp(c_t);\ \tau > t\big)$$

Since $f$ is bounded and $w(x) = E_x\exp(c_\tau) < \infty$, the dominated convergence theorem implies that as $t \to \infty$ the first term converges to $E_x(f(B_\tau)\exp(c_\tau))$. To show that the second term $\to 0$, we begin with the observation that since $\{\tau > t\} \in \mathcal{F}_t$, the definition of conditional expectation and the Markov property imply

$$E_x\big(\exp(c_\tau);\ \tau > t\big) = E_x\big(E_x(\exp(c_\tau)\,|\,\mathcal{F}_t);\ \tau > t\big) = E_x\big(\exp(c_t)\, w(B_t);\ \tau > t\big)$$

Now we claim that for all $y \in G$

$$w(y) \ge \exp(-c^*)\, P_y(\tau \le 1) \ge \epsilon > 0$$

The first inequality is trivial. The last two follow easily from (A1); see the first display in the proof of (1.2) in Chapter 3. Replacing $w(B_t)$ by $\epsilon$,

$$E_x\big(|u(B_t)|\exp(c_t);\ \tau > t\big) \le \epsilon^{-1} E_x\big(|u(B_t)|\exp(c_t)\, w(B_t);\ \tau > t\big) \le \epsilon^{-1}\|u\|_\infty\, E_x\big(\exp(c_\tau);\ \tau > t\big) \to 0$$

as $t \to \infty$, by the dominated convergence theorem, since $w(x) = E_x\exp(c_\tau) < \infty$ and $P_x(\tau < \infty) = 1$. Going back to the first equation in the proof, we have shown $u(x) = v(x)$, and the proof is complete. □

This completes our consideration of uniqueness. The next stage in our program, fortunately, is as easy as it always has been. Recall that here and in what follows we are assuming (A1)-(A3).

(6.4) Theorem. If $v \in C^2$, then it satisfies (6.1a) in $G$.



Proof. The Markov property implies that on $\{\tau > s\}$,

$$E_x\big(\exp(c_\tau) f(B_\tau)\,\big|\,\mathcal{F}_s\big) = \exp(c_s)\, E_{B_s}\big(\exp(c_\tau) f(B_\tau)\big) = \exp(c_s)\, v(B_s)$$

The left-hand side is a local martingale on $[0,\tau)$, so the right-hand side is also. If $v \in C^2$, then repeating the calculation in the proof of (6.2) shows that for $s \in [0,\tau)$,

$$\exp(c_s)\, v(B_s) - v(B_0) = \int_0^s \big(\tfrac12\Delta v + cv\big)(B_r)\exp(c_r)\, dr + \text{a local martingale}$$

The left-hand side is a local martingale on $[0,\tau)$, so the integral on the right-hand side is also. However, the integral is continuous and locally of bounded variation, so by (3.3) in Chapter 2 it must be $\equiv 0$. Since $v \in C^2$ and $c$ is continuous, it follows that $\tfrac12\Delta v + cv \equiv 0$, for if it were $\ne 0$ at some point, then we would have a contradiction. □

Having proved (6.4), the next step is to consider the boundary condition. As in the last two sections, we need the boundary to be regular.

(6.5) Theorem. $v$ satisfies (6.1b) at each regular point of $\partial G$.

Proof. Let $y$ be a regular point of $\partial G$. We showed in (4.5a) and (4.5b) that if $x_n \to y$, then $P_{x_n}(\tau \le t) \to 1$ and $P_{x_n}(B_\tau \in D(y,\delta)) \to 1$ for all $\delta > 0$. Since $c$ is bounded and $f$ is bounded and continuous, the bounded convergence theorem implies that

$$E_{x_n}\big(\exp(c_\tau) f(B_\tau);\ \tau \le 1\big) \to f(y)$$

To control the contribution from the rest of the space, we observe that if $|c| \le M$, then using the Markov property and the boundedness of $w$ established in (6.3d), we have

$$E_{x_n}\big(\exp(c_\tau) f(B_\tau);\ \tau > 1\big) \le e^M\|f\|_\infty\, E_{x_n}\big(w(B_1);\ \tau > 1\big) \le e^M\|f\|_\infty\|w\|_\infty\, P_{x_n}(\tau > 1) \to 0$$

□

This brings us finally to the problem of determining when $v$ is smooth enough to be a solution. We use the same trick used in Section 4.3 to reduce to the previous case. We begin with the identity established there, which holds for all $t$ and Brownian paths $\omega$ and, hence, holds when $t = \tau(\omega)$:

$$\exp\left(\int_0^\tau c(B_s)\, ds\right) = 1 + \int_0^\tau c(B_s)\exp\left(\int_s^\tau c(B_r)\, dr\right) ds$$

Multiplying by $f(B_\tau)$ and taking expected values gives

$$v(x) = E_x f(B_\tau) + \int_0^\infty E_x\left(c(B_s)\exp\left(\int_s^\tau c(B_r)\, dr\right) f(B_\tau)\, 1_{(s<\tau)}\right) ds$$

Conditioning on $\mathcal{F}_s$ and using the Markov property, we can write the above as

$$v(x) = E_x f(B_\tau) + \int_0^\infty E_x\big(c(B_s)\, v(B_s);\ \tau > s\big)\, ds \equiv v_1(x) + v_2(x)$$

The first term, $v_1(x)$, is $C^\infty$ by (4.6). The second term is

$$v_2(x) = E_x\int_0^\tau c(B_s)\, v(B_s)\, ds$$

so if we let $g(x) = c(x)v(x)$, then we can apply results from the last section. If $c$ and $f$ are bounded and $w \not\equiv \infty$, then $v$ is bounded by (6.3d), so it follows from results in the last section that $v_2$ is $C^1$ and has a bounded derivative. Since $v_1 \in C^\infty$ and $G$ is bounded, it follows that $v$ is $C^1$ and has a bounded derivative. If $c$ is Hölder continuous, then $g(x) = c(x)v(x)$ is Hölder continuous, and we can use (5.6b) from the last section to conclude $v_2 \in C^2$ and hence

(6.6) Theorem. If, in addition to (A1)-(A3), $c$ is Hölder continuous, then $v \in C^2$ and, hence, satisfies (6.1a).

C. Applications to Brownian Motion

In the next three sections we will use the p.d.e. results proved in the last three to derive some formulas for Brownian motion. The first two sections are closely related, but the third can be read independently.
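As a preview of the formulas to come, the Feynman-Kac representation $v(x) = E_x(f(B_\tau)\exp(c_\tau))$ of Section 4.6 can be tested by simulation against the closed form that (9.1) will establish: $E_0\, e^{\gamma\tau} = 1/\cos(\sqrt{2\gamma})$ for $G = (-1,1)$. This is a rough sketch; the value $\gamma = 0.3$ and the walk parameters are our choices.

```python
import math
import random

# Monte Carlo sketch of the Feynman-Kac formula v(x) = E_x(f(B_tau) e^{c_tau})
# of Section 4.6 with d = 1, G = (-1, 1), c = gamma = 0.3, f = 1.  By (9.1),
# E_0 exp(gamma * tau) = 1 / cos(sqrt(2 * gamma)).  Parameters are our choices.
random.seed(3)
GAMMA, MESH, S = 0.3, 10, 0.1   # grid -MESH..MESH represents [-1, 1]

def weighted_exit(k):
    steps = 0
    while -MESH < k < MESH:
        k += 1 if random.random() < 0.5 else -1
        steps += 1
    return math.exp(GAMMA * steps * S * S)   # exp(gamma * tau), f = 1

n = 20000
estimate = sum(weighted_exit(0) for _ in range(n)) / n
exact = 1.0 / math.cos(math.sqrt(2.0 * GAMMA))  # about 1.40
```

Note that $\gamma = 0.3 < \pi^2/8$, so the expectation is finite; for $\gamma \ge \pi^2/8$ the simulation would diverge, exactly as Example 6.1 predicts.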



4.7. Exit Distributions for the Ball

In this section, we will use results for the Dirichlet problem proved in Section 4.4 to find the exit distributions for $D = \{x : |x| < 1\}$. Our main result is


(7.1) Theorem. If $f$ is bounded and measurable, then

$$(*)\qquad E_x f(B_\tau) = \int_{\partial D} \frac{1 - |x|^2}{|x-y|^d}\, f(y)\,\pi(dy)$$

where $\pi$ is surface measure on $\partial D$ normalized to be a probability measure.

Proof. An application of the monotone class theorem shows that if $(*)$ holds for $f \in C^\infty$, it is valid for bounded measurable $f$. In view of (4.3), we can prove $(*)$ for $f \in C^\infty$ by showing that if $k_y(x) = (1 - |x|^2)/|x-y|^d$ and

$$v(x) = \begin{cases} \int_{\partial D} k_y(x)\, f(y)\,\pi(dy) & x \in D \\ f(x) & x \in \partial D \end{cases}$$

then $v$ solves the Dirichlet problem (4.1):

(7.2a) In $D$, $v \in C^2$ and $\Delta v = 0$.

(7.2b) At each point of $\partial D$, $v$ is continuous and $v(x) = f(x)$.

The first, somewhat painful, step in doing this is to show

(7.3) Lemma. If $y \in \partial D$, then $\Delta k_y = 0$ in $D$.

Proof. To warm up for this, we observe that

$$D_i\,|x-y|^p = D_i\left(\sum_j (x_j - y_j)^2\right)^{p/2} = p\,|x-y|^{p-2}(x_i - y_i)$$

so taking $p = -d$ we have

$$D_i k_y(x) = (-2x_i)\cdot\frac{1}{|x-y|^d} + (1-|x|^2)\cdot\frac{-d\,(x_i - y_i)}{|x-y|^{d+2}}$$

Differentiating again and using our fact with $p = -(d+2)$ gives

$$D_{ii} k_y(x) = \frac{-2}{|x-y|^d} + 2\cdot(-2x_i)\cdot\frac{-d\,(x_i - y_i)}{|x-y|^{d+2}} + (1-|x|^2)\left(\frac{d(d+2)(x_i - y_i)^2}{|x-y|^{d+4}} - \frac{d}{|x-y|^{d+2}}\right)$$

Summing the last expression on $i$ gives

$$\Delta k_y(x) = \frac{-2d}{|x-y|^d} + 4d\,\frac{|x|^2 - x\cdot y}{|x-y|^{d+2}} + (1-|x|^2)\cdot\frac{2d}{|x-y|^{d+2}} = \frac{-2d\,|x-y|^2 + 4d(|x|^2 - x\cdot y) + 2d(1-|x|^2)}{|x-y|^{d+2}}$$

If we replace the 1 by $|y|^2$, the expression collapses:

$$\Delta k_y(x) = \frac{2d}{|x-y|^{d+2}}\left(-|x-y|^2 + 2|x|^2 - 2x\cdot y + |y|^2 - |x|^2\right) = 0$$

since $|x-y|^2 = (x-y)\cdot(x-y) = |x|^2 - 2x\cdot y + |y|^2$. □

On the open set $D$, $v$ is a linear combination of the $k_y$. So bringing the differentiation under the integral (and leaving it to the reader to justify this using (1.7)) gives that $v$ satisfies (7.2a). To check (7.2b), the first step is to show

$$I(x) = \int_{\partial D} \frac{1-|x|^2}{|x-y|^d}\,\pi(dy) \equiv 1$$

This is just calculus, but we prefer to use a soft, noncomputational approach instead. We begin by observing that $I(0) = 1$, $I$ is invariant under rotations, and $\Delta I = 0$ in $D$. (For the last conclusion, apply the result for $v$ with $f \equiv 1$.) To conclude $I \equiv 1$ now, let $x \in D$ with $|x| = r < 1$, and let $\tau_r = \inf\{t : |B_t| \ge r\}$. Applying (1.2) of Chapter 3 with $G = D(0,r)$ now shows

$$I(0) = E_0\, I(B_{\tau_r}) = I(x)$$

where the second equality follows from invariance under rotations.

To show that $v(x) \to f(y)$ as $x \to y \in \partial D$, we observe that if $z \ne y$,

$$k_z(x) = \frac{1-|x|^2}{|x-z|^d} \to \frac{0}{|y-z|^d} \qquad\text{as } x \to y$$



From the last calculation, it is clear that if $\delta > 0$, then the convergence is uniform for $z \in B_1(\delta) = \partial D - D(y,\delta)$. Thus, if we let $B_2(\delta) = \partial D - B_1(\delta)$, then

$$\int_{B_1(\delta)} k_z(x)\,\pi(dz) \to 0 \quad\text{and}\quad \int_{B_2(\delta)} k_z(x)\,\pi(dz) \to 1$$

since $I(x) = 1$. To prove that $v(x) \to f(y)$, we now observe

$$|v(x) - f(y)| = \left|\int k_z(x)\, f(z)\,\pi(dz) - f(y)\int k_z(x)\,\pi(dz)\right| \le 2\|f\|_\infty\int_{B_1(\delta)} k_z(x)\,\pi(dz) + \sup_{z\in B_2(\delta)}|f(z) - f(y)|$$

The first term $\to 0$ as $x \to y$, while the sup in the second is small if $\delta$ is. This shows that (7.2b) holds and completes the proof of (7.1). □
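The normalization $I(x) \equiv 1$ used in the proof can be confirmed by direct quadrature over the unit circle in $d = 2$; the test point below is an arbitrary choice.

```python
import math

# Quadrature check that the kernel in (7.1) has total mass one in d = 2:
# I(x) = (1/2 pi) int_0^{2 pi} (1 - |x|^2)/|x - y(t)|^2 dt = 1 for |x| < 1.
def kernel_mass(px, py, n=20000):
    r2 = px * px + py * py
    total = 0.0
    for k in range(n):
        t = 2.0 * math.pi * k / n
        dx, dy = px - math.cos(t), py - math.sin(t)
        total += (1.0 - r2) / (dx * dx + dy * dy)
    return total / n

total_mass = kernel_mass(0.3, 0.4)  # the test point is arbitrary, |x| = 0.5
```

Because the integrand is smooth and periodic, the trapezoidal sum converges extremely fast, so the result is 1 to machine precision.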

The derivation given above is a little unsatisfying, since it starts with the answer and then verifies it, but it is simpler than deriving the formula with Möbius transformations (see pages 100-103 in Port and Stone (1978)). It also has the merit of explaining why $k_y(x)$ is the probability density of exiting at $y$: $k_y$ is a nonnegative harmonic function that has $k_y(0) = 1$ and $k_y(x) \to 0$ when $x \to z \in \partial D$ and $z \ne y$.
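The harmonicity of $k_y$ asserted in (7.3) can likewise be checked numerically with a five-point finite-difference Laplacian; the choices $d = 2$, $y = (1,0)$, the test point, and the mesh are ours.

```python
# Finite-difference check of (7.3): k_y(x) = (1 - |x|^2)/|x - y|^d is
# harmonic in D when y lies on the unit sphere.  Here d = 2, y = (1, 0);
# the test point and mesh h are arbitrary choices.
def k(x1, x2):
    num = 1.0 - (x1 * x1 + x2 * x2)
    den = (x1 - 1.0) ** 2 + x2 ** 2      # |x - y|^2 with y = (1, 0)
    return num / den

def laplacian(f, x1, x2, h=1e-3):
    # standard five-point approximation of the Laplacian
    return (f(x1 + h, x2) + f(x1 - h, x2) + f(x1, x2 + h) + f(x1, x2 - h)
            - 4.0 * f(x1, x2)) / (h * h)

lap = laplacian(k, 0.2, 0.3)  # should vanish up to O(h^2) error
```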

Exercise 7.1. The point of this exercise is to apply the reasoning of this section to $\tau = \inf\{t : B_t \notin H\}$, where $H = \{(x,y) : x \in \mathbf{R}^{d-1}, y > 0\}$. For $\theta \in \mathbf{R}^{d-1}$ let

$$p_\theta(x,y) = \frac{C_d\, y}{(|x-\theta|^2 + y^2)^{d/2}}$$

where $C_d$ is chosen so that $\int d\theta\, p_\theta(0,1) = 1$, and let

$$u(x,y) = \int d\theta\, p_\theta(x,y)\, f(\theta,0)$$

where $f$ is bounded and continuous.

(a) Show that $\Delta p_\theta = 0$ in $H$ and use (1.7) to conclude $\Delta u = 0$ in $H$.

(b) Show $I(x,y) = \int d\theta\, p_\theta(x,y) \equiv 1$.

(c) Show that if $x_n \to x$ and $y_n \to 0$, then $u(x_n, y_n) \to f(x,0)$.

(d) Conclude that $E_{(x,y)} f(B_\tau) = \int d\theta\, p_\theta(x,y)\, f(\theta,0)$.

4.8. Occupation Times for the Ball

In the last section we considered how $B_t$ leaves $D = \{x : |x| < 1\}$. In this section we will investigate how it spends its time before it leaves. Let $\tau = \inf\{t : B_t \notin D\}$ and let $G(x,y)$ be the potential kernel defined in (2.7) of Chapter 3. That is,

$$G(x,y) = \begin{cases} -|x-y| & d = 1 \\ -\frac{1}{\pi}\log|x-y| & d = 2 \\ C_d\,|x-y|^{2-d} & d \ge 3 \end{cases}$$

where $C_d = \Gamma(d/2 - 1)/2\pi^{d/2}$.

(8.1) Theorem. If $f$ is bounded and measurable, then

$$E_x\int_0^\tau f(B_s)\, ds = \int G_D(x,y)\, f(y)\, dy$$

where

$$G_D(x,y) = G(x,y) - \int \frac{1-|x|^2}{|x-z|^d}\, G(z,y)\,\pi(dz)$$

Proof. Combine $(*)$ in Section 4.5 with (7.1). □

We think of $G_D(x,y)$ as the expected amount of time a Brownian motion starting at $x$ spends at $y$ before exiting $D$. To be precise, if $A \subset D$, then the expected amount of time $B_t$ spends in $A$ before exiting $D$ is $\int_A G_D(x,y)\, dy$. Our task in this section is to compute $G_D(x,y)$. In $d = 1$, where $D = (-1,1)$, (8.1) tells us that

$$G_D(x,y) = G(x,y) - \frac{x+1}{2}\,G(1,y) - \frac{1-x}{2}\,G(-1,y) = -|x-y| + \frac{x+1}{2}(1-y) + \frac{1-x}{2}(y+1) = -|x-y| + 1 - xy$$

Considering the two cases $x \ge y$ and $x \le y$ leads to

$$(8.2)\qquad G_D(x,y) = \begin{cases} (1-x)(1+y) & -1 \le y \le x < 1 \\ (1+x)(1-y) & -1 < x < y \le 1 \end{cases}$$

(Geometrically, if we fix $y$, then $x \to G_D(x,y)$ is determined by the conditions that it is linear on $(-1,y)$ and on $(y,1)$ with $G_D(-1,y) = G_D(1,y) = 0$ and $G_D(y,y) = 1 - y^2$.)
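Formula (8.2) can be sanity-checked against the known expected exit time: taking $f \equiv 1$ in (8.1) gives $\int_{-1}^1 G_D(x,y)\, dy = E_x\tau = 1 - x^2$. A short numerical check (the evaluation point $x = 0.4$ is an arbitrary choice):

```python
# Check of the occupation-time identity (8.1) in d = 1 with f = 1:
# int_{-1}^{1} G_D(x, y) dy should equal E_x tau = 1 - x^2.
def green_1d(x, y):
    # formula (8.2)
    if y <= x:
        return (1.0 - x) * (1.0 + y)
    return (1.0 + x) * (1.0 - y)

def integral_over_interval(x, n=20000):
    # midpoint rule over (-1, 1)
    h = 2.0 / n
    return sum(green_1d(x, -1.0 + (i + 0.5) * h) for i in range(n)) * h

val = integral_over_interval(0.4)  # expect 1 - 0.4**2 = 0.84
```

Since the integrand is piecewise linear, the midpoint rule is essentially exact here.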



In $d \ge 2$, (8.1) works well when $y = 0$, for then $G(z,0)$ is constant on $\{|z| = 1\}$, and the expression in (8.1) reduces to

$$(8.3)\qquad G_D(x,0) = G(x,0) - G(e_1,0) = \begin{cases} C_d\,(|x|^{2-d} - 1) & d \ge 3 \\ -\frac{1}{\pi}\log|x| & d = 2 \end{cases}$$

To get an explicit formula for $G_D(x,y)$ in $d \ge 2$ for $y \ne 0$, we will cheat and look up the answer. This weakness of character has the advantage of making the point that $G_D(x,y)$ is nothing more than the Green's function for $D$ with Dirichlet boundary conditions. Folland (1976), page 110, defines the Green's function for $D$ to be the function $K(x,y)$ on $D \times D$ determined by the following properties:

(A) For each $y \in D$, $K(\cdot,y) - G(\cdot,y)$ is $C^2$ and harmonic in $D$.

(B) For each $y \in D$, if $x_n \to z \in \partial D$, then $K(x_n,y) \to 0$.

Remark. For convenience, we have changed Folland's notation to conform to ours and interchanged the roles of $x$ and $y$. This interchange makes no difference, since the Green's function is symmetric, that is, $K(x,y) = K(y,x)$. (See Folland (1976), page 110.)

(8.4) Theorem. $G_D(x,y)$ has properties (A) and (B), and hence is the Green's function for $D$.

Proof. Using (8.1) and (7.1) we have

$$G_D(x,y) - G(x,y) = -\int_{\partial D} \frac{1-|x|^2}{|x-z|^d}\, G(z,y)\,\pi(dz) = -E_x\, G(B_\tau, y)$$

This is harmonic in $D$ by (4.6). (4.5) implies that if $x_n \to z \in \partial D$, then

$$E_{x_n} G(B_\tau, y) \to G(z,y)$$

□

Having made the connection in (8.4), we can now find $G_D$ by "guessing and verifying" functions with properties (A) and (B). To "guess," we turn to page 123 in Folland (1976) and find

(8.5) Theorem. In $d \ge 3$, if $0 < |y| < 1$, then

$$G_D(x,y) = G(x,y) - |y|^{2-d}\, G(x, y/|y|^2)$$

Proof. To check (A) and (B), we observe that if we let $\Delta_x$ denote the Laplacian acting in the $x$ variable for fixed $y$, then

$$(+)\qquad \Delta_x G(x,y) = 0 \qquad\text{for } x \ne y$$

(a) Since $y/|y|^2 \notin \bar{D}$, $(+)$ implies that the second term is harmonic for $x \in D$. (b) Let $x \in \partial D$. Clearly, the right-hand side is continuous at $x$. To see that it vanishes, we note

$$G(x,y) - |y|^{2-d}\,G(x, y/|y|^2) = \frac{C_d}{|x-y|^{d-2}} - \frac{C_d}{\big(|y|\,|x - y/|y|^2|\big)^{d-2}} = \frac{C_d}{|x-y|^{d-2}} - \frac{C_d}{\big|\,x|y| - y|y|^{-1}\big|^{d-2}} = 0$$

The last equality follows from a fact useful for the next proof:

(8.6) Lemma. If $|x| = 1$, then $\big|\,x|y| - y|y|^{-1}\big| = |x-y|$.

Proof. Using $|z|^2 = z\cdot z$ and then $|x|^2 = 1$, we have

$$\big|\,x|y| - y|y|^{-1}\big|^2 = |x|^2|y|^2 - 2x\cdot y + 1 = |y|^2 - 2x\cdot y + |x|^2 = |x-y|^2$$

□

Again turning to page 123 of Folland (1976), we "guess"

(8.7) Theorem. In $d = 2$, if $0 < |y| < 1$, then

$$G_D(x,y) = -\frac{1}{\pi}\left(\ln|x-y| - \ln\big|\,x|y| - y|y|^{-1}\big|\right)$$

Proof. Again we need to check (A) and (B).

(a) $$G_D(x,y) - G(x,y) = \frac{1}{\pi}\ln\Big|\,x - \frac{y}{|y|^2}\Big| + \frac{1}{\pi}\ln|y|$$

Again, the first term is harmonic by $(+)$, since $y/|y|^2 \notin \bar{D}$. The second term does not depend on $x$ and hence is trivially harmonic. (b) Let $x \in \partial D$. Clearly, the right-hand side is continuous at $x$. The fact that it vanishes follows immediately from (8.6). □
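Combining (8.1) with $f \equiv 1$ and (8.3) gives the expected exit time from the unit ball: in $d = 3$, $E_0\tau = \int_D C_3\,(|y|^{-1} - 1)\, dy$, which should equal the known value $1/3$ (in general $E_x\tau = (1-|x|^2)/d$). A quick numerical check of the radial integral:

```python
import math

# Radial check that E_0 tau = int_D G_D(0,y) dy = 1/3 for the unit ball
# in d = 3, using (8.3): G_D(0,y) = C_3 (|y|^{-1} - 1), C_3 = 1/(2*pi).
C3 = 1.0 / (2.0 * math.pi)

def expected_exit_time(n=10000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += (1.0 / r - 1.0) * r * r * h   # (|y|^{-1} - 1) over shells
    return 4.0 * math.pi * C3 * total          # shell area 4*pi*r^2

t0 = expected_exit_time()  # known value: E_0 tau = (1 - |0|^2)/3 = 1/3
```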



4.9. Laplace Transforms, Arcsine Law

In this section we will apply results from Section 4.6 to do three things:

(a) Complete the discussion of Example 6.1.

(b) Prove a remarkable observation of Ciesielski and Taylor (1962): for Brownian motion starting at 0, the distribution of the exit time from $D = \{x : |x| < 1\}$ in dimension $d$ is the same as that of the total occupation time of $D$ in dimension $d+2$.

(c) Prove Lévy's arcsine law for the occupation time of $(0,\infty)$.

The third topic is independent of the first two.

Example 9.1. Take $d = 1$, $G = (-a,a)$, $c(x) \equiv \gamma > 0$, and $f \equiv 1$ in the problem considered in Section 4.6.

(9.1) Theorem. Let $\tau = \inf\{t : B_t \notin (-a,a)\}$. If $0 < \gamma < \pi^2/8a^2$, then

$$E_x e^{\gamma\tau} = \frac{\cos(x\sqrt{2\gamma})}{\cos(a\sqrt{2\gamma})}$$

If $\gamma \ge \pi^2/8a^2$, then $E_x e^{\gamma\tau} \equiv \infty$.

Proof. By Example 6.1,

$$u(x) = \cos(x\sqrt{2\gamma})/\cos(a\sqrt{2\gamma})$$

is a nonnegative solution of

$$\tfrac12 u'' + \gamma u = 0 \qquad u(-a) = u(a) = 1$$

To check that $w \not\equiv \infty$, we observe that (6.2) implies that $M_t = u(B_t)e^{\gamma t}$ is a local martingale on $[0,\tau)$. If we let $T_n$ be a sequence of stopping times that reduces $M_t$, then

$$u(x) = E_x\big(u(B_{T_n\wedge n})\, e^{\gamma(T_n\wedge n)}\big)$$

Letting $n \to \infty$ and noting $T_n \wedge n \uparrow \tau$, it follows from Fatou's lemma that

$$u(x) \ge E_x e^{\gamma\tau}$$

The last equation implies $w \not\equiv \infty$, so (6.3) implies the result for $\gamma < \pi^2/8a^2$.

To see that $w(x) \equiv \infty$ when $\gamma \ge \pi^2/8a^2$, suppose not. In this case (6.3) implies $v(x) = E_x f(B_\tau)\exp(c_\tau)$ is the unique solution of our equation, but the computations in Example 6.1 imply that in this case there is no nonnegative solution. □

Exercise 9.1. Show that if $\beta \ge 0$, then

$$E_x\exp(-\beta\tau) = \frac{\cosh(x\sqrt{2\beta})}{\cosh(a\sqrt{2\beta})}$$

Since $\cos(x) = (e^{ix} + e^{-ix})/2$, this is what we get if we set $\gamma = -\beta$ in (9.1).

Example 9.2. Our second topic is the observation of Ciesielski and Taylor. The proof of the general result (see Getoor and Sharpe (1979), Sections 5 and 8, or Knight (1981), pages 88-89) requires more than a little familiarity with Bessel functions, so we will only show that the distribution of the exit time from $(-1,1)$ in one dimension starting from 0 is the same as that of the total occupation time of $D = \{x : |x| < 1\}$ in three dimensions starting from the origin.

Exercise 9.1 gives the Laplace transform of the exit time from $(-1,1)$, so we need only compute

$$v(x) = E_x\exp\left(-\beta\int_0^\infty 1_D(B_s)\, ds\right)$$

Of course, we only have to compute $v(0)$, but as in Example 9.1 and Exercise 9.1, we will do this by using a differential equation to compute $v(x)$ for all $x$. Several properties of $v$ are immediately obvious (in any dimension $d \ge 3$):

(i) Spherical symmetry implies $v(x) = f(|x|)$.

(ii) Using the strong Markov property at the hitting time of $\bar{D}$ and (1.12) from Chapter 3 gives, for $r > 1$,

$$\text{(a)}\qquad f(r) = r^{2-d} f(1) + (1 - r^{2-d})\cdot 1 = 1 + r^{2-d}\,(f(1) - 1)$$

(iii) The strong Markov property and (6.3) imply that $v$ is the unique solution of

$$\tfrac12\Delta v - \beta v = 0 \quad\text{for } x \in D \qquad v(x) = f(1) \quad\text{for } x \in \partial D$$

To express the last equation in terms of $f$, we note that if $v(x) = f(|x|)$, then

$$D_i v(x) = f'(|x|)\,\frac{x_i}{|x|} \qquad D_{ii} v(x) = f''(|x|)\,\frac{x_i^2}{|x|^2} + f'(|x|)\left(\frac{1}{|x|} - \frac{x_i^2}{|x|^3}\right)$$


Summing over i gives

(1/2)Δv = (1/2) f″(|x|) + ((d−1)/(2|x|)) f′(|x|)

Multiplying by 2 we see that f satisfies

(b)  f″(r) + ((d−1)/r) f′(r) − 2μ f(r) = 0 for r < 1

To tie the equations (a) and (b) together, we note that by applying the reasoning in (iii) to B(0, q) with q > 1 and using the proof of (6.6) (which refers to (5a)), we can conclude

(iv) v is C¹ and hence f′(r) is continuous at r = 1.

Facts (ii)-(iv) give us the information we need to solve for f, at least in principle: (b) is a second order ordinary differential equation, so if we specify f(0) = C and f′(0) = 0, the latter from (i), there is a unique solution f_C on [0, 1). Given f_C(1), (ii) gives us the solution on [1, ∞). We then pick C so that (iv) holds.

To carry out our plan, we begin with the "well known" fact that the only solutions of (b) which stay bounded near 0 have the form c r^{1−(d/2)} I_{(d/2)−1}(r√(2μ)), where

I_ν(x) = Σ_{m=0}^∞ (x/2)^{ν+2m} / ( m! Γ(ν+m+1) )

is one of the happy families of Bessel functions. Letting α = √(2μ) and recalling Γ(x) = (x−1)Γ(x−1), we see that when d = 3, and hence ν = 1/2, the solution has a simple form

f(r) = C(α) Σ_{m=0}^∞ (αr)^{2m} / (2m+1)! = C(α) sinh(αr)/(αr)

Having "found" the solution we want, we can now forget about its origins and simply check that it works. Differentiating we have

f′(r) = C(α) ( cosh(αr)/r − sinh(αr)/(αr²) )

f″(r) = C(α) ( α sinh(αr)/r − 2 cosh(αr)/r² + 2 sinh(αr)/(αr³) )

so

f″(r) + (2/r) f′(r) = C(α) α sinh(αr)/r = α² f(r)

which shows that (b) holds. To complete the solution, we have to pick C(α) to make f ∈ C¹. Using the formulas for f(r) for r < 1 from the previous display, for r > 1 from (a), and recalling d = 3, we have

f′(1−) = C(α) ( cosh(α) − α^{−1} sinh(α) )
f′(1+) = −( f(1) − 1 ) = 1 − C(α) α^{−1} sinh(α)

Solving gives C(α) = 1/cosh(α). Since α = √(2μ), this matches the result in Exercise 9.1 when x = 0. □

Example 9.3. Our third and final topic is Kac's (1951) derivation of

(9.2) Lévy's arcsine law. Let H_t = |{s ∈ [0, t] : B_s ≥ 0}|. If 0 ≤ θ ≤ 1 then

P_0( H_t ≤ θt ) = ∫_0^θ dr / ( π √(r(1−r)) ) = (2/π) arcsin(√θ)

Remark. The reader should note that by scaling the distribution of H_t/t does not depend on t, so the fraction of time in [0, t] that a Brownian gambler is ahead does not converge to 1/2 in probability as t → ∞. Indeed r = 1/2 is the value for which the probability density 1/(π√(r(1−r))) is smallest.

Proof. For the last equality see the other arcsine law, (4.2) in Chapter 1. To prove the first we start with

(9.3) Lemma. Let c(x) = −α − β 1_{[0,∞)}(x) with α, β ≥ 0. Suppose v is bounded, C¹, and satisfies

(1/2)v″ + cv = −1

for all x ≠ 0. Then

v(x) = ∫_0^∞ dt e^{−αt} E_x e^{−β H_t}

Proof. Our assumptions about v imply that v″(x) = −2(1 + c(x)v(x)) in the sense of distributions. So two results from Chapter 2, the Meyer-Tanaka formula (11.4) and (11.7), imply

v(B_t) − v(B_0) = ∫_0^t v′(B_s) dB_s − ∫_0^t ( 1 + c(B_s) v(B_s) ) ds


Letting c_t = ∫_0^t c(B_s) ds and using the integration by parts formula, (10.1) in Chapter 2, with X_t = v(B_t) and Y_t = exp(c_t), which is locally of bounded variation, we have

v(B_t) exp(c_t) − v(B_0) = ∫_0^t exp(c_s) v′(B_s) dB_s − ∫_0^t exp(c_s) ( 1 + c(B_s) v(B_s) ) ds + ∫_0^t v(B_s) exp(c_s) dc_s

so M_t = v(B_t) exp(c_t) + ∫_0^t exp(c_s) ds is a local martingale. Since v is bounded and c_t ≤ 0, M_t is bounded. As t → ∞, exp(c_t) ≤ e^{−αt} → 0, so using the martingale and bounded convergence theorems gives

v(x) = E_x M_0 = E_x M_∞ = E_x ∫_0^∞ exp(c_s) ds

Plugging in the definition of c(x) now and using Fubini's theorem leads to the formula given above. □

(9.3) tells us that we want to find a bounded C¹ function v with

(1/2)v″ − (α+β)v = −1 for x > 0
(1/2)v″ − αv = −1 for x < 0

To solve the equation we write v = v₀ + v₁, where v₁(x) ≡ 1/(α+β) for x > 0 and v₁(x) ≡ 1/α for x < 0, and note that

(1/2)v₀″ = (α+β)v₀ for x > 0,   (1/2)v₀″ = αv₀ for x < 0

Picking the signs to keep v₀ bounded, we have

v(x) = A e^{−x√(2(α+β))} + 1/(α+β)   for x > 0
v(x) = B e^{x√(2α)} + 1/α   for x < 0

To find A and B we note that we want v to be C¹, so

A + 1/(α+β) = B + 1/α
−A √(2(α+β)) = B √(2α)

It is a little tedious to solve these equations, but it is easy to check that

A = ( √(α+β) − √α ) / ( (α+β)√α ),   B = ( √α − √(α+β) ) / ( α√(α+β) )

gives a solution. What we are interested in is

(9.4)  v(0) = A + 1/(α+β) = 1/√( α(α+β) )

To go backwards from here to the answer written in (9.2), we warm up by changing variables x = √(2γt), dx = (1/2)√(2γ/t) dt, to get

∫_0^∞ e^{−γt} (πt)^{−1/2} dt = ∫_0^∞ e^{−x²/2} √(2/(πγ)) dx = √(2/(πγ)) · (1/2)√(2π) = γ^{−1/2}

This identity and Fubini's theorem imply

1/√(α(α+β)) = ∫_0^∞ (e^{−(α+β)s}/√(πs)) ds · ∫_0^∞ (e^{−αu}/√(πu)) du
  = ∫_0^∞ e^{−αt} ( ∫_0^t e^{−βs} / ( π√(s(t−s)) ) ds ) dt

From (9.3), (9.4), and the uniqueness of Laplace transforms it follows that

E_0 exp(−β H_t) = ∫_0^t e^{−βs} / ( π√(s(t−s)) ) ds

and invoking the uniqueness again we have the desired result. □
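The conclusion of (9.2) is easy to test by simulation. The sketch below (all parameters are our own choices, not the text's) approximates Brownian motion by a simple random walk, records the fraction of time spent at or above 0, and compares the empirical distribution function with (2/π) arcsin(√θ):

```python
import math
import random

# Empirical check of Levy's arcsine law (9.2): P(H_t/t <= theta) = (2/pi) arcsin(sqrt(theta)).
# Brownian motion is approximated by a simple random walk; parameters are arbitrary.
random.seed(0)
n_steps, n_paths = 1000, 2000
fractions = []
for _ in range(n_paths):
    s, time_nonneg = 0, 0
    for _ in range(n_steps):
        s += random.choice((-1, 1))
        if s >= 0:
            time_nonneg += 1
    fractions.append(time_nonneg / n_steps)

def arcsine_cdf(theta):
    return (2 / math.pi) * math.asin(math.sqrt(theta))

for theta in (0.1, 0.5, 0.9):
    empirical = sum(f <= theta for f in fractions) / n_paths
    print(theta, round(empirical, 3), round(arcsine_cdf(theta), 3))
```

The discrete-walk convention for counting the time at 0 introduces an O(1/√n) discrepancy, but the U-shape of the arcsine density is unmistakable: fractions near 0 and 1 are far more common than fractions near 1/2.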


5 Stochastic Differential Equations

5.1. Examples

In this chapter we will be interested in solving stochastic differential equations (in short, SDE's) that are written informally as

dX_s = b(X_s) ds + σ(X_s) dB_s

or, more formally, as an integral equation:

(*)  X_t = X_0 + ∫_0^t b(X_s) ds + ∫_0^t σ(X_s) dB_s

:. .

. .... .

'

.

.. . .

'

. .

.'

. E .' E

..'

.: . . .'.' l

. ...:

';... .

i. . .

'' '. .

.. . . . . . . .

.''

We will give a precise formulation of (*) at the beginning of Section 5.2. In (*), B_t is an m dimensional Brownian motion, σ is a d × m matrix, and to make the matrix operations work out right we will think of X_t, b, and B_t as column vectors. That is, they are matrices of size d × 1, d × 1, and m × 1 respectively. Writing out the coordinates, (*) says that for 1 ≤ i ≤ d

X_t^i = X_0^i + ∫_0^t b_i(X_s) ds + Σ_{j=1}^m ∫_0^t σ_{ij}(X_s) dB_s^j

Note that the number m of Brownian motions used to "drive" the equation may be more or less than d. This generality does not cause any additional difficulty, is useful in some situations (see Example 1.5), and will help us distinguish between σσ^T, which is d × d, and σ^Tσ, which is m × m. Here σ^T denotes the transpose of the matrix σ.

To explain what we have in mind when we write down (*) and why we want to solve it, we will now consider some examples. In some of the later examples we will use the term diffusion process. This is customarily defined to be a "strong Markov process with continuous paths." However, in this section, the term will be used to mean "a family of solutions of the SDE, one for each starting point." (4.6) will show that under suitable conditions the family of solutions defines a diffusion process.
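Before turning to the examples, it may help to see (*) in action numerically: replacing ds by a small step h and dB_s by a mean-zero normal increment of variance h gives the Euler scheme. In the sketch below the coefficients are the linear ones of Example 1.1, whose exact solution X_t = X_0 exp(μt + σB_t) is known; step size, parameters, and seed are arbitrary choices for illustration:

```python
import math
import random

# Euler discretization of (*): X_{k+1} = X_k + b(X_k) h + sigma(X_k) dB_k.
# Here b(x) = (mu + s*s/2) x and sigma(x) = s*x, whose exact solution is
# X_t = X_0 exp(mu t + s B_t) (see Example 1.1); parameters are illustrative.
random.seed(1)
mu, s, x0, T, n = 0.05, 0.2, 1.0, 1.0, 20000
h = T / n
x, brownian = x0, 0.0
for _ in range(n):
    db = random.gauss(0.0, math.sqrt(h))
    x += (mu + 0.5 * s * s) * x * h + s * x * db
    brownian += db
exact = x0 * math.exp(mu * T + s * brownian)
print(x, exact)  # the two values should agree closely
```

The scheme is driven by the same Brownian increments as the exact solution, so the two final values differ only by the O(√h) discretization error.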

Example 1.1. Exponential Brownian Motion. Let X_t = X_0 exp(μt + σB_t), where X_0 is a real number and B_t is a standard one dimensional Brownian motion. Using Itô's formula,

X_t = X_0 + ∫_0^t μX_s ds + ∫_0^t σX_s dB_s + (1/2) ∫_0^t σ²X_s ds

so X_t solves (*) with b(x) = (μ + σ²/2)x and σ(x) = σx. Exponential Brownian motion is often used as a simple model for stock prices because it stays nonnegative and fluctuations in the price are proportional to the price of the stock.

To explain the last phrase, we return to the general equation (*) and suppose that b and σ are continuous. If X_0 = x, then stopping at t ∧ T_n for suitable stopping times T_n ↑ ∞, taking expected values in (*), and letting n → ∞, we have

E_x X_t^i = x^i + E_x ∫_0^t b_i(X_s) ds

So if b_i is continuous,

(d/dt) E_x X_t^i |_{t=0} = b_i(x)

Because of the last equation, b is called the infinitesimal drift. To give the meaning of the coefficient σ we note that if

a(x) = σ(x) σ^T(x)

then the formula for the covariance of two stochastic integrals, (8.7) in Chapter 2, implies

⟨X^i, X^j⟩_t = ∫_0^t Σ_k σ_{ik}(X_s) σ_{jk}(X_s) ds = ∫_0^t a_{ij}(X_s) ds

Using the integration by parts formula,

X_t^i X_t^j = x^i x^j + ∫_0^t X_s^i dX_s^j + ∫_0^t X_s^j dX_s^i + ⟨X^i, X^j⟩_t

Substituting dX_s^j = b_j(X_s) ds + Σ_k σ_{jk}(X_s) dB_s^k, using the associative law, and taking expected values,

E_x( X_t^i X_t^j ) = x^i x^j + E_x ∫_0^t X_s^i b_j(X_s) ds + E_x ∫_0^t X_s^j b_i(X_s) ds + E_x ∫_0^t a_{ij}(X_s) ds

If a_{ij} is also continuous, differentiating gives

(d/dt) E_x( X_t^i X_t^j ) |_{t=0} = x^i b_j(x) + x^j b_i(x) + a_{ij}(x)

Using E_x X_t^i = x^i + E_x ∫_0^t b_i(X_s) ds and differentiating again we get

(d/dt) ( E_x X_t^i )( E_x X_t^j ) |_{t=0} = x^i b_j(x) + x^j b_i(x)

Subtracting the last two equations gives

(d/dt) { E_x( X_t^i X_t^j ) − ( E_x X_t^i )( E_x X_t^j ) } |_{t=0} = a_{ij}(x)

justifying the name infinitesimal covariance. When d = 1, we call a(x) the infinitesimal variance and σ(x) = √(a(x)) the infinitesimal standard deviation. In Example 1.1, σ(x) = σx is proportional to the stock price x. The infinitesimal drift b(x) = (μ + σ²/2)x is also proportional to x. Note, however, that in addition to the drift μ built in to the exponential there is a drift σ²/2 that comes from the Brownian motion. More precisely, it comes from the fact that e^x is convex and hence exp(σB_t) is a submartingale.

Example 1.2. The Bessel Processes. Let W_t be a d-dimensional Brownian motion with d ≥ 2 and let X_t = |W_t|. Differentiating gives

D_i |x| = x_i/|x|,   D_ii |x| = 1/|x| − x_i²/|x|³

So using Itô's formula,

X_t − X_0 = Σ_{i=1}^d ∫_0^t ( W_s^i/|W_s| ) dW_s^i + (1/2) ∫_0^t ( (d−1)/|W_s| ) ds


We will use B_t to denote the first term on the right-hand side. Since B_t is a local martingale with ⟨B⟩_t = t, B_t is a Brownian motion by Lévy's theorem, (4.1) in Chapter 3. Changing notation now we have

(1.2)  X_t − X_0 = B_t + ∫_0^t ( (d−1)/(2X_s) ) ds

so (*) holds with b(x) = (d−1)/(2|x|) and σ(x) ≡ 1.

Example 1.3. The Ornstein-Uhlenbeck Process. Let B_t be a one dimensional Brownian motion and consider

(1.3)  dX_t = −αX_t dt + σ dB_t

which describes one component of the velocity of a Brownian particle which is slowed by friction (i.e., experiences an acceleration −α times its velocity X_t). This is one case where we can find the solution explicitly:

(1.4)  X_t = e^{−αt} ( X_0 + ∫_0^t σ e^{αs} dB_s )

To check this informally, we note that differentiating the first term in the product gives −αX_t dt and differentiating the second gives σ dB_t. To check this formally, we use Itô's formula with u(t, x) = e^{αt} x to conclude

(1.5) Theorem. When X_0 = x, the distribution of X_t is normal with mean xe^{−αt} and variance σ² ∫_0^t e^{−2αr} dr.

Proof. To see that the integral has a normal distribution with mean 0 and the indicated variance, we use Exercise 6.7 in Chapter 2. Adding xe^{−αt} now we have the desired result. □

Example 1.4. The Kalman-Bucy Filter. Consider

(1.6)  dY_t = V_t dt + σ dW_t
       dV_t = −αV_t dt + dB_t

Here V_t is an Ornstein-Uhlenbeck velocity process, and Y_t is an observation of the position process subject to observation error described by σ dW_t, where W is a Brownian motion independent of B. The fundamental problem here is: how does one best estimate the present velocity V_t from the observations {Y_s : s ≤ t}? This is a problem of important practical significance but not one that we will treat here. See for example Rogers and Williams (1987), p. 327-329, or Øksendal (1992), Chapter VI.

Example 1.5. Brownian Motion on the Circle. Let B_t be a one dimensional Brownian motion, let X_t = cos B_t and Y_t = sin B_t. Since X_t² + Y_t² = 1, (X_t, Y_t) always stays on the unit circle. Itô's formula implies that

(1.7)  dX_t = −Y_t dB_t − (1/2) X_t dt
       dY_t = X_t dB_t − (1/2) Y_t dt

To write this in the form (*) we take

b(x, y) = ( −x/2, −y/2 ),   σ(x, y) = ( −y, x )^T

Note that σ is always a unit vector tangent to the circle, but we need the drift b(x, y), which is perpendicular to the circle and points inward, to keep (X_t, Y_t) from flying off.

Example 1.6. Feller's Branching Diffusion. This X_t ≥ 0 and satisfies

(1.8)  dX_t = βX_t dt + σ √(X_t) dB_t

To explain where this equation comes from, we recall that in a branching process, each individual in generation m has an independent and identically distributed number of offspring in generation m + 1. Consider a sequence of branching processes {Z_m^n, m ≥ 0} in which the probability of k children is p_k^n, and suppose

(A1) the mean of p^n is 1 + (β_n/n) with β_n → β
(A2) the variance of p^n is σ_n² with σ_n² → σ² > 0
(A3) for any ε > 0, Σ_{k > εn} k² p_k^n → 0.

Since the individuals in generation 0 have an independent and identically distributed number of children,

E( Z_1^n | Z_0^n = nx ) = nx · ( 1 + β_n/n )
var( Z_1^n | Z_0^n = nx ) = nx · σ_n²


So if we let X_t^n = Z^n_{[nt]} / n then

E( X^n_{1/n} − x | X^n(0) = x ) = xβ_n · n^{−1}
var( X^n_{1/n} | X^n(0) = x ) = xσ_n² · n^{−1}

The last two equations say that the infinitesimal mean and variance of the rescaled process X_t^n are xβ_n and xσ_n². These quantities obviously converge to those of the process X_t in (1.8). In Example 8.2 of Chapter 8, we will show that under (A1)-(A3) the processes {X_t^n, t ≥ 0} converge weakly to {X_t, t ≥ 0}.

Example 1.7. Wright-Fisher Diffusion. This X_t ∈ [0, 1] and satisfies

(1.9)  dX_t = ( −αX_t + β(1 − X_t) ) dt + √( X_t(1 − X_t) ) dB_t

The motivation for this model comes from genetics, but to avoid the details of that subject we will suppose instead that we have an urn with n balls in it that may be labelled A or a. To build up the urn at time m + 1 we sample with replacement from the urn at time m. However, with probability α/n we ignore the draw and place an a in, and with probability β/n we ignore the draw and place an A in. In genetics terms the last two events correspond to mutations.

Let Z_m^n be the number of A's in the urn at time m. Now when the fraction of A's in the urn at time m is x, the probability of drawing an A on a given trial is

p_n = x · ( 1 − α/n ) + ( 1 − x ) · ( β/n )

Since we are sampling with replacement,

E( Z_1^n | Z_0^n = nx ) = n p_n
var( Z_1^n | Z_0^n = nx ) = n p_n ( 1 − p_n )

If we let X_t^n = Z^n_{[nt]} / n then

E( X^n_{1/n} − x | X^n(0) = x ) = ( −αx + β(1−x) ) · (1/n)
var( X^n_{1/n} | X^n(0) = x ) = p_n ( 1 − p_n ) · (1/n)

Now p_n → x as n → ∞, so again the infinitesimal mean and variance of X_t^n converge to those of X_t in (1.9), but the rigorous connection has to wait until Example 8.3 of Chapter 8. This is just one of many of the diffusions that can arise from genetics. See Karlin and Taylor (1981), Vol. II, pages 176-191, for some of the others.

5.2. Itô's Approach

We begin this section by giving some essential definitions and introducing a counterexample that will explain the need for some of the formalities. We will then proceed to our main business: the existence and uniqueness result for SDE, which was proved by K. Itô long before the subject became entangled in the complications of the general theory of processes.

To finally become precise, a solution to our SDE (*) is a triple (X_t, B_t, F_t), where X_t and B_t are continuous processes adapted to a filtration F_t, so that

(i) B_t is a Brownian motion with respect to F_t, i.e., for each s and t, B_{t+s} − B_t is independent of F_t and is normally distributed with mean 0 and variance s.

(ii) X_t satisfies

(*)  X_t = X_0 + ∫_0^t b(X_s) ds + ∫_0^t σ(X_s) dB_s

We will always assume that σ and b are measurable and locally bounded so the integral exists.

It is easy to see that if the SDE holds for some F_t it will hold for the filtration generated by (X_t, B_t). When X_t is adapted to F_t^B, the filtration generated by the Brownian motion, then we can take F_t = F_t^B and X is called a strong solution. It is difficult to imagine how X_t could satisfy (*) without being adapted to F_t^B, but there is a famous example due to H. Tanaka showing that this can happen.

Example 2.1. Let W_t be a one dimensional Brownian motion with W_0 = 0 and let

B_t = ∫_0^t sgn(W_s) dW_s

where sgn(x) = 1 if x ≥ 0, sgn(x) = −1 if x < 0. Since B_t is a local martingale with ⟨B⟩_t = t, (4.2) in Chapter 3 implies that B_t is a Brownian motion with respect to F_t^W, the filtration generated by W_t. Since sgn(x)² = 1, the associative law implies that

∫_0^t sgn(W_s) dB_s = ∫_0^t dW_s = W_t

So W_t is a solution of (*) with b ≡ 0, σ(x) = sgn(x), and F_t = F_t^W.
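The associative-law manipulation in Example 2.1 is exact even for discretized integrals, since sgn(x)² = 1 term by term. The sketch below (step size and seed arbitrary) builds B from a simulated path of W and then recovers W from B; note that the recovery uses sgn(W_s) itself, which is precisely the information not contained in B alone:

```python
import math
import random

# Discrete version of Example 2.1: build B from W by dB = sgn(W) dW and
# recover W exactly via dW = sgn(W) dB, since sgn(x)**2 == 1.
random.seed(2)
n, h = 10000, 1e-3
sgn = lambda x: 1.0 if x >= 0 else -1.0
w, rebuilt = 0.0, 0.0
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(h))
    db = sgn(w) * dw          # increment of B_t = int sgn(W_s) dW_s
    rebuilt += sgn(w) * db    # equals dw exactly, so this re-traces W
    w += dw
print(abs(rebuilt - w) < 1e-9)  # True: the two sums agree term by term
```

Multiplication by ±1.0 is exact in floating point, so the reconstruction agrees with W to machine precision; the obstruction in Tanaka's example is measurability, not arithmetic.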


To see that W is not a strong solution, we apply the Meyer-Tanaka formula, (11.4) in Chapter 2, with X_t = W_t and f(x) = |x| to get

|W_t| − L_t = ∫_0^t sgn(W_s) dW_s = B_t

where L_t is the local time of |W| at 0. The occupation time density formula, (11.7) of Chapter 2, and the continuity of the local time established in (11.8) of Chapter 2, imply that

L_t = lim_{ε→0} (1/2ε) |{ s ≤ t : |W_s| ≤ ε }|

so L_t, and hence B_t = |W_t| − L_t, is measurable with respect to the filtration generated by |W_s|, s ≥ 0. However, W_t is clearly not adapted to the filtration generated by |W_t|.

As usual when we have an equation, we are interested in having a unique solution. For SDE there are several notions of uniqueness. Our first, pathwise uniqueness, holds if whenever X_t and X_t′ are solutions of (*) with X_0 = X_0′ = x driven by the same Brownian motion B_t, then with probability one X_t = X_t′ for all t ≥ 0. Here, the filtrations F_t and F_t′ are allowed to be different.

(2.1) Theorem. Pathwise uniqueness does not hold in Example 2.1.

Proof. We observed in Example 2.1 that W_t was a solution. To show that −W_t is also a solution, we note that sgn(−x) = −sgn(x) except when x = 0, in which case the left side is 1 and the right side is −1. So

∫_0^t sgn(−W_s) dB_s = −∫_0^t sgn(W_s) dB_s + 2 ∫_0^t 1_{(W_s = 0)} dB_s

To prove that the second integral is 0 (and hence the right-hand side is = −W_t), recall that W_t is a Brownian motion. Example 3.1 in Chapter 2 implies that {s : W_s = 0} has measure 0, and hence

E ( ∫_0^t 1_{(W_s = 0)} dB_s )² = E ∫_0^t 1_{(W_s = 0)} ds = E |{ s ≤ t : W_s = 0 }| = 0

Turning to positive results:

(2.2) Theorem. If for all i, j, x, and y we have |σ_{ij}(x) − σ_{ij}(y)| ≤ K|x − y| and |b_i(x) − b_i(y)| ≤ K|x − y|, then the stochastic differential equation

(*)  X_t = x + ∫_0^t σ(X_s) dB_s + ∫_0^t b(X_s) ds

has a strong solution and pathwise uniqueness holds.

Proof. We construct the solution by a successive approximation scheme that reduces to the classical Picard iteration method for solving ordinary differential equations in the special case σ ≡ 0. We begin by introducing the notation

Y_t = X_0 + ∫_0^t σ(X_s) dB_s + ∫_0^t b(X_s) ds

to describe one iteration. In words, we use the process {X_t, t ≥ 0} as input for the coefficients σ and b, and the output is the process {Y_t, t ≥ 0}. To solve (*) we begin with the initial guess X_t^0 ≡ x and define the successive guesses by iteration:

X_t^{n+1} = x + ∫_0^t σ(X_s^n) dB_s + ∫_0^t b(X_s^n) ds

The rest of this section is devoted to proving that this procedure constructs a strong solution and that pathwise uniqueness holds. To help the reader digest the argument we have divided it into sections. One of the claims is easy: it is clear by induction that X_t^n is adapted to F_t^B.

The basic estimate which is the key to the proof is:

(2.3) Lemma. Let τ be a stopping time with respect to F_t, let T < ∞, and let B = (4Td + 16d²)K². Suppose Y_t = Y_0 + ∫_0^t σ(Ȳ_s) dB_s + ∫_0^t b(Ȳ_s) ds and Z_t = Z_0 + ∫_0^t σ(Z̄_s) dB_s + ∫_0^t b(Z̄_s) ds. Then for t ≤ T,

E sup_{s ≤ t∧τ} |Y_s − Z_s|² ≤ 2 E|Y_0 − Z_0|² + B E ∫_0^{t∧τ} |Ȳ_s − Z̄_s|² ds

Proof. The inequality is trivial if the right-hand side is infinite, so we can suppose that each term on the right is finite. To begin, we recall that (a + b)² ≤ 2a² + 2b². Using this twice,

(a + b + c)² ≤ 2a² + 2(b + c)² ≤ 2a² + 4b² + 4c²

So the left-hand side of (2.3) is

(a)  ≤ 2 E|Y_0 − Z_0|² + 4 E sup_{s ≤ t∧τ} | ∫_0^s ( σ(Ȳ_r) − σ(Z̄_r) ) dB_r |² + 4 E sup_{s ≤ t∧τ} | ∫_0^s ( b(Ȳ_r) − b(Z̄_r) ) dr |²


186 Chapte 5 Stochastic Dirential Equations

To bound the third term in (a),we observe that the Cauchy-schwarz inequalityk :

.lmplies that (

fAr 2 A,r ar(K) - biz't ds S ((K) - f(Z,))2 ds . lds

0 0 0

Taking supnssz and using

d

(b) sup l'??()I2< sup142(t)

E : : ( i ()SjKT m y 0 K ST :

Lischitz ontinuity assmption givewith the

'. . l

.....'.r. . ..

.'

.' : : .

.

')'E

(c)

tar 2

' 4E sup (K).-t

bzs) ds

jj $ :

A i' *,

V X j tjs 1 1

s 4E )-l su> bi ,)- f(

, ,yntttrfl n ( E i

f =1

S 4T . E J7 (y(F,) = bi(Z,)) dsf 1 !

0r

4,, rrpr . q .q

(

y ggjk .s g j s

..y

jygg j

To bound the second term in (a),let b the kthiw t , 1ttAT

Mt = (c.(#-,)- Jf(,)) . dBs0

d let ra = inttt : lMtl k n) A r.'he i maxtmal inequality >nd ttk formulaan

for the variance of a stochmstic integral imply thaty

yhy

yE z i z k 4s jgtu) - yk(a )j2dsX SIIP (11) ; 45 (W/ara ) -

OKKTAJ',X 0

Letting n-+

x, then uqipg (b) >pd Lipschitr onthpity w havej A r E ( 2

. ' E l .E . . . ' . . . . ' :

.'

LE sup (c() - c(g,)) tBsnszsc- 0

(d)

d f Ar 2

K 4f J'l sup (tn(K) - (n(Z,)) . dBsnssc- ()

= 1d Tar

s 16s l(n('i&)- c.lzallz ds0=1

Thr

< 16d2.Jf2E j IK - Zs 12dsJ 0

Section 5.2 It's Approach 187

Combining inequalities (a),(c),and (d)proves (2.3)k E!. . . . . ( . . '

. .:

.. j . .

' i l. ' . .

Convergence of the sequence of approxlmatlons. Let

Zr&() = X SIIP IAC-XH- 12,!0K, K

artd observe that since YO = 70 (2.3)implies that for n k 1n n ''- 11

T

(2.4) As+1(T) %'B Aa(s) ds0

To get started we have to estimate l1(s)I. Ciu erlir inequality f?r squaresa 2 2implies that for norms lc+ l K 21cl + 211 . So we have

1.X-,1- xa0j2< 2 tjgtzlla12+ 1(z)sl2)Using the fat that

l,(z)s,lL t'/2 sup Io.lzlslsu>0KaK 0K,K1

and that the rightllkand side hmsflnite xpected value we have

1(f)S Ct + f2)

Combining this with (2.4)and using inductior we have

tn 2@N+1

(2.5) Ar,(t) < Bn-c +qV (n+ 1)!. . . . '

. . ':'

'

Flom (2.5)we emsily get the existehc 4f a litnit. Cieb#sev's ipqualitylk that

' '

S OWS

P sup IXJ'- 7/-11 > 2rn %22ns(T)osfstr

(2.5) implies the right-hapd side is summable, so the Borel-cautelli gives

P sup lar -

.&n-11

> 2-n i o. = 0

Since Ea2-n < (:x) it follows that with probability 1, A'C-->

a limit XTuniformly on (0,71.
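The successive approximation scheme can be imitated on a single discretized Brownian path, and the sup-distance between successive iterates then collapses at the factorial rate suggested by (2.5). The coefficients below, b(x) = −x and a constant σ, are our own illustrative choices, not the text's:

```python
import math
import random

# Picard iteration for (*) on one fixed discretized Brownian path:
#   X^{m+1}_t = x + int b(X^m_s) ds + int sigma dB_s   (as Euler sums).
# Illustrative Lipschitz coefficients: b(x) = -x, sigma = 0.5 (constant).
random.seed(3)
n, T = 2000, 1.0
h = T / n
db = [random.gauss(0.0, math.sqrt(h)) for _ in range(n)]

def iterate(path):
    out = [1.0]
    for k in range(n):
        out.append(out[k] + (-path[k]) * h + 0.5 * db[k])
    return out

path = [1.0] * (n + 1)       # initial guess X^0 == x
diffs = []
for _ in range(12):
    new = iterate(path)
    diffs.append(max(abs(u - v) for u, v in zip(new, path)))
    path = new
print(diffs[0], diffs[-1])   # successive iterates converge at a factorial rate
```

With constant σ the stochastic term cancels in the difference of iterates, so the recursion for sup-distances is exactly the ODE Picard bound T^m/m!, and twelve iterations already reach machine-level agreement.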


The limit is a solution. We begin by showing that convergence also occurs uniformly in L².

(2.6) Lemma. For all 0 ≤ m < n ≤ ∞,

E sup_{0≤s≤T} | X_s^m − X_s^n |² ≤ ( Σ_{k=m+1}^n Δ_k(T)^{1/2} )²

Proof. Let ||Z||₂ = E(Z²)^{1/2}. If n < ∞, then it follows from the monotonicity of expected value and the triangle inequality for the norm ||·||₂ that

|| sup_{0≤s≤T} |X_s^m − X_s^n| ||₂ ≤ Σ_{k=m+1}^n || sup_{0≤s≤T} |X_s^{k−1} − X_s^k| ||₂ = Σ_{k=m+1}^n Δ_k(T)^{1/2}

Letting n → ∞ and using Fatou's lemma proves the result for n = ∞. □

To see that X_t^∞ is a solution, we take Ȳ_t = X_t^n and Z̄_t = X_t^∞ in (2.3) to get

E sup_{0≤s≤T} | X_s^{n+1} − X̄_s |² ≤ B E ∫_0^T | X_s^n − X_s^∞ |² ds ≤ BT · E sup_{0≤s≤T} | X_s^n − X_s^∞ |² → 0

by (2.6), where X̄ denotes the output of the iteration applied to X^∞. So X̄_t = lim X_t^{n+1} = X_t^∞, i.e., X_t^∞ satisfies (*).

Uniqueness. The key to the proof is

(2.7) Gronwall's Inequality. Suppose φ(t) ≤ A + B ∫_0^t φ(s) ds for all t ≥ 0, and φ(t) is continuous. Then φ(t) ≤ A e^{Bt}.

Proof. If we let ψ(t) = (A + ε) e^{Bt}, then ψ′(t) = Bψ(t), so

ψ(t) = A + ε + B ∫_0^t ψ(s) ds

Let r = inf{ t : φ(t) ≥ ψ(t) }. Since ψ and φ are continuous, it follows that if r < ∞ then φ(r) = ψ(r), but this is impossible since

ψ(r) = A + ε + B ∫_0^r ψ(s) ds > A + B ∫_0^r φ(s) ds ≥ φ(r)

The last contradiction implies that φ(t) ≤ (A + ε)e^{Bt}, but ε > 0 is arbitrary, so the desired result follows. □

To prove pathwise uniqueness now, let (X_t, B_t, F_t¹) and (X_t′, B_t, F_t²) be two solutions of (*). Replacing the F_t^i by F_t = F_t¹ ∨ F_t², we can suppose F_t¹ = F_t². To prepare for developments in the next section, we will use a more general formulation than we need now.

(2.8) Lemma. Let Y_t and Z_t be continuous processes adapted to F_t with Y_0 = Z_0 = x, and let τ(R) = inf{ t : |Y_t| or |Z_t| > R }. Suppose that B_t is a Brownian motion w.r.t. F_t and that Y_t and Z_t satisfy (*) on [0, τ(R)]. Then Y_t = Z_t on [0, τ(R)].

Proof. Let

φ(t) = E sup_{s ≤ t∧τ(R)} | Y_s − Z_s |²

Since Y_t and Z_t are solutions, (2.3) implies

φ(t) ≤ B E ∫_0^{t∧τ(R)} | Y_s − Z_s |² ds ≤ B E ∫_0^t | Y_{s∧τ(R)} − Z_{s∧τ(R)} |² ds ≤ B ∫_0^t φ(s) ds

so (2.7) holds with A = 0 and it follows that φ ≡ 0. □

The last result implies that two solutions must agree until the first time one of them leaves the ball of radius R. (It also tells us that the two solutions must exit at the same time.) Since solutions are by definition continuous functions from [0, ∞) → R^d, we have τ(R) ↑ ∞ as R ↑ ∞, and the desired result follows.

Temporal Inhomogeneity. In some applications it is important to allow the coefficients to depend on time, that is, to consider

(*_t)  X_t = X_0 + ∫_0^t b(s, X_s) ds + ∫_0^t σ(s, X_s) dB_s

In many cases it is straightforward to generalize proofs from (*) to (*_t). For example, by repeating the argument above with some minor modifications one can show

(2.9) Theorem. If for all i, j, x, y, and T < ∞ there is a constant K_T so that for t ≤ T,

|σ_{ij}(t, 0)| ≤ K_T,   |b_i(t, 0)| ≤ K_T
|σ_{ij}(t, x) − σ_{ij}(t, y)| ≤ K_T |x − y|   and   |b_i(t, x) − b_i(t, y)| ≤ K_T |x − y|

then the stochastic differential equation (*_t) has a strong solution and pathwise uniqueness holds.

Comparing with (2.2), one sees that the new result merely assumes Lipschitz continuity locally uniformly in t, but needs the new conditions |σ_{ij}(t, 0)| ≤ K_T and |b_i(t, 0)| ≤ K_T that are automatic in the temporally homogeneous case. The dependence of the coefficients on t usually does not introduce any new difficulties, but it does make the statements and proofs uglier. (Here it would be impolite to point to Karatzas and Shreve (1991) and Stroock and Varadhan (1979).) So with the exception of (5.1) below, where we need to prove the result for time-dependent coefficients, we will restrict our attention to the temporally homogeneous case.
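As a concrete instance of (*_t), consider dX_t = −X_t/(1−t) dt + dB_t, whose drift b(t, x) = −x/(1−t) satisfies the hypotheses of (2.9) on each [0, T] with T < 1. The solution is the Brownian bridge, which is pulled back to 0 as t → 1; the Euler sketch below (our example, not the text's; all parameters arbitrary) exhibits this:

```python
import math
import random

# Euler scheme for the time-dependent SDE dX = -X/(1-t) dt + dB (Brownian bridge).
# b(t,x) = -x/(1-t) is Lipschitz in x uniformly on [0,T] for each T < 1, as
# required by (2.9); we stop at 1 - eps to stay away from the endpoint.
random.seed(5)
n_paths, n, eps = 2000, 1000, 1e-3
T = 1.0 - eps
h = T / n
finals = []
for _ in range(n_paths):
    x, t = 0.0, 0.0
    for _ in range(n):
        x += -x / (1.0 - t) * h + random.gauss(0.0, math.sqrt(h))
        t += h
    finals.append(x)
mean = sum(finals) / n_paths
var = sum(v * v for v in finals) / n_paths - mean * mean
print(round(mean, 3), round(var, 4))  # both near 0: the bridge returns to 0
```

The exact bridge has mean 0 and variance t(1−t), so both sample statistics collapse as t approaches 1; near the endpoint the Lipschitz constant blows up, which is exactly why (2.9) only asks for constants K_T on compact time intervals.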

5.3. Extensions

In this section we will (a) extend Itô's result to coefficients that are only locally Lipschitz, (b) prove a result for one dimensional SDE due to Yamada and Watanabe, and (c) introduce examples to show that the results in (2.2) and (3.3) are fairly sharp.

a. Locally Lipschitz Coefficients

In this subsection we will extend (2.2) in the following way.

(3.1) Theorem. Suppose (i) for any n < ∞ we have

|σ_{ij}(x) − σ_{ij}(y)| ≤ K_n |x − y|,   |b_i(x) − b_i(y)| ≤ K_n |x − y|

when |x|, |y| ≤ n, and (ii) there is a constant A < ∞ and a function φ ≥ 0 with φ(x) → ∞ as |x| → ∞ so that if X_t is a solution of (*) then e^{−At} φ(X_t) is a local supermartingale. Then (*) has a strong solution and pathwise uniqueness holds.

(3.2) Theorem. Let a = σσ^T and suppose

Σ_{i=1}^d ( 2 x_i b_i(x) + a_{ii}(x) ) ≤ B ( 1 + |x|² )

Then (ii) in (3.1) holds with A = B and φ(x) = 1 + |x|².

We begin with the easier proof.

Proof of (3.2). Using Itô's formula with X_t^0 = t and

f(x_0, x_1, ..., x_d) = e^{−Bx_0} φ(x_1, ..., x_d)

we have

e^{−Bt} φ(X_t) − φ(X_0) = −B ∫_0^t e^{−Bs} φ(X_s) ds + Σ_{i=1}^d ∫_0^t e^{−Bs} 2X_s^i dX_s^i + (1/2) Σ_{i=1}^d ∫_0^t e^{−Bs} 2 d⟨X^i⟩_s

= local martingale + ∫_0^t e^{−Bs} { −B φ(X_s) + Σ_{i=1}^d ( 2 X_s^i b_i(X_s) + a_{ii}(X_s) ) } ds

Our assumption implies that the last term is ≤ 0, so e^{−Bt} φ(X_t) is a local supermartingale. □

Proof of (3.1). Fix R < ∞ and introduce σ^R and b^R with

(a) σ^R(x) = σ(x) and b^R(x) = b(x) for |x| ≤ R
(b) σ^R(x) = 0 and b^R(x) = 0 for |x| ≥ 2R
(c) |σ^R_{ij}(x) − σ^R_{ij}(y)| ≤ K_R′ |x − y|,  |b^R_i(x) − b^R_i(y)| ≤ K_R′ |x − y|

For an explicit construction do

Exercise 3.1. Suppose |h(x) − h(y)| ≤ C₁ |x − y| when |x| ≤ R. Let

h̄(x) = h(x) for |x| ≤ R
h̄(x) = ( (2R − |x|)/R ) · h( Rx/|x| ) for R ≤ |x| ≤ 2R

and h̄(x) = 0 for |x| ≥ 2R. Show that h̄ is Lipschitz continuous on R^d with constant C = 2C₁ + R^{−1} |h(0)|.

Fix a Brownian motion and let X_t^R be the unique strong solution of

(*_R)  X_t = x + ∫_0^t b^R(X_s) ds + ∫_0^t σ^R(X_s) dB_s
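A truncation in the spirit of Exercise 3.1 is easy to implement and test in d = 1. The sample h and all constants below are our own choices, and the claimed Lipschitz bound 2C₁ + |h(0)|/R is checked on a fine grid:

```python
import math

# A truncation in the spirit of Exercise 3.1 (d = 1): keep h on [-R, R],
# taper it linearly to 0 on R <= |x| <= 2R, and set it to 0 outside.
def truncate(h, R):
    def hbar(x):
        if abs(x) <= R:
            return h(x)
        if abs(x) <= 2 * R:
            return (2 * R - abs(x)) / R * h(R * math.copysign(1.0, x))
        return 0.0
    return hbar

h = lambda x: math.sin(2 * x) + 1.0     # Lipschitz with C1 = 2, |h(0)| = 1
R = 1.0
hbar = truncate(h, R)
# estimate the Lipschitz constant of hbar on a fine grid over [-3, 3]
xs = [i / 1000 for i in range(-3000, 3001)]
slopes = [abs(hbar(xs[i + 1]) - hbar(xs[i])) / (xs[i + 1] - xs[i])
          for i in range(len(xs) - 1)]
print(max(slopes) <= 2 * 2.0 + 1.0 / R)  # True: constant <= 2*C1 + |h(0)|/R
```

The bound comes from the product rule: the linear taper has slope 1/R and is bounded by 1, while the radially projected h is C₁-Lipschitz and bounded by |h(0)| + C₁R.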


Let T_n = inf{ t : |X_t^n| ≥ n }. (2.8) implies that if m < n then X_t^m = X_t^n for t ≤ T_m. From this it follows that we can define a process X_t so that

X_t = X_t^n for t ≤ T_n

and X_t will be a solution of (*) for t < T_∞ = lim_{n→∞} T_n. To complete the proof it now suffices to show that T_n ↑ ∞ a.s. If X_0 = x, then using the optional stopping theorem at time t ∧ T_n, which is valid since the local supermartingale is bounded before that time, we have

φ(x) ≥ E ( e^{−A(t∧T_n)} φ(X_{t∧T_n}) ) ≥ e^{−At} P(T_n ≤ t) inf_{y : |y| = n} φ(y)

Rearranging gives

P(T_n ≤ t) ≤ e^{At} φ(x) / inf_{y : |y| = n} φ(y)

Since we have supposed φ(y) → ∞ as |y| → ∞, it follows that P(T_n ≤ t) → 0 as n → ∞, which proves the desired result. □

In Words, what we have shwri is that for locally Lipschitb' coecints thsolution is unique up to T,z the exit time from the ball of radl h.. If Ts 1 txlwe get a-solution deqned for a1l time. lf not then the solution reaches inqnityin fnite time and * say that an kpllsih tts ccried. !

Intuitively (3.2)says there is no explpsiqn if, Fhen I<Iiq lqrge? the part of?

jthe drift that points out, (z) . z/ll, is mallei tha t;'121ad the var ance of

each component is smaller than CIl2t We do not need onditions on the ofl-

diagonalelements of aj since $he Kunita-W>tanabe inequality implies

I(-Y x)tI2 s (xf)z(A-.f)z$

.

' Note that since our condition concerns a sqm, a drift topqrd the origin cancompensate for a larger variance.

Example 3.1. Consider d = 1 and suppose b(x) = cx − x³. Then (3.2) holds if a(x) ≤ B(1 + x²) + 2x⁴.
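The claim reduces to the algebraic identity 2xb(x) + a(x) = B + (B + 2c)x² when a(x) = B(1 + x²) + 2x⁴, which is dominated by (B + 2c)(1 + x²) for c ≥ 0. A grid check (constants arbitrary):

```python
# Check Example 3.1: with b(x) = c*x - x**3 and a(x) = B*(1 + x*x) + 2*x**4,
# the quantity 2*x*b(x) + a(x) from (3.2) is <= (B + 2*c) * (1 + x*x),
# so (3.2) rules out explosion.  c and B are arbitrary positive constants.
c, B = 3.0, 5.0
ok = True
for i in range(-4000, 4001):
    x = i / 100.0                      # grid over [-40, 40]
    lhs = 2 * x * (c * x - x ** 3) + B * (1 + x * x) + 2 * x ** 4
    ok = ok and lhs <= (B + 2 * c) * (1 + x * x)
print(ok)  # True
```

The quartic terms cancel exactly, which is the point: the strongly inward cubic drift absorbs a variance as large as 2x⁴.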

The next two examples show that, when considered separately, the conditions on b and a are close to the best possible.

Exnmple 3.2. Consider d = 1 and let

c(z) = 0, (z):.:= (1+ lzl)J with J > 1

Section 5.3  Extensions

Suppose X_0 = 0. b > 0, so t → X_t is strictly increasing. Let T_n = inf{t : X_t = n}. While X_t ∈ (n − 1, n) its velocity b(X_t) ≥ n^δ, so T_n − T_{n−1} ≤ n^{−δ}. Letting T_∞ = lim T_n and summing, we have T_∞ ≤ Σ_n n^{−δ} < ∞ if δ > 1.

We like the last proof since it only depends on the asymptotic behavior of the coefficients. One can also solve the equation explicitly, which has the advantage of giving the exact explosion time. Let Y_t = 1 + X_t. dY_t = Y_t^δ dt, so if we guess Y_t = (1 − ct)^{−p} then

    dY_t/dt = cp / (1 − ct)^{p+1}

Setting c = 1/p and p = 1/(δ − 1) to have p + 1 = δp, we get a solution that explodes at time 1/c = p = 1/(δ − 1).


Example 3.3. Suppose b = 0 and σ(x) = (1 + |x|)^δ. (3.2) implies there is no explosion when δ ≤ 1. Later we will show (see Example 6.2 and (6.3)) that in d ≥ 3 explosion occurs for δ > 1; in d ≤ 2 there is no explosion for any δ < ∞.

b. Yamada and Watanabe in d = 1

In one dimension one can get by with less smoothness in σ.

(3.3) Theorem. Suppose that (i) there is a strictly increasing continuous function ρ with |σ(x) − σ(y)| ≤ ρ(|x − y|), where ρ(0) = 0 and for all ε > 0

    ∫_0^ε ρ^{−2}(u) du = ∞

(ii) there is a strictly increasing and concave function κ with |b(x) − b(y)| ≤ κ(|x − y|), where κ(0) = 0 and for all ε > 0

    ∫_0^ε κ^{−1}(u) du = ∞

Then pathwise uniqueness holds.

Remark. Note that there is no mention here of existence of solutions. That is taken care of by a result in Section 8.4.

Proof The first step is to define a sequence φ_n of smooth approximations of |x|. Let a_n ↓ 0 be defined by a_0 = 1 and

    ∫_{a_n}^{a_{n−1}} ρ^{−2}(u) du = n



Let 0 ≤ ψ_n(u) ≤ 2ρ^{−2}(u)/n be a continuous function with support in (a_n, a_{n−1}) and

    ∫_{a_n}^{a_{n−1}} ψ_n(u) du = 1

Since the upper bound in the definition of ψ_n integrates to 2 over (a_n, a_{n−1}), such a function exists. Let

    φ_n(x) = ∫_0^{|x|} dy ∫_0^y ψ_n(u) du

By definition φ_n(x) = φ_n(−x), and φ′_n(x) = ∫_0^x ψ_n(u) du for x > 0, so

    |φ′_n(x)|  = 0 for |x| ≤ a_n,  ≤ 1 for a_n < |x| < a_{n−1},  = 1 for a_{n−1} ≤ |x|

and it follows that φ_n(x) ↑ |x| as n ↑ ∞.

Suppose now that X¹ and X² are two solutions with X¹_0 = X²_0 = x driven

by the same Brownian motion, and let Δ_t = X¹_t − X²_t.

    Δ_t = ∫_0^t ( σ(X¹_s) − σ(X²_s) ) dB_s + ∫_0^t ( b(X¹_s) − b(X²_s) ) ds

so Itô's formula gives

(a)  φ_n(Δ_t) = ∫_0^t φ′_n(Δ_s) dΔ_s + (1/2) ∫_0^t φ″_n(Δ_s) d⟨Δ⟩_s
     = ∫_0^t φ′_n(Δ_s) { σ(X¹_s) − σ(X²_s) } dB_s + ∫_0^t φ′_n(Δ_s) { b(X¹_s) − b(X²_s) } ds
       + (1/2) ∫_0^t φ″_n(Δ_s) { σ(X¹_s) − σ(X²_s) }² ds
     ≡ M_n(t) + I_1(t) + I_2(t)

M_n(t) is a local martingale, so there are stopping times T_m ↑ ∞ with

(b)  E M_n(t ∧ T_m) = 0

To deal with I_2 in (a) we note that φ″_n(x) = ψ_n(|x|) ≤ 2ρ^{−2}(|x|)/n when |x| ∈ (a_n, a_{n−1}) and is 0 otherwise, so using (i)

(c)  |I_2(t)| ≤ (1/2) ∫_0^t (2/n) ρ^{−2}(|Δ_s|) ρ²(|Δ_s|) ds ≤ t/n


For I_1 in (a) we observe |φ′_n(x)| ≤ 1, so using (ii) and the concavity of κ we have

(d)  E|I_1(t)| ≤ ∫_0^t E|b(X¹_s) − b(X²_s)| ds ≤ ∫_0^t E κ(|Δ_s|) ds ≤ ∫_0^t κ(E|Δ_s|) ds

Combining (a)–(d) we have

    E φ_n(Δ_{t ∧ T_m}) ≤ t/n + ∫_0^t κ(E|Δ_s|) ds

Letting m → ∞ and using Fatou's lemma, then letting n → ∞ and using the monotone convergence theorem (recall φ_n(x) ≥ 0 and φ_n(x) ↑ |x|), we have

    E|Δ_t| ≤ ∫_0^t κ(E|Δ_s|) ds

To finish the proof we now prove a slight improvement of Gronwall's inequality.


(3.4) Lemma. Suppose f(t) ≤ ∫_0^t κ(f(s)) ds, where f is continuous, κ(x) > 0 is increasing for x > 0, and ∫_0^ε κ^{−1}(x) dx = ∞ for any ε > 0. Then f(t) ≡ 0.

Proof Let g_ε(t) be the solution of g′(t) = κ(g(t)) with g(0) = ε. It follows from the proof of (2.7) that f(t) ≤ g_ε(t). To show that g_ε is small when ε is small, note that g is strictly increasing and let h be its inverse. Since g(h(z)) = z, we have

    h′(z) = 1/g′(h(z)) = 1/κ(g(h(z))) = 1/κ(z)

and hence if a < b

    h(b) − h(a) = ∫_a^b 1/κ(z) dz

In words, the right-hand side gives the amount of time it takes g_ε to climb from a to b. Taking a = ε and b = δ > 0, the assumed divergence of the integral implies that the amount of time to climb to δ approaches ∞ as ε → 0, and the desired result follows. □

c. Examples

We will now give examples to show that for bounded coefficients the combination of (2.2) and (3.3) is fairly sharp. In the first case the counterexamples will have to wait until Section 5.6.



Example 3.4. Consider b(x) = 0 and σ(x) = |x|^δ ∧ 1. (2.2) and (3.3) imply that pathwise uniqueness holds for

    δ ≥ 1/2 in d = 1,    δ ≥ 1 in d > 1

Example 6.1 will show that pathwise uniqueness fails for

    δ < 1/2 in d = 1,    δ < 1 in d > 1

Example 3.5. Consider σ(x) = 0, b(x) = |x|^δ ∧ 1, and X_0 = 0. If δ ≥ 1 then (2.2) implies that there is a unique solution: X_t ≡ 0. However, if δ < 1 there are others. To find one, we guess X_t = C t^p. Then

    dX_t/dt = C p t^{p−1}

so to make dX_t/dt = b(X_t) we set

    δp = p − 1, i.e., p = 1/(1 − δ)
    Cp = C^δ, i.e., C = p^{−1/(1−δ)}

If δ < 1 then p > 0, and we have created a second solution starting at 0. Once we have two solutions we can construct infinitely many. Let a ≥ 0 and let

    X^a_t = 0 for t ≤ a,    X^a_t = C(t − a)^p for t ≥ a
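The algebra for Example 3.5 can be verified directly; a sketch (not from the book), checking that X_t = C t^p solves dX/dt = X^δ on a region where |X| < 1:

```python
import numpy as np

delta = 0.5
p = 1.0 / (1.0 - delta)           # p = 2, from delta*p = p - 1
C = p ** (-1.0 / (1.0 - delta))   # C = 1/4, from C*p = C**delta

t = np.linspace(0.01, 0.9, 50)    # here X_t = t**2/4 < 1, so b(X) = X**delta
x = C * t ** p
err = np.max(np.abs(C * p * t ** (p - 1.0) - x ** delta))
```

`err` is zero up to floating-point noise; together with the trivial solution X ≡ 0, this exhibits the failure of uniqueness for δ < 1.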

Exercise 3.2. The main reason for interest in (3.3) is the weakening of Lipschitz continuity of σ. However, the condition on b is also an improvement, and further sharpens the line between theorems and counterexamples. (a) Show that (ii) in (3.3) holds when C > 0 and

    κ(x) = Cx log(1/x) for x ≤ e^{−2},    κ(x) = C(e^{−2} + x) for x ≥ e^{−2}

(b) Consider g(t) = exp(−1/t^p) with p > 0 and show that g′(t) = ψ_p(g(t)), where ψ_p(y) = p y (log(1/y))^{(p+1)/p}.
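Part (b) of the exercise can be checked by finite differences; a sketch (not from the book; the grid and the choice p = 2 are ours):

```python
import numpy as np

p = 2.0
g = lambda t: np.exp(-t ** (-p))
psi = lambda y: p * y * np.log(1.0 / y) ** ((p + 1.0) / p)

t = np.linspace(0.5, 2.0, 16)
h = 1e-6
g_prime = (g(t + h) - g(t - h)) / (2.0 * h)   # central difference
err = np.max(np.abs(g_prime - psi(g(t))))
```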

5.4. Weak Solutions

Intuitively, a strong solution corresponds to solving the SDE for a given Brownian motion, while in producing a weak solution we are allowed to construct the Brownian motion and the solution at the same time. In signal processing applications one must deal with the noisy signal that one is given, so strong solutions are required. However, if the aim is to construct diffusion processes, then there is nothing wrong with a weak solution.

Now if we do not insist on staying on the original space (Ω, F, P), then we can use the map ω → (X_t(ω), B_t(ω)) to move from the original space (Ω, F) to (C × C, C × C), where we can use the filtration generated by the coordinates ω₁(s), ω₂(s), s ≤ t. Thus a weak solution is completely specified by giving the joint distribution of (X_t, B_t).

Reflecting our interest in constructing the process X_t, and sneaking in some notation we will need in a minute, we say that there is uniqueness in distribution if whenever (X, B) and (X′, B′) are solutions of SDE(b, σ) with X_0 = X′_0 = x, then X and X′ have the same distribution, i.e., P(X ∈ A) = P(X′ ∈ A) for all A ∈ C. Here a solution of SDE(b, σ) is something that satisfies (*) in the sense defined in Section 5.2.

Example 4.1. Since sgn(x)² = 1, any solution to the equation in Example 2.1 is a local martingale with ⟨X⟩_t ≡ t. So Lévy's theorem, (4.1) in Chapter 3, implies that X_t is a Brownian motion. The last result asserts that uniqueness in distribution holds, but (2.1) shows that pathwise uniqueness fails. The next result, also due to Yamada and Watanabe (1971), shows that the other implication is true.

(4.1) Theorem. If pathwise uniqueness holds then there is uniqueness in distribution.

Remark. Pathwise uniqueness also implies that every solution is a strong solution. However, that proof is more complicated and the conclusion is not so important for our purposes, so we refer the reader to Revuz and Yor (1991), p. 340-341.

Proof Suppose (X¹, B¹) and (X², B²) are two solutions with X¹_0 = X²_0 = x. (As remarked above, the solutions are specified by giving the joint distributions of (X^i, B^i).) Since (C, C) is a complete separable metric space, we can find regular conditional distributions Q_i(ω, A) defined for ω ∈ C and A ∈ C (see Section 4.1 in Durrett (1995)) so that

    P(X^i ∈ A | B^i) = Q_i(B^i, A)

Let P_0 be the measure on (C, C) for the standard Brownian motion and define a measure π on (C³, C³) by

    π(A_0 × A_1 × A_2) = ∫_{A_0} dP_0(ω_0) Q_1(ω_0, A_1) Q_2(ω_0, A_2)



If we let Y^i_t(ω_0, ω_1, ω_2) = ω_i(t), then it is clear that

    (X¹, B¹) =_d (Y¹, Y⁰) and (X², B²) =_d (Y², Y⁰)

Take A_2 = C or A_1 = C respectively in the definition. Thus, we have two solutions of the equation driven by the same Brownian motion. Pathwise uniqueness implies Y¹ = Y² with probability one. The desired result follows since

    X¹ =_d Y¹ = Y² =_d X²    □

Let a(x) = σσ^T(x) and recall from Section 5.1 that

    ⟨X^i, X^j⟩_t = ∫_0^t a_ij(X_s) ds

We say that X is a solution to the martingale problem for b and a, or simply X solves MP(b, a), if for each i and j,

    X^i_t − ∫_0^t b_i(X_s) ds  and  X^i_t X^j_t − ∫_0^t a_ij(X_s) ds

are local martingales. The second condition is, of course, equivalent to

    ⟨X^i, X^j⟩_t = ∫_0^t a_ij(X_s) ds


To be precise, in the definition above we need to specify a filtration F_t for the local martingales. In posing the martingale problem we are only interested in the process X_t, so we can assume without loss of generality that the underlying space is (C, C) with X_t(ω) = ω_t, and the filtration is the one generated by X_t.

In the formulation above, b and a = σσ^T are the basic data for the martingale problem. This presents us with the problem of obtaining σ from a. To solve this problem we begin by introducing some properties of a. Since ⟨X^i, X^j⟩_t = ⟨X^j, X^i⟩_t, a is symmetric, that is, a_ij = a_ji. Also, if z ∈ R^d then

    ⟨ Σ_i z_i X^i ⟩_t = ∫_0^t Σ_{i,j} z_i a_ij(X_s) z_j ds

is nondecreasing, so Σ_{i,j} z_i a_ij(x) z_j ≥ 0. So a is nonnegative definite, and linear algebra provides us with the following useful information.

(4.2) Lemma. For any symmetric nonnegative definite matrix a, we can find an orthogonal matrix U (i.e., its rows are perpendicular vectors of length 1) and a diagonal matrix Λ with diagonal entries λ_1 ≥ λ_2 ≥ ... ≥ λ_d ≥ 0 so that a = U^T Λ U. a is invertible if and only if λ_d > 0.

(4.2) tells us a = U^T Λ U, so if we want a = σσ^T we can take σ = U^T Λ^{1/2} U, where Λ^{1/2} is the diagonal matrix with entries λ_i^{1/2}. (4.2) tells us how to find the square root of a given a. The usual algorithms for finding U are such that if we apply them to a measurable a(x), then the resulting square root σ(x) will also be measurable. It takes more sophistication to start with a smooth a(x) and construct a smooth σ(x). As in the case of (4.2), we will content ourselves to just state the result. For detailed proofs of (4.3) and (4.4) see Stroock and Varadhan (1979), Section 5.2.
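The recipe σ = U^T Λ^{1/2} U can be carried out with any symmetric eigensolver. A sketch using numpy (an illustration, not the algorithms the text alludes to):

```python
import numpy as np

def sym_sqrt(a):
    """Return a symmetric square root sigma with sigma @ sigma.T == a,
    via the eigendecomposition a = V diag(lam) V^T."""
    lam, v = np.linalg.eigh(a)       # columns of v are orthonormal eigenvectors
    lam = np.clip(lam, 0.0, None)    # clip tiny negative round-off
    return v @ np.diag(np.sqrt(lam)) @ v.T

rng = np.random.default_rng(0)
m = rng.standard_normal((3, 3))
a = m @ m.T                          # symmetric nonnegative definite
sigma = sym_sqrt(a)
err = np.max(np.abs(sigma @ sigma.T - a))
```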

(4.3) Theorem. If θ^T a(x) θ ≥ ε|θ|² and ‖a(x) − a(y)‖ ≤ C|x − y| for all x, y, then ‖a^{1/2}(x) − a^{1/2}(y)‖ ≤ (C/2ε^{1/2})|x − y| for all x, y.

When a can degenerate, we need to suppose more smoothness to guarantee a Lipschitz continuous square root.

(4.4) Theorem. Suppose a_ij ∈ C² and

    sup_x max_{1≤i,j≤d} ‖D² a_ij(x)‖ ≤ C

then ‖a^{1/2}(x) − a^{1/2}(y)‖ ≤ d(2C)^{1/2}|x − y| for all x, y.

To see why we need a ∈ C² to get a Lipschitz a^{1/2}, consider d = 1 and a(x) = |x|^δ. In this case σ(x) = |x|^{δ/2} is Lipschitz continuous at 0 if and only if δ ≥ 2.
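The d = 1 example is easy to see numerically; a sketch (not from the book): for a(x) = |x|^δ the square root σ(x) = |x|^{δ/2} has difference quotient |x|^{δ/2 − 1} at 0.

```python
import numpy as np

x = np.logspace(-8, -1, 8)        # points approaching 0

def quotient(delta):
    """|sigma(x) - sigma(0)| / |x| for sigma(x) = |x|**(delta/2)."""
    return x ** (delta / 2.0) / x

q_rough = quotient(1.0)    # a(x) = |x|  : quotient = x**(-1/2), unbounded
q_smooth = quotient(2.0)   # a(x) = x**2 : quotient = 1, Lipschitz
```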

Returning to probability theory, our first step is to make the connection between solutions to the martingale problem and solutions of the SDE.

(4.5) Theorem. Let X be a solution to MP(b, a) and σ a measurable square root of a. Then there is a Brownian motion B_t, possibly defined on an enlargement of the original probability space, so that (X, B) solves SDE(b, σ).

Proof If σ is invertible at each X_t this is easy. Let Y^i_t = X^i_t − ∫_0^t b_i(X_s) ds and let

    B^i_t = Σ_j ∫_0^t σ^{−1}_ij(X_s) dY^j_s

From the last two definitions and the associative law, it is immediate that

(b)  ∫_0^t σ(X_s) dB_s = ∫_0^t dY_s = X_t − X_0 − ∫_0^t b(X_s) ds



so (*) holds. The B^i are local martingales with ⟨B^i, B^j⟩_t = δ_ij t, so Exercise 4.1 in Chapter 3 implies that B_t is a Brownian motion.

To deal with the general case in one dimension, we let

    I_s = 1 if σ(X_s) ≠ 0,    I_s = 0 otherwise

let J_s = 1 − I_s, let W_t be an independent Brownian motion (to define this we may need to enlarge the probability space), and let

    B_t = ∫_0^t ( I_s / σ(X_s) ) dY_s + ∫_0^t J_s dW_s

The two integrals are well defined since their variance processes are ≤ t. B_t is a local martingale with ⟨B⟩_t = t, so (4.1) in Chapter 3 implies that B_t is a Brownian motion.

To extract the SDE now, we observe that the associative law and the fact that J_s σ(X_s) ≡ 0 imply

    ∫_0^t σ(X_s) dB_s = ∫_0^t I_s dY_s = Y_t − Y_0 − ∫_0^t J_s dY_s

In view of (b), the proof will be complete when we show ∫_0^t J_s dY_s ≡ 0. To do this we note that

    E( ∫_0^t J_s dY_s )² = E ∫_0^t J_s² d⟨Y⟩_s = E ∫_0^t J_s² a(X_s) ds = 0

To handle higher dimensions, we let U(x) be the orthogonal matrices constructed in (4.2) and let

    Z_t = ∫_0^t U(X_s) dY_s

Introducing Kronecker's δ_ij = 1 if i = j and 0 otherwise, we have

    ⟨Z^i, Z^j⟩_t = ∫_0^t Σ_{k,l} U_ik(X_s) a_kl(X_s) U_jl(X_s) ds = ∫_0^t δ_ij λ_i(X_s) ds

Our result for one dimension implies that the Z^i may be realized as stochastic integrals with respect to independent Brownian motions. Tracing back through the definitions gives the desired representation. For more details see Ikeda and Watanabe (1981), p. 89-91, Revuz and Yor (1991), p. 190-191, or Karatzas and Shreve (1991), p. 315-317. □

(4.5) shows that there is a 1-1 correspondence between distributions of solutions of the SDE and solutions of the martingale problem, which by definition are distributions on (C, C). Thus there is uniqueness in distribution for the SDE if and only if the martingale problem has a unique solution. Our final topic is an important reason to be interested in uniqueness.

(4.6) Theorem. Suppose b and a are locally bounded functions and MP(b, a) has a unique solution. Then the strong Markov property holds.

Proof For each x ∈ R^d let P_x be the probability measure on (C, C) that gives the distribution of the unique solution starting from X_0 = x. If you talk fast, the proof is easy. If T is a stopping time, then the conditional distribution of (X_{T+s}, s ≥ 0) given F_T is a solution of the martingale problem starting from X_T, and so by uniqueness it must be P_{X(T)}. Thus, if we let θ_T be the random shift defined in Section 1.3, then for any bounded measurable Y : C → R we have the strong Markov property

(4.7)  E_x( Y ∘ θ_T | F_T ) = E_{X(T)} Y

To begin to turn the last paragraph into a proof, fix a starting point x_0. To simplify this rather complicated argument, we will follow the standard practice (see Stroock and Varadhan (1979), p. 145-146, Karatzas and Shreve (1991), p. 321-322, or Rogers and Williams (1987), p. 162-163) of giving the details only for the case in which the coefficients are bounded and hence

    M^{i,0}_t = X^i_t − ∫_0^t b_i(X_s) ds
    M^{ij}_t = X^i_t X^j_t − ∫_0^t a_ij(X_s) ds

are martingales. We have added an extra index in the first case so we can refer to all the martingales at once by saying "the M^{ij}."

Consider a bounded stopping time T and let Q(ω, ·) be a regular conditional distribution for (X_{T+t}, t ≥ 0) given F_T. We want to claim that

(4.8) Lemma. For P_{x_0} a.e. ω, all the M̂^{ij}_t = M^{ij}_t ∘ θ_T



are martingales under Q(ω, ·).

To prove this, we consider rational s < t and B ∈ F_s, and note that the optional stopping theorem implies that if A ∈ F_T then

    E_{x_0}( ((M^{ij}_t − M^{ij}_s) 1_B) ∘ θ_T ; A ) = 0

Letting B run through a countable collection that is closed under intersection and generates F_s, it follows that P_{x_0} a.s. the expected value of (M̂^{ij}_t − M̂^{ij}_s) 1_B under Q(ω, ·) is 0, which implies (4.8). Using uniqueness now gives (4.7) and completes the proof. □


5.5. Change of Measure

Our next step is to use Girsanov's formula from Section 2.12 to solve martingale problems, or more precisely, to change the drift in an existing solution. To accommodate Example 5.2 below, we must consider the case in which the added drift b depends on time as well as space.

(5.1) Theorem. Suppose X_t is a solution of MP(b̂, a), which, for concreteness and without loss of generality, we suppose is defined on the canonical probability space (C, C, P) with F_t the filtration generated by X_t(ω) = ω_t. Suppose a^{−1}(x) exists for all x, and that b(s, x) is measurable and has |b^T a^{−1} b(s, x)| ≤ M. We can define a probability measure Q locally equivalent to P, so that under Q, X_t is a solution to MP(b̂ + b, a).

Proof Let Y^i_t = X^i_t − ∫_0^t b̂_i(X_s) ds, let c(s, x) = a^{−1}(x) b(s, x), and let

    Ŷ_t = ∫_0^t c(s, X_s) · dY_s

which exists since

    ⟨Ŷ⟩_t = ∫_0^t Σ_{i,j} c_i(s, X_s) c_j(s, X_s) d⟨X^i, X^j⟩_s = ∫_0^t (c^T a c)(s, X_s) ds = ∫_0^t (b^T a^{−1} b)(s, X_s) ds ≤ Mt

by our assumption that |b^T a^{−1} b(s, x)| ≤ M. Letting Q_t and P_t be the restrictions of Q and P to F_t, we define our Radon-Nikodym derivative to be

    dQ_t/dP_t = α_t = exp( Ŷ_t − (1/2)⟨Ŷ⟩_t )


Since ⟨Ŷ⟩_t ≤ Mt, (3.7) in Chapter 3 implies that α_t is a martingale, and invoking (12.3) in Chapter 2, we have defined Q.

To apply Girsanov's formula ((12.4) from Chapter 2), we have to compute A^i_t = ∫_0^t α_s^{−1} d⟨α, X^i⟩_s. Itô's formula and the associative law imply that

    α_t − 1 = ∫_0^t α_s dŶ_s = ∫_0^t α_s Σ_j c_j(s, X_s) dY^j_s

So, by the formula for the covariance of stochastic integrals,

    ⟨α, X^i⟩_t = ∫_0^t α_s Σ_j c_j(s, X_s) d⟨X^j, X^i⟩_s = ∫_0^t α_s (c^T a)_i(s, X_s) ds = ∫_0^t α_s b_i(s, X_s) ds

since c(s, x) = a^{−1}(x) b(s, x). It follows that

    A^i_t = ∫_0^t α_s^{−1} d⟨α, X^i⟩_s = ∫_0^t b_i(s, X_s) ds

Using Girsanov's formula now, it follows that

    Y^i_t − ∫_0^t b_i(s, X_s) ds = X^i_t − ∫_0^t ( b̂_i(X_s) + b_i(s, X_s) ) ds

is a local martingale under Q. This is half of the desired conclusion, but the other half is easy. We note that under P,

    ⟨X^i, X^j⟩_t = ∫_0^t a_ij(X_s) ds

and (12.6) in Chapter 2 implies that the covariance of X^i and X^j is the same under Q. □

Exercise 5.1. To check your understanding of the formulas, consider the case in which b does not depend on s, start with the Q constructed above, change to a measure R to remove the added drift b, and show that dR_t/dQ_t = 1/α_t, so R = P.

Example 5.1. Suppose X_t is a solution of MP(0, I), i.e., a standard Brownian motion, and consider the special situation in which b(x) = ∇U(x). In this case, recalling the definition of Ŷ in the proof of (5.1) and applying Itô's formula to U(X_t), we have

    Ŷ_t = ∫_0^t ∇U(X_s) · dX_s = U(X_t) − U(X_0) − (1/2) ∫_0^t ΔU(X_s) ds

Thus we can get rid of the stochastic integral in the definition of the Radon-Nikodym derivative:

    dQ_t/dP_t = exp( U(X_t) − U(X_0) − (1/2) ∫_0^t ( ΔU(X_s) + |∇U(X_s)|² ) ds )

In the special case of the Ornstein-Uhlenbeck process, b(x) = −2αx, where α is a positive real number, we have U(x) = −α|x|², ΔU(x) = −2dα, and the expression above simplifies to

    dQ_t/dP_t = exp( −α|X_t|² + α|X_0|² + αd t − 2α² ∫_0^t |X_s|² ds )

In the trivial case of constant drift, i.e., b(x) = μ = ∇U(x), we have U(x) = μ · x and ΔU = 0, so

    dQ_t/dP_t = exp( μ · X_t − μ · X_0 − (1/2)|μ|² t )

It is comforting to note that when X_0 = x, the one dimensional density functions satisfy

    Q_t(X_t = y) = exp( μ · y − μ · x − (1/2)|μ|² t ) P_x(B_t = y)
                 = (2πt)^{−d/2} exp( −|y − x − μt|² / 2t )

as it should be, since under Q, X_t has the same distribution as B_t + μt.

(5.1) deals with existence of solutions, our next result with uniqueness.

(5.2) Theorem. Under the hypotheses of (5.1), there is a 1-1 correspondence between solutions of MP(b̂, a) and MP(b̂ + b, a).

Proof Let P¹ and P² be solutions of MP(b̂, a). The change of measure in (5.1) produces solutions Q¹ and Q² of MP(b̂ + b, a). We claim that if P¹ ≠ P² then Q¹ ≠ Q². To prove this, we consider two cases:

CASE 1. Suppose P¹ and P² are not locally equivalent measures. Then since Q^i is locally equivalent to P^i, Q¹ and Q² are not locally equivalent and cannot be equal.

CASE 2. Suppose P¹ and P² are locally equivalent. In the definition of α_t, Ŷ_t and ⟨Ŷ⟩_t do not depend on P^i by (12.6) and (12.7) in Chapter 2, so dQ¹_t/dP¹_t = dQ²_t/dP²_t, and Q¹ ≠ Q². □

One can considerably relax the boundedness condition in (5.1). The next result will cover many examples. If it does not cover yours, note that the essential ingredients of the proof are that the conditions of (5.1) hold locally, and that we know solutions of MP(b̂, a) do not explode.

(5.3) Theorem. Suppose X_t is a solution of MP(b̂, a) constructed on (C, C). Suppose that a^{−1}(x) exists, b^T a^{−1} b(x) is locally bounded and measurable, and the quadratic growth condition from (3.2) holds. That is,

    Σ_{i=1}^d ( 2x_i b̂_i(x) + a_{ii}(x) ) ≤ A(1 + |x|²)

Then there is a 1-1 correspondence between solutions to MP(b̂, a) and MP(b̂ + b, a).

Remark. Combining (5.3) with (3.1) and (4.1), we see that if in addition to the conditions in (5.3) we have a = σσ^T where σ is locally Lipschitz, then MP(b̂ + b, a) has a unique solution. By using (3.3) instead of (3.1) in the previous sentence, we can get a better result in one dimension.

Proof Let Ŷ and α be as in the proof of (5.1) and let T_n = inf{t : |X_t| > n}. From the proof of (5.1) it follows that α(t ∧ T_n) is a martingale. So if we let P_n be the restriction of P to F_{T_n} and let dQ_n/dP_n = α(t ∧ T_n), then under Q_n, X_t is a solution of MP(b̂ + b, a) up to time T_n.

The quadratic growth condition implies that solutions of MP(b̂ + b, a) do not explode. We will now show that in the absence of explosions, α_t is a martingale. First, to show α_t is integrable, we note that Fatou's lemma implies

(a)  E α_t ≤ liminf_{n→∞} E( α(t ∧ T_n) ) = 1

where E denotes expected value with respect to P. Next we note that

(b)  E|α_t − α(t ∧ T_n)| = E( |α_t − α(T_n)|; T_n ≤ t ) ≤ E( α(T_n); T_n ≤ t ) + E α_t − E( α_t; T_n > t )

The absence of explosions implies that as n → ∞

(c)  E( α(T_n); T_n ≤ t ) = Q_n(T_n ≤ t) → 0



This and the fact that α(t ∧ T_n) is a martingale implies

(d)  E( α_t; T_n > t ) = 1 − E( α(T_n); T_n ≤ t ) → 1

as n → ∞. (b), (c), and (d) imply that α(t ∧ T_n) → α_t in L¹. It follows (see, e.g., (5.6) in Chapter 4 of Durrett (1995)) that

    α(s ∧ T_n) = E( α(t ∧ T_n) | F_s ) → E( α_t | F_s )

and from this that α_s, 0 ≤ s ≤ t, is a martingale. The rest of the proof is the same as that of (5.1) and (5.2). □
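The constant-drift density identity at the end of Example 5.1 is easy to check numerically; a sketch for d = 1 (not from the book; μ, x, t are arbitrary choices):

```python
import numpy as np

def normal_pdf(y, mean, var):
    return np.exp(-(y - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

mu, x, t = 0.7, -0.3, 2.0
y = np.linspace(-5.0, 5.0, 201)

# tilt the Brownian transition density by the Girsanov factor ...
tilted = np.exp(mu * y - mu * x - 0.5 * mu ** 2 * t) * normal_pdf(y, x, t)
# ... and compare with the density of x + B_t + mu*t
shifted = normal_pdf(y, x + mu * t, t)
err = np.max(np.abs(tilted - shifted))
```

The two curves agree to machine precision: tilting by the exponential martingale is exactly adding drift μ.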

:' '

'

(

Our lmst chore in this sectitm is t complete the proof f (2.10)i Chapter1 by showing

5 4) Theorem. Let Bt be a Brownian motion startig at 0, 1et g be a contin-( .

uous function with g(0) = 0, and let :7 > 0. Then

P sup I.Dt- #(t)1 < e > 00KtK1


Proof We can find a C¹ function h with h(0) = 0 and |h(s) − g(s)| < ε/2 for s ≤ 1, so we can suppose without loss of generality that g ∈ C¹, and let b_s = g′(s). If we let X_t = B_t − ∫_0^t b_s ds and define Q_t by

    dQ_t/dP_t = exp( ∫_0^t b_s dB_s − (1/2) ∫_0^t |b_s|² ds )

then it follows from (5.1) that under Q, B_t − g(t) is a standard Brownian motion. Let G = { |B_s − g(s)| < ε for s ≤ 1 }. Exercise 2.11 in Chapter 1 implies that Q(G) > 0. It follows from the Cauchy-Schwarz inequality that

    Q(G) = ∫ (dQ/dP) 1_G dP ≤ ( ∫ (dQ/dP)² dP )^{1/2} P(G)^{1/2}

To complete the proof, we now let E denote expectation with respect to P_t and observe that if |b_s| ≤ M for s ≤ t then

    ∫ (dQ_t/dP_t)² dP_t = E exp( ∫_0^t 2b_s dB_s − ∫_0^t |b_s|² ds )
        = exp( ∫_0^t |b_s|² ds ) · E exp( ∫_0^t 2b_s dB_s − (1/2) ∫_0^t |2b_s|² ds ) ≤ e^{M²t}


since exp( ∫_0^t 2b_s dB_s − (1/2) ∫_0^t |2b_s|² ds ) is a martingale and hence has expected value 1. For this step it is important to remember that b_s = g′(s) is a non-random vector. □

5.6. Change of Time

In the last section we learned that we could alter the b in the SDE by changing the measure. However, we could not alter σ with this technique, because the quadratic variation is not affected by an absolutely continuous change of measure. In this section we will introduce a new technique, change of time, that will allow us to alter the diffusion coefficient. To prepare for developments in Section 6.1, we will allow for the possibility that our processes are defined on a random time interval.

(6.1) Theorem. Let X_t be a solution of MP(b, a) for t < ζ. That is, for each i and j,

    X^i_t − ∫_0^t b_i(X_s) ds  and  X^i_t X^j_t − ∫_0^t a_ij(X_s) ds

are local martingales on [0, ζ). Let g be a positive function and suppose that

    σ_t = ∫_0^t g(X_s) ds < ∞ for all t < ζ

Define the inverse of σ by γ_s = inf{t : σ_t > s or t ≥ ζ} and let Y_s = X(γ_s) for s < σ_ζ. Then Y_s is a solution of MP(b/g, a/g) for s < σ_ζ.

Proof We begin by noting that X_t = Y(σ_t), then in the second step change variables r = σ_s, dr = g(X_s) ds, X_s = X(γ_r) = Y_r, to get

    X^i_t − ∫_0^t b_i(X_s) ds = Y^i_{σ_t} − ∫_0^{σ_t} ( b_i(Y_r) / g(Y_r) ) dr

Changing variables t = γ_u gives

    Y^i_u − ∫_0^u ( b_i(Y_r) / g(Y_r) ) dr = X^i_{γ_u} − ∫_0^{γ_u} b_i(X_s) ds

Since the γ_u are stopping times, the right-hand side is a local martingale on [0, σ_ζ), and we have checked one of the two conditions to be a solution of MP(b/g, a/g).



Then MP(0, a) is well posed, i.e., uniqueness in distribution holds and there is no explosion.

Proof We start with X_t = B_t, a Brownian motion, which solves MP(0, I), and take g(x) = a(x)^{−1}. Since g(B_s) is bounded for s ≤ T_R = inf{t : |B_t| > R}, we have ∫_0^t g(B_s) ds < ∞ for any t. On the other hand,

    ∫_0^∞ g(B_s) ds ≥ C_1^{−1} |{t : |B_t| ≤ 1}| = ∞

since Brownian motion is recurrent, so applying (6.1) we have a solution of MP(0, a). Uniqueness follows from (6.2), so the proof is complete. □

Combining (6.3) with (5.2), we can construct solutions of MP(b, a) in one dimension for very general b and a.

(6.4) Theorem. Consider dimension d = 1 and suppose

(i) 0 < ε_R ≤ a(y) ≤ C_R < ∞ when |y| ≤ R
(ii) b(x) is locally bounded
(iii) x · b(x) + a(x) ≤ A(1 + x²)

Then MP(b, a) is well posed.

6  One Dimensional Diffusions

6.1. Construction

In this section we will take an approach to constructing solutions of

    dX_t = b(X_t) dt + σ(X_t) dB_t

that is special to one dimension, but that will allow us to obtain a very detailed understanding of this case. To obtain results with a minimum of fuss and in a generality that encompasses most applications, we will assume:

(1D)  b and σ are continuous and a(x) = σ²(x) > 0 for all x

The purpose of this section is to answer the following questions: Does a solution exist when (1D) holds? Is it unique? Does it explode? Our approach may be outlined as follows:

(i) We define a function φ so that if X_t is a solution of the SDE on [0, ζ), then Y_t = φ(X_t) is a local martingale on [0, ζ).

(ii) We construct solutions of MP(0, h), for an appropriate function h, by time changing a Brownian motion.

(iii) We define X_t = φ^{−1}(Y_t) and check that X_t is a solution of the SDE.

To begin to carry out our plan, suppose that X_t is a solution of MP(b, a) for t < ζ. If f ∈ C², Itô's formula implies that for t < ζ

(1.1)  f(X_t) = f(X_0) + ∫_0^t f′(X_s) dX_s + (1/2) ∫_0^t f″(X_s) d⟨X⟩_s
       = local mart. + ∫_0^t Lf(X_s) ds



where

(1.2)  Lf(x) = (1/2) a(x) f″(x) + b(x) f′(x)

and a(x) = σ²(x). From this we see that f(X_t) is a local martingale on [0, ζ) if and only if Lf(x) ≡ 0. Setting Lf = 0 and noticing that (1D) implies a(x) > 0 gives a first order differential equation for f′:

    f″ = (−2b/a) f′

Solving this equation, we find

(1.3)  f′(y) = B exp( ∫_0^y ( −2b(z)/a(z) ) dz )

and it follows that

    f(x) = A + B ∫_0^x exp( ∫_0^y ( −2b(z)/a(z) ) dz ) dy

Any of these functions can be called the natural scale. We will usually take A = 0 and B = 1 to get

(1.4)  φ(x) = ∫_0^x exp( ∫_0^y ( −2b(z)/a(z) ) dz ) dy

However, in some situations it will be convenient to replace the 0's at the lower limits by some other point. Note that our assumption (1D) implies φ is C², so our use of Itô's formula in (1.1) is justified.

Since Y_t = φ(X_t) is a local martingale, results in Section 3.4 imply that it is a time change of Brownian motion. To find the time change function, we note that (1.1) and the formula for the variance of a stochastic integral imply

(1.5)  ⟨Y⟩_t = ∫_0^t φ′(X_s)² d⟨X⟩_s = ∫_0^t φ′(X_s)² a(X_s) ds = ∫_0^t h(Y_s) ds

where

    h(y) = ( φ′(φ^{−1}(y)) )² a(φ^{−1}(y)) > 0 is continuous

So if we let τ_t = inf{s : ⟨Y⟩_s > t}, then W_t = Y(τ_t) is a Brownian motion run for an amount of time ⟨Y⟩_ζ.

To construct solutions of MP(b, a), we will reverse the calculations above: we will use a time change of Brownian motion to construct a Y_t which solves MP(0, h), and then let X_t = φ^{−1}(Y_t). With Examples 1.6 and 1.7 of Chapter 5 in mind, we generalize our set-up to allow the coefficients b and σ to be defined on an open interval (α, β) with α < 0 < β. Since φ′(x) > 0 for all x, the image of (α, β) under φ is an open interval (ℓ, r) with −∞ ≤ ℓ < 0 < r ≤ ∞.

Letting W_t be a Brownian motion, let ζ = inf{t : W_t ∉ (ℓ, r)}, g = 1/h,

    σ_t = ∫_0^t g(W_s) ds for t < ζ  and  γ_s = inf{t : σ_t > s or t ≥ ζ}

Using (6.1) in Chapter 5, we see that Y_s = W(γ_s) is a solution of MP(0, h) for s < σ_ζ.

To define X_t now, we let ψ be the inverse of φ and let X_t = ψ(Y_t). To check that X solves MP(b, a) until it exits from (α, β), we differentiate φ(ψ(x)) = x and rearrange, then differentiate again, to get

    ψ′(x) = 1/φ′(ψ(x))
    ψ″(x) = −φ″(ψ(x)) ψ′(x) / (φ′(ψ(x)))² = −φ″(ψ(x)) / (φ′(ψ(x)))³

Using the first equality and φ″(y) = −(2b(y)/a(y)) φ′(y), which follows from (1.3), we have

    ψ″(x) = ( 2b(ψ(x))/a(ψ(x)) ) · ( 1/φ′(ψ(x))² )

These calculations show ψ ∈ C², so Itô's formula, (1.1), and (1.5) imply that for t < σ_ζ

    ψ(Y_t) − ψ(Y_0) = ∫_0^t ψ′(Y_s) dY_s + (1/2) ∫_0^t ψ″(Y_s) h(Y_s) ds

For the second term, we note that combining the formulas for ψ″ and h gives (recall ψ = φ^{−1})

    (1/2) ψ″(y) h(y) = b(ψ(y))

To deal with the first term, observe that Y is a solution of MP(0, h) up to time σ_ζ, so by (4.5) in Chapter 5 there is a Brownian motion B with dY_s = h(Y_s)^{1/2} dB_s. Since ψ′(y) h(y)^{1/2} = σ(ψ(y)), letting X_t = ψ(Y_t) we have for t < σ_ζ

    X_t − X_0 = ∫_0^t b(X_s) ds + ∫_0^t σ(X_s) dB_s



The last computation shows that if Y_t is a solution of MP(0, h), then X_t = ψ(Y_t) is a solution of MP(b, a), while (1.1) shows that if X_t is a solution of MP(b, a), then Y_t = φ(X_t) is a solution of MP(0, h). This establishes a 1-1 correspondence between solutions of MP(0, h) and MP(b, a). Using (6.2) of Chapter 5 now, we see that

(1.6) Theorem. Consider (C, C) and let X_t(ω) = ω_t. Let α < β and let τ_{(α,β)} = inf{t : X_t(ω) ∉ (α, β)}. Under (1D), uniqueness in distribution holds for MP(b, a) on [0, τ_{(α,β)}).

Using the uniqueness result with (4.6) of Chapter 5, we get

(1.7) Theorem. For each x ∈ (α, β) let P_x be the law of X_t, t < τ_{(α,β)}, from (1.6). If we set X_t = Δ for t ≥ τ_{(α,β)}, where Δ is the cemetery state of Section 1.3, then the resulting process has the strong Markov property.

6.2. Feller's Test

In the previous section we showed that under (1D) there are unique solutions to MP(b, a) on [0, τ_{(α,β)}), where τ_{(α,β)} = inf{t : X_t ∉ (α, β)}. The next result gives necessary and sufficient conditions for no explosions, i.e., τ_{(α,β)} = ∞ a.s. Let

    T_y = inf{t : X_t = y} for y ∈ (α, β)
    T_α = lim_{y↓α} T_y  and  T_β = lim_{y↑β} T_y

In stating (2.1) we have, without loss of generality, supposed 0 ∈ (α, β). If you are confronted by an (α, β) that does not have this property, pick your favorite y in the interval and translate the system by −y. Changing variables in the integrals, we see that (2.1) holds when all of the 0's are replaced by y's.

(2.1) Feller's test. Let φ(x) be the natural scale defined by (1.4) and let m(x) = 1/(φ′(x)a(x)).

(a) P_x(T_β < T_0) is positive for some (all) x ∈ (0, β) if and only if

    ∫_0^β dx m(x) ( φ(β) − φ(x) ) < ∞

(b) P_x(T_α < T_0) is positive for some (all) x ∈ (α, 0) if and only if

    ∫_α^0 dx m(x) ( φ(x) − φ(α) ) < ∞


(c) If both integrals are infinite, then P_x(τ_{(α,β)} = ∞) = 1 for all x ∈ (α, β).

Remarks. If φ(β) = ∞, then the first integrand is ≡ ∞ and the integral is ∞. Similarly, the second integral is ∞ if φ(α) = −∞. The statement in (a) means that the following are equivalent:

(a1) P_x(T_β < T_0) > 0 for some x ∈ (0, β)
(a2) ∫_0^β dx m(x)(φ(β) − φ(x)) < ∞
(a3) P_x(T_β < T_0) > 0 for all x ∈ (0, β)

Proof The key to the proof of our sufficient condition for no explosions, given in (3.1) and (3.2) of Chapter 5, was the fact that if A is sufficiently large,

    S_t = (1 + |X_t|²) e^{−At} is a supermartingale

To get the optimal result on explosions, we have to replace 1 + |x|² by a function that is tailor-made for the process. A natural choice is a g ≥ 0, defined on (α, β), so that

    e^{−t} g(X_t) is a local martingale

To flnd such a g it is convenient to look at things on the natural scale, i.e., letK = w(.;V).Let f = limxt. w(z)and 1et r = limwjpw(z). We will ;nd a C2function Jx) so that

.f

is decreasing on (', 0), f is mcremsing on (0,r) apd

c-.J() is a local martingale

benouement. Once we have f the conclusipn of the >rgpment is easy so tol in our motivations we begi witlt the end. Ltexpa

. '' ..: i

' '.. . ..

'' .. .:

J(') = 1imJ(g) J(r) = ltmJ(p)(7 ptz p'yr ( .

(2.6)and ttk clulatious at the nd of the ftijofwittsiktk that

(2.2a) The integral in (a)is finite if apd only if f < m.. (

(2.2b) The integral in (b)is flnite if and only if f) < x.

Uce these are stablished the conclusions of (2.1)follow emsily. Let.

.. . ..'

g . . .. .. .. . .. . . .

.:r; = inft : yt = yj for y 6 (f,

r)

f'r = lim f; and f'z = limf;p$r ptz

(z,r)= llz/$ l?r0

J.dzmtzl (e(z)- e(a)) < cx)


Proof of (c). Suppose f(ℓ) = f(r) = ∞. Let a_n < 0 < b_n be chosen so that f(a_n) = f(b_n) = n, and let τ_n = T̄_{a_n} ∧ T̄_{b_n}. Since e^{−s} f(Y_s) is bounded before time τ_n ∧ t, we can apply the optional stopping theorem at that time to conclude that if y ∈ (a_n, b_n), then

f(y) = E_y(e^{−(τ_n ∧ t)} f(Y_{τ_n ∧ t})) ≥ e^{−t} n P_y(τ_n < t)

so P_y(τ_n < t) ≤ e^t f(y)/n → 0 as n → ∞. Letting t → ∞ we conclude that 0 = P_y(τ̄_(ℓ,r) < ∞) = P_{φ⁻¹(y)}(τ_(α,β) < ∞).

Proof of (a). Let 0 < y < r, let b_n ↑ r with b_1 > y, and let σ_n = T̄_0 ∧ T̄_{b_n}. If f(r) = ∞, applying the optional stopping theorem at time σ_n ∧ t, we conclude that

f(y) ≥ e^{−t} f(b_n) P_y(T̄_{b_n} < T̄_0 ∧ t)

so P_y(T̄_{b_n} < T̄_0 ∧ t) ≤ e^t f(y)/f(b_n) → 0 as n → ∞. Letting t → ∞ we conclude 0 = P_y(T̄_r < T̄_0) = P_{φ⁻¹(y)}(T_β < T_0). This shows that if (a2) is false then (a1) is false, i.e., (a1) implies (a2).

If f(r) < ∞, applying the optional stopping theorem at time σ_n (which is justified since e^{−t} f(Y_t) ≤ f(b_n) for t ≤ σ_n), we conclude that

1 ≤ f(y) = E_y(e^{−σ_n} f(Y_{σ_n})) ≤ 1 + f(r) E_y(e^{−T̄_{b_n}}; T̄_{b_n} < T̄_0)

Rearranging gives

E_y(e^{−T̄_{b_n}}; T̄_{b_n} < T̄_0) ≥ (f(y) − 1)/f(r) > 0

Noting that T̄_r = T̄_0 < ∞ is impossible, and letting n → ∞, we have

E_y(e^{−T̄_{b_n}}; T̄_{b_n} < T̄_0) ↓ E_y(e^{−T̄_r}; T̄_r < T̄_0)

which is a contradiction unless 0 < P_y(T̄_r < T̄_0) = P_{φ⁻¹(y)}(T_β < T_0). This shows that (a2) implies (a3). That (a3) implies (a1) is trivial, so the proof of (a) is complete.

Proof of (b) is identical to that of (a).

The search for f. To find a function f so that e^{−t} f(Y_t) is a local martingale, we apply Itô's formula to f(x₁, x₂) = e^{−x₁} f(x₂) with X_t¹ = t, X_t² = Y_t, and recall that Y is a local martingale with variance process given by (1.5), so

e^{−t} f(Y_t) − f(Y_0) = ∫_0^t e^{−s}(−f(Y_s) + ½ f''(Y_s) h(Y_s)) ds + local mart.

so we want

(2.3)   ½ h(x) f''(x) − f(x) = 0

To solve this equation we let f_0 ≡ 1 and define for n ≥ 1

(2.4)   f_n(x) = ∫_0^x dy ∫_0^y dz (2 f_{n−1}(z)/h(z))

(1.8) in Chapter 4 implies that if we set f(x) = Σ_{n=0}^∞ f_n(x) and

(2.5)   Σ_{n=0}^∞ sup_{|x|≤R} |f_n(x)| < ∞ for any R < ∞

then f''(x) = Σ_n f_n''(x). Using f_n''(x) = 2 f_{n−1}(x)/h(x) for n ≥ 1 and f_0''(x) = 0, it follows that

½ h(x) f''(x) = Σ_{n=1}^∞ f_{n−1}(x) = f(x)

so f satisfies (2.3).

To prove (2.5), we begin with some simple properties of the f_n.

(2.6a) f_n ≥ 0
(2.6b) f_n is convex with f_n'(0) = 0
(2.6c) f_n is increasing on (0, r) and decreasing on (ℓ, 0)

Proof. Since h(z) > 0, (2.6a) follows easily by induction from the definition in (2.4). Using (2.6a) in the definition, (2.6b) follows. (2.6c) is an immediate consequence of (2.6b). □

Our next step is to use induction to show

(2.7) Lemma. f_n(x) ≤ (f_1(x))ⁿ/n!, (2.5) holds, and

1 + f_1 ≤ f ≤ exp(f_1)

Proof. Using (2.6c) and (2.6a), the second and third conclusions are immediate consequences of the first one. The inequality f_n(x) ≤ (f_1(x))ⁿ/n! is obvious if


n = 1. Using (i) the definition of f_n in (2.4), (ii) (2.6c), (iii) the definition of f_1 and h, (iv) the result for n − 1, and (v) doing a little calculus, we have

f_n(x) = ∫_0^x dy ∫_0^y dz (2 f_{n−1}(z)/h(z))
      ≤ ∫_0^x dy f_{n−1}(y) ∫_0^y dz (2/h(z))
      = ∫_0^x dy f_{n−1}(y) f_1'(y)
      ≤ ∫_0^x dy ((f_1(y))^{n−1}/(n − 1)!) f_1'(y) = (f_1(x))ⁿ/n!   □

To complete the proof now we only have to prove (2.2a) and (2.2b). In view of (2.7), it suffices to prove these results with f replaced by f_1. Changing variables z = φ(v), dz = φ'(v) dv, then y = φ(u), dy = φ'(u) du, we have

f_1(x) = ∫_0^x dy ∫_0^y dz (2/((φ'(φ⁻¹(z)))² a(φ⁻¹(z))))
      = ∫_0^x dy ∫_0^{φ⁻¹(y)} dv (2/(φ'(v) a(v)))
      = ∫_0^{φ⁻¹(x)} du φ'(u) ∫_0^u dv (2/(φ'(v) a(v)))

Letting x ↑ r and using Fubini's theorem,

f_1(r) = ∫_0^β dv (2/(φ'(v) a(v))) ∫_v^β du φ'(u) = 2 ∫_0^β dv m(v)(φ(β) − φ(v))

A similar argument shows that the second expression in (2.1) is (except for a factor of 2) f_1(ℓ), and the proof is complete. □

To see what (2.1) says in a concrete case, we consider

Example 2.1. Let a(x) = 1 and b(x) = (1 + |x|)^δ/2, where δ > 0. When δ ≤ 1 the coefficients are Lipschitz continuous, so there is no explosion. We will now use (2.1) to reprove that result and show that explosion occurs when δ > 1. When y < 0, φ'(y) ≥ 1, so φ(−∞) = −∞ and the second integral in (2.1) is ∞. To evaluate the first integral we note that if y > 0,

φ'(y) = exp(∫_0^y −(1 + z)^δ dz) = exp((−(1 + y)^{1+δ} + 1)/(1 + δ))

so φ(∞) < ∞. To use (2.1) now, we note that a(y) = 1, so m(y) = 1/φ'(y), and there is no explosion if and only if

∞ = ∫_0^∞ dy exp(((1 + y)^{1+δ} − 1)/(1 + δ)) ∫_y^∞ dv exp((−(1 + v)^{1+δ} + 1)/(1 + δ))
  = ∫_1^∞ dy ∫_y^∞ dz exp((y^{1+δ} − z^{1+δ})/(1 + δ))

where in the last step we have changed variables z = 1 + v, y = 1 + u to get rid of the 1's. The next lemma estimates the last expression and shows that it is = ∞ if 0 < δ ≤ 1.

(2.8) Lemma. If δ > 0,

y^δ ∫_y^∞ dz exp((y^{1+δ} − z^{1+δ})/(1 + δ)) → 1   as y → ∞

Proof. Changing variables z = y + u y^{−δ} we have

∫_y^∞ dz exp((y^{1+δ} − z^{1+δ})/(1 + δ)) = y^{−δ} ∫_0^∞ du exp(−∫_y^{y+uy^{−δ}} w^δ dw)

Since u ≤ ∫_y^{y+uy^{−δ}} w^δ dw ≤ u(1 + u y^{−(1+δ)})^δ → u as y → ∞, the desired result follows from the dominated convergence theorem. □

6.3. Recurrence and Transience

The natural scale, which was used in Section 6.1 to construct one dimensional diffusions, can also be used to study their recurrence and transience. Let X_t be a solution of MP(b, a) and suppose


(1D) b and σ are continuous with σ(x) > 0 for all x

Let a(x) = σ²(x), and let φ be the natural scale defined by

φ(x) = ∫_0^x exp(∫_0^y (−2b(z)/a(z)) dz) dy

Let T_y = inf{t > 0 : X_t = y} and let τ = T_a ∧ T_b. The first detail is to show

(3.1) Lemma. If a < x < b then P_x(τ < ∞) = 1.

Proof. We saw in Section 6.1 that Y_t = φ(X_t) is a solution of MP(0, h), where h given by (1.5) is positive and continuous. Thus (6.1) in Chapter 5 implies that Y can be constructed as a time change of a Brownian motion B_t:

Y_s = B(γ_s)   where γ_s = inf{t : σ_t > s} and σ_t = ∫_0^t ds/h(B_s)

Since Brownian motion will exit (φ(a), φ(b)) with probability one, Y will exit (φ(a), φ(b)) with probability one, and X will exit (a, b) with probability one. □

With (3.1) established, we can study recurrence and transience as we did for Brownian motion. φ(X_{t∧τ}) is a uniformly bounded martingale, so the optional stopping theorem implies

φ(x) = E_x φ(X_τ) = φ(a) P_x(T_a < T_b) + φ(b)(1 − P_x(T_a < T_b))

and solving we have

(3.2)   P_x(T_a < T_b) = (φ(b) − φ(x))/(φ(b) − φ(a)),   P_x(T_b < T_a) = (φ(x) − φ(a))/(φ(b) − φ(a))
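The exit probabilities in (3.2) are easy to test numerically. The sketch below is my own illustration, not from the text: it takes the Ornstein–Uhlenbeck-type diffusion dX_t = −X_t dt + dB_t (so 2b/a = −2y and φ'(y) = e^{y²}), computes φ by the trapezoid rule, and compares the formula for P_x(T_a < T_b) with a crude Euler-scheme Monte Carlo estimate. All names and tolerances are assumptions of the sketch.

```python
import math, random

def phi(x, n=2000):
    # natural scale for dX_t = -X_t dt + dB_t:  phi(x) = int_0^x e^{y^2} dy (trapezoid rule)
    h = x / n
    w = [math.exp((i * h) ** 2) for i in range(n + 1)]
    return h * (sum(w) - 0.5 * (w[0] + w[-1]))

def exit_left_prob(x, a, b, dt=1e-3, trials=2000, seed=1):
    # crude Euler-scheme estimate of P_x(T_a < T_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = x
        while a < y < b:
            y += -y * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        hits += y <= a
    return hits / trials

a, b, x = -1.0, 1.0, 0.3
theory = (phi(b) - phi(x)) / (phi(b) - phi(a))   # formula (3.2)
mc = exit_left_prob(x, a, b)
assert abs(mc - theory) < 0.07
```

The Euler scheme introduces a small discretization bias near the boundaries, so the comparison is only expected to hold within a few percent.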

Letting φ(∞) = lim_{b→∞} φ(b) and φ(−∞) = lim_{a→−∞} φ(a) (the limits exist since φ is strictly increasing), we have

(3.3) Theorem. Suppose a < x < b.
P_x(T_a < ∞) = 1 if and only if φ(∞) = ∞;
P_x(T_b < ∞) = 1 if and only if φ(−∞) = −∞.

In one dimension we say that X is recurrent if P_x(T_y < ∞) = 1 for all x and y. From (3.3) it follows that


(3.4) Corollary. X is recurrent if and only if φ(R) = R.

This conclusion should not be surprising. Y_t = φ(X_t) is a local martingale, so it is a time change of a Brownian motion run for a random amount of time, and if φ(R) ≠ R that random amount of time must be finite.

To understand the dividing line between recurrence and transience we consider some examples. We begin by noting that the natural scale only depends on b and a through the ratio 2b/a.

Exercise 3.1. (i) Show that if b(y) ≤ 0 for y ≥ y_0 then P_x(T_0 < ∞) = 1 for all x > 0. (ii) Show that if b(y)/a(y) ≥ ε > 0 for y ≥ y_0 then P_x(T_0 < ∞) < 1 for all x > 0.

Example 3.1. The exercise identifies the interesting case as being b(y) ≥ 0 with b(y)/a(y) → 0 as y → ∞. Suppose that for x ≥ 0

2b(x)/a(x) = C(1 + x)^{−r}   where C, r > 0

If r > 1 then

∫_0^∞ −C(1 + x)^{−r} dx = −K > −∞

so φ'(y) ≥ e^{−K} for all y > 0 and φ(∞) = ∞. If r < 1, it follows that

φ(∞) = ∫_0^∞ exp(−(C/(1 − r))((1 + y)^{1−r} − 1)) dy < ∞

In the borderline case r = 1 the outcome depends on the value of C:

φ(∞) = ∫_0^∞ (1 + y)^{−C} dy   { = ∞ if C ≤ 1;  < ∞ if C > 1 }   □

To check the last calculation note that Example 1.2 in Chapter 5 shows that the radial part of d dimensional Brownian motion has b(r)/a(r) = (d − 1)/2r, while Section 3.1 shows that Brownian motion is recurrent in d ≤ 2 and transient in d > 2, which corresponds to C ≤ 1 and to C > 1 respectively. Sticklers for detail may complain that we do not have any Brownian motions defined for dimensions 2 < d < 3. However, this analysis suggests that if we did then they would be transient.
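The borderline computation in Example 3.1 is easy to see numerically for r = 1: then φ'(y) = (1 + y)^{−C}, so φ(R) = ((1 + R)^{1−C} − 1)/(1 − C) for C ≠ 1, which converges as R → ∞ exactly when C > 1. A minimal sketch (the function name is my own):

```python
def phi_R(R, C):
    # phi(R) = int_0^R (1+y)^(-C) dy in closed form, for C != 1
    return ((1.0 + R) ** (1.0 - C) - 1.0) / (1.0 - C)

# C > 1: phi(infinity) = 1/(C-1) is finite, so the diffusion is transient
assert abs(phi_R(1e9, 2.0) - 1.0) < 1e-6
# C < 1: phi(R) grows without bound, so the diffusion is recurrent
grows = [phi_R(10.0 ** k, 0.5) for k in (2, 4, 6)]
assert grows[0] < grows[1] < grows[2]
```

This matches the Brownian-radial-part correspondence in the text: C ≤ 1 (d ≤ 2) gives recurrence, C > 1 (d > 2) gives transience.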

Exercise 3.2. Suppose a and b satisfy (1D) and are periodic with period 1, i.e., a(y + 1) = a(y) and b(y + 1) = b(y) for all y. Find a necessary and sufficient condition for X_t to be recurrent.

6.4. Green's Functions

Suppose X_t is a solution of MP(b, a), where b and a satisfy (1D). Let a < b be real numbers, which you should not confuse with the coefficients, let D = (a, b), and let τ = inf{t : X_t ∉ (a, b)}. Our goals in this section, which are accomplished in (4.2) and (4.4) below, are to show that (i) if g is bounded and measurable then

E_x ∫_0^τ g(X_s) ds = ∫_a^b G_D(x, y) g(y) dy

and (ii) to give a formula for the Green's function G_D(x, y). As in the discussion of (8.1) of Chapter 4, we think of G_D(x, y) as the expected amount of time spent at y before the process exits D when it starts at x. The rigorous meaning of the last sentence is given by (i).

The analysis here is similar to that in Section 4.5, but instead of the Laplacian we are concerned with

Lf(x) = ½ a(x) f''(x) + b(x) f'(x)

The first step is to generalize (1.3) from Chapter 3.

(4.1) Lemma. sup_{x∈(a,b)} E_x τ < ∞.

Proof. Let φ be the natural scale defined in (1.4) and consider Y_t = φ(X_t). By (1.5), Y_t is a local martingale with ⟨Y⟩_t = ∫_0^t h(Y_s) ds, where h(y) is positive and continuous. To estimate τ̄ = inf{t : Y_t ∉ (φ(a), φ(b))} we let

c = (φ(a) + φ(b))/2 be the midpoint of the interval,
K = (φ(b) − φ(a))/2 be half the length of the interval,
ρ = inf{h(y) : y ∈ (φ(a), φ(b))} > 0,
v(x) = (K² − (x − c)²)/ρ

and note that v ≥ 0 in (φ(a), φ(b)) with v''(x) = −2/ρ. Itô's formula implies

v(Y_t) − v(Y_0) = local mart. + ∫_0^t ½ h(Y_s) v''(Y_s) ds

Our choice of v implies h(y) ≥ ρ, so v(Y_t) + t is a local supermartingale on [0, τ̄). Using the optional stopping theorem at time τ̄_n ∧ t, where τ̄_n = inf{t : Y_t ∉ (φ(a) + 1/n, φ(b) − 1/n)}, we have

v(y) ≥ E_y v(Y_{τ̄_n ∧ t}) + E_y(τ̄_n ∧ t)

Since 0 ≤ v(x) ≤ K²/ρ for x ∈ (φ(a), φ(b)), we have K²/ρ ≥ E_y(τ̄_n ∧ t). Now let n → ∞ and t → ∞ and use the monotone convergence theorem. □

(4.2) Theorem. Suppose g is bounded. If there is a function v with
(i) v ∈ C², Lv = −g in (a, b),
(ii) v continuous at a and b with v(a) = v(b) = 0,
then

v(x) = E_x ∫_0^τ g(X_s) ds

Proof. Let M_t = v(X_t) + ∫_0^t g(X_s) ds. (i) and (1.1) imply that for t < τ

v(X_t) − v(X_0) = local mart. − ∫_0^t g(X_s) ds

so M_t is a local martingale on [0, τ). If v and g are bounded, then for t < τ

|M_t| ≤ ‖v‖_∞ + τ ‖g‖_∞

(4.1) implies that the right-hand side is an integrable random variable, so (2.7) in Chapter 2 and (ii) imply

M_τ = lim_{t↑τ} M_t = ∫_0^τ g(X_t) dt

v(x) = E_x M_0 = E_x M_τ = E_x ∫_0^τ g(X_t) dt   □

Solving (4.2). The next two pages will be devoted to deriving the solution to the equation in (4.2). Readers who are content to guess and verify the answer can skip to (4.4) now. To solve the equation it is convenient to introduce

m(x) = 1/(φ'(x) a(x))


where φ(x) is the natural scale, and note that

(4.3)   (1/(2m(x))) d/dx((1/φ'(x)) dv/dx) = ½ a(x) v''(x) + b(x) v'(x) = Lv(x)

since the definition of the natural scale implies φ''(x)/φ'(x) = −2b(x)/a(x). m is called the (density function of the) speed measure, though as we will see in Example 4.1 the rate of movement of the process is inversely proportional to m(x)! To simplify notation, and to facilitate comparison with our source for this material, Karlin and Taylor (1981), Vol. II, we let s(x) = φ'(x) be the derivative of the natural scale.

To solve the equation Lv = −g now, we use (4.3) to write

(1/(2m)) d/dx((1/s) dv/dx) = −g

and integrate to conclude that for some constant B

(1/s(y)) dv/dy = B − ∫_a^y 2 m(z) g(z) dz

Multiplying by s(y) on each side, integrating y from a to x, and recalling that v(a) = 0 and s = φ', we have

(a)   v(x) = B(φ(x) − φ(a)) − 2 ∫_a^x dy s(y) ∫_a^y dz m(z) g(z)

In order to have v(b) = 0 we must have

B = (2/(φ(b) − φ(a))) ∫_a^b dy s(y) ∫_a^y dz m(z) g(z)

Plugging the formula for B into (a) and writing

u(x) = (φ(x) − φ(a))/(φ(b) − φ(a))

we have

(b)   v(x) = 2 u(x) ∫_a^b dy s(y) ∫_a^y dz m(z) g(z) − 2 ∫_a^x dy s(y) ∫_a^y dz m(z) g(z)

Breaking the first ∫_a^b dy into an integral over (a, x) and one over (x, b), we have

(c)   v(x) = 2(u(x) − 1) ∫_a^x dy s(y) ∫_a^y dz m(z) g(z) + 2 u(x) ∫_x^b dy s(y) ∫_a^y dz m(z) g(z)

Recalling s(y) = φ'(y), we have

u(x) ∫_x^b dy s(y) = ((φ(x) − φ(a))(φ(b) − φ(x)))/(φ(b) − φ(a)) = (1 − u(x)) ∫_a^x dy s(y)

Multiplying the last identity by 2 ∫_a^x dz m(z) g(z), we have

2 u(x) ∫_x^b dy s(y) ∫_a^x dz m(z) g(z) = 2(1 − u(x)) ∫_a^x dy s(y) ∫_a^x dz m(z) g(z)

Using this in (c) we can break off part of the second term and cancel with the first one to end up with

(d)   v(x) = 2(1 − u(x)) ∫_a^x dy s(y) ∫_y^x dz m(z) g(z) + 2 u(x) ∫_x^b dy s(y) ∫_x^y dz m(z) g(z)

Using Fubini's theorem now gives

(e)   v(x) = 2(1 − u(x)) ∫_a^x dz m(z) g(z) ∫_a^z dy s(y) + 2 u(x) ∫_x^b dz m(z) g(z) ∫_z^b dy s(y)

If we define G(x, z) to be

(4.4)   G(x, z) = 2(φ(z) − φ(a))(φ(b) − φ(x)) m(z)/(φ(b) − φ(a))   when z ≤ x
        G(x, z) = 2(φ(x) − φ(a))(φ(b) − φ(z)) m(z)/(φ(b) − φ(a))   when x ≤ z

then we have

(f)   v(x) = ∫_a^b G(x, z) g(z) dz
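As a sanity check on (4.4) and (f), take Brownian motion on D = (0, 1): then φ(x) = x and m ≡ 1, so G(x, z) = 2z(1 − x) for z ≤ x and 2x(1 − z) for x ≤ z, and taking g ≡ 1 should reproduce the familiar E_x τ = x(1 − x). A minimal numerical sketch (my own code, not from the text):

```python
def G(x, z, a=0.0, b=1.0):
    # Green's function (4.4) for Brownian motion (phi(x) = x, m = 1) on (a, b)
    if z <= x:
        return 2.0 * (z - a) * (b - x) / (b - a)
    return 2.0 * (x - a) * (b - z) / (b - a)

def expected_exit_time(x, n=100000):
    # v(x) = int_0^1 G(x, z) * 1 dz, midpoint rule
    h = 1.0 / n
    return h * sum(G(x, (i + 0.5) * h) for i in range(n))

for x in (0.2, 0.5, 0.8):
    assert abs(expected_exit_time(x) - x * (1.0 - x)) < 1e-6
```

The agreement with x(1 − x) confirms that the kernel in (4.4) integrates g ≡ 1 to the mean exit time, as (4.5) below asserts in general.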


Combining the calculations above with (4.2) shows

(4.5) Theorem. If g is bounded and measurable then

E_x ∫_0^τ g(X_s) ds = ∫_a^b G(x, z) g(z) dz

Proof. By the monotone class theorem it suffices to prove the result when g is continuous. To do this it suffices to show that the v defined in (f) satisfies the hypotheses of (4.2). To check (ii), we note that as x → a or x → b, G(x, z) → 0, and G is bounded, so the bounded convergence theorem implies v(x) → 0. To check that v ∈ C², we note that

((φ(b) − φ(a))/2) v(x) = (φ(b) − φ(x)) ∫_a^x (φ(z) − φ(a)) m(z) g(z) dz + (φ(x) − φ(a)) ∫_x^b (φ(b) − φ(z)) m(z) g(z) dz

Differentiating, and noting that the terms from the limits of the integrals cancel, we have

((φ(b) − φ(a))/2) v'(x) = −φ'(x) ∫_a^x (φ(z) − φ(a)) m(z) g(z) dz + φ'(x) ∫_x^b (φ(b) − φ(z)) m(z) g(z) dz

Differentiating again we have

((φ(b) − φ(a))/2) v''(x) = −φ''(x) ∫_a^x (φ(z) − φ(a)) m(z) g(z) dz + φ''(x) ∫_x^b (φ(b) − φ(z)) m(z) g(z) dz − φ'(x)(φ(b) − φ(a)) m(x) g(x)

This shows v ∈ C². Multiplying the last two equations by b(x) and a(x)/2, adding, then using Lφ = 0 and the definition of m(x), we have

((φ(b) − φ(a))/2) Lv(x) = −(a(x)/2) φ'(x)(φ(b) − φ(a)) m(x) g(x) = −((φ(b) − φ(a))/2) g(x)

so Lv = −g. This shows that v satisfies (i) in (4.2), and the result follows. □

Example 4.1. Speed measure. If X has no drift (e.g., Brownian motion or any diffusion on its natural scale) then φ(x) = x and (4.4) becomes

G(x, z) = 2 m(z)(b − x)(z − a)/(b − a)   when z ≤ x
G(x, z) = 2 m(z)(b − z)(x − a)/(b − a)   when x ≤ z

Taking g ≡ 1, a = x − h, and b = x + h, the last expression becomes

G(x, z) = m(z)(x + h − z)   when z ≥ x
G(x, z) = m(z)(z − x + h)   when z ≤ x

Letting τ_(a,b) = inf{t : X_t ∉ (a, b)} and using (4.5) with g ≡ 1, we have

(4.6)   E_x τ_(x−h,x+h) = ∫_x^{x+h} (x + h − z) m(z) dz + ∫_{x−h}^x (z − x + h) m(z) dz ≈ m(x) h²

as h → 0. Thus m(x) gives us the time that X_t takes to exit a small interval centered at x, or to be precise, the ratio of the time for X_t to that for Brownian motion.

For another interpretation of m(x), note that (11.7) in Chapter 2 implies that if L_t^x is the local time at x up to time t, then for bounded measurable g,

∫ L_t^x g(x) dx = ∫_0^t g(X_s) d⟨X⟩_s = ∫_0^t (g·a)(X_s) ds

When X_t has no drift, φ'(x) = 1, so m(x) = 1/a(x), and taking g(x) = f(x) m(x) we have

∫ m(x) L_t^x f(x) dx = ∫_0^t f(X_s) ds

Thus multiplying the local times by m(x) converts them into occupation times. This and the previous interpretation suggest that it would be natural to call m(x) dx the occupation measure. However, it is too late to try to change the original name, speed measure. □
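The approximation in (4.6) can be watched converging. The sketch below is my own: it picks a driftless diffusion with a(z) = 1 + z² (so m(z) = 1/(1 + z²)), evaluates the two integrals in (4.6) as one integral of (h − |z − x|) m(z) by the midpoint rule, and checks that the ratio to m(x)h² tends to 1 as h shrinks.

```python
def mean_exit_small(x, h, n=4000):
    # (4.6): E_x tau_(x-h, x+h) = int_{x-h}^{x+h} (h - |z - x|) m(z) dz, driftless case
    m = lambda z: 1.0 / (1.0 + z * z)   # speed density m = 1/a for a(z) = 1 + z^2 (my example)
    dz = 2.0 * h / n
    return sum((h - abs(z - x)) * m(z)
               for z in (x - h + (i + 0.5) * dz for i in range(n))) * dz

x = 0.7
ratios = [mean_exit_small(x, h) / ((1.0 / (1.0 + x * x)) * h * h)
          for h in (0.2, 0.05, 0.01)]
assert abs(ratios[-1] - 1.0) < 0.01            # ratio -> 1 as h -> 0
assert abs(ratios[-1] - 1.0) <= abs(ratios[0] - 1.0)
```

For smooth m the error in (4.6) is of order h², which is what the shrinking sequence of ratios shows.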

Example 4.2. Second proof of Feller's test. Let T_y = inf{t : X_t = y}, T_α = lim_{a↓α} T_a, and τ_(a,b) = T_a ∧ T_b. We content ourselves to establish a variant of (b) in (2.1). The first of two steps is

(4.7) Lemma. Let 0 < b < β. Then

∫_α^0 (φ(z) − φ(α)) m(z) dz < ∞

if and only if inf_{α<a<0} P_0(T_a < T_b) > 0 and sup_{α<a<0} E_0(T_a ∧ T_b) < ∞.

Proof. (3.2) implies

P_0(T_a < T_b) = (φ(b) − φ(0))/(φ(b) − φ(a))

so inf_{α<a<0} P_0(T_a < T_b) > 0 if and only if φ(α) > −∞. Taking g ≡ 1 in (4.5) and using (4.4) we have

(4.8)   E_x τ_(a,b) = 2((φ(b) − φ(x))/(φ(b) − φ(a))) ∫_a^x (φ(z) − φ(a)) m(z) dz + 2((φ(x) − φ(a))/(φ(b) − φ(a))) ∫_x^b (φ(b) − φ(z)) m(z) dz

The second integral always stays bounded as a ↓ α, so E_0 τ_(a,b) stays bounded as a ↓ α if and only if φ(α) > −∞ and

∫_α^0 (φ(z) − φ(α)) m(z) dz < ∞   □

The two conditions in (4.7) are equivalent to P_0(T_α < T_b) > 0 and E_0(T_α ∧ T_b) < ∞. To complete the proof now we will show

(4.9) Lemma. If P_0(T_α < T_b) > 0 then E_0(T_α ∧ T_b) < ∞.

Proof. P_0(T_α < T_b) > 0 implies P_0(T_α < ∞) > 0. Pick M large enough so that P_0(T_α ≤ M) ≥ ε > 0 and P_0(T_b ≤ M) ≥ ε > 0. By considering the first time the process starting from 0 hits x and using the strong Markov property, it follows that if 0 < x < b then

P_0(T_b ≤ t) = E_0(P_x(T_b ≤ t − s)|_{s=T_x}; T_x ≤ t) ≤ P_x(T_b ≤ t)

A similar argument shows

P_x(T_α ≤ t) ≥ P_0(T_α ≤ t)   for α < x < 0

Combining the last two results shows that

P_x(T_α ∧ T_b ≤ M) ≥ ε > 0   for all α < x < b

Letting τ = T_α ∧ T_b and repeating the proof of (1.2) in Chapter 3 now, we have

P_x(τ > kM) = E_x(P_{X_{(k−1)M}}(τ > M); τ > (k − 1)M) ≤ (1 − ε) sup_{α<y<b} P_y(τ > (k − 1)M)

It follows by induction that for all integers k ≥ 1

sup_{α<x<b} P_x(τ > kM) ≤ (1 − ε)^k

Using (5.7) in Chapter 1 of Durrett (1995) now, we get an upper bound on all the moments of the exit time τ. □

6.5. Boundary Behavior

In Section 6.1, when we constructed a diffusion on a half line or bounded interval, we gave up when the process reached the boundary and said that it had exploded. In this section we will identify situations in which we can extend the life of the process. To explain how we might continue, we begin with a trivial example.

Example 5.1. Reflecting boundary at 0. Suppose b(x) ≡ 0 and σ(x) is positive and continuous on [0, ∞). To define a solution to dX_t = σ(X_t) dB_t with a reflecting boundary at 0, we extend σ to R by setting σ(−x) = σ(x), let Y_t be a solution of dY_t = σ(Y_t) dB_t on R, and then let X_t = |Y_t|.

To use the last recipe in general, we start with X_t a solution of MP(b, a) and let φ be its natural scale. Y_t = φ(X_t) solves MP(0, h) where

h(y) = (φ'(φ⁻¹(y)))² a(φ⁻¹(y))

To see if we can start the process Y at 0, we extend h to the negative half-line by setting h(−y) = h(y) and let Z_t be a solution of MP(0, h) on R. If we let m̄(y) = 1/h(y) be the speed measure for Z_t, which is on its natural scale, we can use (4.6) and the symmetry m̄(−y) = m̄(y) to conclude

½ E_0 τ_(−ε,ε) = ∫_0^ε (ε − y) m̄(y) dy

Changing variables y = φ(x), dy = φ'(x) dx, ε = φ(δ), the above

= ∫_0^δ (φ(δ) − φ(x)) φ'(x) m̄(φ(x)) dx


Now, (i) recalling that the speed measure for X_t is m(x) = 1/(φ'(x) a(x)) and changing notation φ'(x) = s(x), and (ii) using Fubini's theorem, the above

= ∫_0^δ (φ(δ) − φ(x)) m(x) dx = ∫_0^δ dz s(z) ∫_0^z dx m(x)

Introducing M as the antiderivative of m, to bring out the analogy with the condition in Feller's test (2.1), we have

(5.1)   E_0 τ_(−ε,ε) = 2 ∫_0^δ (M(z) − M(0)) s(z) dz

To see that when (5.1) is infinite the process cannot escape from 0, we note that repeating the proof of (4.9) shows

(5.2) Lemma. If P_0(τ_(−ε,ε) < ∞) > 0 then E_0 τ_(−ε,ε) < ∞.

Proof. Pick t_0 so that P_0(τ_(−ε,ε) ≤ t_0) = η > 0. Using the strong Markov property as in the proof of (4.9), we first get

sup_x P_x(τ_(−ε,ε) > t_0) ≤ 1 − η

and then conclude that for each integer k ≥ 1

sup_x P_x(τ_(−ε,ε) > k t_0) ≤ (1 − η)^k

from which the desired result follows. □

Consider a diffusion on (0, r) where r ≤ ∞, let δ ∈ (0, r), and let

I = ∫_0^δ (φ(z) − φ(0)) m(z) dz
J = ∫_0^δ (M(z) − M(0)) s(z) dz

Feller's test implies that when I < ∞ we can get IN to the boundary point; the analysis above shows that when J < ∞ we can get OUT from the boundary point. This leads to four possible combinations, which were named by Feller as follows:

  I      J      name
< ∞    < ∞    regular
< ∞    = ∞    absorbing
= ∞    < ∞    entrance
= ∞    = ∞    natural

The second case is called absorbing because we can get in to 0 but cannot get out. The third is called an entrance boundary because we cannot get to 0 but we can start the process there. Finally, in the fourth case the process can neither get to nor start at 0, so it is reasonable to exclude 0 from the state space. We will now give examples of the various possibilities.

Example 5.2. Feller's branching diffusion. Let

dX_t = β X_t dt + σ √X_t dB_t

Of course we want to suppose σ > 0, but we will also suppose β > 0, since the calculations are somewhat different in the cases β = 0 and β < 0; see Exercise 5.1 below. Using the formula in (1.4), the natural scale is

φ(x) = ∫_0^x exp(∫_0^y (−2βz/(σ²z)) dz) dy = ∫_0^x e^{−2βy/σ²} dy = (σ²/2β)(1 − e^{−2βx/σ²})

which maps [0, ∞) onto [0, σ²/2β). The speed measure is

m(x) = 1/(φ'(x) a(x)) = e^{2βx/σ²}/(σ² x)

To investigate the boundary at 0, we note that

I = ∫_0^1 m(x)(φ(x) − φ(0)) dx = ∫_0^1 (1/(2βx))(e^{2βx/σ²} − 1) dx < ∞

since the integrand converges to 1/σ² as x → 0. To calculate J we note that m(x) ≈ 1/(σ²x) as x → 0, so M(0) = −∞ and J = ∞. The combination I < ∞ and J = ∞ says that the process can get into 0 but not get out, so

0 is an absorbing boundary.

To investigate the boundary at ∞, we note that

I = ∫_1^∞ m(x)(φ(∞) − φ(x)) dx = ∫_1^∞ (1/(2βx)) dx = ∞


To calculate J we note that when β ≥ 0, m(x) ≥ 1/(σ²x), so M(∞) = ∞ and J = ∞. The combination I = ∞ and J = ∞ says that the process cannot get into or out of ∞, so

∞ is a natural boundary.

Exercise 5.1. Show that 0 is an absorbing boundary and ∞ is a natural boundary when (a) β = 0, (b) β < 0.
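The two integrals behind the classification of 0 for Feller's branching diffusion can be checked numerically. The sketch below is my own (with β = σ = 1): the I-integrand (e^{2x} − 1)/(2x) extends continuously to x = 0, so ∫_ε^1 converges as ε → 0; the J-integrand behaves like 1/x, so the corresponding integral grows like log(1/ε).

```python
import math

def I_integrand(x, beta=1.0, sigma2=1.0):
    # m(x)(phi(x) - phi(0)) = (e^{2 beta x / sigma^2} - 1) / (2 beta x)
    return (math.exp(2.0 * beta * x / sigma2) - 1.0) / (2.0 * beta * x)

def tail(f, eps, n=20000):
    # midpoint approximation of int_eps^1 f(x) dx
    h = (1.0 - eps) / n
    return h * sum(f(eps + (i + 0.5) * h) for i in range(n))

# I: integrand -> 1/sigma^2 near 0, so the partial integrals settle down (0 can be reached)
vals_I = [tail(I_integrand, e) for e in (1e-2, 1e-3, 1e-4)]
assert vals_I[2] - vals_I[1] < 0.01
# J: integrand ~ 1/(sigma^2 x), so the partial integrals keep growing (0 is absorbing)
vals_J = [tail(lambda x: 1.0 / x, e) for e in (1e-2, 1e-4)]
assert vals_J[1] - vals_J[0] > 4.0   # roughly log(100) ~ 4.6
```

This is the numerical face of I < ∞, J = ∞: in, but not out.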

Example 5.3. Bessel process. Consider

dX_t = (γ/2X_t) dt + dB_t

on (0, ∞). Here γ > −1 is the index of the Bessel process. To explain the restriction on γ and to prepare for the results we will derive, note that Example 1.2 in Chapter 5 shows that the radial part of d dimensional Brownian motion is a Bessel process with γ = d − 1. The natural scale is

φ(x) = ∫_1^x exp(−∫_1^y (γ/z) dz) dy = ∫_1^x y^{−γ} dy = { ln x if γ = 1;  (x^{1−γ} − 1)/(1 − γ) if γ ≠ 1 }

From the last computation we see that if γ ≥ 1 then φ(0) = −∞ and I = ∞. To handle −1 < γ < 1 we observe that the speed measure is

m(x) = 1/(φ'(x) a(x)) = x^γ

So taking δ = 1 in the definition of I,

I = ∫_0^1 (x^{1−γ}/(1 − γ)) x^γ dx < ∞

To compute J we observe that for any γ > −1, M(x) = x^{γ+1}/(γ + 1) and

J = ∫_0^1 (x^{γ+1}/(γ + 1)) x^{−γ} dx < ∞

Combining the last two conclusions we see that 0 is an

entrance boundary if γ ∈ [1, ∞)
regular boundary if γ ∈ (−1, 1)

The first conclusion is reasonable since in d ≥ 2 we can start Brownian motion at 0, but then it does not hit 0 at positive times. The second conclusion can be thought of as saying that in dimensions d < 2 Brownian motion will hit 0.

Exercise 5.2. Show that 0 is an absorbing boundary if γ ≤ −1. We leave it to the reader to ponder the meaning of Brownian motion in dimension d ≤ 0.

Example 5.4. Power noise. Consider

dX_t = |X_t|^δ dB_t

on (0, ∞). The natural scale is φ(x) = x and the speed measure is m(x) = 1/(φ'(x) a(x)) = x^{−2δ}, so

I = ∫_0^1 x^{1−2δ} dx   { < ∞ if δ < 1;  = ∞ if δ ≥ 1 }

When δ ≥ 1/2, M(0) = −∞ and hence J = ∞. When δ < 1/2,

J = ∫_0^1 (x^{1−2δ}/(1 − 2δ)) dx < ∞

Combining the last two conclusions we see that the boundary point 0 is

natural if δ ∈ [1, ∞)
absorbing if δ ∈ [1/2, 1)
regular if δ ∈ (0, 1/2)

The fact that we can start at 0 if and only if δ < 1/2 is suggested by (3.3) and Example 6.1 in Chapter 5. The new information here is that the solution can reach 0 if and only if δ < 1. For another proof of that result, recall the time change recipe in Example 6.1 of Chapter 5 for constructing solutions of the equation above and do

Exercise 5.3. Consider

H = ∫_0^{T_0} B_s^{−2δ} 1_{(B_s ≤ 1)} ds   and   H' = ∫_0^{T_0} B_s^{−2δ} 1_{(B_s > 1)} ds

Show that (a) for any δ > 0, P(H' < ∞) = 1. (b) EH < ∞ when δ < 1. (c) P(H = ∞) = 1 when δ ≥ 1.

Exercise 5.4. Show that the boundary at ∞ in Example 5.4 is natural if δ ≤ 1 and entrance if δ > 1.
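The classification in Example 5.4 can be packaged as a small function. The sketch below is my own: both criteria come down to convergence of ∫_0^1 x^p dx (I needs p = 1 − 2δ integrable; a finite M(0) needs p = −2δ integrable), which holds exactly when p > −1, tested here via the closed form of the truncated integral.

```python
def converges(exponent):
    # does int_0^1 x^exponent dx converge? true iff exponent > -1
    p = exponent + 1.0
    if p <= 0:
        return False                 # includes the log-divergent case p = 0
    return (1.0 - (1e-12) ** p) / p < 1e3

def classify_power_noise(delta):
    # boundary point 0 for dX_t = |X_t|^delta dB_t: phi(x) = x, m(x) = x^{-2 delta}
    I_fin = converges(1.0 - 2.0 * delta)   # can get IN to 0
    J_fin = converges(-2.0 * delta)        # M(0) finite, can get OUT of 0
    return {(True, True): "regular", (True, False): "absorbing",
            (False, True): "entrance", (False, False): "natural"}[(I_fin, J_fin)]

assert classify_power_noise(0.25) == "regular"
assert classify_power_noise(0.75) == "absorbing"
assert classify_power_noise(1.5) == "natural"
```

Consistent with the text, the entrance case never occurs for this family: whenever J is finite (δ < 1/2), I is finite as well.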


Remark. The fact that we cannot reach ∞ is expected: (5.3) in Chapter 5 implies that these processes do not explode for any δ. The second conclusion is surprising at first glance, since the process starting at ∞ is a time change of a Brownian motion starting from ∞. However, the function we use in (5.1) of Chapter 5 is g(x) = x^{−2δ}, so (2.9) in Chapter 3 implies

E ∫ g(B_s) 1_{(B_s ≤ M)} ds = ∫_1^M x^{−2δ} · 2x dx

which stays bounded as M → ∞ if and only if δ > 1.

Exercise 5.5. Wright-Fisher diffusion. Recall the definition given in Example 1.7 in Chapter 5. Show that the boundary point 0 is

absorbing if β = 0
regular if β ∈ (0, 1/2)
entrance if β ≥ 1/2

Hint: simplify calculations by first considering the case α = 0 and then arguing that the value of α is not important.

6.6. Applications to Higher Dimensions

In this section we will use the technique of comparing a multidimensional diffusion with a one dimensional one to obtain sufficient conditions (a) for no explosion, (b) for recurrence and transience, and (c) for diffusions to hit points or not. Let S_r = inf{t : |X_t| = r} and τ_∞ = lim_{r→∞} S_r. Throughout this section we will suppose that X_t is a solution to MP(b, a) on [0, τ_∞), and that the coefficients satisfy

(i) b is measurable and locally bounded
(ii) a is continuous and nondegenerate, i.e., for each x,

Σ_{i,j} y_i a_{ij}(x) y_j > 0   when y ≠ 0

The first detail is to show that X_t will eventually exit from any ball.

(6.1) Theorem. Let S_r = inf{t : |X_t| = r}. Then there is a constant C < ∞ so that E_x S_r ≤ C for all x with |x| ≤ r.

Proof. Assumptions (i) and (ii) imply that we can pick an α large enough so that

(α/2) a_{ii}(x) ≥ 1 + |b_i(x)| for all i

for all x with |x| ≤ r. Let h(x) = Σ_{i=1}^d cosh(αx_i). Using Itô's formula we have

h(X_t) − h(X_0) = ∫_0^t Σ_i α sinh(αX_s^i) b_i(X_s) ds + local mart. + ∫_0^t Σ_i (α²/2) cosh(αX_s^i) a_{ii}(X_s) ds

Noting that cosh(y) ≥ 1 and

cosh(y) ≥ max(e^y/2, e^{−y}/2) ≥ |sinh(y)|

our choice of α implies

Σ_i ((α²/2) cosh(αx_i) a_{ii}(x) + α sinh(αx_i) b_i(x)) ≥ α Σ_i cosh(αx_i)((α/2) a_{ii}(x) − |b_i(x)|) ≥ α Σ_i cosh(αx_i) ≥ α

so h(X_t) − αt is a local submartingale on [0, S_r). Using the optional stopping theorem at time S_r ∧ t, we have

h(x) ≤ E_x(h(X_{S_r∧t}) − α(S_r ∧ t)) + α E_x(S_r ∧ t) ≤ E_x h(X_{S_r∧t})

Since h(X_{S_r∧t}) ≤ d cosh(αr), it follows that

E_x(S_r ∧ t) ≤ d cosh(αr)/α

Letting t → ∞ gives the desired result. □

Remark. Tracing back through the proof shows C = d cosh(αr)/α, where α is chosen so that

(α/2) a_{ii}(x) ≥ 1 + |b_i(x)|

To see that the last estimate is fairly sharp, consider a special case.

Example 6.1. Suppose d = 1, a(x) = 1, and b(x) = −β sgn(x) with β > 0. The natural scale has

φ(x) = ∫_0^x e^{2βy} dy = (e^{2βx} − 1)/2β

for z k 0 and w(-z) =-w(z),

so the spqed measure

Section 6.6 Applications to mgherDimensions 237

Our plan is to compare St = 1.412with

d% = /z(Zt) dt + bLztjdNt on ((),x)

So we introduce the natural scale w(z) nd speed measure mtzl for Zt:

1 -apjal

rrl(z)=,

= ce (z)

Using (4.8)now with b = r, c =-r,

z = 0 an noting that

2 w(r)- w(0) 2w(0) - w(=r) .1=

e(r) - w(-r) w(r)- w(-r)

we have

=zgy) ojs(z)= exp (-j ,a(,)

=s(,) dpw(z)= jim(&)= 1/(s(z)p2(z))

r 1E 'r

= (d2#r- ezpz)e-zpz dz0 (-r,r) gpo0 1

+ gp(-n2#IzI +

e2#r) :-2#lzI dz-r

The two integrals are equal so

j. r,

E 0'r(-r,r) = -fj

.

2V(r-Z)- lj dz

. . .m u . ' .,

.:

t z r 1 r-

a2J(r-z) - - = (elpr - lj - -

2#2 p (j ' 2p2 *,

' p

To compare with (6.1)now, note that when c(z) :: 1 and (z)c: =#Qrt(z),thremark after the proof says we can pick a = 27 + 2 so the inequality in (6.1)is

coshttz; + 2)r)E 0q-r,r) S gp + : , ,

a. Exploslons

With the little detail of the fniteness f Ersr out of th way Wenow intro-duce a compyrison between a multidimensional difl-usionand a one dimension>l

. .' .. ' . .' . ' '.

on, which is d t Khsminskii (1960).We begi by ihtrodciik thi yiisof deqnitions that may look strange but are tailor made for coxrijutatitm in(d), (e),and (a)of the proof of (6.2). '

In view of Eellez''s test, (2.1),the next Tesult says that if Z3 can't reach (:x7 in.' . ... .. ''finite time then X3 des not explde.

(6.2) Theorem. There is no explosion if

* dzm(z)(w(x) - .(z)) - x./r,Proof To prove the result we will let Rt = IaLl2and flnd a function g withg(r) -+

(x) as r-+

(y) so that e-'g(Jj) is a supermqrtingale while Rt 1. Tosee that this is enoug, let Sr = inf (t : Rt pu; zr) d use the optional stoppingtheorem at time S1 A Sa A 8 to conclude that for IzI> 1

..

:

j t-j a p y < s y j;#(lzI) 2. e gn ) r(n t

tetting . = limryx%, then n-+

cxl and t -+

x we have Pss. < sj :'= 0.To deine the function g, we follow the proof of (2.1).Lt X z.= pzgj,

use the construction there to produce a function f so thatc-f.J(')E

is a localmartingyle, thn take g = J o p where p is the natural scale of Zt . It followsfrin ui ssmptio? nd (2.2a)that grj -+

(:xn as-+

n.Since g 6 C2 it follows from It's formula that

. ..(. .. . . . .. .'

. . . .. . .

e-%gztt- #(.%) = - j e-'.(z,) dsf

+ e-'g'zslyzs) ds + local mart.0

t1 -,

u g ;#a(z; d,s+ j e g ( , ,

a(z) = zzTctzlz and

#(z) : jzjz =

rjv(r)= sup ta(z)

/z(r)= p(r)s'(r) and p(r) = 2p(r)

Page 124: Stochastic Calculus

238 Chapter 6 One Dimensional DiRsions

Since $e^{-t}f(Z_t)$ is a local martingale, it follows that

$$\alpha(z)\,f''(z) + \beta(z)\,f'(z) = f(z)$$

Using the definitions of $g$ and $\varphi$ now, we can rewrite this as

$$\text{(a)}\qquad \alpha(r)\,g''(r) + \beta(r)\,g'(r) = g(r)$$

Before plunging into the proof we need one more property, which follows from (2.6c) and the fact that our natural scale is an increasing function with $\varphi(1) = 0$:

$$\text{(b)}\qquad g'(r) \ge 0 \quad\text{for } r \ge 1$$

A little calculus gives

Section 6.6 Applications to Higher Dimensions 239

$$e^{-t}g(R_t) - g(R_0) = \text{local mart.} - \int_0^t e^{-s}g(R_s)\,ds + \int_0^t e^{-s}g'(R_s)\,\tilde b(X_s)\,ds + \int_0^t e^{-s}g''(R_s)\,\tilde a(X_s)\,ds$$

Using the functions $\alpha(r)$ and $\beta(r)$ introduced above, we can write the last equation compactly as

$$\text{(d)}\qquad e^{-t}g(R_t) - g(R_0) = \text{local mart.} + \int_0^t e^{-s}\left[-g(R_s) + \tilde b(X_s)\,g'(R_s) + \tilde a(X_s)\,g''(R_s)\right]ds$$

To bound the integrand when $R_s \ge 1$ we use (i) the definition of $\alpha$ and (b), then (ii) the equation in (a) and the fact that $g \ge 0$:

$$-g(R_s) + \tilde b(X_s)\,g'(R_s) + \tilde a(X_s)\,g''(R_s) \le -g(R_s) + \bigl(\beta(R_s)\,g'(R_s) + \alpha(R_s)\,g''(R_s)\bigr)\,\frac{\tilde a(X_s)}{\alpha(R_s)}$$

$$\text{(e)}\qquad = -g(R_s) + g(R_s)\,\frac{\tilde a(X_s)}{\alpha(R_s)} \le 0$$

Combining (d) and (e) shows that $e^{-t}g(R_t)$ is a local supermartingale while $R_t \ge 1$ and completes the proof. □

b. Recurrence and Transience

The next two results give sufficient conditions for transience and recurrence in $d > 1$. In this part we use the formulation of Meyers and Serrin (1960). Let

$$\bar a(x) = \frac{x^T a(x)\,x}{|x|^2} \qquad d_e(x) = \frac{2\,x\cdot b(x) + \operatorname{tr}(a(x))}{\bar a(x)}$$

where $\operatorname{tr}(a(x)) = \sum_i a_{ii}(x)$ is the trace of $a$. If $g$ is $C^2$ then

$$D_i\,g(|x|^2) = g'(|x|^2)\,2x_i \qquad D_{ij}\,g(|x|^2) = g''(|x|^2)\,4x_i x_j + g'(|x|^2)\,2\delta_{ij}$$

We call $d_e(x)$ the effective dimension at $x$ because (6.3) and (6.4) will show that there will be recurrence or transience if the dimension is $< 2$ or $> 2$ in a neighborhood of infinity. To formulate a result that allows $d_e(x)$ to approach 2 as $|x| \to \infty$, we need a definition: $\delta(t)$ is said to be a Dini function if and only if

$$\int_1^\infty \frac{\delta(t)}{t}\,dt < \infty$$

In the next two results we will apply the definition to

$$\delta(t) = \exp\left(-\int_1^t \frac{\varepsilon(s)}{s}\,ds\right)$$

To see what the conditions say, note that $\delta(t) = (\log t)^{-p}$ is a Dini function if $p > 1$ but not if $p \le 1$, and this corresponds to $\varepsilon(s) = p/\log s$.

(6.3) Theorem. Suppose $d_e(x) \ge 2(1 + \varepsilon(|x|))$ for $|x| \ge R$ and $\delta(t)$ is a Dini function. Then $X_t$ is transient. To be precise, if $|x| > R$ then $P_x(S_R < \infty) < 1$.

Here $S_r = \inf\{t : |X_t| = r\}$, and being transient includes the possibility of explosion.

(6.4) Theorem. Suppose $d_e(x) \le 2(1 + \varepsilon(|x|))$ for $|x| \ge R$ and $\delta(t)$ is not a Dini function. Then $X_t$ is recurrent. To be precise, if $|x| > R$ then $P_x(S_R < \infty) = 1$.

Proof of (6.3) The first step is to let

$$\varphi(r) = \int_r^\infty \exp\left(-\int_1^s \frac{1+\varepsilon(t)}{t}\,dt\right) ds$$


which has

$$\varphi'(r) = -\exp\left(-\int_1^r \frac{1+\varepsilon(t)}{t}\,dt\right) < 0 \qquad \varphi''(r) = \frac{1+\varepsilon(r)}{r}\,\exp\left(-\int_1^r \frac{1+\varepsilon(t)}{t}\,dt\right)$$

and hence satisfies

$$r\,\varphi''(r) + (1+\varepsilon(r))\,\varphi'(r) = 0$$

Letting $R_t = |X_t|^2$ and using Itô's formula, we get the following, which we will use four times below:

(6.5) Lemma. If $r\,g''(r) + \gamma(r)\,g'(r) = 0$ for $r \in (u^2, v^2)$ and $g'(|x|^2)\,\bigl(d_e(x) - 2\gamma(|x|^2)\bigr) \le 0$ when $|x| \in (u, v)$, then $g(R_t)$ is a local supermartingale while $u < |X_t| < v$.

Proof Using Itô's formula as in the proof of (6.2), we have

$$g(R_t) - g(R_0) = \int_0^t g'(R_s)\,2X_s\cdot b(X_s)\,ds + \text{local mart.} + \frac{1}{2}\sum_{i,j}\int_0^t g''(R_s)\,4X_{s,i}X_{s,j}\,a_{ij}(X_s)\,ds + \frac{1}{2}\sum_i\int_0^t g'(R_s)\,2a_{ii}(X_s)\,ds$$

$$= \text{local mart.} + \int_0^t 2\,\bar a(X_s)\left[R_s\,g''(R_s) + g'(R_s)\,\frac{d_e(X_s)}{2}\right] ds$$

Using our two assumptions on $g$, we get for $u < |X_s| < v$

$$R_s\,g''(R_s) + g'(R_s)\,\frac{d_e(X_s)}{2} = \frac{g'(R_s)}{2}\,\bigl(d_e(X_s) - 2\gamma(R_s)\bigr) \le 0$$

which proves the result. □

(6.5) implies that $\varphi(R_t)$ is a local supermartingale while $R_t \ge R^2$. If $R \le r < |x| < s < \infty$, then using the optional stopping theorem at time $\tau = S_r \wedge S_s$ ((6.1) implies that $P_x(\tau < \infty) = 1$) we see that

$$\varphi(|x|^2) \ge E_x\,\varphi(R_\tau) \ge \varphi(r^2)\,P_x(S_r < S_s)$$

Rearranging gives

$$P_x(S_r < S_s) \le \frac{\varphi(|x|^2)}{\varphi(r^2)} < 1$$

The upper bound is independent of $s$, and (6.3) follows.

Proof of (6.4) This time we let

$$\psi(r) = \int_1^r \exp\left(-\int_1^s \frac{1+\varepsilon(t)}{t}\,dt\right) ds$$

Since $\psi'(r) = -\varphi'(r)$, we again have

$$r\,\psi''(r) + (1+\varepsilon(r))\,\psi'(r) = 0$$

but this time $\psi'(r) \ge 0$. Since we have assumed $d_e(x) \le 2(1+\varepsilon(|x|))$ for $|x| \ge R$, it follows from (6.5) that $\psi(R_t)$ is a local supermartingale while $R_t \ge R^2$. If $R \le r < |x| < s < \infty$, using the optional stopping theorem at time $\tau = S_r \wedge S_s$ ((6.1) implies that $P_x(\tau < \infty) = 1$) we see that

$$\psi(|x|^2) \ge E_x\,\psi(R_\tau) = \psi(r^2)\,P_x(S_r < S_s) + \psi(s^2)\,\bigl(1 - P_x(S_r < S_s)\bigr)$$

Rearranging gives

$$P_x(S_r < S_s) \ge \frac{\psi(s^2) - \psi(|x|^2)}{\psi(s^2) - \psi(r^2)}$$

Letting $s \to \infty$ and noting that $\psi(s^2) \to \infty$, since $\delta(t)$ is not a Dini function, gives the desired result. □

c. Hitting Points

The proofs of (6.3) and (6.4) generalize easily to investigate whether multidimensional diffusions hit 0. Let $T_0 = \inf\{t > 0 : X_t = 0\}$.

(6.6) Theorem. If $d_e(x) \ge 2(1 - \varepsilon(|x|))$ for $|x| \le \rho$ and

$$\int_0^\rho \frac{1}{y}\exp\left(-\int_y^\rho \frac{\varepsilon(z)}{z}\,dz\right) dy = \infty$$

then $P_x(T_0 < \infty) = 0$.

(6.7) Theorem. If $d_e(x) \le 2(1 - \varepsilon(|x|))$ for $|x| \le \rho$ and

$$\int_0^\rho \frac{1}{y}\exp\left(-\int_y^\rho \frac{\varepsilon(z)}{z}\,dz\right) dy < \infty$$

then $P_x(T_0 < \infty) > 0$.


We have changed the sign of $\varepsilon$ to make the proofs more closely parallel the previous pair. Another explanation of the change is that Brownian motion

is recurrent in $d \le 2$ — hits points in $d < 2$
is transient in $d > 2$ — doesn't hit points in $d \ge 2$

so the boundary is now a little below 2 dimensions rather than a little above.

Proof of (6.6) Let

$$\varphi(r) = \int_r^{\rho^2} \exp\left(\int_s^{\rho^2} \frac{1-\varepsilon(t)}{t}\,dt\right) ds$$

which has

$$\varphi'(r) = -\exp\left(\int_r^{\rho^2} \frac{1-\varepsilon(t)}{t}\,dt\right) < 0$$

and hence satisfies

$$r\,\varphi''(r) + (1-\varepsilon(r))\,\varphi'(r) = 0$$

Letting $R_t = |X_t|^2$ and using (6.5) shows $\varphi(R_t)$ is a local supermartingale while $R_t \in (0, \rho^2)$. If $0 < r < s \le \rho$, using the optional stopping theorem at time $\tau = S_r \wedge S_s$ we see that

$$\varphi(|x|^2) \ge E_x\,\varphi(R_\tau) \ge \varphi(r^2)\,P_x(S_r < S_s)$$

Rearranging gives

$$P_x(S_r < S_s) \le \varphi(|x|^2)/\varphi(r^2)$$

Letting $r \to 0$ and noting $\varphi(r^2) \to \infty$ gives

$$P_x(T_0 < S_s) = 0 \quad\text{for } 0 < |x| < s$$

To replace $S_s$ by $\infty$ in the last equation, let $\sigma_0 = 0$ and for $n \ge 1$ let

$$\tau_n = \inf\{t > \sigma_{n-1} : |X_t| = \rho\} \qquad \sigma_n = \inf\{t > \tau_n : |X_t| = \rho/2\}$$

The strong Markov property implies that the $\sigma_{n+1} - \sigma_n$ are i.i.d., so $\sigma_n \to \infty$ as $n \to \infty$. It is impossible for the process to hit 0 in $[\tau_n, \sigma_n]$, and hitting 0 in $[\sigma_n, \tau_{n+1})$ has probability 0, so we have

$$P_x(T_0 < \infty) = 0 \quad\text{for } 0 < |x| < \rho$$

Using (3.2) and the Markov property now extends the conclusion to $|x| \ge \rho$. □

Proof of (6.7) This time we let

$$\psi(r) = \int_0^r \exp\left(\int_s^{\rho^2} \frac{1-\varepsilon(t)}{t}\,dt\right) ds$$

Since $\psi'(r) = -\varphi'(r)$, we have $\psi'(r) \ge 0$ and

$$r\,\psi''(r) + (1-\varepsilon(r))\,\psi'(r) = 0$$

Letting $R_t = |X_t|^2$ and using (6.5) shows $\psi(R_t)$ is a local supermartingale while $R_t \in (0, \rho^2)$. If $0 < r < s \le \rho$, using the optional stopping theorem at time $\tau = S_r \wedge S_s$ we see that

$$\psi(|x|^2) \ge E_x\,\psi(R_\tau) = \psi(r^2)\,P_x(S_r < S_s) + \psi(s^2)\,\bigl(1 - P_x(S_r < S_s)\bigr)$$

Rearranging gives

$$P_x(S_r < S_s) \ge \frac{\psi(s^2) - \psi(|x|^2)}{\psi(s^2) - \psi(r^2)}$$

Letting $r \to 0$ and noting $\psi(r^2) \to 0$ gives

$$P_x(T_0 < S_s) > 0 \quad\text{for } 0 < |x| < s$$

Using (3.2) and the Markov property now extends the conclusion to $|x| \ge \rho$. □

(6.8) Corollary. If (i) $d \ge 3$, or (ii) $d = 2$ and $a$ is Hölder continuous, then $P_x(T_0 < \infty) = 0$ for all $x$.

Proof Referring to (4.2) in Chapter 5, we can write $a(0) = U^T\Lambda U$ where $\Lambda$ is a diagonal matrix with entries $\lambda_i$. Let $\Gamma$ be the diagonal matrix with entries $1/\sqrt{\lambda_i}$ and let $V = \Gamma U$. The formula for the covariance of stochastic integrals implies that $\bar X_t = V X_t$ has

$$\bar a(0) = V a(0) V^T = I$$

So we can without loss of generality suppose that $a(0) = I$. In $d \ge 3$, the continuity of $a$ implies $d_e(x) \to d$ as $x \to 0$, so we can take $\varepsilon(r) \equiv 0$ and the desired result follows from (6.6). If $d = 2$ and $a$ is Hölder continuous with exponent $\delta$, we can take $\varepsilon(r) = Cr^\delta$. To extract the desired result from (6.6), we note that

$$\int_0^\rho \frac{1}{y}\exp\left(-\int_y^\rho C z^{\delta-1}\,dz\right) dy = \int_0^\rho \frac{1}{y}\exp\left(-\frac{C}{\delta}\bigl(\rho^\delta - y^\delta\bigr)\right) dy = \infty$$

since the exponential in the integrand converges to a positive limit as $y \to 0$. □

It follows from (6.7) that a two dimensional diffusion can hit 0.

Example 6.1. Suppose $b(x) \equiv 0$, $a(0) = I$, and for $0 < |x| < 1/e$ let $a(x)$ be the matrix with eigenvectors

$$v_1 = \frac{1}{|x|}(x_1, x_2) \qquad v_2 = \frac{1}{|x|}(-x_2, x_1)$$

and associated eigenvalues

$$\lambda_1 = 1 \qquad \lambda_2 = 1 - \frac{2\delta}{\log(1/|x|)}$$

These definitions are chosen so that $d_e(x) = 2 - 2\delta/\log(1/|x|)$, and hence we can take $\varepsilon(r) = \delta/\log(1/r)$. Plugging into the integral,

$$\int_0^{1/e} \exp\left(-\int_y^{1/e} \frac{\varepsilon(z)}{z}\,dz\right)\frac{dy}{y} = \int_0^{1/e} \exp\bigl(-\delta \log\log(1/y)\bigr)\,\frac{dy}{y} = \int_0^{1/e} \frac{dy}{y\,(\log(1/y))^\delta} < \infty \quad\text{if } \delta > 1$$

Remark. The problem of whether or not diffusions can hit points has been investigated in a different guise by Gilbarg and Serrin (1956); see page 315.
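The effective dimension in Example 6.1 can be checked numerically. The sketch below builds the matrix $a(x)$ from the stated eigenvectors and eigenvalues (the value of $\delta$ and the test point are assumed choices) and verifies $d_e(x) = 2 - 2\delta/\log(1/|x|)$, using $d_e(x) = (2x\cdot b(x) + \operatorname{tr} a(x))/\bar a(x)$ with $b = 0$:

```python
import numpy as np

delta = 0.5
x = np.array([0.1, 0.2])            # a point with 0 < |x| < 1/e
r = np.linalg.norm(x)

v1 = x / r                          # radial eigenvector
v2 = np.array([-x[1], x[0]]) / r    # tangential eigenvector
lam1 = 1.0
lam2 = 1.0 - 2 * delta / np.log(1 / r)
a = lam1 * np.outer(v1, v1) + lam2 * np.outer(v2, v2)

abar = x @ a @ x / r**2             # x^T a(x) x / |x|^2, equals lam1 = 1
de = np.trace(a) / abar             # effective dimension (b = 0 here)
print(de, 2 - 2 * delta / np.log(1 / r))   # the two values agree
```

Since $v_1$ is radial, $\bar a(x) = \lambda_1 = 1$ and the effective dimension reduces to $\lambda_1 + \lambda_2$, slightly below 2 as the example requires.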

Diffusions as Markov Processes

In this chapter we will assume that the martingale problem MP(b, a) has a unique solution and hence gives rise to a strong Markov process. As the title says, we will be interested in diffusions as Markov processes, in particular, in their asymptotic behavior as $t \to \infty$.

7.1. Semigroups and Generators

The results in this section hold for any Markov process $X_t$. For any bounded measurable $f$, and $t \ge 0$, let

$$T_t f(x) = E_x f(X_t)$$

The Markov property implies that

$$E_x\bigl(f(X_{t+s})\,\big|\,\mathcal{F}_s\bigr) = T_t f(X_s)$$

So taking expected values gives the semigroup property

$$\text{(1.1)}\qquad T_{s+t}f(x) = T_s(T_t f)(x)$$

for $s, t \ge 0$. Introducing the norm $\|f\| = \sup_x |f(x)|$ and $L^\infty = \{f : \|f\| < \infty\}$, we see that $T_t$ is a contraction semigroup on $L^\infty$, i.e.,

$$\text{(1.2)}\qquad \|T_t f\| \le \|f\|$$

Following Dynkin (1965), but using slightly different notation, we will define the domain of $T$ to be

$$D(T) = \{f \in L^\infty : \|T_t f - f\| \to 0 \text{ as } t \to 0\}$$

As the next result shows, this is a reasonable choice.

(1.3) Theorem. $D(T)$ is a vector space, i.e., if $f_1, f_2 \in D(T)$ and $c_1, c_2 \in \mathbf{R}$, then $c_1 f_1 + c_2 f_2 \in D(T)$. $D(T)$ is closed. If $f \in D(T)$ then $T_s f \in D(T)$ and $s \to T_s f$ is continuous.


Remark. Here and in what follows, limits are always taken in, and hence continuity is defined for, the uniform norm $\|\cdot\|$.

Proof Since $T_t(c_1 f_1 + c_2 f_2) = c_1 T_t f_1 + c_2 T_t f_2$, the first conclusion follows from the triangle inequality. To prove $D(T)$ is closed, let $f_n \in D(T)$ with $\|f_n - f\| \to 0$. Let $\varepsilon > 0$. If we pick $n$ large, $\|f_n - f\| < \varepsilon/3$. $f_n \in D(T)$, so $\|T_t f_n - f_n\| < \varepsilon/3$ for $t < t_0$. Using the triangle inequality and the contraction property (1.2) now, it follows that for $t < t_0$

$$\|T_t f - f\| \le \|T_t f - T_t f_n\| + \|T_t f_n - f_n\| + \|f_n - f\| \le \|f - f_n\| + \varepsilon/3 + \varepsilon/3 < \varepsilon$$

To prove the last claim in the theorem, we note that the semigroup and contraction properties, (1.1) and (1.2), imply that if $s < t$


$$\|T_t f - T_s f\| = \|T_s(T_{t-s}f - f)\| \le \|T_{t-s}f - f\|$$

so $s \to T_s f$ is continuous. Since $\|T_h(T_s f) - T_s f\| = \|T_s(T_h f - f)\| \le \|T_h f - f\| \to 0$, we also have $T_s f \in D(T)$. □

Remark. While $D(T)$ is a reasonable choice, it is certainly not the only one. A common alternative is to define the domain to be $C_0$, the continuous functions which converge to 0 at $\infty$, equipped with the sup norm. When the semigroup $T_t$ maps $C_0$ into itself, it is called a Feller semigroup. We will not follow this approach since this assumption does not hold, for example, in $d = 1$ when $\infty$ is an entrance boundary.

The infinitesimal generator of a semigroup is defined by

$$Af = \lim_{h \downarrow 0} \frac{T_h f - f}{h}$$

Its domain $D(A)$ is the set of $f$ for which the limit exists, that is, the $f$ for which there is a $g$ so that

$$\left\|\frac{T_h f - f}{h} - g\right\| \to 0 \quad\text{as } h \to 0$$

Before delving into properties of generators, we pause to identify some operators that cannot be generators.

(1.4) Theorem. If $f \in D(A)$ and $f(x_0) \ge f(y)$ for all $y$, then $Af(x_0) \le 0$.

Proof Since $x_0$ is a maximum, $T_h f(x_0) - f(x_0) \le 0$, so $Af(x_0) \le 0$. □

Example 1.1. The property in (1.4) looks innocent, but it eliminates a number of operators. For example, when $k \ge 3$ is an integer we cannot have $Af = f^{(k)}$, the $k$th derivative. To prove this we begin by noting that $g_k(x) = 1 - x^2 + x^k$ has a local maximum at 0. From this it follows that we can pick a small $\delta > 0$ and define a $C^\infty$ function $f_k \ge 0$ that agrees with $g_k$ on $(-\delta, \delta)$ and has a strict global maximum there. Since $Af_k(0) = k! > 0$, this contradicts (1.4). □

Returning to properties of actual generators, we have

(1.5) Theorem. $D(A)$ is dense in $D(T)$. If $f \in D(A)$ then $Af \in D(T)$, $T_t f \in D(A)$, and

$$\frac{d}{dt}T_t f = T_t Af = A\,T_t f \qquad T_t f - f = \int_0^t T_s Af\,ds$$

Remark. Here we are differentiating and integrating functions that take values in a Banach space, i.e., that map $s \in [0,\infty)$ into $L^\infty$. Most of the proofs from calculus apply if you replace the absolute value by the norm. Readers who want help justifying our computations can consult Dynkin (1965), Volume 1, pages 19–22, or Ethier and Kurtz (1986), pages 8–10.

Proof Let $f \in D(T)$ and $g_h = \int_0^h T_s f\,ds$. To explain the motivation for the last definition, we note that

$$\left\|\frac{1}{h}g_h - f\right\| \le \frac{1}{h}\int_0^h \|T_s f - f\|\,ds \to 0$$

as $h \to 0$ since $f \in D(T)$. Now

$$T_\sigma g_h = \int_0^h T_{\sigma+s}f\,ds = g_h + \int_h^{h+\sigma} T_s f\,ds - \int_0^\sigma T_s f\,ds$$

so we have

$$\left\|\frac{T_\sigma g_h - g_h}{\sigma} - (T_h f - f)\right\| \le \frac{1}{\sigma}\int_h^{h+\sigma}\|T_s f - T_h f\|\,ds + \frac{1}{\sigma}\int_0^\sigma \|T_s f - f\|\,ds \to 0$$

as $\sigma \to 0$. This shows that $g_h \in D(A)$ and $Ag_h = T_h f - f$. Since the first calculation shows $g_h/h \in D(A)$ and converges to $f$ as $h \to 0$, the first claim follows.


The fact that $Af \in D(T)$ follows from (1.3), which implies $(T_h f - f)/h \in D(T)$, and the fact that $D(T)$ is closed. To prove that $T_t f \in D(A)$, we note that the contraction property and the fact that $f \in D(A)$ imply

$$\left\|\frac{T_h T_t f - T_t f}{h} - T_t Af\right\| = \left\|T_t\!\left(\frac{T_h f - f}{h} - Af\right)\right\| \le \left\|\frac{T_h f - f}{h} - Af\right\| \to 0$$

The last equality shows that $A\,T_t f = T_t Af$ and that the right derivative of $T_t f$ is $T_t Af$. To see that the left derivative exists and has the same value, we note that

$$\left\|\frac{T_t f - T_{t-s}f}{s} - T_t Af\right\| \le \left\|T_{t-s}\!\left(\frac{T_s f - f}{s} - Af\right)\right\| + \|T_{t-s}Af - T_t Af\| \le \left\|\frac{T_s f - f}{s} - Af\right\| + \|Af - T_s Af\| \to 0$$

as $s \to 0$, since $f \in D(A)$ and $Af \in D(T)$. The final equation in (1.5) is obtained by integrating the one above it. □

To make the connection between generators and martingale problems, we will now prove

(1.6) Theorem. If $f \in D(A)$ then for any probability measure $\mu$,

$$M_t = f(X_t) - f(X_0) - \int_0^t Af(X_s)\,ds$$

is a $P_\mu$ martingale with respect to the filtration $\mathcal{F}_t$ generated by $X_t$.

Proof Since $f$ and $Af$ are bounded, $M_t$ is integrable for each $t$. Since $M_s$ is $\mathcal{F}_s$ measurable,

$$E_\mu(M_t\,|\,\mathcal{F}_s) = M_s + E_\mu\!\left(f(X_t) - f(X_s) - \int_s^t Af(X_r)\,dr \,\Big|\, \mathcal{F}_s\right)$$

By the Markov property, the second term on the right-hand side is

$$E_{X_s}\!\left(f(X_{t-s}) - f(X_0) - \int_0^{t-s} Af(X_r)\,dr\right)$$

But for any $y$, it follows from the definition of $T_r$ and Fubini's theorem that

$$E_y\!\left(f(X_{t-s}) - f(X_0) - \int_0^{t-s} Af(X_r)\,dr\right) = T_{t-s}f(y) - f(y) - \int_0^{t-s} T_r Af(y)\,dr$$

which is 0 by (1.5). □

Our final topic is an important concept for some developments but will play a minor role here. For each $\lambda > 0$, define the resolvent operator for bounded measurable $f$ by

$$U_\lambda f(x) = \int_0^\infty e^{-\lambda t}\,T_t f(x)\,dt = E_x\int_0^\infty e^{-\lambda t} f(X_t)\,dt$$

Clearly $\|U_\lambda f\| \le \|f\|/\lambda$. A more subtle fact is that

(1.7) Theorem. $U_\lambda$ maps $D(T)$ 1-1 onto $D(A)$. Its inverse is $(\lambda - A)$.

Proof Let $A_h f = (T_h f - f)/h$. Plugging in the definition of $U_\lambda$ and changing variables $s = t + h$ and $s = t$, we have

$$A_h U_\lambda f = \frac{1}{h}\int_0^\infty e^{-\lambda t}\bigl(T_{t+h}f - T_t f\bigr)\,dt = \frac{e^{\lambda h}}{h}\int_h^\infty e^{-\lambda s}T_s f\,ds - \frac{1}{h}\int_0^\infty e^{-\lambda s}T_s f\,ds$$

To conclude that $U_\lambda f \in D(A)$ and $AU_\lambda f = \lambda U_\lambda f - f$, we note that

$$\lambda U_\lambda f - f = \lambda\int_0^\infty e^{-\lambda s}T_s f\,ds - f$$

and compare the last two displays to conclude (using $\|T_s f\| \le \|f\|$) that

$$\|A_h U_\lambda f - (\lambda U_\lambda f - f)\| \le \left(\frac{e^{\lambda h}-1}{h} - \lambda\right)\int_h^\infty e^{-\lambda s}\|f\|\,ds + \lambda\int_0^h e^{-\lambda s}\|f\|\,ds + \frac{1}{h}\int_0^h \bigl(1 - e^{-\lambda s}\bigr)\|f\|\,ds + \frac{1}{h}\int_0^h \|T_s f - f\|\,ds \to 0$$

as $h \to 0$. The formula for $AU_\lambda f$ implies that $(\lambda - A)U_\lambda f = f$. Thus $U_\lambda$ is 1-1.


To prove that $U_\lambda$ is onto, let $f \in D(A)$ and note (1.5) implies

$$U_\lambda(\lambda f - Af) = \lambda\int_0^\infty e^{-\lambda t}T_t f\,dt - \int_0^\infty e^{-\lambda t}T_t Af\,dt$$

Integrating by parts,

$$\int_0^\infty e^{-\lambda t}T_t Af\,dt = \int_0^\infty e^{-\lambda t}\frac{d}{dt}T_t f\,dt = \Bigl(e^{-\lambda t}T_t f\Bigr)\Big|_0^\infty + \lambda\int_0^\infty e^{-\lambda t}T_t f\,dt = -f + \lambda\int_0^\infty e^{-\lambda t}T_t f\,dt$$

Combining the last two displays shows $U_\lambda(\lambda f - Af) = f$, and the proof is complete. □

7.2. Examples

In this section we will consider two families of Markov processes and compute their generators.

Example 2.1. Pure jump Markov processes. Let $(S, \mathcal{S})$ be a measurable space and let $Q(x, A) : S \times \mathcal{S} \to [0, \infty)$ be a transition kernel. That is,

(i) for each $A \in \mathcal{S}$, $x \to Q(x, A)$ is measurable
(ii) for each $x \in S$, $A \to Q(x, A)$ is a finite measure
(iii) $Q(x, \{x\}) = 0$

Intuitively, $Q(x, A)$ gives the rate at which our Markov chain makes jumps from $x$ into $A$, and we exclude jumps from $x$ to $x$ since they are invisible.

To be able to construct a process $X_t$ with a minimum of fuss, we will suppose that $\Lambda = \sup_x Q(x, S) < \infty$. Define a transition probability $P$ by

$$P(x, A) = \frac{1}{\Lambda}Q(x, A) \quad\text{if } x \notin A \qquad P(x, \{x\}) = \frac{1}{\Lambda}\bigl(\Lambda - Q(x, S)\bigr)$$

Let $t_1, t_2, \ldots$ be i.i.d. exponential with parameter $\Lambda$, that is, $P(t_i > t) = e^{-\Lambda t}$. Let $T_n = t_1 + \cdots + t_n$ for $n \ge 1$ and let $T_0 = 0$. Let $Y_n$ be a Markov chain with transition probability $P(x, dy)$ (see e.g., Section 5.1 of Durrett (1995) for a definition), and let

$$X_t = Y_n \quad\text{for } t \in [T_n, T_{n+1})$$

$X_t$ defines our pure jump Markov process. To compute its generator, we note that $N_t = \sup\{n : T_n \le t\}$ has a Poisson distribution with mean $\Lambda t$, so if $f$ is bounded and measurable

$$T_t f(x) = E_x f(X_t) = e^{-\Lambda t}f(x) + \Lambda t\,e^{-\Lambda t}\int P(x, dy)\,f(y) + O(t^2)$$

Doing a little arithmetic, it follows that

$$\frac{T_t f(x) - f(x)}{t} = \frac{e^{-\Lambda t} - 1}{t}\,f(x) + \Lambda e^{-\Lambda t}\int P(x, dy)\,f(y) + O(t)$$

Since $(e^{-\Lambda t} - 1)/t \to -\Lambda$ and $P(x, dy)$ is a transition probability, it follows that

$$\left\|\frac{T_t f - f}{t} - \Lambda\left(\int P(\cdot, dy)\,f(y) - f\right)\right\| \to 0$$

as $t \to 0$. Recalling the definition of $P(x, dy)$, it follows that

(2.1) Theorem. $D(A) = D(T) = L^\infty$ and

$$Af(x) = \int Q(x, dy)\,\bigl(f(y) - f(x)\bigr)$$

We are now ready for the main event.

Example 2.2. Diffusion processes. Suppose we have a family of measures $P_x$ on $(C, \mathcal{C})$ so that under $P_x$ the coordinate maps $X_t(\omega) = \omega_t$ give the unique solution to the MP(b, a) starting from $x$, and $\mathcal{F}_t$ is the filtration generated by the $X_t$. Let $C^2_K$ be the $C^2$ functions with compact support.

(2.2) Theorem. Suppose $a$ and $b$ are continuous and MP(b, a) is well posed. Then $C^2_K \subset D(A)$ and for all $f \in C^2_K$

$$Af(x) = \frac{1}{2}\sum_{i,j} a_{ij}(x)\,D_{ij}f(x) + \sum_i b_i(x)\,D_i f(x)$$


Proof If $f \in C^2_K$ then Itô's formula implies

$$f(X_t) - f(X_0) = \sum_i\int_0^t D_i f(X_s)\,b_i(X_s)\,ds + \text{local mart.} + \frac{1}{2}\sum_{i,j}\int_0^t D_{ij}f(X_s)\,a_{ij}(X_s)\,ds$$

If we let $T_n$ be a sequence of times that reduce the local martingale, stopping at time $t \wedge T_n$ and taking expected value gives

$$E_x f(X_{t\wedge T_n}) - f(x) = E_x\sum_i\int_0^{t\wedge T_n} D_i f(X_s)\,b_i(X_s)\,ds + \frac{1}{2}\sum_{i,j}E_x\int_0^{t\wedge T_n} D_{ij}f(X_s)\,a_{ij}(X_s)\,ds$$

If $f \in C^2_K$ and the coefficients are continuous, then the integrands are bounded and the bounded convergence theorem implies

$$\text{(2.3)}\qquad E_x f(X_t) - f(x) = \sum_i E_x\int_0^t D_i f(X_s)\,b_i(X_s)\,ds + \frac{1}{2}\sum_{i,j}E_x\int_0^t D_{ij}f(X_s)\,a_{ij}(X_s)\,ds = E_x\int_0^t Af(X_s)\,ds$$

To prove the conclusion now, we have to show that

$$\left\|\frac{E_\cdot f(X_t) - f}{t} - Af\right\| \to 0$$

The first step is to bound the movement of the diffusion process.

(2.4) Lemma. Let $r > 0$, let $K \subset \mathbf{R}^d$ be compact, and let $K_r = \{y : |x - y| \le r \text{ for some } x \in K\}$. Suppose $\sum_i |b_i(x)| \le B$ and $\sum_i a_{ii}(x) \le A$ for all $x \in K_r$. If we let $S_r = \inf\{t : |X_t - X_0| \ge r\}$, then

$$\sup_{x\in K} P_x(S_r \le t) \le t\,\frac{2rB + A}{r^2}$$

Proof Let $h(y) = \sum_i (y_i - x_i)^2$. Using Itô's formula, we have

$$h(X_t) - h(X_0) = \sum_i\int_0^t 2(X_{s,i} - x_i)\,b_i(X_s)\,ds + \text{local mart.} + \frac{1}{2}\sum_i\int_0^t 2a_{ii}(X_s)\,ds$$

Letting $T_n$ be a sequence of times that reduce the local martingale, stopping at $t \wedge S_r \wedge T_n$, taking expected value, then letting $n \to \infty$ and invoking the bounded convergence theorem, we have

$$E_x h(X_{t\wedge S_r}) = E_x\int_0^{t\wedge S_r}\sum_i\bigl[2(X_{s,i} - x_i)\,b_i(X_s) + a_{ii}(X_s)\bigr]\,ds \le t\,(2rB + A)$$

Since $h \ge 0$ and $h(X_{S_r}) = r^2$, it follows that

$$r^2\,P_x(S_r \le t) \le t\,(2rB + A)$$

which is the desired result. □
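A Monte Carlo sanity check of the bound in (2.4): for two dimensional Brownian motion ($b = 0$, $a = I$) the constants read $B = 0$ and $A = \operatorname{tr} a = 2$, so $P_x(S_r \le t) \le 2t/r^2$. The simulation parameters below are assumed choices:

```python
import numpy as np

rng = np.random.default_rng(0)
t, r, nsteps, npaths = 0.1, 1.0, 200, 4000
dt = t / nsteps

# Brownian increments; paths track X_s - X_0 along each trajectory.
incs = rng.normal(scale=np.sqrt(dt), size=(npaths, nsteps, 2))
paths = incs.cumsum(axis=1)
dist = np.linalg.norm(paths, axis=2).max(axis=1)
est = (dist >= r).mean()           # rough estimate of P(S_r <= t)
print(est, 2 * t / r**2)           # estimate sits well below the bound
```

The bound is crude (it comes from Chebyshev-type control of $|X_{t\wedge S_r} - X_0|^2$), so the estimated probability is far smaller than $2t/r^2$ here.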

Proof of (2.2) Pick $R$ so that $K = \{x : |x| \le R - 1\}$ contains the support of $f$, and let $\hat K = \{x : |x| \le R\}$. Since $f \in C^2_K$ and the coefficients are continuous, $Af$ is continuous and has compact support. This implies $Af$ is uniformly continuous; that is, given $\varepsilon > 0$ we can pick $\delta \in (0, 1)$ so that if $|x - y| < \delta$ then $|Af(x) - Af(y)| < \varepsilon$. If we let $C = \sup_x |Af(x)|$ and use (2.3), it follows that for $x \in \hat K$

$$\left|\frac{E_x f(X_t) - f(x)}{t} - Af(x)\right| \le \frac{1}{t}E_x\int_0^t |Af(X_s) - Af(x)|\,ds \le \varepsilon + 2C\sup_{x\in\hat K} P_x(S_\delta < t)$$

For $x \notin \hat K$, $f(x) = 0$ and $Af(x) = 0$. So using (2.3) again gives that for $x \notin \hat K$

$$\left|\frac{E_x f(X_t) - f(x)}{t} - Af(x)\right| = \frac{|E_x f(X_t)|}{t} \le C\sup_{y\in K} P_y(S_1 < t)$$

where the second inequality comes from using the strong Markov property at the time the process enters $K$. (2.4) shows that

$$\sup_{x\in\hat K} P_x(S_\delta < t) \to 0 \quad\text{and}\quad \sup_{y\in K} P_y(S_1 < t) \to 0 \quad\text{as } t \to 0$$


Since $\varepsilon > 0$ is arbitrary, the desired result follows. □


For general $b$ and $a$ it is a hopelessly difficult problem to compute $D(A)$. To make this point, we will now use (1.7) to compute the exact domains of the semigroup and of the generator for one dimensional Brownian motion. Let $C_u$ be the functions $f$ on $\mathbf{R}$ that are bounded and uniformly continuous. Let $C^2_u$ be the functions $f$ on $\mathbf{R}$ so that $f, f', f'' \in C_u$.

(2.5) Theorem. For Brownian motion in $d = 1$, $D(T) = C_u$, $D(A) = C^2_u$, and $Af(x) = f''(x)/2$ for all $f \in D(A)$.

Remark. For Brownian motion in $d > 1$, $D(A)$ is larger than $C^2_u$ and consists of the functions so that $\Delta f$ exists in the sense of distributions and lies in $C_u$ (see Revuz and Yor (1991), p. 226).

Proof If $t > 0$ and $f$ is bounded, then writing $p_t(x, z)$ for the Brownian transition probability and using the triangle inequality, we have

$$|T_t f(x) - T_t f(y)| \le \|f\|\int |p_t(x, z) - p_t(y, z)|\,dz$$

The right-hand side only depends on $|x - y|$ and converges to 0 as $|x - y| \to 0$, so $T_t f \in C_u$. If $f \in D(T)$ then $\|T_t f - f\| \to 0$. We claim that this implies $f \in C_u$. To check this, pick $\varepsilon > 0$, let $t > 0$ so that $\|T_t f - f\| < \varepsilon/3$, then pick $\delta$ so that if $|x - y| < \delta$ then $|T_t f(x) - T_t f(y)| < \varepsilon/3$. Using the triangle inequality now, it follows that if $|x - y| < \delta$ then

$$|f(x) - f(y)| \le |f(x) - T_t f(x)| + |T_t f(x) - T_t f(y)| + |T_t f(y) - f(y)| < \varepsilon$$

To prove that $D(A) = C^2_u$, we begin by observing that (3.10) in Chapter 3 implies

$$U_\lambda f(x) = (2\lambda)^{-1/2}\int e^{-|x-y|\sqrt{2\lambda}}\,f(y)\,dy$$

Our first task is to show that if $f \in C_u$ then $U_\lambda f \in C^2_u$. Letting $g(x) = U_\lambda f(x)$, we have

$$g(x) = (2\lambda)^{-1/2}\int_{-\infty}^x e^{-(x-y)\sqrt{2\lambda}}\,f(y)\,dy + (2\lambda)^{-1/2}\int_x^\infty e^{-(y-x)\sqrt{2\lambda}}\,f(y)\,dy$$

Differentiating once with respect to $x$, and noting that the terms from differentiating the limits cancel, we have

$$g'(x) = -\int_{-\infty}^x e^{-(x-y)\sqrt{2\lambda}}\,f(y)\,dy + \int_x^\infty e^{-(y-x)\sqrt{2\lambda}}\,f(y)\,dy$$

Differentiating again gives

$$g''(x) = \sqrt{2\lambda}\int_{-\infty}^x e^{-(x-y)\sqrt{2\lambda}}\,f(y)\,dy + \sqrt{2\lambda}\int_x^\infty e^{-(y-x)\sqrt{2\lambda}}\,f(y)\,dy - 2f(x)$$

It is easy to use the dominated convergence theorem to justify differentiating the integrals. From the formulas it is easy to see that $g, g', g'' \in C_u$, so $g \in C^2_u$ and

$$\tfrac{1}{2}g''(x) = \lambda g(x) - f(x)$$

Since $Ag(x) = \lambda g(x) - f(x)$ by the proof of (1.7), it follows that $Ag(x) = g''(x)/2$.

To complete the proof, we now need to show $D(A) \subset C^2_u$. Let $h \in D(A)$ and define a function $f \in C_u$ by $f = \lambda h - Ah$. The function $\varphi = h - U_\lambda f$ satisfies

$$\tfrac{1}{2}\varphi''(x) - \lambda\varphi(x) = 0$$

All solutions of this differential equation have the form $Ae^{x\sqrt{2\lambda}} + Be^{-x\sqrt{2\lambda}}$. Since $\varphi = h - U_\lambda f$ is bounded, it follows that $h = U_\lambda f$, and the proof is complete. □
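The conclusion of (2.5) is easy to check numerically for a concrete $f \in C^2_u$. Since $T_h \sin = e^{-h/2}\sin$ exactly for Brownian motion, the sketch below (with assumed quadrature size and test point) verifies $(T_h f - f)/h \to f''/2$:

```python
import numpy as np

# T_t f(x) = E f(x + sqrt(t) Z), Z standard normal, via Gauss-Hermite:
# E g(Z) = (1/sqrt(pi)) * sum_i w_i g(sqrt(2) t_i).
nodes, weights = np.polynomial.hermite.hermgauss(60)

def T(h, f, x):
    return (weights * f(x + np.sqrt(2 * h) * nodes)).sum() / np.sqrt(np.pi)

x, h = 0.7, 1e-4
f = np.sin
approx = (T(h, f, x) - f(x)) / h
exact = (np.exp(-h / 2) - 1) / h * np.sin(x)   # T_h sin = e^{-h/2} sin
print(approx, exact, -0.5 * np.sin(x))         # all three nearly equal
```

The quadrature reproduces the closed form $e^{-h/2}\sin x$ to near machine precision, and the difference quotient is within $O(h)$ of $f''(x)/2 = -\sin(x)/2$.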

7.3. Transition Probabilities

In the last section we saw that if $a$ and $b$ are continuous and MP(b, a) is well posed, then

$$\frac{d}{dt}T_t f(x) = A\,T_t f(x)$$

for $f \in C^2_K$, or changing notation $v(t, x) = T_t f(x)$, we have

$$\text{(3.1)}\qquad \frac{\partial v}{\partial t} = Av$$

where $A$ acts in the $x$ variable. In the previous discussion we have gone from the process to the p.d.e. We will now turn around and go the other way. Consider

(3.2) (a) $u \in C^{1,2}$ and $\partial u/\partial t = Lu$ in $(0,\infty)\times\mathbf{R}^d$
(b) $u$ is continuous on $[0,\infty)\times\mathbf{R}^d$ and $u(0, x) = f(x)$


Here we have returned to our old notation

$$Lf(x) = \frac{1}{2}\sum_{i,j} a_{ij}(x)\,D_{ij}f(x) + \sum_i b_i(x)\,D_i f(x)$$

To explain the dual notation, note that (3.1) says that $v(t, \cdot) \in D(A)$ and satisfies the equality, while (a) of (3.2) asks for $u \in C^{1,2}$. As for the boundary condition (b), note that $f \in D(A) \subset D(T)$ implies that $\|T_t f - f\| \to 0$ as $t \to 0$.

To prove the existence of solutions to (3.2), we turn to Friedman (1975), Section 4.4.

(3.3) Theorem. Suppose $a$ and $b$ are bounded and Hölder continuous. That is, there are constants $0 < \delta, C < \infty$ so that

$$\text{(HC)}\qquad |a_{ij}(x) - a_{ij}(y)| \le C|x-y|^\delta \qquad |b_i(x) - b_i(y)| \le C|x-y|^\delta$$

Suppose also that $a$ is uniformly elliptic, that is, there is a constant $\lambda > 0$ so that for all $x$ and $y$

$$\text{(UE)}\qquad \sum_{i,j} y_i\,a_{ij}(x)\,y_j \ge \lambda|y|^2$$

Then there is a function $p_t(x, y) > 0$, jointly continuous in $t > 0$, $x$, $y$, and $C^2$ as a function of $x$, which satisfies $\partial p/\partial t = Lp$ (with $L$ acting on the $x$ variable) and so that if $f$ is bounded and continuous

$$u(t, x) = \int p_t(x, y)\,f(y)\,dy$$

satisfies (3.2). The maximum principle implies $|u(t, x)| \le \|f\|$.
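For Brownian motion, $L = \frac{1}{2}\,d^2/dx^2$ and the fundamental solution is the Gaussian kernel, so $u(t, x) = Ef(x + \sqrt{t}\,Z)$ should satisfy $u_t = \frac{1}{2}u_{xx}$. The sketch below checks this by finite differences for an assumed bounded smooth initial condition $f = \tanh$:

```python
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(80)
f = np.tanh

def u(t, x):
    # u(t, x) = E f(x + sqrt(t) Z) via Gauss-Hermite quadrature
    return (weights * f(x + np.sqrt(2 * t) * nodes)).sum() / np.sqrt(np.pi)

t, x, h = 0.5, 0.3, 1e-3
ut = (u(t + h, x) - u(t - h, x)) / (2 * h)          # time derivative
uxx = (u(t, x + h) - 2 * u(t, x) + u(t, x - h)) / h**2  # space derivative
print(abs(ut - 0.5 * uxx))   # close to zero: u solves the heat equation
```

The computation also illustrates the maximum principle claim: since $|f| \le 1$, the averaged value $u(t, x)$ stays bounded by 1.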

To m>ke the connection with the SDE we prove

(3.4) Theorem. Suppose XL is a nonexplosive solution of MP(, c). If u is abounded solution of (3.2)then

'utt, z) = Erfhh )Proof It's formula implies

Section 7.3 'Iansition Probabilities 257

so (a)tells us that,utt

- s, .Xa) is a bounded martingale on (0,). (b)impliesthat u(t - s, Xsj -+ fxjj ms s

-+ t, so the martingale cnvergence theoremimplies

utj, z) = swytx) o

pj (z,yj is called a fundnmental solutlon of the parabolic equation tzt =

L since it can be used to produce solutions for any bounded continuous initialdata J. Its importance for us is that (3.4)implies that

.P.l.X g B ) = Jspf (z,p) dy

i.e., n(z, y4 is the trnnsition density for our disusion process. Lt D = (z :

Izl< J0. To obtain result for unbounded coecients, we will consider

(3.5)(a) 'u E 1'2 apd duldt = Lu in (0,x) x D(b) '?z is continuolp on (0,x) x b with

'u(0,

z) = J(z),and u(t, y4 = 0 when t > 0, y E OD

To prove exiqtepce of solutions to (3.5)we turn to Dynkin (1065),Voly II,pages 230-231.

(3.6) Theorem. Let D = (z : 1z1K.0

and suppose that (HC) and (UE)hold in D. Then there is a function ptRt, y) > 0 jointly continuous in t > 0,

z, y e D, C2 as a function of z which satis:es dpRldt = Lp (withL cting on

the z variable) and so that if )' is bounded and continuous

utt,z) = /p1'(z,p)T(p)dy

' duut - s, X,) - 1z40, A) =--(t

- r,.Xr) drdt4

+J-1 ofzlt-vrlftxr)dr + local mart.i 0

1.1

+-u )-l Ditnxlaitxr) dv

ij 0

satisqes(3.5).

To mak the' connection with the SDE we pzove

(3.7) Theorem. Suppose Xt is any solution of MP(, c) and let r = inftt :

XL / D). If 'tz is any solution of (3.5)then

stj, z) = >;.(y(a' );'?- > t)


Proof The fact that $u$ is continuous on $[0, t]\times\bar D$ implies it is bounded there. Itô's formula implies that for $0 \le s < t \wedge \tau$

$$u(t - s, X_s) - u(t, X_0) = -\int_0^s \frac{\partial u}{\partial t}(t - r, X_r)\,dr + \sum_i\int_0^s D_i u(t - r, X_r)\,b_i(X_r)\,dr + \text{local mart.} + \frac{1}{2}\sum_{i,j}\int_0^s D_{ij}u(t - r, X_r)\,a_{ij}(X_r)\,dr$$

so (a) tells us that $u(t - s, X_s)$ is a bounded local martingale on $[0, t\wedge\tau)$. (b) implies that $u(t - s, X_s) \to 0$ as $s \uparrow \tau$ and $u(t - s, X_s) \to f(X_t)$ as $s \to t$ on $\{\tau > t\}$. Using (2.7) from Chapter 2 now, we have

$$u(t, x) = E_x\bigl(f(X_t);\,\tau > t\bigr) \qquad\square$$

Combining (3.6) and (3.7), we see that

$$E_x\bigl(f(X_t);\,\tau > t\bigr) = \int_D p^R_t(x, y)\,f(y)\,dy$$

Letting $R \uparrow \infty$ and $p_t(x, y) = \lim_{R\to\infty} p^R_t(x, y)$, which exists since (3.7) implies $R \to p^R_t(x, y)$ is increasing, we have

(3.8) Theorem. Suppose that the martingale problem MP(b, a) is well posed and that $a$ and $b$ satisfy (HC) and (UE) locally. That is, they hold in $\{x : |x| \le R\}$ for any $R < \infty$. Then for each $t > 0$ there is a lower semicontinuous function $p_t(x, y) > 0$ so that if $f$ is bounded and continuous, then

$$E_x f(X_t) = \int p_t(x, y)\,f(y)\,dy$$

Remark. The energetic reader can probably show that $p_t(x, y)$ is continuous. However, lower semicontinuity implies that $p_t(x, y)$ is bounded away from 0 on compact sets, and this will be enough for our results in Section 7.5.

7.4. Harris Chains

In this section we will give a quick treatment of the theory of Harris chains to prepare for applications to diffusions in the next section. In this section and the next, we will assume that the reader is familiar with the basic theory of Markov chains on a countable state space, as explained for example in the first five sections of Chapter 5 of Durrett (1995). This section is a close relative of Section 5.6 there.

We will formulate the results here for a transition probability defined on a measurable space $(S, \mathcal{S})$. Intuitively, $P(X_{n+1} \in A\,|\,X_n = x) = p(x, A)$. Formally, it is a function $p : S\times\mathcal{S} \to [0, 1]$ that satisfies

for fixed $A \in \mathcal{S}$, $x \to p(x, A)$ is measurable
for fixed $x \in S$, $A \to p(x, A)$ is a probability measure

In our applications to diffusions, we will take $S = \mathbf{R}^d$ and let

$$p(x, A) = \int_A p_1(x, y)\,dy$$

where $p_t(x, y)$ is the transition density introduced in the previous section. By taking $t = 1$ we will be able to investigate the asymptotic behavior of $X_n$ as $n \to \infty$ through the integers, but this and the Markov property will allow us to get results for $t \to \infty$ through the real numbers.

We say that a Markov chain $X_n$ with transition probability $p$ is a Harris chain if we can find sets $A, B \in \mathcal{S}$, a function $q$ with $q(x, y) \ge \varepsilon > 0$ for $x \in A$, $y \in B$, and a probability measure $\rho$ concentrated on $B$ so that:

(i) If $\tau_A = \inf\{n \ge 0 : X_n \in A\}$, then $P_z(\tau_A < \infty) > 0$ for all $z \in S$
(ii) If $x \in A$ and $C \subset B$, then $p(x, C) \ge \int_C q(x, y)\,\rho(dy)$

In the diffusions we consider, we can take $A = B$ to be a ball with radius $r$, $\rho$ to be a constant $c$ times Lebesgue measure restricted to $B$, and $q(x, y) = p_1(x, y)/c$. See (5.2) below. It is interesting to note that the new theory still contains most of the old one as a special case.

Example 4.1. Countable State Space. Suppose $X_n$ is a Markov chain on a countable state space $S$. In order for $X_n$ to be a Harris chain, it is necessary and sufficient that there be a state $u$ with $P_x(X_n = u \text{ for some } n \ge 0) > 0$ for all $x \in S$.

Proof To prove sufficiency, pick $v$ so that $p(u, v) > 0$. If we let $A = \{u\}$ and $B = \{v\}$, then (i) and (ii) hold. To prove necessity, let $\{u\}$ be a point with $\rho(\{u\}) > 0$ and note that for all $x$

$$P_x(X_n = u \text{ for some } n \ge 0) \ge E_x\bigl(q(X_{\tau_A}, u)\,\rho(\{u\});\,\tau_A < \infty\bigr) > 0 \qquad\square$$

The developments in this section are based on two simple ideas. (i) To make the theory of Markov chains on a countable state space work, all we need


is the existence of one point in the state space which the chain hits. (ii) In the general setting we can manufacture such a point (called $\alpha$ below) which corresponds to "being in $B$ with distribution $\rho$." The notation needed to carry out this plan is occasionally obnoxious, but we hope these words of wisdom will help the reader stay focused on the simple ideas that underlie these developments.

Given a Harris chain on $(S, \mathcal{S})$, we will construct a Markov chain $\bar X_n$ with transition probability $\bar p$ on $(\bar S, \bar{\mathcal{S}})$, where $\bar S = S \cup \{\alpha\}$ and $\bar{\mathcal{S}} = \{B, B\cup\{\alpha\} : B \in \mathcal{S}\}$. Thinking of $\alpha$ as corresponding to being on $B$ with distribution $\rho$, we define the modified transition probability as follows: for $C \in \mathcal{S}$,

$$\bar p(x, C) = p(x, C) \quad\text{for } x \in S - A$$

$$\bar p(x, \{\alpha\}) = \varepsilon \quad\text{and}\quad \bar p(x, C) = p(x, C) - \varepsilon\rho(C) \quad\text{for } x \in A$$

$$\bar p(\alpha, C) = \int \rho(dx)\,\bar p(x, C) \quad\text{and}\quad \bar p(\alpha, \{\alpha\}) = \int \rho(dx)\,\bar p(x, \{\alpha\})$$

j'

yjjjjjj'j.yyyyyjj..j-j.jy.;j..yy.

y-

-

gy.-y..-.

j-

jj..y..-

:

..yg.

....;.-((

-

.r;..j.

yjiyyjjy)y

-ty

yqyjjkj

y'

jyjjj--

.jjjygj..-

j'

jj-

y,y-.

.y-

j.jyy.-. ..y .Here and in what follows, we will reserve J1 and B for the special sets that yrr.iqi,rit,ykj,yyjygyl.yk.,y.jyj,,.,gy,yjj

. ..-y.yyyyyjyjjjyyyj-

yyyjjjy.y.yjyyjyyjyy-j.jy.

-. y-gyyyy.-

y-.y..ocur in the deflnition and use C and D for generic elements of S. We will ,l,E,E,iyq.ljqgyjyyyy,jjyj,jjjjyyjgj,jy,yyy,yj

. .:. yy-y.

:

.jy-y.jE-yjgjyyjy

fensimplify notatiou by writing #(z, c,) instead o >(z,f.)), ?z(a) tnsteado ,'.'.:.,,(,..;y.).i(,.,p.,i..,.,jpii,,.,y;,,k.,..ykyy,,,.O y y jjyj,. ...

(.y(j..

E)j(y;j.;.jy(.jyyyy.yj:.gyjj..y:

:y.

-ij1!L,-iI:.11::((

..-1144)*

41::::)1.::.

((()1h..'

((:1,1

:,i''1EEiE:''

'dI@:;.

11r::!:;'.-,'

. . .

..

..

.

.

. . ..

. .

.

. .

.

'

.

.

E..'..'-r.E

.-

qy.(-(i

-

-

'.i.;E-

j-y

jjiyry-jt(-;y(jiy)-y.j.yy-y

:-(-.

.r-.

j',.

-

-

y.-gj..jE.;.(...jj.

..;r...

-... .

E!- -rl..k.y). y, ;y(yjjE.

j'

(j(..

(;E.(ryjgy..

!..j..y..jOur first step is to prove three technical lemmas that will help us cafry out

,, ,,,.Ir.r(,;y,.,,r!;j.y,j)I,qjrq(,yyyyyyyyj

'..-

.

'

.E'.(

)'

(!:gry.E..j.y-

(E.E::

'y

j(yjj:jgiEEEEqg

j'

i.yj.E.y

-

.y.y.......the proofs below. Deflne a transition probability )) by E

E.(yr,.)E;ty)jEij,.j(,;gy.;)'i)y'(!jrj,gyj

..Ej.EE..

.)(y.j)jy;.yqgq;.ryjj.jyyy

i

yyyj(y!y--y

.jj.lyj-.yyg-.y

j'

yE.y;-jj-j..yj.j....

.,.,..,.y,.,rj.(..,,,,,.j,.,y,.,,....,.y.y.,,,.,.,..,,,.y..

E-.-

. .. E

.)-.j;j(jyjjjjrjjyj;yyj.jj-y

..jyjj.jyyjjjyjgy- .

..-g

.:i!:,;j''

11::4*:::i1!2::-:j,..-1Iq('

;:::i!:::!'

tll)ji..jl:))i

::::::::::::::'(()1(..*

(jii)..EIIii'''

;::;1!:::-*

11::E!j4E44

.1:!5115;;2.'*

.

..

y....:

yi......yy

Ey.

.jj!.y.i

jgj-yyjgjyyjgjj

;

yjyjg,.j.jjgjj.jjjgjyy.yj.-jgyy..y-j.ygyjy

y

jyjg-

.jj.ygyg.y-. .y

.

y.yy..yjyyjr.y.-:1!:-;,-11::4)

11::::211::.*:,,'.1!::::E2E:;71'*

((:1,1

::::::::::::::'-j4!!::;p''

14::()

11!::::((E:27'-*

(q:1p. ..'..y..

.

E...iy'.i

'.jy'j.iEy

.).Ei.;);j(jyygyj(j..j

..

jyIg)jj-

-jlj!E-Ey.jj.(.j.jyyj..y

q

yy.-

.

jyj.gjE...:.l

E:;Ej.!-:igkq:j-;;(j),..j-(y.;..j.

;;

j'

.q.y-y.. ......;..y..y.yyy.yy..y

jyd

-yyyjj,yjjqyj)y,jj.

(g,j-yjjy-yy-j.-

(

y'

y..jy.jyyyy.yy

--j..yy.yy

.. .- .

.

. . ..

. -y;.q.-q;.

;p;y-

.

q,yy-.y)y.jyyyjy.rj.j.- ,

j'

y.---

,j.y.y.y-

..

-

yyj(..y.j..2111,.

4t .;1!!E11--''!1E::::E11!:::::11:::::,,1

(:11::--*.1::1111)*liid:i1!F::I,:'-;!!i!!I;7',111:;.*

,1:::::)1-

-.41!:::;1.*,--'

' ' '

.

' ' '

.

'

.

'

.

.

.'..'E.j'jir.;

-.

.)!.i

(;--i.)::yrE!;;. j.r..-

-.E. E-..

.:

q- !.(i-!E;gj)jgjyjr;yryjj.j.y

y'

...

('

.yryy.jyy.yE

yy.j..y-y.. ..

y... .. .

... . . . . y.y

...

.. y- .

yj..

-..

.. yy

,'

. . . . . . ..

.-y..-

E.E

..;'(-(.'...;.yE;-y!jEE

y!y.g

,

'g

-j:E.

;yyy(y.jjy:;jj(jjyqj.j

E..yE

jyyd

gEy!yj..yy--y.jy.y-...-

-

y'. .'.....

...jg.(.j:

.

:

,jg.j(,.

j.g;.jy,jgk

('

jgjjjjj.j.jggjjg

.j.jgj.j.

y.'

.' gjj

j'

gj.

j'

.g..jjg414:().

.-.!:111::*

-,-

(::11...*

(q:),1

(:!IIIEEE)-.,'11I:Eii!!.'

:211r:::111::::11k..:111:2::11r::2111..

-112!1,11-.*

--,-

11::4)*;1pE5i1k,,'((:),'

':i!:-;pI,;jijiji;:

::::::::::::::'.j;jjiijii;'klp!i,lk--d

':11:':11..

.1::1111:.*

11::4(

'1IE::::,h'((:11i'

,;jjqiiji;'

.:10:,;,1

::;:::::::::::'

.;j!r::;p,.

.-.'

.

.

.

-

..

yy-..iy..j.

-

y.jjjy..

y-j-

-.

-jj.jy.

j(yyjyjjjyjyy(.jyyjjgr;gjyjy.yjj

-

-jyjy

.j.yy.j;yigjyyj.j-jgj.-yy.g- yg

...,y,,..,.y.y,y..jj.,yyj.j.y.y,y,j.j;,yjyyyyyjyyy.jyg.ygyy.y,.,,..jy........',.!.l..;q.,.,.,j,!jr.yI,yj(!yjqy.y.,,,;.y,j,..yjj;yr,y....(........,

Proof Before giving the proof we would like to remind the reader that ma- 'Ey,Erlqy,yyjyp.l.qijyjijjpjjjj,yyiyjyyqjj,yr,yjj,jjyy,yjyjy,qy.,ly,qjy

,

.,,,'

.,,jr,,,, .,,y. . .:y, . ..,j,,,

,;jj,,,,-'

. ..,jj:::,:,

,jj,,,'

.

-,r::,,-,::j,,'-j,::,k-'

. . ..,jj,.,,,:.,,,-*.

jjy:::,,d

-

.... .

..y.y...jj..j-;j-g-.yyyyjyjy.

.

jyjj-jyg-yjy,-gy-yjy.-;-yyj,yyy-j.- . .

y.--y.j-

-

sures multiply the transition probability on the zezt, t.e., zn oue nrqv cuww - - E.jjlyjyjjjjijlyjjygljjjjyyjjgk,,ijyjyyEyjyq,jjjyjyjyyjyjjyjyyyjEyyyyj

. .

.,j,'.4,*

.

.,,,'.,,,'

.

.

. . (y....jq-j...(j-).(.y.want t show pr/ = yp. lf we frst make a transition accorclh: to anu tneh .,Eiyylty,ylljyiyyy,jjjij.jq.yj/,ji,..,yyjjyy,yjjy,ljjggyglyyyjlklgl,jyyj,,,y,,

one according to #j this amollllts to OI1e tllaflsititm accordill O. j S C yj:jjrjjgygjjyjyggjjjrjjjyjjyjjj,jyjyygjjgyjyjggyrjjjjjjyjjyyyyjyjyyyjyjg,jj

. ...

g. g

.gj..y

y.yy

g

. . jyygy.

:lllq':ll:r':llk.d;I!EiI1;-;:Ei!;;'

IIE!E!;.:1!E5,1k-.

'llkzird41:::2)11::.*

Eiil.:1EiE5;.:12!511--.'lliF:'llir''.liis!r.

11:::::.*

'1It:k,

112E52::*

il::iill;.

'lIi:E::1,'

(::!,p?,'''

.:1!:,;,.*

;11!1!1k-.

'E11:':;IL.'

11::f1112.. .

:.(:..EE.E.Ei(

..:((.E)i.-(

i

Eg.!(yq.jrrj

:

jrg.yEg!r

y...

j;yg.j.-

..

-

..yr..q-yy.yyj.j-

.gj....

yE.j'.(.

':

.j.'g((Eg!.

:

y.!..j..j

ijygg

.j!E;gg.!.j.;

Ejj:,jg(jq.y.gj

.

y.rjjq:

k;gy;.y.j..yj. .

yy.j..E .

...l.y...(!.q(y..j.

j'

jyjy)

t'

r-j.(yyjj'yjyjyy;r.i.j-yyyyjj-jyyjt'y

.

j..yy.-j.j

g.(.y(.-.q.- E-

- jgj. E.jE. gj.y.ygjyy.jq;j.j;jyjjyj-jyj.;g.j

(yjgy..j..yg-:..tytil.jlj..r.l

jjjyyjgjgyjyjyjjjyjjyy

.;i!jiiii;7'il::l)d-1!::::1h:!'',,,'

...ki!!EE(:(;:1''

((::1i'

!::::::::::::;.

..jI!::;,''

11::4)

.1:::;1!!-*

::;iIr:2:'

(2::11*,;i!jiiii;7'

11::((

;::;i!:::,':,,'..-ii!!EEEE::)I''

(2:1,1

.-,'

'

.

.

...

..

y(.-

.

.yj-.i

g.(Ej,;ijj:jyEy.-jyy-i-!yjqi

E(jyj.j(jyyjy;

-.

y.jyj

-y)yyyj.j-

yj...yj.yyjjjjjjjgtyjy..j-y.-y.y-g.jyj.j..

.(....,,q...ygyyjy,jgjyyjj,,j,jjtyjjjjjyy.y.yygj.jjjyyyyy,.

yaaaayygyyayyyyyyyy.yyyayygyyyyyyyy.yyyyyyyyyyyyayayyyyyyyjyyyyyyyyyayyyyayyyyyayy

.......y- .

.;.;..-

-.

gyy.jyjt'-gEjjjkjj-

y'

--;j.j. y-jyyyjyj.

--

-y.y'rhesecond equality also follows easily from the dennition. In words, if # acts . yyi.y..jl,..irjr'jiiyljyjy,jpyjjjy,,y,ryriyylyjjyyyry,jjj,,yyyj,jy,y,g.,yyyylyjj,yjy,,,y

d then n then this is the same ms one trnsition according to p since n'','.r')'i''.::)''.!.;.'i..ii;,,;y(yjyqjg)yj..,yj

(:11(::111..(:11::7.ilii!ll.111:k,

il!ii!lk,.Tllr:':ll..

:jj'

. .. . ..;..j:

jjyj.jj,jgyj(jjj.j('.jyjy)y

('

:

k:

jkjj.:

jjjy.:

j,

j.y.jjjj.j

j,yy.jy.y.g.jyyyj.....r

..r.i.qi..)-j!!;y(y;-;-jjjj,-jjy(j;yg)-y;jy

. jyj-

-

E-y..yjyjyj

;jqy-yjj.y,.g.y.jy(..

:,11::-*

IIIEEE:::'112:k-

.!II,j:1Ik.)ll:!-.(II:':1Ik.:jiisl.

-11:;,*

(411:.2,1;.*

11EEjj:,.r:ll:r':ll:r'll-.

:llEsllk.k:,i!,;d:bii!p;dils)lk.,d

'1It:k.

1I!::::Ii:!''

.11@:;.

11:::::1,*

',!i.;''i.;p-'()1iy'r1h,.

1111::*rjll::-d.1211,::*(iii,d

.1121,.

41:::::.

;:!!5!1t..'4512:.r:11:.::1k.*115,:*

':jlii'''lll::'.

.1:::::,,*(J1rq')I;q'r!1i.',-,'

11:::::::::11.. .yyy

E.y!

y.j.yjyj-j)jjyyjj-jyjy;jg

j'j'

jj

y'

j

y'

j.yyjyg.jyjj.g.y-

.

.

yjy..yyy

j'

j;jyyj.,

j-.

yg.yg.y..y.j.yy..

y....y..;(iiy.jjjgjjy.j.yjyyj.jjryssyy.j..((;j..::

;;.k.jgiy.(i.(y;kj(j(yji.kyjy:y:jjjy-kyjjjkjj;j.jyjj.jyjyyy(.jyyyy..yjj...j

......,qy....,jq;.,.-p,.,t..ji,i..(;(j.)..jj.y,j.!(gr,,,.,,,).yyjgj,..j,..j...;).g.,y.

' ''..: (..j.j.!

:

:yj:(j.Eyjgj

:

jr

k'

y,j

.j'

.j'

jg:jyj

r'

jyE.yjj.j: E.

,,'

. y

gyy,yy;..g

....E.

E

'-.jl':-

r-Ej- -k.)

.

;qjygy(y!grj'yk))'.:(;:y!)-y7j)j.t(-.lp

tltd

:.

E

yg.y.yj.('.(,;,rqjE.!7r)yty)j!I)j,;.;yjgkj.rjjjj;,;(jyyqq.y..j.y,j,jkjjj.y.(j.qjq:.

(........;.(.q..E...:,j(j.y,,.j.yj.j.jjjjjy.j.yjy.j.yj,.j,j.yyjrjyjyy,j.yj..y,.y;,.yj.y.y.

. (..ti;.E rjjjrjjk

yy yyy.:..r....y.....;r2.q,j!..j.j;.j.y;.!jyjj!j,.jyyy.j.jj.,yjjy.j;.jj.jj,.jjjyj,.,.j,y,...yjyyq(....t..tE..j.jt)rtyyjjjyjjjjyyj.jyjj.rylyyyyj,,lryyj.yl.j....y.

...,.yq.jj...,,.Ek.j.j,);..y.jj.,!tj!j.jy.q.jrjjyy(yy,,.,.jg.j.jy,y,y.,.y,.,r.,j..,.j.-,y,....

(....(..(E'.

)'

.

E('

:

....: i;;..

. il1l.ll.

-'

@E!

j'

li

t'

i.E:.l.i. ....

Section 7.4 Harris Chains 261

(4.2) Lemma. Let Yn be an inhomogeneous Markov chain with transition probabilities p_{2k} = v and p_{2k+1} = p̄. Then X̄n = Y_{2n} is a Markov chain with transition probability p̄ and Xn = Y_{2n+1} is a Markov chain with transition probability p. □

(4.2) shows that there is an intimate relationship between the asymptotic behavior of Xn and of X̄n. To quantify this we need a definition. If f is a bounded measurable function on S, let f̄ = vf, i.e.,

f̄(x) = f(x) for x ∈ S

f̄(α) = ∫ f dρ

(4.3) Lemma. If μ is a probability measure on (S, S) then

E_μ f(Xn) = E_μ f̄(X̄n)

Proof Observe that if X̄n and Xn are constructed as in (4.2), and P(X̄0 ∈ S) = 1, then X̄0 = X0 and Xn is obtained from X̄n by making a transition according to v. □
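Before moving on, the construction can be sanity-checked numerically. The sketch below (assuming the standard split-chain form p̄(x, {α}) = ε and p̄(x, C) = p(x, C) − ερ(C) for x ∈ A, with p̄(α, ·) = ∫ ρ(dx) p̄(x, ·); the matrix, ε, and ρ are illustrative choices, not from the text) builds p̄ and v for a three-state toy chain and verifies the kernel identities vp̄ = p̄ and p̄v = vp that the proofs in this section lean on.

```python
import numpy as np

# Toy Harris chain on S = {0, 1, 2}; the minorization p(x, C) >= eps * rho(C)
# holds for x in A = {0, 1} with rho uniform on B = {0, 1} and eps = 0.2.
p = np.array([[0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4],
              [0.5, 0.2, 0.3]])
A = [0, 1]
rho = np.array([0.5, 0.5, 0.0])
eps = 0.2

# Split chain on S-bar = S + {alpha}; alpha is index 3.
pbar = np.zeros((4, 4))
pbar[:3, :3] = p
for x in A:
    pbar[x, :3] = p[x] - eps * rho   # leftover mass stays on S
    pbar[x, 3] = eps                 # eps goes to the new point alpha
pbar[3] = rho @ pbar[:3]             # pbar(alpha, .) = int rho(dx) pbar(x, .)

# v leaves S alone and sends alpha back to B with distribution rho.
v = np.eye(4)
v[3] = np.append(rho, 0.0)

P4 = np.zeros((4, 4))                # p viewed as a kernel on S-bar
P4[:3, :3] = p
```

Matrix products compose kernels here, so `v @ pbar == pbar` and `pbar @ v == v @ P4` are exactly the two lemma identities.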

Before developing the theory we give one example to explain why some of the statements to come will be messy.

Example 4.2. Perverted Brownian motion. For x that is not an integer ≥ 2, let p(x, ·) be the transition probability of a one dimensional Brownian motion. When x ≥ 2 is an integer let

p(x, {x + 1}) = 1 − x^{−2}
p(x, C) = x^{−2} |C ∩ (0, 1)|   if x + 1 ∉ C

p is the transition probability of a Harris chain (take A = B = (0, 1), ρ = Lebesgue measure on B) but

P_2(Xn = n + 2 for all n) > 0

I can sympathize with the reader who thinks that such crazy chains will not arise "in applications," but it seems easier (and better) to adapt the theory to include them than to modify the assumptions to exclude them.
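For the chain of Example 4.2 the offending probability can be computed in closed form: started at 2, stepping up forever has probability Π_{x≥2} (1 − x^{−2}), and since 1 − 1/x² = (x − 1)(x + 1)/x², the partial products telescope to (N + 1)/(2N), which decreases to 1/2 > 0. A short sketch confirming the telescoping numerically:

```python
# Partial products of prod_{x=2}^{N} (1 - x**-2).  Because
# 1 - 1/x^2 = (x-1)(x+1)/x^2, the product telescopes to (N+1)/(2N) -> 1/2.
def escape_prob(N):
    prod = 1.0
    for x in range(2, N + 1):
        prod *= 1.0 - x ** -2
    return prod

def closed_form(N):
    return (N + 1) / (2 * N)
```

So the chain started at 2 never enters A = (0, 1) with probability 1/2, which is exactly the pathology the text is warning about.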

a. Recurrence and transience

We begin with the dichotomy between recurrence and transience. Let R = inf{n ≥ 1 : X̄n = α}. If P_α(R < ∞) = 1 then we call the chain recurrent, otherwise we call it transient. Let R1 = R and for k ≥ 2, let Rk = inf{n > R_{k−1} : X̄n = α} be the time of the kth return to α. The strong Markov property implies P_α(Rk < ∞) = P_α(R < ∞)^k, so P_α(X̄n = α i.o.) = 1 in the recurrent case and is 0 in the transient case. From this the next result follows easily.

(4.4) Theorem. Let λ(C) = Σ_n 2^{−n} p̄^n(α, C). In the recurrent case, if λ(C) > 0 then P_α(X̄n ∈ C i.o.) = 1. For λ-a.e. x, P_x(R < ∞) = 1.

Remark. Here and in what follows p^n(x, C) = P_x(Xn ∈ C) is the nth iterate of the transition probability defined inductively by p¹ = p and for n ≥ 1

p^{n+1}(x, C) = ∫ p(x, dy) p^n(y, C)

Proof The first conclusion follows from the following fact (see e.g., (2.3) in Chapter 5 of Durrett (1995)). Let Xn be a Markov chain and suppose

P(∪_{m>n} {Xm ∈ Bm} | X1, ..., Xn) ≥ δ on {Xn ∈ An}

Then P({Xn ∈ An i.o.} − {Xn ∈ Bn i.o.}) = 0. Taking An ≡ {α} and Bn ≡ C gives the desired result. To prove the second conclusion, let D = {x : P_x(R < ∞) < 1} and observe that if p̄^n(α, D) > 0 for some n then

P_α(X̄m = α i.o.) ≤ ∫ p̄^n(α, dx) P_x(R < ∞) < 1

contradicting recurrence, so λ(D) = 0. □

Remark. Example 4.2 shows that we cannot expect to have P_x(R < ∞) = 1 for all x. To see that this can occur even when the state space is countable, consider a branching process in which the offspring distribution has p0 > 0 and Σ k pk > 1. If we take A = B = {0} this is a Harris chain by Example 4.1. Since p^n(0, 0) = 1 it is recurrent.

b. Stationary measures

(4.5) Theorem. Let R = inf{n ≥ 1 : X̄n = α}. In the recurrent case,

μ̄(C) = E_α ( Σ_{n=0}^{R−1} 1_{(X̄n ∈ C)} ) = Σ_{n=0}^∞ P_α(X̄n ∈ C, R > n)

defines a σ-finite stationary measure for p̄, with μ̄ << λ, the measure defined in (4.4). μ = μ̄v is a σ-finite stationary measure for p.

Proof If λ(C) = 0 then the definition of λ implies P_α(X̄n ∈ C) = 0, so P_α(X̄n ∈ C, R > n) = 0 for all n, and μ̄(C) = 0. We next check that μ̄ is σ-finite. Let G_{k,δ} = {x : p̄^k(x, α) ≥ δ}. Let T0 = 0 and let Tn = inf{m ≥ T_{n−1} + k : X̄m ∈ G_{k,δ}}. The definition of G_{k,δ} implies

P(Tn < R | T_{n−1} < R) ≤ (1 − δ)

so if we let N = inf{n : Tn ≥ R} then EN ≤ 1/δ. Since we can only have X̄m ∈ G_{k,δ} with R > m when Tn ≤ m < Tn + k for some 0 ≤ n < N, it follows that μ̄(G_{k,δ}) ≤ k/δ. Part (i) of the definition of a Harris chain implies S ⊂ ∪_{k,m} G_{k,1/m} and σ-finiteness follows.

Next we show that μ̄p̄ = μ̄.

CASE 1. Let C be a set that does not contain α. Using the definition and Fubini's theorem,

∫ μ̄(dy) p̄(y, C) = Σ_{n=0}^∞ ∫ P_α(X̄n ∈ dy, R > n) p̄(y, C)
= Σ_{n=0}^∞ P_α(X̄_{n+1} ∈ C, R > n + 1) = μ̄(C)

since α ∉ C and P_α(X̄0 = α) = 1.

CASE 2. To complete the proof now it suffices to consider C = {α}.

∫ μ̄(dy) p̄(y, α) = Σ_{n=0}^∞ ∫ P_α(X̄n ∈ dy, R > n) p̄(y, α)
= Σ_{n=0}^∞ P_α(R = n + 1) = 1 = μ̄(α)

where in the last two equalities we have used recurrence and the fact that when C = {α} only the n = 0 term contributes in the definition.

Turning to the properties of μ, we note that μ = μ̄v and μ̄(α) = 1 so μ is σ-finite. To check μp = μ, we note that (i) using the definition μ = μ̄v and (b) in (3.1), (ii) using (4.1), (iii) using μ̄p̄ = μ̄, and (iv) using the definition μ = μ̄v:

μp = (μ̄v)p = μ̄p̄v = μ̄v = μ □
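The cycle formula in (4.5) is easy to test on a finite chain with an atom. In the sketch below the 4×4 matrix is an illustrative stand-in (not from the text), with state 3 playing the role of α; the formula μ̄(C) = Σ_{n≥0} P_α(X̄n ∈ C, R > n) is evaluated by summing the taboo probabilities as a geometric series, and stationarity is then checked directly.

```python
import numpy as np

# Any irreducible stochastic matrix will do; state a = 3 plays the atom alpha.
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.1, 0.2, 0.4],
              [0.2, 0.3, 0.4, 0.1],
              [0.5, 0.2, 0.2, 0.1]])
a = 3

# mu(C) = sum_{n>=0} P_a(X_n in C, R > n).  Only the n = 0 term can put
# mass on the atom itself, so mu[a] = 1; for the other states, summing the
# taboo probabilities gives a geometric series:
#   mu_S = P[a, S] (I - Q)^{-1},   Q = P restricted to S = states != a.
S = [0, 1, 2]
Q = P[np.ix_(S, S)]
mu_S = P[a, S] @ np.linalg.inv(np.eye(3) - Q)
mu = np.append(mu_S, 1.0)
```

Normalizing mu by its total mass recovers the stationary probability distribution, and mu.sum() is E_a R (the expected cycle length), in line with the uniqueness statement (4.7).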

To investigate uniqueness of the stationary measure we begin with:


(4.6) Lemma. If ν is a σ-finite stationary measure for p, then ν(A) < ∞ and ν̄ = νp̄ is a σ-finite stationary measure for p̄ with ν̄(α) < ∞.

Proof We will first show that ν(A) < ∞. If ν(A) = ∞ then part (ii) of the definition of a Harris chain implies ν(C) = ∞ for all sets C with ρ(C) > 0. If B = ∪i Bi with ν(Bi) < ∞ then ρ(Bi) = 0 by the last observation and ρ(B) = 0 by countable subadditivity, a contradiction. So ν(A) < ∞ and

ν̄(α) = νp̄(α) = εν(A) < ∞. Using the fact that νp = ν, we find

ν̄(C) = νp̄(C) = ν(C) − εν(A)ρ(B ∩ C)

the last subtraction being well defined since ν(A) < ∞. From the last equality it is clear that ν̄ is σ-finite. Since ν̄(α) = εν(A), it also follows that ν̄v = ν. To check ν̄p̄ = ν̄, we observe that (4.1), the last result, and the definition of ν̄ imply

ν̄p̄ = ν̄vp̄ = νp̄ = ν̄ □

(4.7) Theorem. Suppose p is recurrent. If ν is a σ-finite stationary measure then ν = ν̄(α)μ where μ is the measure constructed in the proof of (4.5).

Proof By (4.6) it suffices to prove that if ν̄ is a σ-finite stationary measure for p̄ with ν̄(α) < ∞ then ν̄ = ν̄(α)μ̄. Our first step is to observe that

ν̄(C) = ν̄(α) p̄(α, C) + ∫_{S̄−{α}} ν̄(dy) p̄(y, C)

Using the last identity to replace ν̄(dy) on the right-hand side we have

ν̄(C) = ν̄(α) p̄(α, C) + ν̄(α) ∫_{S̄−{α}} p̄(α, dy) p̄(y, C)
  + ∫_{S̄−{α}} ν̄(dy) ∫_{S̄−{α}} p̄(y, dz) p̄(z, C)
= ν̄(α) P_α(X̄1 ∈ C) + ν̄(α) P_α(X̄1 ≠ α, X̄2 ∈ C)
  + P_ν̄(X̄0 ≠ α, X̄1 ≠ α, X̄2 ∈ C)

Continuing in the obvious way we have

ν̄(C) = ν̄(α) Σ_{m=1}^n P_α(X̄k ≠ α for 1 ≤ k < m, X̄m ∈ C)
  + P_ν̄(X̄k ≠ α for 0 ≤ k < n + 1, X̄_{n+1} ∈ C)

The last term is nonnegative, so letting n → ∞ we have

ν̄(C) ≥ ν̄(α) μ̄(C)

To turn the ≥ into an = we observe that

ν̄(α) = ∫ ν̄(dx) p̄^n(x, α) ≥ ν̄(α) ∫ μ̄(dx) p̄^n(x, α) = ν̄(α) μ̄(α) = ν̄(α)

since μ̄(α) = 1. Let Sn = {x : p̄^n(x, α) > 0}. By assumption ∪n Sn = S̄. If ν̄(D) > ν̄(α)μ̄(D) for some D, then ν̄(D ∩ Sn) > ν̄(α)μ̄(D ∩ Sn) for some n and it follows that ν̄(α) > ν̄(α), a contradiction. □

(4.5) and (4.7) show that a recurrent Harris chain has a σ-finite stationary measure that is unique up to constant multiples. The next result, which goes in the other direction, will be useful in the proof of the convergence theorem and in checking recurrence of concrete examples.

(4.8) Theorem. If there is a stationary probability distribution then the chain is recurrent.

Proof Let π̄ = πp̄, which is a stationary distribution by (4.6), and note that part (i) of the definition of a Harris chain implies π̄(α) ≠ 0. Suppose that the chain is transient, i.e., P_α(R < ∞) = q < 1. If Rk is the time of the kth return then P_α(Rk < ∞) = q^k so if x ≠ α

E_x Σ_{n=0}^∞ 1_{(X̄n = α)} = Σ_{k=1}^∞ P_x(Rk < ∞) ≤ Σ_{k=1}^∞ q^{k−1} = 1/(1 − q)

The last upper bound is also valid when x = α since

E_α Σ_{n=0}^∞ 1_{(X̄n = α)} = 1 + Σ_{k=1}^∞ P_α(Rk < ∞) ≤ 1 + Σ_{k=1}^∞ q^k = 1/(1 − q)

Integrating with respect to π̄ and using the fact that π̄ is a stationary distribution we have a contradiction that proves the result:

1/(1 − q) ≥ E_π̄ Σ_{n=0}^∞ 1_{(X̄n = α)} = Σ_{n=0}^∞ π̄(α) = ∞ □
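The geometric bound E_x Σ 1_{(X̄n = α)} ≤ 1/(1 − q) in the proof above can be seen by simulation on a concrete transient chain. The biased walk below is an illustrative example (not from the text): its return probability to 0 is q = 1/2, so the expected number of visits to 0 is 1/(1 − q) = 2.

```python
import random

# Transient chain on {0, 1, 2, ...}: from 0 go to 1; from x >= 1 step +1
# with probability 2/3 and -1 with probability 1/3.  The probability of
# ever stepping down from x to x - 1 is (1/3)/(2/3) = 1/2, so the return
# probability is q = 1/2 and E_0 sum 1(X_n = 0) = 1/(1 - q) = 2.
random.seed(1)

def visits_to_zero(max_steps=400):
    x, visits = 0, 1                     # count the visit at time 0
    for _ in range(max_steps):
        x = x + 1 if (x == 0 or random.random() < 2 / 3) else x - 1
        if x == 0:
            visits += 1
    return visits

est = sum(visits_to_zero() for _ in range(4000)) / 4000
```

The truncation at 400 steps loses a negligible amount, since on this transient chain the walk has drifted far to the right by then.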


c. Convergence theorems

If I is a set of positive integers, we let g.c.d.(I) be the greatest common divisor of the elements of I. We say that a recurrent Harris chain X̄n is aperiodic if g.c.d.({n ≥ 1 : p̄^n(α, α) > 0}) = 1. This occurs, for example, if we can take A = B in the definition, for then p̄(α, α) > 0. A well known consequence of aperiodicity is

(4.9) Lemma. There is an m0 < ∞ so that p̄^m(α, α) > 0 for m ≥ m0.

For a proof see (5.4) of Chapter 5 in Durrett (1995). We are now ready to prove

(4.10) Theorem. Let X̄n be an aperiodic recurrent Harris chain with stationary distribution π. If P_x(R < ∞) = 1 then as n → ∞, ||p^n(x, ·) − π(·)|| → 0.

Remark. Here || · || denotes the total variation distance between the measures. (4.4) guarantees that a.e. x satisfies the hypothesis, while (4.5), (4.7), and (4.8) imply π is absolutely continuous with respect to λ.

Proof In view of (4.8) and (4.6) it suffices to show that if π̄ = πp̄ then ||p̄^n(α, ·) − π̄(·)|| → 0. Our second reduction is to note that if x ≠ α then

||p̄^n(x, ·) − π̄(·)|| ≤ Σ_{m=1}^n P_x(R = m) ||p̄^{n−m}(α, ·) − π̄(·)|| + P_x(R > n)

so it is enough to prove the result for the chain started at α.

Let S̄² = S̄ × S̄. Define a transition probability p̄² on S̄ × S̄ by

p̄²((x1, x2), C1 × C2) = p̄(x1, C1) p̄(x2, C2)

That is, each coordinate moves independently. We will call this the "product chain" (think of product measure).


We claim that the product chain is a Harris chain with A = {(α, α)} and B = B × B. Clearly (ii) is satisfied. To check (i) we need to show that for all (x1, x2) there is an N so that (p̄²)^N((x1, x2), (α, α)) > 0. To prove this let K and L be such that p̄^K(x1, α) > 0 and p̄^L(x2, α) > 0, let M ≥ m0, the constant in (4.9), and take N = K + L + M. From the definitions it follows that

p̄^{K+L+M}(x1, α) ≥ p̄^K(x1, α) p̄^{L+M}(α, α) > 0
p̄^{K+L+M}(x2, α) ≥ p̄^L(x2, α) p̄^{K+M}(α, α) > 0

and hence (p̄²)^N((x1, x2), (α, α)) > 0.

The next step is to show that the product chain is recurrent. To do this, note that (4.6) implies π̄ = πp̄ is a stationary distribution for p̄, and the two coordinates move independently, so

π̄²(C1 × C2) = π̄(C1) π̄(C2)

defines a stationary probability distribution for p̄², and the desired conclusion follows from (4.8).

To prove the convergence theorem now we will write Z̄n = (X̄n, Ȳn) and consider Z̄n with initial distribution δ_α × π̄. That is, X̄0 = α with probability 1 and Ȳ0 has the stationary distribution π̄. Let T = inf{n : Z̄n = (α, α)}. (4.8), (4.7), and (4.5) imply δ_α × π̄ << λ, the measure defined in (4.4) for the product chain, so (4.4) implies

P_{δ_α × π̄}(T < ∞) = 1

Dropping the subscript δ_α × π̄ from P and considering the time T,

P(X̄n ∈ C, T ≤ n) = Σ_{m=1}^n P(T = m) p̄^{n−m}(α, C) = P(Ȳn ∈ C, T ≤ n)

since on {T = m}, X̄m = Ȳm = α. Now

P(X̄n ∈ C) = P(X̄n ∈ C, T ≤ n) + P(X̄n ∈ C, T > n)
= P(Ȳn ∈ C, T ≤ n) + P(X̄n ∈ C, T > n)
≤ P(Ȳn ∈ C) + P(T > n)

Interchanging the roles of X̄ and Ȳ we have

P(Ȳn ∈ C) ≤ P(X̄n ∈ C) + P(T > n)

and it follows that


||p̄^n(α, ·) − π̄(·)|| ≤ P(T > n) → 0 as n → ∞ □
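The coupling inequality ||p̄^n(α, ·) − π̄|| ≤ P(T > n) can be checked exactly (in floating point) on a small chain by evolving the independent product chain with the diagonal state killed. The 4×4 matrix below is an illustrative stand-in (not from the text), with state 3 playing the role of α.

```python
import numpy as np

# An illustrative 4-state stochastic matrix; state a = 3 plays alpha.
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.1, 0.2, 0.4],
              [0.2, 0.3, 0.4, 0.1],
              [0.5, 0.2, 0.2, 0.1]])
a, n = 3, 4

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

# Independent product chain started from (delta_a, pi); kill it at (a, a).
# tail[k] = P(T > k) and tv[k] = total variation ||P^k(a, .) - pi||.
nu = np.outer(np.eye(n)[a], pi)   # joint law of (X_0, Y_0)
row = np.eye(n)[a]                # law of X_k, the chain started at a
tail, tv = [], []
for _ in range(30):
    nu[a, a] = 0.0                # survive only if not yet coupled
    tail.append(nu.sum())
    tv.append(0.5 * np.abs(row - pi).sum())
    nu = P.T @ nu @ P             # one independent step of each coordinate
    row = row @ P
```

Both sequences decrease, and tv[k] ≤ tail[k] at every step, which is the inequality that finishes the proof of (4.10).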

7.5. Convergence Theorems

In this section we will apply the theory of Harris chains developed in the previous section to diffusion processes. To be able to use that theory we will suppose throughout this section that

(5.1) Assumption. MP(b, a) is well posed and the coefficients a and b satisfy (HC) and (UE) locally.

(5.2) Theorem. If A = B = {x : |x| ≤ r} and ρ is Lebesgue measure normalized to be a probability measure then Xn is an aperiodic Harris chain.

Proof By (3.8) there is a lower semicontinuous function pt(x, y) > 0 so that

P_x(Xt ∈ C) = ∫_C pt(x, y) dy

From this we see that P_x(R = 1) > 0 for each x so (i) holds. Lower semicontinuity implies p1(x, y) ≥ δ > 0 when x, y ∈ A so (ii) holds and we have p̄(α, α) > 0. □

To finish checking the hypotheses of the convergence theorem (4.10), we have to show that a stationary distribution π exists. Our approach will be to show that E_α R < ∞, so the construction in (4.5) produces a stationary measure with finite total mass. To check E_α R < ∞ we will use a slight generalization of Lemma 5.1 of Khasminskii (1960).

(5.3) Theorem. Suppose u ≥ 0 is C² and has Lu ≤ −1 for all x ∉ K where K is a compact set. Let T_K = inf{t > 0 : Xt ∈ K}. Then

u(x) ≥ E_x T_K.

Proof Itô's formula implies that Vt = u(Xt) + t is a local supermartingale on [0, T_K). Letting Tn ↑ T_K be a sequence of times that reduce Vt, we have

u(x) ≥ E_x u(X(t ∧ Tn)) + E_x(t ∧ Tn) ≥ E_x(t ∧ Tn)

Letting n → ∞ and using the monotone convergence theorem the desired result follows. □

. . .'

Taking utzl = lzl2/E gives the following

(5.4)Corollary. Suppose tr c(z) + 2z . (z) K -e for z 6 ff c = (z : IzI> r)then ErTK ; IzI2/f.

Though (5.4)was emsy to prove! it is remarkably sharp.
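As a sanity check on (5.4), the sketch below (illustrative code, not from the text) verifies numerically that in the setting of Example 5.1 which follows (d = 1, a(x) = 1, b(x) = −c/x), the function u(x) = x²/ε with ε = 2c − 1 satisfies Lu = −1, so (5.3) applies.

```python
# Lu = (1/2) a(x) u''(x) + b(x) u'(x) with a = 1, b = -c/x, u = x^2/eps
def Lu(x, c, h=1e-3):
    eps = 2.0 * c - 1.0                      # assumes c > 1/2
    u = lambda y: y * y / eps
    u1 = (u(x + h) - u(x - h)) / (2 * h)     # central difference for u'
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)  # second difference for u''
    return 0.5 * u2 + (-c / x) * u1

for c in (0.8, 1.0, 2.5):
    for x in (1.5, 3.0, 10.0):
        assert abs(Lu(x, c) + 1.0) < 1e-5    # Lu = -1 away from K
print("Lu = -1 on the tested grid, so (5.3) gives E_x T_K <= x^2/eps")
```

Since u is quadratic, the finite differences here are exact up to rounding, which is why the tolerance can be tight.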

Example 5.1. Consider d = 1. Suppose a(x) = 1 and b(x) = −c/x for |x| ≥ 1, where c ≥ 0, and set K = [−1, 1]. If c > 1/2 then (5.4) holds with ε = 2c − 1. To compute E_x T_K, suppose c ≠ 1/2, let ε = 2c − 1, and let u(x) = x²/ε. Lu = −1 in (1, b) so u(X_t) + t is a local martingale until time τ_{(1,b)} = inf{t : X_t ∉ (1, b)}. Using the optional stopping theorem at time τ_{(1,b)} ∧ t we have

x²/ε = E_x X(τ_{(1,b)} ∧ t)²/ε + E_x(τ_{(1,b)} ∧ t)

Rearranging, then letting t → ∞ and using the monotone and bounded convergence theorems, we have

E_x τ_{(1,b)} = x²/ε − E_x X(τ_{(1,b)})²/ε

To evaluate the right-hand side, we need the natural scale, and in doing this it is convenient to let φ start from 1 rather than 0:

φ′(y) = exp(−∫_1^y (−2c/z) dz) = y^{2c}

φ(x) = ∫_1^x y^{2c} dy = (x^{2c+1} − 1)/(2c + 1)

Using (3.2) in Chapter 6 with a = 1 it follows that

E_x X(τ_{(1,b)})² = 1 · (b^{2c+1} − x^{2c+1})/(b^{2c+1} − 1) + b² · (x^{2c+1} − 1)/(b^{2c+1} − 1)

so

E_x τ_{(1,b)} = x²/ε − (1/ε)(b^{2c+1} − x^{2c+1})/(b^{2c+1} − 1) − (b²/ε)(x^{2c+1} − 1)/(b^{2c+1} − 1)

The second term on the right always converges to −1/ε as b → ∞. The third term on the right converges to 0 when c > 1/2 and to ∞ when c < 1/2 (recall that ε < 0 in this case).

Exercise 5.1. Use (4.8) in Chapter 6 to show that E_x T_K = ∞ when c ≤ 1/2.

Exercise 5.2. Consider d = 1. Suppose a(x) = 1 and b(x) ≤ −εx^δ for x ≥ 1, where ε > 0 and 0 < δ < 1. Let T_1 = inf{t : X_t = 1} and show that for x ≥ 1, E_x T_1 ≤ (x^{1−δ} − 1)/(ε(1 − δ)).

Example 5.2. When d > 1 and a(x) = I, the condition in (5.4) becomes

x · b(x) ≤ −(d + ε)/2

To see this is sharp, let Y_t = |X_t|, and use Itô's formula with f(x) = |x|, which has

D_i f = x_i/|x|,   D_ii f = 1/|x| − x_i²/|x|³

to get

Y_t = Y_0 + ∫_0^t (X_s/|X_s|) · b(X_s) ds + ∫_0^t ((d − 1)/(2|X_s|)) ds + ∫_0^t (X_s/|X_s|) · dB_s

We have written B here for a d-dimensional Brownian motion. Let

B̄_t = ∫_0^t (X_s/|X_s|) · dB_s

be a one dimensional Brownian motion (it is a local martingale with ⟨B̄⟩_t = t) and write

dY_t = ((X_t/|X_t|) · b(X_t) + (d − 1)/(2|X_t|)) dt + dB̄_t

If x · b(x) = −(d − 1 + c)/2 this reduces to the previous example and shows that the condition x · b(x) ≤ −(d + ε)/2 is sharp.

The last detail is to make the transition from E_x T_K < ∞ to E_α R < ∞.

(5.5) Theorem. Let K = {x : |x| ≤ r} and H = {x : |x| ≤ r + 1}. If sup_{x∈H} E_x T_K < ∞ then E_α R < ∞.

Proof Let U_0 = 0, and for m ≥ 1 let V_m = inf{t > U_{m−1} : X_t ∈ K}, U_m = inf{t > V_m : X_t ∉ H or t ∈ ℤ}, and M = inf{m ≥ 1 : U_m ∈ ℤ}. Since X(U_M) ∈ A, R ≤ U_M. To estimate E U_M, we note that X(U_{m−1}) ∈ H so

E(V_m − U_{m−1} | F_{U_{m−1}}) ≤ C₁ = sup_{x∈H} E_x T_K

To estimate M, we observe that if τ_H is the exit time from H then (3.6) implies

inf_{x∈K} P_x(τ_H > 1) ≥ |K| inf_{x,y∈K} p̄_1^{r+1}(x, y) = δ₁ > 0

where p̄_1^{r+1}(x, y) is the transition probability for the process killed when it leaves the ball of radius r + 1. From this it follows that P(M > m) ≤ (1 − δ₁)^m. Since U_m − V_m ≤ 1, we have E U_M ≤ (1 + C₁)/δ₁ and the proof is complete. □

8 Weak Convergence

8.1. In Metric Spaces

We begin with a treatment of weak convergence on a general space S with a metric ρ, i.e., a function with (i) ρ(x, x) = 0, (ii) ρ(x, y) = ρ(y, x), and (iii) ρ(x, y) + ρ(y, z) ≥ ρ(x, z). Open balls are defined by {y : ρ(x, y) < r} with r > 0, and the Borel sets 𝒮 are the σ-field generated by the open balls. Throughout this section we will suppose S is a metric space and 𝒮 is the collection of Borel sets, even though several results are true in much greater generality. For later use, recall that S is said to be separable if there is a countable dense set, and complete if every Cauchy sequence converges.

A sequence of probability measures μ_n on (S, 𝒮) is said to converge weakly if for each bounded continuous function f, ∫ f(x) μ_n(dx) → ∫ f(x) μ(dx). In this case we write μ_n ⇒ μ. In many situations it will be convenient to deal directly with random variables X_n rather than with the associated distributions μ_n(A) = P(X_n ∈ A). We say that X_n converges weakly to X and write X_n ⇒ X if for any bounded continuous function f, Ef(X_n) → Ef(X).

Our first result, sometimes called the Portmanteau Theorem, gives five equivalent definitions of weak convergence.

(1.1) Theorem. The following statements are equivalent:

(i) Ef(X_n) → Ef(X) for any bounded continuous function f.
(ii) For all closed sets K, limsup_{n→∞} P(X_n ∈ K) ≤ P(X ∈ K).
(iii) For all open sets G, liminf_{n→∞} P(X_n ∈ G) ≥ P(X ∈ G).
(iv) For all sets A with P(X ∈ ∂A) = 0, lim_{n→∞} P(X_n ∈ A) = P(X ∈ A).
(v) Let D_f = the set of discontinuities of f. For all bounded functions f with P(X ∈ D_f) = 0, Ef(X_n) → Ef(X).

Remark. To help remember (ii) and (iii), think about what can happen when P(X_n = x_n) = 1 and x_n → x, where x lies on the boundary of the set. If K is closed we can have x_n ∉ K for all n but x ∈ K. If G is open we can have x_n ∈ G for all n but x ∉ G.

Proof We will go around the loop in roughly the order indicated. The last step, (v) implies (i), is trivial, so we have four things to show.

(i) implies (ii) Let ρ(x, K) = inf{ρ(x, y) : y ∈ K}, φ_j(r) = (1 − jr)⁺ where z⁺ = max{z, 0} is the positive part, and let h_j(x) = φ_j(ρ(x, K)). h_j is bounded, continuous, and ≥ 1_K so

limsup_{n→∞} P(X_n ∈ K) ≤ lim_{n→∞} E h_j(X_n) = E h_j(X)

Now h_j ↓ 1_K as j ↑ ∞, so letting j → ∞ gives (ii).

(ii) is equivalent to (iii) This follows immediately from (a) P(A) + P(A^c) = 1 and (b) A is open if and only if A^c is closed.

(ii) and (iii) imply (iv) Let K = Ā = the closure of A, G = A° = the interior of A, and note that ∂A = K − G, so our assumptions imply

P(X ∈ K) = P(X ∈ A) = P(X ∈ G)

Using (ii) and (iii) now with the last equality we have

limsup_{n→∞} P(X_n ∈ A) ≤ limsup_{n→∞} P(X_n ∈ K) ≤ P(X ∈ K) = P(X ∈ A)

liminf_{n→∞} P(X_n ∈ A) ≥ liminf_{n→∞} P(X_n ∈ G) ≥ P(X ∈ G) = P(X ∈ A)

which with the triviality liminf ≤ limsup gives the desired result.

(iv) implies (v) Let ε > 0 and pick α_0 < α_1 < · · · < α_ℓ so that α_0 < f(x) < α_ℓ for all x, α_i − α_{i−1} < ε, and P(f(X) = α_i) = 0 for all i; the last requirement can be met since the distribution of f(X) has at most countably many atoms. Let A_i = {x : α_{i−1} < f(x) ≤ α_i}. Since ∂A_i ⊂ D_f ∪ {x : f(x) ∈ {α_{i−1}, α_i}}, we have P(X ∈ ∂A_i) = 0, so (iv) implies

Σ_{i=1}^ℓ α_i P(X_n ∈ A_i) → Σ_{i=1}^ℓ α_i P(X ∈ A_i)

The definition of the α_i implies

0 ≤ Σ_{i=1}^ℓ α_i P(X_n ∈ A_i) − Ef(X_n) ≤ ε

and this inequality holds with X in place of X_n. Combining our conclusions we have

limsup_{n→∞} |Ef(X_n) − Ef(X)| ≤ 2ε

but ε is arbitrary so the proof is complete. □

An important consequence of (1.1) is

(1.2) Continuous mapping theorem. Suppose μ_n on (S, 𝒮) converge weakly to μ, and let ψ : S → S′ have a discontinuity set D_ψ with μ(D_ψ) = 0. Then μ_n ∘ ψ^{−1} ⇒ μ ∘ ψ^{−1}.

Proof Let f : S′ → R be bounded and continuous. Since D_{f∘ψ} ⊂ D_ψ, it follows from (v) in (1.1) that

∫ μ_n(dx) f(ψ(x)) → ∫ μ(dx) f(ψ(x))

Changing variables gives

∫ (μ_n ∘ ψ^{−1})(dy) f(y) → ∫ (μ ∘ ψ^{−1})(dy) f(y)

Since this holds for any bounded continuous f, the desired result follows. □

In checking convergence in distribution the following lemma is sometimes useful.

(1.3) Converging together lemma. Suppose X_n ⇒ X and ρ(X_n, Y_n) → 0 in probability. Then Y_n ⇒ X.

Proof Let K be a closed set and K^δ = {y : ρ(y, K) ≤ δ}, where ρ(y, K) = inf{ρ(y, z) : z ∈ K}. It is easy to see that x ∉ K^δ if and only if there is a γ > 0 so that B(x, γ) ∩ K^δ = ∅, so (K^δ)^c is open and K^δ is closed. With this little detail out of the way the rest is easy:

P(Y_n ∈ K) ≤ P(X_n ∈ K^δ) + P(ρ(X_n, Y_n) > δ)

Letting n → ∞, noticing the second term on the right goes to 0 by assumption, and using (ii) in (1.1), we have

limsup_{n→∞} P(Y_n ∈ K) ≤ limsup_{n→∞} P(X_n ∈ K^δ) ≤ P(X ∈ K^δ)

Measure theory tells us that P(X ∈ K^δ) ↓ P(X ∈ K) as δ ↓ 0, so we have shown

limsup_{n→∞} P(Y_n ∈ K) ≤ P(X ∈ K)

The desired conclusion now follows from (1.1). □

The next result, due to Skorokhod, is useful for proving results about a sequence that converges weakly, because it converts weak convergence into almost sure convergence.

(1.4) Skorokhod's representation theorem. Suppose S is a complete separable metric space. If μ_n ⇒ μ_∞ then we can define random variables Y_n on (0, 1) (equipped with the Borel sets and Lebesgue measure) with distribution μ_n so that Y_n → Y_∞ almost surely.

Remark. When S = R this is easy. Let F_n(x) = μ_n(−∞, x] be the distribution function, let

F_n^{−1}(y) = inf{x : F_n(x) ≥ y}

let U(ω) = ω for ω ∈ (0, 1) be a random variable that is uniform on (0, 1), and let Y_n = F_n^{−1}(U). Then Y_n → Y_∞ almost surely. (For a proof, see (2.1) of Chapter 2 of Durrett (1995).) We invite the reader to contemplate the question "Is there an easy way to do this when S = R² (or even S = [0,1]²)?" while we give the general proof. The details are somewhat messy and not important for the developments that follow, so if the reader finds herself asking "Who cares?" she can skip to the beginning of the next section.

Proof We will warm up for the real proof by treating the special case μ_n ≡ μ.

(1.5) Theorem. For each probability measure μ there is a random variable Y defined on (0, 1) with P(Y ∈ A) = μ(A).

Proof For each k construct a decomposition 𝒜_k = {A_{k,1}, A_{k,2}, . . .} of S into disjoint sets of diameter less than 1/k, and so that 𝒜_k refines 𝒜_{k−1}, i.e., each set A_{k−1,i} is a union of sets in 𝒜_k. For each k construct a corresponding decomposition ℐ_k of the unit interval so that |I_{k,j}| = μ(A_{k,j}), and arrange things so that A_{k−1,i} ⊃ A_{k,j} if and only if I_{k−1,i} ⊃ I_{k,j}.

Let x_{k,j} be some point in A_{k,j} and let X_k(ω) = x_{k,j} for ω ∈ I_{k,j}. Since x_{k,j}, x_{k+1,j′}, . . . is contained in some element of 𝒜_k, which by definition has diameter ≤ 1/k, it is a Cauchy sequence. So X(ω) = lim_k X_k(ω) exists and satisfies ρ(X_k, X) ≤ 1/k. To show that X has distribution μ we let K be a closed set, let S(k, K) = {j : A_{k,j} ∩ K ≠ ∅}, and let K^δ = {x : ρ(x, K) ≤ δ} as in the proof of (1.3). Then

{X_k ∈ K} ⊂ ∪_{j∈S(k,K)} I_{k,j}

so

P(X_k ∈ K) ≤ Σ_{j∈S(k,K)} |I_{k,j}| = Σ_{j∈S(k,K)} μ(A_{k,j}) ≤ μ(K^{1/k})

Letting k → ∞ and noting K^{1/k} ↓ K, it follows that

limsup_{k→∞} P(X_k ∈ K) ≤ μ(K)

So (1.1) implies that X_k ⇒ μ, and since X_k → X it follows that X has distribution μ. □

Proof of (1.4) Construct decompositions 𝒜_k of the space S as in the preceding proof, but this time require that each A_{k,j} has μ_∞(∂A_{k,j}) = 0. For each n construct decompositions ℐ_k^n of [0, 1] so that |I_{k,j}^n| = μ_n(A_{k,j}). To arrange the intervals in an appropriate way we introduce an ordering in which (a, b] < (c, d] if b ≤ c, and demand that I_{k,i}^n < I_{k,j}^n if and only if I_{k,i}^∞ < I_{k,j}^∞. Let x_{k,j} ∈ A_{k,j} and define X_k^n(ω) = x_{k,j} for ω ∈ I_{k,j}^n. As before, X^n = lim_{k→∞} X_k^n exists and has distribution μ_n.

Since Σ_j |I_{k,j}^∞| = Σ_j |I_{k,j}^n| = 1, we have

Σ_j | |I_{k,j}^∞| − |I_{k,j}^n| | = Σ_j |μ_∞(A_{k,j}) − μ_n(A_{k,j})| = 2 Σ_j (μ_∞(A_{k,j}) − μ_n(A_{k,j}))⁺

where y⁺ = max{y, 0} is the positive part of y. Since μ_∞(∂A_{k,j}) = 0, (1.1) implies μ_n(A_{k,j}) → μ_∞(A_{k,j}), and the dominated convergence theorem implies

(1.6)  Σ_j | |I_{k,j}^∞| − |I_{k,j}^n| | → 0 as n → ∞

Fix k and j, let α_n be the left endpoint of I_{k,j}^n, and let S(j) = {i : I_{k,i}^n < I_{k,j}^n}, which by our construction does not depend on n. (1.6) implies that

α_∞ = Σ_{i∈S(j)} |I_{k,i}^∞| = lim_{n→∞} Σ_{i∈S(j)} |I_{k,i}^n| = lim_{n→∞} α_n

Similarly, if we let β_n be the right endpoint of I_{k,j}^n then β_n → β_∞ as n → ∞. Hence if ω is in the interior of I_{k,j}^∞ it lies in the interior of I_{k,j}^n for large n, and it follows from the definition of the X^n that

limsup_{n→∞} ρ(X^n(ω), X^∞(ω)) ≤ 2/k

Letting k → ∞ we see that if ω is not one of the countable set of points that is the endpoint of some I_{k,j}^∞, then X^n(ω) → X^∞(ω), which proves (1.4). □

8.2. Prokhorov's Theorems

Let Π be a family of probability measures on a metric space (S, ρ). We call Π relatively compact if every sequence μ_n of elements of Π contains a subsequence μ_{n_k} that converges weakly to a limit μ, which may be ∉ Π. We call Π tight if for each ε > 0 there is a compact set K so that μ(K) ≥ 1 − ε for all μ ∈ Π. This section is devoted to the proofs of

(2.1) Theorem. If Π is tight then it is relatively compact.

(2.2) Theorem. Suppose S is complete and separable. If Π is relatively compact, it is tight.

The first result is the one we want for applications, but the second one is comforting since it says that, in nice spaces, the two notions are equivalent.

Proof of (2.1) We shall prove the result successively for R^d, R^∞, a countable union of compact sets, and finally, a general S. This approach is somewhat lengthy, but to inspire the reader for the effort, we note that the first two special cases are important examples, and the last two steps and the converse are remarkably easy.

Before we begin we will prove a lemma that will be useful for the first two examples.

(2.3) Lemma. Let 𝒜 be a collection of sets that is closed under intersection and is such that each open set is a countable union of sets in 𝒜. If μ_n(A) → μ(A) for all A ∈ 𝒜, then μ_n ⇒ μ.

Proof The inclusion-exclusion formula says

P(∪_{i=1}^m A_i) = Σ_{i=1}^m P(A_i) − Σ_{i<j} P(A_i ∩ A_j) + · · · + (−1)^{m+1} P(∩_{i=1}^m A_i)

Combining this with the fact that 𝒜 is closed under intersection, it is easy to see that μ_n(∪_{i=1}^m A_i) → μ(∪_{i=1}^m A_i). Given an open set G and an ε > 0, choose sets A_1, . . . , A_m ∈ 𝒜 so that μ(∪_{i=1}^m A_i) > μ(G) − ε. Using the convergence just proved it follows that

μ(G) − ε < μ(∪_{i=1}^m A_i) = lim_n μ_n(∪_{i=1}^m A_i) ≤ liminf_n μ_n(G)

Since ε is arbitrary, (iii) in (1.1) follows and the proof is complete. □

Proof for R^d This is a fairly straightforward generalization of the argument for d = 1. (See e.g., (2.5) in Chapter 2 of Durrett (1995).) In this case we can rely on the distribution functions F(x) = μ(y : y ≤ x), where for two vectors y ≤ x means y_i ≤ x_i for each i. Let Q^d be the points in R^d with rational coordinates. By enumerating the points in Q^d and then using a diagonal argument, we can find a subsequence μ_{n_k} so that F_{n_k}(q) converges for each q ∈ Q^d. Call the limit G and define the proposed limiting distribution function by

F(x) = inf{G(q) : q ∈ Q^d and q > x}

where q > x means q_i > x_i for each i.

To check that F is a distribution function we note that it is an immediate consequence of the definition that

(i) if x ≤ y then F(x) ≤ F(y)
(ii) lim_{y↓x} F(y) = F(x)

where y ↓ x means y_i ↓ x_i for each i. The third condition for being a distribution function is easier to prove than it is to state:

(iii) let A = (a_1, b_1] × · · · × (a_d, b_d] be a rectangle with vertices v ∈ {a_1, b_1} × · · · × {a_d, b_d}, and let sgn(v) = (−1)^{n(a,v)} where n(a, v) is the number of a's that appear in v. The inclusion-exclusion formula implies that the probability of A is Σ_v sgn(v) F(v) ≥ 0.

The fact that (iii) holds for the F_n's implies that it holds for G, and by taking limits we see that it holds for F.

Conditions (i)–(iii) imply that F is the distribution function of a measure on R^d, which in our case will always have total mass ≤ 1. Define the ith marginal distribution function of F by F_i(x) = lim F(y), where the limit is taken through vectors y with y_i = x and the other coordinates y_j → ∞. F_i is a one dimensional distribution function and hence its discontinuities D_i are a countable set.

Call x a good point if x_i ∉ D_i for each i. We claim that F is continuous at good points. To see this, let w < x < y, let z^i be the vector which uses the first

i components from y and the last d − i from w, and note that the monotonicity properties that come from the interpretation of F as a measure imply

F(y) − F(w) = Σ_{i=1}^d [F(z^i) − F(z^{i−1})] ≤ Σ_{i=1}^d [F_i(y_i) − F_i(w_i)]

The continuity of F now follows from the monotonicity and the continuity of the F_i.

Our next claim is that for good x, F_{n_k}(x) → F(x). To prove this, pick p, q, r ∈ Q^d with p < q < x < r and

F(x) − ε ≤ F(p) ≤ F(q) ≤ F(x) ≤ F(r) ≤ F(x) + ε

Since u < v implies F(u) ≤ G(v) ≤ F(v), it follows that

F(x) − ε ≤ G(q) ≤ F(x) ≤ G(r) ≤ F(x) + ε

Using F_n(q) ≤ F_n(x) ≤ F_n(r) and the convergence F_{n_k}(q) → G(q) for q ∈ Q^d, the claim follows easily.

Up to this point we have not used the tightness assumption. It enters into the proof only to guarantee that F is the distribution function of a probability measure. Given ε, choose a compact set K so that F_n(K) ≥ 1 − ε for all n, where F_n(K) is short for the probability of K under the measure associated with the distribution function F_n. Now pick a good rectangle A, i.e., one with a_i, b_i ∉ D_i for all i, so that K ⊂ A. The formula in (iii) for the probability of A and the previous claim about convergence at good points imply

F(A) = lim_k F_{n_k}(A) ≥ liminf_n F_n(K) ≥ 1 − ε

Finally, we have to prove that μ_{n_k} ⇒ F. For this we use the following result, which is of interest in its own right.

(2.4) Theorem. In R^d, probability measures μ_n ⇒ μ if and only if the associated distribution functions have F_n(x) → F(x) for all x that are good for F.

Proof If x is good for F, then A = {y : y ≤ x} has μ(∂A) = 0, so if μ_n ⇒ μ then (iv) of (1.1) implies F_n(x) → F(x). To prove the converse we will apply (2.3) to the collection 𝒜 of finite disjoint unions of good rectangles (i.e., rectangles all of whose vertices are good). We have already observed that μ_n(A) → μ(A) for all A ∈ 𝒜. To complete the proof we note that 𝒜 is closed under intersection, and any open set is a countable disjoint union of good rectangles. □

Proof for R^∞ Let R^∞ be the collection of all sequences (x_1, x_2, . . .) of real numbers. To define a metric on R^∞ we introduce the bounded metric ρ_0(x, y) = |x − y|/(1 + |x − y|) on R and let

ρ(x, y) = Σ_{m=1}^∞ 2^{−m} ρ_0(x_m, y_m)

It is easy to see that the induced topology is that of coordinatewise convergence (i.e., x^n → x if and only if x_m^n → x_m for each m), and hence this metric makes R^∞ complete and separable. A countable dense set is the collection of points with finitely many nonzero coordinates, all of which are rational numbers.

(2.5) Lemma. The Borel sets 𝒮 = ℛ^∞, the product σ-field generated by the finite dimensional sets {x : x_i ∈ A_i for 1 ≤ i ≤ d}.

Proof To argue that 𝒮 ⊃ ℛ^∞, observe that if the G_i are open then {x : x_i ∈ G_i for 1 ≤ i ≤ d} is open. For the other inclusion note that

{y : ρ(x, y) ≤ ε} = ∩_{n≥1} {y : Σ_{m=1}^n 2^{−m} ρ_0(x_m, y_m) ≤ ε}

so {y : ρ(x, y) < ε} = ∪_{n≥1} {y : ρ(x, y) ≤ ε − 1/n} ∈ ℛ^∞. □

Let π_d : R^∞ → R^d be the projection π_d(x) = (x_1, . . . , x_d). Our first claim is that if Π is a tight family on (R^∞, ℛ^∞) then {μ ∘ π_d^{−1} : μ ∈ Π} is a tight family on (R^d, ℛ^d). This follows from the following general result.

(2.6) Lemma. If Π is a tight family on (S, 𝒮) and if φ is a continuous map from S to S′, then {μ ∘ φ^{−1} : μ ∈ Π} is a tight family on (S′, 𝒮′).

Proof Given ε, choose in S a compact set K so that μ(K) ≥ 1 − ε for all μ ∈ Π. If K′ = φ(K) then K′ is compact and φ^{−1}(K′) ⊃ K, so μ ∘ φ^{−1}(K′) ≥ 1 − ε for all μ ∈ Π. □

Using (2.6), the result for R^d, and a diagonal argument we can, for a given sequence μ_n, pick a subsequence μ_{n_k} so that for all d, μ_{n_k} ∘ π_d^{−1} converges to a limit ν_d. Since the measures ν_d obviously satisfy the consistency conditions of Kolmogorov's existence theorem, there is a probability measure ν on (R^∞, ℛ^∞) so that ν ∘ π_d^{−1} = ν_d.

We now want to check that μ_{n_k} ⇒ ν. To do this we will apply (2.3) with 𝒜 = the finite dimensional sets A with ν(∂A) = 0. Convergence of finite dimensional distributions and (1.1) imply that μ_{n_k}(A) → ν(A) for all A ∈ 𝒜.

The intersection of two finite dimensional sets is a finite dimensional set and ∂(A ∩ B) ⊂ ∂A ∪ ∂B, so 𝒜 is closed under intersection.

To prove that any open set G is a countable union of sets in 𝒜, we observe that if y ∈ G there is a δ > 0 so that B(y, δ) = {x : ρ(x, y) < δ} ⊂ G. Pick δ so large that B(y, 2δ) ∩ G^c ≠ ∅. (G^c = ∅ makes G = R^∞ ∈ 𝒜, so we can suppose G^c ≠ ∅.) If we pick N so that 2^{−N} < δ/2 then it follows from the definition of the metric that for r < δ/2 the finite dimensional set

A(y, r, N) = {z : |z_i − y_i| < r for 1 ≤ i ≤ N} ⊂ B(y, δ) ⊂ G

The boundaries of these sets are disjoint for different values of r, so we can pick r > δ/4 so that the boundary of A(y, r, N) has ν measure 0.

As noted above, R^∞ is separable. Let y_1, y_2, . . . be an enumeration of the members of the countable dense set that lie in G and let A_i = A(y_i, r_i, N_i) be the sets chosen in the last paragraph. To prove that ∪_i A_i = G, suppose x ∈ G − ∪_i A_i, let γ > 0 be such that B(x, γ) ⊂ G, and pick a point y_i so that ρ(x, y_i) < γ/9. We claim that x ∈ A_i. To see this, note that the triangle inequality implies B(y_i, 8γ/9) ⊂ G, so δ_i ≥ 4γ/9 and r_i > δ_i/4 ≥ γ/9, which completes the proof of μ_{n_k} ⇒ ν. □

Before passing to the next case we would like to observe that we have shown

(2.7) Theorem. In R^∞, weak convergence is equivalent to convergence of finite dimensional distributions.

Proof for the σ-compact case We will prove the result in this case by reducing it to the previous one. We start by observing

(2.8) Lemma. If S is σ-compact then S is separable.

Proof Since a countable union of countable sets is countable, it suffices to prove this if S is compact. To do this, cover S by balls of radius 1/n, let x_1^n, . . . , x_{m_n}^n be the centers of the balls of a finite subcover, and check that the x_m^n are a countable dense set. □

(2.9) Lemma. If S is a separable metric space then it can be embedded homeomorphically into R^∞.

Proof Let q_1, q_2, . . . be a sequence of points dense in S and define a mapping φ from S into R^∞ by

φ(x) = (ρ(x, q_1), ρ(x, q_2), . . .)

If the points x_n → x in S then lim_n ρ(x_n, q_i) = ρ(x, q_i) and hence φ(x_n) → φ(x). Suppose x ≠ x′ and let ε = ρ(x, x′). If ρ(x, q_i) < ε/2, which must be true for some i, then ρ(x′, q_i) > ε/2, or the triangle inequality would lead to the contradiction ρ(x, x′) < ε. This shows that φ is 1-1.

Our final task is to show that φ^{−1} is continuous. If x_n does not converge to x then limsup ρ(x_n, x) = ε > 0. If ρ(x, q_i) < ε/2, which must be true for some i, then limsup ρ(x_n, q_i) ≥ ε/2, or again the triangle inequality would give a contradiction, and hence φ(x_n) does not converge to φ(x). This shows that if φ(x_n) → φ(x) then x_n → x, so φ^{−1} is continuous. □

It follows from (2.6) that if Π is tight on S then {μ ∘ φ^{−1} : μ ∈ Π} is a tight family of measures on R^∞. Using the result for R^∞ we see that if μ_n is a sequence of measures in Π and we let ν_n = μ_n ∘ φ^{−1}, then there is a convergent subsequence ν_{n_k}. Applying the continuous mapping theorem, (1.2), now to the function ψ = φ^{−1}, it follows that μ_{n_k} = ν_{n_k} ∘ ψ^{−1} converges weakly.

The general case Whatever S is, if Π is tight and we let K_i be chosen so that μ(K_i) ≥ 1 − 1/i for all μ ∈ Π, then all the measures are supported on the σ-compact set S_0 = ∪_i K_i and this case reduces to the previous one. □

Proof of (2.2) We begin by introducing the intermediate statement:

(H) For each ε, δ > 0 there is a finite collection A_1, . . . , A_n of balls of radius δ so that μ(∪_{i≤n} A_i) ≥ 1 − ε for all μ ∈ Π.

We will show that (i) if (H) holds then Π is tight, and (ii) if (H) fails then Π is not relatively compact. Combining (i) and (ii) gives (2.2).

Proof of (i) Fix ε and choose for each k finitely many balls A_1^k, . . . , A_{n_k}^k of radius 1/k so that μ(∪_{j≤n_k} A_j^k) ≥ 1 − ε/2^k for all μ ∈ Π. Let K be the closure of ∩_{k≥1} ∪_{j≤n_k} A_j^k. Clearly μ(K) ≥ 1 − ε. K is totally bounded since if B_j^k is a ball with the same center as A_j^k and radius 2/k then B_j^k, 1 ≤ j ≤ n_k, covers K. Being a closed and totally bounded subset of a complete space, K is compact and the proof is complete. □

Proof of (ii) Suppose (H) fails for some ε and δ. Enumerate the members of the countable dense set q_1, q_2, . . ., let A_i = B(q_i, δ), and let G_n = ∪_{i≤n} A_i. For each n there is a measure μ_n ∈ Π so that μ_n(G_n) < 1 − ε. We claim that μ_n has no convergent subsequence. To prove this, suppose μ_{n_k} ⇒ ν. (iii) of (1.1) implies that for each m

ν(G_m) ≤ limsup_{k→∞} μ_{n_k}(G_m) ≤ limsup_{k→∞} μ_{n_k}(G_{n_k}) ≤ 1 − ε

However, G_m ↑ S as m ↑ ∞, so we have ν(S) ≤ 1 − ε, a contradiction. □

8.3. The Space C

Let C = C([0, 1], R^d) be the space of continuous functions from [0, 1] to R^d equipped with the norm

‖ω‖ = sup_{t≤1} |ω(t)|

Let ℬ be the collection of Borel subsets of C. Introduce the coordinate random variables X_t(ω) = ω(t) and define the finite dimensional sets by

{ω : X_{t_i}(ω) ∈ A_i for 1 ≤ i ≤ k}

where 0 ≤ t_1 < t_2 < · · · < t_k ≤ 1 and A_i ∈ ℛ^d, the Borel subsets of R^d.

(3.1) Lemma. ℬ is the same as the σ-field 𝒞 generated by the finite dimensional sets.

Proof Observe that if ψ is a given continuous function

{ω : ‖ω − ψ‖ ≤ ε − 1/n} = ∩_q {ω : |ω(q) − ψ(q)| ≤ ε − 1/n}

where the intersection is over all rationals q ∈ [0, 1]. Letting n → ∞ shows {ω : ‖ω − ψ‖ < ε} ∈ 𝒞, so ℬ ⊂ 𝒞. To prove the reverse inclusion, observe that if the A_i are open the finite dimensional set {ω : ω(t_i) ∈ A_i for 1 ≤ i ≤ k} is open, so the π–λ theorem implies 𝒞 ⊂ ℬ. □

Let 0 ≤ t_1 < t_2 < · · · < t_n ≤ 1 and let π : C → (R^d)^n be defined by

π(ω) = (ω(t_1), . . . , ω(t_n))

Given a measure μ on (C, 𝒞), the measures μ ∘ π^{−1}, which give the distribution of the vectors (X_{t_1}, . . . , X_{t_n}) under μ, are called the finite dimensional distributions, or f.d.d.'s for short. On the basis of (3.1) one might hope that convergence of the f.d.d.'s might be enough for weak convergence of the μ_n. However, a simple example shows this is false.

Example 3.1. Let a_n = 1/2 − 1/2n, b_n = 1/2 − 1/4n, and let μ_n be the point mass on the function

Δ_n(t) = 0 for t ∈ [0, a_n];  = 4n(t − a_n) for t ∈ (a_n, b_n];  = 4n(1/2 − t) for t ∈ (b_n, 1/2];  = 0 for t ∈ (1/2, 1]

As n → ∞, Δ_n(t) → Δ_∞(t) ≡ 0 but not uniformly. To see that μ_n does not converge weakly to μ_∞, note that h(ω) = sup_{0≤t≤1} ω(t) is a continuous function but

∫ h(ω) μ_n(dω) = 1 ↛ 0 = ∫ h(ω) μ_∞(dω)

(3.2) Theorem. Let μ_n, 1 ≤ n ≤ ∞, be probability measures on C. If the finite dimensional distributions of μ_n converge to those of μ_∞ and if the μ_n are tight, then μ_n ⇒ μ_∞.

Proof If μ_n is tight, then by (2.1) it is relatively compact and hence each subsequence μ_{n_k} has a further subsequence μ_{n_{k(j)}} that converges to a limit ν. If f : R^k → R is bounded and continuous then f(X_{t_1}, . . . , X_{t_k}) is bounded and continuous from C to R and hence

∫ f(X_{t_1}, . . . , X_{t_k}) μ_{n_{k(j)}}(dω) → ∫ f(X_{t_1}, . . . , X_{t_k}) ν(dω)

The last conclusion implies that μ_{n_{k(j)}} ∘ π^{−1} ⇒ ν ∘ π^{−1}, so from the assumed convergence of the f.d.d.'s we see that the f.d.d.'s of ν are determined and hence there is only one subsequential limit.

To see that this implies that the whole sequence converges to ν, we use the following result which, as the proof shows, is valid for weak convergence on any space. To prepare for the proof we ask the reader to do

Exercise 3.1. Let r_n be a sequence of real numbers. If each subsequence of r_n has a further subsequence that converges to r, then r_n → r.

(3.3) Lemma. If each subsequence of μ_n has a further subsequence that converges to ν, then μ_n ⇒ ν.

Proof Note that if f is a bounded continuous function, the sequence of real numbers ∫ f(ω) μ_n(dω) has the property that every subsequence has a further subsequence that converges to ∫ f(ω) ν(dω). Exercise 3.1 implies that the whole sequence of real numbers converges to the indicated limit. Since this holds for any bounded continuous f, the desired result follows. □

As Example 3.1 suggests, μ_n will not be tight if it concentrates on paths that oscillate too much. To find conditions that guarantee tightness, we introduce the modulus of continuity

ω_δ(ω) = sup{|ω(s) − ω(t)| : |s − t| ≤ δ}
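The modulus of the tent functions Δ_n from Example 3.1 can be evaluated on a grid (illustrative code, not from the text); for any fixed δ, ω_δ(Δ_n) = 1 as soon as the spike fits inside a window of width δ, which is exactly the oscillation that prevents tightness.

```python
def tent(n, t):
    # the function Delta_n of Example 3.1: a spike of height 1 just left of 1/2
    a, b = 0.5 - 1/(2*n), 0.5 - 1/(4*n)
    if t <= a or t >= 0.5:
        return 0.0
    return 4*n*(t - a) if t <= b else 4*n*(0.5 - t)

def modulus(f, delta, N=1600):
    # omega_delta(f) = sup{|f(s) - f(t)| : |s - t| <= delta}, on an N-point grid
    vals = [f(i / N) for i in range(N + 1)]
    w = int(delta * N)               # grid points within distance delta
    return max(abs(vals[i] - vals[j])
               for i in range(N + 1) for j in range(i, min(i + w, N) + 1))

delta = 0.1
for n in (4, 8, 16):                 # spike width 1/(2n) < delta, so the spike fits
    w = modulus(lambda t: tent(n, t), delta)
    assert abs(w - 1.0) < 1e-9       # omega_delta(Delta_n) = 1: no equicontinuity
print("omega_delta(Delta_n) stays at 1, so {Delta_n} cannot be tight")
```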

(3.4) Theorem. The sequence μ_n is tight if and only if for each ε > 0 there are n_0, M, and δ so that

(i) μ_n(|ω(0)| > M) ≤ ε for all n ≥ n_0
(ii) μ_n(ω_δ > ε) ≤ ε for all n ≥ n_0

Remark. Of course, by increasing M and decreasing δ we can always check the condition with n_0 = 1, but the formulation in (3.4) eliminates the need for that final adjustment. Also, by taking ε = η ∧ ζ it follows that if μ_n is tight then there is a δ and an n_0 so that μ_n(ω_δ > η) ≤ ζ for n ≥ n_0.

Proof We begin by recalling (see e.g., Royden (1988), page 169)

(3.5) The Arzelà-Ascoli Theorem. A subset A of C has compact closure if and only if sup_{ω∈A} |ω(0)| < ∞ and lim_{δ→0} sup_{ω∈A} ω_δ(ω) = 0.

To prove the necessity of (i) and (ii), we note that if μ_n is tight and ε > 0, we can choose a compact set K so that μ_n(K) ≥ 1 − ε for all n. By (3.5), K ⊂ {|X(0)| ≤ M} for large M, and if ε > 0 then K ⊂ {ω_δ ≤ ε} for small δ.

To prove the sufficiency of (i) and (ii), choose M so that μ_n(|X(0)| > M) ≤ ε/2 for all n, and choose δ_k so that μ_n(ω_{δ_k} > 1/k) ≤ ε/2^{k+1} for all n. If we let K be the closure of {|X(0)| ≤ M, ω_{δ_k} ≤ 1/k for all k ≥ 1}, then (3.5) implies K is compact and μ_n(K) ≥ 1 − ε for all n. □

Condition (i) is usually easy to check. For example, it is trivial when X_n(0) = x_n and x_n → x as n → ∞. The next result, (3.6), will be useful in checking condition (ii). Here, we formulate the result in terms of random variables X_n taking values in C, that is to say, random processes (X_n(t), 0 ≤ t ≤ 1). However, one can easily rewrite the result in terms of their distributions μ_n(A) = P(X_n ∈ A), A ∈ 𝒞.

(3.6) Theorem. Suppose for some α, β, K > 0

E|X_n(t) − X_n(s)|^β ≤ K|t − s|^{1+α} for all s, t ∈ [0, 1] and all n

then (ii) in (3.4) holds.

Remark. The condition should remind the reader of Kolmogorov's continuity criterion, (1.6) in Chapter 1. The key to our proof is the observation that the proof of (1.6) gives quantitative estimates on the modulus of continuity. For a much different approach to a slightly more general result, see Theorem 12.3 in Billingsley (1968).

Proof Let γ < α/β, and pick η > 0 small enough so that

θ = (1 − η)(1 + α − βγ) − (1 + η) > 0

From (1.7) in Chapter 1 it follows that if A = 3 · 2^{(1−η)γ}/(1 − 2^{−γ}), then with probability ≥ 1 − K2^{−Nθ}/(1 − 2^{−θ}) we have

|X_n(q) − X_n(r)| ≤ A|q − r|^γ for dyadic rationals q, r ∈ [0, 1] with |q − r| ≤ 2^{−(1−η)N}

Pick N so that K2^{−Nθ}/(1 − 2^{−θ}) ≤ ε, then take δ ≤ 2^{−(1−η)N} so that Aδ^γ ≤ ε, and it follows that μ_n(ω_δ > ε) ≤ ε. □

8.4. Skorokhod's Existence Theorem for SDE

In this section, we will describe Skorokhod's approach to constructing solutions of stochastic differential equations. We will consider the special case

(*) dX_t = σ(X_t) dB_t

where σ is bounded and continuous, since we can introduce the term b(X_t) dt by change of measure. The new feature here is that σ is only assumed to be continuous, not Lipschitz or even Hölder continuous. Examples at the end of Section 5.3 show that we cannot hope to have uniqueness in this generality.

Skorokhod's idea for solving stochastic differential equations was to discretize time to get an equation that is trivial to solve, and then pass to the limit and extract subsequential limits to solve the original equation. For each n, define X_n(t) by setting X_n(0) = x and, for m2^{−n} < t ≤ (m + 1)2^{−n},

X_n(t) = X_n(m2^{−n}) + σ(X_n(m2^{−n}))(B_t − B(m2^{−n}))
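On the dyadic grid, the discretization above is just an Euler scheme driven by Brownian increments. A minimal sketch (illustrative code, not from the text; the particular bounded continuous σ below is an arbitrary choice):

```python
import math
import random

def skorokhod_scheme(x0, sigma, n, T=1.0, seed=0):
    """X_n on the grid m 2^-n: X_n(next) = X_n(prev) + sigma(X_n(prev)) * (Brownian increment)."""
    rng = random.Random(seed)
    dt = 2.0**-n
    steps = int(T / dt)
    X = [x0]
    for _ in range(steps):
        dB = rng.gauss(0.0, math.sqrt(dt))   # B increment over one dyadic step
        X.append(X[-1] + sigma(X[-1]) * dB)
    return X

sigma = lambda x: 1.0 / (1.0 + abs(x))       # bounded and continuous (illustrative)

X = skorokhod_scheme(0.0, sigma, n=8, seed=42)
assert X[0] == 0.0 and len(X) == 2**8 + 1
assert X == skorokhod_scheme(0.0, sigma, n=8, seed=42)  # reproducible path for a fixed seed
assert all(math.isfinite(v) for v in X)
print("discrete scheme produced a finite path of", len(X), "points")
```

The tightness estimates that follow are what justify extracting a weak limit from these paths as n → ∞.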

t(.A- xijt = tc.rc.jkltxatgzrzsj/zallgsn 1 rt

k 0

t

= cfj(A%.((2nsJ/2n))ds0

wherems usual' c = O'O'V. If we suppose that cjtz) < M for all , j and z, itfollowsthat if s < , then

I(A'n', -Yjlt - (.Xk'

, Xnilal ; Mtf - s)

Page 148: Stochastic Calculus

286 Chapter 8 Weak Con vezgence

so (5.1) in Chapter 3 implies

E sup_{u∈(s,t)} |Xⁱ_n(u) − Xⁱ_n(s)|^p ≤ C_p E | ⟨Xⁱ_n⟩_t − ⟨Xⁱ_n⟩_s |^{p/2} ≤ C_p ( M(t − s) )^{p/2}

Taking p = 4, we see that

E sup_{u∈(s,t)} |Xⁱ_n(u) − Xⁱ_n(s)|⁴ ≤ C M²(t − s)²

Using (3.6) to check (ii) in (3.4), and then noting that X_n(0) = x so that (i) in (3.4) is trivial, it follows that the sequence X_n is tight. Invoking Prohorov's Theorem (2.1) now, we can conclude that there is a subsequence X_{n(k)} that converges weakly to a limit X. We claim that X satisfies MP(0, a). To prove this, we will show

(4.1) Lemma. If f ∈ C² and Lf = ½ Σ_{i,j} a_ij D_ij f, then


Skorokhod's representation theorem, (1.4), implies that we can construct processes Y_k with the same distributions as the X_{n(k)} on some probability space, in such a way that with probability 1, as k → ∞, Y_k(t) converges to Y(t) uniformly on [0, T] for any T < ∞. If s < t and g : C → R is a bounded continuous function that is measurable with respect to F_s, then

0 = lim_k E( g(Y_k) [ f(Y_k(t)) − f(Y_k(s)) − ∫_s^t L f(Y_k(r)) dr ] )

Since this holds for any continuous g, an application of the monotone class theorem shows

E( g(Y) [ f(Y_t) − f(Y_s) − ∫_s^t L f(Y_r) dr ] ) = 0

and it follows that

f(X_t) − f(X_0) − ∫₀ᵗ L f(X_s) ds is a local martingale

Once this is done, the desired conclusion follows by applying the result to f(x) = x_i and f(x) = x_i x_j.

Proof It suffices to show that if f, D_i f, and D_ij f are bounded, then the process above is a martingale. Itô's formula implies that

8.5. Donsker's Theorem

Let ξ₁, ξ₂, . . . be i.i.d. with Eξ_i = 0 and Eξ_i² = 1. Let S_n = ξ₁ + · · · + ξ_n be the nth partial sum. The most natural way to turn S_m, 0 ≤ m ≤ n, into a process indexed by 0 ≤ t ≤ 1 is to let

B̂ⁿ_t = S_{[nt]}/√n  where [x] is the largest integer ≤ x.

To have a continuous trajectory we will instead let

which proves (4.1).

f(X_n(t)) − f(X_n(0)) = Σ_i ∫₀ᵗ D_i f(X_n(r)) dXⁱ_n(r)

  + ½ Σ_{i,j} ∫₀ᵗ D_ij f(X_n(r)) d⟨Xⁱ_n, Xʲ_n⟩_r

Bⁿ_t = S_m/√n  if t = m/n

Bⁿ_t is linear  if t ∈ [m/n, (m+1)/n]

(5.1) Donsker's Theorem. As n → ∞, Bⁿ ⇒ B, where B is a standard Brownian motion.
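Both interpolations are easy to realize numerically. The sketch below builds B̂ⁿ (piecewise constant) and Bⁿ (piecewise linear) from one sample of the walk; the helper name and grids are our own choices:

```python
import numpy as np

def walk_interpolations(xi, t):
    """Return (hat-B^n, B^n) evaluated at times t in [0,1] for steps xi."""
    n = len(xi)
    S = np.concatenate(([0.0], np.cumsum(xi)))                  # S_0, ..., S_n
    B_hat = S[np.floor(n * t).astype(int)] / np.sqrt(n)         # S_[nt] / sqrt(n)
    B_lin = np.interp(t, np.arange(n + 1) / n, S / np.sqrt(n))  # linear between m/n
    return B_hat, B_lin

rng = np.random.default_rng(1)
xi = rng.choice([-1.0, 1.0], size=1000)          # simple random walk steps
t = np.linspace(0.0, 1.0, 641)
B_hat, B_lin = walk_interpolations(xi, t)
```

Note the two versions differ by at most max_m |ξ_m|/√n on any interval, which is why the proof below can pass freely between them.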

Proof We will prove this result using (3.2). There are two things to do:

Convergence of finite dimensional distributions. Since S_{[nt]} is the sum of [nt] independent random variables, it follows from the central limit theorem that if 0 < t₁ < · · · < t_m ≤ 1 then

( B̂ⁿ_{t₁}, B̂ⁿ_{t₂} − B̂ⁿ_{t₁}, . . . , B̂ⁿ_{t_m} − B̂ⁿ_{t_{m−1}} ) ⇒ ( B_{t₁}, B_{t₂} − B_{t₁}, . . . , B_{t_m} − B_{t_{m−1}} )

To extend the last conclusion from B̂ⁿ to Bⁿ, we begin by observing that if r_n = nt − [nt], then

Bⁿ_t = B̂ⁿ_t + r_n ξ_{[nt]+1}/√n

So it follows from the definition of X_n that

⟨Xⁱ_n, Xʲ_n⟩_t = ∫₀ᵗ a_ij( X_n([2ⁿu]/2ⁿ) ) du

so if we let

L_n(r) = ½ Σ_{i,j} a_ij( X_n([2ⁿr]/2ⁿ) ) D_ij f( X_n(r) )

then f(X_n(t)) − f(X_n(s)) − ∫_s^t L_n(r) dr is a local martingale.


Since 0 ≤ r_n < 1, we have r_n ξ_{[nt]+1}/√n → 0 in probability, and the converging together lemma, (1.3), implies that the individual random variables Bⁿ_t ⇒ B_t.

To treat the vector, we observe that

( Bⁿ_{t₁} − B̂ⁿ_{t₁}, . . . , Bⁿ_{t_m} − B̂ⁿ_{t_m} ) → 0

in probability, so it follows from (1.3) that

( Bⁿ_{t₁}, Bⁿ_{t₂} − Bⁿ_{t₁}, . . . , Bⁿ_{t_m} − Bⁿ_{t_{m−1}} ) ⇒ ( B_{t₁}, B_{t₂} − B_{t₁}, . . . , B_{t_m} − B_{t_{m−1}} )

Using the fact that (x₁, x₂, . . . , x_m) → (x₁, x₁ + x₂, . . . , x₁ + · · · + x_m) is a continuous mapping and invoking (1.2), it follows that the finite dimensional distributions of Bⁿ converge to those of B.

Tightness. The L² maximal inequality for martingales (see e.g., (4.3) in Chapter 4 of Durrett (1995)) implies

E max_{0≤l≤k} S_l² ≤ 4 E S_k²

Taking k = n/m and using Chebyshev's inequality, it follows that

P( max_{0≤l≤n/m} |S_l| > ε√n ) ≤ 4/mε²

or, writing things in terms of Bⁿ,

P( max_{k/m≤t≤(k+1)/m} |Bⁿ_t − Bⁿ_{k/m}| > ε ) ≤ 4/mε²

Since we need m intervals of length 1/m to cover [0,1], the last estimate is not good enough to prove tightness. To improve this, we will truncate and compute fourth moments. To isolate these details we will formulate a general result.

(5.2) Lemma. Let ξ₁, ξ₂, . . . be i.i.d. with mean zero. Let ξ̄_i = ξ_i 1_(|ξ_i| ≤ M), let S̄_k = ξ̄₁ + · · · + ξ̄_k, let α(M) = E( ξ_i²; |ξ_i| > M ), let μ_M = Eξ̄_i, and let ν_M = E( ξ̄_i² ). Then we have

(i) P( ξ_i ≠ ξ̄_i ) ≤ M⁻² E( ξ_i²; |ξ_i| > M ) = α(M)/M²

(ii) |μ_M| ≤ E( |ξ_i|; |ξ_i| > M ) ≤ α(M)/M

(iii) If M ≥ 1 and α(M) ≤ 1, then

E( S̄_k − kμ_M )⁴ ≤ C₁kM² + C₂k²

where C₁ and C₂ are explicit constants.

Remark. We give explicit values of C₁ and C₂ in the proof only to make it clear that they depend only on μ_M and ν_M.

Proof Only (iii) needs to be proved. Since E(ξ̄_i − μ_M) = 0 and the ξ̄_i are independent,

E( S̄_k − kμ_M )⁴ = E( Σ_{i=1}^k (ξ̄_i − μ_M) )⁴ = k E(ξ̄_i − μ_M)⁴ + 3k(k−1) ( E(ξ̄_i − μ_M)² )²

If p is an even integer (we only care about p = 2, 4) then

E(ξ̄_i − μ_M)^p ≤ 2^p ( E ξ̄_i^p + μ_M^p )

From this, (ii), and our assumptions, it follows that

E(ξ̄_i − μ_M)² ≤ 4( ν_M + μ_M² )

('

'. . . .

This gives tlze term Cz'2. To estimate the fourth moment we notice thatM

E-#'?)

= 4z3P (16I> z) dN '

() .

M

s 2M2 2z.P(l6 I> z) tfz = 2.:1*2p%

So using our inequality for the pth power again, we haveL

E - puj* K 16(#>4 + 242pwy)'''

. ..

Since we have supposed M k 1, this gives the term CJM2 artd the proof iscomplete. El

We will apply (5.2) with M = ε√n. Part (i) implies

(5.3) n P( ξ_i ≠ ξ̄_i ) ≤ n · α(ε√n)/ε²n → 0

by the dominated convergence theorem. Since μ = Eξ_i = 0, part (ii) implies

(5.4) |μ_{ε√n}| ≤ α(ε√n)/ε√n = o( n^(−1/2) )


as n → ∞. Using the L⁴ maximal inequality for martingales (again see e.g., (4.3) in Chapter 4 of Durrett (1995)) and part (iii), it follows that if k = n/m and n is large then (here and in what follows C will change from line to line)

E sup_{0≤l≤k} |S̄_l − lμ_M|⁴ ≤ C E|S̄_k − kμ_M|⁴ ≤ C( kM² + k² ) ≤ C( ε²n²/m + n²/m² )

moUsiug Chebyshev's inequality now it follows that

(5.5)

Let B = Jsj/v-n and Ik,m = l/rlz,(k+ 1)/rn1. Adding up rrl of teprobabilitiesin (5.5),it follows that

1limsuoP max max I.n - hunt-I > el9 < c6r4 2 + -

a....

-x

0 < < m , EIy . m' @ - J H* ' ' -

m

To turn this into an estimate of the modulus of continuity, we note that

. :2 1. P sup lxi - kpu lk EWn/9 C%-* - + =

0KK; r rqo


Now the maximum oscillation of Bⁿ over [s, t] is smaller than the maximum oscillation of B̂ⁿ over [ [ns]/n, ([nt]+1)/n ], so

(5.7) w_δ(Bⁿ) ≤ w_{δ+1/n}(B̂ⁿ)

and it follows that

limsup_{n→∞} P( w_δ(Bⁿ) > ε ) ≤ ε

This verifies (ii) in (3.4) and completes the proof of Donsker's theorem. □

The main motivation for proving Donsker's theorem is that it gives, as corollaries, a number of interesting facts about random walks. The key to the vault is the continuous mapping theorem, (1.2), which with (5.1) implies:

(5.8) Theorem. If ψ : C[0,1] → R has the property that it is continuous P₀-a.s., then ψ(Bⁿ) ⇒ ψ(B).

Example 5.1. Let ψ(ω) = max{ ω(t) : 0 ≤ t ≤ 1 }. It is easy to see that |ψ(ω) − ψ(ω̃)| ≤ ||ω − ω̃||, so ψ : C[0,1] → R is continuous, and (5.8) implies

max_{0≤m≤n} S_m/√n ⇒ M₁ ≡ max_{0≤t≤1} B_t

To complete the picture, we observe that by (3.8) in Chapter 1 the distribution of the right-hand side is

P₀( M₁ ≥ a ) = P₀( T_a ≤ 1 ) = 2 P₀( B₁ ≥ a )
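The convergence in Example 5.1 is visible in simulation: for simple random walk, P( max_{m≤n} S_m ≥ a√n ) is already close to 2P₀(B₁ ≥ a) at moderate n. A Monte Carlo sketch (the sample sizes are arbitrary choices of ours):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
n, reps, a = 400, 20000, 1.0
steps = rng.choice([-1.0, 1.0], size=(reps, n))   # reps independent walks
walk_max = np.max(np.cumsum(steps, axis=1), axis=1)
p_walk = np.mean(walk_max >= a * sqrt(n))         # P(max_m S_m >= a sqrt(n))
p_bm = 1.0 - erf(a / sqrt(2.0))                   # 2 P_0(B_1 >= a), B_1 ~ N(0,1)
```

With these parameters p_walk and p_bm agree to within Monte Carlo error, illustrating the reflection-principle identity on the right-hand side above.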

Example 5.2. Let ψ(ω) = sup{ t ≤ 1 : ω(t) = 0 }. This time ψ is not continuous, for if ω_ε has ω_ε(0) = 0, ω_ε(1/3) = 1, ω_ε(2/3) = ε, ω_ε(1) = 2, and ω_ε is linear on each interval [i/3, (i+1)/3], then ψ(ω₀) = 2/3 but ψ(ω_ε) = 0 for ε > 0. It is easy to see that if ψ(ω) < 1 and ω(t) has positive and negative values in each interval (ψ(ω) − δ, ψ(ω)), then ψ is continuous at ω. By arguments in Example 3.1 of Chapter 1, the last set has P₀ measure 1. (If the zero at ψ(ω) was isolated on the left, it would not be isolated on the right.) Using (5.8) now,

sup{ m ≤ n : S_{m−1} · S_m ≤ 0 }/n ⇒ L ≡ sup{ t ≤ 1 : B_t = 0 }

The distribution of L, given in (4.2) in Chapter 1, is an arcsine law.

Example 5.3. Let ψ(ω) = |{ t ∈ [0,1] : ω(t) > a }|. The point ω ≡ a shows that ψ is not continuous, but it is easy to see that ψ is continuous at paths ω with |{ t ∈ [0,1] : ω(t) = a }| = 0. Fubini's theorem implies that

w_{1/m}(f) ≤ 3 max_{0≤k<m} max_{k/m≤s≤(k+1)/m} |f(s) − f(k/m)|

since, for example, if k/m ≤ s ≤ (k+1)/m ≤ t ≤ (k+2)/m, then

|f(t) − f(s)| ≤ |f(t) − f((k+1)/m)| + |f((k+1)/m) − f(k/m)| + |f(k/m) − f(s)|

If we pick m so that Cε⁻⁴( ε² + 1/m ) ≤ ε, it follows that

(5.6) limsup_{n→∞} P( w_{1/m}(B̄ⁿ) > ε/3 ) ≤ ε

To convert this into an estimate of the modulus of continuity of B̂ⁿ, we begin by observing that (5.3) and (5.4) imply

P( sup_{0≤t≤1} |B̂ⁿ_t − B̄ⁿ_t| > ε/3 ) → 0

as n → ∞, so the triangle inequality implies

limsup_{n→∞} P( w_{1/m}(B̂ⁿ) > ε ) ≤ ε

E |{ t ∈ [0,1] : B_t = a }| = ∫₀¹ P₀( B_t = a ) dt = 0


so ψ is continuous P₀-a.s. With a little work (5.8) implies

|{ m ≤ n : S_m > a√n }| / n ⇒ |{ t ∈ [0,1] : B_t > a }|

Before doing that work, we would like to observe that (9.2) in Chapter 4 shows that |{ t ∈ [0,1] : B_t > 0 }| has an arcsine law under P₀.

Proof An application of (5.8) gives that for any c,

|{ t ∈ [0,1] : Bⁿ_t > c }| ⇒ |{ t ∈ [0,1] : B_t > c }|

To convert this into a result about |{ m ≤ n : S_m > a√n }|, we note that if ε > 0, then


we have

∫₀¹ (B̂ⁿ_t)^k dt = (1/n) Σ_{m=0}^{n−1} ( S_m/√n )^k = n^(−1−k/2) Σ_{m=0}^{n−1} S_m^k

For fixed M we have P( max_{m≤n} |ξ_m| > δ√n ) → 0 by (5.9), so using Example 5.1 and (1.1) it follows that

liminf_{n→∞} P( G_n(M) ) ≥ P( max_{0≤t≤1} |B_t| < M )

The right-hand side is close to 1 if M is large so

(5.9) P( max_{m≤n} |ξ_m| > ε√n ) ≤ n P( |ξ_1| > ε√n ) ≤ ε⁻² E( ξ_1²; |ξ_1| > ε√n ) → 0

by dominated convergence, and on { max_{m≤n} |ξ_m| ≤ ε√n } we have

|{ t ∈ [0,1] : Bⁿ_t > a + ε }| ≤ |{ m ≤ n : S_m > a√n }| / n ≤ |{ t ∈ [0,1] : Bⁿ_t > a − ε }|

Combining this with the first conclusion of the proof, and using the fact that a → |{ t ∈ [0,1] : B_t > a }| is continuous at a with probability one, we arrive easily at the desired conclusion.

Example 5.4. Let ψ(ω) = ∫_[0,1] ω(t)^k dt, where k > 0 is an integer. ψ is continuous, so applying (5.8) gives

∫₀¹ (Bⁿ_t)^k dt ⇒ ∫₀¹ B_t^k dt

Since

∫₀¹ (Bⁿ_t)^k dt − n^(−1−k/2) Σ_{m=1}^n S_m^k → 0

in probability, it follows from the converging together lemma (1.3) that

n^(−1−k/2) Σ_{m=1}^n S_m^k ⇒ ∫₀¹ B_t^k dt

It is remarkable that the last result holds under the assumption that Eξ_i = 0 and Eξ_i² = 1, i.e., we do not need to assume that E|ξ_i|^k < ∞.
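Example 5.4 can be checked in simulation for k = 2: E ∫₀¹ B_t² dt = ∫₀¹ t dt = 1/2, so the scaled power sums n⁻² Σ S_m² should average about 1/2. A sketch (parameters are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, k = 500, 4000, 2
xi = rng.choice([-1.0, 1.0], size=(reps, n))       # walks with Exi=0, Exi^2=1
S = np.cumsum(xi, axis=1)
stat = np.sum(S ** k, axis=1) / n ** (1 + k / 2)   # n^{-1-k/2} sum_m S_m^k
mean_est = stat.mean()                             # limit mean is 1/2 for k = 2
```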

8.6. The Space D

In the previous section, forming the piecewise linear approximation was an annoying bookkeeping detail. In the next section, when we consider processes that jump at random times, making them piecewise linear will be a genuine nuisance. To deal with that problem, and to educate the reader, we will introduce the space D([0,1], R^d) of functions from [0,1] into R^d that are right continuous and have left limits. Since we only need two simple results, (6.4) and (6.5) below, and one can find this material in Chapter 3 of Billingsley (1968), Chapter 3 of Ethier and Kurtz (1986), or Chapter VI of Jacod and Shiryaev (1987), we will content ourselves to simply state the results.

We begin by defining the Skorokhod topology on D. To motivate this, consider

Example 6.1. For 1 ≤ n ≤ ∞ let

∫₀¹ Bⁿ_t dt ⇒ ∫₀¹ B_t dt

To convert this into a result about the original sequence, we begin by observing that if x < y with |x − y| ≤ δ and |x|, |y| ≤ M, then

| x^(k+1)/(k+1) − y^(k+1)/(k+1) | ≤ δ(M+1)^k

From this it follows that on

G_n(M) = { max_{m≤n} |ξ_m| ≤ δ√n, max_{m≤n} |S_m| ≤ M√n }

f_n(t) = 0 for t ∈ [0, (n+1)/2n),  f_n(t) = 1 for t ∈ [(n+1)/2n, 1]


where (n+1)/2n = 1/2 for n = ∞. We certainly want f_n → f_∞, but ||f_n − f_∞|| = 1 for all n.

Let Λ be the class of strictly increasing continuous mappings of [0,1] onto itself. Such functions necessarily have λ(0) = 0 and λ(1) = 1. For f, g ∈ D define d(f, g) to be the infimum of those positive ε for which there is a λ ∈ Λ so that

sup_t |λ(t) − t| < ε and sup_t |f(t) − g(λ(t))| < ε

It is easy to see that d is a metric. If we consider f = f_n and g = f_m in Example 6.1, then for ε < 1 we must take λ((n+1)/2n) = (m+1)/2m, so

For f, g ∈ D define d°(f, g) to be the infimum of those positive ε for which there is a λ ∈ Λ so that

||λ|| < ε and sup_t |f(t) − g(λ(t))| < ε

It is easy to see that d° is a metric. The functions g_n in Example 6.2 have

d°(g_n, g_m) = min( 1, |log(n/m)| )

so they no longer form a Cauchy sequence. In fact, there are no more problems.

(6.1) Theorem. The space D is complete under the metric d°.

For the reader who is curious why we discussed the simpler metric d, we note:

(6.2) Theorem. The metrics d and d° are equivalent, i.e., they give rise to the same topology on D.

Our first step in studying weak convergence in D is to characterize tightness. Generalizing a definition from Section 7.3, we let

w_f(S) = sup{ |f(s) − f(t)| : s, t ∈ S }

for each S ⊂ [0,1]. For 0 < δ < 1 put

d(f_n, f_m) = (n+1)/2n − (m+1)/2m = 1/2n − 1/2m

When m = ∞ we have d(f_n, f_∞) = 1/2n, so f_n → f_∞ in the metric d.

We will see in (6.2) that d defines the correct topology on D. However, in view of (2.2), it is unfortunate that the metric d is not complete.

Example 6.2. For 1 ≤ n < ∞ let

g_n(t) = 0 for t ∈ [0, 1/2)
g_n(t) = 1 for t ∈ [1/2, (n+1)/2n)
g_n(t) = 0 for t ∈ [(n+1)/2n, 1]

In order to have ε < 1 in the definition of d(g_n, g_m), we must have λ(1/2) = 1/2 and λ((n+1)/2n) = (m+1)/2m, so

d(g_n, g_m) = | 1/2n − 1/2m |

The pointwise limit of g_n is g_∞ ≡ 0, but d(g_n, g_∞) = 1. We leave it to the reader to show:

w''_f(δ) = inf_{t_i} max_{0<i≤r} w_f( [t_{i−1}, t_i) )

where the infimum extends over the finite sets {t_i} with

0 = t₀ < t₁ < · · · < t_r = 1 and t_i − t_{i−1} > δ for 1 ≤ i ≤ r

The analogue of (3.4) in the current setting is

(6.3) Theorem. The sequence μ_n is tight if and only if for each ε > 0 there are n₀, M, and δ so that

(i) μ_n( |x(0)| > M ) ≤ ε for all n ≥ n₀

(ii) μ_n( w''_x(δ) > ε ) ≤ ε for all n ≥ n₀

Note that the only difference from (3.4) is the '' in (ii). If we remove the '' we get the main result we need.

Exercise 6.1. If h ∈ D then liminf_{n→∞} d(g_n, h) > 0.

To fix the problem with completeness, we require that λ be close to the identity in a more stringent sense: the slopes of all of its chords are close to 1. If λ ∈ Λ, let

||λ|| = sup_{s≠t} | log( (λ(t) − λ(s)) / (t − s) ) |


(6.4) Theorem. If for each ε > 0 there are n₀, M, and δ so that

(i) μ_n( |x(0)| > M ) ≤ ε for all n ≥ n₀

(ii) μ_n( w_x(δ) > ε ) ≤ ε for all n ≥ n₀

then μ_n is tight and every subsequential limit μ has μ(C) = 1.

(6.1)-(6.4) are Theorems 14.2, 14.1, 15.2, and 15.5 in Billingsley (1968). We will also need the following, which is a consequence of the proof of (3.3).

(6.5) Theorem. If each subsequence of μ_n has a further subsequence that converges to μ, then μ_n ⇒ μ.

8.7. Convergence to Diffusions

In this section we will prove a result, due to Stroock and Varadhan, about the convergence of Markov chains to diffusion processes. Our treatment follows that in Chapter 11 of Stroock and Varadhan (1979), but we will discuss both discrete and continuous time in parallel. Our duplicity lengthens the proof somewhat, but is preferable, we believe, to the usual (somewhat dangerous) cop-out: "the same argument with minor changes handles the other case."

In discrete time, the basic data is a sequence of transition probabilities Π_h(x, dy) for a Markov chain Y^h_m, m = 0, 1, 2, . . ., taking values in S_h ⊂ R^d, i.e.,

P( Y^h_{m+1} ∈ A | Y^h_m = x ) = Π_h(x, A) for x ∈ S_h and Borel sets A ⊂ R^d

In this case we define X^h_t = Y^h_{[t/h]}, i.e., we make X^h constant on the intervals [mh, (m+1)h).

In continuous time, the basic data is a sequence of transition rates Q_h(x, dy) for a Markov chain X^h_t, t ≥ 0, taking values in S_h ⊂ R^d, i.e.,

P( X^h_{s+t} ∈ A | X^h_s = x ) = t Q_h(x, A) + o(t) for x ∈ S_h and Borel sets A with x ∉ A

e 7 , wWNote that although z e

.,1.

is excluded from the interpretation, we will allow

i hich case jumps from z to z (whichar invilble) ccur at aQ(z,(zJ)> 0 n wpositive rate. To have a well behaved Markov chain we will ttpse

.E

.

(B) For any compact set z'f, supwox Qs(z,Rd) < x.

in the limit, then we have weak convergence. To make a precise statement we need some notation. To incorporate both cases we introduce a kernel

K_h(x, dy) = Π_h(x, dy)/h in discrete time, K_h(x, dy) = Q_h(x, dy) in continuous time

and define

a^h_ij(x) = ∫_{|y−x|≤1} (y_i − x_i)(y_j − x_j) K_h(x, dy)

b^h_i(x) = ∫_{|y−x|≤1} (y_i − x_i) K_h(x, dy)

Δ^h_ε(x) = K_h( x, B(x, ε)^c )

where B(x, ε) = { y : |y − x| < ε }. Suppose

(A) a_ij and b_i are continuous coefficients for which the martingale problem is well posed, i.e., for each x there is a unique measure P_x on (C, C) so that the coordinate maps X_t(ω) = ω(t) satisfy P_x(X_0 = x) = 1 and

Xⁱ_t − ∫₀ᵗ b_i(X_s) ds and Xⁱ_t Xʲ_t − ∫₀ᵗ a_ij(X_s) ds

are local martingales.

(7.1) Theorem. Suppose in continuous time that (B) holds, and in either case that (A) holds and, for each i, j, R < ∞, and ε > 0,

(i) lim_{h→0} sup_{|x|≤R} | a^h_ij(x) − a_ij(x) | = 0

(ii) lim_{h→0} sup_{|x|≤R} | b^h_i(x) − b_i(x) | = 0

(iii) lim_{h→0} sup_{|x|≤R} Δ^h_ε(x) = 0

If X^h_0 = x_h → x, then X^h ⇒ X, the solution of the martingale problem starting from x.

. . ..'

. ..' '

...

g'

.

Renaz-sk tlere=> denotes convergence in D((0, 1)tRd) but the result can bed for all T < x.triviallygnetalized to give convergencr D((0, T), R )

In (i),(ii),and (iii)the sup is taken only over points z e Sh. Condition(iii)' annot hold unless the points in Ss are getting closer together. Howevei,there is no implicit mssumption that Ss is becoming dense in al1 of Rd.

Proof The rest of the section is devoted to the proof of this result. We will first prove the result under the stronger assumptions that for all i, j, and ε > 0 we have

(i') lim_{h→0} sup_x | a^h_ij(x) − a_ij(x) | = 0   (iv) sup_{h,x} | a^h_ij(x) | < ∞

(ii') lim_{h→0} sup_x | b^h_i(x) − b_i(x) | = 0   (v) sup_{h,x} | b^h_i(x) | < ∞

(iii') lim_{h→0} sup_x Δ^h_ε(x) = 0   (vi) sup_{h,x} Δ^h_1(x) < ∞

and in continuous time that (B') sup_x Q_h(x, R^d) < ∞.

a. Tightness

If f is bounded and measurable, let

L_h f(x) = ∫ K_h(x, dy) ( f(y) − f(x) )

As the notation may suggest, we expect this to converge to

Lf(x) = ½ Σ_{i,j} a_ij(x) D_ij f(x) + Σ_i b_i(x) D_i f(x)

To explain the reason for this definition we note that if f is bounded and measurable then

(7.2a) In discrete time, f(X^h_kh) − h Σ_{j=0}^{k−1} L_h f(X^h_jh), k = 0, 1, 2, . . ., is a martingale.

(7.2b) In continuous time, f(X^h_t) − ∫₀ᵗ L_h f(X^h_s) ds is a martingale.

Proof (7.2a) follows easily from the definition of L_h and induction. Since we have (B'), (7.2b) follows from (2.1) and (1.6) in Chapter 7. □
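For a concrete instance of how L_h approximates L, consider the walk that steps ±√h with probability ½ each, so Π_h(x, ·) puts mass ½ at x ± √h and K_h = Π_h/h. Then b ≡ 0, a ≡ 1, and L_h f(x) = ( f(x+√h) + f(x−√h) − 2f(x) )/2h → ½ f''(x). A numerical sketch of this (our example, not from the text):

```python
import numpy as np

def L_h(f, x, h):
    """L_h f(x) = sum_y K_h(x,{y})(f(y)-f(x)) for the +/- sqrt(h) walk."""
    s = np.sqrt(h)
    return (0.5 * f(x + s) + 0.5 * f(x - s) - f(x)) / h

f = lambda x: np.sin(x)
x = 0.7
target = -0.5 * np.sin(x)            # Lf(x) = (1/2) f''(x) since a = 1, b = 0
errs = [abs(L_h(f, x, h) - target) for h in (1e-2, 1e-3, 1e-4)]
```

The error shrinks linearly in h, consistent with Lemma (7.13) below for this simplest kernel.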

The first step in the tightness proof is to see what happens when we apply L_h to f_{ε,v}(x) = g_ε(x − v), where g_ε(x) = g(|x|²/ε²) and


where x_{r,y} = x + r(y − x). Integrating with respect to K_h(x, dy) over the set |y − x| ≤ 1 and using the triangle inequality, we have

L_h f(x) ≤ ∇f(x) · ∫_{|y−x|≤1} (y − x) K_h(x, dy)

  + ½ Σ_{i,j} ∫_{|y−x|≤1} (y_i − x_i)(y_j − x_j) D_ij f(x_{r,y}) K_h(x, dy)

  + 2||f||_∞ K_h( x, B(x, 1)^c )

Let A_ε = sup_x |∇f_ε(x)| and B_ε = sup_x ||D²f_ε(x)||, where

||m|| = sup_{|u|=1} | Σ_{i,j} u_i m_ij u_j |

Noticing that the Cauchy-Schwarz inequality implies

(7.5) | Σ_i u_i v_i | ≤ |u| |v|

and the definition of ||·|| gives

(7.6) | Σ_{i,j} (y_i − x_i)(y_j − x_j) D_ij f(z) | ≤ |y − x|² ||D²f(z)||

it follows that

L_h f(x) ≤ A_ε | ∫_{|y−x|≤1} (y − x) K_h(x, dy) |

  + B_ε ∫_{|y−x|≤1} |y − x|² K_h(x, dy) + 2||f||_∞ K_h( x, B(x, 1)^c )

g(z) = (1 − z²)(1 − z)³ for 0 ≤ z ≤ 1, g(z) = 0 for z ≥ 1

Now

(7.7) Σ_i a^h_ii(x) = ∫_{|y−x|≤1} |y − x|² K_h(x, dy)

so, recalling the definitions of b^h_i and Δ^h_1, then using (iv)-(vi), the desired result follows. □

To estimate the oscillations of the sample paths, we will let τ₀ = 0,

τ_k = inf{ t > τ_{k−1} : |X^h_t − X^h_{τ_{k−1}}| ≥ ε/4 }

N = inf{ k ≥ 0 : τ_k > 1 },  θ = min{ τ_k − τ_{k−1} : 1 ≤ k ≤ N }

η = max{ |X^h(t) − X^h(t−)| : 0 < t ≤ 1 }


To relate these definitions to tightness, we will now prove

(7.8) Lemma. If θ > δ and η ≤ ε/4 then w_{X^h}(δ) ≤ ε.

Proof To check this we note that if θ > δ and 0 < t − s < δ there are two cases to consider.

Case 1. τ_{k−1} ≤ s < t < τ_k

|f(t) − f(s)| ≤ |f(t) − f(τ_{k−1})| + |f(τ_{k−1}) − f(s)| ≤ 2ε/4

Case 2. τ_{k−1} ≤ s < τ_k ≤ t < τ_{k+1}

|f(t) − f(s)| ≤ |f(t) − f(τ_k)| + |f(τ_k) − f(τ_k−)| + |f(τ_k−) − f(τ_{k−1})| + |f(τ_{k−1}) − f(s)| ≤ ε □

Tightness proof, discrete time. One of the probabilities in (7.8) is trivial to estimate:

(7.9a) P_x( η ≥ ε/4 ) ≤ (1/h) sup_x Π_h( x, B(x, ε/4)^c ) ≤ sup_x Δ^h_{ε/4}(x) → 0

by (iii'). The first step in estimating P_x(θ ≤ δ) is to estimate P_x(τ₁ ≤ δ). To do this we begin by observing that (7.2a) and (7.3) imply

f_{ε/4}(X^h_kh) + C_{ε/4} kh, k = 0, 1, 2, . . ., is a submartingale

Using the optional stopping theorem at time τ ∧ δ, where τ = τ₁ rounded up to a multiple of h, and noticing X^h_τ ∉ B(x, ε/4), it follows that

E_x( f_{ε/4}(X^h_{τ∧δ}) + C_{ε/4}(τ ∧ δ) ) ≥ 1

or, rearranging and using τ ∧ δ ≤ δ,

(7.10) C_{ε/4} δ ≥ E_x( 1 − f_{ε/4}(X^h_{τ∧δ}) ) ≥ P_x( τ₁ ≤ δ )

Our next task is to estimate P_x(N > k). To do this, we begin by observing that

E_x( e^(−τ₁) ) ≤ P_x( τ₁ ≤ δ ) + e^(−δ) P_x( τ₁ > δ ) ≤ C_{ε/4}δ + e^(−δ)(1 − C_{ε/4}δ) ≡ γ < 1

Iterating and using the strong Markov property, it follows that E_x( e^(−τ_k) ) ≤ γ^k and

(7.11) P_x( N > k ) = P_x( τ_k ≤ 1 ) ≤ e · E_x( e^(−τ_k) ) ≤ e γ^k

Combining (7.10) and (7.11) we have

(7.12) P_x( θ ≤ δ ) ≤ k sup_x P_x( τ₁ ≤ δ ) + P_x( N > k ) ≤ k C_{ε/4} δ + e γ^k

If we pick k so that e γ^k ≤ ε/3 and then pick δ so that C_{ε/4} k δ ≤ ε/3, then (7.12), (7.9a), and (7.8) imply that for small h, sup_x P_x( w_{X^h}(δ) > ε ) ≤ ε. This verifies (ii) in (6.4). Since X^h_0 = x_h, (i) is trivial and the proof of tightness is complete in discrete time. □

Tightness proof, continuous time. One of the probabilities in (7.8) is trivial to estimate. Since jumps of size ≥ ε/4 occur at a rate smaller than sup_x Q_h( x, B(x, ε/4)^c ), and e^(−z) ≥ 1 − z,

(7.9b) P_x( η ≥ ε/4 ) ≤ 1 − exp( − sup_x Q_h( x, B(x, ε/4)^c ) ) ≤ sup_x Δ^h_{ε/4}(x) → 0

by (iii'). The first step in estimating P_x(θ ≤ δ) is to estimate P_x(τ₁ ≤ δ). To do this we begin by observing that (7.2b) and (7.3) imply

f_{ε/4}(X^h_t) + C_{ε/4} t, t ≥ 0, is a submartingale

Using the optional stopping theorem at time τ₁ ∧ δ, it follows that

E_x( f_{ε/4}(X^h_{τ₁∧δ}) + C_{ε/4}(τ₁ ∧ δ) ) ≥ 1

The remainder of the proof is identical to the one in discrete time. □

b. Convergence

Since we have assumed that the martingale problem has a unique solution, it suffices by (6.5) to prove that the limit of any convergent subsequence solves the martingale problem to conclude that the whole sequence converges. The first step is to show

(7.13) Lemma. If f ∈ C²_K then || L_h f − Lf ||_∞ → 0.


Proof Using (7.4), then integrating with respect to K_h(x, dy) over the set |y − x| ≤ 1, we have

(7.14) L_h f(x) = Σ_i b^h_i(x) D_i f(x) + ½ Σ_{i,j} ∫_{|y−x|≤1} (y_i − x_i)(y_j − x_j) D_ij f(x_{r,y}) K_h(x, dy)

  + ∫_{|y−x|>1} ( f(y) − f(x) ) K_h(x, dy)

Recalling the definition of Δ^h_1 and using (vi), it follows that the third term goes to 0 uniformly in x. To treat the first term, we note that (7.5) and (ii') imply

| Σ_i b^h_i(x) D_i f(x) − Σ_i b_i(x) D_i f(x) | ≤ sup_i sup_x | b^h_i(x) − b_i(x) | · Σ_i sup_x | D_i f(x) | → 0

uniformly in x.

Now the difference between the second term in (7.14) and ½ Σ_{i,j} a_ij(x) D_ij f(x) is smaller than

(7.15) ½ Σ_{i,j} | a^h_ij(x) D_ij f(x) − a_ij(x) D_ij f(x) |

  + ½ ∫_{|y−x|≤1} Σ_{i,j} | (y_i − x_i)(y_j − x_j) | · | D_ij f(x_{r,y}) − D_ij f(x) | K_h(x, dy)

By (7.5) the first term is smaller than

sup_{i,j} | a^h_ij(x) − a_ij(x) | · Σ_{i,j} sup_x | D_ij f(x) |

which goes to 0 uniformly in x by (i'). To estimate the second term in (7.15), we note that for any ε the integral over { y : ε < |y − x| ≤ 1 } goes to 0 uniformly in x by (iii') and the fact that the integrand is uniformly bounded. Using (7.6), the last piece

(7.16) ∫_{|y−x|≤ε} Σ_{i,j} | (y_i − x_i)(y_j − x_j) | · | D_ij f(x_{r,y}) − D_ij f(x) | K_h(x, dy)

  ≤ r(ε) ∫_{|y−x|≤ε} |y − x|² K_h(x, dy)


where r(ε) = sup_{|y−x|≤ε} || D²f(x_{r,y}) − D²f(x) ||

Now x_{r,y} is on the line segment connecting x and y, and D_ij f is continuous and has compact support, so r(ε) → 0 as ε → 0. Using (7.7) and (iv), the quantity in (7.16) converges to 0, and the proof of (7.13) is complete. □

Convergence proof, discrete time. Let h_n → 0 so that X^{h_n} converges weakly to a limit X⁰. (7.2a) implies

f( X^{h_n}_{kh_n} ) − h_n Σ_{j=0}^{k−1} L_{h_n} f( X^{h_n}_{jh_n} ), k = 0, 1, 2, . . ., is a martingale

To pass to the limit, we rewrite the martingale property by introducing a bounded continuous function g : D → R which is measurable with respect to F_s, and observe that if k_n = [s/h_n] + 1 and l_n = [t/h_n] + 1, the martingale property implies

E( g(X^{h_n}) [ f( X^{h_n}_{l_n h_n} ) − f( X^{h_n}_{k_n h_n} ) − h_n Σ_{j=k_n}^{l_n−1} L_{h_n} f( X^{h_n}_{jh_n} ) ] ) = 0

Skorokhod's representation theorem (1.4) implies that we can construct processes Y^n, 1 ≤ n ≤ ∞, with the same distributions as the X^{h_n}, so that Y^n → Y^∞ almost surely. Combining this with (7.13) and using the bounded convergence theorem, it follows that if f ∈ C²_K then

E( g(Y^∞) [ f(Y^∞_t) − f(Y^∞_s) − ∫_s^t Lf(Y^∞_r) dr ] ) = 0

Since this holds for all continuous g : C → R measurable with respect to F_s, it follows that if f ∈ C²_K then

f(X⁰_t) − f(X⁰_0) − ∫₀ᵗ Lf(X⁰_s) ds is a martingale

Applying the last result to smoothly truncated versions of f(x) = x_i and f(x) = x_i x_j, it follows that X⁰ is a solution of the martingale problem. Since the martingale problem has a unique solution, the desired conclusion now follows from (6.5). □


Convergence proof, continuous time. Let h_n → 0 so that X^{h_n} converges weakly to a limit X⁰. (7.2b) implies

f(X^{h_n}_t) − ∫₀ᵗ L_{h_n} f(X^{h_n}_s) ds, t ≥ 0, is a martingale

and repeating the discrete time proof shows that X⁰ is a solution of the martingale problem, and the desired result follows. □

c. Localization

To extend the result to the original assumptions, we let φ_k be a C^∞ function that is 1 on B(0, k) and 0 on B(0, k+1)^c, and define

K^k_h(x, dy) = φ_k(x) K_h(x, dy) + (1 − φ_k(x)) δ_x(dy)

'flrst

tw oarts' f the prof it,

'''' N .'

%'

' 'N #'

*'.#'

y 'follows that for each k, Xh,k is tight, and any subsequential limit solves

vth

martingale problem with coecients '

ctz) ='wk(z)c(z)

tz)= /k(z)(z)

We have supposed that the martingale problem for a and b has a unique nonexplosive solution X⁰. In view of (3.3), to prove (7.1) it suffices to show that given any sequence h_n → 0 there is a subsequence h_{n'} so that X^{h_{n'}} ⇒ X⁰. To prove this we note that since each sequence X^{h_n,k} is tight, a diagonal argument shows that we can select a subsequence h_{n'} so that for each k, X^{h_{n'},k} ⇒ X^{0,k}.

Let G_k = { ω : ω(t) ∈ B(0, k−1), 0 ≤ t ≤ 1 }. G_k is an open set, so if H ⊂ C is open, then (2.1) implies that

liminf_{n'→∞} P( X^{h_{n'},k} ∈ G_k ∩ H ) ≥ P( X^{0,k} ∈ G_k ∩ H ) = P( X⁰ ∈ G_k ∩ H )

since up until the exit from B(0, k−1), X^{0,k} is a solution of the martingale problem for b and a, and hence its distribution agrees with that of X⁰. Now, up to the first exit from B(0, k−1), X^{h_{n'}} has the same distribution as that of X^{h_{n'},k}, so we have

liminf_{n'→∞} P( X^{h_{n'}} ∈ H ) ≥ liminf_{n'→∞} P( X^{h_{n'}} ∈ G_k ∩ H ) ≥ P( X⁰ ∈ G_k ∩ H )

Since there is no explosion, the right-hand side increases to P( X⁰ ∈ H ) as k → ∞, and the proof is complete.


8.8. Examples

In this section we will give applications of (7.1), beginning with the following very simple situation.

Example 8.1. Ehrenfest Chain. Here, the physical system is a box filled with air and divided in half by a plane with a small hole in it. We model this mathematically by two urns that contain a total of 2n balls, which we think of as the air molecules. At each time we pick a ball from the 2n in the two urns at random and move it to the other urn, which we think of as a molecule going through the hole. We expect the number of balls in the left urn to be about n + C√n, so we let Z^n_m be the number of balls at time m in the left urn minus n, and let Y^{1/n}_m = Z^n_m/√n. The state space is S_{1/n} = { k/√n : −n ≤ k ≤ n }, while the transition probability is

Π_{1/n}( x, x + n^(−1/2) ) = (n − x√n)/2n

Π_{1/n}( x, x − n^(−1/2) ) = (n + x√n)/2n

When ε > n^(−1/2), Δ^{1/n}_ε(x) = 0 for all x, so (iii) holds. To check conditions (i) and (ii), we note that

b^{1/n}(x) = n( n^(−1/2) · (n − x√n)/2n − n^(−1/2) · (n + x√n)/2n ) = −x

a^{1/n}(x) = n( n^(−1) · (n − x√n)/2n + n^(−1) · (n + x√n)/2n ) = 1

The limiting coefficients b(x) = −x and a(x) ≡ 1 are Lipschitz continuous, so the martingale problem is well posed and (A) holds. Letting X^{1/n}_t = Y^{1/n}_{[nt]} and applying (7.1), it now follows that

(8.1) Theorem. As n → ∞, X^{1/n}_t converges weakly to an Ornstein-Uhlenbeck process X_t, i.e., the solution of

dX_t = −X_t dt + dB_t
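The computation above can be checked by simulating the chain: the scaled process Z_m/√n fluctuates like an OU process, whose stationary law is N(0, 1/2), matching the Binomial(2n, 1/2) variance n/2 of the left-urn count. A sketch with arbitrary parameters of our own:

```python
import numpy as np

def ehrenfest_scaled(n, steps, rng):
    """Ehrenfest chain: Z_m = (left-urn count) - n, reported as Z_m / sqrt(n)."""
    z = 0                                  # start with n balls in each urn
    out = np.empty(steps)
    for m in range(steps):
        # the chosen ball is in the left urn with probability (n + z) / 2n
        if rng.random() < (n + z) / (2 * n):
            z -= 1                         # it moves left -> right
        else:
            z += 1                         # it moves right -> left
        out[m] = z / np.sqrt(n)
    return out

rng = np.random.default_rng(4)
path = ehrenfest_scaled(n=100, steps=20000, rng=rng)  # OU time horizon 200
var_est = np.var(path)                     # OU stationary variance is 1/2
```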

Next we state a lemma that will help us check the hypotheses in the next two examples. The point here is to replace the truncated moments by ordinary moments, which are easier to compute. Let

ā^h_ij(x) = ∫ (y_i − x_i)(y_j − x_j) K_h(x, dy)

b̄^h_i(x) = ∫ (y_i − x_i) K_h(x, dy)

Δ̄^h_p(x) = ∫ |y − x|^p K_h(x, dy)


(8.2) Lemma. If p ≥ 2 and for all R < ∞ we have

(a) lim_{h→0} sup_{|x|≤R} | ā^h_ij(x) − a_ij(x) | = 0

(b) lim_{h→0} sup_{|x|≤R} | b̄^h_i(x) − b_i(x) | = 0

(c) lim_{h→0} sup_{|x|≤R} Δ̄^h_p(x) = 0

then (i), (ii), and (iii) of (7.1) hold.

Proof Taking the conditions in reverse order, we note that

Δ^h_ε(x) ≤ ε^(−p) Δ̄^h_p(x)

| b̄^h_i(x) − b^h_i(x) | ≤ ∫_{|y−x|>1} |y − x| K_h(x, dy) ≤ Δ̄^h_p(x)

Using the Cauchy-Schwarz inequality with the triviality (y_k − x_k)² ≤ |y − x|²,

| ā^h_ij(x) − a^h_ij(x) | ≤ ∫_{|y−x|>1} | (y_i − x_i)(y_j − x_j) | K_h(x, dy) ≤ ∫_{|y−x|>1} |y − x|² K_h(x, dy) ≤ Δ̄^h_p(x)

The desired conclusion follows easily from the last three observations. □

Our first two examples are the last two in Section 5.1, and date back to the beginnings of the subject; see Feller (1951).

Example 8.2. Branching Processes. Consider a sequence of branching processes (Z^n_m, m ≥ 0) in which the probability of k children, p^n_k, has mean 1 + β_n/n and variance σ_n². Suppose that

(A1) β_n → β ∈ (−∞, ∞)

(A2) σ_n² → σ² ∈ (0, ∞)

(A3) for any δ > 0, Σ_{k≥δn} k² p^n_k → 0

Following the motivation in Example 1.6 of Chapter 5, we let X^{1/n}_t = Z^n_{[nt]}/n. Our goal is to show

(8.3) Theorem. If (A1), (A2), and (A3) hold, then X^{1/n}_t converges weakly to Feller's branching diffusion X_t, i.e., the solution of

dX_t = βX_t dt + σ√(X_t) dB_t
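To see the scaling in (8.3) concretely, one convenient choice of offspring law satisfying (A1)-(A3) is Poisson(1 + β/n), which has variance 1 + β/n → 1 = σ². The Poisson choice and all parameters below are ours, not from the text:

```python
import numpy as np

def scaled_branching(n, beta, steps, rng):
    """Galton-Watson chain with Poisson(1 + beta/n) offspring, Z_0 = n,
    reported on the diffusion scale as Z_m / n."""
    z = n
    path = np.empty(steps + 1)
    path[0] = 1.0
    for m in range(1, steps + 1):
        z = int(rng.poisson(1.0 + beta / n, size=z).sum()) if z > 0 else 0
        path[m] = z / n                 # 0 is absorbing, as for the limit diffusion
    return path

rng = np.random.default_rng(5)
path = scaled_branching(n=500, beta=0.0, steps=500, rng=rng)   # time horizon 1
```

For β = 0 the scaled process is a nonnegative martingale, consistent with the driftless form of the limit.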


References. For a classical generating function approach and the history of the result, see Sections 3.3-3.4 of Jagers (1975). For an approach based on semigroups and generators, see Section 9.1 of Ethier and Kurtz (1986).

Proof We first prove the result under the assumption

(A3') for any δ > 0, Σ_{k≥δn} k² p^n_k = 0 for large n

Then we will argue that if n is large then with high probability we never see any families of size larger than nδ.

The limiting coefficients satisfy (3.3) in Chapter 5, so using (4.1) there, we see that the martingale problem is well posed. Calculations in Example 1.6 of Chapter 5 show that (a) and (b) hold. To check (c) with p = 4, we begin by noting that when Z^n_0 = i, Z^n_1 is the sum of i independent random variables ξ^n_j with mean 1 + β_n/n ≥ 0 and variance σ_n². Let δ > 0. Under (A3') we have in addition |ξ^n_j| ≤ nδ for large n. Using (a + b)⁴ ≤ 2⁴a⁴ + 2⁴b⁴, we have

Δ̄^{1/n}_4(x) = n⁻³ E( (Z^n_1 − nx)⁴ | Z^n_0 = nx )

  ≤ 16 n⁻³ (xβ_n)⁴ + 16 n⁻³ E( ( Z^n_1 − nx(1 + β_n/n) )⁴ | Z^n_0 = nx )

To bound the second term, we note that

E( ( Z^n_1 − nx(1 + β_n/n) )⁴ | Z^n_0 = nx ) = nx E( ξ^n_1 − (1 + β_n/n) )⁴ + 3nx(nx − 1) σ_n⁴

To bound E( ξ^n_1 − (1 + β_n/n) )⁴, we note that if 1 + β_n/n ≤ nδ,

E( ξ^n_1 − (1 + β_n/n) )⁴ = ∫₀^{nδ} 4x³ P( |ξ^n_1 − (1 + β_n/n)| > x ) dx

  ≤ 2(nδ)² ∫₀^{nδ} 2x P( |ξ^n_1 − (1 + β_n/n)| > x ) dx ≤ 2δ²σ_n²n²

Combining the last three estimates, we see that if |x| ≤ R then

Δ̄^{1/n}_4(x) ≤ 32δ²σ_n²R + 48R²σ_n⁴n⁻¹ + 16(Rβ_n)⁴n⁻³

Since δ > 0 is arbitrary, we have established (c), and the desired conclusion follows from (8.2) and (7.1).

To replace (A7/) by (A3) now, we observe that the convergence for thespecialcmse .

and use of the continuous mapping theorem as in Example 5.4i l 'ITIPy

n-1 1..2 zn

.x

dn m=)>

, &0m=0

Page 159: Stochastic Calculus

308 Chapter 8 Wak Con vergence

(A3) implies that the probability of a family of size larger than nδ is o(n⁻²). Thus if we truncate the original sequence by not allowing more than δ_n·n children, where δ_n → 0 slowly, then (A1), (A2), and (A3′) will hold and the probability of a difference between the two systems will converge to 0. □

Example 8.3. Wright-Fisher Diffusion. Recall the setup of Example 1.7 in Chapter 5. We have an urn with n letters in it which may be A or a. To build up the urn at time m + 1 we sample with replacement from the urn at time m, but with probability α/n we ignore the draw and place an a in, and with probability β/n we ignore the draw and place an A in. Let Z_m^n be the number of A's in the urn at time m, and let X_t^n = Z_{[nt]}^n / n. Our goal is to show

(8.4) Theorem. As n → ∞, X_t^n converges weakly to the Wright-Fisher diffusion X_t, i.e., the solution of

dX_t = ( −αX_t + β(1 − X_t) ) dt + √( X_t(1 − X_t) ) dB_t
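A short Euler discretization of this SDE makes the drift toward the mutation balance and the vanishing noise at the endpoints visible (an illustrative sketch; the clipping to [0, 1] and all numerical choices are our own):

```python
import numpy as np

def wright_fisher_path(x0, alpha, beta, T=1.0, n=2000, seed=1):
    """Euler sketch of dX = (-alpha*X + beta*(1-X)) dt + sqrt(X(1-X)) dB,
    clipped to [0,1]; illustrative, not the book's construction."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        drift = -alpha * x + beta * (1.0 - x)
        noise = np.sqrt(max(x * (1.0 - x), 0.0)) * rng.normal(0.0, np.sqrt(dt))
        x = min(max(x + drift * dt + noise, 0.0), 1.0)
    return x

x_end = wright_fisher_path(0.5, alpha=1.0, beta=1.0)
```

The diffusion coefficient x(1 − x) vanishes at 0 and 1, which is why the clipped Euler scheme stays in [0, 1] without distorting the dynamics much.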

References. Again the first results are due to Feller (1951). Chapter 10 of Ethier and Kurtz (1986) treats this problem and generalizations allowing more than 2 alleles.

Proof The limiting coefficients satisfy (3.3) in Chapter 5, so using (4.1) there we see that the martingale problem is well posed. Calculations in Example 1.7 in Chapter 5 show that (a) and (b) hold. To check condition (c) with p = 4 now, we note that if Z_1^n = nx then the distribution of Z_2^n is the same as that of S_n = ξ₁ + ... + ξ_n where ξ₁, ..., ξ_n ∈ {0, 1} are i.i.d. with

P( ξ_j = 1 ) = x(1 − α/n) + (1 − x)β/n

E|S_n − np|⁴ = E( Σ_{j=1}^n (ξ_j − p) )⁴ = n E(ξ − p)⁴ + 3n(n − 1)( E(ξ − p)² )² ≤ Cn²

since E|ξ − p|^m ≤ 1 for all m ≥ 0. From this and the inequality (a + b)⁴ ≤ 2⁴a⁴ + 2⁴b⁴ it follows that

n E( |Z_2^n/n − x|⁴ | Z_1^n = nx ) ≤ n( −αx + β(1 − x) )⁴/n⁴ + nCn²/n⁴ → 0

uniformly for x ∈ [0, 1] and the proof is complete. □

Our final example has a deterministic limit. In such situations the following lemma is useful.

(8.5) Lemma. If for all R < ∞ we have

(a) lim_n sup_{|x|≤R} |a^{1/n}(x)| = 0
(b) lim_n sup_{|x|≤R} |b^{1/n}(x) − b(x)| = 0

then (i), (ii), and (iii) of (7.1) hold with a(x) ≡ 0.

Proof The new (a) and (b) imply the ones in (8.2). To check (c) of (8.2) with p = 2 we note that

γ₂^{1/n}(x) = ∫ |y − x|² K^{1/n}(x, dy) = a^{1/n}(x)  □

Example 8.4. Epidemics. Consider a population with a fixed size of n individuals and think of a disease like measles, so that recovered individuals are immune to further infection. Let S_t^n and I_t^n be the number of susceptible and infected individuals at time t, so that n − (S_t^n + I_t^n) is the number of recovered individuals. We suppose that (S_t^n, I_t^n) makes transitions at the following rates:

(s, i) → (s + 1, i)      at rate α(n − s − i)
(s, i) → (s − 1, i + 1)  at rate βsi/n
(s, i) → (s, i − 1)      at rate γi

In words, each immune individual dies at rate α and is replaced by a new susceptible. Each susceptible individual becomes infected at rate β times the fraction of the population that is infected, i/n. Finally, each infected individual recovers at rate γ and enters the immune class.

Let X_t^n = ( S_{nt}^n/n, I_{nt}^n/n ). To check (a) in (8.5) we note that

a₁₁^{1/n}(x₁, x₂) = (1/n²) · { αn(1 − x₁ − x₂) + βnx₁x₂ }
a₁₂^{1/n}(x₁, x₂) = −(1/n²) · βnx₁x₂
a₂₂^{1/n}(x₁, x₂) = (1/n²) · { βnx₁x₂ + γnx₂ }

and all three terms converge to 0 uniformly in Γ = {(x₁, x₂) : x₁, x₂ ≥ 0, x₁ + x₂ ≤ 1}. To check (b) in (8.5) we observe

b₁^{1/n}(x₁, x₂) = αn(1 − x₁ − x₂) · (1/n) − βnx₁x₂ · (1/n) = α(1 − x₁ − x₂) − βx₁x₂ ≡ b₁(x)
b₂^{1/n}(x₁, x₂) = βnx₁x₂ · (1/n) − γnx₂ · (1/n) = βx₁x₂ − γx₂ ≡ b₂(x)


The limiting coefficients are Lipschitz continuous on Γ. Their values outside Γ are irrelevant, so we extend them to be Lipschitz continuous. It follows that the martingale problem is well posed and we have

(8.6) Theorem. As n → ∞, X_t^n converges weakly to X_t, the solution of the ordinary differential equation

dX_t = b(X_t) dt

Remark. If we set α = 0 then our epidemic model reduces to one considered in Sections 11.1-11.3 of Ethier and Kurtz (1986). In addition to (8.6) they prove a central limit theorem for √n( X_t^n − X_t ).

Solutions to Exercises

Chapter 1

1.1. Let A = { {ω : (ω(t₁), ω(t₂), ...) ∈ B} : B ∈ R^{1,2,...} }. Clearly, any A ∈ A is in the σ-field generated by the finite dimensional sets. To complete the proof, we only have to check that A is a σ-field. The first and main step is to note that if A = {ω : (ω(t₁), ω(t₂), ...) ∈ B} then A^c = {ω : (ω(t₁), ω(t₂), ...) ∈ B^c} ∈ A. To check that A is closed under countable unions, let A_n = {ω : (ω(s₁ⁿ), ω(s₂ⁿ), ...) ∈ B_n}, let t₁, t₂, ... be an ordering of {s_m^n : n, m ≥ 1}, and note that we can write A_n = {ω : (ω(t₁), ω(t₂), ...) ∈ B_n′}, so ∪_n A_n = {ω : (ω(t₁), ω(t₂), ...) ∈ ∪_n B_n′} ∈ A.

1.2. Let A_n = {ω : there is an s ∈ [0, 1] so that |B_t − B_s| ≤ C|t − s|^γ when |t − s| ≤ k/n}. For 1 ≤ i ≤ n − k + 1 let

Y_{i,n} = max{ |B((i + j)/n) − B((i + j − 1)/n)| : j = 0, 1, ..., k − 1 }
B_n = { at least one Y_{i,n} is ≤ (2k − 1)C/n^γ }

Again A_n ⊂ B_n, but this time if γ > 1/2 + 1/k, i.e., k(1/2 − γ) < −1, then

P(B_n) ≤ n P( |B(1/n)| ≤ (2k − 1)C/n^γ )^k
≤ n P( |B(1)| ≤ (2k − 1)C n^{1/2−γ} )^k ≤ C₁ n^{k(1/2−γ)+1} → 0

1.3. The first step is to observe that the scaling relationship (1.3) implies

(★) E( Δ²_{m,n} − t2⁻ⁿ )² = C t² 2⁻²ⁿ

where Δ_{m,n} = B(mt2⁻ⁿ) − B((m − 1)t2⁻ⁿ), while the definition of Brownian motion shows EΔ²_{m,n} = t2⁻ⁿ, and C = E( B(1)² − 1 )² < ∞. Using (★) and the definition of Brownian motion, it follows that if m ≠ m′ then Δ²_{m,n} − t2⁻ⁿ and Δ²_{m′,n} − t2⁻ⁿ are independent and have mean 0, so

E( Σ_{1≤m≤2ⁿ} Δ²_{m,n} − t )² = Σ_{1≤m≤2ⁿ} E( Δ²_{m,n} − t2⁻ⁿ )² = 2ⁿ C t² 2⁻²ⁿ


where in the last equality we have used (★) again. The last result and Chebyshev's inequality imply

P( | Σ_{1≤m≤2ⁿ} Δ²_{m,n} − t | > 1/n ) ≤ n² · C t² 2⁻ⁿ

Answers for Chapter 1 313

Integrating both sides over A_n = {T > n, B_n ∈ K}, recalling the definition of conditional expectation, and using our assumption shows

E_x( Y ∘ θ_n ; A_n ) ≥ a P_x(A_n)

The left-hand side equals P_x(A_{n+1}) and the desired result follows.

2.6. Since ∫₀ˢ h(r, B_r) dr ∈ F_s, we have

E( ∫₀ᵗ h(r, B_r) dr | F_s ) = ∫₀ˢ h(r, B_r) dr + E( ∫ₛᵗ h(r, B_r) dr | F_s )

To evaluate the second term we let Y(ω) = ∫₀^{t−s} h(s + u, ω_u) du and note

The right-hand side is summable, so the Borel-Cantelli lemma implies

P( | Σ_{m≤2ⁿ} Δ²_{m,n} − t | ≥ 1/n infinitely often ) = 0

2.1. Take Y(ω) = f(ω_{t−s}).

2.2. When f(x) = x_i, 2.1 becomes E_x( B_t^i | F_s ) = E_{B_s}( B_{t−s}^i ) = B_s^i since under P_x, B_{t−s}^i is normal with mean x_i. When f(x) = x_i x_j with i ≠ j, 2.1 becomes

E_x( B_t^i B_t^j | F_s ) = E_{B_s}( B_{t−s}^i B_{t−s}^j ) = B_s^i B_s^j

since under P_x, B_{t−s}^i and B_{t−s}^j are independent normals with means x_i and x_j.

2.3. Let Y = 1_{(T₀>t)} and note that T₀ ∘ θ₁ = R − 1, so Y ∘ θ₁ = 1_{(R>1+t)}. Using the Markov property gives

P_x( R > 1 + t | F₁ ) = P_{B₁}( T₀ > t )

Taking expected value now and recalling P_x( B₁ = y ) = p₁(x, y) gives

P_x( R > 1 + t ) = ∫ p₁(x, y) P_y( T₀ > t ) dy

2.4. Let Y = 1_{(T₀>1−t)} and note that Y ∘ θ_t = 1_{(L≤t)}. Using the Markov property gives

P( L ≤ t | F_t ) = P_{B_t}( T₀ > 1 − t )

Taking expected value now and recalling P₀( B_t = y ) = p_t(0, y) gives

P₀( L ≤ t ) = ∫ p_t(0, y) P_y( T₀ > 1 − t ) dy

2.5. We will prove the result by induction on n. By assumption it holds for n = 1. Suppose now it is true for n. Let Y(ω) = 1_{(T>1, ω₁∈K)}. Applying the Markov property at time n gives

E_x( Y ∘ θ_n | F_n ) = P_{B_n}( T > 1, B₁ ∈ K )

Y ∘ θ_s = ∫₀^{t−s} h(s + u, B_{s+u}) du

so the Markov property implies

E( ∫ₛᵗ h(r, B_r) dr | F_s ) = E_{B_s} ∫₀^{t−s} h(s + u, B_u) du

2.7. Since ∫₀ˢ c(B_r) dr ∈ F_s, we have

E_x( f(B_t) exp( ∫₀ᵗ c(B_r) dr ) | F_s ) = exp( ∫₀ˢ c(B_r) dr ) E_x( f(B_t) exp( ∫ₛᵗ c(B_r) dr ) | F_s )

To evaluate the second factor we let Y(ω) = f(ω_{t−s}) exp( ∫₀^{t−s} c(ω_u) du ) and note Y ∘ θ_s = f(B_t) exp( ∫₀^{t−s} c(B_{s+u}) du ), so the Markov property implies

E_x( f(B_t) exp( ∫ₛᵗ c(B_r) dr ) | F_s ) = E_{B_s}( f(B_{t−s}) exp( ∫₀^{t−s} c(B_u) du ) )

2.8. (2.8), (2.9), and the Markov property imply that with probability one there are times s₁ > t₁ > s₂ > t₂ > ... that converge to a so that B(s_n) = B(a) and B(t_n) > B(a). In each interval (s_{n+1}, s_n) there is a local maximum.

2.9. Z ≡ limsup_{t→∞} B(t)/f(t) is measurable with respect to the tail σ-field, so (2.7) implies P_x( Z > c ) ∈ {0, 1} for each c, which implies that Z is constant almost surely.


2.10. Let C < ∞, t_n ↓ 0, A_N = { B(t_n) ≥ C√t_n for some n ≥ N } and A = ∩_N A_N. A trivial inequality and the scaling relation (1.3) imply

P₀(A_N) ≥ P₀( B(t_N) ≥ C√t_N ) = P₀( B(1) ≥ C ) > 0

Letting N → ∞ and noting A_N ↓ A we have P₀(A) ≥ P₀( B(1) ≥ C ) > 0. Since A ∈ F₀⁺ it follows from (2.7) that P₀(A) = 1, that is, limsup_{t→0} B(t)/√t ≥ C with probability one. Since C is arbitrary the proof is complete.

2.11. Since coordinates are independent it suffices to prove the result in one dimension. Continuity implies that if δ is small

P₀( sup_{0≤s≤δ} |B_s| < ε/2 ) ≡ 2η > 0

so it follows from symmetry that

P₀( sup_{0≤s≤δ} |B_s| < ε/2, B_δ < 0 ) = P₀( sup_{0≤s≤δ} |B_s| < ε/2, B_δ > 0 ) = η > 0

Using the top result for x ≥ 0 and the bottom one for x ≤ 0 shows that if x ∈ (−ε/2, ε/2)


3.3. Since constant times are stopping times, the last three statements follow from the first three.

{S ∧ T ≤ t} = {S ≤ t} ∪ {T ≤ t} ∈ F_t
{S ∨ T ≤ t} = {S ≤ t} ∩ {T ≤ t} ∈ F_t
{S + T < t} = ∪_{q,r ∈ Q : q+r<t} {S < q} ∩ {T < r} ∈ F_t

3.4. Define R_n by R₁ = T₁, R_n = R_{n−1} ∨ T_n. Repeated use of Exercise 3.3 shows that R_n is a stopping time. As n ↑ ∞, R_n ↑ sup_n T_n, so the first conclusion follows from (3.3). Define S_n by S₁ = T₁, S_n = S_{n−1} ∧ T_n. Repeated use of Exercise 3.3 shows that S_n is a stopping time. As n ↑ ∞, S_n ↓ inf_n T_n, so the second conclusion follows from (3.2). The last two conclusions follow from the first two since

limsup_n T_n = inf_n sup_{m≥n} T_m and liminf_n T_n = sup_n inf_{m≥n} T_m

3.5. First, if A ∈ F_S then

A ∩ {S < t} = ∪_n ( A ∩ {S ≤ t − 1/n} ) ∈ F_t

On the other hand, if A ∩ {S < t} ∈ F_t and the filtration is right continuous, then A ∩ {S ≤ t} = ∩_n ( A ∩ {S < t + 1/n} ) ∈ ∩_n F_{t+1/n} = F_t.

3.6. A ∩ {T ≤ t} = ( A ∩ {S ≤ t} ) ∩ {T ≤ t} ∈ F_t since A ∈ F_S.

3.7. (i) Let r = s ∧ t.

P_x( sup_{0≤t≤δ} |B_t| < ε, B_δ ∈ (−ε/2, ε/2) ) ≥ η > 0

Iterating and using Exercise 2.5 gives that if x ∈ (−ε/2, ε/2)

P_x( sup_{0≤t≤nδ} |B_t| < ε, B_{nδ} ∈ (−ε/2, ε/2) ) ≥ ηⁿ

3.1. Suppose A = ∪_n K_n where the K_n are closed. Let H_n = ∪_{m≤n} K_m. H_n is closed, so (3.4) implies T_n = inf{ t : B_t ∈ H_n } is a stopping time. In view of (3.2) we can complete the proof now by showing that T_n ↓ T_A as n ↑ ∞. Clearly, T_A ≤ lim T_n. To see that < cannot hold, let t be a time at which B_t ∈ A. Then B_t ∈ H_n for some n and hence lim T_n ≤ t. Taking the inf over t with B_t ∈ A we have lim T_n ≤ T_A.

3.2. If m2⁻ⁿ ≤ t < (m + 1)2⁻ⁿ then {S_n ≤ t} = {S < m2⁻ⁿ} ∈ F_{m2⁻ⁿ} ⊂ F_t.

{S < t} ∩ {S < s} = {S < r} ∈ F_r ⊂ F_s
{S < t} ∩ {t < s} = {S < r} ∈ F_r ⊂ F_s

This shows {S < t} and {S ≤ t} are in F_S. Taking complements and intersections we get {S ≥ t}, {S > t}, and {S = t} as well.
(ii) {S < T} ∩ {S < t} = ∪_{q<t} {S < q} ∩ {T > q} ∈ F_t by (i), so {S < T} ∈ F_S. {S < T} ∩ {T < t} = ∪_{q<t} {S < q} ∩ {q < T < t} ∈ F_t by (i), so {S < T} ∈ F_T.

Here the unions were taken over rational q. Interchanging the roles of S and T we have {S > T} in F_S ∩ F_T. Taking complements and intersections we get {S ≥ T}, {S ≤ T}, and {S = T} are in F_S ∩ F_T.

3.8. If A is Borel then

{ B(S_n) ∈ A } ∩ { S_n ≤ t } = ∪_{m≤2ⁿt} ( { S_n = m/2ⁿ } ∩ { B(m/2ⁿ) ∈ A } ) ∈ F_t


by (i) in Exercise 3.7. This shows { B(S_n) ∈ A } ∈ F_{S_n}, so B(S_n) ∈ F_{S_n}. Letting n → ∞ we have B_S = lim_n B(S_n) ∈ ∩_n F_{S_n} = F_S.

3.9. (3.5) implies F_{S∧T_n} ⊂ F_S. To argue the other inclusion let A ∈ F_S. Since

A = ∪_n ( A ∩ {T_n > S} )

it suffices to show that A ∩ {T_n > S} ∈ F_{S∧T_n}. For this we note

A ∩ {T_n > S} ∩ {T_n ∧ S < t} = A ∩ {T_n > S} ∩ {S < t}
= ∪_{q<t} ( A ∩ {S < q} ) ∩ {T_n > q} ∈ F_t

since A ∈ F_S implies A ∩ {S < q} ∈ F_q, and the fact that T_n is a stopping time implies {T_n > q} ∈ F_q.

3.10. Since ∫₀^{S∧t} g(B_s) ds ∈ F_{S∧t} we have

E( ∫₀ᵗ g(B_s) ds | F_{S∧t} ) = ∫₀^{S∧t} g(B_s) ds + E( ∫_{S∧t}^t g(B_s) ds | F_{S∧t} )


4.1. We begin by noting symmetry and (2.5) imply

P₀( R ≤ 1 + t ) = 2 ∫₀^∞ p₁(0, y) ∫₀ᵗ P_y( T₀ = s ) ds dy
= 2 ∫₀ᵗ ∫₀^∞ p₁(0, y) P_y( T₀ = s ) dy ds

by Fubini's theorem, so the integrand gives the density P₀( R = 1 + t ). Since P_y( T₀ = t ) = P₀( T_y = t ), (4.1) gives

P₀( R = 1 + t ) = 2 ∫₀^∞ (1/√(2π)) e^{−y²/2} · ( y/√(2πt³) ) e^{−y²/2t} dy
= ( 1/(π t^{3/2}) ) ∫₀^∞ y e^{−y²(1+t)/2t} dy = 1/( π √t (1 + t) )

4.2. Translation invariance implies that if r < s then

E_{(0,0)} f( B_{τ_s} − B_{τ_r} ) = E_{(0,0)} f( B_{τ_{s−r}} )

Applying the strong Markov property to Y(ω) = f( ω_{τ_s} − ω₀ ) at time τ_r, and using the last identity, we have

E_{(0,0)}( f( B_{τ_s} − B_{τ_r} ) | F_{τ_r} ) = E_{(0,0)} f( B_{τ_{s−r}} )

The rest of the argument is identical to that for (4.4).

4.3. As s → 0, φ(s) → 1, so for some δ > 0 we have φ(s) > 0 for all s ≤ δ. Using the equation φ(s)φ(t) = φ(s + t) we can now conclude by induction that φ(s) > 0 for s ≤ nδ, and we can take logarithms without feeling guilty. The rest is easy: ψ(2⁻ⁿ) + ψ(2⁻ⁿ) = ψ(2^{−(n−1)}) and induction tells us that ψ(2⁻ⁿ) = 2⁻ⁿψ(1), while ψ((m − 1)2⁻ⁿ) + ψ(2⁻ⁿ) = ψ(m2⁻ⁿ) and induction implies ψ(m2⁻ⁿ) = m2⁻ⁿψ(1). Since φ(s) is continuous and does not vanish, ψ(s) = log φ(s) is continuous and the desired result follows.

j j4.4. Let w(c) = .E(exp(-T.)). (4.4) nd (.5)

imp y

w(c)w() = vxa + ) w(c) = wc2(1)As before the flrst equation implies wlc) = exptca) while the second with

= 1 and c = Vigives cb = 'Zfcland this implies c = -.sV-.4.5. Let Ml = maxnsasl f,. Exercise 3.10 implies that with probability 1,

fl > B and hence a-+ Tc is discontinuous at c = M. This shows J5tc -+ Ta

is discontinuous in (0,n)) -+ 1 as n-->

cxl and the result follows from the hint.

Applying the strong Markov property to Y(ω) = ∫₀^{t} g(ω_s) ds we have

E_x( Y ∘ θ_S | F_S ) = E_{B_S} Y

Symmetry of the normal distribution implies E_y Y = E_{2a−y} Y, so if we let S = inf{ s < t : B_s = a } and apply the strong Markov property, then on {S < ∞}

E_x( Y_S ∘ θ_S | F_S ) = E_a Y_S = E_a Y′_S = E_x( Y′_S ∘ θ_S | F_S )

Taking expected values now gives the desired result.


4.6. Let M = max_{0≤s≤1} B_s² and T = inf{ t > 1 : B_t² > M }. Since B₁² < M, the strong Markov property and (4.3) imply that with probability one s → T_s is discontinuous at s = M. This shows P₀( s → T_s is discontinuous in [0, n] ) → 1 as n → ∞, and the result follows from the hint.

4.7. If we let f̄(y) = f(ȳ), where ȳ is the reflection of y, then it follows from the strong Markov property and symmetry of Brownian motion that

E_x( f(B_t); τ ≤ t ) = E_x( E_{B_τ} f(B_{t−u})|_{u=τ}; τ ≤ t )
= E_x( E_{B_τ} f̄(B_{t−u})|_{u=τ}; τ ≤ t ) = E_x( f̄(B_t); τ ≤ t ) = E_x f̄(B_t)

The last equality holds since f̄(y) = 0 when y_d ≥ 0, and the desired formula follows as in (4.8).

Chapter 2

1.1. If A ∈ F_a and a < b then H(ω, t) = 1_{(a+1/n, b+1/n]}(t) 1_A(ω) is optional and converges to 1_{(a,b]}(t) 1_A(ω) as n → ∞.

2.1. The martingale property w.r.t. a filtration G_t is that

E( X_t; A ) = E( X_s; A )

for all A ∈ G_s. (3.5) in Chapter 1 implies that F_{S∧s} ⊂ F_s, so if X_t is a martingale with respect to F_s it is a martingale with respect to F_{S∧s}. To argue the other direction let A ∈ F_s. We claim that A ∩ {S > s} ∈ F_{S∧s}. To prove this we have to check that for all r

( A ∩ {S > s} ) ∩ {S ∧ s ≤ r} ∈ F_r

but this is ∅ for r < s and true for r ≥ s. If X_t is a martingale with respect to F_{S∧s} it follows that if A ∈ F_s then

Answers for Chapter 2 319

2.2. To check integrability we note that

E_x |X_t| = ∫ (2πt)^{−d/2} e^{−|y−x|²/2t} | log |y| | dy < ∞

since | log |y| | is integrable near 0 and e^{−|y−x|²/2t} takes care of the behavior for large y. To show that E₀X_t → ∞ we observe that for any R < ∞

E₀X_t ≥ (2πt)^{−d/2} ∫_{|y|≤1} log |y| dy + (log R) P₀( |B_t| > R )

The integral is convergent, and as we showed in Example 2.1, P₀( |B_t| > R ) → 1 as t → ∞, so the desired result follows from the fact that R is arbitrary.

2.3. Let T_n = inf{ t : |X_t| > n }. By (2.3) this sequence will reduce X_t. Note that |X_{t∧T_n}| ≤ n. Jensen's inequality for conditional expectation (see e.g., (1.1.b) in Chapter 4 of Durrett (1995)) implies

E( φ(X_{t∧T_n}) | F_{s∧T_n} ) ≥ φ( E( X_{t∧T_n} | F_{s∧T_n} ) ) = φ( X_{s∧T_n} )

2.4. Let T_n ≤ n be a sequence which reduces X, and S_n = (T_n − R)⁺. If s < t then the definitions involved (note S_n = T_n − R on {S_n > 0} = {T_n > R}), the optional stopping theorem, and the fact 1_{(T_n>R)} ∈ F_R ⊂ F_{R+s} imply

E( Y_{t∧S_n} 1_{(S_n>0)} | F_{R+s} ) = E( X_{(R+t)∧T_n} 1_{(T_n>R)} | F_{R+s} )
= 1_{(T_n>R)} E( X_{(R+t)∧T_n} | F_{R+s} )
= X_{(R+s)∧T_n} 1_{(T_n>R)} = Y_{s∧S_n} 1_{(S_n>0)}

2.5. Let T_n be a sequence that reduces X. The martingale property shows that if A ∈ F₀ then

E( X_{t∧T_n} 1_{(T_n>0)}; A ) = E( X₀ 1_{(T_n>0)}; A )

Letting n → ∞ and using Fatou's lemma and the dominated convergence theorem gives

On the other hand

E( X_t; A ∩ {S ≤ s} ) = E( X_{s∧S}; A ∩ {S ≤ s} )

Adding the last two equations gives the desired conclusion.

E( X_t; A ) ≤ liminf_n E( X_{t∧T_n} 1_{(T_n>0)}; A ) ≤ lim_n E( X₀ 1_{(T_n>0)}; A ) = E( X₀; A )

Taking A = Ω we see that EX_t ≤ EX₀ < ∞. Since this holds for all A ∈ F₀ we get E( X_t | F₀ ) ≤ X₀. To replace 0 by s, apply the last conclusion to Y_t = X_{s+t}, which by Exercise 2.4 is a local martingale.

3.1. Since t → V_t is increasing, we only have to rule out jump discontinuities. Suppose there is a t and an ε > 0 so that V(u) − V(s) > 3ε for all s < t < u. It is easy to see that V(u) − V(s) is the variation of X over (s, u]. Let s₀ < t < u₀. We will now derive a contradiction by showing that the variation over (s₀, u₀] is infinite. Pick δ > 0 so that if |r − t| < δ then |X_r − X_t| < ε. Assuming s_n and u_n have been defined, pick a partition of (s_n, u_n] not containing t with mesh < δ and variation > 2ε. Let s_{n+1} be the largest point in the partition < t and u_{n+1} be the smallest point in the partition > t. Our construction shows that the variation of X over (s_n, u_n] − (s_{n+1}, u_{n+1}] is always > ε, so the total variation over (s₀, u₀] is infinite, a contradiction which implies t → V_t is continuous.

3.2. (X + Y)Z_t − (X, Z)_t − (Y, Z)_t is a local martingale.

3.3. X₀Z_t is a local martingale, so if Y_t ≡ X₀ then (Y, Z)_t ≡ 0, and the desired result follows from Exercise 3.2.

3.4. aX_tY_t − a(X, Y)_t = a( X_tY_t − (X, Y)_t ) is a local martingale.

3.5. If T is such that X^T is a bounded martingale, it follows from (3.1) and the definition of (X) that (X^T) = (X)^T. For a general stopping time T let T_n = inf{ t : |X_t| > n }, note that the last result implies (X^{T∧T_n}) = (X)^{T∧T_n}, and then let n → ∞. The result for the covariance process follows immediately from the definition and the result for the variance process.

3.6. Since X_t² − (X)_t is a martingale and |X_t| ≤ M for all t,

E(X)_t = EX_t² − EX₀² ≤ M²

so letting t → ∞ we have E(X)_∞ ≤ M². This shows that X_t² − (X)_t is dominated by an integrable random variable and hence is uniformly integrable.

3.7. The optional stopping theorem implies

E( X_{S+t} | F_{S+s} ) = X_{S+s}

and X_S ∈ F_S ⊂ F_{S+s}, so

E( X_{S+t} − X_S | F_{S+s} ) = X_{S+s} − X_S

i.e., Y_t is a martingale. To prepare for the proof of the second result we note that imitating the proof of (2.4)

E( (X_{S+t} − X_S)² | F_{S+s} ) = E( (X_{S+t} − X_{S+s})² | F_{S+s} ) + (X_{S+s} − X_S)²
= E( X²_{S+t} − X²_{S+s} | F_{S+s} ) + (X_{S+s} − X_S)²


Let Z_t = (X_{S+t} − X_S)² − ( (X)_{S+t} − (X)_S ). Using the last equality we get

E( Z_t | F_{S+s} ) = E( X²_{S+t} − X²_{S+s} − ( (X)_{S+t} − (X)_S ) | F_{S+s} ) + (X_{S+s} − X_S)²
= E( X²_{S+t} − (X)_{S+t} | F_{S+s} ) − X²_{S+s} + (X)_S + (X_{S+s} − X_S)²
= −(X)_{S+s} + (X)_S + (X_{S+s} − X_S)² = Z_s

where the last equality follows from the optional stopping theorem and Exercise 3.6. This shows that Z_t is a martingale, so (Y)_t = (X)_{S+t} − (X)_S.

3.8. By stopping at T_n = inf{ t : |X_t| > n } and using Exercise 3.5 it suffices to prove the result when X is a bounded martingale. Using Exercise 3.7, we can suppose without loss of generality that S = 0. Using the L² maximal inequality on the martingale X_{t∧T_n}, and the optional stopping theorem on the martingale X_t² − (X)_t, it follows that

E( sup_{t≤T_n} X_t² ) ≤ 4E( X²_{T_n} ) = 4E( (X)_{T_n} ) = 0

Letting n → ∞ now gives the desired conclusion.

3.9. This is an immediate consequence of (3.8).

4.1. It is simply a matter of patiently checking all the cases.
1. s < t ≤ a: X_t = X_s ∈ F_s, so E( X_t | F_s ) = X_s.
2. s < a ≤ t: E( X_t | F_s ) = E( E( X_t | F_a ) | F_s ) = E( Y_a | F_s ) = X_s.
3. s < b, t > b: X_t = Y_b, so this reduces to case 2 if s < a, or to our assumption if a ≤ s < b.
4. b ≤ s < t: E( X_t | F_s ) = E( Y_b | F_s ) = Y_b = X_s.

4.2. If we fix ω then t → (X)_t defines a measure. The triangle inequality for L² of that measure implies

( ∫ (H_s + G_s)² d(X)_s )^{1/2} ≤ ( ∫ H_s² d(X)_s )^{1/2} + ( ∫ G_s² d(X)_s )^{1/2}

Now take expected value.


4.3. X ∈ M² implies sup_t EX_t² < ∞ and hence EX²_∞ < ∞. To check the other condition let T_n be a sequence of stopping times ↑ ∞ so that X^{T_n} is bounded and (X)_{T_n} ≤ n. The optional stopping theorem implies

E( X²_{t∧T_n} − (X)_{t∧T_n} ) = EX₀²

Letting n → ∞ it follows that E( X²_∞ − (X)_∞ ) = EX₀². To prove the converse, rearrange the displayed equation to conclude

E( X²_{t∧T_n} ) ≤ EX₀² + E(X)_∞

so X is L² bounded.

4.4. The triangle inequality implies ‖x‖ ≤ ‖x_n‖ + ‖x − x_n‖, so

‖x‖ ≤ liminf_n ‖x_n‖

For the other inequality note that ‖x_n‖ ≤ ‖x‖ + ‖x_n − x‖, so

‖x‖ ≥ limsup_n ‖x_n‖

4.5. Let H_n ∈ Π₁ with ‖H_n − H‖_X → 0. Since ‖(H_n · X) − (H · X)‖₂ → 0, using Exercise 4.4, the isometry property for Π₁, and Exercise 4.4 again,

‖H · X‖₂ = lim_n ‖H_n · X‖₂ = lim_n ‖H_n‖_X = ‖H‖_X

5.1. If L, L′ ∈ M² have the desired properties then taking N = L − L′ we have (L, L − L′) = (L′, L − L′), so (L − L′) ≡ 0. Using Exercise 3.8 now it follows that L − L′ is constant and hence must be ≡ 0.

5.2. If we let H_s ≡ 1 and K_s = 1_{(s≤T)} then X = H · X and Y^T = K · Y. (It is for this reason we have assumed X₀ = Y₀ = 0.) Using (5.4) now it follows that

(X, Y^T)_t = ∫₀ᵗ 1_{(s≤T)} d(X, Y)_s = (X, Y)_{t∧T}


Clearly, (H¹ · X)_t = (K¹ · X)_t for all t. Since H²_s = K²_s = 0 for S ≤ s ≤ T, (5.4) implies

(H² · X)_s = (K² · X)_s = 0 for S ≤ s ≤ T

and it follows from Exercise 3.8 that (H² · X)_t and (K² · X)_t are constant on [S, T]. Combining this with the first result and using (4.3.b) gives the desired conclusion.

6.2. Stop X at T_n = inf{ t : |X_t| > n or ∫₀ᵗ H_s² d(X)_s > n } to get a bounded martingale and an integrand in Π₂(X). Exercise 4.5 implies

‖H · X^{T_n}‖₂² = E ∫₀^{T_n} H_s² d(X)_s

Using the L² maximal inequality it follows that

E sup_{t≤T_n} (H · X)_t² ≤ 4E ∫₀^{T_n} H_s² d(X)_s

Letting n → ∞ now we can conclude. With this in hand we can start with

E (H · X)²_{T_n} = E ∫₀^{T_n} H_s² d(X)_s

and let n → ∞ to get the desired result.

6.3. By stopping we can reduce to the case in which X is a bounded continuous martingale and (X)_t ≤ N for all t, which implies H ∈ Π₂(X). If we replace S and T by S_n and T_n which stop at the next dyadic rational, and let H^n_s = C for S_n < s ≤ T_n, then H^n ∈ Π₁ and it follows easily from the definition of the integral in Step 2 in Section 2.4 that

∫ H^n_s dX_s = C( X_{T_n} − X_{S_n} )

To complete the proof now we observe that if |C(ω)| ≤ K then

‖H^n − H‖_X ≤ K²( E( (X)_{T_n} − (X)_T ) + E( (X)_{S_n} − (X)_S ) ) → 0

5.3. If we let H_s = 1_{(s≤T)} and K_s = 1_{(s>T)} then X^T = H · X and Y − Y^T = K · Y. Using (5.4) now it follows that (X^T, Y − Y^T) ≡ 0, so the desired result follows from (3.11).

6.1. Write H_s = H¹_s + H²_s and K_s = K¹_s + K²_s where

H¹_s = K¹_s = H_s 1_{(s∉(S,T])}   H²_s = H_s 1_{(S<s≤T)}   K²_s = K_s 1_{(S<s≤T)}


as n → ∞ by the bounded convergence theorem. Using Exercise 4.4 and (4.3.b) it follows that ‖(H^n · X) − (H · X)‖₂ → 0 and hence ∫ H^n_s dX_s → ∫ H_s dX_s. Clearly, C( X_{T_n} − X_{S_n} ) → C( X_T − X_S ) and the desired result follows.

6.4. X²_{t^n_{i+1}} − X²_{t^n_i} = ( X(t^n_{i+1}) − X(t^n_i) )² + 2X(t^n_i)( X(t^n_{i+1}) − X(t^n_i) )

(3.8) implies that the first term, summed over i, converges in probability to (X)_t. The previous exercise shows the second term converges in probability to ∫₀ᵗ 2X_s dX_s.

6.5. The difference between evaluating at the right and the left endpoints is

Answers for Chapter 3 325

as n → ∞ by the convergence of the Riemann approximating sums for the integral. Convergence of the means and variances for a sequence of normal random variables is sufficient for them to converge in distribution (consider the characteristic functions) and the desired result follows.

7.1. sup_t |( (H^n − H) · A )_t| ≤ M sup_s |H^n_s − H_s| → 0

7.2. By stopping it suffices to prove the result when |A|_t ≤ M. If we let G^δ_s = f′(A_{t_i}) for s ∈ (t_i, t_{i+1}] then as in (7.4) we get

f(A_t) − f(A₀) = ∫₀ᵗ G^δ_s dA_s

As δ → 0, G^δ_s → f′(A_s) uniformly on [0, t], so the desired conclusion follows from Exercise 7.1.

8.1. Clearly M_t + M′_t is a continuous local martingale and A_t + A′_t is continuous, adapted, locally of bounded variation, and has A₀ + A′₀ = 0.

Chapter 3

1.1. The optional stopping theorem implies that

x = E_x B_T = a P_x( B_T = a ) + b( 1 − P_x( B_T = a ) )

Solving gives P_x( B_T = a ) = (b − x)/(b − a).
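The formula is easy to check by simulation: for integer endpoints a simple random walk has the same exit probabilities as Brownian motion, so the Monte Carlo estimate below (function name and trial count are our own) should be close to (b − x)/(b − a):

```python
import numpy as np

def exit_prob_at_a(x, a, b, trials=20000, seed=0):
    """Estimate P_x(simple random walk hits a before b);
    for integers a < x < b this equals (b - x)/(b - a)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        pos = x
        while a < pos < b:
            pos += 1 if rng.integers(2) else -1
        hits += pos == a
    return hits / trials

p = exit_prob_at_a(3, 0, 10)  # theory: (10 - 3)/(10 - 0) = 0.7
```

The random walk reduction works because the optional stopping argument above uses only the martingale property, which the walk shares.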

Σ_i ( X(t^n_{i+1}) − X(t^n_i) )² → (X)_t

6.6. Let C_{k,n} = B( (k + 1/2)2⁻ⁿt ) − B( k2⁻ⁿt ) and D_{k,n} = B( (k + 1)2⁻ⁿt ) − B( (k + 1/2)2⁻ⁿt ). In view of (6.5) it suffices to show that as n → ∞

S_n = Σ_k C_{k,n}( C_{k,n} + D_{k,n} ) → t/2

in probability. EC²_{k,n} = 2⁻ⁿt/2 and E( C_{k,n}D_{k,n} ) = 0, so ES_n = t/2. To complete the proof now we note that the terms in the sum are independent, so

E( S_n − t/2 )² = E( Σ_k ( C²_{k,n} − 2⁻ⁿt/2 ) + C_{k,n}D_{k,n} )²
= Σ_k E( C²_{k,n} − 2⁻ⁿt/2 )² + E( C²_{k,n}D²_{k,n} ) ≤ 2ⁿ C 2⁻²ⁿ → 0

and then use Chebyshev's inequality.
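The concentration being used here — that the dyadic quadratic variation of Brownian motion over [0, t] is close to t — is easy to see numerically (sample size and tolerance are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 2 ** 16
# n independent Brownian increments over [0, t]
dB = rng.normal(0.0, np.sqrt(t / n), size=n)
qv = np.sum(dB ** 2)  # dyadic quadratic variation estimate, close to t
```

The standard deviation of qv is t·√(2/n) ≈ 0.006 here, so a single sample is already tightly concentrated around t.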

6.7. (6.7) implies that if Δ_n is a sequence of partitions of [0, t] with mesh → 0 as n → ∞ we have

1.2. From (1.6) it follows that for any s and M, P( sup_{t≥s} B_t ≥ M ) = 1. This implies P( sup_{t≥s} B_t = ∞ ) = 1 for all s, and hence limsup_{t→∞} B_t = ∞ a.s.

1*

. . ' ' ' .' .' ' ' : ' :

..

'

. ..

. . .

j

1.3k If 5'r(Y) 1 @(td)which is < x with positike probabilit# t en #t(o,)() = 0a'ndwe have a contradiction.

3.1. Using the optional stopping theorem at time T ∧ t we have E₀( B_{T∧t} )² = E₀( T ∧ t ). Letting t → ∞ and using the bounded and monotone convergence theorems with (1.4) we have

E₀T = E₀B_T² = a² · b/(a + b) + b² · a/(a + b) = ab

Σ_i f(t^n_i)( B(t^n_{i+1}) − B(t^n_i) ) − ∫₀ᵗ f(s) dB_s → 0

in probability. The left-hand side has a normal distribution with mean 0 and variance

Σ_i f(t^n_i)²( t^n_{i+1} − t^n_i ) → ∫₀ᵗ f(s)² ds

3.2. Let f(x, t) = x⁶ − ax⁴t + bx²t² − ct³. Differentiating gives

D_t f = −ax⁴ + 2bx²t − 3ct²
(1/2)D_{xx} f = 15x⁴ − 6ax²t + bt²


Setting a = 15, 2b = 6a, and b = 3c, that is a = 15, b = 45, c = 15, we have D_t f + (1/2)D_{xx} f = 0, so f(B_t, t) is a local martingale. Using the optional stopping theorem at time T_a ∧ t we have

E₀( B⁶_{T_a∧t} − 15B⁴_{T_a∧t}(T_a ∧ t) + 45B²_{T_a∧t}(T_a ∧ t)² − 15(T_a ∧ t)³ ) = 0

Letting t → ∞ and using the bounded and monotone convergence theorems with the results in (3.3) we have

15 E₀T_a³ = a⁶ − 15a⁴ E₀T_a + 45a² E₀T_a² = a⁶( 1 − 15 + 75 )

so E₀T_a³ = 61a⁶/15.

3.3. Using the optional stopping theorem at time T_{−a} ∧ n we have

1 = E₀ exp( −θ Z_{T_{−a}∧n} )

Letting n → ∞, and noting that on {T_{−a} = ∞} we have Z_t/t = σB_t/t + μ → μ almost surely and hence Z_t → ∞ a.s., the desired result follows.

4.1. Let θ : [0, ∞) → R^d be piecewise constant and have |θ_s| = 1 for all s. Let Y_t = Σ_i ∫₀ᵗ θ^i_s dX^i_s. The formula for the covariance of stochastic integrals shows Y_t is a local martingale with (Y)_t = t, so (4.1) implies that Y_t is a Brownian motion. Letting 0 = t₀ < t₁ < ... < t_n and taking θ_s = v_j for s ∈ (t_{j−1}, t_j] for 1 ≤ j ≤ n shows that X_{t₁} − X_{t₀}, ..., X_{t_n} − X_{t_{n−1}} are independent multivariate normals with mean 0 and covariance matrices (t₁ − t₀)I, ..., (t_n − t_{n−1})I, and it follows that X_t is a d-dimensional Brownian motion.

4.2. X_t is a local martingale with (X)_t = ∫₀ᵗ f_r² dr. By modifying f after time t we can suppose without loss of generality that (X)_∞ = ∞. Using (4.4) now we see that if γ(u) = inf{ s : (X)_s > u } then X_{γ(u)} is a Brownian motion. Since (X)_s, and hence the time change γ(u), are deterministic, the desired result follows.

Chapter 4

4.1. Let τ = inf{ t > 0 : B_t ∉ G }, let y ∈ ∂G, and let T_y = inf{ t > 0 : B_t = y }. (2.9) in Chapter 1 implies P_y( T_y = 0 ) = 1. Since τ ≤ T_y it follows that y is regular.

4.2. P_y( τ = 0 ) < 1 and Blumenthal's 0-1 law imply P_y( τ = 0 ) = 0. Since we are in d ≥ 2, it follows that P_y( B_τ = y ) = 0 and hence ε ≡ 1 − E_y f(B_τ) > 0.

Let σ_n = inf{ t > 0 : B_t ∉ D(y, 1/n) }. σ_n ↓ 0 as n ↑ ∞, so P_y( σ_n < τ ) → 1.

1 − ε = E_y f(B_τ) = E_y( f(B_τ); τ ≤ σ_n ) + E_y( v(B_{σ_n}); τ > σ_n )

Answers for Chapter 4 327

As n → ∞ the first term on the right → 0, so

E_y( v(B_{σ_n}); τ > σ_n ) → 1 − ε

and it follows that for large n, inf_{x∈∂D(y,1/n)} v(x) ≤ 1 − ε.

4.3. Let y ∈ ∂G and consider the cone U = V( y, ∇g(y), 1 ). Calculus gives us

g(x) − g(y) = ∫₀¹ ∇g( y + p(x − y) ) · (x − y) dp

Continuity of ∇g implies that if |x − y| < r and x ∈ U, then ∇g( y + p(x − y) ) · (x − y) ≥ 0, so g(x) ≥ 0. This shows that for small r the truncated cone U ∩ D(y, r) ⊂ G, so the regularity of y follows from (4.5).

4.4. Let τ = inf{ t > 0 : B_t ∈ V(y, v, a) }. By translation and rotation we can suppose y = 0, v = (1, 0, ..., 0), and the d − 1 dimensional hyperplane is x_d = 0. Let T₀ = inf{ t > 0 : B_t^d = 0 }. (2.9) in Chapter 1 implies that P₀( T₀ = 0 ) = 1. If a > 0 then for some k < ∞ the hyperplane can be covered by k rotations of the cone, so P₀( τ = 0 ) ≥ 1/k, and it follows from the 0-1 law that P₀( τ = 0 ) = 1.

7.1. (a) Ignoring C_d and differentiating gives

D_{x_i} h = −d · (x_i − θ_i) y / ( |x − θ|² + y² )^{(d+2)/2}

D_{x_i x_i} h = −d · y / ( |x − θ|² + y² )^{(d+2)/2} + d(d + 2)(x_i − θ_i)² y / ( |x − θ|² + y² )^{(d+4)/2}

D_y h = 1 / ( |x − θ|² + y² )^{d/2} − d y² / ( |x − θ|² + y² )^{(d+2)/2}

D_{yy} h = −3d y / ( |x − θ|² + y² )^{(d+2)/2} + d(d + 2) y³ / ( |x − θ|² + y² )^{(d+4)/2}

Adding up we see that

Σ_{i=1}^{d−1} D_{x_i x_i} h + D_{yy} h = { (−d)(d − 1) + (−d) · 3 } y / ( |x − θ|² + y² )^{(d+2)/2}
+ d(d + 2) y ( |x − θ|² + y² ) / ( |x − θ|² + y² )^{(d+4)/2} = 0

The fact that Δu(x) = 0 follows from (1.7) as in the proof of (7.1).
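As a numerical sanity check of the computation above (an illustration only; the helper functions are our own, with d = 3 and the constant C_d dropped), a central-difference Laplacian applied to the kernel is indeed nearly zero:

```python
import numpy as np

def h(x, y, theta, d=3):
    # half-space Poisson kernel shape y/(|x - theta|^2 + y^2)^(d/2), C_d dropped
    return y / (np.sum((x - theta) ** 2) + y ** 2) ** (d / 2)

def laplacian(f, x, y, eps=1e-4):
    # central second differences in x_1,...,x_{d-1} and in y
    val = -2.0 * (len(x) + 1) * f(x, y)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        val += f(x + e, y) + f(x - e, y)
    val += f(x, y + eps) + f(x, y - eps)
    return val / eps ** 2

theta = np.array([0.3, -0.2])
lap = laplacian(lambda x, y: h(x, y, theta), np.array([0.5, 0.1]), 0.7)
```

The residual is dominated by the O(eps²) truncation error of the finite differences, several orders of magnitude below the size of the individual derivative terms.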


(b) Clearly ∫ dθ h(x, y; θ) is independent of x. To show that it is independent of y, let x = 0 and change variables θ_i = yη_i for 1 ≤ i ≤ d − 1 to get

Answers for Chapter 5 329

Combining the last two results, introducing ‖f‖_∞ = sup{ |f(x)| : x ∈ D₂ }, and using the triangle inequality:

|h̄(x) − h̄(y)| ≤ ‖f‖_∞ |g(x) − g(y)| + |f(x) − f(y)| ‖g‖_∞
≤ 1 · C₁|x − y| + (1/R)|x − y| ‖g‖_∞ ≤ ( 2C₁ + |h(0)|/R )|x − y| = C₂|x − y|

since for x ∈ D₁, |h(x)| ≤ |h(0)| + C₁R.

To extend the last result to Lipschitz continuity on R^d, we begin by observing that if x ∈ D₁, y ∈ D₂, and z is the point of ∂D on the line segment between x and y, then

∫ dθ h(0, y; θ) = ∫ dη h(0, 1; η), which does not depend on y.

(c) Changing variables θ_i = x_i + yη_i and using dominated convergence,

∫_{D(x,ε)^c} dθ h(x, y; θ) = ∫_{D(0,ε/y)^c} dη h(0, 1; η) → 0

(d) Since P_x( τ < ∞ ) = 1 for all x ∈ H, this follows from (4.3).

9.1. c(x) ≡ −β < 0, so it follows from (6.3) that u(x) = E_x e^{−βτ} is the unique solution of

(1/2)u″ − βu = 0

Guessing u(x) = B cosh(bx) with b > 0 we find

(1/2)u″ − βu = ( (b²/2) − β ) B cosh(bx) = 0

if b = √(2β). Then we take B = 1/cosh( a√(2β) ) to satisfy the boundary condition.

g j r

Guessig u(z) = B coshtz) with b > 0 we flnd

IA(z) - (p)l:i I(z) - (z)l+ l(z) - (g)l/ cllz - al + c2lz= pI (72Iz- pl

since C %C2 and lz - zl + lz- pl = Iz- p1, This shows that is Lipschitzcontinuous with constant (72 on IzIK 2S. Repeating the last argument takinglzl %24 and y > 2./ completes the proof.

3.2. (a) Differentiating we have

h(x) = −Cx log x
h′(x) = −C log x − C > 0 if 0 < x < e⁻¹
h″(x) = −C/x < 0 if x > 0

so h is strictly increasing and concave on (0, e⁻²). Since h(e⁻²) = 2Ce⁻² and h′(e⁻²) = C, h is strictly increasing and concave on (0, ∞). When x ≤ e⁻²,

∫₀ˣ dz / ( Cz log(1/z) ) = ∞

since an antiderivative is −C⁻¹ log log(1/z), which tends to −∞ as z → 0.

Chapter 5

3.1. We first prove that h is Lipschitz continuous with constant C_2 = 2C_1 + R^{-1}|h(0)| on D_2 = {R ≤ |z| ≤ 2R}. Let f(z) = (2R - |z|)/R and g(z) = h(Rz/|z|). If z, y ∈ D_2 then

∫_0^{e^{-2}} dz/h(z) = ∫_0^{e^{-2}} dz/(Cz log(1/z)) = -C^{-1} log log(1/z) |_0^{e^{-2}} = ∞
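The claims in 3.2(a), namely h'(z) = -C log z - C > 0 on (0, e^{-1}), h''(z) = -C/z < 0, and the divergence of ∫_0 dz/h(z), can be checked numerically. A minimal sketch with the illustrative choice C = 1:

```python
import math

C = 1.0
h = lambda z: -C * z * math.log(z)      # h(z) = -C z log z on (0, 1)
hp = lambda z: -C * math.log(z) - C     # h'(z) = -C log z - C
hpp = lambda z: -C / z                  # h''(z) = -C / z

# increasing on (0, e^-1), concave on (0, infinity)
assert all(hp(z) > 0 for z in (1e-6, 1e-3, 0.1))
assert all(hpp(z) < 0 for z in (1e-6, 0.1, 1.0, 10.0))

# The antiderivative of 1/h is -C^{-1} log log(1/z), which blows up as z -> 0:
# evaluate log log(1/z) at z = e^{-2}, e^{-4}, e^{-8}, ... and watch it grow.
vals = [math.log(2.0 ** k) / C for k in range(1, 8)]  # log log(1/z) = k*log 2 at z = e^{-2^k}
print(vals)  # strictly increasing, without bound
```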

|T(z) - T(y)| ≤ |z - y|

Since h is Lipschitz continuous with constant C_1 on D_1 = {|z| ≤ R}, and Rz/|z| is the projection of z onto ∂D_1, we have

(b) If g(t) = exp(-1/t^p) with p > 0, then

g'(t) = (p/t^{p+1}) exp(-1/t^p) = p g(t)(log(1/g(t)))^{(p+1)/p}
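The identity in (b) can be confirmed with a finite-difference check. A minimal sketch (p = 1.5 and the test points are illustrative choices):

```python
import math

P = 1.5
g = lambda t: math.exp(-1.0 / t**P)   # g(t) = exp(-1/t^p)

def rhs(t):
    # p * g(t) * (log(1/g(t)))^((p+1)/p); note log(1/g(t)) = t^(-p)
    return P * g(t) * math.log(1.0 / g(t)) ** ((P + 1.0) / P)

def gprime(t, h=1e-6):
    # centered finite difference approximation to g'(t)
    return (g(t + h) - g(t - h)) / (2.0 * h)

print([abs(gprime(t) - rhs(t)) for t in (0.5, 1.0, 2.0)])  # all tiny
```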

5.1. Under Q the coordinate maps X_t(ω) satisfy MP(b + c, a). Let

|g(z) - g(y)| = |h(Rz/|z|) - h(Ry/|y|)| ≤ C_1 |Rz/|z| - Ry/|y|| ≤ C_1|z - y|

Y_t = X_t - ∫_0^t (b(X_s) + c(X_s)) ds = X̃_t - ∫_0^t c(X_s) ds



Since we are interested in adding drift c, we let ĉ = a^{-1} c as before, and let

L_t = ∫_0^t ĉ(X_s) · dX̃_s

Since X̃ and X differ by a process of bounded variation, we have

⟨L⟩_t = ∫_0^t ĉ · aĉ (X_s) ds

The change of measure in this case is given by

Z_t = exp( L_t - (1/2)⟨L⟩_t )

6.1. It suffices to show that, a.s.,


so φ(z) ≥ z, φ(∞) = ∞ and recurrence follows from (3.3).

(ii) If b(z)/a(z) ≥ ε then

exp(-∫_0^z 2b(y)/a(y) dy) ≤ e^{-2εz}

so φ(∞) < ∞ and transience follows from (3.3).

3.2. The condition is ∫_0^1 -2b(y)/a(y) dy = 0. Let I = ∫_0^1 -2b(y)/a(y) dy. Since φ'(n + y) = e^{nI} φ'(y), if I > 0 then φ(-∞) > -∞, and if I < 0 then φ(∞) < ∞. If I = 0 then φ'(y) is periodic, so 0 < c < φ'(y) < C < ∞, and it follows that φ(∞) = ∞ and φ(-∞) = -∞.

5.1. (a) If β = 0 then φ(z) = z and m(z) = 1/(σ^2 z), so

∫_0^1 m(z)(φ(z) - φ(0)) dz = ∫_0^1 σ^{-2} dz < ∞

M(0) = -∞, so J = ∞ and it follows that 0 is absorbing. To deal with ∞, note that φ(∞) = M(∞) = ∞, so I = J = ∞.

(b) When β < 0, φ'(z) = e^{2|β|z/σ^2}, so

φ(z) = (σ^2/2|β|)(e^{2|β|z/σ^2} - 1)
m(z) = e^{-2|β|z/σ^2}/(σ^2 z)

To evaluate I for the boundary at 0, we note

∫_0^1 m(z)(φ(z) - φ(0)) dz = ∫_0^1 (1 - e^{-2|β|z/σ^2})/(2|β|z) dz < ∞

since the integrand converges to 1/σ^2 as z → 0. Again M(0) = -∞, so J = ∞ and it follows that 0 is absorbing. To deal with ∞ we note that φ(∞) = ∞, so I = ∞. To evaluate J we begin by noting that

limsup_{n→∞} ∫_0^t |B_s|^{-1} 1_{(|B_s| ≤ 2^{-n})} ds > 0

To prove the last result, let R_0 = 0, S_m = inf{t > R_m : |B_t| = 2^{-n}}, R_{m+1} = inf{t > S_m : B_t = 0} for m ≥ 0, and N = sup{m : R_m < T_1}. Now

∫_0^{T_1} |B_s|^{-1} 1_{(|B_s| ≤ 2^{-n})} ds ≥ Σ_{m=0}^{N} 2^n (S_m - R_m)

where the number of terms, N + 1, has a geometric distribution with E(N + 1) = 2^n, and the S_m - R_m are i.i.d. with E(S_m - R_m) = 2^{-2n} and E(S_m - R_m)^2 = (5/3)2^{-4n}. Using the formula for the variance of a random sum, or noticing that P(N ≥ 2^n) → e^{-1} as n → ∞, one concludes easily that limsup ≥ C > 0 a.s.
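By Wald's identity, a sum of N + 1 i.i.d. terms of mean 2^{-2n}, where E(N + 1) = 2^n, weighted by 2^n, has mean 2^n · 2^n · 2^{-2n} = 1. This can be checked by simulation. A minimal sketch, in which the geometric count is exact but the exponential excursion lengths are stand-ins with the stated mean, not the true hitting-time laws:

```python
import random

random.seed(42)
n = 3
p = 2.0 ** -n                # N+1 geometric with success probability 2^-n, so E(N+1) = 2^n
term_mean = 2.0 ** (-2 * n)  # stand-in for E(S_m - R_m) = 2^{-2n}

def weighted_random_sum():
    # Draw N+1 ~ Geometric(p), then sum that many i.i.d. stand-in excursion lengths,
    # weighted by 2^n (the lower bound for |B_s|^{-1} when |B_s| <= 2^{-n}).
    count = 1
    while random.random() > p:
        count += 1
    total = sum(random.expovariate(1.0 / term_mean) for _ in range(count))
    return (2.0 ** n) * total

est = sum(weighted_random_sum() for _ in range(200_000)) / 200_000
print(est)  # close to 2^n * E(N+1) * 2^{-2n} = 1
```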

Chapter 6

3.1. (i) If b(z) ≤ 0 then

exp(-∫_0^z 2b(y)/a(y) dy) ≥ 1

For the boundary at ∞ we note that

∫_z^∞ e^{-2|β|y/σ^2} dy = (σ^2/2|β|) e^{-2|β|z/σ^2}

Since most of the integral comes from z ≤ y ≤ z + C, it follows easily that

M(∞) - M(z) ~ C e^{-2|β|z/σ^2}
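The exponential tail integral used here, ∫_z^∞ e^{-2|β|y/σ^2} dy = (σ^2/2|β|) e^{-2|β|z/σ^2}, is easy to verify numerically. A minimal sketch with the illustrative values β = -0.8, σ = 1.3, z = 0.5:

```python
import math

BETA, SIGMA, Z = -0.8, 1.3, 0.5
c = 2.0 * abs(BETA) / SIGMA**2  # decay rate 2|beta|/sigma^2

def tail_integral(z, upper=60.0, steps=500_000):
    # Midpoint rule for int_z^upper exp(-c*y) dy; the tail beyond `upper` is negligible
    dy = (upper - z) / steps
    return sum(math.exp(-c * (z + (k + 0.5) * dy)) for k in range(steps)) * dy

closed_form = (SIGMA**2 / (2.0 * abs(BETA))) * math.exp(-c * Z)
print(tail_integral(Z), closed_form)  # agree to many decimal places
```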



Since φ'(z) = e^{2|β|z/σ^2}, it follows that J = ∞ and ∞ is a natural boundary.

5.2. The computations in Example 5.3 show that (a) I < ∞ when γ < 1, and (b) for γ < -1 we have M(0) = -∞ and hence J = ∞. Consulting the table of definitions, we see that 0 is an absorbing boundary for γ < -1.

5.3. (a)ffj K Ta < x.

(b) (2.9) in Chapter 3 implies


Here and in what follows, C is a constant which will change from line to line. If p ≥ 1/2 then φ(0) = -∞ and I = ∞. If p < 1/2 then φ(z) - φ(0) = Cz^{1-2p} and

m(y) = 1/(φ'(y)a(y)) ~ C y^{2p-1}

as y → 0. From this we see that I < ∞ if p < 1/2. When p = 0, M(0) = -∞ so J = ∞. When p > 0, M(z) - M(0) ~ Cz^{2p} as z → 0, so J < ∞ if p > 0. Comparing the possibilities for I and J gives the desired result.

Chapter 7

5.1. Letting b → ∞ in (4.7) of Chapter 6, we have

E_x τ = 2(φ(x) - φ(1)) ∫_x^∞ m(z) dz + 2 ∫_1^x (φ(z) - φ(1)) m(z) dz

The result now follows from m(z) = 1/(φ'(z)a(z)).

5.2. We apply (5.3) to v(z) = z^{1+δ} - 1 and note that the integral in question converges, since δ < 1 implies 1 - 2δ > -1.

(c) If a > 1 then S_a > S_1, so it suffices to prove the result when a = 1. To do this, we note that if B_0 = z then z^{-1}B_{z^2 t} is a Brownian motion starting at 1, so the distribution of

I = ∫_0^{T_{z/2} ∧ T_{2z}} B_s^{-2} ds under P_z

does not depend on z. Letting S_n = (T_{z/2} ∧ T_{2z}) ∘ θ_n and using the strong Markov property, it follows that the distribution of I ∘ θ_n does not depend on z. Pick δ > 0 so that P(I ≥ δ) > 0. The I ∘ θ_n are independent, so the second Borel-Cantelli lemma implies that P(I ∘ θ_n ≥ δ i.o.) = 1, but this is inconsistent with P_1(J < ∞) > 0.

5.4. φ(z) = z so φ(∞) = ∞ and I = ∞. When δ ≤ 1/2, M(∞) = ∞ and hence J = ∞. When δ > 1/2,

∫_1^∞ z^{-2δ} dz = (z^{1-2δ} - 1)/(1 - 2δ) |_1^∞ = 1/(2δ - 1), which is < ∞ if δ > 1/2 and = ∞ if δ ≤ 1/2

Combining the results for I and J and consulting the table, we have the indicated result.
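The dichotomy for ∫_1^∞ z^{-2δ} dz can be seen numerically. A minimal sketch with the illustrative exponents δ = 1 (> 1/2, convergent) and δ = 1/4 (≤ 1/2, divergent):

```python
def partial_integral(delta, upper, steps=200_000):
    # Midpoint rule for int_1^upper z^(-2*delta) dz
    dz = (upper - 1.0) / steps
    return sum((1.0 + (k + 0.5) * dz) ** (-2.0 * delta) for k in range(steps)) * dz

# delta = 1 > 1/2: partial integrals approach 1/(2*delta - 1) = 1
print([round(partial_integral(1.0, R), 3) for R in (10.0, 100.0, 1000.0)])   # [0.9, 0.99, 0.999]

# delta = 1/4 <= 1/2: partial integrals grow like 2*(sqrt(R) - 1)
print([round(partial_integral(0.25, R), 1) for R in (10.0, 100.0, 1000.0)])  # [4.3, 18.0, 61.2]
```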

5.5. When α = 0,

Lv = ((1 + δ)δ/2) z^{δ-1} + b(z)(1 + δ) z^δ ≤ -1

since b(z) ≤ -δ/z^δ, (1 + δ)/2 ≤ 1, and 2δ - 1 ≤ 1.

Chapter 8

3.1. Suppose r_n does not converge to r. Then there is an ε > 0 and a subsequence r_{n_k} so that |r_{n_k} - r| > ε for all k. But then no subsequence of r_{n_k} can converge to r, so we have a contradiction, which proves r_n → r.

6.1. If d(g_n, h) → 0 then h can only take the values 0 and 1. In Example 6.2 we showed that h cannot be ≡ 0, so it must take the value 1 at some point a. Since h is right continuous at a, there must be some point b ≠ 1/2 where h(b) = 1, but in this case it is impossible for d(g_n, h) → 0.

φ'(y) = exp(-∫ 2b(z)/a(z) dz) = Cy^{1/2}


References

M. Aizenman and B. Simon (1982) Brownian motion and a Harnack inequality for Schrödinger operators. Comm. Pure Appl. Math. 35, 209-273

P. Billingsley (1968) Convergence of Probability Measures. John Wiley and Sons, New York

R.M. Blumenthal and R.K. Getoor (1968) Markov Processes and Potential Theory. Academic Press, New York

D. Burkholder (1973) Distribution function inequalities for martingales. Ann. Probab. 1, 19-42

D. Burkholder, R. Gundy, and M. Silverstein (1971) A maximal function characterization of the class H^p. Trans. A.M.S. 157, 137-153

K.L. Chung and R.J. Williams (1990) Introduction to Stochastic Integration. Second edition. Birkhäuser, Boston

K.L. Chung and Z. Zhao (1995) From Brownian Motion to Schrödinger's Equation. Springer, New York

Z. Ciesielski and S.J. Taylor (1962) First passage times and sojourn times for Brownian motion in space and exact Hausdorff measure. Trans. A.M.S. 103, 434-450

E. Çinlar, J. Jacod, P. Protter, and M. Sharpe (1980) Semimartingales and Markov processes. Z. für Wahr. 54, 161-220

B. Davis (1983) On Brownian slow points. Z. für Wahr. 64, 359-367

C. Dellacherie and P.A. Meyer (1978) Probabilities and Potentials. North Holland, Amsterdam

C. Dellacherie and P.A. Meyer (1982) Probabilities and Potentials B. Theory of Martingales. North Holland, Amsterdam

C. Doléans-Dade (1970) Quelques applications de la formule de changement de variables pour les semimartingales. Z. für Wahr. 16, 181-194

R. Durrett (1995) Probability: Theory and Examples. Second edition. Duxbury Press, Belmont, CA



A. Dvoretsky and P. Erdős (1951) Some problems on random walk in space. Pages 353-367 in Proc. 2nd Berkeley Symp., U. of California Press

E.B. Dynkin (1965) Markov Processes. Springer, New York

P. Echeverria (1982) A criterion for invariant measures of Markov processes. Z. für Wahr. 61, 1-16

S. Ethier and T. Kurtz (1986) Markov Processes: Characterization and Convergence. John Wiley and Sons, New York

W. Feller (1951) Diffusion processes in genetics. Pages 227-246 in Proc. 2nd Berkeley Symp., U. of California Press

G.B. Folland (1976) An Introduction to Partial Differential Equations. Princeton U. Press, Princeton, NJ

D. Freedman (1970) Brownian Motion and Diffusion. Holden-Day, San Francisco, CA

A. Friedman (1964) Partial Differential Equations of Parabolic Type. Prentice-Hall, Englewood Cliffs, NJ

A. Friedman (1975) Stochastic Differential Equations and Applications, Volume 1. Academic Press, New York

R.K. Getoor and M. Sharpe (1979) Excursions of Brownian motion and Bessel processes. Z. für Wahr. 47, 83-106

D. Gilbarg and J. Serrin (1956) On isolated singularities of second order elliptic differential equations. J. d'Analyse Math. 4, 309-340

D. Gilbarg and N.S. Trudinger (1977) Elliptic Partial Differential Equations of Second Order. Springer, New York

E. Hewitt and K. Stromberg (1969) Real and Abstract Analysis. Springer, New York

N. Ikeda and S. Watanabe (1981) Stochastic Differential Equations and Diffusion Processes. North Holland, Amsterdam

J. Jacod and A.N. Shiryaev (1987) Limit Theorems for Stochastic Processes. Springer, New York

P. Jagers (1975) Branching Processes with Biological Applications. John Wiley and Sons, New York

F. John (1982) Partial Differential Equations. Fourth edition. Springer-Verlag, New York


M. Kac (1951) On some connections between probability theory and differential and integral equations. Pages 189-215 in Proc. 2nd Berkeley Symp., U. of California Press

I. Karatzas and S. Shreve (1991) Brownian Motion and Stochastic Calculus. Second edition. Springer, New York

S. Karlin and H. Taylor (1981) A Second Course in Stochastic Processes. Academic Press, New York

T. Kato (1973) Schrödinger operators with singular potentials. Israel J. Math. 13, 135-148

R.Z. Khasminskii (1960) Ergodic properties of recurrent diffusion processes and stabilization of the solution to the Cauchy problem for parabolic equations. Theor. Prob. Appl. 5, 179-196

F. Knight (1981) Essentials of Brownian Motion and Diffusion. American Math. Society, Providence, RI

H.P. McKean (1969) Stochastic Integrals. Academic Press, New York

P.A. Meyer (1976) Un cours sur les intégrales stochastiques. Pages 246-400 in Lecture Notes in Math. 511, Springer, New York

N. Meyers and J. Serrin (1960) The exterior Dirichlet problem for second order elliptic partial differential equations. J. Math. Mech. 9, 513-538

B. Øksendal (1992) Stochastic Differential Equations. Third edition. Springer, New York

S. Port and C. Stone (1978) Brownian Motion and Classical Potential Theory. Academic Press, New York

P. Protter (1990) Stochastic Integration and Differential Equations. Springer, New York

D. Revuz and M. Yor (1991) Continuous Martingales and Brownian Motion. Springer, New York

L.C.G. Rogers and D. Williams (1987) Diffusions, Markov Processes, and Martingales. Volume 2: Itô Calculus. John Wiley and Sons, New York

H. Royden (1988) Real Analysis. Third edition. MacMillan, New York

B. Simon (1982) Schrödinger semigroups. Bulletin A.M.S. 7, 447-526

D.W. Stroock and S.R.S. Varadhan (1979) Multidimensional Diffusion Processes. Springer, New York



H. Tanaka (1963) Note on continuous additive functionals of one-dimensional Brownian motion. Z. für Wahr. 1, 251-257

J. Walsh (1978) A diffusion with a discontinuous local time. Astérisque 52-53, 37-45

T. Yamada and S. Watanabe (1971) On the uniqueness of solutions to stochastic differential equations. J. Math. Kyoto U. 11, 155-167; II, 553-563

Index

adapted 36
arcsine law 173, 291
Arzela-Ascoli theorem 284
associative law 76
averaging property 148

basic predictable process 52
Bessel process 179, 232
Bichteler-Dellacherie theorem 71
Blumenthal's 0-1 law 14
boundary behavior of diffusions 271
Brownian motion
  continuity of paths 4
  definition 1
  Hölder continuity 6
  Lévy's characterization of 111
  nondifferentiability 6
  quadratic variation 7
  writes your name 15
  zeros of 23
Burkholder-Davis-Gundy theorem 116

D, the space 293
differentiating under an integral 129
diffusion process 177
Dini function 239
Dirichlet problem 143
distributive laws 72
Doléans measure 56
Donsker's theorem 287
Dvoretsky-Erdős theorem 99

effective dimension 239
exit distributions
  ball 164
  half space 166
exponential Brownian motion 178
exponential martingale 108

Feller semigroup 246
Feller's branching diffusion 181, 231
Feller's test 214, 227
Feynman-Kac formula 137
filtration 8
finite dimensional sets 3, 10
Fubini's theorem 86
fundamental solution 251

generator of a semigroup 246
Girsanov's formula 91
Green's functions
  Brownian motion 104
  one dimensional diffusions 222
Gronwall's inequality 188

C, the space 4, 282
Cauchy process 29
change of measure in SDE 202
change of time in SDE 207
Ciesielski-Taylor theorem 171
cone condition 147
continuous mapping theorem 273
contraction semigroup 245
converging together lemma 273
covariance process 50



harmonic function 95
  averaging property 148
  Dirichlet problem 143
Harris chains 259
heat equation 126
  inhomogeneous 130
hitting times 19, 26

infinitesimal drift 178
infinitesimal generator 246
infinitesimal variance 179
integration by parts 76
isometry property 56

Itô's formula 68
  multivariate 77

Kalman-Bucy filter 180
Khasminskii's lemma 141
Khasminskii's test 237
Kolmogorov's continuity theorem 5
Kunita-Watanabe inequality 59

law of the iterated logarithm 115
Lebesgue's thorn 146
Lévy's theorem 111
local martingale 37, 113
local time 83
  continuity of 88
locally equivalent measures 90

Markov property 9
martingale problem 198
  well posed 210
Meyer-Tanaka formula 84
modulus of continuity 283
monotone class theorem 12

natural scale 212
nonnegative definite matrices 198

occupation times
  ball 167
  half space 31
optional process 36
  in discrete time 34
optional stopping theorem 39
Ornstein-Uhlenbeck process 180

pathwise uniqueness for SDE 184
π-λ theorem 11
Poisson's equation 151
potential kernel 102
predictable process 34
  basic 52
  in discrete time 34
  simple 53
predictable σ-field 35
progressively measurable process 36
Prokhorov's theorems 276
punctured disc 146

quadratic variation 50

recurrence and transience
  Brownian motion 17, 98
  one dimensional diffusions 220
  Harris chains 262
reflecting boundary 229
reflection principle 25
regular point 144
resolvent operator 249
relatively compact 276
right continuous filtration

scaling relation 2
Schrödinger equation 156
semigroup property 245
semimartingale 70
shift transformations 9
simple predictable process 53
skew Brownian motion 89
Skorokhod representation 274
Skorokhod topology on D 293
speed measure 224


stopping time 18
strong Markov property
  for Brownian motion 21
  for diffusions 201
strong solution to SDE 183

tail σ-field 17
tight sequence of measures 276
transience, see recurrence
transition density 257
transition kernels 250

uniqueness in distribution for SDE 197

variance process 42

weak convergence 271
Wright-Fisher diffusion 182, 234, 308

Yamada-Watanabe theorem 193