Steven Shreve: Stochastic Calculus and Finance

Prasad Chalasani, Carnegie Mellon University
Somesh Jha, Carnegie Mellon University

THIS IS A DRAFT: PLEASE DO NOT DISTRIBUTE
(c) Copyright Steven E. Shreve, 1996
October 6, 1997

# Steven Shreve. Lectures on Stochastic Calculus and Finance


Contents

1 Introduction to Probability Theory
  1.1 The Binomial Asset Pricing Model
  1.2 Finite Probability Spaces
  1.3 Lebesgue Measure and the Lebesgue Integral
  1.4 General Probability Spaces
  1.5 Independence
    1.5.1 Independence of sets
    1.5.2 Independence of σ-algebras
    1.5.3 Independence of random variables
    1.5.4 Correlation and independence
    1.5.5 Independence and conditional expectation
    1.5.6 Law of Large Numbers
    1.5.7 Central Limit Theorem

2 Conditional Expectation
  2.1 A Binomial Model for Stock Price Dynamics
  2.2 Information
  2.3 Conditional Expectation
    2.3.1 An example
    2.3.2 Definition of Conditional Expectation
    2.3.3 Further discussion of Partial Averaging
    2.3.4 Properties of Conditional Expectation
    2.3.5 Examples from the Binomial Model
  2.4 Martingales

3 Arbitrage Pricing
  3.1 Binomial Pricing
  3.2 General one-step APT
  3.3 Risk-Neutral Probability Measure
    3.3.1 Portfolio Process
    3.3.2 Self-financing Value of a Portfolio Process
  3.4 Simple European Derivative Securities
  3.5 The Binomial Model is Complete

4 The Markov Property
  4.1 Binomial Model Pricing and Hedging
  4.2 Computational Issues
  4.3 Markov Processes
    4.3.1 Different ways to write the Markov property
  4.4 Showing that a process is Markov
  4.5 Application to Exotic Options

5 Stopping Times and American Options
  5.1 American Pricing
  5.2 Value of Portfolio Hedging an American Option
  5.3 Information up to a Stopping Time

6 Properties of American Derivative Securities
  6.1 The properties
  6.2 Proofs of the Properties
  6.3 Compound European Derivative Securities
  6.4 Optimal Exercise of American Derivative Security

7 Jensen's Inequality
  7.1 Jensen's Inequality for Conditional Expectations
  7.2 Optimal Exercise of an American Call
  7.3 Stopped Martingales

8 Random Walks
  8.1 First Passage Time
  8.2 τ is almost surely finite
  8.3 The moment generating function for τ
  8.4 Expectation of τ
  8.5 The Strong Markov Property
  8.6 General First Passage Times
  8.7 Example: Perpetual American Put
  8.8 Difference Equation
  8.9 Distribution of First Passage Times
  8.10 The Reflection Principle

9 Pricing in terms of Market Probabilities: The Radon-Nikodym Theorem
  9.1 Radon-Nikodym Theorem
  9.2 Radon-Nikodym Martingales
  9.3 The State Price Density Process
  9.4 Stochastic Volatility Binomial Model
  9.5 Another Application of the Radon-Nikodym Theorem

10 Capital Asset Pricing
  10.1 An Optimization Problem

11 General Random Variables
  11.1 Law of a Random Variable
  11.2 Density of a Random Variable
  11.3 Expectation
  11.4 Two random variables
  11.5 Marginal Density
  11.6 Conditional Expectation
  11.7 Conditional Density
  11.8 Multivariate Normal Distribution
  11.9 Bivariate normal distribution
  11.10 MGF of jointly normal random variables

12 Semi-Continuous Models
  12.1 Discrete-time Brownian Motion
  12.2 The Stock Price Process
  12.3 Remainder of the Market
  12.4 Risk-Neutral Measure
  12.5 Risk-Neutral Pricing
  12.6 Arbitrage
  12.7 Stalking the Risk-Neutral Measure
  12.8 Pricing a European Call

13 Brownian Motion
  13.1 Symmetric Random Walk
  13.2 The Law of Large Numbers
  13.3 Central Limit Theorem
  13.4 Brownian Motion as a Limit of Random Walks
  13.5 Brownian Motion
  13.6 Covariance of Brownian Motion
  13.7 Finite-Dimensional Distributions of Brownian Motion
  13.8 Filtration generated by a Brownian Motion
  13.9 Martingale Property
  13.10 The Limit of a Binomial Model
  13.11 Starting at Points Other Than 0
  13.12 Markov Property for Brownian Motion
  13.13 Transition Density
  13.14 First Passage Time

14 The Itô Integral
  14.1 Brownian Motion
  14.2 First Variation
  14.3 Quadratic Variation
  14.4 Quadratic Variation as Absolute Volatility
  14.5 Construction of the Itô Integral
  14.6 Itô integral of an elementary integrand
  14.7 Properties of the Itô integral of an elementary process
  14.8 Itô integral of a general integrand
  14.9 Properties of the (general) Itô integral
  14.10 Quadratic variation of an Itô integral

15 Itô's Formula
  15.1 Itô's formula for one Brownian motion
  15.2 Derivation of Itô's formula
  15.3 Geometric Brownian motion
  15.4 Quadratic variation of geometric Brownian motion
  15.5 Volatility of Geometric Brownian motion
  15.6 First derivation of the Black-Scholes formula
  15.7 Mean and variance of the Cox-Ingersoll-Ross process
  15.8 Multidimensional Brownian Motion
  15.9 Cross-variations of Brownian motions
  15.10 Multi-dimensional Itô formula

16 Markov processes and the Kolmogorov equations
  16.1 Stochastic Differential Equations
  16.2 Markov Property
  16.3 Transition density
  16.4 The Kolmogorov Backward Equation
  16.5 Connection between stochastic calculus and KBE
  16.6 Black-Scholes
  16.7 Black-Scholes with price-dependent volatility

17 Girsanov's theorem and the risk-neutral measure
  17.1 Conditional expectations under P̃
  17.2 Risk-neutral measure

18 Martingale Representation Theorem
  18.1 Martingale Representation Theorem
  18.2 A hedging application
  18.3 d-dimensional Girsanov Theorem
  18.4 d-dimensional Martingale Representation Theorem
  18.5 Multi-dimensional market model

19 A two-dimensional market model
  19.1 Hedging when −1 < ρ < 1
  19.2 Hedging when ρ = 1


Theorem 3.26 A stopped martingale (or submartingale, or supermartingale) is still a martingale (or submartingale, or supermartingale, respectively).

Proof: Let $\{Y_k\}_{k=0}^n$ be a martingale and let $\tau$ be a stopping time. Choose some $k \in \{0, 1, \dots, n\}$. The set $\{\tau \le k\}$ is in $\mathcal{F}_k$, so the set $\{\tau \ge k+1\} = \{\tau \le k\}^c$ is also in $\mathcal{F}_k$. We compute

$$
\begin{aligned}
E\left[Y_{\tau \wedge (k+1)} \mid \mathcal{F}_k\right]
&= E\left[\mathbf{1}_{\{\tau \le k\}} Y_\tau + \mathbf{1}_{\{\tau \ge k+1\}} Y_{k+1} \mid \mathcal{F}_k\right] \\
&= \mathbf{1}_{\{\tau \le k\}} Y_\tau + \mathbf{1}_{\{\tau \ge k+1\}} E\left[Y_{k+1} \mid \mathcal{F}_k\right] \\
&= \mathbf{1}_{\{\tau \le k\}} Y_\tau + \mathbf{1}_{\{\tau \ge k+1\}} Y_k \\
&= Y_{\tau \wedge k}.
\end{aligned}
$$


Chapter 8

Random Walks

8.1 First Passage Time

Toss a coin infinitely many times. Then the sample space $\Omega$ is the set of all infinite sequences $\omega = (\omega_1, \omega_2, \dots)$ of $H$ and $T$. Assume the tosses are independent, and on each toss the probability of $H$ is $\frac{1}{2}$, as is the probability of $T$. Define

$$
Y_j(\omega) = \begin{cases} 1 & \text{if } \omega_j = H, \\ -1 & \text{if } \omega_j = T, \end{cases}
$$

$$
M_0 = 0, \qquad M_k = \sum_{j=1}^{k} Y_j, \quad k = 1, 2, \dots
$$

The process $\{M_k\}_{k=0}^\infty$ is a symmetric random walk (see Fig. 8.1). Its analogue in continuous time is Brownian motion. Define

$$
\tau = \min\{k \ge 0 : M_k = 1\}.
$$

The random variable $\tau$ is called the first passage time to 1. It is the first time the number of heads exceeds by one the number of tails. If $M_k$ never gets to 1 (e.g., $\omega = (TTTT\dots)$), then $\tau = \infty$.

8.2 τ is almost surely finite
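The claim of this section's title can be checked numerically. The sketch below (our own illustration, not part of the notes; the function name is ours) propagates the exact distribution of the stopped walk forward in time with rational arithmetic and watches $P\{\tau \le K\}$ climb toward 1 as $K$ grows.

```python
from fractions import Fraction

def prob_tau_at_most(K):
    """Exact P{tau <= K} for the first passage of a symmetric random
    walk to level 1, by forward propagation of the stopped walk."""
    # dist maps each level (< 1) of the not-yet-stopped walk to its probability
    dist = {0: Fraction(1)}
    absorbed = Fraction(0)   # probability mass already absorbed at level 1
    for _ in range(K):
        new = {}
        for m, p in dist.items():
            for step in (1, -1):
                n = m + step
                if n == 1:
                    absorbed += p / 2
                else:
                    new[n] = new.get(n, Fraction(0)) + p / 2
        dist = new
    return absorbed

print(prob_tau_at_most(1), float(prob_tau_at_most(99)))
```

With $K = 1$ the value is exactly $1/2$; by $K = 99$ the probability already exceeds $0.9$, consistent with $\tau < \infty$ almost surely.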

It is shown in a Homework Problem that $\{M_k\}_{k=0}^\infty$ and $\{N_k\}_{k=0}^\infty$, where

$$
N_k = \exp\left\{ \sigma M_k - k \log \frac{e^\sigma + e^{-\sigma}}{2} \right\}
= e^{\sigma M_k} \left( \frac{2}{e^\sigma + e^{-\sigma}} \right)^k,
$$

are martingales. (Take $M_k = -S_k$ in part (i) of the Homework Problem, with the corresponding parameter in part (v).) Since $N_0 = 1$ and a stopped martingale is a martingale, we have

$$
1 = E N_{k \wedge \tau} = E\left[ e^{\sigma M_{k \wedge \tau}} \left( \frac{2}{e^\sigma + e^{-\sigma}} \right)^{k \wedge \tau} \right] \tag{2.1}
$$

for every fixed $\sigma \in \mathbb{R}$ (see Fig. 8.2 for an illustration of the various functions involved).

Figure 8.1: The random walk process $M_k$.

Figure 8.2: Illustrating two functions of $\sigma$.

We want to let $k \to \infty$ in (2.1), but we have to worry a bit that for some sequences $\omega \in \Omega$, $\tau(\omega) = \infty$. We consider fixed $\sigma > 0$, so

$$
0 < \frac{2}{e^\sigma + e^{-\sigma}} < 1.
$$

As $k \to \infty$,

$$
\left( \frac{2}{e^\sigma + e^{-\sigma}} \right)^{k \wedge \tau} \longrightarrow
\begin{cases}
\left( \dfrac{2}{e^\sigma + e^{-\sigma}} \right)^{\tau} & \text{if } \tau < \infty, \\[4pt]
0 & \text{if } \tau = \infty.
\end{cases}
$$
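Identity (2.1) can be verified by brute force on a finite horizon. The sketch below (our own check, not from the notes) enumerates all $2^k$ coin sequences, evaluates $N_{k\wedge\tau}$ on each, and confirms that the average is 1 up to rounding.

```python
import itertools
import math

def stopped_N_expectation(k, sigma):
    """E[N_{k ^ tau}] for N_j = exp(sigma*M_j) * (2/(e^s + e^-s))^j,
    computed exactly by enumerating all 2^k coin sequences."""
    disc = 2.0 / (math.exp(sigma) + math.exp(-sigma))
    total = 0.0
    for path in itertools.product((1, -1), repeat=k):
        m, stop = 0, k               # running walk value and k ^ tau
        for j, step in enumerate(path, start=1):
            m += step
            if m == 1:               # first passage to 1: stop the walk here
                stop = j
                break
        total += math.exp(sigma * m) * disc ** stop
    return total / 2 ** k            # each sequence has probability 2^-k

print(stopped_N_expectation(10, 0.7))  # equals 1 up to float rounding
```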

$v(x)$, defined for $x > 0$, gives the value of the perpetual American put when the stock price is $x$. This function should satisfy the conditions:

(a) $v(x) \ge (K - x)^+$ for all $x$,
(b) $v(x) \ge \frac{1}{1+r}\left[\tilde{p}\, v(ux) + \tilde{q}\, v(dx)\right]$ for all $x$,
(c) at each $x$, either (a) or (b) holds with equality.

In the example we worked out, we have

$$
\text{For } j \ge 1: \quad v(2^j) = 3 \cdot \left(\tfrac{1}{2}\right)^{j-1} = \frac{6}{2^j};
\qquad
\text{For } j \le 1: \quad v(2^j) = 5 - 2^j.
$$

This suggests the formula

$$
v(x) = \begin{cases} \dfrac{6}{x}, & x \ge 3, \\[4pt] 5 - x, & 0 < x \le 3. \end{cases}
$$

We then have (see Fig. 8.4):

(a) $v(x) \ge (5 - x)^+$ for all $x$,
(b) $v(x) \ge \frac{4}{5}\left[\frac{1}{2} v(2x) + \frac{1}{2} v\!\left(\frac{x}{2}\right)\right]$ for every $x$ except for $2 < x < 4$.

Figure 8.4: Graph of $v(x)$.

Check of condition (c): If $0 < x \le 3$, then (a) holds with equality. If $x \ge 6$, then (b) holds with equality:

$$
\frac{4}{5}\left[\tfrac{1}{2} v(2x) + \tfrac{1}{2} v\!\left(\tfrac{x}{2}\right)\right]
= \frac{4}{5}\left[\tfrac{1}{2} \cdot \frac{6}{2x} + \tfrac{1}{2} \cdot \frac{12}{x}\right]
= \frac{4}{5} \cdot \frac{15}{2x} = \frac{6}{x} = v(x).
$$

If $3 < x < 4$ or $4 < x < 6$, then both (a) and (b) are strict. This is an artifact of the discreteness of the binomial model. This artifact will disappear in the continuous model, in which an analogue of (a) or (b) holds with equality at every point.
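On the grid of attainable stock prices $x = 2^j$ the conditions are clean, and can be checked mechanically. The sketch below (our own check; function names ours) uses the example's parameters $K = 5$, $u = 2$, $d = \frac{1}{2}$, $r = \frac{1}{4}$, $\tilde p = \tilde q = \frac{1}{2}$ and exact rational arithmetic.

```python
from fractions import Fraction

def v(x):
    # suggested value function from the example (K = 5)
    return Fraction(6) / x if x >= 3 else 5 - x

def intrinsic(x):
    return max(5 - x, 0)

def continuation(x):
    # (1/(1+r)) [ p~ v(ux) + q~ v(dx) ] with p~ = q~ = 1/2 and 1/(1+r) = 4/5
    return Fraction(4, 5) * (Fraction(1, 2) * v(2 * x) + Fraction(1, 2) * v(x / 2))

for j in range(-5, 6):
    x = Fraction(2) ** j
    assert v(x) >= intrinsic(x)                              # condition (a)
    assert v(x) >= continuation(x)                           # condition (b)
    assert v(x) == intrinsic(x) or v(x) == continuation(x)   # condition (c)
print("conditions (a)-(c) hold at every grid point x = 2^j")
```

At the grid points, (a) holds with equality for $j \le 1$ and (b) holds with equality for $j \ge 2$; the breakdown discussed in the text occurs only between grid points.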

8.9 Distribution of First Passage Times

Let $\{M_k\}_{k=0}^\infty$ be a symmetric random walk under a probability measure $P$, with $M_0 = 0$. Defining

$$\tau = \min\{k \ge 0 : M_k = 1\},$$

we recall that

$$E\,\alpha^\tau = \frac{1 - \sqrt{1 - \alpha^2}}{\alpha}, \qquad 0 < \alpha < 1.$$

We will use this moment generating function to obtain the distribution of $\tau$. We first obtain the Taylor series expansion of $E\,\alpha^\tau$ as follows:

$$
\begin{aligned}
f(x) &= 1 - \sqrt{1-x}, & f(0) &= 0, \\
f'(x) &= \tfrac{1}{2}(1-x)^{-1/2}, & f'(0) &= \tfrac{1}{2}, \\
f''(x) &= \tfrac{1}{4}(1-x)^{-3/2}, & f''(0) &= \tfrac{1}{4}, \\
f'''(x) &= \tfrac{3}{8}(1-x)^{-5/2}, & f'''(0) &= \tfrac{3}{8}, \\
&\;\;\vdots \\
f^{(j)}(x) &= \frac{1 \cdot 3 \cdots (2j-3)}{2^j}(1-x)^{-(2j-1)/2}, & f^{(j)}(0) &= \frac{1 \cdot 3 \cdots (2j-3)}{2^j}.
\end{aligned}
$$

The derivative values at 0 can be rewritten as

$$
f^{(j)}(0) = \frac{1 \cdot 3 \cdots (2j-3)}{2^j}
= \frac{1 \cdot 3 \cdots (2j-3)}{2^j} \cdot \frac{2 \cdot 4 \cdots (2j-2)}{2^{j-1}(j-1)!}
= \frac{1}{2^{2j-1}} \cdot \frac{(2j-2)!}{(j-1)!}.
$$

The Taylor series expansion of $f(x)$ is given by

$$
\begin{aligned}
f(x) = 1 - \sqrt{1-x}
&= \sum_{j=0}^{\infty} \frac{1}{j!} f^{(j)}(0)\, x^j \\
&= \sum_{j=1}^{\infty} \frac{1}{2^{2j-1}} \cdot \frac{(2j-2)!}{j!\,(j-1)!}\, x^j \\
&= \frac{x}{2} + \sum_{j=2}^{\infty} \frac{1}{2^{2j-1}} \cdot \frac{1}{j-1} \binom{2j-2}{j} x^j.
\end{aligned}
$$

So we have

$$
E\,\alpha^\tau = \frac{1 - \sqrt{1-\alpha^2}}{\alpha} = \frac{1}{\alpha} f(\alpha^2)
= \frac{\alpha}{2} + \sum_{j=2}^{\infty} \frac{1}{2^{2j-1}} \cdot \frac{1}{j-1} \binom{2j-2}{j} \alpha^{2j-1}.
$$

But also,

$$
E\,\alpha^\tau = \sum_{j=1}^{\infty} \alpha^{2j-1}\, P\{\tau = 2j-1\}.
$$

Therefore,

$$
P\{\tau = 1\} = \frac{1}{2}, \qquad
P\{\tau = 2j-1\} = \frac{1}{2^{2j-1}} \cdot \frac{1}{j-1} \binom{2j-2}{j}, \quad j = 2, 3, \dots
$$

8.10 The Reflection Principle

To count how many paths reach level 1 by time $2j-1$, count all those for which $M_{2j-1} = 1$ and double count all those for which $M_{2j-1} \ge 3$. (See Figures 8.5, 8.6.)

Figure 8.5: Reflection principle.

Figure 8.6: Example with $j = 2$.

In other words,

$$
\begin{aligned}
P\{\tau \le 2j-1\}
&= P\{M_{2j-1} = 1\} + 2\,P\{M_{2j-1} \ge 3\} \\
&= P\{M_{2j-1} = 1\} + P\{M_{2j-1} \ge 3\} + P\{M_{2j-1} \le -3\} \\
&= 1 - P\{M_{2j-1} = -1\}.
\end{aligned}
$$

For $j \ge 2$,

$$
\begin{aligned}
P\{\tau = 2j-1\}
&= P\{\tau \le 2j-1\} - P\{\tau \le 2j-3\} \\
&= \left[1 - P\{M_{2j-1} = -1\}\right] - \left[1 - P\{M_{2j-3} = -1\}\right] \\
&= P\{M_{2j-3} = -1\} - P\{M_{2j-1} = -1\} \\
&= \frac{1}{2^{2j-3}} \cdot \frac{(2j-3)!}{(j-1)!\,(j-2)!} - \frac{1}{2^{2j-1}} \cdot \frac{(2j-1)!}{j!\,(j-1)!} \\
&= \frac{1}{2^{2j-1}} \cdot \frac{(2j-3)!}{j!\,(j-1)!} \left[ 4j(j-1) - (2j-1)(2j-2) \right] \\
&= \frac{1}{2^{2j-1}} \cdot \frac{(2j-3)!}{j!\,(j-1)!} \left[ 2j(2j-2) - (2j-1)(2j-2) \right] \\
&= \frac{1}{2^{2j-1}} \cdot \frac{(2j-2)!}{j!\,(j-1)!} \\
&= \frac{1}{2^{2j-1}} \cdot \frac{1}{j-1} \binom{2j-2}{j}.
\end{aligned}
$$
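As a sanity check (ours, not in the notes), the closed form for $P\{\tau = 2j-1\}$ can be compared against brute-force enumeration of coin sequences:

```python
import itertools
from fractions import Fraction
from math import comb

def first_passage_prob_exact(n):
    """P{tau = n} by enumerating all 2^n coin sequences of length n."""
    count = 0
    for path in itertools.product((1, -1), repeat=n):
        m, hit = 0, None
        for j, step in enumerate(path, start=1):
            m += step
            if m == 1:
                hit = j
                break
        if hit == n:
            count += 1
    return Fraction(count, 2 ** n)

def first_passage_prob_formula(n):
    """Closed form: P{tau=1} = 1/2, P{tau=2j-1} = 2^-(2j-1) C(2j-2, j)/(j-1)."""
    j = (n + 1) // 2
    if j == 1:
        return Fraction(1, 2)
    return Fraction(comb(2 * j - 2, j), (j - 1) * 2 ** (2 * j - 1))

for n in (1, 3, 5, 7, 9):
    assert first_passage_prob_exact(n) == first_passage_prob_formula(n)
print(*[first_passage_prob_formula(n) for n in (1, 3, 5)])  # 1/2 1/8 1/16
```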

Chapter 9

Pricing in terms of Market Probabilities: The Radon-Nikodym Theorem

9.1 Radon-Nikodym Theorem

Theorem 1.27 (Radon-Nikodym) Let $P$ and $\tilde{P}$ be two probability measures on a space $(\Omega, \mathcal{F})$. Assume that for every $A \in \mathcal{F}$ satisfying $P(A) = 0$, we also have $\tilde{P}(A) = 0$. Then we say that $\tilde{P}$ is absolutely continuous with respect to $P$. Under this assumption, there is a nonnegative random variable $Z$ such that

$$\tilde{P}(A) = \int_A Z\, dP \qquad \forall A \in \mathcal{F}, \tag{1.1}$$

and $Z$ is called the Radon-Nikodym derivative of $\tilde{P}$ with respect to $P$.

Remark 9.1 Equation (1.1) implies the apparently stronger condition

$$\tilde{E} X = E[XZ] \quad \text{for every random variable } X \text{ for which } E|XZ| < \infty.$$

Remark 9.2 If $\tilde{P}$ is absolutely continuous with respect to $P$, and $P$ is absolutely continuous with respect to $\tilde{P}$, we say that $P$ and $\tilde{P}$ are equivalent. $P$ and $\tilde{P}$ are equivalent if and only if

$$P(A) = 0 \ \text{exactly when}\ \tilde{P}(A) = 0, \quad \forall A \in \mathcal{F}.$$

If $P$ and $\tilde{P}$ are equivalent and $Z$ is the Radon-Nikodym derivative of $\tilde{P}$ w.r.t. $P$, then $\frac{1}{Z}$ is the Radon-Nikodym derivative of $P$ w.r.t. $\tilde{P}$, i.e.,

$$\tilde{E} X = E[XZ] \quad \forall X, \tag{1.2}$$

$$E\,Y = \tilde{E}\left[Y \cdot \tfrac{1}{Z}\right] \quad \forall Y. \tag{1.3}$$

(Let $X$ and $Y$ be related by the equation $Y = XZ$ to see that (1.2) and (1.3) are the same.)

Example 9.1 (Radon-Nikodym Theorem) Let $\Omega = \{HH, HT, TH, TT\}$, the set of coin toss sequences of length 2. Let $P$ correspond to probability $\frac{1}{3}$ for $H$ and $\frac{2}{3}$ for $T$, and let $\tilde{P}$ correspond to probability $\frac{1}{2}$ for $H$ and $\frac{1}{2}$ for $T$. Then $Z(\omega) = \frac{\tilde{P}(\omega)}{P(\omega)}$, so

$$Z(HH) = \tfrac{9}{4}, \quad Z(HT) = \tfrac{9}{8}, \quad Z(TH) = \tfrac{9}{8}, \quad Z(TT) = \tfrac{9}{16}.$$

9.2 Radon-Nikodym Martingales

Let $\Omega$ be the set of all sequences of $n$ coin tosses. Let $P$ be the market probability measure and let $\tilde{P}$ be the risk-neutral probability measure. Assume

$$\tilde{P}(\omega) > 0, \quad P(\omega) > 0 \quad \forall \omega \in \Omega,$$

so that $P$ and $\tilde{P}$ are equivalent. The Radon-Nikodym derivative of $\tilde{P}$ with respect to $P$ is

$$Z(\omega) = \frac{\tilde{P}(\omega)}{P(\omega)}.$$

Define the $P$-martingale

$$Z_k \triangleq E[Z \mid \mathcal{F}_k], \quad k = 0, 1, \dots, n.$$

We can check that $Z_k$ is indeed a martingale:

$$E[Z_{k+1} \mid \mathcal{F}_k] = E\big[E[Z \mid \mathcal{F}_{k+1}] \mid \mathcal{F}_k\big] = E[Z \mid \mathcal{F}_k] = Z_k.$$

Lemma 2.28 If $X$ is $\mathcal{F}_k$-measurable, then $\tilde{E} X = E[X Z_k]$.

Proof:

$$\tilde{E} X = E[XZ] = E\big[E[XZ \mid \mathcal{F}_k]\big] = E\big[X\, E[Z \mid \mathcal{F}_k]\big] = E[X Z_k].$$

Note that Lemma 2.28 implies that if $X$ is $\mathcal{F}_k$-measurable, then for any $A \in \mathcal{F}_k$,

$$\tilde{E}[\mathbf{1}_A X] = E[Z_k \mathbf{1}_A X],$$

or equivalently,

$$\int_A X\, d\tilde{P} = \int_A X Z_k\, dP.$$

Figure 9.1: Showing the $Z_k$ values in the 2-period binomial model example: $Z_0 = 1$; $Z_1(H) = \frac{3}{2}$, $Z_1(T) = \frac{3}{4}$; $Z_2(HH) = \frac{9}{4}$, $Z_2(HT) = Z_2(TH) = \frac{9}{8}$, $Z_2(TT) = \frac{9}{16}$. The probabilities shown ($\frac{1}{3}$ up, $\frac{2}{3}$ down) are for $P$, not $\tilde{P}$.

Lemma 2.29 If $X$ is $\mathcal{F}_k$-measurable and $0 \le j \le k$, then

$$\tilde{E}[X \mid \mathcal{F}_j] = \frac{1}{Z_j} E[X Z_k \mid \mathcal{F}_j].$$

Proof: Note first that $\frac{1}{Z_j} E[X Z_k \mid \mathcal{F}_j]$ is $\mathcal{F}_j$-measurable. So for any $A \in \mathcal{F}_j$, we have

$$
\begin{aligned}
\int_A \frac{1}{Z_j} E[X Z_k \mid \mathcal{F}_j]\, d\tilde{P}
&= \int_A E[X Z_k \mid \mathcal{F}_j]\, dP && \text{(Lemma 2.28)} \\
&= \int_A X Z_k\, dP && \text{(Partial averaging)} \\
&= \int_A X\, d\tilde{P} && \text{(Lemma 2.28)}.
\end{aligned}
$$

Example 9.2 (Radon-Nikodym Theorem, continued) We show in Fig. 9.1 the values of the martingale $Z_k$. We always have $Z_0 = 1$, since

$$Z_0 = E Z = \int_\Omega Z\, dP = \tilde{P}(\Omega) = 1.$$
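The tree in Fig. 9.1 can be reproduced in a few lines. The sketch below (our own illustration) builds $Z = \tilde{P}/P$ for the 2-period example with $P(H) = \frac{1}{3}$ and $\tilde{P}(H) = \frac{1}{2}$, computes $Z_1 = E[Z \mid \mathcal{F}_1]$ by conditioning on the first toss, and verifies Lemma 2.28 for an $\mathcal{F}_1$-measurable random variable (we take $X = S_1$ with the example's stock prices $S_1(H) = 8$, $S_1(T) = 2$).

```python
from fractions import Fraction as F
from itertools import product

p, pt = F(1, 3), F(1, 2)  # market and risk-neutral probabilities of H
omega = [''.join(w) for w in product('HT', repeat=2)]

def P(w):  return (p  if w[0] == 'H' else 1 - p ) * (p  if w[1] == 'H' else 1 - p )
def Pt(w): return (pt if w[0] == 'H' else 1 - pt) * (pt if w[1] == 'H' else 1 - pt)

Z = {w: Pt(w) / P(w) for w in omega}
assert Z['HH'] == F(9, 4) and Z['HT'] == F(9, 8) and Z['TT'] == F(9, 16)

# Z_1 = E[Z | F_1]: average Z over the second toss, weighted by market probs
Z1 = {a: sum(P(a + b) * Z[a + b] for b in 'HT') / sum(P(a + b) for b in 'HT')
      for a in 'HT'}
assert Z1['H'] == F(3, 2) and Z1['T'] == F(3, 4)

# Lemma 2.28 with X = S_1 (F_1-measurable): E~[X] == E[X * Z_1]
S1 = {'H': 8, 'T': 2}
lhs = sum(Pt(w) * S1[w[0]] for w in omega)
rhs = sum(P(w) * S1[w[0]] * Z1[w[0]] for w in omega)
assert lhs == rhs == 5
print("Fig 9.1 values and Lemma 2.28 verified")
```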

9.3 The State Price Density Process

In order to express the value of a derivative security in terms of the market probabilities, it will be useful to introduce the following state price density process:

$$\zeta_k \triangleq (1+r)^{-k} Z_k, \quad k = 0, \dots, n.$$

We then have the following pricing formulas. For a simple European derivative security with payoff $C_k$ at time $k$,

$$
\begin{aligned}
V_0 &= \tilde{E}\left[(1+r)^{-k} C_k\right] \\
&= E\left[(1+r)^{-k} Z_k C_k\right] && \text{(Lemma 2.28)} \\
&= E[\zeta_k C_k].
\end{aligned}
$$

More generally, for $0 \le j \le k$,

$$
\begin{aligned}
V_j &= (1+r)^j\, \tilde{E}\left[(1+r)^{-k} C_k \mid \mathcal{F}_j\right] \\
&= (1+r)^j \frac{1}{Z_j} E\left[(1+r)^{-k} Z_k C_k \mid \mathcal{F}_j\right] && \text{(Lemma 2.29)} \\
&= \frac{1}{\zeta_j} E[\zeta_k C_k \mid \mathcal{F}_j].
\end{aligned}
$$

Remark 9.3 $\{\zeta_j V_j\}_{j=0}^k$ is a martingale under $P$, as we can check below:

$$
E[\zeta_{j+1} V_{j+1} \mid \mathcal{F}_j]
= E\big[E[\zeta_k C_k \mid \mathcal{F}_{j+1}] \mid \mathcal{F}_j\big]
= E[\zeta_k C_k \mid \mathcal{F}_j]
= \zeta_j V_j.
$$

Now for an American derivative security $\{G_k\}_{k=0}^n$:

$$
V_0 = \sup_{\tau \in \mathcal{T}_0} \tilde{E}\left[(1+r)^{-\tau} G_\tau\right]
= \sup_{\tau \in \mathcal{T}_0} E\left[(1+r)^{-\tau} Z_\tau G_\tau\right]
= \sup_{\tau \in \mathcal{T}_0} E[\zeta_\tau G_\tau].
$$

More generally, for $0 \le j \le n$,

$$
\begin{aligned}
V_j &= (1+r)^j \sup_{\tau \in \mathcal{T}_j} \tilde{E}\left[(1+r)^{-\tau} G_\tau \mid \mathcal{F}_j\right] \\
&= (1+r)^j \sup_{\tau \in \mathcal{T}_j} \frac{1}{Z_j} E\left[(1+r)^{-\tau} Z_\tau G_\tau \mid \mathcal{F}_j\right] \\
&= \frac{1}{\zeta_j} \sup_{\tau \in \mathcal{T}_j} E[\zeta_\tau G_\tau \mid \mathcal{F}_j].
\end{aligned}
$$

Remark 9.4 Note that

(a) $\{\zeta_j V_j\}_{j=0}^n$ is a supermartingale under $P$,
(b) $\zeta_j V_j \ge \zeta_j G_j$ for all $j$,
(c) $\{\zeta_j V_j\}_{j=0}^n$ is the smallest process having properties (a) and (b).

Figure 9.2: Showing the state price values $\zeta_k$: $\zeta_0 = 1.00$ at $S_0 = 4$; $\zeta_1(H) = 1.20$ at $S_1(H) = 8$, $\zeta_1(T) = 0.60$ at $S_1(T) = 2$; $\zeta_2(HH) = 1.44$ at $S_2(HH) = 16$, $\zeta_2(HT) = \zeta_2(TH) = 0.72$ at $S_2(HT) = S_2(TH) = 4$, $\zeta_2(TT) = 0.36$ at $S_2(TT) = 1$. The probabilities shown ($\frac{1}{3}$ up, $\frac{2}{3}$ down) are for $P$, not $\tilde{P}$.

We interpret $\zeta_k$ by observing that $\zeta_k(\omega) P(\omega)$ is the value at time zero of a contract which pays $1 at time $k$ if $\omega$ occurs.

Example 9.3 (Radon-Nikodym Theorem, continued) We illustrate the use of the valuation formulas for European and American derivative securities in terms of market probabilities. Recall that $p = \frac{1}{3}$, $q = \frac{2}{3}$. The state price values $\zeta_k$ are shown in Fig. 9.2. For a European call with strike price 5 and expiration time 2, we have

$$V_2(HH) = 11, \qquad \zeta_2(HH)\, V_2(HH) = 1.44 \times 11 = 15.84,$$

$$V_2(HT) = V_2(TH) = V_2(TT) = 0,$$

$$V_0 = \tfrac{1}{3} \cdot \tfrac{1}{3} \cdot 15.84 = 1.76,$$

$$\frac{\zeta_2(HH)}{\zeta_1(H)}\, V_2(HH) = \frac{1.44}{1.20} \times 11 = 1.20 \times 11 = 13.20,
\qquad V_1(H) = \tfrac{1}{3} \times 13.20 = 4.40.$$

Compare with the risk-neutral pricing formulas:

$$V_1(H) = \tfrac{2}{5} V_2(HH) + \tfrac{2}{5} V_2(HT) = \tfrac{2}{5} \times 11 = 4.40,$$

$$V_1(T) = \tfrac{2}{5} V_2(TH) + \tfrac{2}{5} V_2(TT) = 0,$$

$$V_0 = \tfrac{2}{5} V_1(H) + \tfrac{2}{5} V_1(T) = \tfrac{2}{5} \times 4.40 = 1.76.$$

Now consider an American put with strike price 5 and expiration time 2. Fig. 9.3 shows the values of $\zeta_k (5 - S_k)^+$. We compute the value of the put under various stopping times $\tau$:

(0) Stop immediately: the value is 1.

(1) If $\tau(HH) = \tau(HT) = 2$ and $\tau(TH) = \tau(TT) = 1$, the value is

$$\tfrac{1}{3} \cdot \tfrac{2}{3} \cdot 0.72 + \tfrac{2}{3} \cdot 1.80 = 1.36.$$

Figure 9.3: Showing the values $\zeta_k (5 - S_k)^+$ for an American put: $(5 - S_0)^+ = 1$ with $\zeta_0 (5 - S_0)^+ = 1$; $(5 - S_1(H))^+ = 0$; $(5 - S_1(T))^+ = 3$ with $\zeta_1(T)(5 - S_1(T))^+ = 1.80$; $(5 - S_2(HT))^+ = (5 - S_2(TH))^+ = 1$ with state-price-weighted values $0.72$; $(5 - S_2(TT))^+ = 4$ with $\zeta_2(TT)(5 - S_2(TT))^+ = 1.44$. The probabilities shown are for $P$, not $\tilde{P}$.

(2) If we stop at time 2, the value is

$$\tfrac{1}{3} \cdot \tfrac{2}{3} \cdot 0.72 + \tfrac{2}{3} \cdot \tfrac{1}{3} \cdot 0.72 + \tfrac{2}{3} \cdot \tfrac{2}{3} \cdot 1.44 = 0.96.$$

We see that (1) is the optimal stopping rule.
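Example 9.3 can be replayed mechanically. The sketch below (our own code; the model parameters $S_0 = 4$, $u = 2$, $d = \frac{1}{2}$, $r = \frac{1}{4}$, $p = \frac{1}{3}$, $\tilde p = \frac{1}{2}$ are those of the running example) prices the European call with both formulas and evaluates the three stopping rules for the American put.

```python
from fractions import Fraction as F
from itertools import product

p, pt, r = F(1, 3), F(1, 2), F(1, 4)   # market prob of H, risk-neutral prob, rate
S0, u, d = 4, 2, F(1, 2)
omega = [''.join(w) for w in product('HT', repeat=2)]

def P(w):  return (p  if w[0] == 'H' else 1 - p ) * (p  if w[1] == 'H' else 1 - p )
def Pt(w): return (pt if w[0] == 'H' else 1 - pt) * (pt if w[1] == 'H' else 1 - pt)

def S(w):
    s = F(S0)
    for c in w:
        s *= u if c == 'H' else d
    return s

def prefix_ratio(w, k):
    num = den = F(1)
    for c in w[:k]:
        num *= pt if c == 'H' else 1 - pt
        den *= p  if c == 'H' else 1 - p
    return num / den

def zeta(w, k):
    # state price density zeta_k = (1+r)^-k Z_k along the prefix w[:k]
    return prefix_ratio(w, k) / (1 + r) ** k

# European call, strike 5, expiry 2: the two pricing formulas agree
call = {w: max(S(w) - 5, 0) for w in omega}
v0_market = sum(P(w) * zeta(w, 2) * call[w] for w in omega)   # E[zeta_2 C_2]
v0_rn = sum(Pt(w) * call[w] for w in omega) / (1 + r) ** 2    # E~[(1+r)^-2 C_2]
assert v0_market == v0_rn == F(44, 25)   # = 1.76

# American put, strike 5: the three stopping rules of Example 9.3
def put(w, k): return max(5 - S(w[:k]), 0)

stop_now  = sum(P(w) * zeta(w, 0) * put(w, 0) for w in omega)
rule_1    = sum(P(w) * (zeta(w, 2) * put(w, 2) if w[0] == 'H'
                        else zeta(w, 1) * put(w, 1)) for w in omega)
stop_at_2 = sum(P(w) * zeta(w, 2) * put(w, 2) for w in omega)
assert (stop_now, rule_1, stop_at_2) == (1, F(34, 25), F(24, 25))  # 1, 1.36, 0.96
print(float(rule_1))  # value of the optimal rule: 1.36
```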

9.4 Stochastic Volatility Binomial Model

Let $\Omega$ be the set of sequences of $n$ tosses, and let $0 < d_k < 1 + r_k < u_k$, where for each $k$, $d_k$, $u_k$, $r_k$ are $\mathcal{F}_k$-measurable. Also let

$$\tilde{p}_k = \frac{1 + r_k - d_k}{u_k - d_k}, \qquad \tilde{q}_k = \frac{u_k - (1 + r_k)}{u_k - d_k}.$$

Let $\tilde{P}$ be the risk-neutral probability measure:

$$\tilde{P}\{\omega_1 = H\} = \tilde{p}_0, \qquad \tilde{P}\{\omega_1 = T\} = \tilde{q}_0,$$

and for $1 \le k \le n-1$,

$$\tilde{P}[\omega_{k+1} = H \mid \mathcal{F}_k] = \tilde{p}_k, \qquad \tilde{P}[\omega_{k+1} = T \mid \mathcal{F}_k] = \tilde{q}_k.$$

Let $P$ be the market probability measure, and assume $P\{\omega\} > 0$ for all $\omega \in \Omega$. Then $P$ and $\tilde{P}$ are equivalent. Define

$$Z(\omega) = \frac{\tilde{P}(\omega)}{P(\omega)} \quad \forall \omega \in \Omega,$$

$$Z_k = E[Z \mid \mathcal{F}_k], \quad k = 0, 1, \dots, n.$$

We define the money market price process as follows:

$$M_0 = 1, \qquad M_k = (1 + r_{k-1}) M_{k-1}, \quad k = 1, \dots, n.$$

Note that $M_k$ is $\mathcal{F}_{k-1}$-measurable.

We then define the state price process to be

$$\zeta_k = \frac{1}{M_k} Z_k, \quad k = 0, \dots, n.$$

As before, the portfolio process is $\{\Delta_k\}_{k=0}^{n-1}$. The self-financing value process (wealth process) consists of $X_0$, the non-random initial wealth, and

$$X_{k+1} = \Delta_k S_{k+1} + (1 + r_k)(X_k - \Delta_k S_k), \quad k = 0, \dots, n-1.$$

Then the following processes are martingales under $\tilde{P}$:

$$\left\{\frac{S_k}{M_k}\right\}_{k=0}^n \quad \text{and} \quad \left\{\frac{X_k}{M_k}\right\}_{k=0}^n,$$

and the following processes are martingales under $P$:

$$\{\zeta_k S_k\}_{k=0}^n \quad \text{and} \quad \{\zeta_k X_k\}_{k=0}^n.$$

We thus have the following pricing formulas. For a simple European derivative security with payoff $C_k$ at time $k$:

$$V_j = M_j\, \tilde{E}\left[\frac{C_k}{M_k} \,\Big|\, \mathcal{F}_j\right] = \frac{1}{\zeta_j} E[\zeta_k C_k \mid \mathcal{F}_j].$$

For an American derivative security $\{G_k\}_{k=0}^n$:

$$V_j = M_j \sup_{\tau \in \mathcal{T}_j} \tilde{E}\left[\frac{G_\tau}{M_\tau} \,\Big|\, \mathcal{F}_j\right]
= \frac{1}{\zeta_j} \sup_{\tau \in \mathcal{T}_j} E[\zeta_\tau G_\tau \mid \mathcal{F}_j].$$

The usual hedging portfolio formulas still work.
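A quick numerical check (ours, not part of the notes): in a 2-period model whose up/down factors and interest rate at time 1 depend on the first toss (the specific parameter values below are hypothetical), $S_k/M_k$ is a martingale under the risk-neutral measure built from the $\tilde p_k$, $\tilde q_k$ above.

```python
from fractions import Fraction as F

S0 = 4
# period-0 parameters, and period-1 parameters depending on the first toss
u0, d0, r0 = 2, F(1, 2), F(1, 4)
u1 = {'H': 3, 'T': 2}
d1 = {'H': F(1, 3), 'T': F(1, 2)}
r1 = {'H': F(1, 2), 'T': F(1, 4)}

def ptilde(u, d, r):
    return (1 + r - d) / (u - d)

# check E~[S_2 / M_2 | F_1] == S_1 / M_1 on each branch of the tree
for a in 'HT':
    S1 = S0 * (u0 if a == 'H' else d0)
    M1 = 1 + r0
    M2 = M1 * (1 + r1[a])
    pu = ptilde(u1[a], d1[a], r1[a])
    cond_exp = (pu * (S1 * u1[a]) + (1 - pu) * (S1 * d1[a])) / M2
    assert cond_exp == F(S1) / M1
print("S_k / M_k is a martingale under the risk-neutral measure")
```

The check works because $\tilde p_k u_k + \tilde q_k d_k = 1 + r_k$ by construction, so one period of discounted growth cancels exactly.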

9.5 Another Application of the Radon-Nikodym Theorem

Let $(\Omega, \mathcal{F}, Q)$ be a probability space. Let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$, and let $X$ be a non-negative random variable with $\int_\Omega X\, dQ = 1$. We construct the conditional expectation (under $Q$) of $X$ given $\mathcal{G}$. On $\mathcal{G}$, define two probability measures

$$P(A) = Q(A) \quad \forall A \in \mathcal{G},$$

$$\tilde{P}(A) = \int_A X\, dQ \quad \forall A \in \mathcal{G}.$$

Whenever $Y$ is a $\mathcal{G}$-measurable random variable, we have

$$\int_\Omega Y\, dP = \int_\Omega Y\, dQ;$$

if $Y = \mathbf{1}_A$ for some $A \in \mathcal{G}$, this is just the definition of $P$, and the rest follows from the standard machine. If $A \in \mathcal{G}$ and $P(A) = 0$, then $Q(A) = 0$, so $\tilde{P}(A) = 0$. In other words, the measure $\tilde{P}$ is absolutely continuous with respect to the measure $P$. The Radon-Nikodym theorem implies that there exists a $\mathcal{G}$-measurable random variable $Z$ such that

$$\tilde{P}(A) = \int_A Z\, dP \quad \forall A \in \mathcal{G},$$

i.e.,

$$\int_A X\, dQ = \int_A Z\, dP \quad \forall A \in \mathcal{G}.$$

This shows that $Z$ has the partial averaging property, and since $Z$ is $\mathcal{G}$-measurable, it is the conditional expectation (under the probability measure $Q$) of $X$ given $\mathcal{G}$. The existence of conditional expectations is a consequence of the Radon-Nikodym theorem.

Chapter 10

Capital Asset Pricing

10.1 An Optimization Problem

Consider an agent who has initial wealth $X_0$ and wants to invest in the stock and money markets so as to maximize

$$E \log X_n.$$

Remark 10.1 Regardless of the portfolio used by the agent, $\{\zeta_k X_k\}_{k=0}^n$ is a martingale under $P$, so

$$E[\zeta_n X_n] = X_0. \tag{BC}$$

Here, (BC) stands for Budget Constraint.

Remark 10.2 If $\xi$ is any random variable satisfying (BC), i.e.,

$$E[\zeta_n \xi] = X_0,$$

then there is a portfolio which starts with initial wealth $X_0$ and produces $X_n = \xi$ at time $n$. To see this, just regard $\xi$ as a simple European derivative security paying off at time $n$. Then $X_0$ is its value at time 0, and starting from this value, there is a hedging portfolio which produces $X_n = \xi$.

Remarks 10.1 and 10.2 show that the optimal $X_n$ for the capital asset pricing problem can be obtained by solving the following Constrained Optimization Problem: find a random variable $\xi$ which solves

$$\text{Maximize } E \log \xi \quad \text{subject to} \quad E[\zeta_n \xi] = X_0.$$

Equivalently, we wish to

$$\text{Maximize } \sum_{\omega \in \Omega} (\log \xi(\omega))\, P(\omega)
\quad \text{subject to} \quad
\sum_{\omega \in \Omega} \zeta_n(\omega)\, \xi(\omega)\, P(\omega) - X_0 = 0.$$

There are $2^n$ sequences $\omega$ in $\Omega$. Call them $\omega^1, \omega^2, \dots, \omega^{2^n}$. Adopt the notation

$$x_1 = \xi(\omega^1), \quad x_2 = \xi(\omega^2), \quad \dots, \quad x_{2^n} = \xi(\omega^{2^n}).$$

We can thus restate the problem as:

$$\text{Maximize } \sum_{k=1}^{2^n} (\log x_k)\, P(\omega^k)
\quad \text{subject to} \quad
\sum_{k=1}^{2^n} \zeta_n(\omega^k)\, x_k\, P(\omega^k) - X_0 = 0.$$

In order to solve this problem we use:

Theorem 1.30 (Lagrange Multiplier) If $(x_1, \dots, x_m)$ solve the problem

$$\text{Maximize } f(x_1, \dots, x_m) \quad \text{subject to} \quad g(x_1, \dots, x_m) = 0,$$

then there is a number $\lambda$ such that

$$\frac{\partial}{\partial x_k} f(x_1, \dots, x_m) = \lambda \frac{\partial}{\partial x_k} g(x_1, \dots, x_m), \quad k = 1, \dots, m, \tag{1.1}$$

and

$$g(x_1, \dots, x_m) = 0. \tag{1.2}$$

For our problem, (1.1) and (1.2) become

$$\frac{1}{x_k} P(\omega^k) = \lambda\, \zeta_n(\omega^k)\, P(\omega^k), \quad k = 1, \dots, 2^n, \tag{1.1'}$$

$$\sum_{k=1}^{2^n} \zeta_n(\omega^k)\, x_k\, P(\omega^k) = X_0. \tag{1.2'}$$

Equation (1.1') implies

$$x_k = \frac{1}{\lambda\, \zeta_n(\omega^k)}.$$

Plugging this into (1.2'), we get

$$\sum_{k=1}^{2^n} \frac{1}{\lambda}\, P(\omega^k) = X_0 \implies \frac{1}{\lambda} = X_0.$$

Therefore,

$$x_k = \frac{X_0}{\zeta_n(\omega^k)}, \quad k = 1, \dots, 2^n.$$

Thus we have shown that if $\xi$ solves the problem

$$\text{Maximize } E \log \xi \quad \text{subject to} \quad E[\zeta_n \xi] = X_0, \tag{1.3}$$

then

$$\xi = \frac{X_0}{\zeta_n}. \tag{1.4}$$

Theorem 1.31 If $\xi$ is given by (1.4), then $\xi$ solves the problem (1.3).

Proof: Fix $Z > 0$ and define

$$f(x) = \log x - xZ.$$

We maximize $f$ over $x > 0$:

$$f'(x) = \frac{1}{x} - Z = 0 \iff x = \frac{1}{Z}, \qquad f''(x) = -\frac{1}{x^2} < 0 \quad \forall x.$$

The function $f$ is maximized at $x^* = \frac{1}{Z}$, i.e.,

$$\log x - xZ \le f(x^*) = \log \frac{1}{Z} - 1 \quad \forall x > 0,\ \forall Z > 0. \tag{1.5}$$

Let $\xi$ be any random variable satisfying

$$E[\zeta_n \xi] = X_0,$$

and let

$$\xi^* = \frac{X_0}{\zeta_n}.$$

From (1.5), with $Z = \frac{\zeta_n}{X_0}$, we have

$$\log \xi - \frac{\zeta_n}{X_0}\, \xi \le \log \frac{X_0}{\zeta_n} - 1 = \log \xi^* - 1.$$

Taking expectations, we have

$$E \log \xi - \frac{1}{X_0} E[\zeta_n \xi] \le E \log \xi^* - 1,$$

and since $E[\zeta_n \xi] = X_0$,

$$E \log \xi \le E \log \xi^*.$$

In summary, capital asset pricing works as follows. Consider an agent who has initial wealth $X_0$ and wants to invest in the stock and money market so as to maximize

$$E \log X_n.$$

The optimal $X_n$ is $X_n = \frac{X_0}{\zeta_n}$, i.e.,

$$\zeta_n X_n = X_0.$$

Since $\{\zeta_k X_k\}_{k=0}^n$ is a martingale under $P$, we have

$$\zeta_k X_k = E[\zeta_n X_n \mid \mathcal{F}_k] = X_0, \quad k = 0, \dots, n,$$

so

$$X_k = \frac{X_0}{\zeta_k},$$

and the optimal portfolio is given by

$$
\Delta_k(\omega_1, \dots, \omega_k) =
\frac{\dfrac{X_0}{\zeta_{k+1}(\omega_1, \dots, \omega_k, H)} - \dfrac{X_0}{\zeta_{k+1}(\omega_1, \dots, \omega_k, T)}}
     {S_{k+1}(\omega_1, \dots, \omega_k, H) - S_{k+1}(\omega_1, \dots, \omega_k, T)}.
$$
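The closed-form solution can be sanity-checked by perturbation in a small model. The sketch below (our own code, reusing the 2-period example with $P(H) = \frac{1}{3}$, $\tilde{P}(H) = \frac{1}{2}$, $r = \frac{1}{4}$; the initial wealth $X_0 = 100$ is an arbitrary choice) computes $\xi^* = X_0/\zeta_2$, verifies the budget constraint exactly, and confirms that no feasible perturbation of $\xi^*$ achieves a higher expected log.

```python
import math
from fractions import Fraction as F
from itertools import product

p, pt, r, X0 = F(1, 3), F(1, 2), F(1, 4), 100
omega = [''.join(w) for w in product('HT', repeat=2)]

def P(w):  return (p  if w[0] == 'H' else 1 - p ) * (p  if w[1] == 'H' else 1 - p )
def Pt(w): return (pt if w[0] == 'H' else 1 - pt) * (pt if w[1] == 'H' else 1 - pt)

zeta2 = {w: (Pt(w) / P(w)) / (1 + r) ** 2 for w in omega}   # state price density

xi_star = {w: X0 / zeta2[w] for w in omega}                 # optimal terminal wealth
assert sum(P(w) * zeta2[w] * xi_star[w] for w in omega) == X0   # budget constraint

def expected_log(xi):
    return sum(float(P(w)) * math.log(float(xi[w])) for w in omega)

best = expected_log(xi_star)
# perturb within the budget hyperplane: move wealth between two states
for eps in (-5, -1, 1, 5):
    xi = dict(xi_star)
    w1, w2 = 'HH', 'TT'
    xi[w1] = xi_star[w1] + eps / (P(w1) * zeta2[w1])
    xi[w2] = xi_star[w2] - eps / (P(w2) * zeta2[w2])   # keeps E[zeta_2 xi] = X0
    assert sum(P(w) * zeta2[w] * xi[w] for w in omega) == X0
    assert expected_log(xi) < best
print("xi* = X0 / zeta_n satisfies the budget constraint and beats perturbations")
```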

Chapter 11

General Random Variables

11.1 Law of a Random Variable

Thus far we have considered only random variables whose domain and range are discrete. We now consider a general random variable $X : \Omega \to \mathbb{R}$ defined on the probability space $(\Omega, \mathcal{F}, P)$. Recall that:

- $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$.
- $P$ is a probability measure on $\mathcal{F}$, i.e., $P(A)$ is defined for every $A \in \mathcal{F}$.
- A function $X : \Omega \to \mathbb{R}$ is a random variable if and only if for every $B \in \mathcal{B}(\mathbb{R})$ (the $\sigma$-algebra of Borel subsets of $\mathbb{R}$), the set

$$\{X \in B\} \triangleq X^{-1}(B) \triangleq \{\omega : X(\omega) \in B\} \in \mathcal{F};$$

i.e., $X : \Omega \to \mathbb{R}$ is a random variable if and only if $X^{-1}$ is a function from $\mathcal{B}(\mathbb{R})$ to $\mathcal{F}$ (see Fig. 11.1).

Figure 11.1: Illustrating a real-valued random variable $X$.

Thus any random variable $X$ induces a measure $\mu_X$ on the measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, defined by

$$\mu_X(B) = P\left(X^{-1}(B)\right) \quad \forall B \in \mathcal{B}(\mathbb{R}),$$

where the probability on the right is defined since $X^{-1}(B) \in \mathcal{F}$. $\mu_X$ is often called the Law of $X$; in Williams' book this is denoted by $\mathcal{L}_X$.

11.2 Density of a Random Variable

The density of $X$ (if it exists) is a function $f_X : \mathbb{R} \to [0, \infty)$ such that

$$\mu_X(B) = \int_B f_X(x)\, dx \quad \forall B \in \mathcal{B}(\mathbb{R}),$$

where the integral is with respect to the Lebesgue measure on $\mathbb{R}$. We then write

$$d\mu_X(x) = f_X(x)\, dx.$$

$f_X$ is the Radon-Nikodym derivative of $\mu_X$ with respect to the Lebesgue measure. Thus $X$ has a density if and only if $\mu_X$ is absolutely continuous with respect to Lebesgue measure, which means that whenever $B \in \mathcal{B}(\mathbb{R})$ has Lebesgue measure zero, then

$$P\{X \in B\} = 0.$$

11.3 Expectation

Theorem 3.32 (Expectation of a function of $X$) Let $h:\mathbb{R}\to\mathbb{R}$ be given. Then
\[ \mathbb{E}h(X) \stackrel{\triangle}{=} \int_\Omega h(X(\omega))\,d\mathbb{P}(\omega) = \int_{\mathbb{R}} h(x)\,d\mu_X(x) = \int_{\mathbb{R}} h(x)\,f_X(x)\,dx. \]

Proof: (Sketch.) If $h(x) = \mathbf{1}_B(x)$ for some $B\subset\mathbb{R}$, then these equations are
\[ \mathbb{E}\,\mathbf{1}_B(X) = \mathbb{P}\{X\in B\} = \mu_X(B) = \int_B f_X(x)\,dx, \]
which are true by definition. Now use the standard machine to get the equations for general $h$.
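The chain of equalities in Theorem 3.32 can be checked numerically. A minimal sketch (the choice of a standard normal $X$ and $h(x)=x^2$ is illustrative, not from the text): both the Monte Carlo average of $h(X)$ and the integral $\int h(x)f_X(x)\,dx$ should be close to $\mathbb{E}X^2 = 1$.

```python
import math, random

# Numerical sanity check of Theorem 3.32 for a standard normal X and h(x) = x^2
# (illustrative choices): both sides should be close to 1.
random.seed(0)

f_X = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # standard normal density
h = lambda x: x * x

# Right-hand side: integral of h(x) f_X(x) dx by a Riemann sum on [-8, 8]
n, a, b = 4000, -8.0, 8.0
dx = (b - a) / n
integral = sum(h(a + i * dx) * f_X(a + i * dx) for i in range(n + 1)) * dx

# Left-hand side: Monte Carlo average of h(X(omega))
samples = 200_000
mc = sum(h(random.gauss(0, 1)) for _ in range(samples)) / samples

print(integral, mc)
```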

Figure 11.2: Two real-valued random variables $X$, $Y$.

11.4 Two random variables

Let $X$, $Y$ be two random variables $\Omega\to\mathbb{R}$ defined on the space $(\Omega,\mathcal{F},\mathbb{P})$. Then $X$, $Y$ induce a measure $\mu_{X,Y}$ on $\mathcal{B}(\mathbb{R}^2)$ (see Fig. 11.2) called the joint law of $(X,Y)$, defined by
\[ \mu_{X,Y}(C) \stackrel{\triangle}{=} \mathbb{P}\{(X,Y)\in C\} \quad \forall C\in\mathcal{B}(\mathbb{R}^2). \]
The joint density of $(X,Y)$ is a function $f_{X,Y}:\mathbb{R}^2\to[0,\infty)$ that satisfies
\[ \mu_{X,Y}(C) = \iint_C f_{X,Y}(x,y)\,dx\,dy \quad \forall C\in\mathcal{B}(\mathbb{R}^2); \]
$f_{X,Y}$ is the Radon-Nikodym derivative of $\mu_{X,Y}$ with respect to the Lebesgue measure (area) on $\mathbb{R}^2$.

We compute the expectation of a function of $X$, $Y$ in a manner analogous to the univariate case:
\[ \mathbb{E}\,k(X,Y) \stackrel{\triangle}{=} \int_\Omega k(X(\omega),Y(\omega))\,d\mathbb{P}(\omega) = \iint_{\mathbb{R}^2} k(x,y)\,d\mu_{X,Y}(x,y) = \iint_{\mathbb{R}^2} k(x,y)\,f_{X,Y}(x,y)\,dx\,dy. \]

11.5 Marginal Density

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $B\subset\mathbb{R}$ be given. Then
\[ \mu_Y(B) = \mathbb{P}\{Y\in B\} = \mathbb{P}\{(X,Y)\in\mathbb{R}\times B\} = \mu_{X,Y}(\mathbb{R}\times B) = \int_B\int_{\mathbb{R}} f_{X,Y}(x,y)\,dx\,dy = \int_B f_Y(y)\,dy, \]
where
\[ f_Y(y) \stackrel{\triangle}{=} \int_{\mathbb{R}} f_{X,Y}(x,y)\,dx. \]
Therefore, $f_Y(y)$ is the (marginal) density for $Y$.

11.6 Conditional Expectation

Suppose $(X,Y)$ has joint density $f_{X,Y}$. Let $h:\mathbb{R}\to\mathbb{R}$ be given. Recall that $\mathbb{E}[h(X)|Y] \stackrel{\triangle}{=} \mathbb{E}[h(X)|\sigma(Y)]$ depends on $\omega$ through $Y$, i.e., there is a function $g(y)$ ($g$ depending on $h$) such that
\[ \mathbb{E}[h(X)|Y](\omega) = g(Y(\omega)). \]
How do we determine $g$? We can characterize $g$ using partial averaging: recall that $A\in\sigma(Y)$ if and only if $A = \{Y\in B\}$ for some $B\in\mathcal{B}(\mathbb{R})$. Then the following are equivalent characterizations of $g$:
\[ \int_A g(Y)\,d\mathbb{P} = \int_A h(X)\,d\mathbb{P} \quad \forall A\in\sigma(Y); \]  (6.1)
\[ \int_\Omega \mathbf{1}_B(Y)\,g(Y)\,d\mathbb{P} = \int_\Omega \mathbf{1}_B(Y)\,h(X)\,d\mathbb{P} \quad \forall B\in\mathcal{B}(\mathbb{R}); \]  (6.2)
\[ \int_{\mathbb{R}} \mathbf{1}_B(y)\,g(y)\,\mu_Y(dy) = \iint_{\mathbb{R}^2} \mathbf{1}_B(y)\,h(x)\,d\mu_{X,Y}(x,y) \quad \forall B\in\mathcal{B}(\mathbb{R}); \]  (6.3)
\[ \int_B g(y)\,f_Y(y)\,dy = \int_B\int_{\mathbb{R}} h(x)\,f_{X,Y}(x,y)\,dx\,dy \quad \forall B\in\mathcal{B}(\mathbb{R}). \]  (6.4)

11.7 Conditional Density

A function $f_{X|Y}(x|y):\mathbb{R}^2\to[0,\infty)$ is called a conditional density for $X$ given $Y$ provided that for any function $h:\mathbb{R}\to\mathbb{R}$:
\[ g(y) = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx. \]  (7.1)
(Here $g$ is the function satisfying $\mathbb{E}[h(X)|Y] = g(Y)$, and $g$ depends on $h$, but $f_{X|Y}$ does not.)

Theorem 7.33 If $(X,Y)$ has a joint density $f_{X,Y}$, then
\[ f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}. \]  (7.2)

Proof: Just verify that $g$ defined by (7.1) satisfies (6.4): for $B\in\mathcal{B}(\mathbb{R})$,
\[ \int_B \underbrace{\int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx}_{g(y)}\ f_Y(y)\,dy = \int_B\int_{\mathbb{R}} h(x)\,f_{X,Y}(x,y)\,dx\,dy. \]

Notation 11.1 Let $g$ be the function satisfying
\[ \mathbb{E}[h(X)|Y] = g(Y). \]
The function $g$ is often written as
\[ g(y) = \mathbb{E}[h(X)|Y=y], \]
and (7.1) becomes
\[ \mathbb{E}[h(X)|Y=y] = g(y) = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx. \]
In conclusion, to determine $\mathbb{E}[h(X)|Y]$ (a function of $\omega$), first compute
\[ g(y) = \int_{\mathbb{R}} h(x)\,f_{X|Y}(x|y)\,dx, \]
and then replace the dummy variable $y$ by the random variable $Y$:

\[ \mathbb{E}[h(X)|Y](\omega) = g(Y(\omega)). \]

Example 11.1 (Jointly normal random variables) Given parameters $\sigma_1>0$, $\sigma_2>0$, $-1<\rho<1$, let $(X,Y)$ have the joint density
\[ f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{x^2}{\sigma_1^2} - 2\rho\,\frac{x}{\sigma_1}\frac{y}{\sigma_2} + \frac{y^2}{\sigma_2^2}\right]\right\}. \]
The exponent is
\[ -\frac{1}{2(1-\rho^2)}\left[\frac{x^2}{\sigma_1^2} - 2\rho\,\frac{xy}{\sigma_1\sigma_2} + \frac{y^2}{\sigma_2^2}\right] = -\frac{1}{2(1-\rho^2)}\left[\left(\frac{x}{\sigma_1} - \rho\,\frac{y}{\sigma_2}\right)^2 + (1-\rho^2)\frac{y^2}{\sigma_2^2}\right] = -\frac{1}{2(1-\rho^2)}\,\frac{1}{\sigma_1^2}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2 - \frac{1}{2}\frac{y^2}{\sigma_2^2}. \]
We can compute the marginal density of $Y$ as follows:
\[ f_Y(y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\,e^{-\frac{y^2}{2\sigma_2^2}} \int_{-\infty}^{\infty} \exp\left\{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2\right\} dx = \frac{1}{\sqrt{2\pi}\,\sigma_2}\,e^{-\frac{y^2}{2\sigma_2^2}}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-\frac{u^2}{2}}\,du = \frac{1}{\sqrt{2\pi}\,\sigma_2}\,e^{-\frac{y^2}{2\sigma_2^2}}, \]
using the substitution
\[ u = \frac{1}{\sigma_1\sqrt{1-\rho^2}}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right), \qquad du = \frac{dx}{\sigma_1\sqrt{1-\rho^2}}. \]
Thus $Y$ is normal with mean 0 and variance $\sigma_2^2$.

Conditional density. From the expressions
\[ f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2\right\} e^{-\frac{y^2}{2\sigma_2^2}}, \qquad f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\,e^{-\frac{y^2}{2\sigma_2^2}}, \]
we have
\[ f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{\sqrt{2\pi}\,\sigma_1\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)\sigma_1^2}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2\right\}. \]
In the $x$-variable, $f_{X|Y}(x|y)$ is a normal density with mean $\rho\frac{\sigma_1}{\sigma_2}y$ and variance $(1-\rho^2)\sigma_1^2$. Therefore,
\[ \mathbb{E}[X|Y=y] = \int_{-\infty}^{\infty} x\,f_{X|Y}(x|y)\,dx = \rho\,\frac{\sigma_1}{\sigma_2}\,y, \]
\[ \mathbb{E}\left[\left(X - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2\,\Big|\,Y=y\right] = \int_{-\infty}^{\infty}\left(x - \rho\,\frac{\sigma_1}{\sigma_2}y\right)^2 f_{X|Y}(x|y)\,dx = (1-\rho^2)\sigma_1^2. \]
From the above two formulas we have the formulas
\[ \mathbb{E}[X|Y] = \rho\,\frac{\sigma_1}{\sigma_2}\,Y, \]  (7.3)
\[ \mathbb{E}\left[\left(X - \rho\,\frac{\sigma_1}{\sigma_2}Y\right)^2\,\Big|\,Y\right] = (1-\rho^2)\sigma_1^2. \]  (7.4)
Taking expectations in (7.3) and (7.4) yields
\[ \mathbb{E}X = \rho\,\frac{\sigma_1}{\sigma_2}\,\mathbb{E}Y = 0, \]  (7.5)
\[ \mathbb{E}\left[\left(X - \rho\,\frac{\sigma_1}{\sigma_2}Y\right)^2\right] = (1-\rho^2)\sigma_1^2. \]  (7.6)
Based on $Y$, the best estimator of $X$ is $\rho\frac{\sigma_1}{\sigma_2}Y$. This estimator is unbiased (has expected error zero) and the expected square error is $(1-\rho^2)\sigma_1^2$. No other estimator based on $Y$ can have a smaller expected square error (Homework problem 2.1).
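Formulas (7.3) and (7.6) can be illustrated by simulation. A minimal sketch (parameter values are illustrative choices, not from the text): with $\rho=0.6$, $\sigma_1=2$, $\sigma_2=1$, the estimator $\rho\frac{\sigma_1}{\sigma_2}Y$ should have expected square error $(1-\rho^2)\sigma_1^2 = 2.56$.

```python
import math, random

# Monte Carlo illustration of formulas (7.3) and (7.6); rho, sigma1, sigma2
# are illustrative choices. Expected square error = (1 - rho^2) * sigma1^2 = 2.56.
random.seed(1)
rho, s1, s2 = 0.6, 2.0, 1.0
n = 200_000

sq_err = 0.0
for _ in range(n):
    # Construct (X, Y) jointly normal with correlation rho from two independent normals
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    y = s2 * z1
    x = s1 * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
    est = rho * (s1 / s2) * y          # best estimator of X based on Y
    sq_err += (x - est) ** 2
sq_err /= n

print(sq_err)  # should be close to 2.56
```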

11.8 Multivariate Normal Distribution

Please see Oksendal, Appendix A. Let $\mathbf{X}$ denote the column vector of random variables $(X_1, X_2, \dots, X_n)^T$, and $\mathbf{x}$ the corresponding column vector of values $(x_1, x_2, \dots, x_n)^T$. $\mathbf{X}$ has a multivariate normal distribution if and only if the random variables have the joint density
\[ f_{\mathbf{X}}(\mathbf{x}) = \frac{\sqrt{\det A}}{(2\pi)^{n/2}}\,\exp\left\{-\tfrac{1}{2}(\mathbf{x}-\mu)^T A\,(\mathbf{x}-\mu)\right\}. \]
Here,
\[ \mu = (\mu_1,\dots,\mu_n)^T \stackrel{\triangle}{=} \mathbb{E}\,\mathbf{X} = (\mathbb{E}X_1,\dots,\mathbb{E}X_n)^T, \]
and $A$ is an $n\times n$ nonsingular matrix. $A^{-1}$ is the covariance matrix
\[ A^{-1} = \mathbb{E}\left[(\mathbf{X}-\mu)(\mathbf{X}-\mu)^T\right], \]
i.e., the $(i,j)$th element of $A^{-1}$ is $\mathbb{E}(X_i-\mu_i)(X_j-\mu_j)$. The random variables in $\mathbf{X}$ are independent if and only if $A^{-1}$ is diagonal, i.e.,
\[ A^{-1} = \operatorname{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_n^2), \]
where $\sigma_j^2 = \mathbb{E}(X_j-\mu_j)^2$ is the variance of $X_j$.

130

11.9 Bivariate normal distributionTake n = 2 in the above denitions, and let4

= IE (X1 ; 1 )(X2 ; 2 ) :1 2;

Thus,

2 A=4

A =

1

";

2 1 1 2

1 2 2 2

#;

; p

1 (1 2 ) 2) 1 2 (12 1;

;1 p

1 2 2 2

(1 ) 1 (1 2 )2;

3 5

det A =

1 2

1;

2

and we have the formula from Example 11.1, adjusted to account for the possibly non-zero expectations:

fX1 X2 (x1 x2) =

2

1

1 p 2 1;

#) ( " 1 (x1 ; 1 )2 ; 2 (x1 ; 1 )(x2 ; 2 ) + (x2 ; 2 )2 : 2 2 2 exp ; 2(1 ; 2) 1 2 1 2

11.10 MGF of jointly normal random variables

Let $\mathbf{u} = (u_1, u_2, \dots, u_n)^T$ denote a column vector with components in $\mathbb{R}$, and let $\mathbf{X}$ have a multivariate normal distribution with covariance matrix $A^{-1}$ and mean vector $\mu$. Then the moment generating function is given by
\[ \mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}} = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} e^{\mathbf{u}^T\mathbf{x}}\,f_{X_1,\dots,X_n}(x_1,\dots,x_n)\,dx_1\cdots dx_n = \exp\left\{\tfrac{1}{2}\mathbf{u}^T A^{-1}\mathbf{u} + \mathbf{u}^T\mu\right\}. \]
If any $n$ random variables $X_1, X_2, \dots, X_n$ have this moment generating function, then they are jointly normal, and we can read out the means and covariances. The random variables are jointly normal and independent if and only if for any real column vector $\mathbf{u} = (u_1,\dots,u_n)^T$,
\[ \mathbb{E}\,e^{\mathbf{u}^T\mathbf{X}} = \exp\left\{\sum_{j=1}^n\left(\tfrac{1}{2}u_j^2\sigma_j^2 + u_j\mu_j\right)\right\}. \]

Chapter 12

Semi-Continuous Models

12.1 Discrete-time Brownian Motion

Let $Y_1, Y_2, \dots, Y_n$ be independent, standard normal random variables on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, set $B_0 = 0$ and $B_k = Y_1 + \cdots + Y_k$ for $k = 1,\dots,n$, and let $\mathcal{F}_k = \sigma(Y_1,\dots,Y_k)$.

12.2 The Stock Price Process

Given parameters:

- $\sigma > 0$, the volatility;
- $S_0 > 0$, the initial stock price;
- $\mu\in\mathbb{R}$, the mean rate of return.

The stock price process is then given by
\[ S_k = S_0\exp\left\{\sigma B_k + \left(\mu - \tfrac{1}{2}\sigma^2\right)k\right\}, \qquad k = 0,\dots,n. \]
Note that
\[ S_{k+1} = S_k\exp\left\{\sigma Y_{k+1} + \left(\mu - \tfrac{1}{2}\sigma^2\right)\right\}, \]
so
\[ \mathbb{E}[S_{k+1}\mid\mathcal{F}_k] = S_k\,\mathbb{E}\left[e^{\sigma Y_{k+1}}\mid\mathcal{F}_k\right] e^{\mu-\frac{1}{2}\sigma^2} = S_k\,e^{\frac{1}{2}\sigma^2}\,e^{\mu-\frac{1}{2}\sigma^2} = e^{\mu} S_k. \]
Thus
\[ \mu = \log\frac{\mathbb{E}[S_{k+1}\mid\mathcal{F}_k]}{S_k} = \log\mathbb{E}\left[\frac{S_{k+1}}{S_k}\,\Big|\,\mathcal{F}_k\right], \]
and
\[ \operatorname{var}\left(\log\frac{S_{k+1}}{S_k}\right) = \operatorname{var}\left(\sigma Y_{k+1} + \mu - \tfrac{1}{2}\sigma^2\right) = \sigma^2. \]

12.3 Remainder of the Market

The other processes in the market are defined as follows.

Money market process:
\[ M_k = e^{rk}, \qquad k = 0, 1, \dots, n. \]
Portfolio process: $\Delta_0, \Delta_1, \dots, \Delta_{n-1}$, where each $\Delta_k$ is $\mathcal{F}_k$-measurable.

Wealth process: $X_0$ given, nonrandom, and
\[ X_{k+1} = \Delta_k S_{k+1} + e^r(X_k - \Delta_k S_k) = \Delta_k(S_{k+1} - e^r S_k) + e^r X_k. \]
Each $X_k$ is $\mathcal{F}_k$-measurable.

Discounted wealth process:
\[ \frac{X_{k+1}}{M_{k+1}} = \Delta_k\left[\frac{S_{k+1}}{M_{k+1}} - \frac{S_k}{M_k}\right] + \frac{X_k}{M_k}. \]

12.4 Risk-Neutral Measure

Definition 12.1 Let $\widetilde{\mathbb{P}}$ be a probability measure on $(\Omega,\mathcal{F})$, equivalent to the market measure $\mathbb{P}$. If $\left\{\frac{S_k}{M_k}\right\}_{k=0}^n$ is a martingale under $\widetilde{\mathbb{P}}$, we say that $\widetilde{\mathbb{P}}$ is a risk-neutral measure.

Theorem 4.36 If $\widetilde{\mathbb{P}}$ is a risk-neutral measure, then every discounted wealth process $\left\{\frac{X_k}{M_k}\right\}_{k=0}^n$ is a martingale under $\widetilde{\mathbb{P}}$, regardless of the portfolio process used to generate it.

Proof:
\[ \widetilde{\mathbb{E}}\left[\frac{X_{k+1}}{M_{k+1}}\,\Big|\,\mathcal{F}_k\right] = \widetilde{\mathbb{E}}\left[\Delta_k\left(\frac{S_{k+1}}{M_{k+1}} - \frac{S_k}{M_k}\right) + \frac{X_k}{M_k}\,\Big|\,\mathcal{F}_k\right] = \Delta_k\left(\widetilde{\mathbb{E}}\left[\frac{S_{k+1}}{M_{k+1}}\,\Big|\,\mathcal{F}_k\right] - \frac{S_k}{M_k}\right) + \frac{X_k}{M_k} = \frac{X_k}{M_k}. \]

12.5 Risk-Neutral Pricing

Let $V_n$ be the payoff at time $n$, and say it is $\mathcal{F}_n$-measurable. Note that $V_n$ may be path-dependent. Hedging a short position:

- Sell the simple European derivative security $V_n$.
- Receive $X_0$ at time 0.
- Construct a portfolio process $\Delta_0, \dots, \Delta_{n-1}$ which starts with $X_0$ and ends with $X_n = V_n$.

If there is a risk-neutral measure $\widetilde{\mathbb{P}}$, then
\[ X_0 = \widetilde{\mathbb{E}}\,\frac{X_n}{M_n} = \widetilde{\mathbb{E}}\,\frac{V_n}{M_n}. \]

Remark 12.1 Hedging in this semi-continuous model is usually not possible because there are not enough trading dates. This difficulty will disappear when we go to the fully continuous model.

12.6 Arbitrage

Definition 12.2 An arbitrage is a portfolio which starts with $X_0 = 0$ and ends with $X_n$ satisfying
\[ \mathbb{P}(X_n \ge 0) = 1, \qquad \mathbb{P}(X_n > 0) > 0 \]
($\mathbb{P}$ here is the market measure).

Theorem 6.37 (Fundamental Theorem of Asset Pricing: Easy part) If there is a risk-neutral measure, then there is no arbitrage.

Proof: Let $\widetilde{\mathbb{P}}$ be a risk-neutral measure, let $X_0 = 0$, and let $X_n$ be the final wealth corresponding to any portfolio process. Since $\left\{\frac{X_k}{M_k}\right\}_{k=0}^n$ is a martingale under $\widetilde{\mathbb{P}}$,
\[ \widetilde{\mathbb{E}}\,\frac{X_n}{M_n} = \widetilde{\mathbb{E}}\,\frac{X_0}{M_0} = 0. \]  (6.1)
Suppose $\mathbb{P}(X_n \ge 0) = 1$. We have
\[ \mathbb{P}(X_n \ge 0) = 1 \implies \mathbb{P}(X_n < 0) = 0 \implies \widetilde{\mathbb{P}}(X_n < 0) = 0 \implies \widetilde{\mathbb{P}}(X_n \ge 0) = 1. \]  (6.2)
(6.1) and (6.2) imply $\widetilde{\mathbb{P}}(X_n = 0) = 1$. We have
\[ \widetilde{\mathbb{P}}(X_n = 0) = 1 \implies \widetilde{\mathbb{P}}(X_n > 0) = 0 \implies \mathbb{P}(X_n > 0) = 0. \]
This is not an arbitrage.

12.7 Stalking the Risk-Neutral Measure

Recall that:

- $Y_1, Y_2, \dots, Y_n$ are independent, standard normal random variables on some probability space $(\Omega,\mathcal{F},\mathbb{P})$;
- $S_k = S_0\exp\left\{\sigma B_k + \left(\mu - \frac{1}{2}\sigma^2\right)k\right\}$;
\[ S_{k+1} = S_0\exp\left\{\sigma(B_k + Y_{k+1}) + \left(\mu - \tfrac{1}{2}\sigma^2\right)(k+1)\right\} = S_k\exp\left\{\sigma Y_{k+1} + \left(\mu - \tfrac{1}{2}\sigma^2\right)\right\}. \]
Therefore,
\[ \frac{S_{k+1}}{M_{k+1}} = \frac{S_k}{M_k}\exp\left\{\sigma Y_{k+1} + \left(\mu - r - \tfrac{1}{2}\sigma^2\right)\right\}, \]
\[ \mathbb{E}\left[\frac{S_{k+1}}{M_{k+1}}\,\Big|\,\mathcal{F}_k\right] = \frac{S_k}{M_k}\,\mathbb{E}\left[e^{\sigma Y_{k+1}}\,|\,\mathcal{F}_k\right]\exp\left\{\mu - r - \tfrac{1}{2}\sigma^2\right\} = \frac{S_k}{M_k}\,e^{\frac{1}{2}\sigma^2}\,e^{\mu-r-\frac{1}{2}\sigma^2} = \frac{S_k}{M_k}\,e^{\mu-r}. \]
If $\mu = r$, the market measure is risk-neutral. If $\mu\ne r$, we must seek further. Write
\[ \frac{S_{k+1}}{M_{k+1}} = \frac{S_k}{M_k}\exp\left\{\sigma Y_{k+1} + \left(\mu - r - \tfrac{1}{2}\sigma^2\right)\right\} = \frac{S_k}{M_k}\exp\left\{\sigma\left(Y_{k+1} + \frac{\mu-r}{\sigma}\right) - \tfrac{1}{2}\sigma^2\right\} = \frac{S_k}{M_k}\exp\left\{\sigma\widetilde{Y}_{k+1} - \tfrac{1}{2}\sigma^2\right\}, \]
where
\[ \widetilde{Y}_{k+1} = Y_{k+1} + \theta, \qquad \theta \stackrel{\triangle}{=} \frac{\mu-r}{\sigma}. \]
The quantity $\theta = \frac{\mu-r}{\sigma}$ is called the market price of risk. We want a probability measure $\widetilde{\mathbb{P}}$ under which $\widetilde{Y}_1, \dots, \widetilde{Y}_n$ are independent, standard normal random variables. Then we would have
\[ \widetilde{\mathbb{E}}\left[\frac{S_{k+1}}{M_{k+1}}\,\Big|\,\mathcal{F}_k\right] = \frac{S_k}{M_k}\,\widetilde{\mathbb{E}}\left[e^{\sigma\widetilde{Y}_{k+1}}\,|\,\mathcal{F}_k\right] e^{-\frac{1}{2}\sigma^2} = \frac{S_k}{M_k}\,e^{\frac{1}{2}\sigma^2}\,e^{-\frac{1}{2}\sigma^2} = \frac{S_k}{M_k}. \]

Cameron-Martin-Girsanov's idea: define the random variable
\[ Z = \exp\left\{\sum_{j=1}^n\left(-\theta Y_j - \tfrac{1}{2}\theta^2\right)\right\}. \]
Properties of $Z$:

- $Z > 0$ almost surely;
- $\mathbb{E}Z = 1$;
- defining
\[ \widetilde{\mathbb{P}}(A) = \int_A Z\,d\mathbb{P} \quad \forall A\in\mathcal{F}, \]
we have $\widetilde{\mathbb{E}}X = \mathbb{E}(XZ)$ for every random variable $X$.

Compute the moment generating function of $(\widetilde{Y}_1,\dots,\widetilde{Y}_n)$ under $\widetilde{\mathbb{P}}$:
\[ \widetilde{\mathbb{E}}\exp\left\{\sum_{j=1}^n u_j\widetilde{Y}_j\right\} = \mathbb{E}\exp\left\{\sum_{j=1}^n u_j(Y_j+\theta) + \sum_{j=1}^n\left(-\theta Y_j - \tfrac{1}{2}\theta^2\right)\right\} = \mathbb{E}\exp\left\{\sum_{j=1}^n (u_j-\theta)Y_j\right\}\exp\left\{\sum_{j=1}^n\left(u_j\theta - \tfrac{1}{2}\theta^2\right)\right\} = \exp\left\{\sum_{j=1}^n \tfrac{1}{2}(u_j-\theta)^2\right\}\exp\left\{\sum_{j=1}^n\left(u_j\theta - \tfrac{1}{2}\theta^2\right)\right\} = \exp\left\{\sum_{j=1}^n \tfrac{1}{2}u_j^2\right\}. \]
This is the moment generating function of $n$ independent, standard normal random variables.

12.8 Pricing a European Call

Stock price at time $n$ is
\[ S_n = S_0\exp\left\{\sigma B_n + \left(\mu - \tfrac{1}{2}\sigma^2\right)n\right\} = S_0\exp\left\{\sigma\sum_{j=1}^n Y_j + \left(\mu - \tfrac{1}{2}\sigma^2\right)n\right\} = S_0\exp\left\{\sigma\sum_{j=1}^n\left(Y_j + \frac{\mu-r}{\sigma}\right) - (\mu-r)n + \left(\mu - \tfrac{1}{2}\sigma^2\right)n\right\} = S_0\exp\left\{\sigma\sum_{j=1}^n\widetilde{Y}_j + \left(r - \tfrac{1}{2}\sigma^2\right)n\right\}. \]
Payoff at time $n$ is $(S_n - K)^+$. Price at time zero is
\[ \widetilde{\mathbb{E}}\,\frac{(S_n-K)^+}{M_n} = \widetilde{\mathbb{E}}\left[e^{-rn}\left(S_0\exp\left\{\sigma\sum_{j=1}^n\widetilde{Y}_j + \left(r - \tfrac{1}{2}\sigma^2\right)n\right\} - K\right)^+\right] = \int_{-\infty}^{\infty} e^{-rn}\left(S_0\exp\left\{\sigma b + \left(r - \tfrac{1}{2}\sigma^2\right)n\right\} - K\right)^+ \frac{1}{\sqrt{2\pi n}}\,e^{-\frac{b^2}{2n}}\,db, \]
since $\sum_{j=1}^n\widetilde{Y}_j$ is normal with mean 0 and variance $n$ under $\widetilde{\mathbb{P}}$.

This is the Black-Scholes price. It does not depend on $\mu$.
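The pricing integral above can be evaluated numerically and compared with the usual Black-Scholes closed form. A minimal sketch (all parameter values are illustrative choices, not from the text):

```python
import math

# Evaluate the Section 12.8 risk-neutral pricing integral numerically and
# compare with the Black-Scholes formula (illustrative parameters).
S0, K, r, sigma, n = 100.0, 95.0, 0.05, 0.2, 1.0

# Pricing integral: discounted payoff against the N(0, n) density of b
num_steps, lo, hi = 20_000, -10.0 * math.sqrt(n), 10.0 * math.sqrt(n)
db = (hi - lo) / num_steps
price_int = 0.0
for i in range(num_steps + 1):
    b = lo + i * db
    payoff = max(S0 * math.exp(sigma * b + (r - 0.5 * sigma**2) * n) - K, 0.0)
    price_int += math.exp(-r * n) * payoff * math.exp(-b * b / (2 * n)) / math.sqrt(2 * math.pi * n) * db

# Closed-form Black-Scholes price, with the normal CDF written via math.erf
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * n) / (sigma * math.sqrt(n))
d2 = d1 - sigma * math.sqrt(n)
price_bs = S0 * Phi(d1) - K * math.exp(-r * n) * Phi(d2)

print(price_int, price_bs)
```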

Chapter 13

Brownian Motion

13.1 Symmetric Random Walk

Toss a fair coin infinitely many times. Define
\[ X_j(\omega) = \begin{cases} 1 & \text{if } \omega_j = H, \\ -1 & \text{if } \omega_j = T. \end{cases} \]
Set
\[ M_0 = 0, \qquad M_k = \sum_{j=1}^k X_j, \quad k \ge 1. \]
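The walk just defined is easy to simulate. A minimal sketch (seed and sample size are illustrative choices) showing the behavior asserted by the Law of Large Numbers below, namely that $\frac{1}{k}M_k$ is near 0 for large $k$:

```python
import random

# Simulate the symmetric random walk of Section 13.1 and check that M_k / k
# is small for large k (illustrative seed and k).
random.seed(3)
k = 1_000_000

M = 0
for _ in range(k):
    M += 1 if random.random() < 0.5 else -1

avg = M / k
print(avg)  # typical size is about 1/sqrt(k) = 0.001
```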

13.2 The Law of Large Numbers

We will use the method of moment generating functions to derive the Law of Large Numbers:

Theorem 2.38 (Law of Large Numbers)
\[ \frac{1}{k}M_k \to 0 \quad \text{almost surely, as } k\to\infty. \]

Proof:
\[ \varphi_k(u) = \mathbb{E}\exp\left\{\frac{u}{k}M_k\right\} = \mathbb{E}\prod_{j=1}^k e^{\frac{u}{k}X_j} = \left(\tfrac{1}{2}e^{\frac{u}{k}} + \tfrac{1}{2}e^{-\frac{u}{k}}\right)^k, \]
so, setting $x = \frac{1}{k}$ and using L'Hopital's rule,
\[ \lim_{k\to\infty}\log\varphi_k(u) = \lim_{x\downarrow 0}\frac{\log\left(\tfrac{1}{2}e^{ux} + \tfrac{1}{2}e^{-ux}\right)}{x} = \lim_{x\downarrow 0}\frac{\tfrac{u}{2}e^{ux} - \tfrac{u}{2}e^{-ux}}{\tfrac{1}{2}e^{ux} + \tfrac{1}{2}e^{-ux}} = 0. \]
Thus $\varphi_k(u)\to 1$, the moment generating function of the random variable that is identically zero, and hence $\frac{1}{k}M_k\to 0$.

We now consider a sequence of binomial models converging to geometric Brownian motion. In the $n$th model take, with $r = 0$:
\[ u_n = 1 + \frac{\sigma}{\sqrt n}\ \text{(up factor)}, \qquad d_n = 1 - \frac{\sigma}{\sqrt n}\ \text{(down factor)}, \]
so the risk-neutral probabilities are
\[ \tilde p_n = \frac{1 + r - d_n}{u_n - d_n} = \frac{\sigma/\sqrt n}{2\sigma/\sqrt n} = \frac{1}{2}, \qquad \tilde q_n = \frac{1}{2}. \]
Let $\#_k(H)$ denote the number of $H$ in the first $k$ tosses, and let $\#_k(T)$ denote the number of $T$ in the first $k$ tosses. Then
\[ \#_k(H) + \#_k(T) = k, \qquad \#_k(H) - \#_k(T) = M_k, \]
which implies
\[ \#_k(H) = \tfrac{1}{2}(k + M_k), \qquad \#_k(T) = \tfrac{1}{2}(k - M_k). \]
In the $n$th model, take $n$ steps per unit time. Set $S_0^{(n)} = 1$. Let $t = \frac{k}{n}$ for some $k$, and let
\[ S^{(n)}(t) = \left(1 + \frac{\sigma}{\sqrt n}\right)^{\frac{1}{2}(nt + M_{nt})}\left(1 - \frac{\sigma}{\sqrt n}\right)^{\frac{1}{2}(nt - M_{nt})}. \]
Under $\widetilde{\mathbb{P}}$, the price process $S^{(n)}$ is a martingale.

Theorem 10.42 As $n\to\infty$, the distribution of $S^{(n)}(t)$ converges to the distribution of
\[ \exp\left\{\sigma B(t) - \tfrac{1}{2}\sigma^2 t\right\}, \]
where $B$ is a Brownian motion. Note that the correction $-\frac{1}{2}\sigma^2 t$ is necessary in order to have a martingale.

Proof: Recall that from the Taylor series we have
\[ \log(1+x) = x - \tfrac{1}{2}x^2 + O(x^3), \]
so
\[ \log S^{(n)}(t) = \tfrac{1}{2}(nt + M_{nt})\log\left(1 + \tfrac{\sigma}{\sqrt n}\right) + \tfrac{1}{2}(nt - M_{nt})\log\left(1 - \tfrac{\sigma}{\sqrt n}\right) \]
\[ = nt\cdot\tfrac{1}{2}\left[\log\left(1 + \tfrac{\sigma}{\sqrt n}\right) + \log\left(1 - \tfrac{\sigma}{\sqrt n}\right)\right] + M_{nt}\cdot\tfrac{1}{2}\left[\log\left(1 + \tfrac{\sigma}{\sqrt n}\right) - \log\left(1 - \tfrac{\sigma}{\sqrt n}\right)\right] \]
\[ = nt\left(-\tfrac{\sigma^2}{4n} - \tfrac{\sigma^2}{4n} + O(n^{-3/2})\right) + M_{nt}\left(\tfrac{1}{2}\tfrac{\sigma}{\sqrt n} - \tfrac{\sigma^2}{4n} + \tfrac{1}{2}\tfrac{\sigma}{\sqrt n} + \tfrac{\sigma^2}{4n} + O(n^{-3/2})\right) \]
\[ = -\tfrac{1}{2}\sigma^2 t + O(n^{-1/2}) + \sigma\cdot\underbrace{\tfrac{1}{\sqrt n}M_{nt}}_{\to B(t)} + \underbrace{\tfrac{1}{n}M_{nt}\cdot O(n^{-1/2})}_{\to 0}. \]
As $n\to\infty$, the distribution of $\log S^{(n)}(t)$ approaches the distribution of $\sigma B(t) - \frac{1}{2}\sigma^2 t$.
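Theorem 10.42 can be illustrated by simulation. A minimal sketch (all parameter values are illustrative choices, not from the text): for large $n$, samples of $\log S^{(n)}(t)$ should have mean near $-\frac{1}{2}\sigma^2 t$ and variance near $\sigma^2 t$, matching $\sigma B(t) - \frac{1}{2}\sigma^2 t$.

```python
import math, random

# Monte Carlo sketch of Theorem 10.42 (illustrative parameters): with
# sigma = 0.3 and t = 1, log S^(n)(t) should have mean near -0.045 and
# variance near 0.09.
random.seed(4)
sigma, t, n, paths = 0.3, 1.0, 400, 5_000
steps = int(n * t)

logs = []
for _ in range(paths):
    M = sum(1 if random.random() < 0.5 else -1 for _ in range(steps))
    nH = (steps + M) / 2   # number of heads
    nT = (steps - M) / 2   # number of tails
    logs.append(nH * math.log(1 + sigma / math.sqrt(n)) + nT * math.log(1 - sigma / math.sqrt(n)))

mean = sum(logs) / paths
var = sum((v - mean) ** 2 for v in logs) / paths
print(mean, var)
```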

Figure 13.3: Continuous-time Brownian motion $B(t) = B(t,\omega)$, starting at $x\ne 0$, on $(\Omega,\mathcal{F},\mathbb{P}^x)$.

13.11 Starting at Points Other Than 0

(The remaining sections in this chapter were taught Dec. 7.)

For a Brownian motion $B(t)$ that starts at 0, we have
\[ \mathbb{P}(B(0) = 0) = 1. \]
For a Brownian motion $B(t)$ that starts at $x$, denote the corresponding probability measure by $\mathbb{P}^x$ (see Fig. 13.3), and for such a Brownian motion we have
\[ \mathbb{P}^x(B(0) = x) = 1. \]
Note that:

- If $x\ne 0$, then $\mathbb{P}^x$ puts all its probability on a completely different set from $\mathbb{P}$.
- The distribution of $B(t)$ under $\mathbb{P}^x$ is the same as the distribution of $x + B(t)$ under $\mathbb{P}$.

13.12 Markov Property for Brownian Motion

We prove that:

Theorem 12.43 Brownian motion has the Markov property.

Proof: Let $s\ge 0$, $t\ge 0$ be given (see Fig. 13.4). Then
\[ \mathbb{E}\left[h(B(s+t))\,\big|\,\mathcal{F}(s)\right] = \mathbb{E}\Big[h\big(\underbrace{B(s+t) - B(s)}_{\text{independent of }\mathcal{F}(s)} + \underbrace{B(s)}_{\mathcal{F}(s)\text{-measurable}}\big)\,\Big|\,\mathcal{F}(s)\Big]. \]

Figure 13.4: Markov property of Brownian motion (restart at time $s$, run to $s+t$).

Use the Independence Lemma. Define
\[ g(x) = \mathbb{E}\left[h\big(B(s+t) - B(s) + x\big)\right] = \mathbb{E}\Big[h\big(x + \underbrace{B(t)}_{\text{same distribution as }B(s+t)-B(s)}\big)\Big] = \mathbb{E}^x h(B(t)). \]
Then
\[ \mathbb{E}\left[h(B(s+t))\,\big|\,\mathcal{F}(s)\right] = g(B(s)) = \mathbb{E}^{B(s)} h(B(t)). \]
In fact, Brownian motion has the strong Markov property.

Example 13.1 (Strong Markov Property) See Fig. 13.5. Fix $x > 0$ and define
\[ \tau = \min\{t\ge 0;\ B(t) = x\}. \]
Then we have
\[ \mathbb{E}\left[h(B(\tau+t))\,\big|\,\mathcal{F}(\tau)\right] = g(B(\tau)) = \mathbb{E}^x h(B(t)). \]

Figure 13.5: Strong Markov property of Brownian motion (restart at time $\tau$, run to $\tau+t$).

13.13 Transition Density

Let $p(t;x,y)$ be the probability density for the Brownian motion to change value from $x$ to $y$ in time $t$, and let $\tau$ be defined as in the previous section:
\[ p(t;x,y) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{(y-x)^2}{2t}}, \]
\[ g(x) = \mathbb{E}^x h(B(t)) = \int_{-\infty}^{\infty} h(y)\,p(t;x,y)\,dy, \]
\[ \mathbb{E}\left[h(B(s+t))\,\big|\,\mathcal{F}(s)\right] = \int_{-\infty}^{\infty} h(y)\,p(t;B(s),y)\,dy, \qquad \mathbb{E}\left[h(B(\tau+t))\,\big|\,\mathcal{F}(\tau)\right] = \int_{-\infty}^{\infty} h(y)\,p(t;x,y)\,dy. \]

13.14 First Passage Time

Fix $x > 0$. Define
\[ \tau = \min\{t\ge 0;\ B(t) = x\}. \]
Fix $\theta > 0$. Then
\[ \exp\left\{\theta B(t\wedge\tau) - \tfrac{1}{2}\theta^2(t\wedge\tau)\right\} \]
is a martingale, and
\[ \mathbb{E}\exp\left\{\theta B(t\wedge\tau) - \tfrac{1}{2}\theta^2(t\wedge\tau)\right\} = 1. \]

We have $B(t\wedge\tau)\le x$, so the martingale is bounded by $e^{\theta x}$, and as $t\to\infty$,
\[ \exp\left\{\theta B(t\wedge\tau) - \tfrac{1}{2}\theta^2(t\wedge\tau)\right\} \longrightarrow \begin{cases} \exp\left\{\theta x - \tfrac{1}{2}\theta^2\tau\right\} & \text{if } \tau < \infty, \\ 0 & \text{if } \tau = \infty. \end{cases} \]
Therefore, setting $\lambda = \frac{1}{2}\theta^2$, i.e., $\theta = \sqrt{2\lambda} > 0$,
\[ \mathbb{E}\left[\mathbf{1}_{\{\tau<\infty\}}\,e^{-\lambda\tau}\right] = e^{-x\sqrt{2\lambda}}. \]  (14.3)
Letting $\lambda\downarrow 0$ in (14.3), we obtain
\[ \mathbb{E}\,\mathbf{1}_{\{\tau<\infty\}} = 1, \]  (14.4)
i.e., $\mathbb{P}(\tau<\infty) = 1$. Differentiating (14.3) with respect to $\lambda$ yields
\[ -\mathbb{E}\left[\mathbf{1}_{\{\tau<\infty\}}\,\tau\,e^{-\lambda\tau}\right] = -\frac{x}{\sqrt{2\lambda}}\,e^{-x\sqrt{2\lambda}}, \]
and letting $\lambda\downarrow 0$ shows that $\mathbb{E}\tau = \infty$.

Conclusion: Brownian motion reaches level $x$ with probability 1. The expected time to reach level $x$ is infinite.

We use the Reflection Principle below (see Fig. 13.6):
\[ \mathbb{P}\{\tau\le t,\ B(t) < x\} = \mathbb{P}\{B(t) > x\}, \]
\[ \mathbb{P}\{\tau\le t\} = \mathbb{P}\{\tau\le t,\ B(t) < x\} + \mathbb{P}\{\tau\le t,\ B(t) > x\} = \mathbb{P}\{B(t) > x\} + \mathbb{P}\{B(t) > x\} = 2\,\mathbb{P}\{B(t) > x\} = \frac{2}{\sqrt{2\pi t}}\int_x^{\infty} e^{-\frac{y^2}{2t}}\,dy. \]

Figure 13.6: Reflection principle for Brownian motion.

Using the substitution $z = \frac{y}{\sqrt t}$, $dz = \frac{dy}{\sqrt t}$, we get
\[ \mathbb{P}\{\tau\le t\} = \frac{2}{\sqrt{2\pi}}\int_{x/\sqrt t}^{\infty} e^{-\frac{z^2}{2}}\,dz. \]
Density:
\[ f_\tau(t) = \frac{\partial}{\partial t}\,\mathbb{P}\{\tau\le t\} = \frac{x}{\sqrt{2\pi t^3}}\,e^{-\frac{x^2}{2t}}, \]
which follows from the fact that if
\[ F(t) = \int_{a(t)}^b g(z)\,dz, \]
then
\[ \frac{\partial F}{\partial t} = -\frac{\partial a}{\partial t}\,g(a(t)). \]
Laplace transform formula:
\[ \mathbb{E}\,e^{-\lambda\tau} = \int_0^{\infty} e^{-\lambda t}\,f_\tau(t)\,dt = e^{-x\sqrt{2\lambda}}. \]
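The Laplace transform formula can be verified numerically. A minimal sketch (the values of $x$ and $\lambda$ are illustrative choices, not from the text):

```python
import math

# Numerical check of the Laplace transform formula of Section 13.14:
# integral of e^{-lambda*t} * f_tau(t) over (0, infinity) should equal
# e^{-x*sqrt(2*lambda)} (illustrative x and lambda).
x, lam = 1.0, 0.7

f_tau = lambda t: x / math.sqrt(2 * math.pi * t**3) * math.exp(-x * x / (2 * t))

# Truncate at T = 60; the e^{-lambda*t} factor makes the tail negligible.
T, n = 60.0, 300_000
dt = T / n
lhs = sum(math.exp(-lam * (i * dt)) * f_tau(i * dt) for i in range(1, n + 1)) * dt
rhs = math.exp(-x * math.sqrt(2 * lam))

print(lhs, rhs)
```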

Chapter 14

The Ito Integral

The following chapters deal with stochastic differential equations in finance. References:

1. B. Oksendal, Stochastic Differential Equations, Springer-Verlag, 1995.
2. J. Hull, Options, Futures and Other Derivative Securities, Prentice Hall, 1993.

14.1 Brownian Motion

(See Fig. 13.3.) $(\Omega,\mathcal{F},\mathbb{P})$ is given, always in the background, even when not explicitly mentioned. Brownian motion $B(t,\omega):[0,\infty)\times\Omega\to\mathbb{R}$ has the following properties:

1. $B(0) = 0$; technically, $\mathbb{P}\{\omega;\ B(0,\omega) = 0\} = 1$.
2. $B(t)$ is a continuous function of $t$.
3. If $0 = t_0 \le t_1 \le \cdots \le t_n$, then the increments
\[ B(t_1) - B(t_0),\ \dots,\ B(t_n) - B(t_{n-1}) \]
are independent and normal, with
\[ \mathbb{E}[B(t_{k+1}) - B(t_k)] = 0, \qquad \mathbb{E}[B(t_{k+1}) - B(t_k)]^2 = t_{k+1} - t_k. \]

14.2 First Variation

Quadratic variation is a measure of volatility. First we will consider first variation, $FV(f)$, of a function $f(t)$.

Figure 14.1: Example function $f(t)$ (increasing on $[0,t_1]$, decreasing on $[t_1,t_2]$, increasing on $[t_2,T]$).

For the function pictured in Fig. 14.1, the first variation over the interval $[0,T]$ is given by
\[ FV_{[0,T]}(f) = [f(t_1) - f(0)] - [f(t_2) - f(t_1)] + [f(T) - f(t_2)] = \int_0^{t_1} f'(t)\,dt + \int_{t_1}^{t_2}(-f'(t))\,dt + \int_{t_2}^T f'(t)\,dt = \int_0^T |f'(t)|\,dt. \]
Thus, first variation measures the total amount of up and down motion of the path. The general definition of first variation is as follows:

Definition 14.1 (First Variation) Let $\Pi = \{t_0, t_1, \dots, t_n\}$ be a partition of $[0,T]$, i.e.,
\[ 0 = t_0 \le t_1 \le \cdots \le t_n = T. \]
The mesh of the partition is defined to be
\[ \|\Pi\| = \max_{k=0,\dots,n-1}(t_{k+1} - t_k). \]
We then define
\[ FV_{[0,T]}(f) = \lim_{\|\Pi\|\to 0}\sum_{k=0}^{n-1}|f(t_{k+1}) - f(t_k)|. \]
Suppose $f$ is differentiable. Then the Mean Value Theorem implies that in each subinterval $[t_k, t_{k+1}]$ there is a point $t_k^*$ such that
\[ f(t_{k+1}) - f(t_k) = f'(t_k^*)(t_{k+1} - t_k). \]
Then
\[ \sum_{k=0}^{n-1}|f(t_{k+1}) - f(t_k)| = \sum_{k=0}^{n-1}|f'(t_k^*)|(t_{k+1} - t_k), \]
and
\[ FV_{[0,T]}(f) = \lim_{\|\Pi\|\to 0}\sum_{k=0}^{n-1}|f'(t_k^*)|(t_{k+1} - t_k) = \int_0^T |f'(t)|\,dt. \]
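The identity $FV_{[0,T]}(f) = \int_0^T|f'(t)|\,dt$ for differentiable $f$ is easy to check on an example. A minimal sketch (the choice $f(t)=\sin t$ on $[0,2\pi]$ is illustrative, not from the text): here $\int_0^{2\pi}|\cos t|\,dt = 4$.

```python
import math

# First variation of f(t) = sin(t) on [0, 2*pi] via a fine partition
# (illustrative function); it should equal the integral of |cos(t)|, i.e. 4.
f = math.sin
T, n = 2 * math.pi, 200_000
fv = sum(abs(f((k + 1) * T / n) - f(k * T / n)) for k in range(n))
print(fv)  # approaches 4 as the mesh shrinks
```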

14.3 Quadratic Variation

Definition 14.2 (Quadratic Variation) The quadratic variation of a function $f$ on an interval $[0,T]$ is
\[ \langle f\rangle(T) = \lim_{\|\Pi\|\to 0}\sum_{k=0}^{n-1}|f(t_{k+1}) - f(t_k)|^2. \]

Remark 14.1 (Quadratic Variation of Differentiable Functions) If $f$ is differentiable, then $\langle f\rangle(T) = 0$, because
\[ \sum_{k=0}^{n-1}|f(t_{k+1}) - f(t_k)|^2 = \sum_{k=0}^{n-1}|f'(t_k^*)|^2(t_{k+1} - t_k)^2 \le \|\Pi\|\sum_{k=0}^{n-1}|f'(t_k^*)|^2(t_{k+1} - t_k), \]
and
\[ \langle f\rangle(T) \le \lim_{\|\Pi\|\to 0}\|\Pi\|\cdot\lim_{\|\Pi\|\to 0}\sum_{k=0}^{n-1}|f'(t_k^*)|^2(t_{k+1} - t_k) = \lim_{\|\Pi\|\to 0}\|\Pi\|\int_0^T|f'(t)|^2\,dt = 0. \]

Theorem 3.44
\[ \langle B\rangle(T) = T, \]
or more precisely,
\[ \mathbb{P}\{\omega\in\Omega;\ \langle B(\cdot,\omega)\rangle(T) = T\} = 1. \]
In particular, the paths of Brownian motion are not differentiable.

Proof: (Outline) Let $\Pi = \{t_0, t_1, \dots, t_n\}$ be a partition of $[0,T]$. To simplify notation, set $D_k = B(t_{k+1}) - B(t_k)$. Define the sample quadratic variation
\[ Q_\Pi = \sum_{k=0}^{n-1} D_k^2. \]
Then
\[ Q_\Pi - T = \sum_{k=0}^{n-1}\left[D_k^2 - (t_{k+1} - t_k)\right]. \]
We want to show that
\[ \lim_{\|\Pi\|\to 0}(Q_\Pi - T) = 0. \]
Consider an individual summand
\[ D_k^2 - (t_{k+1} - t_k) = [B(t_{k+1}) - B(t_k)]^2 - (t_{k+1} - t_k). \]
This has expectation 0, so
\[ \mathbb{E}(Q_\Pi - T) = \mathbb{E}\sum_{k=0}^{n-1}\left[D_k^2 - (t_{k+1} - t_k)\right] = 0. \]
For $j\ne k$, the terms
\[ D_j^2 - (t_{j+1} - t_j) \quad\text{and}\quad D_k^2 - (t_{k+1} - t_k) \]
are independent, so
\[ \operatorname{var}(Q_\Pi - T) = \sum_{k=0}^{n-1}\operatorname{var}\left[D_k^2 - (t_{k+1} - t_k)\right] = \sum_{k=0}^{n-1}\mathbb{E}\left[D_k^4 - 2(t_{k+1} - t_k)D_k^2 + (t_{k+1} - t_k)^2\right] \]
\[ = \sum_{k=0}^{n-1}\left[3(t_{k+1} - t_k)^2 - 2(t_{k+1} - t_k)^2 + (t_{k+1} - t_k)^2\right] \quad\text{(if $X$ is normal with mean 0 and variance $\sigma^2$, then $\mathbb{E}X^4 = 3\sigma^4$)} \]
\[ = 2\sum_{k=0}^{n-1}(t_{k+1} - t_k)^2 \le 2\|\Pi\|\sum_{k=0}^{n-1}(t_{k+1} - t_k) = 2\|\Pi\|\,T. \]
Thus we have
\[ \mathbb{E}(Q_\Pi - T) = 0, \qquad \operatorname{var}(Q_\Pi - T) \le 2\|\Pi\|\,T. \]
As $\|\Pi\|\to 0$, $\operatorname{var}(Q_\Pi - T)\to 0$, so
\[ \lim_{\|\Pi\|\to 0}(Q_\Pi - T) = 0. \]
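Theorem 3.44 is easy to see in simulation. A minimal sketch (all parameter values are illustrative choices, not from the text): the sample quadratic variation of a Brownian path on $[0,T]$ over a fine partition should be close to $T$, with variance at most $2\|\Pi\|T$.

```python
import math, random

# Sample quadratic variation of a simulated Brownian path on [0, T]
# (illustrative parameters); should be close to T = 2.
random.seed(5)
T, n = 2.0, 100_000
dt = T / n

Q = 0.0
for _ in range(n):
    D = random.gauss(0, math.sqrt(dt))  # Brownian increment over one subinterval
    Q += D * D

print(Q)  # close to 2; var(Q - T) <= 2 * dt * T = 8e-5
```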

Remark 14.2 (Differential Representation) We know that
\[ \mathbb{E}\left[(B(t_{k+1}) - B(t_k))^2 - (t_{k+1} - t_k)\right] = 0. \]
We showed above that
\[ \operatorname{var}\left[(B(t_{k+1}) - B(t_k))^2 - (t_{k+1} - t_k)\right] = 2(t_{k+1} - t_k)^2. \]
When $(t_{k+1} - t_k)$ is small, $(t_{k+1} - t_k)^2$ is very small, and we have the approximate equation
\[ (B(t_{k+1}) - B(t_k))^2 \simeq t_{k+1} - t_k, \]
which we can write informally as
\[ dB(t)\,dB(t) = dt. \]

14.4 Quadratic Variation as Absolute Volatility

On any time interval $[T_1, T_2]$, we can sample the Brownian motion at times
\[ T_1 = t_0 \le t_1 \le \cdots \le t_n = T_2 \]
and compute the squared sample absolute volatility
\[ \frac{1}{T_2 - T_1}\sum_{k=0}^{n-1}(B(t_{k+1}) - B(t_k))^2. \]
This is approximately equal to
\[ \frac{1}{T_2 - T_1}\left[\langle B\rangle(T_2) - \langle B\rangle(T_1)\right] = \frac{T_2 - T_1}{T_2 - T_1} = 1. \]
As we increase the number of sample points, this approximation becomes exact. In other words, Brownian motion has absolute volatility 1. Furthermore, consider the equation
\[ \langle B\rangle(T) = T = \int_0^T 1\,dt \quad \forall T\ge 0. \]
This says that quadratic variation for Brownian motion accumulates at rate 1 at all times along almost every path.

14.5 Construction of the Ito Integral

The integrator is Brownian motion $B(t)$, $t\ge 0$, with associated filtration $\mathcal{F}(t)$, $t\ge 0$, and the following properties:

1. $s\le t \implies$ every set in $\mathcal{F}(s)$ is also in $\mathcal{F}(t)$;
2. $B(t)$ is $\mathcal{F}(t)$-measurable for every $t$;
3. for $t\le t_1\le\cdots\le t_n$, the increments $B(t_1) - B(t),\ B(t_2) - B(t_1),\ \dots,\ B(t_n) - B(t_{n-1})$ are independent of $\mathcal{F}(t)$.

The integrand is $\delta(t)$, $t\ge 0$, where:

1. $\delta(t)$ is $\mathcal{F}(t)$-measurable for every $t$ (i.e., $\delta$ is adapted);
2. $\delta$ is square-integrable:
\[ \mathbb{E}\int_0^T \delta^2(t)\,dt < \infty \quad \forall T. \]
We want to define the Ito integral
\[ I(t) = \int_0^t \delta(u)\,dB(u), \qquad t\ge 0. \]

Remark 14.3 (Integral w.r.t. a differentiable function) If $f(t)$ is a differentiable function, then we can define
\[ \int_0^t \delta(u)\,df(u) = \int_0^t \delta(u)\,f'(u)\,du. \]
This won't work when the integrator is Brownian motion, because the paths of Brownian motion are not differentiable.

14.6 Ito integral of an elementary integrand

Let $\Pi = \{t_0, t_1, \dots, t_n\}$ be a partition of $[0,T]$, i.e.,
\[ 0 = t_0 \le t_1 \le \cdots \le t_n = T. \]
Assume that $\delta(t)$ is constant on each subinterval $[t_k, t_{k+1}]$ (see Fig. 14.2). We call such a $\delta$ an elementary process.

Figure 14.2: An elementary function $\delta$: $\delta(t) = \delta(t_0)$ on $[t_0,t_1]$, $\delta(t) = \delta(t_1)$ on $[t_1,t_2]$, $\delta(t) = \delta(t_2)$ on $[t_2,t_3]$, $\delta(t) = \delta(t_3)$ on $[t_3,t_4]$, where $0 = t_0$ and $t_4 = T$.

The functions $B(t)$ and $\delta(t_k)$ can be interpreted as follows:

- Think of $B(t)$ as the price per unit share of an asset at time $t$.
- Think of $t_0, t_1, \dots, t_n$ as the trading dates for the asset.
- Think of $\delta(t_k)$ as the number of shares of the asset acquired at trading date $t_k$ and held until trading date $t_{k+1}$.

Then the Ito integral $I(t)$ can be interpreted as the gain from trading at time $t$; this gain is given by
\[ I(t) = \begin{cases} \delta(t_0)[B(t) - \underbrace{B(t_0)}_{=B(0)=0}] & 0\le t\le t_1, \\ \delta(t_0)[B(t_1) - B(t_0)] + \delta(t_1)[B(t) - B(t_1)] & t_1\le t\le t_2, \\ \delta(t_0)[B(t_1) - B(t_0)] + \delta(t_1)[B(t_2) - B(t_1)] + \delta(t_2)[B(t) - B(t_2)] & t_2\le t\le t_3. \end{cases} \]
In general, if $t_k\le t\le t_{k+1}$,
\[ I(t) = \sum_{j=0}^{k-1}\delta(t_j)[B(t_{j+1}) - B(t_j)] + \delta(t_k)[B(t) - B(t_k)]. \]

14.7 Properties of the Ito integral of an elementary process

Adaptedness. For each $t$, $I(t)$ is $\mathcal{F}(t)$-measurable.

Linearity. If
\[ I(t) = \int_0^t \delta(u)\,dB(u), \qquad J(t) = \int_0^t \gamma(u)\,dB(u), \]
then
\[ I(t)\pm J(t) = \int_0^t(\delta(u)\pm\gamma(u))\,dB(u), \qquad cI(t) = \int_0^t c\,\delta(u)\,dB(u). \]

Martingale. $I(t)$ is a martingale.

Figure 14.3: Showing $s$ and $t$ in different partitions: $s\in[t_\ell,t_{\ell+1}]$, $t\in[t_k,t_{k+1}]$.

We prove the martingale property for the elementary process case.

Theorem 7.45 (Martingale Property)
\[ I(t) = \sum_{j=0}^{k-1}\delta(t_j)[B(t_{j+1}) - B(t_j)] + \delta(t_k)[B(t) - B(t_k)], \qquad t_k\le t\le t_{k+1}, \]
is a martingale.

Proof: Let $0\le s\le t$ be given. We treat the more difficult case that $s$ and $t$ are in different subintervals, i.e., there are partition points $t_\ell$ and $t_k$ such that $s\in[t_\ell,t_{\ell+1}]$ and $t\in[t_k,t_{k+1}]$ (see Fig. 14.3). Write
\[ I(t) = \sum_{j=0}^{\ell-1}\delta(t_j)[B(t_{j+1}) - B(t_j)] + \delta(t_\ell)[B(t_{\ell+1}) - B(t_\ell)] + \sum_{j=\ell+1}^{k-1}\delta(t_j)[B(t_{j+1}) - B(t_j)] + \delta(t_k)[B(t) - B(t_k)]. \]
We compute conditional expectations:
\[ \mathbb{E}\left[\sum_{j=0}^{\ell-1}\delta(t_j)(B(t_{j+1}) - B(t_j))\,\Big|\,\mathcal{F}(s)\right] = \sum_{j=0}^{\ell-1}\delta(t_j)(B(t_{j+1}) - B(t_j)), \]
\[ \mathbb{E}\left[\delta(t_\ell)(B(t_{\ell+1}) - B(t_\ell))\,\big|\,\mathcal{F}(s)\right] = \delta(t_\ell)\left(\mathbb{E}[B(t_{\ell+1})\,|\,\mathcal{F}(s)] - B(t_\ell)\right) = \delta(t_\ell)[B(s) - B(t_\ell)]. \]
These first two terms add up to $I(s)$. We show that the third and fourth terms are zero:
\[ \mathbb{E}\left[\sum_{j=\ell+1}^{k-1}\delta(t_j)(B(t_{j+1}) - B(t_j))\,\Big|\,\mathcal{F}(s)\right] = \sum_{j=\ell+1}^{k-1}\mathbb{E}\left[\mathbb{E}\left[\delta(t_j)(B(t_{j+1}) - B(t_j))\,\big|\,\mathcal{F}(t_j)\right]\,\Big|\,\mathcal{F}(s)\right] = \sum_{j=\ell+1}^{k-1}\mathbb{E}\Big[\delta(t_j)\big(\underbrace{\mathbb{E}[B(t_{j+1})\,|\,\mathcal{F}(t_j)] - B(t_j)}_{=0}\big)\,\Big|\,\mathcal{F}(s)\Big] = 0, \]
\[ \mathbb{E}\left[\delta(t_k)(B(t) - B(t_k))\,\big|\,\mathcal{F}(s)\right] = \mathbb{E}\Big[\delta(t_k)\big(\underbrace{\mathbb{E}[B(t)\,|\,\mathcal{F}(t_k)] - B(t_k)}_{=0}\big)\,\Big|\,\mathcal{F}(s)\Big] = 0. \]

Theorem 7.46 (Ito Isometry)
\[ \mathbb{E}I^2(t) = \mathbb{E}\int_0^t \delta^2(u)\,du. \]
Proof: To simplify notation, assume $t = t_k$, so
\[ I(t) = \sum_{j=0}^{k-1}\delta(t_j)\underbrace{[B(t_{j+1}) - B(t_j)]}_{D_j}. \]
Each $D_j$ has expectation 0, and different $D_j$ are independent. Then
\[ I^2(t) = \left(\sum_{j=0}^{k-1}\delta(t_j)D_j\right)^2 = \sum_{j=0}^{k-1}\delta^2(t_j)D_j^2 + 2\sum_{i<j}\delta(t_i)\delta(t_j)D_iD_j. \]
For $i < j$, $D_j$ is independent of $\delta(t_i)\delta(t_j)D_i$ and has expectation 0, so the cross terms have expectation zero, and
\[ \mathbb{E}I^2(t) = \sum_{j=0}^{k-1}\mathbb{E}\left[\delta^2(t_j)\right](t_{j+1} - t_j) = \mathbb{E}\sum_{j=0}^{k-1}\delta^2(t_j)(t_{j+1} - t_j) = \mathbb{E}\int_0^t \delta^2(u)\,du. \]

14.8 Ito integral of a general integrand

Fix $T > 0$. Let $\delta$ be a process (not necessarily an elementary process) such that

- $\delta(t)$ is $\mathcal{F}(t)$-measurable, $\forall t\in[0,T]$;
- $\mathbb{E}\int_0^T \delta^2(t)\,dt < \infty$.

Theorem 8.47 There is a sequence of elementary processes $\{\delta_n\}_{n=1}^{\infty}$ such that
\[ \lim_{n\to\infty}\mathbb{E}\int_0^T |\delta_n(t) - \delta(t)|^2\,dt = 0. \]
Proof: Fig. 14.4 shows the main idea.

In the last section we have defined
\[ I_n(T) = \int_0^T \delta_n(t)\,dB(t) \]
for every $n$. We now define
\[ \int_0^T \delta(t)\,dB(t) = \lim_{n\to\infty}\int_0^T \delta_n(t)\,dB(t). \]
The only difficulty with this approach is that we need to make sure the above limit exists. Suppose $n$ and $m$ are large positive integers. Then
\[ \operatorname{var}(I_n(T) - I_m(T)) = \mathbb{E}\left(\int_0^T[\delta_n(t) - \delta_m(t)]\,dB(t)\right)^2 \]
(Ito Isometry:)
\[ = \mathbb{E}\int_0^T[\delta_n(t) - \delta_m(t)]^2\,dt \le \mathbb{E}\int_0^T\left[|\delta_n(t) - \delta(t)| + |\delta(t) - \delta_m(t)|\right]^2 dt \]
(using $(a+b)^2 \le 2a^2 + 2b^2$:)
\[ \le 2\,\mathbb{E}\int_0^T|\delta_n(t) - \delta(t)|^2\,dt + 2\,\mathbb{E}\int_0^T|\delta_m(t) - \delta(t)|^2\,dt, \]
which is small. This guarantees that the sequence $\{I_n(T)\}_{n=1}^{\infty}$ has a limit.

14.9 Properties of the (general) Ito integral
\[ I(t) = \int_0^t \delta(u)\,dB(u). \]
Here $\delta$ is any adapted, square-integrable process.

Adaptedness. For each $t$, $I(t)$ is $\mathcal{F}(t)$-measurable.

Linearity. If
\[ I(t) = \int_0^t \delta(u)\,dB(u), \qquad J(t) = \int_0^t \gamma(u)\,dB(u), \]
then
\[ I(t)\pm J(t) = \int_0^t(\delta(u)\pm\gamma(u))\,dB(u), \qquad cI(t) = \int_0^t c\,\delta(u)\,dB(u). \]

Martingale. $I(t)$ is a martingale.

Continuity. $I(t)$ is a continuous function of the upper limit of integration $t$.

Ito Isometry. $\mathbb{E}I^2(t) = \mathbb{E}\int_0^t \delta^2(u)\,du$.

Example 14.1 Consider the Ito integral
\[ \int_0^T B(u)\,dB(u). \]
We approximate the integrand as shown in Fig. 14.5.

Figure 14.5: Approximating the integrand $B(u)$ with an elementary process constant on the subintervals $[0,T/4]$, $[T/4,2T/4]$, $[2T/4,3T/4]$, $[3T/4,T]$.
