Steven Shreve: Stochastic Calculus and Finance

Prasad Chalasani, Carnegie Mellon University, [email protected]
Somesh Jha, Carnegie Mellon University, [email protected]

THIS IS A DRAFT: PLEASE DO NOT DISTRIBUTE
Copyright (c) Steven E. Shreve, 1996
October 6, 1997
Contents

1 Introduction to Probability Theory: 1.1 The Binomial Asset Pricing Model; 1.2 Finite Probability Spaces; 1.3 Lebesgue Measure and the Lebesgue Integral; 1.4 General Probability Spaces; 1.5 Independence (1.5.1 Independence of sets; 1.5.2 Independence of sigma-algebras; 1.5.3 Independence of random variables; 1.5.4 Correlation and independence; 1.5.5 Independence and conditional expectation; 1.5.6 Law of Large Numbers; 1.5.7 Central Limit Theorem)

2 Conditional Expectation: 2.1 A Binomial Model for Stock Price Dynamics; 2.2 Information; 2.3 Conditional Expectation (2.3.1 An example; 2.3.2 Definition of Conditional Expectation; 2.3.3 Further discussion of Partial Averaging; 2.3.4 Properties of Conditional Expectation; 2.3.5 Examples from the Binomial Model); 2.4 Martingales

3 Arbitrage Pricing: 3.1 Binomial Pricing; 3.2 General one-step APT; 3.3 Risk-Neutral Probability Measure (3.3.1 Portfolio Process; 3.3.2 Self-financing Value of a Portfolio Process); 3.4 Simple European Derivative Securities; 3.5 The Binomial Model is Complete

4 The Markov Property: 4.1 Binomial Model Pricing and Hedging; 4.2 Computational Issues; 4.3 Markov Processes (4.3.1 Different ways to write the Markov property); 4.4 Showing that a process is Markov; 4.5 Application to Exotic Options

5 Stopping Times and American Options: 5.1 American Pricing; 5.2 Value of Portfolio Hedging an American Option; 5.3 Information up to a Stopping Time

6 Properties of American Derivative Securities: 6.1 The properties; 6.2 Proofs of the Properties; 6.3 Compound European Derivative Securities; 6.4 Optimal Exercise of American Derivative Security

7 Jensen's Inequality: 7.1 Jensen's Inequality for Conditional Expectations; 7.2 Optimal Exercise of an American Call; 7.3 Stopped Martingales

8 Random Walks: 8.1 First Passage Time; 8.2 τ is almost surely finite; 8.3 The moment generating function for τ; 8.4 Expectation of τ; 8.5 The Strong Markov Property; 8.6 General First Passage Times; 8.7 Example: Perpetual American Put; 8.8 Difference Equation; 8.9 Distribution of First Passage Times; 8.10 The Reflection Principle

9 Pricing in terms of Market Probabilities: The Radon-Nikodym Theorem: 9.1 Radon-Nikodym Theorem; 9.2 Radon-Nikodym Martingales; 9.3 The State Price Density Process; 9.4 Stochastic Volatility Binomial Model; 9.5 Another Application of the Radon-Nikodym Theorem

10 Capital Asset Pricing: 10.1 An Optimization Problem

11 General Random Variables: 11.1 Law of a Random Variable; 11.2 Density of a Random Variable; 11.3 Expectation; 11.4 Two random variables; 11.5 Marginal Density; 11.6 Conditional Expectation; 11.7 Conditional Density; 11.8 Multivariate Normal Distribution; 11.9 Bivariate normal distribution; 11.10 MGF of jointly normal random variables

12 Semi-Continuous Models: 12.1 Discrete-time Brownian Motion; 12.2 The Stock Price Process; 12.3 Remainder of the Market; 12.4 Risk-Neutral Measure; 12.5 Risk-Neutral Pricing; 12.6 Arbitrage; 12.7 Stalking the Risk-Neutral Measure; 12.8 Pricing a European Call

13 Brownian Motion: 13.1 Symmetric Random Walk; 13.2 The Law of Large Numbers; 13.3 Central Limit Theorem; 13.4 Brownian Motion as a Limit of Random Walks; 13.5 Brownian Motion; 13.6 Covariance of Brownian Motion; 13.7 Finite-Dimensional Distributions of Brownian Motion; 13.8 Filtration generated by a Brownian Motion; 13.9 Martingale Property; 13.10 The Limit of a Binomial Model; 13.11 Starting at Points Other Than 0; 13.12 Markov Property for Brownian Motion; 13.13 Transition Density; 13.14 First Passage Time

14 The Itô Integral: 14.1 Brownian Motion; 14.2 First Variation; 14.3 Quadratic Variation; 14.4 Quadratic Variation as Absolute Volatility; 14.5 Construction of the Itô Integral; 14.6 Itô integral of an elementary integrand; 14.7 Properties of the Itô integral of an elementary process; 14.8 Itô integral of a general integrand; 14.9 Properties of the (general) Itô integral; 14.10 Quadratic variation of an Itô integral

15 Itô's Formula: 15.1 Itô's formula for one Brownian motion; 15.2 Derivation of Itô's formula; 15.3 Geometric Brownian motion; 15.4 Quadratic variation of geometric Brownian motion; 15.5 Volatility of Geometric Brownian motion; 15.6 First derivation of the Black-Scholes formula; 15.7 Mean and variance of the Cox-Ingersoll-Ross process; 15.8 Multidimensional Brownian Motion; 15.9 Cross-variations of Brownian motions; 15.10 Multi-dimensional Itô formula

16 Markov processes and the Kolmogorov equations: 16.1 Stochastic Differential Equations; 16.2 Markov Property; 16.3 Transition density; 16.4 The Kolmogorov Backward Equation; 16.5 Connection between stochastic calculus and KBE; 16.6 Black-Scholes; 16.7 Black-Scholes with price-dependent volatility

17 Girsanov's theorem and the risk-neutral measure: 17.1 Conditional expectations under ĨP; 17.2 Risk-neutral measure

18 Martingale Representation Theorem: 18.1 Martingale Representation Theorem; 18.2 A hedging application; 18.3 d-dimensional Girsanov Theorem; 18.4 d-dimensional Martingale Representation Theorem; 18.5 Multi-dimensional market model

19 A two-dimensional market model: 19.1 Hedging when -1 < ρ < 1; 19.2 Hedging when ρ = 1

For the stopping time τ of the preceding example (a stopping time depending only on the first toss ω_1), the stopped stock price process is

S_{2∧τ(ω)}(ω) = S_2(HH) if ω = HH;  S_2(HT) = 4 if ω = HT;  S_1(T) = 2 if ω = TH;  S_1(T) = 2 if ω = TT.
Theorem 3.26 A stopped martingale (or submartingale, or supermartingale) is still a martingale (or submartingale, or supermartingale, respectively).

Proof: Let {Y_k}_{k=0}^n be a martingale and let τ be a stopping time. Choose some k ∈ {0, 1, ..., n}. The set {τ ≤ k} is in F_k, so the set {τ ≥ k+1} = {τ ≤ k}^c is also in F_k. We compute

IE[ Y_{(k+1)∧τ} | F_k ] = IE[ 1_{{τ ≤ k}} Y_τ + 1_{{τ ≥ k+1}} Y_{k+1} | F_k ]
 = 1_{{τ ≤ k}} Y_τ + 1_{{τ ≥ k+1}} IE[ Y_{k+1} | F_k ]
 = 1_{{τ ≤ k}} Y_τ + 1_{{τ ≥ k+1}} Y_k
 = Y_{k∧τ}.
Chapter 8
Random Walks

8.1 First Passage Time

Toss a coin infinitely many times. Then the sample space Ω is the set of all infinite sequences ω = (ω_1, ω_2, ...) of H and T. Assume the tosses are independent, and on each toss the probability of H is 1/2, as is the probability of T. Define

Y_j(ω) = 1 if ω_j = H,  Y_j(ω) = -1 if ω_j = T,

M_0 = 0,  M_k = Σ_{j=1}^k Y_j,  k = 1, 2, ....

The process {M_k}_{k=0}^∞ is a symmetric random walk (see Fig. 8.1); its analogue in continuous time is Brownian motion. Define

τ = min{k ≥ 0 : M_k = 1}.

If M_k never gets to 1 (e.g., ω = (TTTT...)), then τ = ∞. The random variable τ is called the first passage time to 1. It is the first time the number of heads exceeds by one the number of tails.

Figure 8.1: The random walk process M_k.

8.2 τ is almost surely finite

It is shown in a Homework Problem that {M_k}_{k=0}^∞ and {N_k}_{k=0}^∞, where

N_k = exp{ σ M_k - k log((e^σ + e^{-σ})/2) } = ( 2/(e^σ + e^{-σ}) )^k e^{σ M_k},

are martingales. (Take M_k = -S_k in part (i) of the Homework Problem, with the corresponding choice of parameter in part (v).) Since N_0 = 1 and a stopped martingale is a martingale, we have

1 = IE N_{k∧τ} = IE[ ( 2/(e^σ + e^{-σ}) )^{k∧τ} e^{σ M_{k∧τ}} ]   (2.1)

for every fixed σ ∈ IR. (See Fig. 8.2 for an illustration of the two functions of σ involved, (e^σ + e^{-σ})/2 and 2/(e^σ + e^{-σ}).)

Figure 8.2: Illustrating two functions of σ.

We want to let k → ∞ in (2.1), but we have to worry a bit that for some sequences ω ∈ Ω, τ(ω) = ∞. We consider fixed σ > 0, so that

0 < 2/(e^σ + e^{-σ}) < 1.

As k → ∞,

( 2/(e^σ + e^{-σ}) )^{k∧τ} → ( 2/(e^σ + e^{-σ}) )^τ if τ < ∞,  and → 0 if τ = ∞.
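The first passage time τ is easy to explore numerically. The following sketch is an added illustration, not part of the original notes: it simulates the symmetric random walk, truncates each path at an arbitrary horizon, and estimates IP{τ ≤ horizon} together with IE α^τ, which can be compared with the closed form (1 - √(1 - α²))/α recalled in Section 8.9. All names and the horizon are arbitrary choices.

```python
import numpy as np

def first_passage_times(n_paths=20_000, horizon=1_000, seed=0):
    """tau = min{k >= 1 : M_k = 1} for the symmetric random walk,
    truncated at `horizon`; paths that never reach 1 within the horizon get tau = +inf."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([1, -1], size=(n_paths, horizon))
    walk = np.cumsum(steps, axis=1)                 # M_1, ..., M_horizon
    hit = walk == 1
    first = hit.argmax(axis=1)                      # index of the first hit (0 if none)
    return np.where(hit.any(axis=1), first + 1.0, np.inf)

tau = first_passage_times()
alpha = 0.9
print("P(tau <= horizon) ~", np.isfinite(tau).mean())
mc = np.where(np.isfinite(tau), alpha ** tau, 0.0).mean()
print("Monte Carlo E[alpha^tau] ~", mc)
print("closed form (1 - sqrt(1 - alpha^2)) / alpha =",
      (1 - np.sqrt(1 - alpha**2)) / alpha)
```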
The function v(x), defined for each stock price x > 0, gives the value of the perpetual American put when the stock price is x. This function should satisfy the conditions:

(a) v(x) ≥ (K - x)^+ for every x,
(b) v(x) ≥ (1/(1+r)) [ p̃ v(ux) + q̃ v(dx) ] for every x,
(c) at each x, either (a) or (b) holds with equality.

In the example we worked out, we have

for j ≥ 1:  v(2^j) = 3 (1/2)^{j-1} = 6/2^j,
for j ≤ 1:  v(2^j) = 5 - 2^j.

This suggests the formula

v(x) = 5 - x for 0 < x ≤ 3,  v(x) = 6/x for x ≥ 3.

We then have (see Fig. 8.4):

(a) v(x) ≥ (5 - x)^+ for every x,
(b) v(x) ≥ (4/5) [ (1/2) v(2x) + (1/2) v(x/2) ] for every x except for 2 < x < 4.

Figure 8.4: Graph of v(x); the graph passes through the point (3, 2) and meets the vertical axis at 5.

Check of condition (c): If 0 < x ≤ 3, then (a) holds with equality. If x ≥ 6, then (b) holds with equality:

(4/5) [ (1/2) v(2x) + (1/2) v(x/2) ] = (4/5) [ (1/2)(6/(2x)) + (1/2)(12/x) ] = 6/x = v(x).

If 3 < x < 4 or 4 < x < 6, then both (a) and (b) are strict. This is an artifact of the discreteness of the binomial model. This artifact will disappear in the continuous model, in which an analogue of (a) or (b) holds with equality at every point.
8.9 Distribution of First Passage Times

Let {M_k}_{k=0}^∞ be a symmetric random walk under a probability measure IP, with M_0 = 0. Defining

τ = min{k ≥ 0 : M_k = 1},

we recall that

IE α^τ = (1 - √(1 - α²))/α,  0 < α < 1.

We will use this moment generating function to obtain the distribution of τ. We first obtain the Taylor series expansion of IE α^τ as follows. With f(x) = 1 - √(1 - x),

f(0) = 0,
f'(x) = (1/2)(1 - x)^{-1/2},  f'(0) = 1/2,
f''(x) = (1/4)(1 - x)^{-3/2},  f''(0) = 1/4,
f'''(x) = (3/8)(1 - x)^{-5/2},  f'''(0) = 3/8,
...
f^{(j)}(x) = (1·3·5···(2j-3)/2^j)(1 - x)^{-(2j-1)/2},
f^{(j)}(0) = 1·3·5···(2j-3)/2^j = (2j-2)! / (2^{2j-1} (j-1)!).

The Taylor series expansion of f(x) is given by

f(x) = 1 - √(1 - x) = Σ_{j=1}^∞ (1/j!) f^{(j)}(0) x^j
     = (1/2) x + Σ_{j=2}^∞ (1/2^{2j-1}) ((2j-2)!/(j!(j-1)!)) x^j.

So we have

IE α^τ = (1 - √(1 - α²))/α = (1/α) f(α²) = α/2 + Σ_{j=2}^∞ (1/2^{2j-1}) ((2j-2)!/(j!(j-1)!)) α^{2j-1}.

But also,

IE α^τ = Σ_{j=1}^∞ α^{2j-1} IP{τ = 2j - 1}.

Therefore,

IP{τ = 1} = 1/2,
IP{τ = 2j - 1} = (1/2)^{2j-1} (2j-2)!/(j!(j-1)!),  j = 2, 3, ....

Figure 8.5: Reflection principle.
Figure 8.6: Example with j = 2.

8.10 The Reflection Principle

To count how many paths reach level 1 by time 2j - 1, count all those for which M_{2j-1} = 1 and double count all those for which M_{2j-1} ≥ 3. (See Figures 8.5, 8.6.) In other words,

IP{τ ≤ 2j - 1} = IP{M_{2j-1} = 1} + 2 IP{M_{2j-1} ≥ 3}
 = IP{M_{2j-1} = 1} + IP{M_{2j-1} ≥ 3} + IP{M_{2j-1} ≤ -3}
 = 1 - IP{M_{2j-1} = -1}.

For j ≥ 2,

IP{τ = 2j - 1} = IP{τ ≤ 2j - 1} - IP{τ ≤ 2j - 3}
 = [1 - IP{M_{2j-1} = -1}] - [1 - IP{M_{2j-3} = -1}]
 = IP{M_{2j-3} = -1} - IP{M_{2j-1} = -1}
 = (1/2)^{2j-3} (2j-3)!/((j-1)!(j-2)!) - (1/2)^{2j-1} (2j-1)!/(j!(j-1)!)
 = (1/2)^{2j-1} ((2j-3)!/(j!(j-1)!)) [4j(j-1) - (2j-1)(2j-2)]
 = (1/2)^{2j-1} ((2j-3)!/(j!(j-1)!)) [2j(2j-2) - (2j-1)(2j-2)]
 = (1/2)^{2j-1} (2j-2)!/(j!(j-1)!),

which agrees with the formula obtained in Section 8.9.
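As a numerical sanity check (an added illustration, not part of the original text), the closed form IP{τ = 2j - 1} = (1/2)^{2j-1} (2j-2)!/(j!(j-1)!) can be compared against the reflection-principle expression computed directly from binomial probabilities. The helper names below are arbitrary.

```python
from math import comb

def p_tau(j):
    """IP{tau = 2j - 1} = (1/2)^(2j-1) * (2j-2)! / (j! (j-1)!)."""
    return 0.5 ** (2 * j - 1) * comb(2 * j - 2, j - 1) / j

def p_tau_reflection(j):
    """IP{tau <= 2j - 1} - IP{tau <= 2j - 3}, each written as 1 - IP{M = -1}."""
    def p_m_is_minus1(steps):
        # M_steps = -1 means (steps - 1) / 2 heads among `steps` fair tosses
        return comb(steps, (steps - 1) // 2) * 0.5 ** steps
    if j == 1:
        return 1 - p_m_is_minus1(1)                 # = 1/2
    return p_m_is_minus1(2 * j - 3) - p_m_is_minus1(2 * j - 1)

for j in range(1, 6):
    print(j, p_tau(j), p_tau_reflection(j))          # the two columns agree

# the probabilities sum to 1, although the tail decays only like j^(-3/2)
print("mass of the first 200 terms:", sum(p_tau(j) for j in range(1, 201)))
```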
Chapter 9

Pricing in terms of Market Probabilities: The Radon-Nikodym Theorem

9.1 Radon-Nikodym Theorem

Theorem 1.27 (Radon-Nikodym) Let IP and ĨP be two probability measures on a space (Ω, F). Assume that for every A ∈ F satisfying IP(A) = 0, we also have ĨP(A) = 0. Then we say that ĨP is absolutely continuous with respect to IP. Under this assumption, there is a nonnegative random variable Z such that

ĨP(A) = ∫_A Z dIP for every A ∈ F,   (1.1)

and Z is called the Radon-Nikodym derivative of ĨP with respect to IP.

Remark 9.1 Equation (1.1) implies the apparently stronger condition

ĨE X = IE[XZ] for every random variable X for which IE|XZ| < ∞.

Remark 9.2 If ĨP is absolutely continuous with respect to IP, and IP is absolutely continuous with respect to ĨP, we say that IP and ĨP are equivalent. IP and ĨP are equivalent if and only if

IP(A) = 0 exactly when ĨP(A) = 0, for every A ∈ F.

If IP and ĨP are equivalent and Z is the Radon-Nikodym derivative of ĨP w.r.t. IP, then 1/Z is the Radon-Nikodym derivative of IP w.r.t. ĨP, i.e.,

ĨE X = IE[XZ] for every X,   (1.2)
IE Y = ĨE[Y (1/Z)] for every Y.   (1.3)

(Let X and Y be related by the equation Y = XZ to see that (1.2) and (1.3) are the same.)

Example 9.1 (Radon-Nikodym Theorem) Let Ω = {HH, HT, TH, TT}, the set of coin toss sequences of length 2. Let IP correspond to probability 1/3 for H and 2/3 for T, and let ĨP correspond to probability 1/2 for H and 1/2 for T. Then Z(ω) = ĨP(ω)/IP(ω), so

Z(HH) = 9/4,  Z(HT) = 9/8,  Z(TH) = 9/8,  Z(TT) = 9/16.

9.2 Radon-Nikodym Martingales

Let Ω be the set of all sequences of n coin tosses. Let IP be the market probability measure and let ĨP be the risk-neutral probability measure. Assume

IP(ω) > 0,  ĨP(ω) > 0  for every ω ∈ Ω,

so that IP and ĨP are equivalent. The Radon-Nikodym derivative of ĨP with respect to IP is

Z(ω) = ĨP(ω)/IP(ω).

Define the IP-martingale

Z_k = IE[Z | F_k],  k = 0, 1, ..., n.

We can check that Z_k is indeed a martingale:

IE[Z_{k+1} | F_k] = IE[ IE[Z | F_{k+1}] | F_k ] = IE[Z | F_k] = Z_k.

Lemma 2.28 If X is F_k-measurable, then ĨE X = IE[X Z_k].

Proof:

ĨE X = IE[XZ] = IE[ IE[XZ | F_k] ] = IE[ X IE[Z | F_k] ] = IE[X Z_k].

Note that Lemma 2.28 implies that if X is F_k-measurable, then for any A ∈ F_k,

ĨE[1_A X] = IE[Z_k 1_A X],

or equivalently,

∫_A X dĨP = ∫_A X Z_k dIP.

Lemma 2.29 If X is F_k-measurable and 0 ≤ j ≤ k, then

ĨE[X | F_j] = (1/Z_j) IE[X Z_k | F_j].

Proof: Note first that (1/Z_j) IE[X Z_k | F_j] is F_j-measurable. So for any A ∈ F_j, we have

∫_A (1/Z_j) IE[X Z_k | F_j] dĨP = ∫_A IE[X Z_k | F_j] dIP   (Lemma 2.28)
 = ∫_A X Z_k dIP   (Partial averaging)
 = ∫_A X dĨP.   (Lemma 2.28)

Figure 9.1: The Z_k values in the 2-period binomial model example: Z_0 = 1; Z_1(H) = 3/2, Z_1(T) = 3/4; Z_2(HH) = 9/4, Z_2(HT) = Z_2(TH) = 9/8, Z_2(TT) = 9/16. The probabilities shown (1/3 up, 2/3 down) are for IP, not ĨP.

Example 9.2 (Radon-Nikodym Theorem, continued) We show in Fig. 9.1 the values of the martingale Z_k. We always have Z_0 = 1, since

Z_0 = IE Z = ∫_Ω Z dIP = ĨP(Ω) = 1.
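A short computation (an added illustration, not from the original notes) reproduces the Z and Z_k values of Example 9.1/9.2 and Fig. 9.1 for the two-period model with market probabilities 1/3, 2/3 and risk-neutral probabilities 1/2, 1/2.

```python
from itertools import product

p_mkt = {"H": 1/3, "T": 2/3}       # market measure IP
p_rn  = {"H": 1/2, "T": 1/2}       # risk-neutral measure IP~
omegas = ["".join(w) for w in product("HT", repeat=2)]

def prob(w, m):
    out = 1.0
    for toss in w:
        out *= m[toss]
    return out

Z = {w: prob(w, p_rn) / prob(w, p_mkt) for w in omegas}      # Radon-Nikodym derivative
print("Z :", Z)    # {'HH': 2.25, 'HT': 1.125, 'TH': 1.125, 'TT': 0.5625}

# Z_k = IE[Z | F_k]: average Z over the unrevealed tosses under IP
Z1 = {a: sum(prob(b, p_mkt) * Z[a + b] for b in "HT") for a in "HT"}
Z0 = sum(prob(w, p_mkt) * Z[w] for w in omegas)
print("Z1:", Z1)   # {'H': 1.5, 'T': 0.75}, as in Fig. 9.1
print("Z0:", Z0)   # 1.0
```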
9.3 The State Price Density Process

In order to express the value of a derivative security in terms of the market probabilities, it will be useful to introduce the following state price density process:

ζ_k = (1 + r)^{-k} Z_k,  k = 0, ..., n.

We then have the following pricing formulas. For a simple European derivative security with payoff C_k at time k,

V_0 = ĨE[(1 + r)^{-k} C_k]
    = IE[(1 + r)^{-k} Z_k C_k]   (Lemma 2.28)
    = IE[ζ_k C_k].

More generally, for 0 ≤ j ≤ k,

V_j = (1 + r)^j ĨE[(1 + r)^{-k} C_k | F_j]
    = (1 + r)^j (1/Z_j) IE[(1 + r)^{-k} Z_k C_k | F_j]   (Lemma 2.29)
    = (1/ζ_j) IE[ζ_k C_k | F_j].

Remark 9.3 {ζ_j V_j}_{j=0}^k is a martingale under IP, as we can check below:

IE[ζ_{j+1} V_{j+1} | F_j] = IE[ IE[ζ_k C_k | F_{j+1}] | F_j ] = IE[ζ_k C_k | F_j] = ζ_j V_j.

Now for an American derivative security {G_k}_{k=0}^n:

V_0 = sup_{τ ∈ T_0} ĨE[(1 + r)^{-τ} G_τ]
    = sup_{τ ∈ T_0} IE[(1 + r)^{-τ} Z_τ G_τ]
    = sup_{τ ∈ T_0} IE[ζ_τ G_τ].

More generally, for 0 ≤ j ≤ n,

V_j = (1 + r)^j sup_{τ ∈ T_j} ĨE[(1 + r)^{-τ} G_τ | F_j]
    = (1 + r)^j sup_{τ ∈ T_j} (1/Z_j) IE[(1 + r)^{-τ} Z_τ G_τ | F_j]
    = (1/ζ_j) sup_{τ ∈ T_j} IE[ζ_τ G_τ | F_j].

Remark 9.4 Note that

(a) {ζ_j V_j}_{j=0}^n is a supermartingale under IP,
(b) ζ_j V_j ≥ ζ_j G_j for every j,
(c) {ζ_j V_j}_{j=0}^n is the smallest process having properties (a) and (b).

We interpret ζ_k by observing that ζ_k(ω) IP(ω) is the value at time zero of a contract which pays $1 at time k if ω occurs.

Figure 9.2: The state price values ζ_k in the 2-period model: ζ_0 = 1.00; ζ_1(H) = 1.20, ζ_1(T) = 0.60; ζ_2(HH) = 1.44, ζ_2(HT) = ζ_2(TH) = 0.72, ζ_2(TT) = 0.36; the corresponding stock prices are S_0 = 4, S_1(H) = 8, S_1(T) = 2, S_2(HH) = 16, S_2(HT) = S_2(TH) = 4, S_2(TT) = 1. The probabilities shown (1/3 up, 2/3 down) are for IP, not ĨP.

Example 9.3 (Radon-Nikodym Theorem, continued) We illustrate the use of the valuation formulas for European and American derivative securities in terms of market probabilities. Recall that p = 1/3, q = 2/3. The state price values ζ_k are shown in Fig. 9.2. For a European call with strike price 5 and expiration time 2, we have

V_2(HH) = 11,  ζ_2(HH) V_2(HH) = 1.44 × 11 = 15.84,
V_2(HT) = V_2(TH) = V_2(TT) = 0,
V_0 = (1/3)(1/3)(15.84) = 1.76,
(ζ_2(HH)/ζ_1(H)) V_2(HH) = (1.44/1.20) × 11 = 1.20 × 11 = 13.20,
V_1(H) = (1/3)(13.20) = 4.40.

Compare with the risk-neutral pricing formulas:

V_1(H) = (2/5) V_2(HH) + (2/5) V_2(HT) = (2/5)(11) = 4.40,
V_1(T) = (2/5) V_2(TH) + (2/5) V_2(TT) = 0,
V_0 = (2/5) V_1(H) + (2/5) V_1(T) = (2/5)(4.40) = 1.76.

Now consider an American put with strike price 5 and expiration time 2. Fig. 9.3 shows the values of ζ_k (5 - S_k)^+. We compute the value of the put under various stopping times τ:

(0) Stop immediately: the value is 1.

(1) If τ(HH) = τ(HT) = 2 and τ(TH) = τ(TT) = 1, the value is

(1/3)(2/3)(0.72) + (2/3)(1.80) = 1.36.

Figure 9.3: The values ζ_k (5 - S_k)^+ for the American put: at time 0, (5 - S_0)^+ = 1 and ζ_0 (5 - S_0)^+ = 1; at time 1, (5 - S_1(H))^+ = 0 while (5 - S_1(T))^+ = 3 and ζ_1(T)(5 - S_1(T))^+ = 1.80; at time 2, ζ_2(HT)(5 - S_2(HT))^+ = ζ_2(TH)(5 - S_2(TH))^+ = 0.72 and ζ_2(TT)(5 - S_2(TT))^+ = 1.44. The probabilities shown are for IP, not ĨP.

(2) If we stop at time 2, the value is

(1/3)(2/3)(0.72) + (2/3)(1/3)(0.72) + (2/3)(2/3)(1.44) = 0.96.

We see that (1) is the optimal stopping rule.
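The valuation formulas of this section can be checked mechanically. The sketch below is an added illustration: S_0 = 4, u = 2, d = 1/2, r = 1/4 are the parameters of the running example, and the function names are arbitrary. It prices the European call of Example 9.3 both as IE[ζ_2 V_2] under the market measure and by backward induction under the risk-neutral measure, and evaluates the American put under the candidate stopping rules (0), (1), (2).

```python
from itertools import product

S0, u, d, r, K = 4.0, 2.0, 0.5, 0.25, 5.0
p, q = 1/3, 2/3                       # market probabilities of H, T
pt = ((1 + r) - d) / (u - d)          # risk-neutral probability of H (= 1/2 here)
omegas = ["".join(w) for w in product("HT", repeat=2)]

def stock(w):                         # stock price after the tosses in w
    s = S0
    for toss in w:
        s *= u if toss == "H" else d
    return s

def ip(w):                            # market probability of the path w
    out = 1.0
    for toss in w:
        out *= p if toss == "H" else q
    return out

def zeta(w):                          # state price density zeta_k = (1+r)^(-k) Z_k
    z = 1.0
    for toss in w:
        z *= (pt if toss == "H" else 1 - pt) / (p if toss == "H" else q)
    return z / (1 + r) ** len(w)

# European call: V_0 = IE[zeta_2 (S_2 - K)^+]
call_state_price = sum(ip(w) * zeta(w) * max(stock(w) - K, 0.0) for w in omegas)

# the same price by risk-neutral backward induction
v2 = {w: max(stock(w) - K, 0.0) for w in omegas}
v1 = {a: (pt * v2[a + "H"] + (1 - pt) * v2[a + "T"]) / (1 + r) for a in "HT"}
v0 = (pt * v1["H"] + (1 - pt) * v1["T"]) / (1 + r)
print(call_state_price, v0)           # both 1.76

# American put: value of the three stopping rules of Example 9.3
def put(s):
    return max(K - s, 0.0)

rule0 = put(S0)                                                    # stop at time 0
rule1 = sum(ip(w) * (zeta(w) * put(stock(w)) if w[0] == "H"
                     else zeta(w[0]) * put(stock(w[0]))) for w in omegas)
rule2 = sum(ip(w) * zeta(w) * put(stock(w)) for w in omegas)       # stop at time 2
print(rule0, rule1, rule2)            # 1.0, 1.36, 0.96
```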
9.4 Stochastic Volatility Binomial Model

Let Ω be the set of sequences of n tosses, and let 0 < d_k < 1 + r_k < u_k, where for each k the quantities d_k, u_k, r_k are F_k-measurable. Also let

p̃_k = (1 + r_k - d_k)/(u_k - d_k),  q̃_k = (u_k - (1 + r_k))/(u_k - d_k).

Let ĨP be the risk-neutral probability measure:

ĨP{ω_1 = H} = p̃_0,  ĨP{ω_1 = T} = q̃_0,

and, for 1 ≤ k ≤ n - 1,

ĨP[ω_{k+1} = H | F_k] = p̃_k,  ĨP[ω_{k+1} = T | F_k] = q̃_k.

Let IP be the market probability measure, and assume IP{ω} > 0 for every ω ∈ Ω. Then IP and ĨP are equivalent. Define

Z(ω) = ĨP(ω)/IP(ω),  ω ∈ Ω,

Z_k = IE[Z | F_k],  k = 0, 1, ..., n.

We define the money market price process as follows:

M_0 = 1,  M_k = (1 + r_{k-1}) M_{k-1},  k = 1, ..., n.

Note that M_k is F_{k-1}-measurable. We then define the state price density process to be

ζ_k = (1/M_k) Z_k,  k = 0, ..., n.

As before, the portfolio process is {Δ_k}_{k=0}^{n-1}. The self-financing value process (wealth process) consists of X_0, the non-random initial wealth, and

X_{k+1} = Δ_k S_{k+1} + (1 + r_k)(X_k - Δ_k S_k),  k = 0, ..., n - 1.

Then the following processes are martingales under ĨP:

{(1/M_k) S_k}_{k=0}^n  and  {(1/M_k) X_k}_{k=0}^n,

and the following processes are martingales under IP:

{ζ_k S_k}_{k=0}^n  and  {ζ_k X_k}_{k=0}^n.

We thus have the following pricing formulas. Simple European derivative security with payoff C_k at time k:

V_j = M_j ĨE[C_k/M_k | F_j] = (1/ζ_j) IE[ζ_k C_k | F_j].

American derivative security {G_k}_{k=0}^n:

V_j = M_j sup_{τ ∈ T_j} ĨE[G_τ/M_τ | F_j] = (1/ζ_j) sup_{τ ∈ T_j} IE[ζ_τ G_τ | F_j].

The usual hedging portfolio formulas still work.
9.5 Another Application of the Radon-Nikodym Theorem

Let (Ω, F, Q) be a probability space. Let G be a sub-σ-algebra of F, and let X be a non-negative random variable with ∫ X dQ = 1. We construct the conditional expectation (under Q) of X given G. On G, define two probability measures

IP(A) = Q(A) for every A ∈ G,
ĨP(A) = ∫_A X dQ for every A ∈ G.

Whenever Y is a G-measurable random variable, we have

∫_Ω Y dIP = ∫_Ω Y dQ;

if Y = 1_A for some A ∈ G, this is just the definition of IP, and the rest follows from the standard machine. If A ∈ G and IP(A) = 0, then Q(A) = 0, so ĨP(A) = 0. In other words, the measure ĨP is absolutely continuous with respect to the measure IP. The Radon-Nikodym theorem implies that there exists a G-measurable random variable Z such that

ĨP(A) = ∫_A Z dIP for every A ∈ G,

i.e.,

∫_A X dQ = ∫_A Z dIP for every A ∈ G.

This shows that Z has the partial averaging property, and since Z is G-measurable, it is the conditional expectation (under the probability measure Q) of X given G. The existence of conditional expectations is a consequence of the Radon-Nikodym theorem.
Chapter 10

Capital Asset Pricing

10.1 An Optimization Problem

Consider an agent who has initial wealth X_0 and wants to invest in the stock and money markets so as to maximize

IE log X_n.

Remark 10.1 Regardless of the portfolio used by the agent, {ζ_k X_k}_{k=0}^n is a martingale under IP, so

IE[ζ_n X_n] = X_0.   (BC)

Here, (BC) stands for Budget Constraint.

Remark 10.2 If ξ is any random variable satisfying (BC), i.e.,

IE[ζ_n ξ] = X_0,

then there is a portfolio which starts with initial wealth X_0 and produces X_n = ξ at time n. To see this, just regard ξ as a simple European derivative security paying off at time n. Then X_0 is its value at time 0, and starting from this value, there is a hedging portfolio which produces X_n = ξ.

Remarks 10.1 and 10.2 show that the optimal X_n for the capital asset pricing problem can be obtained by solving the following constrained optimization problem: find a random variable ξ which solves

Maximize IE log ξ
Subject to IE[ζ_n ξ] = X_0.

Equivalently, we wish to

Maximize Σ_{ω ∈ Ω} (log ξ(ω)) IP(ω)
Subject to Σ_{ω ∈ Ω} ζ_n(ω) ξ(ω) IP(ω) - X_0 = 0.

There are 2^n sequences ω in Ω. Call them ω_1, ω_2, ..., ω_{2^n}. Adopt the notation

x_1 = ξ(ω_1), x_2 = ξ(ω_2), ..., x_{2^n} = ξ(ω_{2^n}).

We can thus restate the problem as:

Maximize Σ_{k=1}^{2^n} (log x_k) IP(ω_k)
Subject to Σ_{k=1}^{2^n} ζ_n(ω_k) x_k IP(ω_k) - X_0 = 0.

In order to solve this problem we use:

Theorem 1.30 (Lagrange Multiplier) If (x_1^*, ..., x_m^*) solve the problem

Maximize f(x_1, ..., x_m)
Subject to g(x_1, ..., x_m) = 0,

then there is a number λ such that

(∂/∂x_k) f(x_1^*, ..., x_m^*) = λ (∂/∂x_k) g(x_1^*, ..., x_m^*),  k = 1, ..., m,   (1.1)

and

g(x_1^*, ..., x_m^*) = 0.   (1.2)

For our problem, (1.1) and (1.2) become

(1.1'):  (1/x_k) IP(ω_k) = λ ζ_n(ω_k) IP(ω_k),  k = 1, ..., 2^n,
(1.2'):  Σ_{k=1}^{2^n} ζ_n(ω_k) x_k IP(ω_k) = X_0.

Equation (1.1') implies

x_k = 1/(λ ζ_n(ω_k)).

Plugging this into (1.2') we get

(1/λ) Σ_{k=1}^{2^n} IP(ω_k) = X_0  ⟹  1/λ = X_0.

Therefore,

x_k = X_0/ζ_n(ω_k),  k = 1, ..., 2^n.

Thus we have shown that if ξ solves the problem

Maximize IE log ξ
Subject to IE[ζ_n ξ] = X_0,   (1.3)

then

ξ = X_0/ζ_n.   (1.4)

Theorem 1.31 If ξ is given by (1.4), then ξ solves the problem (1.3).

Proof: Fix Z > 0 and define

f(x) = log x - xZ.

We maximize f over x > 0:

f'(x) = 1/x - Z = 0  ⟺  x = 1/Z,
f''(x) = -1/x² < 0 for every x > 0.

The function f is maximized at x* = 1/Z, i.e.,

log x - xZ ≤ f(x*) = log(1/Z) - 1  for every x > 0 and every Z > 0.   (1.5)

Let η be any random variable satisfying

IE[ζ_n η] = X_0,

and let ξ = X_0/ζ_n. From (1.5), with Z = ζ_n/X_0, we have

log η - (ζ_n/X_0) η ≤ log(X_0/ζ_n) - 1 = log ξ - 1.

Taking expectations, we have

IE log η - (1/X_0) IE[ζ_n η] ≤ IE log ξ - 1,

and so

IE log η ≤ IE log ξ.

In summary, capital asset pricing works as follows. Consider an agent who has initial wealth X_0 and wants to invest in the stock and money market so as to maximize IE log X_n. The optimal X_n is X_n = X_0/ζ_n, i.e.,

ζ_n X_n = X_0.

Since {ζ_k X_k}_{k=0}^n is a martingale under IP, we have

ζ_k X_k = IE[ζ_n X_n | F_k] = X_0,  k = 0, ..., n,

so

X_k = X_0/ζ_k,

and the optimal portfolio is given by

Δ_k(ω_1, ..., ω_k) = [ X_0/ζ_{k+1}(ω_1, ..., ω_k, H) - X_0/ζ_{k+1}(ω_1, ..., ω_k, T) ] / [ S_{k+1}(ω_1, ..., ω_k, H) - S_{k+1}(ω_1, ..., ω_k, T) ].
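In the two-period example, the optimal wealth X_n = X_0/ζ_n and the portfolio Δ_0 can be computed explicitly. The following sketch is an added illustration, reusing the parameters of Example 9.3 and an arbitrary initial wealth X_0 = 1; it checks the budget constraint IE[ζ_2 X_2] = X_0 and evaluates Δ_0.

```python
from itertools import product

S0, u, d, r = 4.0, 2.0, 0.5, 0.25
p, q = 1/3, 2/3
pt = ((1 + r) - d) / (u - d)          # risk-neutral probability of H
X0 = 1.0
omegas = ["".join(w) for w in product("HT", repeat=2)]

def ip(w):
    out = 1.0
    for t in w:
        out *= p if t == "H" else q
    return out

def zeta(w):                          # state price density along the path w
    z = 1.0
    for t in w:
        z *= (pt if t == "H" else 1 - pt) / (p if t == "H" else q)
    return z / (1 + r) ** len(w)

X2 = {w: X0 / zeta(w) for w in omegas}                 # optimal terminal wealth
print("IE[zeta_2 X_2] =", sum(ip(w) * zeta(w) * X2[w] for w in omegas))   # = X0

# optimal number of shares held over the first period:
# Delta_0 = (X_1(H) - X_1(T)) / (S_1(H) - S_1(T)), with X_1 = X0 / zeta_1
X1 = {a: X0 / zeta(a) for a in "HT"}
delta0 = (X1["H"] - X1["T"]) / (S0 * u - S0 * d)
print("Delta_0 =", delta0)
```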
Chapter 11
General Random Variables11.1 Law of a Random VariableThus far we have considered only random variables whose domain and range are discrete. We now consider a general random variable X : !IR dened on the probability space ( F P). Recall that:
F is a
I is a probability measure on F , i.e., IP (A) is dened for every A 2 F . P A function X : !IR is a random variable if and only if for every Borel subsets of I ), the set R4 ; 4
-algebra of subsets of .
B 2 B(IR) (the
-algebra of
fX 2 Bg = X 1(B) = f! X (!) 2 Bg 2 Fi.e., X 11.1)
: !IR is a random variable if and only if X
;
1 is a function from B(IR) to F (See Fig.
Thus any random variable X induces a measure X on the measurable space by
(IR B(IR)) dened
in Williams book this is denoted by L X .
X 1(B) 8B 2 B(IR) where the probabiliy on the right is dened since X 1(B ) 2 F . X is often called the Law of X X (B ) = IP; ;
11.2 Density of a Random VariableThe density of X (if it exists) is a function f X
X (B ) =
Z
: IR! 0 1) such that
B
fX (x) dx 8B 2 B (IR):123
124
X
R B
{X B}
Figure 11.1: Illustrating a real-valued random variable X . We then write
where the integral is with respect to the Lebesgue measure on I . fX is the Radon-Nikodym derivaR tive of X with respect to the Lebesgue measure. Thus X has a density if and only if X is absolutely continuous with respect to Lebesgue measure, which means that whenever B 2 B(IR) has Lebesgue measure zero, then
d X (x) = fX (x)dx
IP fX 2 Bg = 0:
11.3 ExpectationTheorem 3.32 (Expectation of a function of X ) Let h : IR!IR be given. Then
IEh(X ) =4
Z Z
h(X (!)) dIP (!) h(x) d X (x) h(x)fX (x) dx: IR, then these equations are
= =
ZIRIR
Proof: (Sketch). If h(x) = B (x) for some B4
1
IE 1B (X ) = P fX 2 B g = Z X (B ) = fX (x) dxBwhich are true by denition. Now use the standard machine to get the equations for general h.
(X,Y)
C y { (X,Y) C} xFigure 11.2: Two real-valued random variables X
Y.
11.4 Two random variablesLet X Y be two random variables !IR dened on the space ( F P). Then measure on B(IR2 ) (see Fig. 11.2) called the joint law of (X Y ), dened by
XY
induce a
X Y (C ) = IP f(X4
Y ) 2 C g 8C 2 B(IR2 ):
The joint density of (X
Y ) is a function fX Y : IR2! 0 1)
that satises
X Y (C ) =
ZZC
fX Y (x y ) dxdy 8C 2 B(IR2): Y in a manner analogous to the univariate case:
We compute the expectation of a function of X
fX Y
is the Radon-Nikodym derivative of X Y with respect to the Lebesgue measure (area) on IR2 .
IEk(X Y ) =4
Z
= =
ZZIR ZZ2
k(X (!) Y (!)) dIP (!) k(x y ) dX Y (x
y)
IR
k(x y )fX Y (x y ) dxdy
2
11.5 Marginal DensitySuppose (X
Y ) has joint density f X Y . Let B IR be given. ThenY (B )
= IP fY 2 B g = IP f(X Y ) 2 IR B g = Z X Z (IR B ) Y = fX Y (x y ) dxdy =
ZB Z
B4
fY (y ) dy fX Y (x y ) dx:
IR
where
fY (y ) =
Therefore, fY (y ) is the (marginal) density for Y .
IR
11.6 Conditional ExpectationSuppose
(X Y ) has joint density f X Y . Let h : IR!IR be given. Recall that IE h(X )jY ] = IE h(X )j (Y )] depends on ! through Y , i.e., there is a function g (y ) (g depending on h) such that4
IE h(X )jY ](!) = g (Y (!)):We can characterize g using partial averaging: Recall that A 2 (Y )()A = B 2 B(IR). Then the following are equivalent characterizations of g : How do we determine g ?
fY 2 Bg for some(6.1)
Z
A
g (Y ) dIP =
Z
A
h(X ) dIP 8A 2 (Y )
Z ZIR
1B (Y )g(Y ) dIP =ZZIR2
Z
1B (Y )h(X ) dIP 8B 2 B(IR) 1B (y)h(x) dX Y (x
(6.2)
1B (y)g(y) Y (dy) =g(y )fY (y ) dy =
y ) 8B 2 B(IR)
(6.3)
ZB
Z ZB IR
h(x)fX Y (x y ) dxdy 8B 2 B(IR):
(6.4)
11.7 Conditional Density

A function f_{X|Y}(x|y): IR² → [0, ∞) is called a conditional density for X given Y provided that, for any function h: IR → IR,

g(y) = ∫_IR h(x) f_{X|Y}(x|y) dx.   (7.1)

(Here g is the function satisfying IE[h(X)|Y] = g(Y); g depends on h, but f_{X|Y} does not.)

Theorem 7.33 If (X, Y) has a joint density f_{X,Y}, then

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y).   (7.2)

Proof: Just verify that g defined by (7.1) satisfies (6.4): for B ∈ B(IR),

∫_B [ ∫_IR h(x) f_{X|Y}(x|y) dx ] f_Y(y) dy = ∫_B ∫_IR h(x) f_{X,Y}(x, y) dx dy,

where the quantity in brackets is g(y).

Notation 11.1 Let g be the function satisfying

IE[h(X)|Y] = g(Y).

The function g is often written as

g(y) = IE[h(X)|Y = y],

and (7.1) becomes

IE[h(X)|Y = y] = g(y) = ∫_IR h(x) f_{X|Y}(x|y) dx.

In conclusion, to determine IE[h(X)|Y] (a function of ω), first compute

g(y) = ∫_IR h(x) f_{X|Y}(x|y) dx,

and then replace the dummy variable y by the random variable Y:
IE h(X )jY ](!) = g (Y (!)):Example 11.1 (Jointly normal random variables) Given parameters: (X Y ) have the joint density1
>01 2
2
> 0 ;1 < < 1. :
Let
fX Y (x y) =
2
1 2
1 p1 ; 2 exp
1 ; 2(1 ; 2 )
x2 ; 2 x y + y2 2 21 2
128The exponent is
;
x2 ; 2 x y + y2 2 2 2(1 ; 2 ) 1 " 1 2 2 # x ; y 2 + y2 (1 ; 2 ) 1 = ; 2(1 ; 2 ) 2 1 2 2 2 2 1 = ; 2(1 ; 2 ) 12 x ; 1 y ; 1 y 2 : 2 2 1 2 1 p
1
We can compute the Marginal density of Y as follows
fY (y) =
Z1
1 1 Z 1 e; u2 du:e; 2 y = 2 22
2
1 2
1;
2
;1
e
; 2(1;1 2)2 2 2
2 1
x;
1 2
y
2
dx:e
1 ; 2 y2 2
2
using the substitution u =1 y2 2 2 2
;1
p
1
Thus Y is normal with mean 0 and variance 2 . 2 Conditional density. From the expressions
; = p1 e 2 2
;2
1
1
x;
1 2
dx y , du = p1; 2
1
:
fX Y (x y) =
2
1 2
1 p
1;
e 2
1 ; 2(1; 2 )
1 x 2 1 2
;;
1y 2 2
e; 2
1 y2
2 2
1y 2 fY (y) = p 1 e; 2 2 2 2
we have
fX jY (xjy) = fXfY (x y) Y (y) 2 1 1 p 1 e; 2(1; 2) 12 x ; 21 y : 1 = p 2 1 1; 2 In the x-variable, fX jY (xjy) is a normal density with mean 21 y and variance (1 ; 2 ) Z1 IE X jY = y] = xf (xjy) dx = 1 y 2 ;1 X jY IE
2. 1
Therefore,
"
X;
1 2
=
Z1
y
2
Y =y1 2
#
;1 = (1 ; 2 ) 2 : 1
x;
y fX jY (xjy) dx 2
From the above two formulas we have the formulas
IE X jY ] = IE
1 2
Y
(7.3)
"
X;
1 2
Y
2
Y = (1 ; 2 ) 2: 1
#
(7.4)
Taking expectations in (7.3) and (7.4) yields
IEX = IE
1 2
IEY = 0
(7.5)
"
X;
1 2
Y
2
#
= (1 ; 2 ) 2: 1
(7.6)
Based on Y , the best estimator of X is 21 Y . This estimator is unbiased (has expected error zero) and the expected square error is (1 ; 2 ) 2. No other estimator based on Y can have a smaller expected square error 1 (Homework problem 2.1).
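The conditional-expectation formulas (7.3) and (7.4) are easy to confirm by simulation. The sketch below is an added illustration (the parameter values are arbitrary): it draws jointly normal pairs with correlation ρ, compares the sample regression of X on Y with (ρσ_1/σ_2)Y, and compares the residual variance with (1 - ρ²)σ_1².

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma2, rho = 2.0, 0.5, 0.7
n = 200_000

# build (X, Y) from independent standard normals so that corr(X, Y) = rho
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
X = sigma1 * z1
Y = sigma2 * (rho * z1 + np.sqrt(1 - rho**2) * z2)

beta = np.mean(X * Y) / np.mean(Y * Y)           # least-squares slope of X on Y
print("sample slope      :", beta)
print("rho*sigma1/sigma2 :", rho * sigma1 / sigma2)

resid = X - (rho * sigma1 / sigma2) * Y
print("sample residual variance :", resid.var())
print("(1 - rho^2) * sigma1^2   :", (1 - rho**2) * sigma1**2)
```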
11.8 Multivariate Normal DistributionPlease see Oksendal Appendix A. Let denote the column vector of random variables (X1 X2 : : : Xn )T , and the corresponding has a multivariate normal distribution if and only if column vector of values (x1 x2 : : : xn )T . the random variables have the joint density
X
X
x
p o n fX(x) = (2detnA exp ; 1 (X ; )T:A:(X ; ) : 2 ) =2=(4
Here, and A is an n
T T n ) = IE X = (IEX1 : : : IEXn) n nonsingular matrix. A 1 is the covariance matrix1
:::;
4
;
A 1 = IE (X ; ):(X ; )T
h
i
i.e. the (i j )th element of A;1 is IE (Xi ; i )(Xj ; j ). The random variables in if and only if A;1 is diagonal, i.e.,
X are independent
A 1 = diag(;
2 2 1 2
:::
2) n
where 2 j
= IE (Xj ; j )2 is the variance of Xj .
11.9 Bivariate normal distribution

Take n = 2 in the above definitions, and let

ρ = IE[(X_1 - μ_1)(X_2 - μ_2)]/(σ_1 σ_2).

Thus,

A^{-1} = [ σ_1²        ρ σ_1 σ_2 ]
         [ ρ σ_1 σ_2   σ_2²      ],

A = (1/(1 - ρ²)) [ 1/σ_1²          -ρ/(σ_1 σ_2) ]
                 [ -ρ/(σ_1 σ_2)    1/σ_2²       ],

det A = 1/(σ_1² σ_2² (1 - ρ²)),

and we have the formula from Example 11.1, adjusted to account for the possibly non-zero expectations:

f_{X_1, X_2}(x_1, x_2) = (1/(2π σ_1 σ_2 √(1 - ρ²))) exp{ -(1/(2(1 - ρ²))) [ (x_1 - μ_1)²/σ_1² - 2ρ(x_1 - μ_1)(x_2 - μ_2)/(σ_1 σ_2) + (x_2 - μ_2)²/σ_2² ] }.
11.10 MGF of jointly normal random variables

Let u = (u_1, u_2, ..., u_n)^T denote a column vector with components in IR, and let X have a multivariate normal distribution with covariance matrix A^{-1} and mean vector μ. Then the moment generating function is given by

IE e^{u^T X} = ∫_{-∞}^∞ ... ∫_{-∞}^∞ e^{u^T x} f_{X_1, ..., X_n}(x_1, ..., x_n) dx_1 ... dx_n
            = exp{ (1/2) u^T A^{-1} u + u^T μ }.

If any n random variables X_1, X_2, ..., X_n have this moment generating function, then they are jointly normal, and we can read out the means and covariances. The random variables are jointly normal and independent if and only if, for any real column vector u = (u_1, ..., u_n)^T,

IE exp{ Σ_{j=1}^n u_j X_j } = Π_{j=1}^n exp{ (1/2) σ_j² u_j² + u_j μ_j }.

σ > 0, the volatility. S_0 > 0, the initial stock price. μ ∈ IR, the mean rate of return.
The stock price process is then given by
Sk = S0 exp Bk + ( ; 1 2 )k 2Note that
n
o
k = 0 : : : n:
1 Sk+1 = Sk exp Yk+1 + ( ; 2 2 )
n
o
IE Sk+1jF k ] = Sk IE e Yk+1 jF k ]:e 1 2 1 2 = Sk e 2 e 2 = e Sk :;
1 2
2
Thus
k +1 = log IE SkS jF k ] = log IE SS+1 F k k k k 1 var log SS+1 = var Yk+1 + ( ; 2 2) = 2 : k
and
12.3 Remainder of the MarketThe other processes in the market are dened as follows. Money market process: Portfolio process:
Mk = erk k = 0 1 : : : n:
0Each
1
k is F k -measurable.
:::
n 1;
Wealth process:
X0 given, nonrandom. Xk+1 ==Each Xk is F k -measurable. Discounted wealth process:
r k Sk+1 + e (Xk ; k Sk ) r r k (Sk+1 ; e Sk ) + e Xk
Xk+1 Mk+1 =12.4 Risk-Neutral Measure
k
Sk+1 Sk Xk Mk+1 ; Mk + Mk :
f Denition 12.1 Let I be a probability measure on ( P n oSk Mk
n f f is a martingale under I , we say that I is a risk-neutral measure. P P k=0
F ), equivalent to the market measure I . If P
f Theorem 4.36 If I is a risk-neutral measure, then every discounted wealth process P f a martingale under I , regardless of the portfolio process used to generate it. PProof:
n Xk onMk
k=0
is
f X IE Mk+1 F k k+1
f = IE=
X = Mk : k
k
Sk+1 Sk Xk Mk+1 ; Mk + Mk F k S X f S IE Mkk+1 F k ; Mkk + Mk +1 kk
12.5 Risk-Neutral PricingLet Vn be the payoff at time n, and say it is F n -measurable. Note that Vn may be path-dependent. Hedging a short position: Sell the simple European derivative security V n . Receive X0 at time 0.
::: n f P If there is a risk-neutral measure I , thenConstruct a portfolio process
0
;
1 which starts with X 0 and ends with Xn
= Vn .
fX f V X0 = IE Mn = IE Mnn : nRemark 12.1 Hedging in this semi-continuous model is usually not possible because there are not enough trading dates. This difculty will disappear when we go to the fully continuous model.
12.6 ArbitrageDenition 12.2 An arbitrage is a portfolio which starts with X 0
= 0 and ends with Xn satisfying
IP (Xn 0) = 1 IP (Xn > 0) > 0:(I here is the market measure). P Theorem 6.37 (Fundamental Theorem of Asset Pricing: Easy part) If there is a risk-neutral measure, then there is no arbitrage.
f measure, let X0 = 0, and let Xn be the nal wealth corresponding P Proof: Let I be a risk-neutraln on f is a martingale under I , P to any portfolio process. Since XkMkk=0
fX fX IE Mn = IE M00 = 0: nSuppose IP (Xn
(6.1)
0) = 1. We have(6.2)
f f IP (Xn 0) = 1 =) IP (Xn < 0) = 0 =) IP (Xn < 0) = 0 =) IP (Xn 0) = 1: f (6.1) and (6.2) imply I (Xn PThis is not an arbitrage.
= 0) = 1. We have
f f IP (Xn = 0) = 1 =) IP (Xn > 0) = 0 =) IP (Xn > 0) = 0:
12.7 Stalking the Risk-Neutral MeasureRecall that
Y1 Y2 : : : Yn are independent, standard normal random variables on some probability space( F P). o n Sk = S0 exp Bk + ( ; 1 2)k . 2
Sk+1 = S0 exp (Bk + Yk+1 ) + ( ; 1 2)(k + 1) o 2 n 1 2) : = Sk exp Yk+1 + ( ; 2Therefore,
n
o
Sk+1 = Sk : exp n Yk+1 + ( ; r ; 1 2)o 2 Mk+1 Mk
If
S S IE Mkk+1 F k = Mkk :IE exp f Yk+1 gjF k ] : expf ; r ; 1 2g 2 +1 S = Mkk : expf 1 2 g: expf ; r ; 1 2g 2 2 Sk : = e r : Mk = r, the market measure is risk neutral. If 6= r, we must seek further.;
Sk+1 = Sk : exp n Yk+1 + ( ; r ; 1 2 )o 2 Mk+1 Mk o n S = Mkk : exp (Yk+1 + r ) ; 1 2 2 n ~ o S = Mkk : exp Yk+1 ; 1 2 2;
where The quantity;
f ~ P We want a probability measure I under which Y1 dom variables. Then we would have
r is denoted and is called the market price of risk.
~ Yk+1 = Yk+1 +
;
r:
~ : : : Yn are independent, standard normal ran-
i S fh 1 f S ~ IE Mkk+1 F k = Mkk :IE expf Yk+1 gjF k : expf; 2 2g +1 S 1 = Mkk : expf 2 2 g: expf; 1 2g 2 S = Mkk : 3 2n X 1 Z = exp 4 (; Yj ; 2 2 )5 :j =1
Cameron-Martin-Girsanovs Idea: Dene the random variable
Properties of Z :
Z 0.
9 8n = 0 almost surely.
f IP (A) =
ZA
Z dIP 8A 2 F
f IEX = IE (XZ ) for every random variable X . f ~ ~ Compute the moment generating function of (Y1 : : : Yn ) under I : P 2n 3 2n 3 n X ~5 X X f IE exp 4 uj Yj = IE exp 4 uj (Yj + ) + (; Yj ; 1 2)5 2 j =1 j =1 j =1 3 2n 3 2n X X = IE exp 4 (uj ; )Yj 5 : exp 4 (uj ; 1 2 )5 2 j =1 j =1 2n 3 2n 3 X1 X 1 = exp 4 2 (uj ; )25 : exp 4 (uj ; 2 2 )5 j =1 2j=1 3 n X 1 2 = exp 4 ( 2 uj ; uj + 1 2 ) + (uj ; 1 2) 5 2 2 j =1 2n 3 X = exp 4 1 u2 5 : 2 jj =1
12.8 Pricing a European Call

Stock price at time n is

S_n = S_0 exp{ σ B_n + (μ - σ²/2) n }
    = S_0 exp{ σ Σ_{j=1}^n Y_j + (μ - σ²/2) n }
    = S_0 exp{ σ Σ_{j=1}^n (Y_j + (μ - r)/σ) - (μ - r) n + (μ - σ²/2) n }
    = S_0 exp{ σ Σ_{j=1}^n Ỹ_j + (r - σ²/2) n }.

Payoff at time n is (S_n - K)^+. Price at time zero is

ĨE[ (S_n - K)^+ / M_n ] = ĨE[ e^{-rn} ( S_0 exp{ σ Σ_{j=1}^n Ỹ_j + (r - σ²/2) n } - K )^+ ]
 = ∫_{-∞}^∞ e^{-rn} ( S_0 exp{ σ b + (r - σ²/2) n } - K )^+ (1/√(2πn)) e^{-b²/(2n)} db,

since Σ_{j=1}^n Ỹ_j is normal with mean 0 and variance n under ĨP. This is the Black-Scholes price. It does not depend on μ.
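The price obtained above can be evaluated either by numerical integration of the normal integral or in the familiar closed form. The sketch below is an added illustration (the parameter values, the maturity n, and the helper names are arbitrary choices); it computes both and checks that they agree and do not involve μ.

```python
import numpy as np
from math import erf, exp, log, sqrt, pi

S0, K, r, sigma, n = 100.0, 105.0, 0.03, 0.2, 1.0

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# closed-form Black-Scholes call price with maturity n
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * n) / (sigma * sqrt(n))
d2 = d1 - sigma * sqrt(n)
bs = S0 * norm_cdf(d1) - K * exp(-r * n) * norm_cdf(d2)

# the risk-neutral expectation as an integral over b ~ N(0, n)
b = np.linspace(-10 * sqrt(n), 10 * sqrt(n), 200_001)
payoff = np.maximum(S0 * np.exp(sigma * b + (r - 0.5 * sigma**2) * n) - K, 0.0)
density = np.exp(-b**2 / (2 * n)) / sqrt(2 * pi * n)
integral = exp(-r * n) * np.sum(payoff * density) * (b[1] - b[0])

print(bs, integral)    # the two numbers agree up to the discretization of the integral
```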
Chapter 13
Brownian Motion13.1 Symmetric Random WalkToss a fair coin innitely many times. Dene
Xj (!) = 1Set
(
;1
if if
!j = H ! j = T:
M0 = 0 Mk =
k X j =1
Xj
k 1:
13.2 The Law of Large NumbersWe will use the method of moment generating functions to derive the Law of Large Numbers:
Theorem 2.38 (Law of Large Numbers:)
1 M !0 k k
almost surely, as 139
k!1:
140 Proof:
'k (u) = IE exp u Mk 9 8kk 0). dn = 1 ; n : Down factor. r = 0. pn = u1n ddnn = 2 == nn = 1 . ~ 2 qn = 1 . ~ 2p p p ; ; p
146 Let ]k (H ) denote the number of H in the rst k tosses, and let ] k (T ) denote the number of T in the rst k tosses. Then
]k (H ) + ]k (T ) = k ]k (H ) ; ]k (T ) = Mk
which implies,
]k (H ) = 1 (k + Mk ) 2 ]k (T ) = 1 (k ; Mk ): 2In the nth model, take n steps per unit time. Set S0
f P Under I , the price process S (n) is a martingale.
S (n)(t) = 1 + pn
(n) = 1. Let t = k for some k, and let n 1 (nt+Mnt ) 1 (nt Mnt ) 2 2
1 ; pn
;
:
Theorem 10.42 As n!1, the distribution of S (n) (t) converges to the distribution of where B is a Brownian motion. Note that the correction martingale. Proof: Recall that from the Taylor series we have
expf B (t) ; 1 2 tg 2
; 1 2t is necessary in order to have a 2
log(1 + x) = x ; 1 x2 + O(x3) 2so
log S (n)(t) = 1 (nt + Mnt ) log(1 + p ) + 1 (nt ; Mnt ) log(1 ; p ) 2 2 = nt1 log(1 + 2
+ Mnt
1 2 + O(n 3=2 ) = nt ;4 n ; 4n ! 1 2 + 1 p + 1 2 + O(n 3=2 ) + Mnt 1 pn ; 4 n 2 n 4 n 21p 2 n 1 2 pn ;; ;
1 2 log(1 + 1 2
pn ) + 1 log(1 ; pn ) 2
n
n
pn ) ; 1 log(1 ; pn ) 2
!
1 = ; 2 2 t + O (n 2 ) 1 1 1 + pn Mnt + n Mnt O(n 2 ); ;
1
| {z } | {z } !0 !Bt
As n!1, the distribution of log S (n) (t) approaches the distribution of
B(t) ; 1 2t. 2
B(t) = B(t,) x t
(, F, Px)Figure 13.3: Continuous-time Brownian Motion, starting at x 6= 0.
13.11 Starting at Points Other Than 0(The remaining sections in this chapter were taught Dec 7.) For a Brownian motion B (t) that starts at 0, we have:
IP (B(0) = 0) = 1:
For a Brownian motion B (t) that starts at x, denote the corresponding probability measure by IP x (See Fig. 13.3), and for such a Brownian motion we have:
IP x (B(0) = x) = 1:Note that: If x 6= 0, then IP x puts all its probability on a completely different set from I . P The distribution of B (t) under IP x is the same as the distribution of x + B (t) under I . P
13.12 Markov Property for Brownian MotionWe prove that Theorem 12.43 Brownian motion has the Markov property. Proof: Let s
0
2 6 ) IE h(B (s + t)) F (s) = IE 6h( B(s + t{z; B(s) + 4 | }Independent of F (s)F
t 0 be given (See Fig. 13.4).
(s)-measurable
B{zs) |( }
3 7 ) F (s)7 5
B(s) s restartFigure 13.4: Markov Property of Brownian Motion. Use the Independence Lemma. Dene
s+t
g (x) = IE h( B (s + t) ; B(s) + x )]
2 6 = IE 6h( x + 4
= IE xh(B (t)):Then
same distribution as B (s + t) ; B (s)
B (t) |{z}
3 7 )7 5
IE h (B(s + t) ) F (s) = g (B(s)) = E B (s)h(B (t)):
In fact Brownian motion has the strong Markov property.Example 13.1 (Strong Markov Property) See Fig. 13.5. Fix x > 0 and dene
= min ft 0 B(t) = xg :Then we have:
IE h( B( + t) ) F ( ) = g(B( )) = IE x h(B(t)):
x restartFigure 13.5: Strong Markov Property of Brownian Motion.
+t
13.13 Transition Density

Let p(t; x, y) be the transition density for the Brownian motion to move from x to y in time t, and let τ be defined as in the previous section. Then

p(t; x, y) = (1/√(2πt)) e^{-(y-x)²/(2t)},

g(x) = IE^x h(B(t)) = ∫_{-∞}^∞ h(y) p(t; x, y) dy,

and

IE[ h(B(s + t)) | F(s) ] = g(B(s)) = ∫_{-∞}^∞ h(y) p(t; B(s), y) dy,
IE[ h(B(τ + t)) | F(τ) ] = ∫_{-∞}^∞ h(y) p(t; B(τ), y) dy.

13.14 First Passage Time

Fix x > 0. Define

τ = min{t ≥ 0 : B(t) = x}.

Fix θ > 0. Then

exp{ θ B(t ∧ τ) - (1/2) θ² (t ∧ τ) }

is a martingale, and

IE exp{ θ B(t ∧ τ) - (1/2) θ² (t ∧ τ) } = 1.

Letting t → ∞ (the expression is bounded by e^{θx}, and it tends to 0 on {τ = ∞} because θ > 0), we obtain

IE[ 1_{{τ < ∞}} e^{-(1/2)θ²τ} ] = e^{-θx},  θ > 0.   (14.3)

Letting θ ↓ 0, we obtain IP{τ < ∞} = 1, so (14.3) may be rewritten as IE e^{-(1/2)θ²τ} = e^{-θx}. In terms of λ = θ²/2 this reads IE e^{-λτ} = e^{-x√(2λ)}; differentiation with respect to λ yields

-IE[ τ e^{-λτ} ] = -(x/√(2λ)) e^{-x√(2λ)}.

Letting λ ↓ 0, we obtain

IE τ = ∞.   (14.4)

Conclusion. Brownian motion reaches level x with probability 1. The expected time to reach level x is infinite.

We use the Reflection Principle below (see Fig. 13.6):

IP{τ ≤ t, B(t) < x} = IP{B(t) > x},
IP{τ ≤ t} = IP{τ ≤ t, B(t) < x} + IP{τ ≤ t, B(t) > x}
          = IP{B(t) > x} + IP{B(t) > x}
          = 2 IP{B(t) > x}
          = (2/√(2πt)) ∫_x^∞ e^{-y²/(2t)} dy.

Figure 13.6: Reflection Principle in Brownian Motion: the Brownian path and its shadow path reflected about the level x.

Using the substitution z = y/√t, dz = dy/√t, we get

IP{τ ≤ t} = (2/√(2π)) ∫_{x/√t}^∞ e^{-z²/2} dz.

Density:

f_τ(t) = (∂/∂t) IP{τ ≤ t} = (x/√(2πt³)) e^{-x²/(2t)},

which follows from the fact that if

F(t) = ∫_{a(t)}^b g(z) dz,

then

∂F/∂t = -(∂a/∂t) g(a(t)).

Laplace transform formula:

IE e^{-λτ} = ∫_0^∞ e^{-λt} f_τ(t) dt = e^{-x√(2λ)}.
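The reflection-principle identity IP{τ ≤ t} = 2 IP{B(t) > x} can be checked on simulated paths. The sketch below is an added illustration (step size, horizon and level are arbitrary): it discretizes Brownian motion on a grid, records the first time each path reaches the level x, and compares the empirical distribution function with 2(1 - Φ(x/√t)). The discrete grid misses crossings between grid points, so the simulated values sit slightly below the exact ones.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
x_level, T, n_steps, n_paths = 1.0, 4.0, 2_000, 4_000
dt = T / n_steps

increments = rng.standard_normal((n_paths, n_steps)) * sqrt(dt)
paths = np.cumsum(increments, axis=1)                  # B(dt), ..., B(T)
hit = paths >= x_level
first = hit.argmax(axis=1)                             # first grid index at/above x
tau = np.where(hit.any(axis=1), (first + 1) * dt, np.inf)

def cdf_tau(t):
    """P(tau <= t) = 2 P(B(t) > x) = 2 (1 - Phi(x / sqrt(t)))."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(x_level / sqrt(t) / sqrt(2.0))))

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, np.mean(tau <= t), cdf_tau(t))
```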
Chapter 14
The It Integral oThe following chapters deal with Stochastic Differential Equations in Finance. References: 1. B. Oksendal, Stochastic Differential Equations, Springer-Verlag,1995 2. J. Hull, Options, Futures and other Derivative Securities, Prentice Hall, 1993.
14.1 Brownian Motion(See Fig. 13.3.) ( F P) is given, always in the background, even when not explicitly mentioned. !IR, has the following properties: Brownian motion, B (t ! ) : 0 1) 1. 2. 3.
B (0) = 0 Technically, IP f! B(0 !) = 0g = 1, B (t) is a continuous function of t, If 0 = t0 t1 : : : tn , then the increments B (t1) ; B (t0 ) : : : B(tn ) ; B(tn 1 );
are independent,normal, and
IE B (tk+1 ) ; B(tk )] = 0 IE B (tk+1 ) ; B(tk )]2 = tk+1 ; tk :14.2 First VariationQuadratic variation is a measure of volatility. First we will consider rst variation, function f (t). 153
FV (f ), of a
Figure 14.1: Example function f (t). For the function pictured in Fig. 14.1, the rst variation over the interval
0 T ] is given by:
FV 0 T ](f ) = f (t1) ; f (0)] ; f (t2) ; f (t1 )] + f (T ) ; f (t2 )]= f (t) dt + (;f (t)) dt + f (t) dt:0 0 0
Zt10 ZT 0
Zt2
ZT
t1
t2
= jf (t)j dt:0
Thus, rst variation measures the total amount of up and down motion of the path. The general denition of rst variation is as follows: Denition 14.1 (First Variation) Let
= ft0 t1 : : : tn g be a partition of 0 T ], i.e.,
0 = t0 t1 : : : tn = T:The mesh of the partition is dened to be
jj jj = k=0max 1(tk+1 ; tk ): ::: n;
We then dene
FV 0 T ](f ) = lim 0 !jj jj
n 1 X;
k=0
jf (tk+1) ; f (tk )j:
Suppose f is differentiable. Then the Mean Value Theorem implies that in each subinterval t k tk+1 ], there is a point tk such that
f (tk+1 ) ; f (tk ) = f (tk )(tk+1 ; tk ):0
Then
n 1 X;
k=0and
jf (tk+1) ; f (tk )j =
n 1 X;
k=0;
jf (tk )j(tk+1 ; tk )0 0
FV 0 T ](f ) = lim 0 !jj jj
n 1 X k=0
jf (tk )j(tk+1 ; tk )
= jf (t)j dt:0
ZT0
14.3 Quadratic Variation

Definition 14.2 (Quadratic Variation) The quadratic variation of a function f on an interval [0, T] is

⟨f⟩(T) = lim_{||π|| → 0} Σ_{k=0}^{n-1} |f(t_{k+1}) - f(t_k)|².

Remark 14.1 (Quadratic Variation of Differentiable Functions) If f is differentiable, then ⟨f⟩(T) = 0, because

Σ_{k=0}^{n-1} |f(t_{k+1}) - f(t_k)|² = Σ_{k=0}^{n-1} |f'(t_k^*)|² (t_{k+1} - t_k)² ≤ ||π|| Σ_{k=0}^{n-1} |f'(t_k^*)|² (t_{k+1} - t_k),

and so

⟨f⟩(T) ≤ lim_{||π|| → 0} ||π|| · lim_{||π|| → 0} Σ_{k=0}^{n-1} |f'(t_k^*)|² (t_{k+1} - t_k) = lim_{||π|| → 0} ||π|| · ∫_0^T |f'(t)|² dt = 0.

Theorem 3.44

⟨B⟩(T) = T,

or more precisely,

IP{ ω ∈ Ω : ⟨B(·, ω)⟩(T) = T } = 1.

In particular, the paths of Brownian motion are not differentiable.
156 Proof: (Outline) Let = ft0 t1 : : : tn g be a partition of B (tk+1 ) ; B(tk ). Dene the sample quadratic variation
0 T ]. To simplify notation, set D k =
Q =Then
n 1 X;
k=0
2 Dk :
Q ;T =We want to show thatjj jj
n 1 X;
k=0
2 Dk ; (tk+1 ; tk )]:
lim (Q ; T ) = 0:
!0
Consider an individual summand
2 Dk ; (tk+1 ; tk ) = B(tk+1 ) ; B(tk )]2 ; (tk+1 ; tk ):This has expectation 0, so
IE (Q ; T ) = IEFor j
n 1 X;
6= k, the termsvar(Q ; T ) = = =n 1 X;
k=0
2 Dk ; (tk+1 ; tk )] = 0: 2 Dk ; (tk+1 ; tk )
Dj2 ; (tj+1 ; tj )
and
are independent, so
k=0 n 1 X;
2 var Dk ; (tk+1 ; tk )] 4 2 IE Dk ; 2(tk+1 ; tk )Dk + (tk+1 ; tk )2]
k=0 n 1 X;
k=0
3(tk+1 ; tk )2 ; 2(tk+1 ; tk )2 + (tk+1 ; tk )2 ] (tk+1 ; tk )2n 1 X;
=2
n 1 X;
(if X is normal with mean 0 and variance 2, then IE (X 4) = 3 4)
k=0
2jj jj
= 2jj jj T:Thus we have
k=0
(tk+1 ; tk )
IE (Q ; T ) = 0 var(Q ; T ) 2jj jj:T:
CHAPTER 14. The It Integral oAs jj
157
jj!0, var(Q ; T )!0, sojj
lim (Q ; T ) = 0:jj
!0
Remark 14.2 (Differential Representation) We know that
IE (B(tk+1 ) ; B(tk ))2 ; (tk+1 ; tk )] = 0:We showed above that
var (B (tk+1 ) ; B (tk ))2 ; (tk+1 ; tk )] = 2(tk+1 ; tk )2 :When (tk+1 ; tk ) is small, (tk+1 ; tk )2 is very small, and we have the approximate equation
(B (tk+1 ) ; B (tk ))2 ' tk+1 ; tkwhich we can write informally as
dB(t) dB(t) = dt:
14.4 Quadratic Variation as Absolute VolatilityOn any time interval
T1 T2], we can sample the Brownian motion at times T1 = t0 t1 : : : tn = T2 T2 ; T1 k=0(B(tk+1 ) ; B(tk )) :2
and compute the squared sample absolute volatility
1
n 1 X;
This is approximately equal to
hBi(T2) ; hBi(T1)] = T2 ; T1 = 1: T2 ; T1 T2 ; T11As we increase the number of sample points, this approximation becomes exact. In other words, Brownian motion has absolute volatility 1. Furthermore, consider the equation
hBi(T ) = T = 1 dt0
ZT
8T 0:
This says that quadratic variation for Brownian motion accumulates at rate 1 at all times along almost every path.
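The statement that quadratic variation accumulates at rate 1 can be seen directly on simulated paths. The sketch below is an added illustration with arbitrary grid sizes: it computes the sample quadratic variation Σ (B(t_{k+1}) - B(t_k))² of one simulated Brownian path over [0, T] for successively finer partitions; each value has expectation T, and the fluctuations shrink as the mesh decreases.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_fine = 1.0, 2**18
dB = rng.standard_normal(n_fine) * np.sqrt(T / n_fine)
B = np.concatenate(([0.0], np.cumsum(dB)))         # one Brownian path on the finest grid

for m in (2**8, 2**12, 2**16, 2**18):              # coarser partitions of [0, T]
    sub = B[:: n_fine // m]                        # sample the same path at m intervals
    qv = np.sum(np.diff(sub) ** 2)                 # sample quadratic variation
    print(m, qv)                                   # settles near T = 1 as m grows
```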
14.5 Construction of the It Integral oThe integrator is Brownian motion following properties: 1. 2. 3.
B(t) t
0, with associated ltration F (t) t
0, and the
s t=) every set in F (s) is also in F (t), B (t) is F (t)-measurable, 8t, For t t1 : : : tn , the increments B (t1 ) ; B (t) B (t2 ) ; B (t1 ) : : : B (tn ) ; B (tn 1 ) are independent of F (t).;
The integrand is 1. 2.
(t) t 0, whereis adapted)
(t) is F (t)-measurable 8t (i.e.,is square-integrable:
IEWe want to dene the It Integral: o
ZT0
2(t) dt < 1
8T:
I (t) =
Zt0
(u) dB (u)
t 0: f (t) is a differentiable function, then
Remark 14.3 (Integral w.r.t. a differentiable function) If we can dene
Zt0
(u) df (u) =
Zt0
(u)f (u) du:0
This wont work when the integrator is Brownian motion, because the paths of Brownian motion are not differentiable.
14.6 It integral of an elementary integrand oLet
= ft0 t1 : : : tn g be a partition of 0 T ], i.e., 0 = t0 t1 : : : tn = T:an
Assume that (t) is constant on each subinterval t k tk+1 ] (see Fig. 14.2). We call such a elementary process. The functions B (t) and
(tk ) can be interpreted as follows:
Think of B (t) as the price per unit share of an asset at time t.
Figure 14.2: An elementary process δ(t).

Think of t_0, t_1, ..., t_n as the trading dates for the asset, and think of δ(t_k) as the number of shares of the asset acquired at trading date t_k and held until trading date t_{k+1}. Then the Itô integral I(t) can be interpreted as the gain from trading at time t; this gain is given by:
I(t) = δ(t_0)[B(t) - B(t_0)] = δ(t_0) B(t),  0 ≤ t ≤ t_1  (since B(t_0) = B(0) = 0),
I(t) = δ(t_0)[B(t_1) - B(t_0)] + δ(t_1)[B(t) - B(t_1)],  t_1 ≤ t ≤ t_2,
I(t) = δ(t_0)[B(t_1) - B(t_0)] + δ(t_1)[B(t_2) - B(t_1)] + δ(t_2)[B(t) - B(t_2)],  t_2 ≤ t ≤ t_3.

In general, if t_k ≤ t ≤ t_{k+1},

I(t) = Σ_{j=0}^{k-1} δ(t_j)[B(t_{j+1}) - B(t_j)] + δ(t_k)[B(t) - B(t_k)].
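The gain-from-trading formula can be implemented directly for an elementary integrand. The sketch below is an added illustration: the particular choice δ(t_k) = sign(B(t_k)), i.e. holding plus or minus one share over [t_k, t_{k+1}), and the grid sizes are arbitrary. It evaluates I(T) = Σ δ(t_j)[B(t_{j+1}) - B(t_j)] on simulated paths and checks the martingale property IE I(T) = 0 together with the isometry property established in the next section.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 256, 10_000
dt = T / n_steps

dB = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B(t_k), used on [t_k, t_{k+1})

delta = np.sign(B_left)              # an adapted elementary integrand: hold +/-1 share
I_T = np.sum(delta * dB, axis=1)     # I(T) = sum_j delta(t_j) [B(t_{j+1}) - B(t_j)]

print("IE I(T)            ~", I_T.mean())                          # martingale: ~ 0
print("IE I(T)^2          ~", (I_T**2).mean())                     # these two agree
print("IE int delta^2 dt  ~", (delta**2 * dt).sum(axis=1).mean())  # (Ito isometry)
```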
14.7 Properties of the Ito integral of an elementary processAdaptedness For each t Linearity If
I (t) is F (t)-measurable. I (t) =
Zt0
(u) dB (u)
J (t) =( (u)
Zt0
(u) dB (u)
then
I (t) J (t) =
Zt0
(u)) dB (u)
Figure 14.3: Showing s and t in different partitions.

and
cI (t) =Martingale
Zt0
c (u)dB (u):
I (t) is a martingale.
We prove the martingale property for the elementary process case. Theorem 7.45 (Martingale Property)
I (t) =is a martingale.
k 1 X;
j =0
(tj ) B (tj +1 ) ; B (tj )] + (tk ) B (t) ; B (tk )]
tk t tk+1
s t be given. We treat the more difcult case that s and t are in different Proof: Let 0 subintervals, i.e., there are partition points t ` and tk such that s 2 t` t`+1 ] and t 2 tk tk+1 ] (See Fig. 14.3).Write
I (t) =
` 1 X;
j =0
(tj ) B (tj +1 ) ; B (tj )] + (t` ) B (t`+1 ) ; B (t` )]k 1 X;
+
j =`+1
(tj ) B (tj +1 ) ; B (tj )] + (tk ) B (t) ; B (tk )]
We compute conditional expectations:
2` 1 3 `1 X X IE 4 (tj )(B (tj+1 ) ; B(tj )) F (s)5 = (tj )(B (tj +1 ) ; B (tj )):; ;
j =0
j =0
IE (t`)(B (t`+1 ) ; B(t` )) F (s) = (t` ) (IE B(t`+1)jF (s)] ; B(t` )) = (t` ) B (s) ; B (t` )]
CHAPTER 14. The It Integral oThese rst two terms add up to I (s). We show that the third and fourth terms are zero.; ;
161
2k 1 3 k1 X X IE 4 (tj )(B (tj +1 ) ; B (tj )) F (s)5 = IE IE (tj )(B (tj +1 ) ; B (tj )) F (tj ) F (s) j =`+1 j =`+1 3 2 k 1 X 6 = IE 4 (tj ) (IE B (tj +1)jF (tj )] ; B(tj )) F (s)7 5 {z } | j =`+1 =0 2 3 IE (tk )(B (t) ; B(tk )) F (s) = IE 6 (tk ) |IE B(t)jF ({z )] ; B(tk )) F (s)7 tk 4 ( 5 };
=0
Theorem 7.46 (It Isometry) o
IEI 2(t) = IEProof: To simplify notation, assume t = t k , so
Zt0
2 (u) du:
I (t) =
k X
j =0
(tj ) B (tj +1 ){z B (tj )] ; } |Dj
Each Dj has expectation 0, and different Dj are independent.
0k 12 X I 2(t) = @ (tj )Dj A
j =0 k X 2 X 2 = (tj )Dj + 2 j =0 i 0. Let IE
be a process (not necessarily an elementary process) such that
(t) is F (t)-measurable, 8t 2 0 T ],
R T 2(t) dt < 1: 0
Theorem 8.47 There is a sequence of elementary processes f n g1 such that n=1
nlim IE !1
ZT0
j n(t) ; (t)j2 dt = 0:
Proof: Fig. 14.4 shows the main idea.
In the last section we have dened
In (T ) =for every n. We now dene
ZT0
n (t) dB (t)
ZT0
(t) dB (t) = nlim !
ZT0
1
n (t) dB (t):
CHAPTER 14. The It Integral o
163
The only difculty with this approach is that we need to make sure the above limit exists. Suppose n and m are large positive integers. Then
var(In (T ) ; Im (T )) = IE(It Isometry:) = IE o
ZT0 T
Z0 0
n (t) ; m (t)] dB (t)2 n (t) ; m (t)] dt
!2
((a + b)2
j n(t) ; (t)j + j (t) ; m(t)j ]2 dt ZT ZT 2a2 + 2b2 :) 2IE j n (t) ; (t)j2 dt + 2IE j m (t) ; (t)j2 dt= IE0 01
ZT
which is small. This guarantees that the sequence fI n (T )gn=1 has a limit.
14.9 Properties of the (general) It o integral
I (t) =
Zt0
(u) dB (u):
Here is any adapted, square-integrable process. Adaptedness. For each t, I (t) is F (t)-measurable. Linearity. If
I (t) =then
Zt0
(u) dB (u)
J (t) =( (u)
Zt0
(u) dB (u)
I (t) J (t) =
Zt0
(u)) dB (u)
and
cI (t) =
Zt0
c (u)dB (u):
Martingale.
I (t) is a martingale. Continuity. I (t) is a continuous function of the upper limit of integration t. Rt It Isometry. IEI 2(t) = IE 0 2 (u) du. o
Example 14.1 Consider the Itô integral
\[
\int_0^T B(u)\,dB(u).
\]
We approximate the integrand as shown in Fig. 14.5.
Figure 14.5: Approximating the integrand $B(u)$ with $\delta_4$, over $[0, T]$.
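The computation that follows shows this integral equals $\frac12 B^2(T) - \frac12 T$. A quick single-path check of that limit (a sketch; the grid size is an arbitrary choice) compares the approximating sum $\sum_k B\bigl(\tfrac{kT}{n}\bigr)\bigl[B\bigl(\tfrac{(k+1)T}{n}\bigr) - B\bigl(\tfrac{kT}{n}\bigr)\bigr]$ with the closed form.

import numpy as np

rng = np.random.default_rng(5)

T, n = 1.0, 1_000_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

approx_sum = np.sum(B[:-1] * dB)                 # sum_k B_k (B_{k+1} - B_k)
closed_form = 0.5 * B[-1] ** 2 - 0.5 * T         # (1/2) B^2(T) - (1/2) T
print(f"approximating sum: {approx_sum:.5f}    (1/2)B^2(T) - T/2: {closed_form:.5f}")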
8 >B(0) = 0 > : : : > (n;1)T :BTif if if0 u < T=n T=n u < 2T=n(n 1)Tn;u < T:By denition,ZT0B(u) dB(u) = nlim !1n 1; XB kT n k =0B (k + 1)T ; B kT n n:To simplify notation, we denoteso4 Bk = B kT n ZT n;1 X B(u) dB(u) = nlim !1 k=0 Bk (Bk+1 ; Bk ): 0n 1 k =0We compute1 2n 1 k =0; X(Bk+1 ; Bk )2 = 1 2; X2 Bk+1 ;n 1 j =0n 1 k =0; XBk Bk+1 + 1 2n 1 k =0n 1 k =0; X2 Bkn 1 k =01 2 = 2 Bn + 1 2 1 2 = 2 Bn +; XBj2 ;; XBk Bk+1 + 1 2; X2 Bkn 1 k =0 n 1; X ; X2 Bk ;n 1 k =0; XBk Bk+1= Bn ;1 2 2k =0Bk (Bk+1 ; Bk ):CHAPTER 14. The It Integral oTherefore,165n 1 k =0; X2 Bk (Bk+1 ; Bk ) = 1 Bn ; 1 2 2n 1 k =0; X(Bk+1 ; Bk )2n 1 k =0or equivalentlyn 1; XB kT n k =0B (k + 1)T ; B kT n n1 = 1 B 2 (T) ; 2 2; XB (k + 1)T nk T2:Let n!1 and use the denition of quadratic variation to getZT01 B(u) dB(u) = 2 B 2 (T) ; 1 T: 2Remark 14.4 (Reason for the 1 T term) If f is differentiable with f (0) = 0, then 2ZT0f (u) df (u) =ZT0f (u)f (u) du0= 1 f 2 (u) 2In contrast, for Brownian motion, we haveT0= 1 f 2 (T ): 2ZT0B (u)dB(u) = 1 B 2(T ) ; 1 T: 2 2The extra term 1 T comes from the nonzero quadratic variation of Brownian motion. It has to be 2 there, because ZIET0B (u) dB (u) = 0(It integral is a martingale) obutIE 1 B 2 (T ) = 1 T: 2 214.10 Quadratic variation of an It integral oTheorem 10.48 (Quadratic variation of It integral) Let oI (t) =ThenZt0(u) dB (u):0 2(u) du:hI i(t) =Zt166 This holds even if is not an elementary process. The quadratic variation formula says that at each time u, the instantaneous absolute volatility of I is 2 (u). This is the absolute volatility of the Brownian motion scaled by the size of the position (i.e. (t)) in the Brownian motion. Informally, we can write the quadratic variation formula in differential form as follows:dI (t) dI (t) = 2(t) dt:Compare this withdB(t) dB(t) = dt:(t) =n 1 X;Proof: (For an elementary process ). Let = ft0 t1 : : : tn g be the partition for , i.e., (tk ) for tk t tk+1 . To simplify notation, assume t = tn . We havehI i(t) =Let us compute hI i(tk+1) ; hI i(tk ). Letk=0hI i(tk+1) ; hI i(tk)] :tk = s0Then= fs0 s1 : : : sm g be a partition s1 : : : sm = tk+1 :sZ+1 j sjI (sj+1 ) ; I (sj ) =so(tk ) dB (u)= (tk ) B (sj +1 ) ; B (sj )]hI i(tk+1) ; hI i(tk) =m 1 X;j =0I (sj+1) ; I (sj )]2m 1 X;= 2 (tk )jj jj!0 ;;;;;!It follows thatj =0 2 (tB(sj+1 ) ; B(sj )]2k )(tk+1 ; tk ):hI i(t) ==n 1 X; ;k=0 n 1 tZ +1 X k k=0 tk2(t )(t k k+1 ; tk ) 2 (u) du 0 2(u) du:jj 0 ; jj!! ;;;;;ZtChapter 15It s Formula o15.1 It s formula for one Brownian motion oWe want a rule to differentiate expressions of the form f (B (t)), where f (x) is a differentiable function. If B (t) were also differentiable, then the ordinary chain rule would gived f (B(t)) = f (B(t))B (t) dt0 0which could be written in differential notation asdf (B (t)) = f (B(t))B (t) dt = f (B (t))dB (t)0 0 0However, B (t) is not differentiable, and in particular has nonzero quadratic variation, so the correct formula has an extra term, namely,dt df (B(t)) = f (B(t)) dB (t) + 1 f (B(t)) |{z} : 20 00dB (t) dB (t)This is It s formula in differential form. Integrating this, we obtain It s formula in integral form: o of (B(t)) ; | (B (0)) = f {z }f (0)Zt01 f (B(u)) dB (u) + 20Zt0f (B (u)) du:00Remark 15.1 (Differential vs. Integral Forms) The mathematically meaningful form of It s foro mula is It s formula in integral form: of (B(t)) ; f (B (0)) =Zt0f01 (B (u)) dB (u) + 2 0167Ztf (B (u)) du:00168 This is because we have solid denitions for both integrals appearing on the right-hand side. 
The rst,Zt0f (B(u)) dB (u)0is an It integral, dened in the previous chapter. The second, oZt0f (B (u)) du00is a Riemann integral, the type used in freshman calculus. For paper and pencil computations, the more convenient form of It s rule is It s formula in differo o ential form:df (B (t)) = f (B (t)) dB(t) + 1 f (B (t)) dt: 2 There is an intuitive meaning but no solid denition for the terms df (B (t)) dB (t) and dt appearing0 00in this formula. This formula becomes mathematically respectable only after we integrate it.15.2 Derivation of It s formula oConsider f (x) = 1 x2 , so that 2 Let xkf (x) = x f (x) = 1:0 00 0 00xk+1 be numbers. Taylors formula implies f (xk+1) ; f (xk ) = (xk+1 ; xk )f (xk ) + 1 (xk+1 ; xk )2f (xk ): 2In the general case, the above equation is only approximate, and the error is of the order of (x k+1 ; xk )3 . The total error will have limit zero in the last step of the following argument. Fix TIn this case, Taylors formula to second order is exact because f is a quadratic function.> 0 and let = ft0 t1 : : : tng be a partition of 0 T ]. Using Taylors formula, we write: f (B(T )) ; f (B(0)) 1 = 2 B 2 (T ) ; 1 B 2 (0) 2= = =n 1 X; ;k=0 n 1 X k=0 n 1 X;f (B(tk+1 )) ; f (B (tk ))] B(tk+1 ) ; B (tk )] f (B (tk )) + 1 20n 1 X;k=0B(tk ) B(tk+1 ) ; B(tk )] + 1 2n 1 X;k=0B (tk+1 ) ; B(tk )]2 f (B(tk ))00k=0B(tk+1 ) ; B(tk )]2 :CHAPTER 15. It s Formula oWe let jj169jj!0 to obtainf (B (T )) ; f (B(0)) ==ZT ZT0 0B(u) dB(u) + 1 hB{zT ) 2 | i( } f0T ZT (B (u)) dB (u) + 1 f 2 0 |00(B (u)) du: {z }1This is It s formula in integral form for the special case of (x) = 1 x2 : 215.3 Geometric Brownian motionDenition 15.1 (Geometric Brownian Motion) Geometric Brownian motion isS (t) = S (0) exp B(t) +where Dene andn> 0 are constant. f (t x) = S (0) exp x +1 ;22ton;1 22tosoS (t) = f (t B (t)):Thenft =According to It s formula, o;1 22f fx = f fxx = 2f:dS (t) = df (t B(t)) 1 = ft dt + fx dB + 2 fxx dBdB | {z }=( ; dt + f dB + 1 2 f dt 2 = S (t)dt + S (t) dB (t)1 2 )f 2Thus, Geometric Brownian motion in differential form isdtdS (t) = S (t)dt + S (t) dB (t)and Geometric Brownian motion in integral form isS (t) = S (0) +Zt0S (u) du +Zt0S (u) dB(u):17015.4 Quadratic variation of geometric Brownian motionIn the integral form of Geometric Brownian motion,S (t) = S (0) +the Riemann integralZt0S (u) du +Zt0S (u) dB(u)F (t) =is differentiable with F 0 (t) =Zt0S (u) du S (u) dB (u)2 S 2(u) du:S (t). This term has zero quadratic variation. The It integral o G(t) =Zt0is not differentiable. It has quadratic variationhGi(t) =Zt0Thus the quadratic variation of S is given by the quadratic variation of G. In differential notation, we writedS (t) dS (t) = ( S (t)dt + S (t)dB(t))2 = 2S 2 (t) dt15.5 Volatility of Geometric Brownian motionFix 0 T1 T2. Let = volatility of S on T 1 T2] isft0 : : : tng be a partition of T1 T2]. The squared absolute samplen 1 X;T2 ; T1 k=0 S (tk+1) ; S (tk ' T2 ; T1 T1 2 S 2(T ) ' 11)]21ZT22 S 2(u) duAs T2 # T1 , the above approximation becomes exact. In other words, the instantaneous relative volatility of S is 2 . This is usually called simply the volatility of S .15.6 First derivation of the Black-Scholes formulaWealth of an investor. An investor begins with nonrandom initial wealth X 0 and at each time t, holds (t) shares of stock. Stock is modelled by a geometric Brownian motion:dS (t) = S (t)dt + S (t)dB(t):CHAPTER 15. 
It s Formula olending at interest rate r.171 The investor nances his investing by borrowing or(t) can be random, but must be adapted.Let X (t) denote the wealth of the investor at time t. ThendX (t) = (t)dS (t) + r X (t) ; (t)S (t)] dt = (t) S (t)dt + S (t)dB (t)] + r X (t) ; (t)S (t)] dt = rX (t)dt + (t)S (t) ( {z r) dt + (t)S (t) dB (t): | ; }Risk premiumValue of an option. Consider an European option which pays g (S (T )) at time T . Let v (t x) denote the value of this option at time t if the stock price is S (t) = x. In other words, the value of the option at each time t 2 0 T ] isv (t S (t)):The differential of this value isdv (t S (t)) = vt dt + vxdS + 1 vxxdS dS 2 = vt dt + vx S dt + S dB ] + 1 vxx 2 S 2 dt h i 2 1 2 S 2v = vt + Svx + 2 xx dt + SvxdB A hedging portfolio starts with some initial wealth X 0 and invests so that the wealth X (t) at each time tracks v (t S (t)). We saw above that dX (t) = rX + ( ; r)S ] dt + S dB: To ensure that X (t) = v (t S (t)) for all t, we equate coefcients in their differentials. Equating the dB coefcients, we obtain the -hedging rule: (t) = vx (t S (t)): Equating the dt coefcients, we obtain: vt + Svx + 1 2 S 2vxx = rX + ( ; r)S: 2 But we have set = vx , and we are seeking to cause X to agree with v . Making these substitutions,1 vt + Svx + 2 2S 2vxx = rv + vx ( ; r)S (where v = v (t S (t)) and S = S (t)) which simplies to vt + rSvx + 1 2 S 2vxx = rv: 2 In conclusion, we should let v be the solution to the Black-Scholes partial differential equation 1 vt (t x) + rxvx (t x) + 2 2 x2vxx(t x) = rv (t x)satisfying the terminal condition If an investor starts with X 0 = v (0 S (0)) and uses the hedge (t) = vx (t X (t) = v(t S (t)) for all t, and in particular, X (T ) = g (S (T )). we obtainv (T x) = g (x):S (t)), then he will have17215.7 Mean and variance of the Cox-Ingersoll-Ross processThe Cox-Ingersoll-Ross model for interest rates isdr(t) = a(b ; cr(t))dt +where aqr(t) dB(t)bcand r(0) are positive constants. In integral form, this equation is0 0 2(t). This is df (r(t)), where f (x) = x2 . We obtain We apply It s formula to compute dr o dr2(t) = df (r(t)) = f (r(t)) dr(t) + 1 f (r(t)) dr(t) dr(t) 20 00r(t) = r(0) + a (b ; cr(u)) du +ZtZ tqr(u) dB (u):= 2r(t) a(b ; cr(t)) dt +qr(t) dB (t) + a(b ; cr(t)) dt +3qr(t) dB (t)2= 2abr(t) dt ; 2acr2(t) dt + 2 r 2 (t) dB (t) + 2 r(t) dt 3 = (2ab + 2 )r(t) dt ; 2acr2(t) dt + 2 r 2 (t) dB (t)The mean of r(t). The integral form of the CIR equation isr(t) = r(0) + a (b ; cr(u)) du +0ZtZ tq0r(u) dB (u):Taking expectations and remembering that the expectation of an It integral is zero, we obtain oIEr(t) = r(0) + a (b ; cIEr(u)) du:0Differentiation yieldsZtd dt IEr(t) = a(b ; cIEr(t)) = ab ; acIEr(t)which implies thatd heact IEr(t)i = eact acIEr(t) + d IEr(t) = eact ab: dt dt eactIEr(t) ; r(0) = abIntegration yields We solve for IEr(t):Zt0;eacu du = b (eact ; 1): c r(0) ; b : c cb If r(0) = c , then IEr(t) = b for every t. If r(0) 6= b , then r(t) exhibits mean reversion: c cIEr(t) = b + e cactlim IEr(t) = b : t!1CHAPTER 15. It s Formula oVariance of r(t). 
The integral form of the equation derived earlier for dr2(t) is173r2 (t) = r2(0) + (2ab + 2 )Taking expectations, we obtainZt0r(u) du ; 2acZt0r2(u) du + 2Zt02 r 3 (u) dB(u):IEr2(t) = r2(0) + (2ab + 2)Differentiation yieldsZt0IEr(u) du ; 2acZt0IEr2(u) du:d IEr2(t) = (2ab + 2)IEr(t) ; 2acIEr2(t) dtwhich implies thatd e2actIEr2(t) = e2act 2acIEr2(t) + d IEr2(t) dt dt 2act (2ab + 2)IEr(t): =e Using the formula already derived for IEr(t) and integrating the last equation, after considerablealgebra we obtain2 2b b 2 + b2 + r(0) ; b act 2 c2 2ac c ac + c e 2 2 2 + r(0) ; b e 2act + ac 2bc ; r(0) e 2act : c ac var r(t) = IEr2(t) ; (IEr(t))2 2 2 b 2 = 2ac2 + r(0) ; b ac e act + ac 2bc ; r(0) e 2act : cIEr2(t) =!;;;;;15.8 Multidimensional Brownian MotionDenition 15.2 (d-dimensional Brownian Motion) A d-dimensional Brownian Motion is a processB (t) = (B1 (t) : : : Bd (t))with the following properties: Each Bk (t) is a one-dimensional Brownian motion; If i 6= j , then the processes Bi (t) and Bj (t) are independent. Associated with a d-dimensional Brownian motion, we have a ltration fF (t)g such that For each t, the random vector B (t) is F (t)-measurable; For each tt1 : : : tn , the vector increments B(t1 ) ; B(t) : : : B(tn) ; B(tn 1 ) are independent of F (t).;17415.9 Cross-variations of Brownian motionsBecause each component Bi is a one-dimensional Brownian motion, we have the informal equationdBi(t) dBi (t) = dt:However, we have: Theorem 9.49 If i 6= j ,dBi(t) dBj (t) = 00 T ]. For i 6= j , dene the sample cross variationProof: Let = ft0 : : : tn g be a partition of of Bi and Bj on 0 T ] to beC =n 1 X;k=0Bi (tk+1 ) ; Bi (tk )] Bj (tk+1) ; Bj (tk )] :The increments appearing on the right-hand side of the above equation are all independent of one another and all have mean zero. Therefore,IEC = 0:We compute var(C ). First note thatC2 =+2n 1 X;k=0 n 1 X;Bi(tk+1 ) ; Bi (tk ) Bj (tk+1) ; Bj (tk )22` 0:CHAPTER 16. Markov processes and the Kolmogorov equationsand the boundary condition187v(t 0) = 00 t T:An example of such a process is the following from J.C. Cox, Notes on options pricing I: Constant elasticity of variance diffusions, Working Paper, Stanford University, 1975:dS (t) = rS (t) dt + S (t) dB (t)where 0 < 1. The volatility sponding Black-Scholes equation isS;1(t) decreases with increasing stock price. The corre-;rv + vt + rxvx + 1 2x2 vxx = 0 20 t0 v (t 0) = 0 0 t T v (T x) = (x ; K )+ x 0:188Chapter 17Girsanovs theorem and the risk-neutral measure(Please see Oksendal, 4th ed., pp 145151.) Theorem 0.52 (Girsanov, One-dimensional) Let B (t) 0 t T , be a Brownian motion on a probability space ( F P). Let F (t) 0 t T , be the accompanying ltration, and let (t) 0 t T , be a process adapted to this ltration. For 0 t T , denee B (t) =Zt0(u) du + B (t)Z (t) = exp ;Zt0(u) dB (u) ;Zt 2 1 2 0 (u) du 8A 2 F :and dene a new probability measure byf IP (A) = f e Under I , the process B (t) PZAZ (T ) dIP0 t T , is a Brownian motion.Caveat: This theorem requires a technical condition on the size of . IfIE expeverything is OK. We make the following remarks:( ZT1 2 02 (u) du) 0 for every !, we can invert the denition IP (A) = dIP A Z (T )1fA 2 F:f P If I (A) = 0, then A Z (1T )RdIP = 0:17.2 Risk-neutral measureAs usual we are given the Brownian motion: B (t) 0 t T , with ltration F (t) dened on a probability space ( F P). We can then dene the following. Stock price:0tT,dS (t) = (t)S (t) dt + (t)S (t) dB (t):The processes (t) and (t) are adapted to the ltration. 
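The change of measure described by Girsanov's theorem can be illustrated numerically (a sketch only; the constant $\mu$, $r$, $\sigma$ and the path count are assumptions made for the illustration, whereas the text allows adapted processes): with the constant market price of risk $\theta = (\mu - r)/\sigma$, form $Z(T) = \exp\{-\theta B(T) - \tfrac12 \theta^2 T\}$ and check that, under the reweighted measure, $\widetilde B(T) = \theta T + B(T)$ has the moments of a driftless Brownian motion.

import numpy as np

rng = np.random.default_rng(6)

T, n_paths = 1.0, 1_000_000
mu, r, sigma = 0.10, 0.03, 0.20                      # illustrative constants
theta = (mu - r) / sigma                             # market price of risk

B_T = rng.normal(0.0, np.sqrt(T), n_paths)           # B(T) under IP (theta constant, so only B(T) is needed)
Z_T = np.exp(-theta * B_T - 0.5 * theta ** 2 * T)    # Radon-Nikodym density of the new measure w.r.t. IP
B_tilde_T = theta * T + B_T

print("IE Z(T)       =", np.mean(Z_T))                    # ~ 1, so the new measure is a probability measure
print("IE~ B~(T)     =", np.mean(Z_T * B_tilde_T))        # ~ 0 under the new measure
print("IE~ B~(T)^2   =", np.mean(Z_T * B_tilde_T ** 2))   # ~ T under the new measure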
The stock price model is completely general, subject only to the condition that the paths of the process are continuous.r(t) 0 t T . The process r(t) is adapted. Wealth of an agent, starting with X (0) = x. We can write the wealth process differential inInterest rate: several ways:dX (t) == r(t)X (t) dt + (t) dS (t) ; rS (t) dt] = r(t)X (t) dt + (t) ( (t) {z r(t)) S (t) dt + (t) (t)S (t) dB (t) | ; }Capital gains from StockdS ) | (t){z (t} + r(t) X (t) ;{z (t)S (t)] dt | }Interest earnings2 3 6 (t) ; r(t) 7 6 7 6 = r(t)X (t) dt + (t) (t)S (t) 6 dt + dB (t)7 7 ( 4 | {zt) } 5Risk premium Market price of risk=(t)194 Discounted processes:d e d e;;R t r(u) du0R t r(u) du0S (t) = e;X (t) = e;R t r(u) du0;R t r(u) du0;r(t)S (t) dt + dS (t)] ;r(t)X (t) dt + dX (t)]S (t) :0= (t)d eNotation:R t r(u) duRt (t) = e 0 r(u) dud (t) = r(t) (t) dt1 = e R0t r(u) du (t) d 1t) = ; r(tt) dt: ( ();The discounted formulas are= 1t) (t)S (t) (t) dt + dB (t)] ( (t) d X((tt)) = (t) d S(t) () = (tt) (t)S (t) (t) dt + dB (t)]:Changing the measure. Dene(t) d S(t) = 1t) ;r(t)S (t) dt + dS (t)] ( = 1t) ( (t) ; r(t))S (t) dt + (t)S (t) dB (t)] (e B (t) =ThenZt0(u) du + B (t):(t) e d S(t) = 1t) (t)S (t) dB (t) ( e d X((tt)) = ((tt)) (t)S (t) dB (t):f (t) Under I , S (t) and X((tt)) are martingales. PDenition 17.2 (Risk-neutral measure) A risk-neutral measure (sometimes called a martingale measure) is any probability measure, equivalent to the market measure IP , which makes all discounted asset prices martingales.CHAPTER 17. Girsanovs theorem and the risk-neutral measureFor the market model considered here,195f IP (A) =whereZA0Z (T ) dIP(u) dB (u) ; 1 2A2FZ (t) = exp ;ZtZtis the unique risk-neutral measure. Note that because 0.(t) =2 (u) du 0 (t) r(t) we must assume that (t);(t) 6=Risk-neutral valuation. Consider a contingent claim paying an F (T )-measurable random variable V at time T .Example 17.1V = (S(T) ; K)+ European call + V = (K ; S(T)) European put !+ ZT 1 S(u) du ; K V= T Asian call 0 V = 0maxT S(t) Look back tIf there is a hedging portfolio, i.e., a process satises X (T ) = V , then(t) 0 t T , whose corresponding wealth processf V X (0) = IE (T ) : f P This is because X((tt)) is a martingale under I , so(0) f T f V X (0) = X(0) = IE X((T )) = IE (T ) :196Chapter 18Martingale Representation Theorem18.1 Martingale Representation TheoremSee Oksendal, 4th ed., Theorem 4.11, p.50. Theorem 1.56 Let B (t) 0 t T be a Brownian motion on ( the ltration generated by this Brownian motion. Let X (t) 0 t relative to this ltration. Then there is an adapted process (t) 0F P). Let F (t) 0 t T , beT , be a martingale (under IP ) t T , such thatX (t) = X (0) +Zt0(u) dB (u)0 t T:In particular, the paths of X are continuous. Remark 18.1 We already know that if X (t) is a process satisfyingdX (t) = (t) dB (t) then X (t) is a martingale. Now we see that if X (t) is a martingale adapted to the ltration generated by the Brownian motion B (t), i.e, the Brownian motion is the only source of randomness in X , then dX (t) = (t) dB(t) for some (t).18.2 A hedging applicationHomework Problem 4.5. In the context of Girsanovs Theorem, suppse that F (t) the ltration generated by the Brownian motion B (under IP ). 
Suppose that Y is a Then there is an adapted process (t) 0 t T , such that0 t T is f IP -martingale.Y (t) = Y (0) +Zt0e (u) dB (u)1970 t T:198dS (t) = (t)S (t) dt + (t)S (t) dB (t) Zt (t) = exp r(u) du 0 (t) = (t) ; r(t) Z t (t) e B (t) = (u) du + B(t)0 Z f(A) = Z (T ) dIP IPZ (t) = exp ;A0Zt(u) dB (u) ; 1 2Zt02 (u) du8A 2 F :ThenLet(t) (t) e d S(t) = S(t) (t) dB (t): (t) 0 t T be a portfolio process. The corresponding wealth process X (t) satises (t) e d X((tt)) = (t) (t) S(t) dB(t)i.e.,X (t) = X (0) + Z t (u) (u) S (u) dB(u) e (t) (u) 00 t T:Let V be an F (T )-measurable random variable, representing the payoff of a contingent claim at time T . We want to choose X (0) and (t) 0 t T , so thatf P Dene the I -martingaleX (T ) = V:f V Y (t) = IE (T ) F (t) Zt00 t T: (t) 0 t T , such that 0 t T:According to Homework Problem 4.5, there is an adapted processf Set X (0) = Y (0) = I EhY (t) = Y (0) +V (T ) and chooseie (u) dB (u)(u) so that(u) (u) S (u) = (u): (u)CHAPTER 18. Martingale Representation TheoremWith this choice of199(u) 0 u T , we have X (t) = Y (t) = IE V F (t) f (t) (T )0 t T:In particular,X (T ) = IE V F (T ) = V f (T ) (T ) (T )soX (T ) = V:The Martingale Representation Theorem guarantees the existence of a hedging portfolio, although it does not tell us how to compute it. It also justies the risk-neutral pricing formulaf X (t) = (t)IE = Z(t) IE (t) 1 IE = (t)where(T ) F (t) Z (T ) V F (t) (T ) (T )V F (t)V0 t T(t) (t) = Z(t) = exp ;Zt0(u) dB (u) ;Zt0(r(u) + 1 2 (u)) du 218.3 d-dimensional Girsanov TheoremTheorem 3.57 (d-dimensional Girsanov) dimensional Brownian motion on (F (t) 0 t Tt TdeneFor 0(t) = ( 1(t) : : : d (t)) 0 t T , d-dimensional adapted process.the accompanying ltration, perhaps larger than the one generated by B ;F P);B(t) = (B1 (t) : : : Bd(t)) 0tT , a d-e Bj (t) =Zt0Z f(A) = Z (T ) dIP: IPAZ (t) = exp ;j (u) du + Bj (t) Ztj = 1 ::: d1 (u): dB (u) ; 2 0 0Ztjj (u)jj2 du200f P Then, under I , the process e e e B (t) = (B1(t) : : : Bd (t))is a d-dimensional Brownian motion.0 t T18.4 d-dimensional Martingale Representation TheoremTheorem 4.58 on ( Ft T , is a martingale (under IP ) relative to F (t) 0 If X (t) 0 d-dimensional adpated process (t) = ( 1(t) : : : d (t)), such that X (t) = X (0) +F (t) 0 t TP);B(t) = (B1 (t) : : : Bd (t)) 0tT a d-dimensional Brownian motion t T , then there is athe ltration generated by the Brownian motion B .Zt0(u): dB (u)0 t T:Corollary 4.59 If we have a d-dimensional adapted process (t) = ( 1 (t) : : : d (t)) then we can e f f P P dene B Z and I as in Girsanovs Theorem. If Y (t) 0 t T , is a martingale under I relative to F (t) 0 t T , then there is a d-dimensional adpated process (t) = ( 1(t) : : : d(t)) such that ZY (t) = Y (0) +t0e (u): dB (u)0 t T:18.5 Multi-dimensional market modelfollowing:B (t) = (B1(t) : : : Bd(t)) 0 t T , be a d-dimensional Brownian motion on some ( F P), and let F (t) 0 t T , be the ltration generated by B . Then we can dene theLetStocksdSi (t) = i (t)Si(t) dt + Si (t)Accumulation factord X j =1ij (t) dBj (t)i = 1 ::: m(t) = expHere, i (t)Zt0r(u) du :ij (t) and r(t) are adpated processes.CHAPTER 18. Martingale Representation TheoremDiscounted stock prices201d X d Si((tt)) = ( i(t){z r(t)) Si((tt)) dt + Si((tt)) ; } ij (t) dBj (t) | j =1 d ? Si (t) X (t) (t) + dB (t)] = (t) ij | j {z j } j =1 e dBj (t)Risk Premium(5.1)For 5.1 to be satised, we need to choose 1 (t):::d (t), so that(MPR)d X j =1ij (t) j (t) = i (t) ; r(t)i = 1 : : : m: :::Market price of risk. 
The market price of risk is an adapted process (t) = ( 1 (t) satisfying the system of equations (MPR) above. There are three cases to consider:d (t))Case I: (Unique Solution). For Lebesgue-almost every t and IP -almost every ! , (MPR) has a unique solution (t). Using (t) in the d-dimensional Girsanov Theorem, we dene a unique f f P P risk-neutral probability measure I . Under I , every discounted stock price is a martingale. f Consequently, the discounted wealth process corresponding to any portfolio process is a I P martingale, and this implies that the market admits no arbitrage. Finally, the Martingale Representation Theorem can be used to show that every contingent claim can be hedged; the market is said to be complete. Case II: (No solution.) If (MPR) has no solution, then there is no risk-neutral probability measure and the market admits arbitrage. Case III: (Multiple solutions). If (MPR) has multiple solutions, then there are multiple risk-neutral probability measures. The market admits no arbitrage, but there are contingent claims which cannot be hedged; the market is said to be incomplete. Theorem 5.60 (Fundamental Theorem of Asset Pricing) Part I. (Harrison and Pliska, Martingales and Stochastic integrals in the theory of continuous trading, Stochastic Proc. and Applications 11 (1981), pp 215-260.): If a market has a risk-neutral probability measure, then it admits no arbitrage. Part II. (Harrison and Pliska, A stochastic calculus model of continuous trading: complete markets, Stochastic Proc. and Applications 15 (1983), pp 313-316): The risk-neutral measure is unique if and only if every contingent claim can be hedged.202Chapter 19A two-dimensional market modelLet B (t)In what follows, all processes can depend on t and ! , but are adapted to simplify notation, we omit the arguments whenever there is no ambiguity. Stocks:F (t) 0 t T= (B1 (t) B2(t)) 0be a two-dimensional Brownian motion on ( be the ltration generated by B .t TF P). Lett T.ToF (t) 0dS1 = S1 dS2 = S2We assume 11dt + 1 dB1] q 2 dt + 2 dB1 + 1 ;1: Note that2 2 dB2:>02>0;12 2 dS1 dS2 = S1 2 dB1 dB1 = 2S1 dt 1 1 2 2 dS2 dS2 = S2 2 2 dB1 dB1 + S2 (1 ; 2) 2 dB2 dB2 2 2 2 S 2 dt = 2 2 dS1 dS2 = S1 1S2 2 dB1 dB1 = 1 2S1S2 dt:In other words,dS1 has instantaneous variance 2 , 1 S1 dS2 has instantaneous variance 2 , 2 S2 dS1 and dS2 have instantaneous covariance S1 S2Accumulation factor:1 2.(t) = expZt0r du := 2=1;r(MPR)The market price of risk equations are2 1+q1;1 1 2 22;r203204 The solution to these equations is1 2provided ;1 2m ; bg ( 2) 1 Z x =p exp ; 2T dx 2 T 2m b1 ;m>0 b 0 b < m: m = 2T T 2 T1 ;() !With drift. Lete B(t) = t + B(t)2092102m-bshadow pathm b Brownian motionFigure 20.1: Reection Principle for Brownian motion without driftm m=b (B(T), M(T)) lies in herebFigure 20.2: Possible values of B (T )M (T ).CHAPTER 20. Pricing Exotic Optionswhere B (t)2110 t T , is a Brownian motion (without drift) on ( F P). DeneZ (T ) = expf; B (T ) ; 1 2 T g 2 = expf; (B (T ) + T ) + 1 2 T g 2 e (t) + 1 2T g = expf; B 2 Z f(A) = Z (T ) dIP 8A 2 F : IPAf SetM (T ) = max0 t T f Under I Pe B (T ):) ( m b ~ ~ b ffM (T ) 2 dm B (T ) 2 d~g = 2(2p ; ~) exp ; (2m ; ~)2 dm d~ m > 0 ~ < m: f e ~ b ~ b ~ IP ~ b 2TT 2 T~) be a function of two variables. 
Then be B is a Brownian motion (without drift), soLet h(m ~T f e f IEh(M (T ) B (T )) = IE h(M (Z ()TB (T )) )fef f e e = IE h(M (T ) B (T )) expf B (T ) ; 1 2 T g 2=m= ~Z1hi~=m bZ ~;1m=0 ~= ~ bf f h(m ~) expf ~ ; 2 2 T g IP fM (T ) 2 dm B (T ) 2 d~g: ~ b b 1 ~ e bBut also,f e IEh(M (T ) B (T )) =m= ~Z1~Z m b= ~;1m=0 ~= ~ bf h(m ~) IP fM (T ) 2 dm B (T ) 2 d~g: ~ b ~ e bSince h is arbitrary, we conclude that (MPR)f IP fM (T ) 2 dm B (T ) 2 d~g ~ e b 1 2 T g I fM (T ) 2 dm B (T ) 2 d~g f f e = expf ~ ; 2 b b )~ (P m ~ ~ ~ ~2 b 2 = 2(2p ; b) exp ; (2m ; b) : expf ~ ; 1 2 T gdm d~ m > 0 ~ < m: ~ b ~ b ~ 2T T 2 T21220.2 Up and out European call.Let 0 < K where< L be given. The payoff at time T is (S (T ) ; K )+ 1 S (T ) > < = S0 exp > > :e = S0 expf B (t)g2 6 6B(t) + r ; 6 4 | {z 239 > 7> = t7> 7 } 5>where= r;2 e B (t) = t + B(t):Consequently,f S (t) = S0 expf M (t)gwhere,f e M (t) = 0maxt B (u): uWe compute,v (0 S (0)) = e(S (T ) ; K )+ 1 S (T )1 log K M (T )< 1 log L e S (0)} | {z | {zS (0)}~ b#m ~CHAPTER 20. Pricing Exotic Options213~ M(T) y ~ m (B(T), M(T)) lies in here x ~ B(T) x=y~ be Figure 20.3: Possible values of B (T )We consider only the casef M (T ).S (0) K < L so 0 ~ < m: b ~ The other case, K < S (0) L leads to ~ < 0 m and the analysis is similar. b ~ R m R m : : :dy dx: ~ ~ We compute ~ x b;; 2 1 p v (0 S (0)) = e (S (0) expf xg ; K ) 2(2y ; x) exp ; (2y2T x) + x ; 2