
Math6911 S08, HM Zhu

6. Finite Difference Methods: Dealing with American Options


References

1. Chapters 5 and 9, Brandimarte

2. Chapters 6, 7, 20, and 21, “Option Pricing”


6.1 American call options


Constraints on American Options

An American option valuation problem is uniquely defined by a set of constraints:

• Lower bound: V(S,t) ≥ max(K − S, 0)
• The Black-Scholes equation is replaced by an inequality
• V(Sf, t) is continuous at S = Sf
• The option delta (its slope) must be continuous at S = Sf

Let’s consider American put options as an example


American put options

• To avoid arbitrage, American put options must satisfy P(S,t) ≥ max(K − S, 0)

• It is optimal to exercise an American put option early if S is sufficiently small

• When the option is exercised early, P(S,t) = K − S and the B-S inequality holds; otherwise, P(S,t) > K − S and the B-S equality holds

• Again, there is an unknown exercise boundary Sf(t): the option should be exercised if S < Sf(t) and held otherwise


American put option as a free boundary problem

For each time t, we must divide the S axis into two regions:

i) where early exercise is optimal: 0 ≤ S < Sf,

P = K − S,   ∂P/∂t + (σ²S²/2) ∂²P/∂S² + rS ∂P/∂S − rP < 0

ii) where retaining the option is optimal: Sf ≤ S < +∞,

P > K − S,   ∂P/∂t + (σ²S²/2) ∂²P/∂S² + rS ∂P/∂S − rP = 0

with boundary conditions at S = Sf(t).


Black-Scholes Inequality

For an American put P(S,t):

∂P/∂t + (σ²S²/2) ∂²P/∂S² + rS ∂P/∂S − rP ≤ 0

In other words, the return from the hedged portfolio is at most the return from a bank deposit.


Taking early exercise into account

For a vanilla American option, we can check the possibility of early exercise easily in an explicit scheme: after computing each node we apply

fij = max(fij, K − j∆S)

But this is difficult to do in an implicit scheme, since computing fij requires knowing the other fij's. To get around this difficulty, we can use an iterative method to solve the linear system. Here we consider the SOR method.
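In an explicit scheme the early-exercise check is a single projection per node. A minimal Python sketch (the slides use MATLAB; the grid parameters K, M, ∆S and the placeholder continuation values below are illustrative assumptions, not values from the slides):

```python
import numpy as np

# Illustrative grid (assumed values, not from the slides)
K, M, dS = 50.0, 100, 1.0

j = np.arange(M + 1)        # node indices j = 0, ..., M
payoff = K - j * dS         # intrinsic value K - j*dS at node j

# f holds continuation values f_{i,j} at one time layer of an explicit
# scheme; here a placeholder profile stands in for the PDE step's output.
f = 0.9 * np.maximum(payoff, 0.0)

# Early-exercise check: f_{i,j} = max(f_{i,j}, K - j*dS), node by node.
f = np.maximum(f, payoff)
```

After the projection, no node value lies below the intrinsic value, which is exactly the constraint P ≥ K − S on the grid.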

Math6911, S08, HM ZHU

Crank-Nicolson Method + Projected SOR to Price an American Put Option

For each time layer i, the Crank-Nicolson discretization yields the linear system

M1 f^i = r ≡ M2 f^{i+1} + b,   where b = [α1 (f^i_0 + f^{i+1}_0), 0, …, 0]^T.

Recall that both M1 and M2 are tridiagonal matrices.

Set g_j = K − j∆S, the intrinsic value at S_j = j∆S, for j = 1, …, M−1.


Crank-Nicolson Method + Projected SOR to Price an American Put Option

For each time layer i we have the iterative scheme (k = 0, 1, 2, …):

f^{k+1}_{i,1} = max{ g_1,  f^k_{i,1} + ω/(1−β_1) [ r_1 − (1−β_1) f^k_{i,1} + γ_1 f^k_{i,2} ] }

f^{k+1}_{i,j} = max{ g_j,  f^k_{i,j} + ω/(1−β_j) [ r_j + α_j f^{k+1}_{i,j−1} − (1−β_j) f^k_{i,j} + γ_j f^k_{i,j+1} ] },   j = 2, …, M−2

f^{k+1}_{i,M−1} = max{ g_{M−1},  f^k_{i,M−1} + ω/(1−β_{M−1}) [ r_{M−1} + α_{M−1} f^{k+1}_{i,M−2} − (1−β_{M−1}) f^k_{i,M−1} ] }
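The iteration above is projected SOR applied to the linear complementarity problem M1 f = r subject to f ≥ g: take a Gauss-Seidel step, over-relax it, and project onto the payoff constraint. A Python sketch under assumed names (projected_sor is not from the slides; M1 is any tridiagonal system matrix, g the intrinsic-value vector):

```python
import numpy as np

def projected_sor(M1, b, g, omega=1.2, tol=1e-3, max_iter=10000):
    """Projected SOR sketch: solve M1 f = b subject to f >= g."""
    f = np.maximum(b, g)          # starting guess respecting the constraint
    n = len(b)
    for _ in range(max_iter):
        f_old = f.copy()
        for j in range(n):
            # Gauss-Seidel value for component j (new values below j,
            # old values above j)
            s = M1[j, :j] @ f[:j] + M1[j, j+1:] @ f_old[j+1:]
            gs = (b[j] - s) / M1[j, j]
            # over-relax, then project onto the constraint f >= g
            f[j] = max(g[j], f_old[j] + omega * (gs - f_old[j]))
        if np.linalg.norm(f - f_old, np.inf) < tol:
            break
    return f
```

When the constraint never binds, the iteration reproduces the plain SOR solution of M1 f = b; where it binds, the solution sits on the payoff, mirroring the early-exercise region.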


Note on Implementation

When passing from one time layer to the next, it is reasonable to initialize the iteration with the values of the previous time layer as the starting guess.


Example

We apply the Crank-Nicolson method + projected SOR to an American put, where T = 5/12 yr, S0 = $50, K = $50, σ = 40%, r = 10% (ω = 1.2, tol = 0.001):

CN method with Smax = $100, ∆S = 1, ∆t = 1/600: $4.2800
CN method with Smax = $100, ∆S = 1, ∆t = 1/1200: $4.2828


Example (Stability)

We apply the Crank-Nicolson method + projected SOR to an American put, where T = 5/12 yr, S0 = $50, K = $50, σ = 40%, r = 10% (ω = 1.2, tol = 0.001):

CN method with Smax = $100, ∆S = 1, ∆t = 1/600: $4.2800
CN method with Smax = $100, ∆S = 1, ∆t = 1/100: $4.2778

Since Crank-Nicolson is unconditionally stable, even the much coarser time step gives a price close to the reference value.


American call option on a dividend-paying asset as a free boundary problem

For each time t, we must divide the S axis into two regions:

i) where retaining the option is optimal: 0 ≤ S < Sf,

C > S − K,   ∂C/∂t + (σ²S²/2) ∂²C/∂S² + (r − D0)S ∂C/∂S − rC = 0

ii) where early exercise is optimal: Sf ≤ S < +∞,

C = S − K,   ∂C/∂t + (σ²S²/2) ∂²C/∂S² + (r − D0)S ∂C/∂S − rC < 0

with boundary conditions at S = Sf(t).


6.2 Iterative Methods for Solving a Linear System



Motivation: Direct vs Iterative Methods

Direct Methods:
– determine the exact solution subject only to round-off error and involve a factorization of the matrix A
– pick an appropriate method and adapt it to exploit A's sparsity
– impractical if A is large and sparse

Iterative Methods:
– generate a sequence of approximate solutions and involve only matrix-vector multiplications
– does the sequence converge to the exact solution?
– how fast does it converge to the exact solution?


Basic ideas of iterative methods

A possible approach to generating a sequence of approximations to the solution of x = G(x) is the iteration scheme

x(k) = G( x(k−1) ),

starting from an initial approximation x(0).

Similarly, we can rewrite Ax = b as

x = (A + I) x − b = Â x − b,

where G(x) = Â x − b. Using the previous iteration scheme, we have

x(k) = Â x(k−1) − b


Basic ideas of iterative methods

Starting from an initial approximation x(0), we have

x(1) = Â x(0) − b
x(2) = Â x(1) − b = Â² x(0) − Â b − b
…

This iteration scheme diverges if some elements of Â^n grow without bound as n → ∞. It converges only if ρ(Â) < 1.

However, an arbitrary system of equations may not satisfy this condition.
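The role of the spectral radius can be checked numerically. A Python sketch with two illustrative 2-by-2 matrices (assumed for demonstration, not from the slides):

```python
import numpy as np

def spectral_radius(B):
    # rho(B): largest eigenvalue magnitude
    return max(abs(np.linalg.eigvals(B)))

def iterate(A_hat, b, x0, k):
    # k steps of the scheme x <- A_hat x - b
    x = x0
    for _ in range(k):
        x = A_hat @ x - b
    return x

A_hat = np.array([[0.5, 0.1], [0.2, 0.3]])   # rho < 1: scheme converges
b = np.array([1.0, 1.0])

# The fixed point of x = A_hat x - b is x* = -(I - A_hat)^{-1} b
x_star = -np.linalg.solve(np.eye(2) - A_hat, b)
x_200 = iterate(A_hat, b, np.zeros(2), 200)
```

With ρ(Â) < 1 the iterates settle on the fixed point; a matrix with an eigenvalue of magnitude above 1 would instead blow up.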


Basic Ideas of Iterative Methods

A slightly different approach: transform the system Ax = b into an equivalent system

Mx = −Nx + b,   where M + N = A,

called a splitting of A. The corresponding iteration scheme is

M x(k) = −N x(k−1) + b,   i.e.   x(k) = −M⁻¹N x(k−1) + M⁻¹ b.

The flexibility in choosing M may be exploited to improve convergence.


Basic Ideas of Iterative Methods

Convergence: Let B = −M⁻¹N = I − M⁻¹A, and let e(k) = x(k) − x be the error. Then

e(k) = x(k) − x = B (x(k−1) − x) = B e(k−1) = B^k e(0)

∴ lim_{k→∞} e(k) = lim_{k→∞} B^k e(0)

It can be proved that lim_{k→∞} B^k = 0 iff ρ(B) < 1.

To avoid computing eigenvalues, we may require ρ(B) ≤ ‖B‖ < 1 instead.


Implementation: check for convergence

Usually one, or a combination, of the four common tests is used to check convergence:

1. absolute difference: ‖x(k) − x(k−1)‖ < ε
2. relative difference: ‖x(k) − x(k−1)‖ / ‖x(k)‖ < ε
3. absolute residual: ‖r(k)‖ < ε, where r(k) = b − A x(k)
4. relative residual: ‖r(k)‖ / ‖b‖ < ε
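These stopping tests can be collected in one helper. A Python sketch (the function name and the keyword values are assumptions for illustration):

```python
import numpy as np

def converged(x_new, x_old, A, b, eps=1e-6, test="relative_residual"):
    # Four common stopping tests; r is the residual of the new iterate
    r = b - A @ x_new
    checks = {
        "absolute_difference": np.linalg.norm(x_new - x_old) < eps,
        "relative_difference": np.linalg.norm(x_new - x_old)
                               / np.linalg.norm(x_new) < eps,
        "absolute_residual":   np.linalg.norm(r) < eps,
        "relative_residual":   np.linalg.norm(r) / np.linalg.norm(b) < eps,
    }
    return checks[test]
```

The residual tests have the advantage of not declaring convergence merely because the iteration has stagnated.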


Further discussion on basic ideas of iterative methods

Therefore, various iterative methods are developed along the following lines:

• A splitting A = M + N is proposed for which linear systems of the form Mz = d are easy to solve

• Classes of matrices are identified for which the iteration matrix B = −M⁻¹N satisfies ρ(B) < 1

• Further efforts are made to make ρ(B) as small as possible, so that the error e(k) tends to zero faster.


Jacobi methods

Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR) methods are commonly used and can be described in the following forms. Let A = D + L + U, where D is the diagonal of A and L and U are its strictly lower- and upper-triangular parts.

Jacobi:   D x(k) = −(L + U) x(k−1) + b


Example

Consider the 3-by-3 example:

[ 3 −1  0 ] [x1]   [1]
[−1  3 −1 ] [x2] = [1]
[ 0 −1  3 ] [x3]   [1]

Jacobi Method: Initial guess x(0) = [0, 0, 0]^T. For k = 1, 2, ...

[3 0 0]          [0 1 0]            [1]
[0 3 0] x(k)  =  [1 0 1] x(k−1)  +  [1]
[0 0 3]          [0 1 0]            [1]

until there is little change for all nodes i.
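The same iteration can be reproduced in a few lines of Python (a sketch; the slides' code is MATLAB):

```python
import numpy as np

A = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
b = np.array([1., 1., 1.])

D = np.diag(np.diag(A))   # diagonal part
LU = A - D                # L + U

x = np.zeros(3)           # initial guess x(0) = [0, 0, 0]^T
for k in range(100):
    # D x(k) = -(L + U) x(k-1) + b
    x = np.linalg.solve(D, -LU @ x + b)
```

Since A is strictly diagonally dominant, the iterates converge to the exact solution [4/7, 5/7, 4/7]^T.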


Implementation

In general, consider

Ax = b  ⟺  Σ_j a_ij x_j = b_i   for i = 1, …, n

Jacobi Method:

give an initial guess x(0)
for k = 1, …, maxiter
    for i = 1, …, n
        x_i(k) = ( b_i − Σ_{j≠i} a_ij x_j(k−1) ) / a_ii
    end
    check for convergence
end


MATLAB Implementation

function [x, error, nIter] = Jacobi(A, b, x0, maxIter)
% Jacobi iteration for A x = b, starting from x0
x = zeros(size(x0));
n = length(b);
error = 1; epi = 1E-6;
nIter = 0;
while ((error > epi) && (nIter < maxIter))
    nIter = nIter + 1;
    for i = 1:n
        x(i) = b(i) - A(i, 1:i-1)*x0(1:i-1) - A(i, i+1:n)*x0(i+1:n);
        if (abs(A(i, i)) > 1E-10)
            x(i) = x(i)/A(i, i);
        else
            'Error: diagonal is close to zero'
        end
    end
    error = norm(x - x0, inf);
    x0 = x;
end


Numerical example

We solve a 4-by-4 system Ax = b, with initial guess x(0) = [0, 0, 0, 0]^T and convergence test ‖x(k) − x(k−1)‖ < 10⁻⁶:

Method    No. of iterations    Infinity norm of abs error
Jacobi    41                   8.24e-7


Gauss-Seidel methods

Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR) methods are commonly used and can be described in the following forms. Let A = D + L + U (as before).

Jacobi:         D x(k) = −(L + U) x(k−1) + b
Gauss-Seidel:   (D + L) x(k) = −U x(k−1) + b


Example

Consider the 3-by-3 example:

[ 3 −1  0 ] [x1]   [1]
[−1  3 −1 ] [x2] = [1]
[ 0 −1  3 ] [x3]   [1]

Gauss-Seidel Method: Initial guess x(0) = [0, 0, 0]^T. For k = 1, 2, ...

[ 3  0  0 ]          [0 1 0]            [1]
[−1  3  0 ] x(k)  =  [0 0 1] x(k−1)  +  [1]
[ 0 −1  3 ]          [0 0 0]            [1]

until there is little change for all the unknowns.


Implementation

Gauss-Seidel Method:

give an initial guess x(0)
for k = 1, …, maxiter
    for i = 1, …, n
        x_i(k) = ( b_i − Σ_{j<i} a_ij x_j(k) − Σ_{j>i} a_ij x_j(k−1) ) / a_ii
    end
    check for convergence
end
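The component form translates directly into Python (a sketch; gauss_seidel is an assumed name):

```python
import numpy as np

def gauss_seidel(A, b, x0, max_iter=100, eps=1e-10):
    # x_i <- (b_i - sum_{j<i} a_ij x_j^new - sum_{j>i} a_ij x_j^old) / a_ii
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i]
                         - A[i, i+1:] @ x_old[i+1:]) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < eps:
            break
    return x
```

Unlike Jacobi, the inner loop immediately reuses the components already updated in the current sweep, which is why it typically needs fewer iterations.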


MATLAB Implementation

function [x, error, nIter] = GaussSeidel(A, b, x0, maxIter)
% Gauss-Seidel iteration: reuses the newly updated components x(1:i-1)
x = zeros(size(x0));
n = length(b);
error = 1; epi = 1E-6;
nIter = 0;
while ((error > epi) && (nIter < maxIter))
    nIter = nIter + 1;
    for i = 1:n
        x(i) = b(i) - A(i, 1:i-1)*x(1:i-1) - A(i, i+1:n)*x0(i+1:n);
        if (abs(A(i, i)) > 1E-10)
            x(i) = x(i)/A(i, i);
        else
            'Error: diagonal is close to zero'
        end
    end
    error = norm(x - x0, inf);
    x0 = x;
end


Numerical example

For the same 4-by-4 system Ax = b, initial guess x(0) = [0, 0, 0, 0]^T, and convergence test ‖x(k) − x(k−1)‖ < 10⁻⁶:

Method        No. of iterations    Infinity norm of abs error
Jacobi        41                   8.24e-7
Gauss-Seidel  25                   9.69e-7


Assessment: Existence & Convergence

Definition: Let A = (a_ij). The matrix A is strictly diagonally dominant if and only if

|a_ii| > Σ_{j≠i} |a_ij|   for all i.

Example: which matrices are strictly diagonally dominant, and why? For instance,

[3 1]
[1 2]

is strictly diagonally dominant, since |3| > |1| in row 1 and |2| > |1| in row 2.


Assessment: Existence

Theorem: Consider Ax = b. If A is strictly diagonally dominant, then there is a unique solution.


Assessment: Convergence

Theorem: Consider Ax = b. If A is strictly diagonally dominant, then for any initial guess x(0), both the Jacobi and Gauss-Seidel algorithms converge to the exact solution. (Proof idea: show that ‖M⁻¹N‖ < 1.)


More on Gauss-Seidel methods

Let A = D + L + U (as before).

Gauss-Seidel (Forward):   (D + L) x(k+1) = −U x(k) + b,
where the correction order is x_1, x_2, …, x_n

Gauss-Seidel (Backward):   (D + U) x(k+1) = −L x(k) + b,
where the correction order is x_n, x_{n−1}, …, x_1

Gauss-Seidel (Symmetric): a forward sweep followed by a backward sweep


More on Gauss-Seidel methods(Numerical Analysis, Burden & Faires, Chap. 7)

Define the residual vector of an approximation x with respect to the system Ax = b as

r = b − Ax

Denote by r_i(k) = [r_{1i}(k), r_{2i}(k), …, r_{ni}(k)]^T the residual vector of the Gauss-Seidel method corresponding to the approximation

x_i(k) = [x_1(k), …, x_{i−1}(k), x_i(k−1), …, x_n(k−1)]^T.

Then the Gauss-Seidel method can be characterized as choosing x_i(k) to satisfy

x_i(k) = x_i(k−1) + r_{ii}(k) / a_ii,

or, equivalently, choosing x_i(k) in such a way that the i-th component of r_{i+1}(k) is 0.


More on Gauss-Seidel methods

Choosing x_i(k+1) so that the i-th component of r_{i+1}(k+1) is 0, however, is not the most efficient way to reduce the norm of the vector r(k+1). If we modify the Gauss-Seidel procedure to

x_i(k+1) = x_i(k) + ω r_{ii}(k+1) / a_ii

for certain choices of positive ω, then we can reduce the norm of the residual vector and obtain significantly faster convergence.


Relaxation methods

Methods that choose proper values of ω in

x_i(k) = x_i(k−1) + ω r_{ii}(k) / a_ii    (5.5.1)

to reduce the norm of the residual vector and speed up convergence are called "relaxation methods".

1. For 0 < ω < 1, they are called "under-relaxation methods" and can be used to obtain convergence for some systems for which the Gauss-Seidel method fails.

2. For ω > 1, they are called "over-relaxation methods" and can be used to accelerate convergence for systems that are already convergent by the Gauss-Seidel method. These methods are called "Successive Over-Relaxation" (SOR).


SOR method

We reformulate Eq. (5.5.1):

x_i(k) = (1 − ω) x_i(k−1) + (ω / a_ii) [ b_i − Σ_{j=1}^{i−1} a_ij x_j(k) − Σ_{j=i+1}^{n} a_ij x_j(k−1) ]    (5.5.2)

Therefore, the SOR method can be characterized as a linear combination of the old and the Gauss-Seidel approximations:

x(k) = (1 − ω) x(k−1) + ω x_GS(k),   for 0 < ω < 2


Matrix version of SOR method

To determine the matrix version of SOR, we rewrite Eq. (5.5.2):

a_ii x_i(k) + ω Σ_{j=1}^{i−1} a_ij x_j(k) = (1 − ω) a_ii x_i(k−1) − ω Σ_{j=i+1}^{n} a_ij x_j(k−1) + ω b_i    (5.5.3)

so that, in matrix and vector form, we have

SOR:   (D + ωL) x(k) = [ (1 − ω)D − ωU ] x(k−1) + ωb

When ω = 1, the SOR method reduces to the Gauss-Seidel method:

Gauss-Seidel:   (D + L) x(k) = −U x(k−1) + b


An Example

Consider the 3-by-3 example:

[ 3 −1  0 ] [x1]   [1]
[−1  3 −1 ] [x2] = [1]
[ 0 −1  3 ] [x3]   [1]

SOR Method (left as an exercise)


Implementation

Successive Over-Relaxation Method:

give an initial guess x(0)
for k = 1, …, maxiter
    for i = 1, …, n
        x_i(k) = ω ( b_i − Σ_{j<i} a_ij x_j(k) − Σ_{j>i} a_ij x_j(k−1) ) / a_ii + (1 − ω) x_i(k−1)
    end
    check for convergence
end
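The same loop in Python (a sketch; the function name sor is an assumption):

```python
import numpy as np

def sor(A, b, x0, omega, max_iter=200, eps=1e-10):
    # SOR update: over-relaxed Gauss-Seidel step, eq. (5.5.2)
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i]
                       - A[i, i+1:] @ x_old[i+1:]) / A[i, i]
            x[i] = omega * gs + (1.0 - omega) * x_old[i]
        if np.linalg.norm(x - x_old, np.inf) < eps:
            break
    return x
```

Setting omega = 1 recovers plain Gauss-Seidel; values between 1 and 2 over-relax each step.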


MATLAB Implementation

function [x, error, nIter] = SOR(A, b, x0, maxIter, omega)
% SOR: Gauss-Seidel step followed by over-relaxation with parameter omega
x = zeros(size(x0));
n = length(b);
error = 1; epi = 1E-6;
nIter = 0;
while ((error > epi) && (nIter < maxIter))
    nIter = nIter + 1;
    for i = 1:n
        x(i) = b(i) - A(i, 1:i-1)*x(1:i-1) - A(i, i+1:n)*x0(i+1:n);
        if (abs(A(i, i)) > 1E-10)
            x(i) = x(i)/A(i, i);
            x(i) = omega * x(i) + (1-omega)*x0(i);
        else
            'Error: diagonal is close to zero'
        end
    end
    error = norm(x - x0);
    x0 = x;
end


Comparison

We solve a 4-by-4 system Ax = b, with initial guess x(0) = [0, 0, 0, 0]^T, convergence test ‖x(k) − x(k−1)‖ < 10⁻⁶, and ω = 1.5 for SOR:

Method        No. of iterations    Infinity norm of abs error
Jacobi        151                  8.65e-6
Gauss-Seidel  99                   9.40e-7
SOR           42                   5.72e-7


SOR Method: choose an appropriate ω

For the same 4-by-4 system Ax = b, initial guess x(0) = [0, 0, 0, 0]^T, and convergence test ‖x(k) − x(k−1)‖ < 10⁻⁶:

ω value    No. of iterations    Infinity norm of abs error
1.0        109                  9.72e-7
1.25       67                   8.68e-7
1.5        42                   9.72e-7
1.75       66                   8.80e-7
2.0        300                  11.843


Assessment: How to choose ω

Theorem 7.25 (Numerical Analysis, Burden & Faires, Chap. 7, page 449): If A is a positive definite matrix and 0 < ω < 2, then the SOR method converges for any choice of initial approximation x(0).

Theorem (Numerical Analysis, Burden & Faires, Chap. 7, page 449): If A is a positive definite tridiagonal matrix, then the optimal choice of ω for the SOR method is

ω = 2 / ( 1 + √(1 − [ρ(T_j)]²) ),

where T_j = −D⁻¹(L + U) is the iteration matrix of the Jacobi method.
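Burden & Faires' optimal-ω formula, ω = 2/(1 + √(1 − ρ(T_j)²)), can be evaluated directly for the tridiagonal matrix from the earlier 3-by-3 examples (a Python sketch):

```python
import numpy as np

# Positive definite tridiagonal matrix from the 3-by-3 examples
A = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
D = np.diag(np.diag(A))

# Jacobi iteration matrix T_j = -D^{-1}(L + U)
T_j = -np.linalg.solve(D, A - D)
rho = max(abs(np.linalg.eigvals(T_j)))

# Optimal SOR parameter
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho**2))
```

For this matrix ρ(T_j) = √2/3, giving an optimal ω only slightly above 1, which is consistent with Gauss-Seidel already converging quickly on it.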


Summary

Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR) methods are commonly used and can be described in the following forms. Let A = D + L + U, where D is the diagonal of A and L and U are its strictly lower- and upper-triangular parts.

Jacobi:                      D x(k) = −(L + U) x(k−1) + b

Gauss-Seidel (SOR, ω = 1):   (D + L) x(k) = −U x(k−1) + b

SOR:                         (D + ωL) x(k) = [ (1 − ω)D − ωU ] x(k−1) + ωb


More references (recommended)

1. D. Tavella and C. Randall, “Pricing financial instruments: The finite difference methods”, Wiley, New York, 2000

2. Y.-I. Zhu, X. Wu and I.-L. Chern, “Derivative securities and difference methods”, Springer, New York, 2004

3. R. Seydel, "Tools for computational finance", Springer-Verlag, Berlin, 2002

4. J. Topper, "Financial engineering with finite elements", Wiley, New York, 2005
