
Lecture 13: Portfolio choice with utility functions


Portfolio choice (1)

We will now use utility functions to find the optimal position of an investor.

A random payoff will be called a security.

We still assume that there are only two times – today when we make the investment decisions and tomorrow when the randomness is resolved.

Assume that there is a market with $n$ securities having payoffs $d_1, d_2, \ldots, d_n$ and prices $P_1, P_2, \ldots, P_n$.


Portfolio choice (2)

We will assume that the market satisfies the law of one price (LOP).

This is a weak requirement that simply says that for any portfolio $\theta = (\theta_1, \theta_2, \ldots, \theta_n)$ of securities, the price $P$ of the payoff
\[
\sum_{i=1}^{n} \theta_i d_i
\]
is equal to
\[
P = \sum_{i=1}^{n} \theta_i P_i.
\]
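As a minimal numerical illustration of linear pricing (my own sketch; the payoff matrix, prices, and portfolio are made-up numbers, not from the lecture), the payoff and the LOP price of a portfolio are just dot products:

import numpy as np

# Hypothetical market: 3 securities, 2 possible states tomorrow.
D = np.array([[1.0, 1.0],    # payoff of security 1 in states 1 and 2
              [2.0, 0.0],    # payoff of security 2
              [0.5, 1.5]])   # payoff of security 3
P = np.array([0.95, 0.90, 0.97])      # today's prices P_1, P_2, P_3

theta = np.array([1.0, -0.5, 2.0])    # a portfolio (short positions allowed)

portfolio_payoff = theta @ D          # state-by-state payoff sum_i theta_i d_i
portfolio_price = theta @ P           # LOP price sum_i theta_i P_i

print(portfolio_payoff, portfolio_price)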


Portfolio choice (3)

There is an investor with wealth $W$ and utility function $U$ who wants to find the portfolio $\theta_1, \theta_2, \ldots, \theta_n$ that maximises his utility.

Given this market of $n$ assets, the investor wants to maximise the utility of his wealth $x$ tomorrow, where
\[
x = \sum_{i=1}^{n} \theta_i d_i.
\]


Portfolio choice (4)

Hence, we want to solve

\[
\begin{array}{ll}
\text{maximise} & E[U(x)] \\
\text{subject to} & \sum_{i=1}^{n} \theta_i d_i = x \\
& \sum_{i=1}^{n} \theta_i P_i = W.
\end{array}
\]

This is a slight simplification of the problem in the book, but the solution will be the same.

We can rewrite the problem as
\[
\begin{array}{ll}
\text{maximise} & E\!\left[U\!\left(\sum_{i=1}^{n} \theta_i d_i\right)\right] \\
\text{subject to} & \sum_{i=1}^{n} \theta_i P_i = W.
\end{array}
\]
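A sketch of how this problem could be solved numerically (my own illustration, not part of the lecture; the two-state market, the CARA utility $U(x) = -e^{-ax}$, and all numbers are assumptions) hands the budget constraint to a generic constrained optimiser:

import numpy as np
from scipy.optimize import minimize

# Hypothetical two-state market.
D = np.array([[1.0, 1.0],      # security 1: payoff in states 1 and 2
              [2.0, 0.5]])     # security 2
P = np.array([0.95, 1.00])     # prices today
probs = np.array([0.6, 0.4])   # real-world state probabilities
W = 10.0                       # initial wealth
a = 0.5                        # risk aversion of the CARA utility U(x) = -exp(-a x)

def neg_expected_utility(theta):
    x = theta @ D                        # wealth tomorrow in each state
    return -(probs @ (-np.exp(-a * x)))  # minus E[U(x)]

budget = {"type": "eq", "fun": lambda theta: theta @ P - W}
res = minimize(neg_expected_utility, x0=np.array([W / P[0], 0.0]),
               constraints=[budget], method="SLSQP")

print(res.x)       # candidate optimal portfolio theta*
print(res.x @ D)   # optimal payoff x* in each state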


Portfolio choice (5)

To solve this problem we form the Lagrangian L:

\[
L = E\!\left[U\!\left(\sum_{i=1}^{n} \theta_i d_i\right)\right] - \lambda\left(\sum_{i=1}^{n} \theta_i P_i - W\right).
\]

We have
\[
\frac{\partial}{\partial \theta_i} E\!\left[U\!\left(\sum_{k=1}^{n} \theta_k d_k\right)\right]
= E\!\left[\frac{\partial}{\partial \theta_i} U\!\left(\sum_{k=1}^{n} \theta_k d_k\right)\right]
= E\!\left[U'\!\left(\sum_{k=1}^{n} \theta_k d_k\right) d_i\right]
= E[U'(x)\, d_i].
\]


Portfolio choice (6)

Hence the first-order conditions are
\[
\frac{\partial L}{\partial \theta_i} = E[U'(x^\star)\, d_i] - \lambda P_i = 0, \quad i = 1, 2, \ldots, n,
\]
\[
\frac{\partial L}{\partial \lambda} = -\sum_{i=1}^{n} \theta_i^\star P_i + W = 0.
\]

Here $\theta_i^\star$ is the optimal portfolio and $x^\star = \sum_{i=1}^{n} \theta_i^\star d_i$ is the optimal payoff.


Portfolio choice (7)

We can rewrite this as

\[
E[U'(x^\star)\, d_i] = \lambda P_i, \quad i = 1, 2, \ldots, n,
\]
\[
\sum_{i=1}^{n} \theta_i^\star P_i = W.
\]

In general we need to specify the utility function to get further.

In theory we solve the first $n$ equations to get $\theta_i^\star(\lambda)$, and then insert this in the last equation to get $\lambda$.

We can get some more information from these equations if we assume that there exists a risk-free asset.


Portfolio choice (8)

If we have a risk-free asset with total return $R_f$, then this security is defined by
\[
d = R_f \quad \text{and} \quad P = 1.
\]

Inserting this in the first equation yields
\[
E[U'(x^\star)\, R_f] = \lambda \cdot 1,
\]
or
\[
\lambda = R_f\, E[U'(x^\star)].
\]

It follows that
\[
E[U'(x^\star)\, d_i] = R_f\, E[U'(x^\star)]\, P_i, \quad i = 1, 2, \ldots, n.
\]


Log-optimal portfolio choice (1)

An important example is when we use the logarithmic utility function,

\[
U(x) = \ln x.
\]

To simplify, we assume that there exist only two possible outcomes at time 1 (state 1 occurring with probability $p$) and only two securities:

A risk-free asset with total return $R_f = 1$

A risky asset with
\[
d = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \quad \text{and} \quad P = 1.
\]


Log-optimal portfolio choice (2)

Let θ denote the amount invested in the risky asset.

Since both securities have price equal to 1, it follows that we invest $W - \theta$ in the risk-free asset.

It follows that

\[
x = \theta \begin{bmatrix} 2 \\ 0 \end{bmatrix} + (W - \theta) \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} W + \theta \\ W - \theta \end{bmatrix}.
\]


Log-optimal portfolio choice (3)

Hence,
\[
E[U(x)] = p \ln(W + \theta) + (1 - p) \ln(W - \theta).
\]

Maximising this over $\theta$ yields
\[
\frac{p}{W + \theta} - \frac{1 - p}{W - \theta} = 0,
\]
with solution
\[
\theta^\star = (2p - 1)W.
\]
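As a quick sanity check (my own addition, not in the lecture), a computer algebra system reproduces this solution of the first-order condition:

import sympy as sp

p, W = sp.symbols("p W", positive=True)
theta = sp.symbols("theta", real=True)

expected_utility = p * sp.log(W + theta) + (1 - p) * sp.log(W - theta)
foc = sp.diff(expected_utility, theta)       # first-order condition in theta
solution = sp.solve(sp.Eq(foc, 0), theta)

print(solution)   # expect theta* = (2p - 1)*W, up to how sympy arranges the factors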


Log-optimal portfolio choice (4)

An interpretation of this model is the following.

You are able to double your investment with probability $p$ (this is represented by the risky asset).

The risk-free asset is a model of money in your pocket (since the risk-free rate of return is equal to 0).

We can further interpret $2p - 1$ as the fraction of your wealth that you should invest in the risky asset.


Log-optimal portfolio choice (5)

Note that if p ∈ [0, 1/2), then the investment in the risky asset is negative.

But what if we are not allowed to short sell the risky asset?

If this is the case, then one can show that the optimal portfolio is

\[
(2p - 1)^+ = \max(2p - 1,\, 0) =
\begin{cases}
2p - 1 & \text{if } p \in [1/2, 1] \\
0 & \text{if } p \in [0, 1/2)
\end{cases}
\]

This is sometimes referred to as Kelly’s rule of betting.
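A minimal numerical check of the rule under the no-short-sale constraint (my own sketch; the grid search and the values of $p$ are assumptions, not from the lecture):

import numpy as np

def log_optimal_fraction(p, W=1.0, n_grid=100_001):
    # Grid-search max over 0 <= theta < W of p*ln(W+theta) + (1-p)*ln(W-theta).
    thetas = np.linspace(0.0, W, n_grid)[:-1]   # drop theta = W to avoid ln(0)
    values = p * np.log(W + thetas) + (1 - p) * np.log(W - thetas)
    return thetas[np.argmax(values)] / W        # optimal fraction of wealth

for p in [0.3, 0.5, 0.6, 0.8]:
    print(p, round(log_optimal_fraction(p), 3), max(2 * p - 1, 0.0))  # matches (2p-1)^+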


Pricing using utility functions (1)

So far we have used utility functions to find optimal portfolios given the assets' payoffs and prices.

Let us return to the equation
\[
E[U'(x^\star)\, d_i] = R_f\, E[U'(x^\star)]\, P_i.
\]

We can now reverse the problem, and ask the following question: Given the asset payoffs and the optimal portfolio payoff, what can be said about the prices of the assets?

Assume that we know the optimal payoff $x^\star$. Then the equation above gives us the price of security $i$.


Pricing using utility functions (2)

We can write (using $R_f = 1 + r_f$)
\[
P_i = \frac{E[U'(x^\star)\, d_i]}{R_f\, E[U'(x^\star)]}
= \frac{E[U'(x^\star)]\, E(d_i) + \mathrm{Cov}(U'(x^\star), d_i)}{R_f\, E[U'(x^\star)]}
= \frac{E(d_i)}{1 + r_f} + \mathrm{Cov}\!\left(\frac{1}{1 + r_f} \cdot \frac{U'(x^\star)}{E[U'(x^\star)]},\; d_i\right).
\]

The quantity
\[
m = \frac{1}{1 + r_f} \cdot \frac{U'(x^\star)}{E[U'(x^\star)]}
\]
is called a stochastic discount factor (SDF).
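A small numerical sketch of the SDF (my own illustration with made-up numbers): given state probabilities, an assumed optimal payoff $x^\star$, and log utility, build $m$ and check that the price $E(m d_i)$ agrees with the covariance decomposition above.

import numpy as np

probs = np.array([0.3, 0.5, 0.2])      # real-world state probabilities
x_star = np.array([8.0, 10.0, 13.0])   # assumed optimal payoff in each state
rf = 0.02                              # risk-free rate, R_f = 1 + r_f

marg_util = 1.0 / x_star                           # U'(x) = 1/x for log utility
m = marg_util / (probs @ marg_util) / (1.0 + rf)   # stochastic discount factor m

d = np.array([1.5, 1.0, 0.5])          # payoff d_i of some security

price_sdf = probs @ (m * d)                              # P_i = E(m d_i)
cov_md = probs @ (m * d) - (probs @ m) * (probs @ d)     # Cov(m, d_i)
price_decomp = (probs @ d) / (1.0 + rf) + cov_md         # E(d_i)/(1+r_f) + Cov(m, d_i)

print(price_sdf, price_decomp)         # the two expressions agree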


Pricing using utility functions (3)

In the relation
\[
P_i = \frac{E(d_i)}{1 + r_f} + \mathrm{Cov}(m, d_i)
\]
we see that the first term represents risk-less discounting (we use the expected value and discount using the risk-free rate).

If $d_i$ is deterministic, then this is the only term.

More generally this is the only term if $d_i$ is uncorrelated with $m$.

Using the SDF $m$ we can also write
\[
P_i = E(m d_i).
\]


Pricing using utility functions (4)

The total return of security $i$ is given by
\[
R_i = \frac{d_i}{P_i},
\]
and the expected return is
\[
E(R_i) = \frac{E(d_i)}{P_i}.
\]

By dividing by $P_i$ in
\[
P_i = \frac{E(d_i)}{1 + r_f} + \mathrm{Cov}(m, d_i)
\]
we get
\[
1 = \frac{E(R_i)}{1 + r_f} + \mathrm{Cov}(m, R_i).
\]


Pricing using utility functions (5)

This can be rewritten as
\[
E(R_i) = 1 + r_f - (1 + r_f)\,\mathrm{Cov}(m, R_i).
\]

Using $R_i = 1 + r_i$ we get
\[
E(1 + r_i) = 1 + r_f - (1 + r_f)\,\mathrm{Cov}(m, r_i + 1)
\]
\[
E(r_i) = r_f - (1 + r_f)\,\mathrm{Cov}(m, r_i)
\]
\[
E(r_i) = r_f + \mathrm{Cov}(-(1 + r_f)m,\; r_i).
\]

Compare this last equation with CAPM!


Arbitrage (1)

There are good and bad portfolio payoffs, and then there are payoffs that are too good to be true.

Assume again that we have $n$ securities with random payoffs $d_i$ and prices $P_i$. Assume that we can find a portfolio $\theta$ such that
\[
\sum_{i=1}^{n} \theta_i d_i \ge 0
\]
\[
\sum_{i=1}^{n} \theta_i d_i > 0 \ \text{with positive probability}
\]
\[
\sum_{i=1}^{n} \theta_i P_i \le 0.
\]

Such a portfolio is called an arbitrage opportunity or simply an arbitrage.
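In a finite state model (introduced below) one can search for such a portfolio with a linear program. This is my own sketch, not from the lecture; a strictly positive optimum certifies an arbitrage when every state has positive probability.

import numpy as np
from scipy.optimize import linprog

def find_arbitrage(D, P):
    # D is n x S (row i = payoff of security i across states), P has length n.
    # Look for theta with D.T @ theta >= 0, positive somewhere, and P @ theta <= 0.
    # Maximise the total payoff, capped at 1 per state so the LP stays bounded.
    n, S = D.shape
    c = -D.sum(axis=1)                       # linprog minimises, so negate
    A_ub = np.vstack([-D.T,                  # payoff >= 0 in every state
                      D.T,                   # payoff <= 1 in every state (cap)
                      P.reshape(1, -1)])     # price of the portfolio <= 0
    b_ub = np.concatenate([np.zeros(S), np.ones(S), [0.0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
    if res.success and -res.fun > 1e-9:
        return res.x                         # an arbitrage portfolio
    return None                              # no arbitrage found by this check

# Two securities, two states: security 2 is dominated by security 1 at the same price.
D = np.array([[1.0, 1.0],
              [1.0, 0.5]])
P = np.array([1.0, 1.0])
print(find_arbitrage(D, P))                  # e.g. long security 1, short security 2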


Arbitrage (2)

An arbitrage opportunity is too good to be true, and should not exist in a market in equilibrium.

As in the exact APT case we use the ruling out of arbitrage opportunities as a way of modelling a financial market in equilibrium.


Finite state models (1)

So far we have not made any explicit stochastic assumptions regarding the possible outcomes of the world.

We now assume that there exists a finite number $S$ of states that describe the possible outcomes.

The states are numbered $1, 2, \ldots, S$.

We can then describe the payoff $d$ of an asset as an $S$-dimensional vector:
\[
d = (d^1, d^2, \ldots, d^S).
\]

The interpretation is that we get the amount $d^s$ if state $s$ happens.


Finite state models (2)

What are the states?

This is up to the modeller to decide.

Example

We want to model the behavior of a stock market over 1 month and choose the states

1 = The market crashes

2 = The market goes down

3 = The market neither goes up nor down

4 = The market goes up


Finite state models (3)

Example

(Continued)

To model the status in 1 year of an individual having health insurance, we choose the states

1 = The individual is alive and healthy

2 = The individual is alive and ill

3 = The individual is dead


State prices (1)

An important set of assets in a finite state model are the elementary state securities or Arrow-Debreu assets $e_s$, $s = 1, 2, \ldots, S$.

These assets have payoff
\[
e_s = 1 \ \text{if state } s \text{ occurs, and } 0 \text{ otherwise}.
\]

Hence, $e_s$ is the standard unit vector in direction $s$ in $\mathbb{R}^S$:
\[
e_s = (0, \ldots, 0, 1, 0, \ldots, 0),
\]
with the $1$ in position $s$.


State prices (2)

We let $\psi_s$ denote the price of the security $e_s$.

The $\psi_s$'s are called state prices or Arrow-Debreu prices.

Using the elementary state securities we can write
\[
d = \sum_{s=1}^{S} d^s e_s
\]
for any security with payoff $d$.

Using linear pricing we get the price $P$ of $d$ as
\[
P = \sum_{s=1}^{S} d^s \psi_s.
\]
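As a tiny numerical illustration (made-up state prices and payoff, not from the lecture), pricing by state prices is again a dot product:

import numpy as np

psi = np.array([0.25, 0.45, 0.28])   # hypothetical strictly positive state prices
d = np.array([2.0, 1.0, 0.0])        # payoff d^s of some security in each state

price = d @ psi                       # P = sum_s d^s psi_s
print(price)                          # 0.95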


State prices (3)

Let us now connect state prices with arbitrage opportunities.

The following theorem explicitly gives this connection.

Theorem

A finite state model does not allow any arbitrage opportunities if and only if there exists a set of strictly positive state prices.


State prices (4)

Proof.

Assume that there exists a set of strictly positive state prices. If $d$ is a security such that $d \ge 0$, then the price $P$ of $d$ is given by
\[
P = \sum_{s=1}^{S} \psi_s d^s
\begin{cases}
= 0 & \text{if } d = 0 \\
> 0 & \text{if } d \neq 0.
\end{cases}
\]

Hence, there cannot exist any arbitrage opportunities when there exists a set of strictly positive state prices.

To show the reverse implication, one can use separating hyperplanes.


Risk-neutral pricing (1)

Let us return to the expression
\[
P = \sum_{s=1}^{S} \psi_s d^s.
\]

Assume that the model does not allow any arbitrage opportunities, i.e. we have a set of strictly positive state prices.

Before we continue, we note that a security that gives 1 for sure tomorrow has price
\[
\frac{1}{R_f} = \sum_{s=1}^{S} \psi_s \cdot 1 = \sum_{s=1}^{S} \psi_s.
\]


Risk-neutral pricing (2)

Now we can write
\[
P = \sum_{s=1}^{S} \psi_s d^s
= \left(\sum_{s=1}^{S} \psi_s\right) \sum_{s=1}^{S} \frac{\psi_s}{\sum_{i=1}^{S} \psi_i}\, d^s
= \frac{1}{R_f} \sum_{s=1}^{S} q_s d^s,
\]
where
\[
q_s = \frac{\psi_s}{\sum_{i=1}^{S} \psi_i}.
\]
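Continuing the small state-price example above (same made-up numbers), the risk-free return and the risk-neutral probabilities follow directly from the state prices:

import numpy as np

psi = np.array([0.25, 0.45, 0.28])   # strictly positive state prices (as before)
d = np.array([2.0, 1.0, 0.0])        # payoff being priced

Rf = 1.0 / psi.sum()                 # since 1/R_f = sum_s psi_s
q = psi / psi.sum()                  # risk-neutral probabilities q_s

price_state = d @ psi                # P = sum_s psi_s d^s
price_rn = (q @ d) / Rf              # P = (1/R_f) sum_s q_s d^s

print(q, q.sum())                    # nonnegative and sums to 1
print(price_state, price_rn)         # both give the same price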


Risk-neutral pricing (3)

Note that the $q_s$'s satisfy
\[
q_s \ge 0 \quad \text{and} \quad \sum_{s=1}^{S} q_s = 1.
\]

Hence, we can interpret them as probabilities.

They are called risk-neutral probabilities, and the pricing formula
\[
P = \frac{1}{R_f} \sum_{s=1}^{S} q_s d^s = \frac{1}{R_f}\, E(x)
\]
is referred to as risk-neutral pricing. Here $x$ is the random payoff that takes the value $d^s$ in state $s$, and $E$ means that we should use the $q_s$'s when calculating the expected value.


Risk-neutral pricing (4)

Why is this called ‘risk-neutral’ pricing?

The reason is that we take the expected value and discount using the risk-free rate.

This is exactly how we would have done it if we didn't care about risk – if we were risk-neutral.

Finally we note that, when pricing using utility functions, we have the relation
\[
q_s = \frac{p_s\, U'(x^\star_s)}{\sum_{i=1}^{S} p_i\, U'(x^\star_i)},
\]
where $p_s$ is the probability that state $s$ will occur and $x^\star_s$ is the optimal payoff in state $s$.
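A final numerical sketch (my own, reusing the assumed log-utility numbers from the SDF example above) shows how the risk-neutral probabilities tilt the real-world ones towards states where marginal utility is high:

import numpy as np

probs = np.array([0.3, 0.5, 0.2])      # real-world probabilities p_s (assumed)
x_star = np.array([8.0, 10.0, 13.0])   # assumed optimal payoff x*_s in each state

marg_util = 1.0 / x_star                            # U'(x) = 1/x for log utility
q = probs * marg_util / (probs @ marg_util)         # q_s = p_s U'(x*_s) / sum_i p_i U'(x*_i)

print(q, q.sum())   # sums to 1; the low-wealth state gets more weight than under p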


Pricing – a summary

We have seen three different ways of pricing risky assets:

Adjust the discount rates (CAPM)

Adjust the cash flow (Certainty-equivalent version of CAPM and pricing using utility functions)

Adjust the probabilities (risk-neutral pricing)
