Learning and Global Dynamics

James Bullard

10 February 2007

Outline: Environment, Steady states, Learning, Simulations, Conclusion

Learning and Global Dynamics - St. Louis Fed




Learning and global dynamics

The paper for this lecture is "Liquidity Traps, Learning and Stagnation," by George Evans, Eran Guse, and Seppo Honkapohja.

This will serve as an introduction to some key ideas in the learning literature.

The main idea is to study stability under learning of systems analyzed by Benhabib, Schmitt-Grohe, and Uribe (2001, JET and elsewhere).

The learning dynamics give a different perspective from the RE dynamics.



What Benhabib, Schmitt-Grohe, and Uribe said

Monetary models normally impose a Fisher relation R = ρ + π and a zero lower bound on nominal interest rates R > 0.

Many analyses include a continuous, "active" Taylor-type monetary policy rule with R′(π*) > 1, where π* is the target inflation rate of the monetary authority.

Main point: this combination of assumptions always implies the existence of a second steady-state inflation rate π_L < π*.

Perfect foresight equilibria may exist in which inflation begins in the neighborhood of π* but converges asymptotically to π_L along an oscillatory path.



Benhabib, et al.: Existence of a “liquidity trap”


Benhabib, et al., Figure 3


Japan

[Figure: Japan, 1992-2006. Gross domestic product, % change year to year (SAAR, bil. chained 2000 yen), and the uncollateralized overnight call rate, % p.a. Source: OECD/Haver.]


A learning application?

The rational expectations dynamics suggested by Benhabib, et al., seem unlikely.

With multiple steady states, which one would be attained in an actual economy?

Would it be possible under learning to switch from a neighborhood of one steady state to a neighborhood of the other?

How do policy choices influence these dynamics?



Main ideas

A sticky-price, stochastic, discrete-time version of Benhabib, et al. (2001).

Replace rational expectations with recursive learning.

Under "normal policy," the economy will converge to the targeted steady state, and agents will behave as if they have rational expectations.

Large, pessimistic shocks can send the economy on a path toward the low-inflation steady state.

Alternative policies may eliminate this possibility.



Background

The idea is to get a model like the ones in this literature that is also amenable to recursive learning analysis.

A continuum of household-firms each produces a differentiated consumption good under monopolistic competition.

There is no capital; production is simply

y_{t,j} = h_{t,j}^α   (1)

where h_{t,j} is the labor input.

The labor market is competitive.



More background

Firms face downward-sloping demand

P_{t,j} = (y_{t,j}/Y_t)^{-1/ν} P_t.   (2)

P_{t,j} is the profit-maximizing price set by firm j.

The elasticity of substitution between goods is given by ν > 1.

Price adjustment costs are of the Rotemberg type.



Households maximize

E_0 ∑_{t=0}^∞ β^t U_{t,j}( c_{t,j}, M_{t-1,j}/P_t, h_{t,j}, P_{t,j}/P_{t-1,j} - 1 )   (3)

subject to

c_{t,j} + m_{t,j} + b_{t,j} + τ_{t,j} = m_{t-1,j} π_t^{-1} + R_{t-1} π_t^{-1} b_{t-1,j} + (P_{t,j}/P_t) y_{t,j}   (4)

where

U_{t,j} = c_{t,j}^{1-σ_1}/(1-σ_1) + [χ/(1-σ_2)] (M_{t-1,j}/P_t)^{1-σ_2} - h_{t,j}^{1+ε}/(1+ε) - (γ/2) ( P_{t,j}/P_{t-1,j} - 1 )^2.   (5)

Notation is standard; the last term in utility is the Rotemberg cost of price adjustment.



Fiscal policy

The government budget constraint is

b_t + m_t + τ_t = g_t + m_{t-1} π_t^{-1} + R_{t-1} π_t^{-1} b_{t-1}   (6)

where τ_t is a lump-sum tax and g_t is government consumption.

Assume government consumption is stochastic,

g_t = g + u_t   (7)

where u_t is white noise.

Assume fiscal policy follows

τ_t = κ_0 + κ b_{t-1} + ψ_t + η_t   (8)

a linear tax rule as in Leeper (1991).



Monetary policy

Monetary policy follows a global interest rate rule

R_t - 1 = θ_t f(π_t),   (9)

where f(π) is non-negative and non-decreasing.

θ_t is an exogenous iid positive random shock with mean 1.

The monetary authority has an inflation target π*, where R* = β^{-1} π* and f(π*) = R* - 1.

For some purposes,

f(π) = (R* - 1) (π/π*)^{A R*/(R* - 1)}   (10)

where f′(π*) = A R* is assumed larger than β^{-1}.
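As a quick numerical check of the rule in (10), the sketch below locates the low steady state as the second intersection of the policy curve R = 1 + f(π) with the Fisher line R = β^{-1}π. The parameter values (β, π*, A) are illustrative assumptions, not the paper's calibration.

```python
# Two steady states from the Fisher relation R = pi/beta and the rule
# R = 1 + f(pi), with f as in equation (10). Parameters are illustrative.
beta, pi_star, A = 0.99, 1.02, 2.5
R_star = pi_star / beta                   # R* = beta^{-1} pi*

def f(pi):
    """Policy rule, eq. (10): f(pi) = (R* - 1) (pi/pi*)^{A R*/(R* - 1)}."""
    return (R_star - 1.0) * (pi / pi_star) ** (A * R_star / (R_star - 1.0))

def gap(pi):
    """Policy curve 1 + f(pi) minus the Fisher line pi/beta; zero at a steady state."""
    return 1.0 + f(pi) - pi / beta

def bisect(g, lo, hi, tol=1e-12):
    """Simple bisection; assumes g(lo) and g(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) * g(lo) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

pi_L = bisect(gap, 0.5, pi_star - 1e-6)   # low (liquidity-trap) steady state
print(f"pi* = {pi_star:.4f} (gap {gap(pi_star):+.1e}), pi_L = {pi_L:.4f} < pi*")
```

With these values the low steady state lands slightly below 1, i.e. mild deflation, as in the Benhabib, et al. analysis.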



Normal policy

Normal policy consists of the government budget constraint, the fiscal policy rule, and the monetary policy rule.

The baseline analysis is under normal policy.



Equilibrium

Private sector optimization yields three equations given in the text.

Combine these three with the government budget constraint, the fiscal policy rule, the monetary policy rule, and market clearing.



Benhabib et al. (2001)

If f(π) is continuous, differentiable, and has a steady state π* at which f′(π*) > β^{-1}, a second steady state π_L exists with f′(π_L) < β^{-1}.

At both steady states, R = β^{-1} π.

Unique values c > 0 and h > 0 are associated with positive steady-state inflation rates.

At deflationary steady states, c > 0 and h > 0 are unique provided π is close to one and g > 0.

Corresponding stochastic steady states exist when the support of the exogenous shocks is sufficiently small.



Linearization

Linearization produces a decoupled system of four equations in c, π, b, and m.

Equilibrium dynamics can be analyzed by considering the equations for c and π alone, provided debt dynamics are stationary.

The system can be written as

( c_t )   ( B_cc  B_cπ ) ( c^e_{t+1} )   ( G_cu  G_cθ ) ( u_t )   ( k̃_c )
( π_t ) = ( B_πc  B_ππ ) ( π^e_{t+1} ) + ( G_πu  G_πθ ) ( θ_t ) + ( k̃_π ).



Determinacy

If both eigenvalues of B lie inside the unit circle, a unique nonexplosive solution exists of the form

( c_t )   ( c̄ )   ( G_cu  G_cθ ) ( u_t )
( π_t ) = ( π̄ ) + ( G_πu  G_πθ ) ( θ_t ).   (11)

The corresponding m_t is a constant plus white noise.

The remaining condition for determinacy is that fiscal policy is "passive" according to Leeper (1991), which means that

|β^{-1} - κ| < 1.
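The eigenvalue condition is easy to check directly: for a 2x2 matrix, both eigenvalues lie inside the unit circle exactly when |det B| < 1 and |tr B| < 1 + det B (the Jury conditions). The sketch below applies this test together with the passive-fiscal condition; the entries of B are hypothetical placeholders, not coefficients derived from the model.

```python
# Determinacy check: both eigenvalues of the 2x2 matrix B must lie inside the
# unit circle, which for a 2x2 matrix is equivalent to the Jury conditions
# |det B| < 1 and |tr B| < 1 + det B. The entries of B are placeholders.

def eigenvalues_inside_unit_circle(B):
    """Jury/Schur-Cohn test for B = [[a, b], [c, d]]."""
    (a, b), (c, d) = B
    tr, det = a + d, a * d - b * c
    return abs(det) < 1.0 and abs(tr) < 1.0 + det

def fiscal_policy_passive(beta, kappa):
    """Leeper (1991) passivity condition: |beta^{-1} - kappa| < 1."""
    return abs(1.0 / beta - kappa) < 1.0

B = [[0.5, 0.2], [0.1, 0.6]]                         # placeholder coefficients
print(eigenvalues_inside_unit_circle(B))             # eigenvalues 0.7 and 0.4: True
print(fiscal_policy_passive(beta=0.99, kappa=1.0))   # True
```

With these placeholder values both checks pass; a κ far from β^{-1} would fail the passivity test.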



Proposition 1

Assume fiscal policy is passive.

Assume γ > 0 is sufficiently small.

Then the steady state with inflation at target, π = π*, is locally determinate.

And the steady state with inflation π = π_L is locally indeterminate.



Perceived law of motion

Equilibria in this model are simple iid processes.

This implies agents can forecast by estimating mean values. Very helpful.

The hallmark of the literature is the assignment of a perceived law of motion

π^e_{t+1} = π^e_t + φ_t ( π_{t-1} - π^e_t ),   (12)

c^e_{t+1} = c^e_t + φ_t ( c_{t-1} - c^e_t ).   (13)

Here φ_t is the gain sequence.
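The updating rules (12)-(13) can be sketched as a short recursion. The observations here are artificial iid draws around an assumed steady state, standing in for realized inflation and consumption:

```python
# Recursive mean estimation as in (12)-(13). The "observations" are artificial
# iid draws around an assumed steady state (1.02, 0.35); with phi_t = 1/t the
# update is equivalent to the recursive sample mean.
import random

random.seed(0)
pi_e, c_e = 1.00, 0.30                        # assumed initial expectations
for t in range(1, 5001):
    phi = 1.0 / t                             # decreasing gain
    pi_obs = 1.02 + random.gauss(0.0, 0.01)   # stand-in for observed inflation
    c_obs = 0.35 + random.gauss(0.0, 0.01)    # stand-in for observed consumption
    pi_e += phi * (pi_obs - pi_e)             # eq. (12)
    c_e += phi * (c_obs - c_e)                # eq. (13)

print(round(pi_e, 3), round(c_e, 3))          # close to the true means (1.02, 0.35)
```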



Gain sequences

Recursive least squares learning sets φ_t = 1/t.

Asymptotic convergence to rational expectations is possible.

Recursive constant gain learning sets φ_t = φ > 0, a small positive constant.

This is more robust to structural change, but its convergence properties are weaker.

Theorems: least squares learning. Simulations: constant gain.
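The trade-off shows up in a toy example: on data whose mean shifts mid-sample, the decreasing gain settles near the full-sample average, while a constant gain tracks the new mean. The break point and gain value below are illustrative:

```python
# Decreasing gain (phi_t = 1/t) versus constant gain (phi_t = phi): the
# decreasing gain reproduces the full-sample mean, while the constant gain
# keeps tracking after a structural break. Break and gain are illustrative.

def learn(observations, gain):
    """Run the recursive update; gain=None means phi_t = 1/t."""
    est = observations[0]
    for t, x in enumerate(observations[1:], start=2):
        phi = gain if gain is not None else 1.0 / t
        est += phi * (x - est)
    return est

data = [1.02] * 500 + [0.99] * 500    # mean shifts mid-sample (a "break")
print(round(learn(data, None), 3))    # ~1.005: stuck near the full-sample mean
print(round(learn(data, 0.05), 3))    # ~0.99: constant gain tracks the new mean
```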

Page 66: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Gain sequences

Recursive least squares learning sets φt = 1/t.Asymptotic convergence to rational expectations possible

Recursive constant gain learning sets φt = φ > 0, a smallpositive constant.More robust to structural change. Convergence propertiesweaker.Theorems: LSL. Simulations: Constant gain.

Page 67: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Gain sequences

Recursive least squares learning sets φt = 1/t.Asymptotic convergence to rational expectations possibleRecursive constant gain learning sets φt = φ > 0, a smallpositive constant.

More robust to structural change. Convergence propertiesweaker.Theorems: LSL. Simulations: Constant gain.

Page 68: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Gain sequences

Recursive least squares learning sets φt = 1/t.Asymptotic convergence to rational expectations possibleRecursive constant gain learning sets φt = φ > 0, a smallpositive constant.More robust to structural change. Convergence propertiesweaker.

Theorems: LSL. Simulations: Constant gain.

Page 69: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Gain sequences

Recursive least squares learning sets φt = 1/t.Asymptotic convergence to rational expectations possibleRecursive constant gain learning sets φt = φ > 0, a smallpositive constant.More robust to structural change. Convergence propertiesweaker.Theorems: LSL. Simulations: Constant gain.
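A useful way to see the difference (an illustrative check, not from the paper): with the decreasing gain φ_t = 1/t, the recursion in equations (12)–(13) reproduces exactly the sample mean of past observations, which is why asymptotic convergence is possible; a constant gain instead discounts old data geometrically.

```python
def learn(observations, gain_fn):
    """Run the scalar expectation recursion x_e <- x_e + phi_t * (obs - x_e)."""
    x_e = observations[0]                  # initialize at the first observation
    for t, obs in enumerate(observations[1:], start=2):
        x_e += gain_fn(t) * (obs - x_e)
    return x_e

data = [1.0, 3.0, 2.0, 6.0]

# Decreasing gain 1/t: the estimate is exactly the sample mean.
mean_estimate = learn(data, lambda t: 1.0 / t)   # -> 3.0

# Constant gain: a geometrically weighted average, tilted toward recent data.
cg_estimate = learn(data, lambda t: 0.1)
```

The 1/t case is just the recursive form of averaging; the constant-gain case never stops adjusting, which is what makes it robust to structural change but noisier in the limit.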

Expectational stability

Approximate π_{t+1}^{−1} c_{t+1}^{−σ_1} by [π^e_{t+1} (c^e_{t+1})^{σ_1}]^{−1}. This changes the dynamic system slightly; the linearization is unchanged.

The system is now the two altered equations for c and π,

π_t = F_π (π^e_{t+1}, c^e_{t+1}, u_t, θ_t),   (14)
c_t = F_c (π^e_{t+1}, c^e_{t+1}, u_t, θ_t),   (15)

the monetary policy rule, and the updating equations for expectations.

More on expectational stability

The REE is said to be expectationally stable if the differential equation in notional time τ,

dπ^e/dτ = T_π (π^e, c^e) − π^e,
dc^e/dτ = T_c (π^e, c^e) − c^e,   (16)

is locally asymptotically stable at a steady state (π, c).

Expectational stability is determined by the Jacobian matrix DT of T at the steady state, which is approximately B for small noise. The condition is then that both eigenvalues of B − I have real parts less than zero.

Proposition 2. For γ > 0 sufficiently small, the steady state at π = π* is locally stable under learning and the steady state at π_L is locally unstable, taking the form of a saddle point.

Implications

One could stop here and claim that the dynamics constructed by Benhabib et al. (2001) are unlikely to be observed in actual economies. One could also claim that the liquidity traps that come out of this model are theoretical curiosities that need not worry actual policymakers.

The authors take a different course, pointing out that certain regions of instability exist. They want to design policy to eliminate these regions of instability, and they simulate the global dynamics with larger values of γ > 0.

Figure 1

More aggressive monetary policy

Change the monetary policy rule to

R_t = 1 + θ_t f(π_t)   if π_t > π̃,
R_t = R̂               if π_t < π̃,

with R̂ ≤ R_t ≤ 1 + θ_t f(π_t) if π_t = π̃. The authors choose 1 < R̂ < min{1 + f(π_t), β^{−1} π̃}.

The idea is to follow normal policy when π_t ≥ π̃, but cut interest rates to a low level if inflation threatens to move below the threshold.
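The piecewise rule can be sketched as a simple function (an illustrative parameterization: the linear f, the gain θ, and the numerical values below are placeholders, not the paper's calibration):

```python
def interest_rate(pi, theta, f, pi_tilde, R_hat):
    """Aggressive rule: normal Taylor-type response above the inflation
    threshold pi_tilde, rate pinned at the low level R_hat below it."""
    if pi > pi_tilde:
        return 1.0 + theta * f(pi)
    if pi < pi_tilde:
        return R_hat
    # At pi == pi_tilde the rule permits any rate between the two branches;
    # pick the floor here for concreteness.
    return R_hat

# Illustrative linear f and hypothetical parameters.
f = lambda pi: 0.5 * pi
print(interest_rate(2.0, 1.0, f, 1.0, 1.001))   # normal policy: 2.0
print(interest_rate(0.5, 1.0, f, 1.0, 1.001))   # below threshold: 1.001
```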

More aggressive monetary policy (figure)

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Altered monetary and fiscal policy

In this policy, interest rates are aggressively lowered asdescribed above.

If this does not work, government expenditures areincreased until inflation increases to the desired level.See Lemma 4.This can again create more than two steady statesdepending on the choice of π̃.

Page 87: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Altered monetary and fiscal policy

In this policy, interest rates are aggressively lowered asdescribed above.If this does not work, government expenditures areincreased until inflation increases to the desired level.

See Lemma 4.This can again create more than two steady statesdepending on the choice of π̃.

Page 88: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Altered monetary and fiscal policy

In this policy, interest rates are aggressively lowered asdescribed above.If this does not work, government expenditures areincreased until inflation increases to the desired level.See Lemma 4.

This can again create more than two steady statesdepending on the choice of π̃.

Page 89: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Altered monetary and fiscal policy

In this policy, interest rates are aggressively lowered asdescribed above.If this does not work, government expenditures areincreased until inflation increases to the desired level.See Lemma 4.This can again create more than two steady statesdepending on the choice of π̃.
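The monetary–fiscal combination amounts to a two-lever decision rule: cut rates to the floor first, and only then expand spending. A purely illustrative sketch (the variable names, increment, and numbers are hypothetical; the paper's Lemma 4 gives the formal construction):

```python
def policy_response(pi, R, R_hat, pi_tilde, g, g_step):
    """If inflation is below the threshold and the interest rate is already
    at its aggressive floor R_hat, raise government spending g instead."""
    if pi >= pi_tilde:
        return R, g                # normal times: leave spending alone
    if R > R_hat:
        return R_hat, g            # first lever: cut rates to the floor
    return R_hat, g + g_step       # rates exhausted: fiscal expansion

# Hypothetical case: inflation below threshold, rate already at the floor,
# so the fiscal lever engages.
R, g = policy_response(pi=0.99, R=1.001, R_hat=1.001, pi_tilde=1.0,
                       g=2.0, g_step=0.5)
print(R, g)   # 1.001 2.5
```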

Figure 4

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”

How serious a problem is this?The Japanese data are alarming.This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.The targeted, high inflation steady state would be locallystable in the learning dynamics.The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.

Page 92: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”How serious a problem is this?

The Japanese data are alarming.This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.The targeted, high inflation steady state would be locallystable in the learning dynamics.The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.

Page 93: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”How serious a problem is this?The Japanese data are alarming.

This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.The targeted, high inflation steady state would be locallystable in the learning dynamics.The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.

Page 94: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”How serious a problem is this?The Japanese data are alarming.This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.

The targeted, high inflation steady state would be locallystable in the learning dynamics.The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.

Page 95: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”How serious a problem is this?The Japanese data are alarming.This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.The targeted, high inflation steady state would be locallystable in the learning dynamics.

The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.

Page 96: Learning and Global Dynamics - St. Louis Fed

Learning and Global Dynamics Environment Steady states Learning Simulations Conclusion

Conclusion

Multiple equilibria, one of which is a “liquidity trap.”How serious a problem is this?The Japanese data are alarming.This paper suggests the Benhabib et al., 2001, dynamics arenot robust to small changes in expectational assumptions.The targeted, high inflation steady state would be locallystable in the learning dynamics.The possibility of deflationary spirals would still existhowever, unless policy is chosen carefully.