
Journal of Economic Dynamics and Control 11 (1987) 465-481. North-Holland

A PROCEDURE FOR DIFFERENTIATING PERFECT-FORESIGHT-MODEL

REDUCED-FORM COEFFICIENTS*

Gary ANDERSON

Federal Reserve Board, Washington, DC 20551, USA

Received January 1987, final version received August 1987

Anderson and Moore (1985) present a fail-safe method for analyzing any linear perfect-foresight model. They describe a procedure which either computes the reduced-form solution or indicates why the model has no reduced form. This paper presents formulae for differentiating singular vectors and vectors spanning an invariant space, and shows how to use these formulae to differentiate Anderson and Moore's structural-model to reduced-form-model transformation.

1. Introduction and summary

Anderson and Moore (1985) present a fourteen-step algorithm for computing solutions for linear perfect-foresight models. The algorithm computes the reduced form for any linear perfect-foresight model that has a unique solution. This transformation is an important part of many operations performed on rational-expectations models.

This paper extends the fourteen-step algorithm so that it will also compute first and second derivatives of the structural-model to reduced-form-model transformation. One can use the first and second derivatives to investigate the identification of and facilitate the estimation of rational-expectations models.

The paper has five sections and an appendix. Section 2 provides a canonical representation for linear perfect-foresight models and their reduced forms, and it identifies necessary and sufficient conditions for the linear perfect-foresight model to have a unique reduced form. Section 3 presents the fourteen-step procedure augmented to compute first and second derivatives of the reduced-form coefficient matrices. Section 4 presents several important formulae needed for implementing the augmented fourteen-step procedure: a formula for the first and second derivatives of left and right singular vectors and for the first and second derivatives of vectors spanning an invariant space of a matrix.

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve System. I would like to thank George Moore and an anonymous referee for their helpful comments and John Ammer and William Goffe for their valuable programming assistance.

0165-1889/87/$3.50©1987, Elsevier Science Publishers B.V. (North-Holland)


Section 5 provides a brief conclusion. The appendix contains a derivation of the formulae presented in section 4.

2. The structural-model to reduced-form-model transformation

Anderson and Moore (1985) outline a procedure that computes solutions for structural models of the form

$$\sum_{i=-\tau}^{\theta} H_i X_{t+i} = 0, \qquad t \ge 0, \tag{1}$$

with initial conditions

$$X_i = x_i, \qquad i = -\tau, \ldots, -1,$$

where both $\tau$ and $\theta$ are non-negative, and $X_t$ is an $L$-dimensional real vector. Let the $L$ by $L$ real coefficient matrices $H_i$, $i = -\tau, \ldots, \theta$, satisfy the following two restrictions:

1. The origin is the unique steady state of eq. (1); that is, $\sum_{i=-\tau}^{\theta} H_i x = 0$ only for $x = 0$.

2. Corresponding to any initial conditions $X_i = x_i$, $i = -\tau, \ldots, -1$, eq. (1) has a unique solution $X_t$, $t \ge 0$, such that

$$\lim_{t \to \infty} X_t = 0.$$

Anderson and Moore (1985) demonstrate that a model satisfying these two conditions has a reduced-form representation,

$$X_t = \sum_{i=-\tau}^{-1} B_i X_{t+i}, \qquad t \ge 0,$$

generating the unique solution $X_t$, $t \ge 0$, such that

$$\lim_{t \to \infty} X_t = 0.$$

They present a constructive fourteen-step procedure for analyzing linear perfect-foresight models. Given the coefficient matrix

$$[H_{-\tau} \; \cdots \; H_\theta],$$

the procedure computes the reduced-form coefficient matrix

$$[B_{-\tau} \; \cdots \; B_{-1}],$$


for any model satisfying Assumptions 1 and 2. If the model does not satisfy Assumptions 1 and 2, the procedure indicates whether there are no convergent solutions or a multiplicity of convergent solutions.
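To fix ideas, the following minimal sketch (Python, with an illustrative $L = 2$, $\tau = 2$ and made-up reduced-form coefficients; none of these values come from the paper) shows how such a reduced form is used once it has been computed: the unique convergent path is generated by iterating the recursion $X_t = \sum_{i=-\tau}^{-1} B_i X_{t+i}$ forward from the initial conditions.

```python
import numpy as np

# Illustrative values only: L = 2 variables, tau = 2 lags, no leads in the reduced form.
# B stacks the reduced-form coefficient matrices [B_{-2}  B_{-1}].
L, tau = 2, 2
B = np.array([[0.10, 0.00, 0.50, 0.20],
              [0.05, 0.10, 0.00, 0.40]])

# Initial conditions x_{-2} and x_{-1}, stacked into one state vector.
state = np.array([1.0, 0.5,      # x_{-2}
                  0.8, 0.3])     # x_{-1}

# Iterate X_t = B_{-2} X_{t-2} + B_{-1} X_{t-1} forward from the initial conditions.
path = []
for t in range(25):
    x_t = B @ state
    path.append(x_t)
    state = np.concatenate([state[L:], x_t])   # drop the oldest lag, append X_t

print(path[-1])   # decays toward the origin for this illustrative, stable B
```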

Inferring the sensitivity of reduced-form coefficients to changes in structural-form parameters requires a straightforward (though intensive) application of the chain rule of differentiation. The only real complication arises because one needs the derivatives of vectors spanning an invariant space and the derivatives of singular vectors with respect to structural parameters. In the next section, we assume these derivatives are available and defer the discussion of these details until section 4 and the appendix. The next section focuses on how to augment the control structure of the procedure for transforming the structural model into a reduced-form representation.

3. Summary of the augmented procedure

This section describes how to augment each step of the fourteen-step procedure presented in Anderson and Moore (1985) so that the procedure will compute the first derivative of the reduced form with respect to a generic scalar parameter $\alpha_i$ and the second derivative with respect to generic scalar parameters $\alpha_i, \alpha_j$.

3.1. Initialization

1. Verify that $\sum_{i=-\tau}^{\theta} H_i$ is full rank. If it is singular, the steady state is not unique; stop.

2. If it is non-singular, initialize

$$H := [H_{-\tau} \; \cdots \; H_\theta], \qquad \frac{\partial H}{\partial \alpha_i} := \left[\frac{\partial H_{-\tau}}{\partial \alpha_i} \; \cdots \; \frac{\partial H_\theta}{\partial \alpha_i}\right], \qquad \frac{\partial^2 H}{\partial \alpha_i \partial \alpha_j} := \left[\frac{\partial^2 H_{-\tau}}{\partial \alpha_i \partial \alpha_j} \; \cdots \; \frac{\partial^2 H_\theta}{\partial \alpha_i \partial \alpha_j}\right],$$

$$Q := \text{null matrix}, \qquad \frac{\partial Q}{\partial \alpha_i} := \text{null matrix}, \qquad \frac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j} := \text{null matrix}.$$


3.2. Auxiliary initial conditions

1. Compute the singular values, $\{\mu_i: i = 1, \ldots, L\}$, and left singular vectors, $V$, of $H_\theta$. Sort the $\mu_i$ small-to-large and order the columns of $V$ conformably. If $\mu_i \ne 0$, $i = 1, \ldots, L$, then $H_\theta$ is non-singular; go to step 1 of section 3.3.

2. $H_\theta$ is singular. Compute the first and second derivatives of the left singular vectors of $H_\theta$ using the formulae presented in section 4. Premultiply the coefficient matrix by $V^T$ to annihilate $L - \operatorname{rank}(H_\theta)$ rows of $H_\theta$,

$$H := V^T H.$$

Update the $\partial H/\partial \alpha_i$ matrix using the formula

$$\frac{\partial H}{\partial \alpha_i} := \frac{\partial V^T}{\partial \alpha_i} H + V^T \frac{\partial H}{\partial \alpha_i}.$$

Update the $\partial^2 H/\partial \alpha_i \partial \alpha_j$ matrix using the formula

$$\frac{\partial^2 H}{\partial \alpha_i \partial \alpha_j} := \frac{\partial^2 V^T}{\partial \alpha_i \partial \alpha_j} H + \frac{\partial V^T}{\partial \alpha_i}\frac{\partial H}{\partial \alpha_j} + \frac{\partial V^T}{\partial \alpha_j}\frac{\partial H}{\partial \alpha_i} + V^T \frac{\partial^2 H}{\partial \alpha_i \partial \alpha_j}.$$

3. Partition the coefficient matrix as

$$\begin{bmatrix} \begin{matrix} q & s \end{matrix} \\ r \end{bmatrix} := H,$$

where

$$s = \frac{\partial s}{\partial \alpha_i} = \frac{\partial^2 s}{\partial \alpha_i \partial \alpha_j} = 0.^1$$

¹If $\partial s/\partial \alpha_i$ or $\partial^2 s/\partial \alpha_i \partial \alpha_j$ is non-zero, then computing the first or second derivatives is more complicated. In this case a small perturbation of the generic parameters $\alpha_i$ and $\alpha_j$ changes the rank of $H_\theta$. The derivation of formulae for treating this case should exploit the fact that some of the auxiliary initial conditions for the original linear perfect-foresight model will become vectors in the invariant space associated with large eigenvalues for the slightly perturbed model: $\alpha_i + \epsilon_i$ and $\alpha_j + \epsilon_j$. One should elaborate the algorithm for computing the state-space transition matrix so that it

- evaluates the singularity of $H_\theta$ for the $\alpha_i + \epsilon_i$ and $\alpha_j + \epsilon_j$ version of the model as well as the $\alpha_i$ and $\alpha_j$ version;

- keeps track of those auxiliary initial conditions corresponding to rows of $H$ with $\partial s/\partial \alpha_i$ or $\partial^2 s/\partial \alpha_i \partial \alpha_j \ne 0$; and

- keeps track of the quantities $\partial s/\partial \alpha_i$ or $\partial^2 s/\partial \alpha_i \partial \alpha_j \ne 0$.

One can derive explicit formulae for the limits of the derivatives of the new eigenvectors as $\epsilon_i$ and $\epsilon_j$ approach zero, thus avoiding the computational burden of computing explicit solutions for the perturbed model.


The matrix $q$ has $L - \operatorname{rank}(H_\theta)$ rows and $L(\tau + \theta)$ columns; $r$ has $\operatorname{rank}(H_\theta)$ rows and $L(\tau + 1 + \theta)$ columns.

Partition the coefficient derivative matrices as

$$\begin{bmatrix} \begin{matrix} \dfrac{\partial q}{\partial \alpha_i} & \dfrac{\partial s}{\partial \alpha_i} \end{matrix} \\ \dfrac{\partial r}{\partial \alpha_i} \end{bmatrix} := \frac{\partial H}{\partial \alpha_i}, \qquad \begin{bmatrix} \begin{matrix} \dfrac{\partial^2 q}{\partial \alpha_i \partial \alpha_j} & \dfrac{\partial^2 s}{\partial \alpha_i \partial \alpha_j} \end{matrix} \\ \dfrac{\partial^2 r}{\partial \alpha_i \partial \alpha_j} \end{bmatrix} := \frac{\partial^2 H}{\partial \alpha_i \partial \alpha_j}.$$

4. Include $q$ among the auxiliary initial conditions,

$$Q := \begin{bmatrix} Q \\ q \end{bmatrix}.$$

Include $\partial q/\partial \alpha_i$ and $\partial^2 q/\partial \alpha_i \partial \alpha_j$ among the derivatives of the auxiliary initial conditions,

$$\frac{\partial Q}{\partial \alpha_i} := \begin{bmatrix} \dfrac{\partial Q}{\partial \alpha_i} \\[4pt] \dfrac{\partial q}{\partial \alpha_i} \end{bmatrix}, \qquad \frac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j} := \begin{bmatrix} \dfrac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j} \\[4pt] \dfrac{\partial^2 q}{\partial \alpha_i \partial \alpha_j} \end{bmatrix}.$$


Now, shift the sub-matrix $q$ $L$ columns to the right in $H$,

$$H := \begin{bmatrix} \begin{matrix} 0 & q \end{matrix} \\ r \end{bmatrix},$$

and shift the matrices $\partial q/\partial \alpha_i$ and $\partial^2 q/\partial \alpha_i \partial \alpha_j$ $L$ columns to the right,

$$\frac{\partial H}{\partial \alpha_i} := \begin{bmatrix} \begin{matrix} 0 & \dfrac{\partial q}{\partial \alpha_i} \end{matrix} \\[4pt] \dfrac{\partial r}{\partial \alpha_i} \end{bmatrix}, \qquad \frac{\partial^2 H}{\partial \alpha_i \partial \alpha_j} := \begin{bmatrix} \begin{matrix} 0 & \dfrac{\partial^2 q}{\partial \alpha_i \partial \alpha_j} \end{matrix} \\[4pt] \dfrac{\partial^2 r}{\partial \alpha_i \partial \alpha_j} \end{bmatrix}.$$

Repeat these four steps until $H_\theta$ is non-singular.
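The loop structure of these four steps is compact enough to sketch in code. The sketch below (Python/NumPy) tracks only the coefficient matrix $H$ and the auxiliary initial conditions $Q$; the derivative updates of steps 2-4 follow the same product-rule pattern using the singular-vector derivatives of section 4 and are omitted here. The tolerance, the iteration guard, and the placement of the annihilated rows are implementation choices, not part of the published procedure.

```python
import numpy as np

def auxiliary_initial_conditions(H, L, tol=1e-10, max_iter=100):
    """Section 3.2 at the level of values only: repeatedly annihilate the singular part
    of the right-most L x L block H_theta, record the resulting rows as auxiliary
    initial conditions, and shift them L columns to the right.
    H is L x L*(tau+1+theta), ordered [H_{-tau} ... H_theta]."""
    Q = np.zeros((0, H.shape[1] - L))            # auxiliary conditions exclude the lead block
    for _ in range(max_iter):
        H_theta = H[:, -L:]
        V, mu, _ = np.linalg.svd(H_theta)        # left singular vectors; mu sorted large-to-small
        if mu.min() > tol:
            return H, Q                          # H_theta non-singular: stop
        H = V.T @ H                              # step 2: annihilate L - rank(H_theta) rows
        k = int((mu > tol).sum())                # rank(H_theta); zero rows are the last L - k here
        q = H[k:, :-L]                           # step 3: rows with a (numerically) zero lead block
        Q = np.vstack([Q, q])                    # step 4: add them to the auxiliary conditions
        H[k:, :] = np.hstack([np.zeros((L - k, L)), q])   # shift q right by L columns
    raise RuntimeError("lead block never became non-singular within max_iter steps")
```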

3.3. Stability conditions

1. $H_\theta$ is non-singular. Solve for coefficients expressing $X_{t+\theta}$ in terms of $X_{t-\tau}, \ldots, X_{t+\theta-1}$,

$$F := -H_\theta^{-1}[H_{-\tau} \; \cdots \; H_{\theta-1}],$$

and update the derivative matrices

$$\frac{\partial F}{\partial \alpha_i} := -\left[\frac{\partial H_\theta^{-1}}{\partial \alpha_i}[H_{-\tau} \; \cdots \; H_{\theta-1}] + H_\theta^{-1}\frac{\partial [H_{-\tau} \; \cdots \; H_{\theta-1}]}{\partial \alpha_i}\right],$$

$$\frac{\partial^2 F}{\partial \alpha_i \partial \alpha_j} := -\left[\frac{\partial^2 H_\theta^{-1}}{\partial \alpha_i \partial \alpha_j}[H_{-\tau} \; \cdots \; H_{\theta-1}] + \frac{\partial H_\theta^{-1}}{\partial \alpha_i}\frac{\partial [H_{-\tau} \; \cdots \; H_{\theta-1}]}{\partial \alpha_j} + \frac{\partial H_\theta^{-1}}{\partial \alpha_j}\frac{\partial [H_{-\tau} \; \cdots \; H_{\theta-1}]}{\partial \alpha_i} + H_\theta^{-1}\frac{\partial^2 [H_{-\tau} \; \cdots \; H_{\theta-1}]}{\partial \alpha_i \partial \alpha_j}\right].$$


2. Construct the first-order state-space transition matrix and its derivatives,

$$A := \begin{bmatrix} \begin{matrix} 0 & I \end{matrix} \\ F \end{bmatrix}, \qquad \frac{\partial A}{\partial \alpha_i} := \begin{bmatrix} 0 \\[2pt] \dfrac{\partial F}{\partial \alpha_i} \end{bmatrix}, \qquad \frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j} := \begin{bmatrix} 0 \\[2pt] \dfrac{\partial^2 F}{\partial \alpha_i \partial \alpha_j} \end{bmatrix}.$$

3. Compute $W$, a matrix of row vectors spanning the left invariant subspace of $A$ associated with roots outside the open unit disk. We use the routine HQR3 described in Stewart (1976). The matrix $W$ contains the stability conditions we use to verify the saddle-point property [see Anderson and Moore (1983)]. Compute $\partial W/\partial \alpha_i$ and $\partial^2 W/\partial \alpha_i \partial \alpha_j$, the first and second derivatives of the matrix of vectors spanning the left invariant subspace of $A$ associated with roots outside the open unit disk. Section 4 presents a formula for these derivative calculations.
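The paper computes $W$ with Stewart's HQR3 routine. As an illustration of the same object with more recent tools, the sketch below uses SciPy's ordered real Schur decomposition instead; the function name, the example transition matrix, and the 'iuc' ordering are choices made here, and roots exactly on the unit circle are assumed absent (the saddle-point case).

```python
import numpy as np
from scipy.linalg import schur

def stability_conditions(A):
    """Rows spanning the left invariant subspace of A associated with roots outside
    the open unit disk (the matrix W of step 3).  Uses an ordered Schur decomposition
    in place of HQR3: eigenvalues inside the unit circle are sorted to the leading block."""
    T, X, k_stable = schur(A, sort='iuc')    # A = X T X', stable roots first
    W = X[:, k_stable:].T                    # trailing Schur vectors: W A = E22 W
    return W

# Example: a companion-style transition matrix with one explosive root.
F = np.array([[0.2, 0.1, 0.0, 1.3]])
A = np.vstack([np.hstack([np.zeros((3, 1)), np.eye(3)]), F])
W = stability_conditions(A)
print(np.allclose(W @ A, (W @ A @ W.T) @ W))   # True: W spans a left invariant subspace
```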

3.4. Reduced form

1. Concatenate the auxiliary initial conditions with the stability conditions,

$$Q := \begin{bmatrix} Q \\ W \end{bmatrix}.$$

Concatenate the derivatives of the auxiliary initial conditions with the derivatives of the stability conditions,

$$\frac{\partial Q}{\partial \alpha_i} := \begin{bmatrix} \dfrac{\partial Q}{\partial \alpha_i} \\[4pt] \dfrac{\partial W}{\partial \alpha_i} \end{bmatrix}, \qquad \frac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j} := \begin{bmatrix} \dfrac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j} \\[4pt] \dfrac{\partial^2 W}{\partial \alpha_i \partial \alpha_j} \end{bmatrix}.$$


2. Partition $Q$,

$$[Q_L \;\; Q_R] := Q,$$

where $Q_L$ has $L\tau$ columns and $Q_R$ has $L\theta$ columns. Partition $\partial Q/\partial \alpha_i$ and $\partial^2 Q/\partial \alpha_i \partial \alpha_j$,

$$\left[\frac{\partial Q_L}{\partial \alpha_i} \;\; \frac{\partial Q_R}{\partial \alpha_i}\right] := \frac{\partial Q}{\partial \alpha_i}, \qquad \left[\frac{\partial^2 Q_L}{\partial \alpha_i \partial \alpha_j} \;\; \frac{\partial^2 Q_R}{\partial \alpha_i \partial \alpha_j}\right] := \frac{\partial^2 Q}{\partial \alpha_i \partial \alpha_j}.$$

3. Let $n$ be the number of rows in $Q$.

(a) If $n < L\theta$, then Assumption 2 is violated; there are many solutions converging to the origin for any initial condition. Stop.

(b) If $n > L\theta$, or $n = L\theta$ and $Q_R$ is singular, then Assumption 2 is violated; there exist initial conditions for which there are no solutions converging to the steady state. Stop.

(c) If $n = L\theta$ and $Q_R$ is non-singular, then set

$$[B_{-\tau} \; \cdots \; B_{-1}] := \text{the first } L \text{ rows of } -Q_R^{-1} Q_L,$$

and

$$X_t = \sum_{i=-\tau}^{-1} B_i X_{t+i}, \qquad t \ge 0,$$

is the unique solution converging to the steady state for any initial conditions. Set

$$\frac{\partial B}{\partial \alpha_i} := \text{the first } L \text{ rows of } -Q_R^{-1}\left[\frac{\partial Q_L}{\partial \alpha_i} - \frac{\partial Q_R}{\partial \alpha_i} Q_R^{-1} Q_L\right],$$

$$\frac{\partial^2 B}{\partial \alpha_i \partial \alpha_j} := \text{the first } L \text{ rows of } -\left[\frac{\partial^2 Q_R^{-1}}{\partial \alpha_i \partial \alpha_j} Q_L + \frac{\partial Q_R^{-1}}{\partial \alpha_i}\frac{\partial Q_L}{\partial \alpha_j} + \frac{\partial Q_R^{-1}}{\partial \alpha_j}\frac{\partial Q_L}{\partial \alpha_i} + Q_R^{-1}\frac{\partial^2 Q_L}{\partial \alpha_i \partial \alpha_j}\right].$$

Stop. End.
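Step 3(c) translates directly into a few lines of linear algebra. The sketch below (Python/NumPy) computes $B$ and its first derivative from $Q$ and $\partial Q/\partial \alpha_i$; the second derivative follows the same pattern. The function name and the use of a linear solve in place of an explicit inverse are choices made here.

```python
import numpy as np

def reduced_form_and_derivative(Q, dQ, L, tau, theta):
    """B = first L rows of -QR^{-1} QL, and dB/d(alpha_i), given Q and dQ/d(alpha_i).
    Assumes Q has L*theta rows and that QR (the last L*theta columns) is non-singular."""
    QL, QR = Q[:, :L * tau], Q[:, L * tau:]
    dQL, dQR = dQ[:, :L * tau], dQ[:, L * tau:]

    B_full = -np.linalg.solve(QR, QL)                    # -QR^{-1} QL
    # dB = -QR^{-1} [dQL - dQR QR^{-1} QL] = -QR^{-1} [dQL + dQR B_full]
    dB_full = -np.linalg.solve(QR, dQL + dQR @ B_full)
    return B_full[:L, :], dB_full[:L, :]
```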


4. Formulae for computing first and second derivatives of singular vectors and invariant space vectors

Step 2 in section 3.2 requires the computation of first and second derivatives for the left singular vectors of a matrix. Step 3 in section 3.3 requires a formula for the derivative of a set of vectors spanning the left invariant space of a matrix. This section provides formulae for obtaining the first and second derivatives. The appendix derives these formulae using a technique outlined in Stewart (1973). All the matrices needed to compute these derivatives are easily obtainable from standard linear algebra packages such as NAG or IMSL.²

4.1. Differentiation formulae for left and right singular vectors

The left and right singular vectors of a real matrix A are orthonormal matrices Y and X such that

$$Y^H A X = \begin{bmatrix} \Lambda & 0 \\ 0 & 0 \end{bmatrix},$$

where $\Lambda$ is a diagonal ($k$ by $k$) matrix with the non-zero singular values of $A$ along the diagonal and the superscript $H$ denotes the Hermitian transpose [see Noble (1969, p. 338)]. To determine the impact of changes in parameters on the calculation of the singular vectors one can adopt the strategy outlined in Stewart (1973). This analysis yields a useful formula for the derivative of the left and right singular vectors with respect to changes in the matrix $A$,

$$\frac{\partial X}{\partial \alpha_i} = X\begin{bmatrix} 0 & -\Phi_i^H \\ \Phi_i & 0 \end{bmatrix}, \qquad \frac{\partial Y}{\partial \alpha_i} = Y\begin{bmatrix} 0 & -\Theta_i^H \\ \Theta_i & 0 \end{bmatrix},$$

where, partitioning $X = [X_1 \; X_2]$ and $Y = [Y_1 \; Y_2]$ conformably with the non-zero and zero singular values,

$$\Phi_i = X_2^H \left(\frac{\partial A}{\partial \alpha_i}\right)^{\!H} Y_1 \Lambda^{-1}, \qquad \Theta_i = Y_2^H \frac{\partial A}{\partial \alpha_i} X_1 \Lambda^{-1},$$

²The expressions for the derivatives involve Kronecker products, matrix inverses, and Hermitian transposes of the left and right singular vectors, and the vectors spanning an invariant space.


and

$$\frac{\partial^2 X}{\partial \alpha_i \partial \alpha_j} = X\begin{bmatrix} -\tfrac{1}{2}\left(\Phi_i^H \Phi_j + \Phi_j^H \Phi_i\right) & -(\Phi^2)^H \\ \Phi^2 & -\tfrac{1}{2}\left(\Phi_i \Phi_j^H + \Phi_j \Phi_i^H\right) \end{bmatrix},$$

$$\frac{\partial^2 Y}{\partial \alpha_i \partial \alpha_j} = Y\begin{bmatrix} -\tfrac{1}{2}\left(\Theta_i^H \Theta_j + \Theta_j^H \Theta_i\right) & -(\Theta^2)^H \\ \Theta^2 & -\tfrac{1}{2}\left(\Theta_i \Theta_j^H + \Theta_j \Theta_i^H\right) \end{bmatrix},$$

where

$$\Phi^2 = X_2^H \left(\frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j}\right)^{\!H} Y_1 \Lambda^{-1}, \qquad \Theta^2 = Y_2^H \frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j} X_1 \Lambda^{-1}.$$

Consequently, computing the first and second derivatives of the left or right singular vectors of a matrix requires little additional computational effort. Singular value decomposition routines can return both left and right singular vectors at little added computational cost. Given the matrix of first or second derivatives of the matrix $A$, simple matrix multiplication produces the required $\Phi$ and $\Theta$ matrices.
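As a check on these formulae, the sketch below implements the first-derivative case numerically. It assumes the rank of $A$ is known and unchanged by the perturbation (the $\partial s/\partial \alpha_i = 0$ case of section 3.2), uses real transposes since everything is real, and tracks only the rotation between the non-zero-singular-value block and the null-space block; the finite-difference comparison at the end is an illustration added here, not part of the paper.

```python
import numpy as np

def singular_vector_derivatives(A, dA, k):
    """First derivatives of the left (Y) and right (X) singular vectors of A with
    respect to a scalar parameter, given dA and the constant rank k.  Requires
    Y2' dA X2 = 0, i.e. the zero singular values stay zero to first order."""
    Y, mu, XT = np.linalg.svd(A)            # A = Y diag(mu) X', mu sorted large-to-small
    X = XT.T
    Y1, Y2, X1, X2 = Y[:, :k], Y[:, k:], X[:, :k], X[:, k:]
    lam_inv = 1.0 / mu[:k]

    Phi = (X2.T @ dA.T @ Y1) * lam_inv      # rotation of the right singular vectors
    Theta = (Y2.T @ dA @ X1) * lam_inv      # rotation of the left singular vectors
    dX = np.hstack([X2 @ Phi, -X1 @ Phi.T])     # dX = X [[0, -Phi'], [Phi, 0]]
    dY = np.hstack([Y2 @ Theta, -Y1 @ Theta.T])
    return Y, X, dY, dX

# Finite-difference check on a random rank-3 example.
rng = np.random.default_rng(0)
n, k = 5, 3
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
Y0, _, XT0 = np.linalg.svd(A)
dA = rng.standard_normal((n, n))
dA -= Y0[:, k:] @ (Y0[:, k:].T @ dA @ XT0.T[:, k:]) @ XT0.T[:, k:].T   # enforce Y2' dA X2 = 0
Y, X, dY, dX = singular_vector_derivatives(A, dA, k)
eps = 1e-6
M = (Y + eps * dY).T @ (A + eps * dA) @ (X + eps * dX)
print(np.linalg.norm(M[k:, :k]), np.linalg.norm(M[:k, k:]), np.linalg.norm(M[k:, k:]))
# each off-range block is O(eps**2): the rotated vectors keep Y' A X block-diagonal to first order
```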

4.2. Differentiation formula for invariant space vectors

For a given matrix $A$, the HQR3 procedure produces a unitary matrix $X$ such that $X^H A X$ is quasi-triangular [see Stewart (1976)]. In particular,

$$X^H A X = \begin{bmatrix} X_1^H \\ X_2^H \end{bmatrix} A \,[X_1 \; X_2] = \begin{bmatrix} E_{11} & E_{12} \\ 0 & E_{22} \end{bmatrix}.$$

Here, for our purposes, the $k_2$ column vectors comprising $X_2$ span the left invariant space of $A$ associated with eigenvalues greater than or equal to one in absolute value, and the $k_1$ column vectors comprising $X_1$ span the right invariant space associated with eigenvalues less than one. One can apply Stewart's technique to compute the derivative of the matrix $X$. The analysis produces

$$\frac{\partial X}{\partial \alpha_i} = X\begin{bmatrix} 0 & -\Pi_i^H \\ \Pi_i & 0 \end{bmatrix},$$

where

$$\operatorname{vec}(\Pi_i) = \big((E_{11}^T \otimes I) - (I \otimes E_{22})\big)^{-1} \operatorname{vec}\!\left(X_2^H \frac{\partial A}{\partial \alpha_i} X_1\right),$$

and $\operatorname{vec}(\cdot)$ stacks the columns of its matrix argument,

$$\operatorname{vec}\begin{bmatrix} \pi_{11} & \cdots & \pi_{1 k_1} \\ \vdots & & \vdots \\ \pi_{k_2 1} & \cdots & \pi_{k_2 k_1} \end{bmatrix} = \big(\pi_{11}, \ldots, \pi_{k_2 1}, \pi_{12}, \ldots, \pi_{k_2 2}, \ldots, \pi_{k_2 k_1}\big)^T.$$

And similarly for the second derivative,

$$\frac{\partial^2 X}{\partial \alpha_i \partial \alpha_j} = X\begin{bmatrix} -\tfrac{1}{2}\left(\Pi_i^H \Pi_j + \Pi_j^H \Pi_i\right) & -(\Pi^2)^H \\ \Pi^2 & -\tfrac{1}{2}\left(\Pi_i \Pi_j^H + \Pi_j \Pi_i^H\right) \end{bmatrix},$$

where

$$\operatorname{vec}(\Pi^2) = \big((E_{11}^T \otimes I) - (I \otimes E_{22})\big)^{-1} \operatorname{vec}\!\left(X_2^H \frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j} X_1\right).$$
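The vec formula above is easy to exercise numerically. The sketch below (Python/NumPy/SciPy) builds the ordered Schur factors with scipy.linalg.schur, solves the Kronecker system for $\Pi_i$, and checks the result against a small finite-difference perturbation; the test matrix, the 'iuc' ordering, and the function names are illustrative choices, and eigenvalues on the unit circle are assumed absent.

```python
import numpy as np
from scipy.linalg import schur

def invariant_space_derivative(A, dA):
    """First derivative of the ordered Schur basis X = [X1 X2] with respect to a scalar
    parameter, via vec(Pi) = ((E11' kron I) - (I kron E22))^{-1} vec(X2' dA X1).
    Also returns the derivative of W = X2' (the stability conditions)."""
    E, X, k1 = schur(A, sort='iuc')                 # eigenvalues inside the unit circle first
    k2 = A.shape[0] - k1
    E11, E22 = E[:k1, :k1], E[k1:, k1:]
    X1, X2 = X[:, :k1], X[:, k1:]

    K = np.kron(E11.T, np.eye(k2)) - np.kron(np.eye(k1), E22)
    rhs = (X2.T @ dA @ X1).ravel(order='F')         # column-stacking vec
    Pi = np.linalg.solve(K, rhs).reshape((k2, k1), order='F')

    dX = np.hstack([X2 @ Pi, -X1 @ Pi.T])           # dX = X [[0, -Pi'], [Pi, 0]]
    dW = -Pi @ X1.T                                 # dW = d(X2') = (-X1 Pi')'
    return X, dX, dW, k1

# Finite-difference check on a matrix with three stable and three unstable eigenvalues.
rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6))
A = S @ np.diag([0.3, 0.5, -0.7, 1.4, -1.8, 2.2]) @ np.linalg.inv(S)
dA = rng.standard_normal((6, 6))
X, dX, dW, k1 = invariant_space_derivative(A, dA)
eps = 1e-6
M = (X + eps * dX).T @ (A + eps * dA) @ (X + eps * dX)
print(np.linalg.norm(M[k1:, :k1]))                  # the (2,1) block stays zero to O(eps**2)
```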

5. Conclusions

This paper shows how to augment the Anderson-Moore procedure for computing solutions of linear perfect-foresight models so that the procedure can compute the first and second derivatives of the reduced form with respect to the structural-form parameters. The computations described here are less burdensome than perturbing the structural parameters and recomputing the reduced form. The gains in efficiency and accuracy available with this method are even more dramatic when compared with numerical second derivatives of the reduced-form coefficients. The formulae for computing the derivatives of the reduced form are not burdensome because they require the solution of


linear equations involving matrices already computed to determine the reduced form. Since computing perfect-foresight solutions is an important part of estimating stochastic rational-expectations models, the availability of first and second derivatives of the reduced form should prove useful for identification and estimation of these models.

Appendix A: Differentiation of left and right singular spaces

The left and right singular vectors of a matrix $A$ are unitary matrices $Y$ and $X$ such that $Y^H A X$ is diagonal, with the singular values of $A$ along the diagonal. To determine the impact of changes in parameters on the calculation of the singular vectors one can adopt the strategy outlined in Stewart (1973).

Consider the matrix function of $\alpha_i$:

$$Y^H(\alpha_i)\, A(\alpha_i)\, X(\alpha_i) = \begin{bmatrix} \Lambda(\alpha_i) & 0 \\ 0 & 0 \end{bmatrix}, \tag{A.1}$$

with $Y^H(\alpha_i)$ and $X(\alpha_i)$ the left and right singular vectors of $A(\alpha_i)$, and $\Lambda(\alpha_i)$ the non-zero singular values of $A(\alpha_i)$. Here, the matrix $A(\alpha_i)$ is $n \times n$, the matrix $\Lambda(\alpha_i)$ is $k \times k$, and $k$ is the rank of the matrix $A(0)$. Now define $A(\alpha_i)$ by

$$A(\alpha_i) = A + \alpha_i \frac{\partial A}{\partial \alpha_i}. \tag{A.2}$$

One can find unitary matrices $U(\alpha_i)$ and $T(\alpha_i)$ differing only slightly from the identity matrix such that

$$X(\alpha_i) = X(0)\, U(\alpha_i) \quad\text{and}\quad Y(\alpha_i) = Y(0)\, T(\alpha_i). \tag{A.3}$$

To do this assume $U$ and $T$ are of the form

$$U = \begin{bmatrix} I_k & -P^H \\ P & I_{n-k} \end{bmatrix}\begin{bmatrix} (I_k + P^H P)^{-1/2} & 0 \\ 0 & (I_{n-k} + P P^H)^{-1/2} \end{bmatrix}, \tag{A.4}$$

$$T = \begin{bmatrix} I_k & -S^H \\ S & I_{n-k} \end{bmatrix}\begin{bmatrix} (I_k + S^H S)^{-1/2} & 0 \\ 0 & (I_{n-k} + S S^H)^{-1/2} \end{bmatrix}. \tag{A.5}$$


Now both $P$ and $S$ approach zero as $\alpha_i$ approaches zero, so that $\partial (I + PP^H)^{-1/2}/\partial \alpha_i$, $\partial (I + P^H P)^{-1/2}/\partial \alpha_i$, $\partial (I + SS^H)^{-1/2}/\partial \alpha_i$, and $\partial (I + S^H S)^{-1/2}/\partial \alpha_i$ all approach zero as $\alpha_i$ approaches zero.

Consequently, differentiating eqs. (A.3) and evaluating them at $\alpha_i = 0$ produces expressions for $\partial X/\partial \alpha_i$ and $\partial Y/\partial \alpha_i$ which depend only on $\partial P/\partial \alpha_i$ and $\partial S/\partial \alpha_i$ evaluated at $\alpha_i = 0$. To compute $\partial P/\partial \alpha_i$ and $\partial S/\partial \alpha_i$, construct the four matrices $C_{11}$, $C_{12}$, $C_{21}$ and $C_{22}$ from the relation

$$\begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} = \begin{bmatrix} Y_1^H \\ Y_2^H \end{bmatrix} \frac{\partial A}{\partial \alpha_i} \,[X_1 \; X_2],$$

where $C_{11}$ is a $k \times k$ matrix, $C_{12}$ is a $k \times (n-k)$ matrix, $C_{21}$ is an $(n-k) \times k$ matrix, and $C_{22}$ is an $(n-k) \times (n-k)$ matrix. Substituting relations (A.2)-(A.5) into eq. (A.1) and simplifying the expressions for the off-diagonal matrices produces

$$S[\Lambda + \alpha_i C_{11}] - \alpha_i C_{22} P = \alpha_i [C_{21} - S C_{12} P], \tag{A.6}$$

$$P[\Lambda + \alpha_i C_{11}^H] - \alpha_i C_{22}^H S = \alpha_i [C_{12}^H - P C_{21}^H S]. \tag{A.7}$$

Differentiating eqs. (A.6) and (A.7) with respect to $\alpha_i$ and evaluating the resulting expressions at $\alpha_i = 0$ produces

$$\frac{\partial P}{\partial \alpha_i}(0) = C_{12}^H \Lambda^{-1}, \qquad \frac{\partial S}{\partial \alpha_i}(0) = C_{21} \Lambda^{-1}.$$

The derivation for second derivatives begins with

$$A(\alpha_i, \alpha_j) = A + \alpha_i \alpha_j \frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j},$$

and proceeds in a parallel fashion.

Appendix B: Differentiation of invariant space vectors

For a given matrix $A$, the HQR3 procedure produces a unitary matrix $X$ such that $X^H A X$ is quasi-triangular. In particular,

$$X^H A X = \begin{bmatrix} X_1^H \\ X_2^H \end{bmatrix} A \,[X_1 \; X_2] = \begin{bmatrix} E_{11} & E_{12} \\ 0 & E_{22} \end{bmatrix}.$$


In the present application, $X_2$ spans the left invariant space of $A$ associated with eigenvalues greater than one in absolute value. In addition, $X_1$ spans the right invariant space associated with eigenvalues less than or equal to one. One can apply Stewart's technique to compute the derivative of the matrix $X$. In this case, consider

$$X(\alpha_i)^H A(\alpha_i) X(\alpha_i) = \begin{bmatrix} X_1^H(\alpha_i) \\ X_2^H(\alpha_i) \end{bmatrix} A(\alpha_i)\, [X_1(\alpha_i) \; X_2(\alpha_i)] = \begin{bmatrix} E_{11}(\alpha_i) & E_{12}(\alpha_i) \\ 0 & E_{22}(\alpha_i) \end{bmatrix},$$

where $X(\alpha_i) = X U(\alpha_i)$ and $A(\alpha_i) = A + \alpha_i (\partial A/\partial \alpha_i)$. The equation analogous to eqs. (A.6) and (A.7) becomes

$$\frac{\partial P}{\partial \alpha_i}(0)\, E_{11} - E_{22}\, \frac{\partial P}{\partial \alpha_i}(0) = X_2^H \frac{\partial A}{\partial \alpha_i} X_1 = C_{21},$$

which one can solve using the relation $\operatorname{vec}(Z_1 Z_2 Z_3) = (Z_3^T \otimes Z_1)\operatorname{vec}(Z_2)$, so that

$$\frac{\partial X}{\partial \alpha_i} = X\begin{bmatrix} 0 & -\dfrac{\partial P}{\partial \alpha_i}(0)^H \\[4pt] \dfrac{\partial P}{\partial \alpha_i}(0) & 0 \end{bmatrix},$$

where

$$\operatorname{vec}\!\left(\frac{\partial P}{\partial \alpha_i}(0)\right) = \big((E_{11}^T \otimes I) - (I \otimes E_{22})\big)^{-1} \operatorname{vec}\!\left(X_2^H \frac{\partial A}{\partial \alpha_i} X_1\right).$$

The derivation for second derivatives begins with

$$A(\alpha_i, \alpha_j) = A + \alpha_i \alpha_j \frac{\partial^2 A}{\partial \alpha_i \partial \alpha_j}.$$

Appendix C: Evaluating $\partial (I + PP^H)^{-1/2}/\partial \alpha_i$, $\partial U/\partial \alpha_i$, and $\partial^2 U/\partial \alpha_i \partial \alpha_j$

Let

$$Z = (I + PP^H)^{-1/2};$$

then

$$ZZ = (I + PP^H)^{-1},$$

and it follows that

$$\frac{\partial (ZZ)}{\partial \alpha_i} = \frac{\partial Z}{\partial \alpha_i} Z + Z \frac{\partial Z}{\partial \alpha_i} = \frac{\partial (I + PP^H)^{-1}}{\partial \alpha_i} = -(I + PP^H)^{-1}\, \frac{\partial (I + PP^H)}{\partial \alpha_i}\, (I + PP^H)^{-1},$$

and

$$\lim_{\alpha_i \to 0}\left(\frac{\partial Z}{\partial \alpha_i} Z + Z \frac{\partial Z}{\partial \alpha_i}\right) = \lim_{\alpha_i \to 0}\left(-(I + PP^H)^{-1}\, \frac{\partial (I + PP^H)}{\partial \alpha_i}\, (I + PP^H)^{-1}\right).$$

Now, since $\lim_{\alpha_i \to 0} Z = I$ and $\lim_{\alpha_i \to 0} P = 0$,

$$\lim_{\alpha_i \to 0}\frac{\partial Z}{\partial \alpha_i} = -\tfrac{1}{2}\lim_{\alpha_i \to 0}\left(\frac{\partial P}{\partial \alpha_i} P^H + P \frac{\partial P^H}{\partial \alpha_i}\right) = 0.$$

Now,

$$\frac{\partial^2 (ZZ)}{\partial \alpha_i \partial \alpha_j} = \frac{\partial^2 Z}{\partial \alpha_i \partial \alpha_j} Z + \frac{\partial Z}{\partial \alpha_i}\frac{\partial Z}{\partial \alpha_j} + \frac{\partial Z}{\partial \alpha_j}\frac{\partial Z}{\partial \alpha_i} + Z \frac{\partial^2 Z}{\partial \alpha_i \partial \alpha_j},$$

and

$$\frac{\partial^2 (I + PP^H)^{-1}}{\partial \alpha_i \partial \alpha_j} = (I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_i}(I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_j}(I + PP^H)^{-1}$$
$$\qquad - (I + PP^H)^{-1}\frac{\partial^2 (I + PP^H)}{\partial \alpha_i \partial \alpha_j}(I + PP^H)^{-1}$$
$$\qquad + (I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_j}(I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_i}(I + PP^H)^{-1},$$

so that

$$\lim_{\alpha \to 0}\left(\frac{\partial^2 Z}{\partial \alpha_i \partial \alpha_j} Z + \frac{\partial Z}{\partial \alpha_i}\frac{\partial Z}{\partial \alpha_j} + \frac{\partial Z}{\partial \alpha_j}\frac{\partial Z}{\partial \alpha_i} + Z \frac{\partial^2 Z}{\partial \alpha_i \partial \alpha_j}\right)$$
$$= \lim_{\alpha \to 0}\Bigg( (I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_i}(I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_j}(I + PP^H)^{-1}$$
$$\qquad - (I + PP^H)^{-1}\frac{\partial^2 (I + PP^H)}{\partial \alpha_i \partial \alpha_j}(I + PP^H)^{-1}$$
$$\qquad + (I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_j}(I + PP^H)^{-1}\frac{\partial (I + PP^H)}{\partial \alpha_i}(I + PP^H)^{-1}\Bigg).$$

Now, since $\lim_{\alpha \to 0} Z = I$, $\lim_{\alpha \to 0} \partial Z/\partial \alpha_i = 0$, and $\lim_{\alpha \to 0} P = 0$,

$$\lim_{\alpha \to 0}\frac{\partial^2 Z}{\partial \alpha_i \partial \alpha_j} = -\tfrac{1}{2}\lim_{\alpha \to 0}\left(\frac{\partial^2 P}{\partial \alpha_i \partial \alpha_j} P^H + \frac{\partial P}{\partial \alpha_i}\frac{\partial P^H}{\partial \alpha_j} + \frac{\partial P}{\partial \alpha_j}\frac{\partial P^H}{\partial \alpha_i} + P\frac{\partial^2 P^H}{\partial \alpha_i \partial \alpha_j}\right)$$
$$= -\tfrac{1}{2}\left(\frac{\partial P}{\partial \alpha_i}\frac{\partial P^H}{\partial \alpha_j} + \frac{\partial P}{\partial \alpha_j}\frac{\partial P^H}{\partial \alpha_i}\right).$$

The derivations for $\partial (I + P^H P)^{-1/2}/\partial \alpha_i$, $\partial (I + SS^H)^{-1/2}/\partial \alpha_i$, $\partial (I + S^H S)^{-1/2}/\partial \alpha_i$, $\partial^2 (I + P^H P)^{-1/2}/\partial \alpha_i \partial \alpha_j$, $\partial^2 (I + SS^H)^{-1/2}/\partial \alpha_i \partial \alpha_j$, and $\partial^2 (I + S^H S)^{-1/2}/\partial \alpha_i \partial \alpha_j$ are almost identical. $U$ is of the form

$$U = U_1 U_2 = \begin{bmatrix} I_k & -P^H \\ P & I_{n-k} \end{bmatrix}\begin{bmatrix} (I_k + P^H P)^{-1/2} & 0 \\ 0 & (I_{n-k} + PP^H)^{-1/2} \end{bmatrix}.$$

Since

$$\lim_{\alpha_i \to 0} U_1 = I, \qquad \lim_{\alpha_i \to 0} U_2 = I, \qquad \lim_{\alpha_i \to 0}\frac{\partial U_2}{\partial \alpha_i} = 0, \qquad \frac{\partial U}{\partial \alpha_i} = \frac{\partial U_1}{\partial \alpha_i} U_2 + U_1 \frac{\partial U_2}{\partial \alpha_i},$$

it follows that

$$\lim_{\alpha_i \to 0}\frac{\partial U}{\partial \alpha_i} = \lim_{\alpha_i \to 0}\frac{\partial U_1}{\partial \alpha_i} + \lim_{\alpha_i \to 0}\frac{\partial U_2}{\partial \alpha_i} = \lim_{\alpha_i \to 0}\frac{\partial U_1}{\partial \alpha_i}.$$


In addition, since

$$\frac{\partial^2 U}{\partial \alpha_i \partial \alpha_j} = \frac{\partial^2 U_1}{\partial \alpha_i \partial \alpha_j} U_2 + \frac{\partial U_1}{\partial \alpha_i}\frac{\partial U_2}{\partial \alpha_j} + \frac{\partial U_1}{\partial \alpha_j}\frac{\partial U_2}{\partial \alpha_i} + U_1 \frac{\partial^2 U_2}{\partial \alpha_i \partial \alpha_j},$$

it follows that

$$\lim_{\alpha \to 0}\frac{\partial^2 U}{\partial \alpha_i \partial \alpha_j} = \lim_{\alpha \to 0}\frac{\partial^2 U_1}{\partial \alpha_i \partial \alpha_j} + \lim_{\alpha \to 0}\frac{\partial^2 U_2}{\partial \alpha_i \partial \alpha_j}.$$

The derivations for $\partial T/\partial \alpha_i$ and $\partial^2 T/\partial \alpha_i \partial \alpha_j$ are identical.


References

Anderson, Gary and George Moore, 1983, An efficient procedure for solving linear perfect foresight models, Unpublished manuscript (Board of Governors of the Federal Reserve System, Washington, DC).

Anderson, Gary and George Moore, 1985, A linear algebraic procedure for solving linear perfect foresight models, Economics Letters 17, 247-252.

Noble, Ben, 1969, Applied linear algebra (Prentice-Hall, Englewood Cliffs, NJ).

Stewart, G.W., 1973, Error and perturbation bounds for subspaces associated with certain eigenvalue problems, SIAM Review 15, 727-764.

Stewart, G.W., 1976, Algorithm 506: HQR3 and EXCHNG: Fortran subroutines for calculating and ordering the eigenvalues of a real upper Hessenberg matrix, ACM Transactions on Mathematical Software 2, 275-280.