Bayesian methods for VAR Models Fabio Canova EUI and CEPR December 2012

Source: apps.eui.eu/Personal/Canova/Teachingmaterial/Bvar_korea...


Page 1:

Bayesian methods for VAR Models

Fabio Canova

EUI and CEPR

December 2012

Page 2:

Outline

• Introduction.

• Likelihood function for an M variable VAR(q).

• Priors for VARs (Diffuse, Minnesota (Litterman), General, Hierarchical, DSGE based).

• Forecasting with BVARs.

• Structural (overidentified) BVAR.

• Univariate dynamic panels; endogenous grouping, partial pooling of VARs.

Page 3:

References

Lutkepohl, H. (1991), Introduction to Multiple Time Series Analysis, Springer-Verlag.

Ballabriga, F. (1997) "Bayesian Vector Autoregressions", ESADE, manuscript.

Canova, F. (1992), "An Alternative Approach to Modelling and Forecasting Seasonal Time Series", Journal of Business and Economic Statistics, 10, 97-108.

Canova, F. (1993a), "Forecasting Time Series with Common Seasonal Patterns", Journal of Econometrics, 55, 173-200.

Canova, F. (2004), "Testing for Convergence Clubs in Income per Capita: A Predictive Density Approach", International Economic Review, 45(1), 49-77.

Del Negro, M. and Schorfheide, F. (2004), "Priors from General Equilibrium Models for VARs", International Economic Review, 45, 643-673.

Page 4: Bayesian methods for VAR Modelsapps.eui.eu/Personal/Canova/Teachingmaterial/Bvar_korea...The conditional likelihood of a VAR(q) is the product of Normal density for conditional on

Del Negro, M. and Schorfheide, F. (2012), "Bayesian Macroeconometrics", in J. Geweke, G. Koop, H. Van Dijk (eds.), The Oxford Handbook of Bayesian Econometrics, Oxford University Press.

Favero, C. (2001) Econometrics, Oxford University Press.

Giannone, D., Primiceri, G. and Lenza, M. (2012), "Prior Selection for Vector Autoregressions", Northwestern University, manuscript.

Gilchrist, S. and Gertler, M. (1994), "Monetary Policy, Business Cycles and the Behavior of Small Manufacturing Firms", Quarterly Journal of Economics, 109, 309-340.

Kadiyala, R. and Karlsson, S. (1997), "Numerical Methods for Estimation and Inference in Bayesian VAR Models", Journal of Applied Econometrics, 12, 99-132.

Kilian, L. (2011), "Structural Vector Autoregressions", University of Michigan, manuscript.

Page 5:

Ingram, B. and Whiteman, C. (1994), "Supplanting the Minnesota Prior: Forecasting Macroeconomic Time Series Using Real Business Cycle Priors", Journal of Monetary Economics, 34, 497-510.

Lindley, D.V. and Smith, A.F.M. (1972), "Bayes Estimates for the Linear Model", Journal of the Royal Statistical Society, Series B, 34, 1-18.

Robertson, J. and Tallman, E. (1999), "Vector Autoregressions: Forecasting and Reality", Federal Reserve Bank of Atlanta, Economic Review, First Quarter, 4-18.

Sims, C. and Zha, T. (1998), "Bayesian Methods for Dynamic Multivariate Models", International Economic Review, 39, 949-968.

Waggoner, D. and Zha, T. (2003), "A Gibbs Simulator for Restricted VAR Models", Journal of Economic Dynamics and Control, 26, 349-366.

Zellner, A. and Hong, C. (1989), "Forecasting International Growth Rates Using Bayesian Shrinkage and Other Procedures", Journal of Econometrics, 40, 183-202.

Zha, T. (1999), "Block Recursion and Structural Vector Autoregressions", Journal of Econometrics, 90, 291-316.

Page 6:

1 Why BVAR?

- VARs have lots of parameters to be estimated. If they are used for forecasting, their performance is poor.

- Even if they are used for structural analysis, parameter uncertainty is a concern.

- Impossible to incorporate prior views of the client into classical analysis.

- BVARs are a flexible way to incorporate extraneous (client) information. They can also help to reduce the dimensionality of the parameter space.

Page 7:

2 Some useful results

Matrix Algebra

1) A_{m×n} ⊗ B_{p×q} =

[ a_{11}B  a_{12}B  ...  a_{1n}B
  ...      ...      ...  ...
  a_{m1}B  a_{m2}B  ...  a_{mn}B ]

2) vec(A)' = [a_{11}, ..., a_{m1}, a_{12}, ..., a_{m2}, ..., a_{1n}, ..., a_{mn}] (the columns of A stacked into one vector).

3) vec(A')'vec(B) = tr(AB) = tr(BA) = vec(B')'vec(A).

4) vec(ABC) = (C' ⊗ A) vec(B).
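As a quick numerical sanity check (a sketch added here, not part of the original slides), identities 3) and 4) can be verified with numpy on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))   # conformable so that AB and BA are square
C = rng.standard_normal((3, 2))

def vec(M):
    # stack the columns of M into one long vector
    return M.reshape(-1, order="F")

# 3) vec(A')'vec(B) = tr(AB) = tr(BA)
lhs3 = vec(A.T) @ vec(B)
assert np.isclose(lhs3, np.trace(A @ B))
assert np.isclose(lhs3, np.trace(B @ A))

# 4) vec(ABC) = (C' kron A) vec(B)
lhs4 = vec(A @ B @ C)
rhs4 = np.kron(C.T, A) @ vec(B)
assert np.allclose(lhs4, rhs4)
```

These identities are the workhorses behind the Kronecker manipulations of the VAR likelihood in the next sections.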

Page 8:

5)

tr(ABC) = vec(A')'(C' ⊗ I) vec(B)
        = vec(A')'(I ⊗ B) vec(C)
        = vec(B')'(A' ⊗ I) vec(C)
        = vec(B')'(I ⊗ C) vec(A)
        = vec(C')'(B' ⊗ I) vec(A)
        = vec(C')'(I ⊗ A) vec(B)

Page 9:

Distributions

1) Multivariate normal: x (M×1) ~ N(μ, Σ)

p(x) = (2π)^{-0.5M} |Σ|^{-0.5} exp{-0.5 (x - μ)'Σ^{-1}(x - μ)}   (1)

2) Multivariate t: x (M×1) ~ t_ν(μ, Σ)

p(x) = [Γ(0.5(ν + M)) / (Γ(0.5ν)(νπ)^{0.5M})] |Σ|^{-0.5} [1 + ν^{-1}(x - μ)'Σ^{-1}(x - μ)]^{-0.5(ν + M)}   (2)

3) Inverse Wishart: A (M×M) ~ iW(S^{-1}, ν)

p(A) = [2^{0.5νM} π^{0.25M(M-1)} ∏_{i=1}^{M} Γ(0.5(ν + 1 - i))]^{-1} |S|^{0.5ν} |A|^{-0.5(ν + M + 1)} exp{-0.5 tr(SA^{-1})}   (3)

Page 10:

3 Likelihood function of an M variable VAR(q)

Consider an M×1 VAR model with q lags (k = Mq coefficients in each equation, Mk coefficients in total), no constant:

y_t = B(L) y_{t-1} + e_t    e_t ~ N(0, Σ_e)

Letting B = [B_1, ..., B_q], X_t = [y_{t-1}, ..., y_{t-q}], β = vec(B), the VAR is:

y = (I_M ⊗ X) β + e    e ~ (0, Σ_e ⊗ I_T)   (4)

where y and e are MT×1 vectors, I_M is the identity matrix, and β is an Mk×1 vector. Conditioning on the initial observations y^p = [y_{-1}, ..., y_{-q}]:

L(β, Σ_e | y, y^p) = (2π)^{-0.5MT} |Σ_e ⊗ I_T|^{-0.5} exp{-0.5 (y - (I_M ⊗ X)β)'(Σ_e^{-1} ⊗ I_T)(y - (I_M ⊗ X)β)}

Page 11:

Some manipulations of the likelihood function:

(y - (I_M ⊗ X)β)'(Σ_e^{-1} ⊗ I_T)(y - (I_M ⊗ X)β)
= [(Σ_e^{-0.5} ⊗ I_T)(y - (I_M ⊗ X)β)]'[(Σ_e^{-0.5} ⊗ I_T)(y - (I_M ⊗ X)β)]
= [(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β]'[(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β]

Also (Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β = (Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β̂ + (Σ_e^{-0.5} ⊗ X)(β̂ - β), where β̂ = (Σ_e^{-1} ⊗ X'X)^{-1}(Σ_e^{-1} ⊗ X)'y. Therefore:

(y - (I_M ⊗ X)β)'(Σ_e^{-1} ⊗ I_T)(y - (I_M ⊗ X)β)
= [(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β̂]'[(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β̂] + (β - β̂)'(Σ_e^{-1} ⊗ X'X)(β - β̂)
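The decomposition (likelihood quadratic form = its value at β̂ plus a quadratic form in β - β̂) can be verified numerically; the sketch below uses simulated data and an arbitrary β, with no notation beyond the formulas just given.

```python
import numpy as np

rng = np.random.default_rng(2)
T, M, k = 30, 2, 3
X = rng.standard_normal((T, k))
Y = rng.standard_normal((T, M))
Sigma_e = np.array([[1.0, 0.4], [0.4, 2.0]])
Sinv = np.linalg.inv(Sigma_e)

W = np.kron(np.eye(M), X)                 # (I_M kron X)
V = np.kron(Sinv, np.eye(T))              # (Sigma_e^{-1} kron I_T)
y = Y.reshape(-1, order="F")

# with Kronecker covariance the GLS estimate is equation-by-equation OLS:
# beta_hat = vec((X'X)^{-1} X'Y)
Bhat = np.linalg.solve(X.T @ X, X.T @ Y)
beta_hat = Bhat.reshape(-1, order="F")

def Q(b):
    # the quadratic form in the exponent of the likelihood
    r = y - W @ b
    return r @ V @ r

beta = rng.standard_normal(M * k)         # an arbitrary coefficient vector
d = beta - beta_hat
assert np.isclose(Q(beta), Q(beta_hat) + d @ np.kron(Sinv, X.T @ X) @ d)
```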

Page 12:

Putting the pieces together:

L(β, Σ_e) ∝ |Σ_e ⊗ I_T|^{-0.5} exp{-0.5 (β - β̂)'(Σ_e^{-1} ⊗ X'X)(β - β̂)
  - 0.5 [(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β̂]'[(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β̂]}
= |Σ_e|^{-0.5k} exp{-0.5 (β - β̂)'(Σ_e^{-1} ⊗ X'X)(β - β̂)}
  × |Σ_e|^{-0.5(T-k)} exp{-0.5 tr[(Y - XB̂)'(Y - XB̂)Σ_e^{-1}]}
∝ N(β | β̂, Σ_e, X, y, y^p) × iW(Σ_e | β̂, X, y, y^p, T - ν)   (5)

where tr(·) denotes the trace of a matrix, Y is the T×M matrix of observations and B̂ = (X'X)^{-1}(X'Y), β̂ = vec(B̂).

Page 13:

• The conditional likelihood of a VAR(q) is the product of a Normal density for β, conditional on β̂ and Σ_e, and an inverted Wishart density for Σ_e, conditional on β̂, with scale matrix (Y - XB̂)'(Y - XB̂) and (T - ν) degrees of freedom, ν = k + M + 1.

• Bayesian inference: combine the likelihood with a prior.

i) If the prior is conjugate and the hyperparameters (the parameters of the prior) are known (or estimated): closed form solutions for the conditional and marginal of β and the marginal of Σ_e are available.

ii) If the hyperparameters are random, numerical MC methods are needed to obtain conditional and marginal distributions, even if the prior is conjugate.

Which priors are conjugate with (5)?

Page 14:

4 Conjugate priors for VARs

1. Diffuse prior for both β and Σ_e.

2. Normal prior for β with Σ_e fixed.

3. Normal prior for β, diffuse prior for Σ_e (semi-conjugate).

4. Normal for β|Σ_e, inverted Wishart for Σ_e (conjugate).

Page 15:

Case 1: p(β, Σ_e) ∝ |Σ_e|^{-0.5(M+1)}. This is typically called Jeffreys' prior.

Joint posterior: p(β, Σ_e|Y) ∝ L(β, Σ_e|Y) p(β, Σ_e).

Note: the posterior is similar to the likelihood; there is only an extra term in the normalizing constant. Thus

p(β|Σ_e, Y) = N(β | β̂, Σ_e, X, y, y^p)   (6)

p(Σ_e|Y) = iW(Σ_e | β̂, X, y, y^p, T - k)   (7)

where k is the number of parameters in each equation.

Note: g(β|y) (the marginal of β) is a t-distribution with parameters ((X'X), (Y - XB̂)'(Y - XB̂), B̂, T - k), where B̂ = (X'X)^{-1}(X'Y) and β̂ = vec(B̂).

Page 16:

• If the prior is diffuse, the mean of the posterior is the OLS estimator. Classical analysis is equivalent to Bayesian analysis with a flat prior and a quadratic loss function (so that the posterior mean is the optimal point estimator).

• Simulations can be done in two steps:

1. Draw Σ_e from the posterior inverted Wishart.

2. Conditional on the draw for Σ_e, draw β from a multivariate normal.

Alternatively:

1'. Draw β from the marginal multivariate t-distribution.
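A minimal sketch of the two-step sampler under the diffuse prior (simulated data; the inverse Wishart parameterization below is scipy's, which is this sketch's assumption, not something stated on the slides):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
T, M, k = 200, 2, 2
X = rng.standard_normal((T, k))
Btrue = np.array([[0.5, 0.1], [0.0, 0.3]])
Y = X @ Btrue + 0.5 * rng.standard_normal((T, M))

Bhat = np.linalg.solve(X.T @ X, X.T @ Y)          # OLS estimate B-hat
S = (Y - X @ Bhat).T @ (Y - X @ Bhat)             # posterior scale matrix
XtXinv = np.linalg.inv(X.T @ X)
beta_hat = Bhat.reshape(-1, order="F")

draws = np.empty((500, M * k))
for i in range(500):
    # step 1: draw Sigma_e from the inverted Wishart posterior (7)
    Sigma_e = invwishart.rvs(df=T - k, scale=S, random_state=i)
    # step 2: conditional on Sigma_e, draw beta from the normal (6)
    draws[i] = rng.multivariate_normal(beta_hat, np.kron(Sigma_e, XtXinv))

post_mean = draws.mean(axis=0)
assert np.max(np.abs(post_mean - beta_hat)) < 0.05   # centered on OLS
```

With T = 200 observations the Monte Carlo average of the draws sits essentially on top of the OLS estimate, as the diffuse-prior result above says it should.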

Page 17:

Case 2: β = β̄ + v, v ~ N(0, Σ_b), where β̄, Σ_b are known.

Then the prior is:

g(β) ∝ |Σ_b|^{-0.5} exp[-0.5 (β - β̄)'Σ_b^{-1}(β - β̄)]
     = |Σ_b|^{-0.5} exp[-0.5 (Σ_b^{-0.5}(β - β̄))'(Σ_b^{-0.5}(β - β̄))]   (8)

Page 18:

Posterior:

g(β|y) ∝ g(β) L(β|y)
= |Σ_b|^{-0.5} exp{-0.5 (Σ_b^{-0.5}(β - β̄))'(Σ_b^{-0.5}(β - β̄))} × |Σ_e ⊗ I_T|^{-0.5}
  × exp{-0.5 [(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β]'[(Σ_e^{-0.5} ⊗ I_T)y - (Σ_e^{-0.5} ⊗ X)β]}
∝ exp{-0.5 (z - Zβ)'(z - Zβ)}
= exp{-0.5 [(β - β̃)'Z'Z(β - β̃) + (z - Zβ̃)'(z - Zβ̃)]}   (9)

where z = [Σ_b^{-0.5}β̄; (Σ_e^{-0.5} ⊗ I_T)y], Z = [Σ_b^{-0.5}; (Σ_e^{-0.5} ⊗ X)] (stacked), and

β̃ = (Z'Z)^{-1}(Z'z) = [Σ_b^{-1} + (Σ_e^{-1} ⊗ X'X)]^{-1}[Σ_b^{-1}β̄ + (Σ_e^{-1} ⊗ X)'y]   (10)

Page 19:

Since Σ_e and Σ_b are fixed, the second term in (9) is a constant and

g(β|y) ∝ exp[-0.5 (β - β̃)'Z'Z(β - β̃)]   (11)
       ∝ exp[-0.5 (β - β̃)'Σ̃_b^{-1}(β - β̃)]   (12)

Conclusion: g(β|y) is N(β̃, Σ̃_b), where Σ̃_b = [Σ_b^{-1} + (Σ_e^{-1} ⊗ X'X)]^{-1}.

- If Σ_e is unknown, use Σ̂_e = T^{-1} Σ_t ê_t ê_t' in the formulas, where ê_t = y_t - (I ⊗ X)_t β̂ and β̂ = β_OLS.

- The β̃ obtained with this prior is related to the classical least squares estimator under uncertain linear restrictions.
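Formula (10) is just least squares on a system augmented with the prior as extra rows. The sketch below (simulated data, arbitrary prior values chosen for illustration) checks the closed form against `lstsq` applied to the stacked system z = Zβ:

```python
import numpy as np

rng = np.random.default_rng(4)
T, M, k = 60, 2, 3
X = rng.standard_normal((T, k))
Y = rng.standard_normal((T, M))
y = Y.reshape(-1, order="F")

Sigma_e = np.array([[1.0, 0.2], [0.2, 0.8]])
Sinv = np.linalg.inv(Sigma_e)
n = M * k
beta_bar = 0.1 * np.ones(n)               # prior mean (illustrative)
Sigma_b = 0.5 * np.eye(n)                 # prior variance (illustrative)

# closed form (10)
P = np.linalg.inv(Sigma_b) + np.kron(Sinv, X.T @ X)
b = np.linalg.solve(Sigma_b, beta_bar) + np.kron(Sinv, X).T @ y
beta_tilde = np.linalg.solve(P, b)

# stacked system: prior rows Sigma_b^{-1/2}, data rows (Sigma_e^{-1/2} kron X)
Sb_isqrt = np.linalg.cholesky(np.linalg.inv(Sigma_b)).T
L = np.linalg.cholesky(Sinv).T            # L'L = Sigma_e^{-1}
Z = np.vstack([Sb_isqrt, np.kron(L, X)])
z = np.concatenate([Sb_isqrt @ beta_bar, np.kron(L, np.eye(T)) @ y])
beta_ls, *_ = np.linalg.lstsq(Z, z, rcond=None)

assert np.allclose(beta_tilde, beta_ls)
```

This is exactly the "prior as dummy observations" logic made explicit on the next page.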

Page 20:

Model:

y_t = x_t B + e_t,   e_t ~ (0, σ²);   B̄ = B - ε,   ε ~ (0, Σ_b)   (13)

where B = [B_1, ..., B_q]', x_t = [y_{t-1}, ..., y_{t-q}]. Set z_t = [y_t; B̄], Z_t = [x_t; I], E_t = [e_t; ε]. Then z_t = Z_t B + E_t, where E_t ~ (0, Σ_E), Σ_E known, t = 1, ..., T. Thus:

B_GLS = (Z'Σ_E^{-1}Z)^{-1}(Z'Σ_E^{-1}z) = B̃   (Theil's mixed estimator)

• The prior on the VAR coefficients can be treated as a dummy observation added to the system of VAR equations.

• The prior can also be treated as an initial condition. If we write the initial observation as y_0 = x_0 B + e_0, then y_0 = σ²W^{-1}B̄, x_0 = σ²W^{-1}, e_0 = σ²W^{-1}ε, with WW' = Σ_b.

Page 21:

Special case 1: Ridge estimator

Consider a univariate model. If B̄ = 0, Σ_e = σ_e² I, Σ_b = σ_v² I, then

B̃ = (I_q + λ(X'X)^{-1})^{-1} B̂   (14)

where λ = σ_e²/σ_v² and B̂ = (X'X)^{-1}(X'Y).

- The prior reflects the belief that all the coefficients of an AR(q) are small.

- The posterior estimator increases the smallest eigenvalues of the data matrix by a factor λ (useful when q is large and the X'X matrix is ill-conditioned).
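Equation (14) is algebraically the familiar ridge estimator (X'X + λI)^{-1}X'Y, which a quick check confirms (simulated data, hypothetical λ):

```python
import numpy as np

rng = np.random.default_rng(5)
T, q = 80, 5
X = rng.standard_normal((T, q))
Y = X @ rng.standard_normal(q) + rng.standard_normal(T)

sig2_e, sig2_v = 1.0, 0.25
lam = sig2_e / sig2_v                      # lambda = sigma_e^2 / sigma_v^2

Bhat = np.linalg.solve(X.T @ X, X.T @ Y)   # OLS
# (14): shrink OLS toward the zero prior mean
B_tilde = np.linalg.solve(np.eye(q) + lam * np.linalg.inv(X.T @ X), Bhat)
# standard ridge formula
B_ridge = np.linalg.solve(X.T @ X + lam * np.eye(q), X.T @ Y)

assert np.allclose(B_tilde, B_ridge)
```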

Page 22:

Special case 2: Litterman (Minnesota) setup

Multivariate setup. Now β̄, Σ_b have a special structure: β̄ = 0 except β̄_{i1} = 1 (the own first lag). Σ_b = Σ_b(φ) is diagonal with elements:

σ²_{ij,ℓ} = φ_0 / h(ℓ)                       if i = j
          = (φ_0 φ_1 / h(ℓ)) (σ_i/σ_j)²     otherwise   (15)
          = φ_0 φ_2                          for exogenous variables   (16)

φ_0 = tightness on the variance of the first own lag; φ_1 = relative tightness on other variables; h(ℓ) = tightness of the variance of lags other than the first (decay parameter); (σ_i/σ_j)² is a scaling factor.

Typically h(ℓ) is regulated by one (decay) parameter φ_3. Useful structures: harmonic decay h(ℓ) = ℓ^{φ_3}; geometric decay h(ℓ) = φ_3^{-ℓ+1}; linear decay h(ℓ) = ℓ.
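A sketch of the implied prior variances (assuming, as in the decay structures above, that variances shrink as φ_0/h(ℓ); the function name and argument ordering are this sketch's, not Litterman's):

```python
import numpy as np

def minnesota_variances(M, q, phi0, phi1, sigma, h):
    """Diagonal of Sigma_b(phi) for the lag coefficients, equation by
    equation, ordered as (lag 1 var 1, ..., lag 1 var M, lag 2 var 1, ...)."""
    out = []
    for i in range(M):                  # equation
        for lag in range(1, q + 1):
            for j in range(M):          # variable
                v = phi0 / h(lag)
                if i != j:
                    v *= phi1 * (sigma[i] / sigma[j]) ** 2
                out.append(v)
    return np.array(out)

# bivariate VAR(2) with linear decay h(l) = l and sigma = (1, 2)
d = minnesota_variances(2, 2, phi0=0.2, phi1=0.5, sigma=(1.0, 2.0),
                        h=lambda l: l)
expected = [0.2, 0.025, 0.1, 0.0125,    # equation 1
            0.4, 0.2, 0.2, 0.1]         # equation 2
assert np.allclose(d, expected)
```

With φ_0 = 0.2 and φ_1 = 0.5 this reproduces, entry by entry, the pattern of the bivariate Example 1 on the next pages.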

Page 23:

[Figure: prior densities implied by the Minnesota prior; panels for own lags (lags 1-3) and other lags (lags 1-3).]

Page 24:

Logic of this (shrinkage) prior:

- The mean is chosen so that a-priori the VAR is a collection of M random walks (good for forecasting).

- Σ_b is a very large object. Decrease the dimensionality by setting Σ_b = Σ_b(φ).

- Σ_b is a-priori diagonal (no expected relationship among equations and coefficients); φ_0 is the relative importance of the prior to the data.

- The variance of lags of the LHS variables shrinks to zero as the lag increases. The variance of lags of other RHS variables shrinks to zero at a different rate (governed by φ_1); φ_1 ≤ 1 measures the relative importance of other variables.

- The variance of the exogenous variables is regulated by φ_2. If φ_2 is large, prior information on the exogenous variables is diffuse.

Page 25:

Example 1. Bivariate VAR(2) with h(ℓ) = ℓ:

β̄ = [1, 0, 0, 0, 0, 1, 0, 0]'

Σ_b = diag( φ_0,  φ_0 φ_1 (σ_1/σ_2)²,  φ_0/2,  (φ_0/2) φ_1 (σ_1/σ_2)²,
            φ_0 φ_1 (σ_2/σ_1)²,  φ_0,  (φ_0/2) φ_1 (σ_2/σ_1)²,  φ_0/2 )

(the first four entries refer to equation 1 and the last four to equation 2; within an equation the order is lag 1 of both variables, then lag 2).

Page 26:

- If Σ_b is diagonal, φ_1 = 1 and the same variables belong to all equations, then β̃ = vec(β̃_i), where the β̃_i are computed equation by equation. In other setups Σ_b is not diagonal and this result does not hold.

- Let θ = (β, vech(Σ_b)). The Minnesota prior makes θ = θ(φ), with φ of small dimension. Better estimates of φ than of θ can be obtained from the data. Better forecasts than univariate ARIMA models or traditional multivariate SES (see e.g. Robertson and Tallman (1999)).

- Standard approaches: "unimportant" lags purged using t-tests (see e.g. Favero (2001)). Strong a-priori restrictions on which variables and which lags enter the VAR. Unpalatable.

- The Minnesota prior instead imposes probability distributions on the VAR coefficients (uncertain linear restrictions). It gives a reasonable account of the uncertainty faced by an investigator.

Page 27:

• How do we choose φ = (φ_0, φ_1, φ_2, ...) and (σ_i/σ_j)²?

1) Use rules of thumb. Typical default values: φ_0 = 0.2, φ_1 = 0.5, φ_2 = 10^5, a harmonic specification for h(ℓ) with φ_3 = 1 or 2, implying a loose prior on lagged coefficients and an uninformative prior for the exogenous variables.

2) Estimate them with the ML-II approach, i.e. maximize L(φ|y) = ∫ f(y|β) g(β|φ) dβ on a training sample.

3) Set up a prior g(φ) and produce hierarchical posterior estimates. For this we need MCMC methods.

Page 28:

Example 2. Consider y_t = Bx_t + u_t, with B scalar, u_t ~ N(0, σ_u²), σ_u² known, and let B = B̄ + ε, with ε ~ N(0, σ_ε²), B̄ fixed and σ_ε² = q(φ)², where φ is a set of hyperparameters.

Then y_t = B̄x_t + η_t, where η_t = u_t + εx_t, and the posterior kernel is:

ğ(φ, B|y) = [1 / ((2π)^{0.5} σ_u σ_ε)] exp{-0.5 (y - Bx)²/σ_u² - 0.5 (B - B̄)²/σ_ε²}   (17)

where y = [y_1, ..., y_T]', x = [x_1, ..., x_T]'. Integrating B out of (17):

g̃(φ|y) = [1 / (2π(q(φ)² tr|X'X| + σ_u²))^{0.5}] exp{-0.5 (y - B̄x)² / (σ_u² + q(φ)² tr|X'X|)}   (18)

Maximize (18) using gradient or grid methods. Alternative: compute the prediction error decomposition of ğ(φ|y) with the Kalman filter and find modal estimates of φ.
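A sketch of the grid version of this idea (all numbers hypothetical). Integrating B out gives y ~ N(B̄x, σ_u²I + σ_ε²xx'), so the marginal likelihood can be evaluated on a grid of prior spreads σ_ε and maximized; the closed form is checked here against brute-force numerical integration over B:

```python
import numpy as np

def marg_loglik(y, x, B_bar, sig_u, sig_eps):
    # y ~ N(B_bar * x, sig_u^2 I + sig_eps^2 x x') once B is integrated out
    V = sig_u**2 * np.eye(len(y)) + sig_eps**2 * np.outer(x, x)
    d = y - B_bar * x
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (len(y) * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(V, d))

# check against trapezoidal integration over B on a tiny example
y = np.array([0.7, -0.2, 1.1])
x = np.array([1.0, 0.5, -0.3])
B_bar, sig_u, sig_eps = 0.2, 0.8, 1.5
Bgrid = np.linspace(-15, 15, 20001)
lik = np.exp(-0.5 * ((y[:, None] - Bgrid * x[:, None]) / sig_u) ** 2).prod(axis=0) \
      / (2 * np.pi * sig_u**2) ** (len(y) / 2)
prior = np.exp(-0.5 * ((Bgrid - B_bar) / sig_eps) ** 2) / np.sqrt(2 * np.pi * sig_eps**2)
vals = lik * prior
brute = np.log((0.5 * (vals[1:] + vals[:-1]) * np.diff(Bgrid)).sum())
assert np.isclose(marg_loglik(y, x, B_bar, sig_u, sig_eps), brute)

# ML-II on a grid: pick the prior spread maximizing the marginal likelihood
grid = [0.01, 0.1, 1.0, 10.0]
best = max(grid, key=lambda s: marg_loglik(y, x, B_bar, sig_u, s))
assert best in grid
```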

Page 29:

Recent applications of this method:

i) Giannone, Primiceri, Lenza (2012): employ the marginal likelihood to choose the informativeness of the prior restrictions. Idea: β ~ N(β̄, γ Σ ⊗ Ω), where γ is a scalar, Σ the covariance matrix of the VAR shocks and Ω a known scale matrix. Problem: choose γ in an optimal way.

ii) Belmonte, Koop, Korobilis (2012): employ the marginal likelihood to choose the informativeness of the prior distribution for time variations in the coefficients and in the variance.

iii) Carriero, Kapetanios, Marcellino (2011): employ the marginal likelihood to select the variance of the prior from a grid.

Page 30:

Hierarchical VARs

One possible hierarchical model is:

y = (I ⊗ X)β + e,   e ~ N(0, Σ)   (19)
β = μ + v,          v ~ N(0, Σ_v)   (20)
μ = μ̄ + η,          η ~ N(0, Φ)   (21)

with Σ_v, μ̄, Φ known (or estimable).

- Need to compute the joint posterior of (β, μ, Σ).

- Interest is in g(β|y, X, y^p) = ∫ g(β, μ, Σ|y, X, y^p) dμ dΣ.

- Typically it is impossible to compute g(β|y, X, y^p) analytically. One example where this is possible is in Canova (2007, chapter 9). Otherwise, use MCMC methods to get draws from this distribution.

Page 31:

Simulation

It is very easy to perform simulations in this case since Σ is fixed. Thus:

1. Draw β from the normal posterior.

Results for other prior structures (Kadiyala and Karlsson (1997)):

Case 3): g(β, Σ_e) is Normal-diffuse, i.e. g(β) ~ N(β̄, Σ_b), with β̄ and Σ_b known, and g(Σ_e) ∝ |Σ_e|^{-0.5(M+1)}. This prior is semi-conjugate. This means that the conditional posteriors have the same form as in case 2) (the moments of the normal are different), but the marginal posterior

g(β|y) ∝ exp{-0.5 (β - β̃)'Σ̃_b^{-1}(β - β̃)} × |(Y - XB)'(Y - XB) + (B - B̂)'(X'X)(B - B̂)|^{-0.5T}

has an unknown format.

Page 32:

Case 4): g(β|Σ_e) ~ N(β̄, Σ_e ⊗ Ω) and g(Σ_e) ~ iW(Σ̄, ν̄). Then g(β|Σ_e, y) ~ N(β̃, Σ_e ⊗ Ω̃) and g(Σ_e|y) ~ iW(Σ̃, T + ν̄), where

Ω̃ = (Ω^{-1} + X'X)^{-1};
Σ̃ = B̂'X'XB̂ + B̄'Ω^{-1}B̄ + Σ̄ + (Y - XB̂)'(Y - XB̂) - B̃'(Ω^{-1} + X'X)B̃;
B̃ = Ω̃(Ω^{-1}B̄ + X'XB̂).

The marginal of β is t with parameters (Ω̃^{-1}, Σ̃, B̃, T + ν̄).

- In cases 3)-4) there is posterior dependence among the equations (even with prior independence and φ_1 = 1).

Page 33:

• Any additional uncertain restrictions on the coefficients can be tacked on to the system in exactly the same way as in case 2).

i) Quasi-deterministic seasonality

Example 3. In quarterly data, a prior for a bivariate VAR(2) with 4 seasonal dummies has mean β̄ = [1, 0, 0, 0, 0, 0, 0, 0 | 0, 0, 1, 0, 0, 0, 0, 0] and the block of Σ_b corresponding to the seasonal dummies has diagonal elements σ_dd = φ_0 φ_s, where φ_s is the tightness of the seasonal information (a large φ_s means little prior information).

Page 34:

ii) Stochastic seasonality: there is a peak in the spectrum at ω_q = π/2 or π or both (quarterly data).

Let y_t = D(ℓ)e_t. If there is a peak at ω_q: |D(ω_q)|² is large, or |B(ω_q)|² small, where B(ℓ) = D(ℓ)^{-1}.

A small |B(ω_q)|² implies Σ_{j=1}^∞ B_j cos(jω_q) ≈ -1. This is a "sum-of-coefficients" restriction.

In a VAR model: 1 + Σ_{j=1}^∞ B_j cos(jω_q) ≈ 0, where the B_j are the AR coefficients of equation j (see Canova, 1992).

Set Rβ = r + v, with r = [-1, ..., -1]' and R a 2 × Mk matrix. For quarterly data, if the first variable of the VAR displays seasonality at π/2 and π:

Page 35:

R = [  0  -1   0   1   0  -1  ...  0
      -1   1  -1   1  -1   1  ...  0 ]

Add these restrictions to the original prior. Use Theil's mixed estimator.

iii) Trend restrictions on variable i: Σ_{j=1}^∞ B_{ji} ≈ -1.

iv) Cyclical peak restriction: Σ_{j=1}^∞ B_{ji} cos(jω) ≈ -1 for all ω ∈ (2π/d ± ε), some d, ε small, i = 1, 2, ....

v) High coherence at frequency π/2 in series i and i' of a VAR implies Σ_{j=1}^∞ (-1)^j B_{i'i'}(2j) + Σ_{j=1}^∞ (-1)^j B_{ii}(2j) ≈ -2.

Page 36:

Some tips

- If the hyperparameters are treated as fixed, we need some sensitivity analysis. Rule-of-thumb parameters work well for forecasting. Do they work well in structural estimation?

- You can set the prior mean or prior variance as you wish (after all, this is a prior!). In all the cases we consider, the covariance matrix has a Kronecker product form (easy to compute).

- What are the gains from using fully hierarchical methods (relative to empirically based priors or rules of thumb)? Not much is known (see Giannone et al. (2012), Carriero et al. (2012)).

Page 37:

General Hierarchical Bayesian VARs

y_t = X_t β + e_t,   e_t ~ N(0, Σ)   (22)
β = M_0 θ + v,       v ~ N(0, D_0)   (23)
θ = M_1 μ + η,       η ~ N(0, D_1)   (24)

with M_0, M_1, D_1 known and X_t = (I ⊗ x_t). Priors:

1) p(Σ) ~ iW(S̄, s).
2) p(D_0) ~ iW(D̄_0, ν̄).

Conditional posteriors:

1) (β | θ, Σ, Y) ~ N(β̃, Ω̃)
2) (Σ | β, Y) ~ iW(Σ̃, s + T)
3) (θ | β, D_0, Y) ~ N(D̃_1(D_1^{-1}M_1μ + M_0'D_0^{-1}β), D̃_1)
4) (D_0 | β, θ, Y) ~ iW(D̃_0, ν̄ + 1)

Page 38:

where

Ω̃ = (D_0^{-1} + Σ_t X_t'Σ^{-1}X_t)^{-1};
β̃ = Ω̃(D_0^{-1}M_0θ + Σ_t X_t'Σ^{-1}y_t);
Σ̃^{-1} = S̄ + Σ_t (y_t - X_tβ)(y_t - X_tβ)';
D̃_1 = (D_1^{-1} + M_0'D_0^{-1}M_0)^{-1};
D̃_0^{-1} = D̄_0^{-1} + Σ_{g=1}^{M} (β_g - θ)(β_g - θ)'.

We can use these conditional posteriors in the Gibbs sampler.
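A compact sketch of such a Gibbs sampler, cycling through conditionals 1)-3) on simulated data (for brevity D_0 is held fixed rather than drawn in step 4; all dimensions and hyperparameter values are this sketch's choices, and scipy's inverse Wishart parameterization is assumed):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(7)
T, M, k = 100, 2, 2
n = M * k
x = rng.standard_normal((T, k))
Xt = [np.kron(np.eye(M), x[t]) for t in range(T)]   # X_t = (I kron x_t)
beta_true = np.array([0.5, 0.2, -0.3, 0.4])
Y = np.array([Xt[t] @ beta_true for t in range(T)]) + 0.3 * rng.standard_normal((T, M))

# hyperparameters (assumed known in this sketch)
M0, M1, mu, D1 = np.ones((n, 1)), 1.0, 0.0, 1.0
D0inv = np.eye(n)                                   # D_0 fixed here
S_bar, s = np.eye(M), M + 2

beta, Sigma, theta = np.zeros(n), np.eye(M), 0.0
keep = []
for it in range(300):
    Sinv = np.linalg.inv(Sigma)
    # 1) beta | theta, Sigma
    Om = np.linalg.inv(D0inv + sum(X.T @ Sinv @ X for X in Xt))
    bt = Om @ (D0inv @ (theta * M0.ravel())
               + sum(Xt[t].T @ Sinv @ Y[t] for t in range(T)))
    beta = rng.multivariate_normal(bt, Om)
    # 2) Sigma | beta
    resid = np.array([Y[t] - Xt[t] @ beta for t in range(T)])
    Sigma = invwishart.rvs(df=s + T, scale=S_bar + resid.T @ resid, random_state=it)
    # 3) theta | beta (scalar here)
    D1t = 1.0 / (1.0 / D1 + float(M0.T @ D0inv @ M0))
    theta = rng.normal(D1t * (M1 * mu / D1 + float(M0.T @ D0inv @ beta)), np.sqrt(D1t))
    if it >= 100:
        keep.append(beta)

post = np.mean(keep, axis=0)
assert np.all(np.isfinite(post))
assert np.max(np.abs(post - beta_true)) < 0.3
```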

Page 39:

Example 4 (Forecasting inflation rates in Italy)

- Large changes in the persistence of inflation: the AR(1) coefficient is 0.85 in the 1980s and 0.48 in the 1990s.

- Which model to use? A univariate ARIMA; a VAR(4) with annualized three-month inflation, rent inflation and the unemployment rate; two trivariate BVAR(4) (one with arbitrary hyperparameters (0.2, 1, 0.5), one with optimal ones (0.15, 2.0, 1.0)). Report one-year-ahead Theil-U statistics.

Sample          ARIMA   VAR    BVAR1         BVAR2
1996:1-2000:4   1.04    1.47   1.09 (0.03)   0.97 (0.02)
1990:1-1995:4   0.99    1.24   1.04 (0.04)   0.94 (0.03)

- Difficult to forecast; the VAR does poorly, the BVARs better.

- Results are robust to changes in the forecasting sample.

Page 40:

5 Forecasting with BVARs: Fan Charts

Let the VAR be written in companion form:

Y_t = B Y_{t-1} + E_t   (25)

where Y_t and E_t are Mq × 1 vectors and B is an Mq × Mq matrix.

Repeatedly substituting: Y_t = B^τ Y_{t-τ} + Σ_{j=0}^{τ-1} B^j E_{t-j}, or

y_t = J B^τ Y_{t-τ} + Σ_{j=0}^{τ-1} J B^j J' e_{t-j}   (26)

where J is the selection matrix such that J Y_t = y_t, J E_t = e_t and J'J E_t = E_t.
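The companion-form recursion (26) can be checked mechanically: with shocks switched off, J B^τ Y_t must reproduce τ iterations of the original VAR. A sketch for a bivariate VAR(2) with arbitrary coefficient matrices:

```python
import numpy as np

M = 2
B1 = np.array([[0.5, 0.1], [0.0, 0.4]])
B2 = np.array([[0.2, 0.0], [0.1, 0.1]])

# companion matrix (Mq x Mq) and selection matrix J (M x Mq)
B = np.block([[B1, B2], [np.eye(M), np.zeros((M, M))]])
J = np.hstack([np.eye(M), np.zeros((M, M))])

y_now, y_prev = np.array([1.0, -0.5]), np.array([0.3, 0.2])
Ybig = np.concatenate([y_now, y_prev])     # Y_t = [y_t; y_{t-1}]

# iterate the VAR directly, with zero shocks
a, b = y_now, y_prev
tau = 5
for _ in range(tau):
    a, b = B1 @ a + B2 @ b, a

assert np.allclose(J @ np.linalg.matrix_power(B, tau) @ Ybig, a)
```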

Page 41:

• Unconditional point forecast for y_{t+τ}:

y_t(τ) = J B̃^τ Y_t   (27)

Use the posterior mean, median or mode B̃, depending on the loss function. Recall that if E_t is normal, mean, mode and median coincide.

The forecast error is y_{t+τ} - y_t(τ) = Σ_{j=0}^{τ-1} J B^j J' e_{t+τ-j} + [J B^τ Y_t - J B̃^τ Y_t], the sum of a future-shock component and a parameter-uncertainty component.

Page 42:

• Unconditional probability distributions for forecasts (fan charts).

Algorithm 5.1. Assume β ~ N(β̃, Σ̃_b). Set P̃P̃' = Σ̃_b.

- Draw a standard normal random vector v_ℓ and set β^ℓ = β̃ + P̃v_ℓ.

- Construct point forecasts y_t(τ), τ = 1, 2, ..., using β^ℓ.

- Repeat the previous steps ℓ = 1, ..., L times.

- Construct distributions at each τ using kernel methods and extract percentiles (fan charts).

The same algorithm can be used for recursive forecast charts; the only difference is that β̃ and Σ̃_b then depend on t (they are recursively estimated).
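A sketch of Algorithm 5.1 for a univariate AR(2) (the posterior moments below are invented for illustration); percentile bands across the coefficient draws form the fan chart:

```python
import numpy as np

rng = np.random.default_rng(8)
beta_tilde = np.array([0.6, 0.2])                 # posterior mean (AR(2))
Sigma_b = np.array([[0.02, 0.0], [0.0, 0.01]])    # posterior variance
P = np.linalg.cholesky(Sigma_b)                   # P P' = Sigma_b

y_now, y_prev = 1.0, 0.8
L, H = 2000, 8
paths = np.empty((L, H))
for l in range(L):
    b = beta_tilde + P @ rng.standard_normal(2)   # beta^l = beta~ + P v_l
    a, c = y_now, y_prev
    for tau in range(H):                          # point forecasts under beta^l
        a, c = b[0] * a + b[1] * c, a
        paths[l, tau] = a

bands = np.percentile(paths, [16, 50, 84], axis=0)  # fan chart percentiles
assert bands.shape == (3, H)
assert np.all(bands[0] <= bands[1]) and np.all(bands[1] <= bands[2])
```

Only parameter uncertainty is simulated here, matching the algorithm above; future shocks could be added inside the forecast loop to widen the bands.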

Page 43:

• "Average" τ-step ahead forecasts

Construct f(y_{t+τ}|y_t) = ∫ f(y_{t+τ}|y_t, β) g(β|y_t) dβ, where f(y_{t+τ}|y_t, β) is the conditional density of y_{t+τ} and g(β|y_t) the posterior of β.

- This can be calculated numerically: draw β^ℓ from g(β|y_t), compute f(y_{t+τ}|y_t, β^ℓ), and average over the y_{t+τ} paths.

Page 44:

- The above algorithm can also be used to calculate turning point probabilities:

i) A downturn occurs at τ in y_t(τ) (typically, GDP) if y_t(τ-2) < y_t(τ-1) < y_t(τ) > y_t(τ+1) > y_t(τ+2).

ii) An upturn occurs at τ in y_t(τ) if y_t(τ-2) > y_t(τ-1) > y_t(τ) < y_t(τ+1) < y_t(τ+2).

Implementation: draw β^ℓ, construct (y_t(τ))^ℓ, ℓ = 1, ..., L; apply the above rules for each τ. The fraction of draws for which the condition is satisfied at each τ is an estimate of the probability of a downturn (upturn).
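A sketch of the turning-point calculation applied to simulated forecast paths with a built-in peak (all numbers hypothetical), using the peak pattern (rising then falling) as the downturn definition:

```python
import numpy as np

def downturn_prob(paths, tau):
    # fraction of paths with y(tau-2) < y(tau-1) < y(tau) > y(tau+1) > y(tau+2)
    p = paths
    hit = (p[:, tau-2] < p[:, tau-1]) & (p[:, tau-1] < p[:, tau]) \
        & (p[:, tau] > p[:, tau+1]) & (p[:, tau+1] > p[:, tau+2])
    return hit.mean()

rng = np.random.default_rng(9)
base = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # hump, peak at index 3
paths = base + 0.01 * rng.standard_normal((1000, base.size))

assert downturn_prob(paths, 3) > 0.9     # at the peak: near certainty
assert downturn_prob(paths, 2) < 0.1     # on the upswing: near zero
```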

Page 45:

Example 5. Use a BVAR to construct one-year-ahead bands for inflation, recursively updating the posterior estimates over 1995:4-1998:2.

- The bands are relatively tight: errors at the beginning. The distribution of one-year-ahead forecasts (based on 1995:4) is also tight.

- Sample 1996:1-2002:4: 4 downturns. Median number of forecasted downturns: 3; Pr(n ≥ 3) = 0.9, Pr(n > 4) = 0.0.

Page 46:

[Figure: recursive one-year-ahead inflation forecasts (percent, 1997-1998) with actual values; density of one-year-ahead forecasts based on data up to 1995:4.]

Page 47:

6 DSGE priors for VARs

Log-linearized solution of a DSGE model:

y_{1t+1} = Φ(θ) y_{1t} + e_{t+1}   (28)
y_{2t} = Λ(θ) y_{1t}   (29)

y_{1t} are the exogenous and endogenous states (d_1 × 1); y_{2t} are the endogenous controls (d_2 × 1); e_{t+1} are the innovations in the shocks; Φ(θ), Λ(θ) are functions of θ, the structural parameters. Letting y_t = [y_{2t}; y_{1t}], the system is:

[ 0  0       ] y_{t+1} = [ -I_{d_2}  Λ(θ) ] y_t + [ 0       ]   (30)
[ 0  I_{d_1} ]           [ 0         Φ(θ) ]       [ e_{t+1} ]

or

B_0 y_{t+1} = B_1(θ) y_t + u_{t+1}

• The (log-)linear DSGE solution is a restricted VAR!

Page 48:

Given g(θ), the model implies priors g(Φ(θ)) and g(Λ(θ)) for the decision rule coefficients, and thus a prior for the VAR coefficients B(ℓ).

• A DSGE model implies restrictions on the VAR coefficients. It can be used to link (in a hierarchical fashion) the VAR coefficients β and the DSGE parameters θ.

Note: if θ ~ N(θ̄, Σ_θ), then vec(Φ(θ)) ≈ N(vec(Φ(θ̄)), [∂vec(Φ(θ))/∂θ'] Σ_θ [∂vec(Φ(θ))/∂θ']') and vec(Λ(θ)) ≈ N(vec(Λ(θ̄)), [∂vec(Λ(θ))/∂θ'] Σ_θ [∂vec(Λ(θ))/∂θ']').

Example 6. Consider a VAR(q) y_{t+1} = B(ℓ)y_t + u_t. From (30), g(B_1) is normal with mean B_0^G B_1(θ̄), where B_0^G is the generalized inverse of B_0, and with variance Σ_b = B_0^G Σ_{b_1} B_0^{G'}, where Σ_{b_1} is the variance of vec(B_1(θ)).

A DSGE prior on B_ℓ, ℓ ≥ 2, has a dogmatic form: mean zero and zero variance.

Page 49:

If there are unobservables, we want a prior for a VAR in the observables only.

Example 7 (RBC prior: Ingram and Whiteman, 1994). An RBC model with utility function U(c, n) = log(c_t) + log(1 - n_t) implies

[K_{t+1}; lnA_{t+1}] = [φ_kk  φ_ka; 0  ρ] [K_t; lnA_t] + [0; e_{t+1}] ≡ Φ [K_t; lnA_t] + u_{t+1}   (31)

[c_t  n_t  y_t  i_t]' = Λ [K_t; lnA_t]   (32)

K_t is the capital stock and A_t a technological disturbance; c_t is consumption, n_t hours, y_t output and i_t investment.

Here Φ and Λ are functions of α, the share of labor in production; β, the discount factor; δ, the depreciation rate; and ρ, the AR parameter of the technology shock. Let y_{1t} = [c_t, n_t, y_t, i_t]', y_{2t} = [k_t, lnA_t]' and θ = (α, β, δ, ρ).

Page 50

A VAR for y_{1t} only is y_{1t} = H(θ) y_{1t−1} + ν_{1t}, where H(θ) = Θ(θ)Γ(θ)(Θ(θ)'Θ(θ))^{-1}Θ(θ)', ν_{1t} = Θ(θ)u_t, and (Θ(θ)'Θ(θ))^{-1}Θ(θ)' is the generalized inverse of Θ(θ).

If θ ~ N([0.58, 0.988, 0.025, 0.95]', diag(0.0006, 0.0005, 0.0006, 0.00015)), the model implies that the prior mean for H(θ) is

          [ 0.19  0.33  0.13  -0.02 ]
H̄(θ)  =  [ 0.45  0.67  0.29  -0.10 ]
          [ 0.49  1.32  0.40   0.17 ]
          [ 1.35  4.00  1.18   0.64 ]

(Note the substantial feedback from C, Y, N to I in the last row.)

Page 51

The prior variance for H(θ̄) is Σ_H = [∂H/∂θ] Σ_θ [∂H/∂θ'].

• A Minnesota-style prior for y_{1t} consistent with the RBC is:

- Coefficients on y_{1t−1} ~ N(H̄(θ), φ_0 × Σ_H).

- Coefficients on y_{1t−j} ~ N(0, φ_0 h(ℓ) × Σ_H), j > 1, where φ_0 is a tightness parameter and h(ℓ) a decay function. Note that here φ_1 = 1.

• Move from statistical to economic priors.
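The two ingredients can be sketched in a few lines: the RBC-implied first-lag mean H(θ) = ΘΓ(Θ'Θ)⁻¹Θ' and decayed variances for higher lags. The Θ, Γ, φ0 values and the decay function h(ℓ) below are hypothetical placeholders, not the Ingram-Whiteman estimates:

```python
import numpy as np

# Hypothetical decision-rule matrices (4 observables, 2 states)
Theta = np.array([[0.5, 1.1], [0.3, 0.7], [0.6, 1.4], [1.5, 4.0]])
Gamma = np.array([[0.94, 0.08], [0.0, 0.95]])

# H(theta) = Theta Gamma (Theta'Theta)^{-1} Theta': prior mean on the first lag
pinv = np.linalg.solve(Theta.T @ Theta, Theta.T)   # generalized inverse of Theta
H = Theta @ Gamma @ pinv

# Minnesota-style scaling: lag-l covariance = phi0 * h(l) * Sigma_H, with h(1) = 1
phi0 = 0.2                                # overall tightness (arbitrary)
Sigma_H = np.eye(16)                      # stand-in; the text gets it via the delta method
h = lambda l: l ** -2.0                   # harmonic decay, an arbitrary choice
prior_var_lag = {l: phi0 * h(l) * Sigma_H for l in range(1, 4)}
```

Σ_H is set to the identity here only to keep the sketch self-contained; in the text it comes from the delta method applied to H(θ).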

Page 52

Del Negro and Schorfheide (2004):

- A DSGE model provides more than a "form" for the prior restrictions (zero mean on lags greater than one, etc.). It gives quantitative info.

- Exploit the idea that the prior is an additional set of equations that can be appended to the model.

- Can make the DSGE prior more or less informative for the VAR depending

on how much DSGE data is appended to the actual data.

- Set up a hierarchical model that allows us to compute the posterior of DSGE and VAR parameters jointly.

Page 53

Idea of the approach:

- Given θ, simulate data from the model. Append the simulated data to the actual data and estimate a VAR on the extended data set.

- Estimates of the VAR coefficients and of the covariance matrix will reflect sample and model information. The weights are given by the precision of the two types of information.

- The precision of the data information depends on T (which is fixed). The precision of the simulated information depends on T1, which can be chosen by the investigator. By varying λ = T1/T, one can make the prior more or less informative and thus assess how important the model is for the data.

- The model has restrictions. If a large λ is optimal, the restrictions imposed by the model are not violated. If λ is small, the restrictions are violated (a test of the model).

Page 54

• Let g(θ) = ∏_i g(θ_i) be the prior on the DSGE parameters.

• The DSGE model implies a prior g(α|θ) ~ N(ᾱ(θ), Σ̄_b(θ)), Σ_e ~ IW(T1 Σ̄(θ), T1 − k) on the VAR parameters of the decision rule, where

ᾱ(θ) = (X^s' X^s)^{-1} (X^s' y^s)
Σ̄_b(θ) = Σ_e(θ) ⊗ (T1 X^s' X^s)^{-1}
Σ̄(θ) = (y^s' y^s − (y^s' X^s) ᾱ(θ))   (33)

y^s is the simulated data, X^s the lags in the VAR of simulated data, T1 the length of the simulated data.

Let λ = T1/T control the relative importance of the two types of information. λ → 0 (λ → ∞): actual (simulated) data dominates.

• The VAR implies a density f(α, Σ_u | y).

Page 55

The model has a hierarchical structure: f(α, Σ_e|y) g(α|θ) g(Σ_e|θ) g(θ). Since the likelihood and the prior are conjugate (see the Normal-IW assumption above), the conditional posteriors for the VAR parameters are available in analytical form.

• g(α|θ, y, Σ_e) ~ N(α̃(θ), Σ̃_b(θ)); g(Σ_e|θ, y) ~ IW((1+λ)T Σ̃(θ), (1+λ)T − k), where

α̃(θ) = (T1 X^s' X^s + X'X)^{-1} (T1 X^s' y^s + X'y)
Σ̃_b(θ) = Σ_e(θ) ⊗ (T1 X^s' X^s + X'X)^{-1}
Σ̃(θ) = [1/((1+λ)T)] [(T1 y^s' y^s + y'y) − (T1 y^s' X^s + y'X) α̃(θ)]   (34)

• If we pick a θ we can immediately construct these posteriors.
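Given simulated data, the moments in (34) are just OLS quantities on the stacked actual and simulated samples. A sketch with random stand-ins for both data sets; as a simplification of (34), the simulated sample here has T1 rows itself, so the T1 weighting is carried by the cross-products rather than by an explicit scale factor:

```python
import numpy as np

rng = np.random.default_rng(0)
T, T1, M, k = 80, 40, 2, 3                   # actual/simulated lengths, variables, regressors
X, y   = rng.normal(size=(T, k)),  rng.normal(size=(T, M))    # actual data
Xs, ys = rng.normal(size=(T1, k)), rng.normal(size=(T1, M))   # model-simulated data

lam = T1 / T   # prior weight: a larger lam means the model info dominates

# Posterior moments mixing simulated and actual cross-products, (34)-style
XtX = Xs.T @ Xs + X.T @ X
alpha_tilde = np.linalg.solve(XtX, Xs.T @ ys + X.T @ y)
Sigma_tilde = ((ys.T @ ys + y.T @ y)
               - (ys.T @ Xs + y.T @ X) @ alpha_tilde) / ((1 + lam) * T)
```

With lam = T1/T, the divisor (1 + lam)T equals T + T1, the size of the extended sample.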

Page 56

• g(θ|y) ∝ g(θ) × |Σ_e|^{−0.5(T−M−1)} exp{−0.5 tr[Σ_e^{-1} (Y − Xα)'(Y − Xα)]} × |Σ_e(θ)|^{−0.5(T1−M−1)} exp{−0.5 tr[Σ_e(θ)^{-1} (Y^s − X^s α(θ))'(Y^s − X^s α(θ))]}. This conditional posterior is non-standard: a Metropolis-Hastings step is needed to sample from it.

- Use g(θ|y), g(α|θ, y, Σ_e), g(Σ_e|θ, y) in the Gibbs sampler to obtain a marginal for α.

- All posterior moments in (34) are conditional on λ. How do we select it? i) Use rules of thumb (e.g. λ = 1: T observations added). ii) Maximize the marginal likelihood.

Page 57

Example 8 (sticky price model) In a basic NK sticky price-sticky wage economy, set ς = 0.66, π_ss = 1.005, N_ss = 0.33, c/gdp = 0.8, β = 0.99, ζ_p = ζ_w = 0.75, a_0 = 0, a_1 = 0.5, a_2 = −1.0, a_3 = 0.1. Run a VAR with output, interest rates, money and inflation using actual quarterly data from 1973:1 to 1993:4 and data simulated from the model conditional on these parameters. Overall, only a modest amount of simulated data (roughly, 20 data points) should be used to set up a DSGE prior.

ML: sticky price-sticky wage model
λ = 0      λ = 0.1    λ = 0.25   λ = 0.5    λ = 1      λ = 2
-1228.08   -828.51    -693.49    -709.13    -913.51    -1424.61

Page 58

7 Bayesian FAVAR

- Often we have available several indicators for the same variable (e.g. CPI, PCE, GDP deflator, etc.).

- We also have many variables that could potentially enter a VAR. Which ones do we choose? A curse-of-dimensionality problem.

- A FAVAR is a good solution. Construct a few factors from a number of indicators. Run a VAR with observable variables and factors. Construct responses of variables not in the VAR using the factors.

• Can we run a Bayesian FAVAR?

Page 59

- No new issues for a naive treatment. Treat the factors as observable variables. Use conjugate priors for the factor-loading and VAR coefficients. Derive the posterior distribution for the coefficients of variables not in the VAR using the posterior dynamics of the factors.

- The treatment of coefficients of variables in the VAR and outside the VAR is asymmetric (the former have a prior, the latter do not).

- How do we set up a consistent prior for the FAVAR? Need to set up a proper unobservable factor model and use a Gibbs sampler (see the notes on state space models).

Page 60

8 Structural BVARs

So far we have set priors for reduced-form VAR parameters. Can we set priors directly for structural VARs?

B_0 y_t − B(ℓ) y_{t−1} = e_t,   e_t ~ (0, I)   (35)

y_t − A(ℓ) y_{t−1} = u_t,   u_t ~ (0, Σ)   (36)

B(ℓ) = B_1 ℓ + ... + B_q ℓ^q; B_0 nonsingular; A(ℓ) = B_0^{-1} B(ℓ); Σ = B_0^{-1} B_0^{-1'}.

- (35) is a structural system, while (36) is the corresponding reduced-form VAR.

- Why do we want a prior for (35)? We may have a-priori restrictions on the structural dynamics (e.g. output responses to a monetary shock have a hump shape).

- We may have a-priori restrictions on the structural impact effects of shocks (e.g. output responses to a monetary shock take time to materialize).

Page 61

What priors do we use for B_0 and B(ℓ)? How do we draw from their posterior?

• Standard approach (Canova (1991), Gordon and Leeper (1994)): use a Normal-inverted Wishart prior for the reduced-form coefficients (A(ℓ), Σ). This implies a Normal-inverted Wishart posterior. Draw (A(ℓ)^l, Σ^l) and use the identification restrictions to obtain the structural parameters, i.e. Σ^l = (B_0^{-1})^l (B_0^{-1'})^l, B_j^l = B_0^l A_j^l.

- The procedure is fine for just-identified systems. For overidentified systems it does not take into account the extra restrictions.

• Sims and Zha (1998) work directly with the structural model (valid for both just-identified and over-identified systems). Stacking the observations in (35):

Page 62

Y B_0 − X B_+ = E   (37)

where Y is T × M, X is a T × k matrix of lagged variables, and E is a T × M matrix. Setting Z = [Y, −X], B = [B_0, B_+]', the likelihood is:

L(B|y) ∝ |B_0|^T exp{−0.5 tr[(ZB)'(ZB)]} ∝ |B_0|^T exp{−0.5 b'(I ⊗ Z'Z) b}   (38)

where b = vec(B) is an M(k+M) × 1 vector; b_0 = vec(B_0) is an M² × 1 vector; b_+ = vec(B_+) is an Mk × 1 vector; I is an M × M identity matrix.

Priors:
- g(b) = g(b_0) g(b_+|b_0), where g(b_0) may have singularities (due to zero identification restrictions).
- g(b_+|b_0) ~ N(h(b_0), Φ(b_0)).

• Make the prior on the dynamics conditional on the prior for the contemporaneous effects.

Page 63

Posterior kernel:

g(b|y) ∝ g(b_0) |B_0|^T |Φ(b_0)|^{−0.5} exp{−0.5 b'(I ⊗ Z'Z) b} exp{−0.5 (b_+ − h(b_0))' Φ(b_0)^{-1} (b_+ − h(b_0))}   (39)

- Since b'(I ⊗ Z'Z)b = b_0'(I ⊗ Y'Y)b_0 + b_+'(I ⊗ X'X)b_+ − 2 b_+'(I ⊗ X'Y)b_0, conditional on b_0 the quantity in the exponent is quadratic in b_+, thus

- g(b_+|b_0, y) ~ N(b̃_0, Φ̃(b_0)^{-1}), where b̃_0 = ((I ⊗ X'X) + Φ(b_0)^{-1})^{-1} ((I ⊗ X'Y)b_0 + Φ(b_0)^{-1} h(b_0)), Φ̃(b_0) = (I ⊗ X'X) + Φ(b_0)^{-1}.

- g(b_0|y) ∝ g(b_0) |B_0|^T |(I ⊗ X'X)Φ(b_0) + I|^{−0.5} exp{−0.5 [b_0'(I ⊗ Y'Y)b_0 + h(b_0)'Φ(b_0)^{-1}h(b_0) − b̃_0'Φ̃(b_0)b̃_0]}

- g(b_0|y) has unknown format!! In addition, dim(b_+) = M(Mq + 1), so the calculation of g(b_+|b_0, y) is complicated.
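Conditional on b0, the posterior of b+ is a normal whose precision combines (I ⊗ X'X) with Φ(b0)⁻¹. A sketch with random data and arbitrary (hypothetical) choices for b0, h(b0) and Φ(b0):

```python
import numpy as np

rng = np.random.default_rng(1)
T, M, k = 60, 2, 3
Y, X = rng.normal(size=(T, M)), rng.normal(size=(T, k))

b0 = np.array([1.0, 0.5, 0.0, 1.0])       # vec(B0), M*M x 1 (e.g. a triangular B0)
h0 = np.zeros(M * k)                      # conditional prior mean h(b0), here zero
Phi = 0.5 * np.eye(M * k)                 # conditional prior covariance Phi(b0)

# g(b+ | b0, y): normal with precision (I kron X'X) + Phi^{-1}
Phi_inv = np.linalg.inv(Phi)
prec = np.kron(np.eye(M), X.T @ X) + Phi_inv
mean = np.linalg.solve(prec, np.kron(np.eye(M), X.T @ Y) @ b0 + Phi_inv @ h0)
cov = np.linalg.inv(prec)                 # posterior variance of b+
draw = rng.multivariate_normal(mean, cov) # one posterior draw of b+
```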

Page 64

To simplify the computations: choose Φ(b_0) = Φ_1 ⊗ Φ_2 and restrict Φ_1 = φ × I.

Then even if φ²_i ≠ φ²_j, independence across equations is guaranteed since (I ⊗ X'X) + Φ(b_0)^{-1} = diag{φ²_1 I + X'X, ..., φ²_M I + X'X}, a block-diagonal matrix. This means that we can proceed to estimate the equations one by one, without worrying about simultaneity.

In general, had we started from the reduced-form VAR, then Φ̃(b_0) = (Σ_e ⊗ X'X) + Φ(b_0)^{-1} (correlation across equations).

Page 65

• Structural Minnesota priors.

Given B_0, let y_t = A(ℓ) y_{t−1} + C + e_t. Let α = vec[A_1, ..., A_q, C]. Since α = vec[B_+ B_0^{-1}], E(α) = [I_M, 0, ..., 0] and var(α) = Σ_b imply

E(B_+|B_0) = [B_0, 0, ..., 0]

var(B_+|B_0) = diag(σ_+(i,j,ℓ)) = φ_0 φ_1 h(ℓ)/σ_j   i, j = 1, ..., M; ℓ = 1, ..., q
             = φ_0 φ_2   otherwise   (40)

where i indexes the equation, j the variable, and ℓ the lag.

Page 66

(i) No distinction between own and other coefficients (in a simultaneous equation system there is no normalization with respect to one RHS variable).

(ii) The scale factors differ from the reduced-form Minnesota prior since var(v_t) = I.

(iii) The prior for the constant is independently parametrized.

(iv) Because α = vec[B_+ B_0^{-1}], there is a-priori correlation in the coefficients across equations (since they depend on the beliefs about B_0). For example, if φ²_i = φ² for all i, g(α|B_0) is normal with covariance matrix Σ_e ⊗ φ² (see Kadiyala and Karlsson (1997)).

Page 67

• Additional restrictions for a structural system:

- The average value of lagged y_i's (say ȳ_{i0}) is a good predictor of y_{it} for each equation. Then Y_d B_0 − X_d B_+ = V, where Y_d = {y_{ij}} = φ_3 ȳ_{i0} if i = j and zero otherwise, i, j = 1, ..., M; X_d = {x_{is}} = φ_3 ȳ_{i0} if i = j, s < k, and zero otherwise, i = 1, ..., M, s = 1, ..., k.

Note that as φ_3 → ∞, this restriction implies a model in first differences.

- Initial dummy restriction: suppose Y_dc B_0 − X_dc B_+ = E, where Y_dc = {y_j} = φ_4 ȳ_{0j}, j = 1, ..., M; X_dc = {x_s} = φ_4 ȳ_{0j} if s ≤ k − 1 and = φ_4 if s = k.

If φ_4 → ∞, the dummy observation becomes [I − A(1)] ȳ_0 + B_0^{-1} C = 0. If C ≠ 0, this implies cointegration.
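In practice these restrictions are implemented by appending artificial observations to the sample. A sketch of sum-of-coefficients (φ3) and initial/co-persistence (φ4) dummies for an M-variable VAR(q) whose last regressor is the constant; the exact column layout is an assumption:

```python
import numpy as np

def soc_dummies(ybar0, phi3, q):
    """Sum-of-coefficients dummies: one row per variable, presample mean ybar0
    scaled by phi3. As phi3 grows they push toward a first-difference model."""
    M = len(ybar0)
    Yd = phi3 * np.diag(ybar0)
    Xd = np.hstack([np.tile(Yd, (1, q)), np.zeros((M, 1))])  # q lag blocks + constant
    return Yd, Xd

def copersistence_dummy(ybar0, phi4, q):
    """Single co-persistence row: as phi4 grows it pushes toward
    [I - A(1)] ybar0 + const = 0, i.e. cointegration when the constant is nonzero."""
    Yd = phi4 * ybar0[None, :]
    Xd = np.hstack([np.tile(Yd, (1, q)), np.full((1, 1), phi4)])
    return Yd, Xd

Yd, Xd = soc_dummies(np.array([1.0, 2.0]), phi3=5.0, q=2)
```

The appended rows then enter the cross-products exactly like real observations, which is how the tightness parameters φ3 and φ4 control the weight of the restrictions.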

Page 68

• How do we choose g(b_0)?

b_0 contains the contemporaneous structural parameters. We need to distinguish between soft and hard restrictions.

- Hard restrictions give identification (possibly of blocks of equations).

- Select the prior for the non-zero coefficients to be non-informative, i.e. if b_0^n are the non-zero elements of b_0, g(b_0^n) ∝ 1, or normal.

Example 9 Suppose there are M(M−1)/2 restrictions, e.g. B_0 is upper triangular. One prior could set g(b̄_0) to be independent normal with zero mean, so E(b̄_0(ij) b̄_0(kh)) = 0: no relationship across equations. The variance σ²(b̄_0(ij)) = (φ_5 σ_i)², i.e. all the elements of equation i have the same variance.

Page 69

An alternative would be to use a Wishart prior for Σ_e^{-1}, i.e. g(Σ_e^{-1}) ~ W(ν̄, S̄), where ν̄ are the degrees of freedom and S̄ the scale. If ν̄ = M+1 and S̄ = diag{(φ_5 σ_i)²}, then the prior for b̄_0 is the same as before except for the Jacobian |∂Σ_e^{-1}/∂B_0| = 2^M ∏_{j=1}^M b_{jj}^j. Since the likelihood contains the term |B_0|^T = ∏_{j=1}^M b_{jj}^T, ignoring the Jacobian makes no difference if T >> M.

How do we draw samples from g(b_0|y)? Need MC techniques:

Algorithm 8.1
[1.] Calculate the mode of g(b_0|y) and the Hessian at the mode.

[2.] Draw b_0 from a normal centered at the mode with covariance equal to the inverse Hessian at the mode, or from a t-distribution with the same mean and covariance and ν̄ = M + 1 degrees of freedom.

Page 70

[3.] Use importance sampling to weight the draws (use the ratio IR^l = g(b_0^l|y)/I(b_0^l), where I(·) is the importance density), and check the magnitude of IR^l over l = 1, ..., L.
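The importance-sampling step can be illustrated in one dimension: draw from a normal centered at the mode, weight each draw by the ratio of the target kernel to the proposal density, and check the spread of the weights. The target below is a toy heavy-tailed kernel standing in for g(b0|y):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):               # stand-in for log g(b0 | y); a t-like heavy tail
    return -2.0 * np.log1p(x ** 2)

mode, scale = 0.0, 1.0           # posterior mode and Hessian-based scale, assumed known
L = 5000
draws = rng.normal(mode, scale, size=L)

# Importance ratios IR_l = target(b0^l) / proposal(b0^l), computed in logs
log_prop = -0.5 * ((draws - mode) / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi))
log_w = log_target(draws) - log_prop
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * draws)    # weighted posterior moment
ess = 1.0 / np.sum(w ** 2)       # effective sample size: summarizes the weight spread
```

A low effective sample size (a few dominant weights) signals that the normal proposal is a poor match for the posterior, which is exactly the diagnostic the algorithm asks for.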

Alternative: an MH algorithm with a normal or a t-distribution as the candidate, or a restricted Gibbs sampler (Waggoner-Zha (2003)).

What if the identification restrictions are of non-contemporaneous form? Same idea. Long-run restrictions imply a special form of B_0. Sign restrictions: Σ_e = PDP' = P̃P̃' and B_0 = P̃^{-1}.

Page 71

Extensions:

- VARs with exogenous variables: e.g. oil prices in a VAR for domestic variables.

- Partial VARs (different lag lengths) (a special case of 1).

- VARs with block-exogenous variables and overidentifying restrictions in some block, e.g. a two-country VAR model where one country is block exogenous.

General structure for the last case:

B_i(ℓ) y_t = v_{it},   i = 1, ..., n   (41)

Page 72

n is the number of blocks, M = ∑_i M_i, and M_i is the number of equations in block i. v_{it} is an M_i × 1 vector for each i and B_i(ℓ) = (B_{i1}(ℓ), ..., B_{in}(ℓ)). Let B_0 = diag{B_{ii}(0)} and rewrite (41) as

B_0^{-1} B_i(ℓ) y_t = B_0^{-1} v_{it}

or

y_{it} = A_i(ℓ) y_{t−1} + e_{it}   (42)

where A_i(ℓ) = (0_−, I_i, 0_+) − B_{ii}(0)^{-1} B_i(ℓ); 0_− is an M_i × M_{i−} matrix of zeros, 0_+ is an M_i × M_{i+} matrix of zeros, where M_{i−} = 0 for i = 1 and M_{i−} = ∑_{j<i} M_j for i = 2, ..., n; M_{i+} = 0 for i = n and M_{i+} = ∑_{j=i+1}^n M_j for i = 1, ..., n−1; and E(e_t e_t') = diag{Σ_{ii}} = diag{B_{ii}(0)^{-1} B_{ii}(0)^{-1'}}.

Stack the observations to obtain

y_i = z_i b_i + E_i   (43)

Page 73

where y_i and E_i are T × M_i matrices, X_i is a T × k matrix and k is the number of coefficients in each block, b_i = vec(A_i), z_i = (I ⊗ X_i). The likelihood is

L(b_i | y_{−q}, ..., y_0, y_1, ..., y_T) ∝ ∏_{i=1}^n |B_{ii}(0)|^T exp{−0.5 tr[S_i(b_i) B_{ii}(0)' B_{ii}(0)]}
∝ ∏_{i=1}^n |B_{ii}(0)|^T exp{−0.5 tr[S_i(b̂_i) B_{ii}(0)' B_{ii}(0) + (b_i − b̂_i)' x_i' x_i (b_i − b̂_i) B_{ii}(0)' B_{ii}(0)]}   (44)

where b̂_i = (z_i' z_i)^{-1} (z_i' y_i) and S_i(b̂_i) = (y_i − z_i b̂_i)'(y_i − z_i b̂_i).

Suppose g(B_{ii}(0), b_i) ∝ |B_{ii}(0)|^k. The posteriors are

g(B_{ii}(0)|y) ∝ |B_{ii}(0)|^T exp{−0.5 tr[S_i(b̂_i) B_{ii}(0)' B_{ii}(0)]}   (45)

g(b_i|B_{ii}(0), y) ~ N(b̂_i, (B_{ii}(0)' B_{ii}(0))^{-1} ⊗ (X_i' X_i)^{-1})   (46)

Page 74

where b̂_i = vec(B̂_i). To draw posterior sequences for (b_i, B_{ii}(0)) we need to distinguish whether the system is just- or over-identified.

i) If just-identified (e.g. B_{ii}(0) is the Choleski factor of Σ_{ii}), there is a one-to-one mapping between the prior and the posterior of B_{ii}(0) and of Σ_{ii}. So we can draw from (45), or from the posterior of Σ_{ii}, and use the identification restrictions to get a draw for B_{ii}(0).

ii) If B_{ii}(0) is overidentified, we need to draw B_{ii}(0)^l, l = 1, ..., J from the marginal posterior directly and use, e.g.

Page 75

Algorithm 8.2
[1.] Draw B_{ii}(0)^l, l = 1, ..., J from N(B*_{ii}(0), H(B*_{ii})), where B*_{ii}(0) is the mode of the posterior and H(B*_{ii}) the Hessian at the mode.

[2.] Draw b_i^l from g(b_i | B_{ii}(0)^l, y) and calculate B_i(ℓ)^l = B_{ii}(0)^l [(0_−, I_i, 0_+) − A_i(ℓ)^l].

[3.] Calculate any function h(B_i(ℓ)^l), weighting the B_i(ℓ)^l with the importance ratio IR^l = g(B_i(ℓ)^l)/N(B_i(ℓ)^l).

Page 76

An importance sampling algorithm can be inefficient when the degree of simultaneity is high and the number of degrees of freedom is small, since the posterior of the parameters can be far from a normal (or even a t-distribution).

- Alternative 1: use the Gibbs sampler of Waggoner and Zha (2003).

- Alternative 2: use a Metropolis algorithm to draw B_{ii}(0)^l, l = 1, ..., J. That is, rather than step [1.] of the previous algorithm, use the following:

[1'.] Draw a candidate B_{ii}(0)^† = B_{ii}(0)^{l−1} + u_{ii}, where var(u_{ii}) ∝ H(B*_{ii}), and accept it if

g(B_{ii}(0)^†|y) / g(B_{ii}(0)^{l−1}|y) > U(0,1) random variable.

Otherwise set B_{ii}(0)^l = B_{ii}(0)^{l−1}.
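Step [1'.] is a standard random-walk Metropolis update; a toy sketch with a two-dimensional normal kernel standing in for g(B_ii(0)|y):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(b):                 # stand-in for log g(B_ii(0) | y); toy N(1, I) kernel
    return -0.5 * np.sum((b - 1.0) ** 2)

step = 0.5                       # proposal scale, Hessian-based in practice
b, chain, accepted = np.zeros(2), [], 0
for _ in range(8000):
    cand = b + step * rng.normal(size=b.shape)     # B^dagger = B^{l-1} + u
    if np.log(rng.uniform()) < log_post(cand) - log_post(b):
        b, accepted = cand, accepted + 1           # accept the candidate
    chain.append(b.copy())                         # otherwise keep B^{l-1}
post_mean = np.mean(chain[2000:], axis=0)          # discard burn-in draws
```

Comparing U(0,1) with the posterior ratio, as in the notes, is exactly the Metropolis acceptance rule for a symmetric random-walk candidate.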

Page 77

Example 10 (Transmission of monetary shocks) Use US data, 1960:1 to 2003:1, on log IP, log CPI, the Fed Funds rate and log M2.

Overidentify the system: the central bank only looks at money when manipulating the nominal rate, i.e. the contemporaneous impact matrix is of Choleski form except that the (3,1), (3,2) elements are zero.

g(b̄_i) ~ N(0, 1). Use importance sampling: draw from a normal centered at the mode and with dispersion equal to the Hessian at the mode.

Importance ratio: in 17 out of 1000 draws the weight is large.

Page 78

[Figure: impulse responses to a monetary shock; panels for Real GDP, Price level, M2 and the Federal funds rate, horizon of 20 quarters.]

Page 79

• Both output and money persistently decline in response to an increase in interest rates. The response of prices is initially close to zero but turns positive and significant after about 5 months (price puzzle?).

• Monetary shocks explain 4-18% of var(Y) at the 48-month horizon and 0-7% of var(P).

Page 80

9 Bayesian panel data analysis

9.1 Bayesian pooling

- Often in cross-country studies we have only a few data points for a moderate number of countries.

- If dynamic heterogeneities are suspected, exact pooling of cross-sectional information leads to biases and inconsistencies.

- Is there any way to do partial cross-sectional pooling to improve over single-unit estimators?

- How do we compute "average" effects in dynamic models which are heterogeneous in the cross section?

Page 81

• A simple univariate model to fix ideas:

y_{it} = X_{it} β_i + e_{it},   e_{it} ~ iid (0, σ² I)   (47)

where X_{it} = [1, y_{it−1}, ..., y_{it−p}], β_i = [a_{i0}, A_{i1}, A_{i2}, ..., A_{ip}]. Assume that T is short. Suppose

β_i = β̄ + v_i

where v_i ~ (0, Σ_v).

• Coefficients of the dynamic model are drawn from the same distribution (they are different realizations of the same process).

• Σ_v controls the degree of dispersion: Σ_v = 0, coefficients equal; Σ_v → ∞, no relationship between the coefficients.

- Two interpretations of (9.1): i) an uncertain linear restriction (classical approach); ii) a prior shrinking the coefficients of units i and j toward a common mean.

Page 82

• Bayesian random coefficient estimator.

If e_i and v_i are normal and β̄, Σ_v are known, g(β_i|y) is normal with mean

( (1/σ²_i) x_i'x_i + Σ_v^{-1} )^{-1} ( (1/σ²_i) x_i'x_i β_{i,ols} + Σ_v^{-1} β̄ )

where β_{i,ols} is the OLS estimator of β_i, and variance

( (1/σ²_i) x_i'x_i + Σ_v^{-1} )^{-1}

• A weighted mean of prior and sample information, with weights given by the relative precision of the two types of information!!

Page 83

- Use σ²_{i,ols} in the formulas.

- If Σ_v is large, β̃_i → β_{i,ols}.

• β̃ = (1/n) ∑_{i=1}^n β̃_i: GLS applied to the uncertain linear model using Theil's mixed estimator.

Page 84

- If β̄, σ²_i, Σ_v are unknown, we need a prior for these parameters. No analytical solution for the posterior mean of β_i exists.

- Approximate posterior modal estimates (see Smith (1973)):

β̄* = (1/n) ∑_{i=1}^n β*_i   (48)

(σ*_i)² = [1/(T + 2)] (y_i − x_i β*_i)'(y_i − x_i β*_i)   (49)

Σ*_v = [1/(n − dim(β) − 1)] [ ∑_i (β*_i − β̄*)(β*_i − β̄*)' + D ]   (50)

where the "*" quantities are modal estimates from a training sample and D = diag[0.001].

- Plug these estimates into the posterior mean/variance formulas. This understates uncertainty (parameters are treated as fixed when they are random).

- A two-step estimator.

Page 85

- Alternative estimator (see Rao (1975)) using a training sample:

β̄_EB = (1/n) ∑_{i=1}^n β_{i,ols}   (51)

σ²_{i,EB} = [1/(T − dim(β))] (y_i'y_i − y_i'x_i β_{i,ols})   (52)

Σ_{v,EB} = [1/(n−1)] ∑_{i=1}^n (β_{i,ols} − β̄_EB)(β_{i,ols} − β̄_EB)' − (1/n) ∑_{i=1}^n (x_i'x_i)^{-1} σ²_{i,ols}   (53)

• The two estimators of β̄ are similar, but the first averages posterior modes while the second averages OLS estimates.

The procedure can be used to partially pool subsets of the cross-sectional units: assume (9.1) within each subset but not across subsets.
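The moment estimators (51)-(53) only require unit-by-unit OLS output; a sketch on a simulated heterogeneous panel (all numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, T, k = 20, 40, 2                        # units, sample length, regressors

# Simulate a heterogeneous panel: beta_i = beta_bar + v_i
beta_bar_true, Sv_true = np.array([0.6, 0.2]), 0.05 * np.eye(k)
b_ols, s2, xx_inv = [], [], []
for _ in range(n):
    beta_i = beta_bar_true + rng.multivariate_normal(np.zeros(k), Sv_true)
    x = rng.normal(size=(T, k))
    y = x @ beta_i + 0.3 * rng.normal(size=T)
    bi = np.linalg.solve(x.T @ x, x.T @ y)
    b_ols.append(bi)
    s2.append(((y - x @ bi) @ (y - x @ bi)) / (T - k))   # eq. (52)
    xx_inv.append(np.linalg.inv(x.T @ x))
b_ols = np.array(b_ols)

beta_EB = b_ols.mean(axis=0)                             # eq. (51)
dev = b_ols - beta_EB                                    # eq. (53): cross-sectional
Sigma_v_EB = (dev.T @ dev) / (n - 1) \
    - np.mean([s * X for s, X in zip(s2, xx_inv)], axis=0)
```

The second term in (53) removes the part of the dispersion of the OLS estimates that is due to estimation noise rather than to genuine cross-sectional heterogeneity.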

Page 86

9.2 Univariate dynamic panels

y_{it} = ρ_i + B_{1i}(ℓ) y_{it−1} + B_{2i}(ℓ) x_t + e_{it},   e_{it} ~ (0, σ²_i)   (54)

B_{ji}(ℓ) = B_{ji1} ℓ + B_{ji2} ℓ² + ... + B_{jiq_j} ℓ^{q_j}; B_{jil} is a scalar; ρ_i is the unit-specific fixed effect; x_t are exogenous variables, common to all units. Assume E(e_{it} e_{jτ}) = 0 for all i ≠ j and all t, τ.

- Interesting quantities that can be computed: b_{i1}(1) = (1 − B_{1i}(1))^{-1}, b_{i2}(1) = (1 − B_{1i}(1))^{-1} B_{2i}(1) (long-run effects); b_{i1}(ℓ), b_{i2}(ℓ) (impulse responses).

Page 87

- Stack the T observations to create y_i, x, e_i, 1. Let X_i = (y_i, x, 1), X = diag{X_i}, β = [B_1, ..., B_N]', B_i = (ρ_i, B_{1i1}, ..., B_{1iq_1}, B_{2i1}, ..., B_{2iq_2}), Σ_i = σ²_i I_T, Σ = diag{Σ_i}; then:

y = Xβ + e,   e ~ (0, Σ)   (55)

with y = (y_1', ..., y_N')', e = (e_1', ..., e_N')'.

- Comparing (55) with (4), one can see that the dynamic panel has the same structure as a VAR, but the X_i are unit specific and the covariance matrix has a (block) heteroskedastic structure.

- The likelihood of (55) is still the product of a normal for β, conditional on Σ, and N inverted gammas for the σ²_i. Note that since var(e) is diagonal, ML = OLS equation by equation.

Page 88

What kind of priors could be used?

- Semi-conjugate prior: g(β) ~ N(β̄, Σ̄_b) and g(σ²_i) ~ IG(0.5 a_1, 0.5 a_2).

- Exchangeable prior: g(β) = ∏_i g_i(β_i), β_i ~ N(β̄, Σ_b), where Σ_b measures a-priori heterogeneity. With exchangeability, β̃ can be computed equation by equation.

- Exchangeable prior on the differences (Canova and Marcet (1998)): β_i − β_j ~ N(0, Σ_b). Σ_b has a special structure.

- Depending on the choice of prior, the posterior will reflect prior and sample, or prior and pooled, information (see Zellner and Hong, 1989).

Page 89

Example 11 (Growth and convergence)

Y_{it} = ρ_i + B_i Y_{it−1} + e_{it},   e_{it} ~ N(0, σ²_i)   (56)

where Y_{it} = log(y_{it}/y_t) and y_t is the average EU GDP.

Let β_i = (ρ_i, B_i) = β̄ + v_i, where v_i ~ N(0, σ²_b). Assume σ²_i is given, β̄ is known (if not, get it from a pooled regression) and treat σ²_b as fixed.

Let ψ_{i,j} = σ²_i / σ²_{b,jj}, j = 1, 2, be the relative importance of prior and sample information. Choose a loose prior (ψ_{i,j} = 0.5).

Use income per capita for 144 EU regions from 1980 to 1996 to construct SS_i = ρ̃_i (1 − B̃_i^T)/(1 − B̃_i) + B̃_i^{T+1} z_{i0}, where ρ̃_i, B̃_i are posterior means, and CV_i = 1 − B̃_i (the convergence rate).

Page 90

- The mode of the CV_i distribution is 0.09: fast catch-up. The highest 95% credible set is large (from 0.03 to 0.45).

- The distribution of SS has many modes (at least 2).

What can we say about the posterior of the cross-sectional mean SS?

Suppose g(SS_i) ~ N(μ, σ²). Assume g(μ) ∝ 1 and σ = 0.4.

- g(μ|y) combines the prior and the data, and the posterior g(SS_i|y) combines unit-specific and pooled information.

- μ̃ = −0.14 (highly left-skewed distribution); the variance is 0.083; the 95 percent credible interval is (−0.30, 0.02).

Page 91

[Figure: posterior densities of the convergence rate (left panel) and of the steady state (right panel).]

Page 92

9.3 Endogenous grouping

• Are there groups in the cross section? Convergence clubs; credit-constrained vs. non-credit-constrained consumers; large vs. small firms, etc. Classifications are typically exogenous (see e.g. Gertler and Gilchrist (1991)).

- We want an approach that simultaneously allows for endogenous cross-sectional grouping and Bayesian estimation of the parameters.

- Idea: if units i and j belong to a group, the coefficients β_i and β_j have the same distribution. If not, they have different distributions.

- Basic problem: what ordering of the cross section gives the grouping? There are ℘ = 1, 2, ..., N! orderings. How do you find groups?

Page 93

• Suppose ζ = 1, 2, ..., ζ̄ breaks, with ζ̄ given. For each of the ζ+1 groups let the model be:

y_{it} = ρ_i + B_{1i}(ℓ) y_{it−1} + A_{2i}(ℓ) x_{t−1} + e_{it}   (57)

β_i^j = β̄^j + v_j   (58)

where i = 1, ..., n^j(℘); n^j(℘) is the number of units in group j given the ℘-th ordering, ∑_j n^j(℘) = N for each ℘; e_{it} ~ (0, σ²_{ei}), v_j ~ (0, Σ̄_j); β_i = [ρ_i, B_{1i1}, ..., B_{1iq_1}, B_{2i1}, ..., B_{2iq_2}]. Let h^j(℘) be the location of the break for group j = 1, ..., ζ+1.

An alternative to (58): ζ̄ = 0 and an exchangeable structure for all i, i.e.

β_i = β̄ + v_i,   i = 1, ..., N,   v_i ~ N(0, Σ̄_i)   (59)

Page 94

We want to evaluate (57)-(58) against (57)-(59) and estimate (β, σ_e) jointly with the optimal (℘, ζ, h^j(℘)) (ordering, number of breaks, location of breaks).

- Given an ordering ℘, the number of breaks ζ, and the location of the break points h^j(℘), rewrite (57)-(58) as:

Y = Xβ + E,   E ~ (0, Σ_E)   (60)

β = A β̄_0 + V,   V ~ (0, Σ_V)   (61)

where A is a selector matrix assigning each unit to its group mean, Σ_E is (NTM) × (NTM) and Σ_V = diag{Σ̄_i} is (Nk) × (Nk).

- Specify priors for (β̄_0, Σ_E, Σ_V). Construct posterior estimates for (β, Σ_E), (β̄_0, Σ_V) jointly with posterior estimates of (℘, ζ, h^j(℘)). The problem is complicated!

- Split the problem into three steps. Use empirical Bayes techniques to construct posterior estimates of β, conditional on the optimal (℘, ζ, h^j(℘)) and estimates of (β̄_0, Σ_V, Σ_E).


• Step 1: How do you compute (ω, τ, h_j(ω)) optimally?

a) Given (ᾱ_0, Σ_V, Σ_E) and an ordering ω, examine how many groups are present (select τ).

b) Given ω and τ, check for the location of the break points (select h_j(ω)).

c) Iterate on the first two steps, altering ω.

Conclusion: the selected submodel maximizes the predictive density over orderings ω, groups τ + 1 and break points h_j(ω).

Let f(Y|H_0) be the predictive density under cross-sectional homogeneity.


Let f(Y|H_τ, h_j(ω), ω) = ∏_{j=1}^{τ+1} f(Y^j|H_τ, h_j(ω), ω), where f(Y^j|H_τ, h_j(ω), ω) is the predictive density for group j, with τ break points at locations h_j(ω), using ordering ω.

Define:

- I_τ: the set of possible break points when there are τ breaks;

- J: the set of possible orderings of the cross section;

- π_jh(ω): the (diffuse) prior probability of a break at location h for group j of ordering ω.

• f*(Y|H_τ, ω) ≡ sup_{h_j(ω) ∈ I_τ} f(Y|H_τ, h_j(ω), ω) (max with respect to break points)

• f†(Y|H_τ) ≡ sup_{ω ∈ J} f*(Y|H_τ, ω) (max with respect to break points and orderings)

• f^0(Y|H_τ, ω) ≡ Σ_{h_j(ω) ∈ I_τ} π_jh(ω) f(Y|H_τ, h_j(ω), ω) (average over break points)


To test for breaks, set τ̄ << (N/2)^0.5.

1) Given ω, H(0): no breaks vs. H(1): τ breaks:

PO(ω) = [π_0 f^0(Y|H_0)] / [Σ_τ π_τ f^0(Y|H_τ, ω)]   (62)

where π_0 (π_τ) is the prior probability that there are 0 (τ) breaks.

2) Given ω, H(0): τ − 1 breaks vs. H(1): τ breaks:

PO(ω, τ−1) = [π_{τ−1} f^0(Y|H_{τ−1}, ω)] / [π_τ f^0(Y|H_τ, ω)]   (63)

Given the selected τ, assign units to groups j, i.e. find f*(Y|H_τ*, ω). Alter ω to get f†(Y|H_τ*).
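The sequential test in (63) can be sketched as a loop over τ with one simple stopping rule: stop the first time the odds favour the smaller number of breaks. The f^0 values and the flat prior below are invented for illustration.

```python
# Hypothetical averaged predictive densities f0(Y | H_tau, omega) for tau = 0..4,
# and equal prior probabilities pi_tau over the number of breaks.
f0 = {0: 0.05, 1: 0.80, 2: 0.75, 3: 0.70, 4: 0.60}
prior = {tau: 0.2 for tau in f0}

def po(tau):
    """Posterior odds of tau-1 breaks vs tau breaks, as in eq. (63)."""
    return (prior[tau - 1] * f0[tau - 1]) / (prior[tau] * f0[tau])

selected = 0
for tau in range(1, 5):
    if po(tau) >= 1.0:   # odds favour tau-1 breaks: stop
        break
    selected = tau       # odds favour tau breaks: keep testing

print(selected, [round(po(t), 3) for t in range(1, 5)])
```

With these numbers the odds strongly favour one break over none, then favour stopping, so `selected` ends at 1 (two groups).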


Questions:

i) Can we proceed sequentially to test for (cross-sectional) breaks? Bai (1997): yes, the procedure is consistent. But the estimated break point is consistent for any of the existing break points; which location is found depends on the "strength" of the break.

ii) How do we maximize the predictive density over ω when N is large? Do we need N! permutations? No, many fewer; in addition, economic theory can suggest interesting orderings.


• Step 2: Given (ω, τ, h_j(ω)), estimate [ᾱ_0', vech(Σ_V)', vech(Σ_e)']' using f† on a training sample.

If the e's are normally distributed, then

ᾱ_0^j = (1/n_j(ω)) Σ_{i=1}^{n_j(ω)} α_i,ols

Σ_j = (1/(n_j(ω) − 1)) Σ_{i=1}^{n_j(ω)} (α_i,ols − ᾱ_0^j)(α_i,ols − ᾱ_0^j)' − (1/n_j(ω)) Σ_{i=1}^{n_j(ω)} (x_i'x_i)^{-1} σ_i²

σ_i² = (1/(T − k)) (y_i'y_i − y_i'x_i α_i,ols)   (64)

for j = 1, …, τ + 1, where x_i are the regressors and y_i the dependent variables for unit i of group j, and α_i,ols = (x_i'x_i)^{-1}(x_i'y_i) is the OLS estimator for unit i (in group j).
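The moment conditions in (64) for a single group can be sketched as follows: run OLS unit by unit, average the estimates, and correct the between-unit covariance for estimation noise. The data-generating numbers are invented, so the output is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, T, k = 30, 120, 2   # units in group j, sample size, regressors per unit

# Simulate unit-level regressions y_i = x_i a_i + e_i with a_i ~ N(a_bar, V).
a_bar = np.array([0.5, -0.2])
V = np.diag([0.05, 0.05])
a_ols, noise_terms = [], []
for _ in range(n_units):
    a_i = rng.multivariate_normal(a_bar, V)
    x = rng.standard_normal((T, k))
    y = x @ a_i + 0.5 * rng.standard_normal(T)
    xtx_inv = np.linalg.inv(x.T @ x)
    a_hat = xtx_inv @ x.T @ y
    resid = y - x @ a_hat
    s2 = resid @ resid / (T - k)          # sigma_i^2, third line of (64)
    a_ols.append(a_hat)
    noise_terms.append(xtx_inv * s2)      # (x_i'x_i)^{-1} sigma_i^2

a_ols = np.array(a_ols)
alpha0_j = a_ols.mean(axis=0)             # group mean, first line of (64)
dev = a_ols - alpha0_j
# Between-unit covariance net of average estimation noise, second line of (64).
Sigma_j = dev.T @ dev / (n_units - 1) - np.mean(noise_terms, axis=0)

print(alpha0_j, np.diag(Sigma_j))
```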


• Step 3: Construct posterior estimates of α conditional on all the other parameters.

- EB posterior point estimate: α̃ = (X'Σ_E^{-1}X + Σ_V^{-1})^{-1}(X'Σ_E^{-1}Y + Σ_V^{-1}Aᾱ_0).

- Alternatively, estimate prior and posterior jointly if the e's and the v's are normal and the prior on the hyperparameters is diffuse (see Smith 1973).
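In a small stacked system the Step 3 formula is one line of linear algebra. The sketch below uses invented dimensions, a known Σ_E = σ²I, A = I and a zero prior mean, so the estimate shrinks OLS toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 60, 3

# Stacked regression Y = X alpha + E with prior alpha = A alpha0_bar + V, as in (60)-(61).
X = rng.standard_normal((n, k))
alpha_true = np.array([1.0, -0.5, 0.25])
Y = X @ alpha_true + 0.3 * rng.standard_normal(n)

Sigma_E_inv = np.eye(n) / 0.09    # assume Sigma_E = 0.09 * I is known
Sigma_V_inv = np.eye(k) / 0.5     # prior precision for alpha
A = np.eye(k)
alpha0_bar = np.zeros(k)          # prior mean: shrink toward zero

# EB posterior point estimate from Step 3
P = X.T @ Sigma_E_inv @ X + Sigma_V_inv
alpha_tilde = np.linalg.solve(P, X.T @ Sigma_E_inv @ Y + Sigma_V_inv @ (A @ alpha0_bar))

print(alpha_tilde)
```

Here the data precision dominates the prior precision, so the estimate stays close to OLS; tightening Σ_V would pull it toward Aᾱ_0.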


Example 12 (Convergence clubs). The posterior in Example 11 is multimodal. Are there at least two convergence clubs? Where is the break point? How different are convergence rates across groups?

- Examine several orderings: they give more or less the same result. Best to order units by initial conditions of relative income per capita.

- Set τ̄ = 4 and sequentially examine τ against τ + 1 breaks starting from τ = 0. The three break tests give PO ratios of 0.06, 0.52 and 0.66 respectively. Evidence in favour of two groups.

- The figure reports the predictive density as a function of the break point (for τ = 1 and τ = 0). Units up to 23 (poor, Mediterranean and peripheral regions in the EU) belong to the first group; units 24 to 144 belong to the second.


The average CV of the two groups is 0.78 and 0.20 respectively: faster convergence to a below-average steady state in the first group. The posterior distributions of the steady states of the two groups are distinct.

[Figure: left panel, predictive density as a function of the break point, one break vs. no break; right panel, steady-state distributions for Club 1 and Club 2.]


9.4 Bayesian pooling for VARs

- We can maintain the same setup and the same ideas; the approach is unchanged.

y_it = (I ⊗ X_t)α_i + e_it   e_it ∼ iid (0, Σ_e)   (65)

where X_t = [1, y_i,t−1, …, y_i,t−p] and α_i = [a_i0, A_i1, A_i2, …, A_ip]. Suppose

α_i = ᾱ + v_i   v_i ∼ (0, Σ_v)   (66)

Case 1: ᾱ, Σ_v known. The posterior for α_i is normal, with mean and variance given by

α̃_i = ((1/σ_i²) x_i'x_i + Σ_v^{-1})^{-1} ((1/σ_i²) x_i'x_i α_i,ols + Σ_v^{-1} ᾱ)   (67)

Σ̃_i = ((1/σ_i²) x_i'x_i + Σ_v^{-1})^{-1}   (68)
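Formulas (67)-(68) shrink each unit's OLS estimate toward the cross-sectional mean ᾱ, with weights given by the relative precision of the data and of the prior. A scalar sketch with invented numbers shows the mechanics: with few observations the posterior mean is pulled noticeably toward ᾱ; with many it moves to the unit's own OLS estimate and the posterior variance collapses.

```python
import numpy as np

rng = np.random.default_rng(3)

def posterior(y, x, alpha_bar, Sigma_v, sigma2):
    """Posterior mean and variance for alpha_i, eqs. (67)-(68), scalar case."""
    xx = float(x @ x)
    a_ols = float(x @ y) / xx
    prec = xx / sigma2 + 1.0 / Sigma_v       # data precision + prior precision
    mean = (xx / sigma2 * a_ols + alpha_bar / Sigma_v) / prec
    return mean, 1.0 / prec

alpha_bar, Sigma_v, sigma2 = 0.5, 0.1, 1.0   # invented hyperparameters
x_small = rng.standard_normal(10)
x_large = rng.standard_normal(5000)
y_small = 0.9 * x_small + rng.standard_normal(10)      # true unit coefficient 0.9
y_large = 0.9 * x_large + rng.standard_normal(5000)

m_small, v_small = posterior(y_small, x_small, alpha_bar, Sigma_v, sigma2)
m_large, v_large = posterior(y_large, x_large, alpha_bar, Sigma_v, sigma2)

print(m_small, m_large, v_small, v_large)
```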


Case 2: ᾱ, Σ_v unknown fixed quantities, estimable on a training sample.

- (67)-(68) still apply with estimates of ᾱ, Σ_v in place of the true values.

Case 3: ᾱ, Σ_v unknown random quantities with a prior distribution.

- Use MCMC to derive the posterior marginals of the parameters.

- The cross-sectional prior can be used in addition to, or as an alternative to, a time-series prior. Both have shrinkage features.


- The same logic applies if one expects impulse responses (rather than VAR coefficients) to be similar across units. The model in this case is

y_it = Σ_j ψ_ij e_i,t−j   e_it ∼ iid (0, Σ_e)   (69)

ψ_i = ψ̄ + v_i   (70)

where v_i ∼ (0, Σ_v) and ψ_i = [ψ_i1, ψ_i2, …].

- The posterior distribution of the impulse responses will reflect unit-specific (sample) information and prior information. The weights depend on the relative precision of the two sources of information.

- Note that we treat Σ_e as fixed (a known or estimable quantity). If it is a random variable, we need a conjugate format to derive the posterior analytically; otherwise we need MCMC methods.