A Bayesian Analysis of Common Trends and Cycles on
Nonstationary Time Series Panel Data
By Wensheng Kang¹ and Marco Ferreira²
¹ Corresponding author. Department of Economics, Kent State University, New Philadelphia. E-mail: wkang3@kent.edu; Tel: 330-308-7414; Fax: 330-339-3321.
² Department of Statistics, University of Missouri, Columbia, MO 65211. E-mail: ferreiram@missouri.edu; Tel: 573-884-8568; Fax: 573-884-5524.
Abstract
This paper utilizes the Bayesian approach to examine latent common stochastic
trends and cycles of nonstationary time series panel data. We develop a Markov
Chain Monte Carlo (MCMC) algorithm that explores the high-dimensional posterior distribution of the panel data model. Numerical simulations show that the Bayesian approach based on this algorithm is effective both at estimating the elements of the regression coefficients and error variance matrix and at extracting the latent components. To illustrate the potential of our approach, we apply our method to three datasets: national expenditures, metropolitan housing prices, and dot-com daily stock prices. The empirical results show that the stronger the long-run growth, the higher the cyclical volatility.
KEY WORDS : Nonstationary Time Series; Trends; Cycles; Unobserved Components.
JEL CLASSIFICATION : C11, C32, E32
1. Introduction
The key to understanding the properties of an important class of nonstationary models
including integrated time-series is accurately identifying the components of latent dynamic
factors contained in the model. The influential empirical article by Nelson and Plosser
(1982) shows that most macroeconomic variables are nonstationary time series whose dynamic factors can be decomposed into trend and cyclical components. In this paper, we
develop a Markov Chain Monte Carlo (MCMC) algorithm for a Bayesian state-space model
to examine common trends and cycles contained in nonstationary time-series panel data.
Common features in groups of financial and economic time series, including common trends, cycles, serial correlation, seasonality, time-varying volatility, breaks, and nonlinearities, have been an important research topic (see the survey by Urga, 2007).
In a recent study, Chang et al. (2008) propose an approach that uses the Kalman filter to extract the common stochastic trend from multiple integrating time series, and show that the common latent dynamic factor is useful for simultaneously illustrating the fluctuations of a set of economic variables. For the growing body of research on nonstationary time-series panel data, Bayesian methods can improve estimation and forecasting performance for short series because they combine prior knowledge with the information in the data. Sims (1980) and Litterman (1986), among others, have
documented the effectiveness in dealing with small sample problems by using a Bayesian
Vector Auto-Regressive (BVAR) approach to modeling multivariate time series. Another potential disadvantage of the Kalman filter for a state-space model is that the statistical inference on the unobserved components is conditional on the maximum likelihood estimates of high-dimensional parameters, including the elements of the regression coefficients and error covariance matrix, in the sense that the state-variable inference takes these parameter estimates as if they were the true values. When the number of observed time points is small, this may grossly underestimate the uncertainty. The reader is referred to Kim and Nelson
(1999) for many concrete examples and intensive studies. The Bayesian analysis makes the
implementation straightforward by treating the state space model parameters as unknown
and the latent dynamic factors as missing data. Kim and Nelson (1998) show the benefits
of using a Bayesian Gibbs-sampling approach to study Stock and Watson’s (1991) applica-
tion of the state space model to extract a linear dynamic factor for four non-cointegrated
coincident indicators.
In the Bayesian context, Harvey et al. (2007) develop an MCMC algorithm to compute
posterior results on parameters and unobserved trends and cycles for nonstationary time
series. They focus on the study of smooth common cycles by modeling the higher-order
dynamics of the measurement equation in the state space setting, but pay less attention
to the integration of a panel dataset. Koop et al. (2008) have shown the importance of this issue by analyzing integrating panel data in the Bayesian context. To the best of our knowledge, no study has yet explicitly proposed a Bayesian approach that studies both the latent common stochastic trends and cyclical components simultaneously in integrating time-series panel data. In addition, the integration implies common trends embedded in the dataset, which lead to a reduced-rank error covariance matrix in the measurement equation of the state-space model. While the reduced rank of the error covariance matrix complicates frequentist statistical inference, the present paper shows that it poses no difficulty for Bayesian computation as long as the posterior distribution of the error covariances is available.
This paper focuses on a single latent common trend and cycle, since it conveys valu-
able information that is useful for developing parsimonious models and provides evidence
regarding the relevance of economic theories that imply such common features. For exam-
ple, the common trend describes the long-run growth, while the common cycle illustrates
the short-run comovement of the m parallel time series. Here, we assume that there are (m − g) linearly independent cointegration vectors for the multivariate nonstationary time-series panel data, yit, for i = 1, ..., m and t = 1, ..., T. Then the rank of the common trend is g = 1 if the state-space model contains a single latent common stochastic trend. We also assume a single latent common cycle, since it is often the case in economics that monetary policy systematically influences the economy. The restriction to a single latent common trend and cycle is purely for simplicity of exposition.
We demonstrate with the analysis of a simulated dataset that our proposed Bayesian
approach can effectively extract the latent common stochastic trend and cycle in integrating nonstationary time-series panel data. In addition, the practical value of our approach is
demonstrated by its application to three real examples in which the dimension of both the
regression parameters and latent sequences is relatively high. The first example features the
prototypical bivariate integrating system of national expenditures. The second example analyzes the integrating system of metropolitan housing prices. The third example studies daily stock prices for the internet industrial sector. The empirical results show that the stronger the long-run growth, the higher the cyclical volatility. When we distinguish the peak and trough turning points of the time series, the Bayesian implementation better illustrates the positive trend-cycle relationship.
The remainder of this paper is organized as follows. Section 2 presents the methodology
including the model, the choice of prior distribution, and the sampling scheme. Section 3
presents four numerical examples including the simulated dataset and three important real
datasets in economics. Section 4 summarizes the results of the study.
2. Methodology
2.1. The Model and Likelihood
Define the m×1 vector of observations Yt, where Yt = (y1t, ..., ymt)′. The class of multivariate
nonstationary time-series panel data, Yt, under consideration in this paper consists of trend,
θxt, cycle, φzt, and irregular, εt, components, given by
Yt = θxt + φzt + εt, εt ∼ NID(0, Σ), (1)
xt = xt−1 + νt, νt ∼ N(0, σν²), (2)
zt = ψzt−1 + ηt, ηt ∼ N(0, ση²), and |ψ| < 1, (3)
where t = 1, ..., T . The θ = (θ1, ..., θm)′ and φ = (φ1, ..., φm)′ are m-dimensional vectors of
regression coefficients. The NID(0, Σ) denotes that the error terms are serially independent
and normally distributed with zero mean vector and m×m positive semi-definite covariance
matrix, Σ. The xt and zt are scalar latent variables, and the parameter ψ, restricted inside the unit interval, maintains stationarity of the process zt. Note that the variances σν² and ση² are set equal to 1 for identifiability purposes in this paper (see Kim and Nelson 1999, Harvey et al. 2007, and Chang et al. 2008).
Models (1) to (3) state that the m-dimensional integrating panel data, yit, for i =
1, ..., m, and t = 1, ..., T , contains a single unobservable common stochastic trend, xt, defined
as a random walk, and a single unobservable common cyclical component, zt, defined as
a stationary process. The trend xt and cycle zt of this decomposition have important economic implications. As Nelson and Plosser (1982) indicate, the common stochastic trend
may help us in understanding the relationship between capital accumulation and long-run
economic growth, while the common cyclical component may help us in understanding the
relationship between the business cycles and monetary policies.
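As a concrete illustration, the data-generating process (1)-(3) can be simulated in a few lines. This is a minimal sketch, not the authors' code: the values of θ, φ, ψ, and Σ below are the illustrative true values used later in the simulation study of Section 3.1, and the unit state variances reflect the identifiability restriction noted above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 2, 100
theta = np.array([1.21, 1.22])  # trend loadings
phi = np.array([1.51, 1.52])    # cycle loadings
psi = 0.2                       # AR(1) coefficient of the cycle, |psi| < 1
Sigma = np.diag([5.2, 7.1])     # irregular covariance

x = np.zeros(T)  # common stochastic trend: random walk, Eq. (2)
z = np.zeros(T)  # common cycle: stationary AR(1), Eq. (3)
for t in range(1, T):
    x[t] = x[t - 1] + rng.normal()        # sigma_nu^2 = 1 (identification)
    z[t] = psi * z[t - 1] + rng.normal()  # sigma_eta^2 = 1

# Measurement equation (1): Y_t = theta x_t + phi z_t + eps_t
eps = rng.multivariate_normal(np.zeros(m), Sigma, size=T)
Y = np.outer(x, theta) + np.outer(z, phi) + eps
```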
Because Equation (1) contains two latent variables, xt and zt, and two vectors of factor loadings, θ and φ, it is difficult to extract the latent series and estimate the factor loadings using a standard Bayesian MCMC procedure and/or the standard Kalman filter.3 In addition, previous economic studies have shown that the relationship between trends and cycles is complicated and unclear (see Kim and Nelson, 1999). For these reasons, we introduce an easier Bayesian approach that extracts the latent common stochastic trend and common cyclical component sequentially. In the first stage, we directly extract the common stochastic trend xt and estimate the cointegration vector θ from the
nonstationary panel data. In the second stage, we remove the trend components by taking the first difference, Yt − Yt−1, in order to extract the common cyclical components. As Nelson and Plosser (1982) suggested, the first difference of I(1) variables makes the time series stationary.

3 One may write the model as a conventional multivariate state space model,

Yt = (θ φ)(xt, zt)′ + εt,
(xt, zt)′ = diag(1, ψ)(xt−1, zt−1)′ + (νt, ηt)′,

and try to use a standard Bayesian MCMC procedure for the model and/or the standard Kalman filter. All such efforts must impose restrictions on the two vectors of factor loadings, θ and φ, and on the covariance matrices of the error terms, in the sense that, if those parameters are not identified first, it is difficult to extract the two latent series xt and zt using conventional methods. In practice, we tried standard Bayesian MCMC and Kalman filter procedures. When one latent variable dominates the other, the extraction underestimates the dominated component, even though the trend-and-cycle decomposition could be done in various ways.
The difficulty in estimating the elements of the regression coefficients θi is that they may be highly correlated a posteriori, since the latent series xt and zt are AR(1) and unknown, and the parameter vector θ corresponds to the identified cointegrating vector that captures the long-run relationship of the integrating system. This correlation may slow the convergence of the sampler, so we sample the elements of the regression coefficients jointly rather than one θi at a time. To estimate the vector of factor loadings θ, we define εPt = φzt + εt as the residuals of the trend components. Then we rewrite Equation (1) in matrix form:

Yt = Xtθ + εPt, εPt ∼ N(MPt, ΣPt), (4)
where Xt = xt Im×m is the m×m diagonal matrix with xt on the diagonal, θ = (θ1, ..., θm)′, and t = 1, ..., T. As defined in Equation (1), θ is an m-dimensional vector of regression coefficients. MPt is the m-dimensional mean vector of the error terms at time t, and ΣPt is the m×m error covariance matrix at time t.
Define YT = (Y1, ..., YT)′ and XT = (X1, ..., XT)′, and let etr(A) ≡ exp{trace(A)} denote the exponential trace of a matrix A. The likelihood function of the parameter vector θ is

L(θ; YT) ∝ |Ω|^(−1/2) etr{−(1/2)(YT − XTθ)′Ω^−1(YT − XTθ)}, (5)

where Ω = IT×T ⊗ ΣP denotes the mT × mT error covariance matrix of the stacked panel data.
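In code, the stacked quantities in (5) can be assembled directly. The following sketch (with tiny, illustrative dimensions and a placeholder trend path) builds XT from a given x and the Kronecker-structured Ω; forming Ω explicitly like this is feasible only for small mT.

```python
import numpy as np

rng = np.random.default_rng(10)
m, T = 2, 5  # tiny dimensions so the mT x mT matrix stays small
x = np.cumsum(rng.standard_normal(T))  # an illustrative common-trend path
SigmaP = np.eye(m)                     # placeholder measurement covariance

# X_T stacks the blocks X_t = x_t * I_m into an (mT x m) design matrix
XT = np.vstack([x[t] * np.eye(m) for t in range(T)])

# Omega = I_T kron Sigma^P is the mT x mT covariance of the stacked errors
Omega = np.kron(np.eye(T), SigmaP)
```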
Notice that the system equation introduces prior correlation among the latent variables xt, t = 1, ..., T, through the AR(1)-type dynamics. This correlation is partially preserved in the posterior distribution (Gamerman and Lopes, 2006), which causes convergence problems when running the Gibbs sampler. Thus, we use the well-known Forward Filtering Backward Sampling (FFBS) method proposed by Carter and Kohn (1994) and Fruhwirth-Schnatter (1994) to extract the latent common stochastic trend jointly, rewriting the trend system in the familiar state-space form

Yt = FBt + εPt, εPt ∼ N(MPt, ΣPt), (6)
Bt = GBt−1 + wt, wt ∼ N(0, 1), (7)

where F = θ, Bt = xt, and G = 1 for t = 1, ..., T.
Define Yct = Yt − Yt−1 as the cyclical component, the first difference of the time series. For estimating φ, the likelihood function of the parameter vector φ is

L(φ; YcT) ∝ |Σ|^(−T/2) etr{−(1/2)(YcT − ZTφ)′(IT×T ⊗ Σ^−1)(YcT − ZTφ)}, (8)

where YcT = (Yc1, ..., YcT)′, ZT = (Z1, ..., ZT)′, Zt = diag(zt, ..., zt), and φ = (φ1, ..., φm)′ for t = 1, ..., T.
In addition, to extract the common cyclical components zt for t = 1, ..., T, we follow the same methodology as for extracting the trend component xt. The state-space model for the cyclical component is given by
Y ct = FBt + εt, εt ∼ N(0, Σ), (9)
Bt = GBt−1 + wt, wt ∼ N(0, 1), (10)
where F = φ, Bt = zt, and G = ψ.
2.2. The Prior Specification
From now on, we use the notation xT = (x0, ..., xT)′ and zT = (z0, ..., zT)′ to denote the two latent sequences. Since the state-space model is linear and Gaussian, a natural prior choice for the initial values of the latent sequences is

x0 ∼ N(α0, M0) and z0|ψ ∼ N(0, 1/(1 − ψ²)),

where α0 and M0 are known hyperparameters. A default vague-prior choice is α0 = 0 with a large variance, such as M0 = 10^10 in our numerical examples, which renders the prior noninfluential.
We employ normal priors for the elements in θ and φ,
θ ∼ N(bθ,Mθ × Im×m), and φ ∼ N(bφ,Mφ × Im×m),
where bθ and bφ are m-dimensional known hyperparameter vectors, and Mθ and Mφ are known scalar hyperparameters. In the absence of prior information, we set bθ = bφ = 0 and Mθ = Mφ = c with c large enough to reduce the prior influence on θ and φ; for example, we set c = 10^10 in all of our numerical examples.
The prior we employ for the error covariance matrices is the Inverse-Wishart (IW) distribution, the conjugate choice in the Bayesian setting:

ΣP ∼ IW(Σ0^−1, v0), and Σ ∼ IW(Σ0^−1, v0),

where Σ0^−1 is the prior scale matrix and v0 is the degrees of freedom of the Inverse-Wishart distribution. A natural default choice is to set the diagonal elements of Σ0 to small values such as 10^−10 and the off-diagonal elements of Σ0 to zero, obtained in the limit as the prior precision shrinks to zero in the sense |Σ0| → 0. We take the degrees of freedom v0 of the Inverse-Wishart distribution as a hyperparameter set to m in the applications. When a shrinkage prior is applied, the degrees of freedom v0 is normally set to an extremely small value such as 10^−7, as in Carmeci (2005). The same idea is obtained by setting a noninformative prior for ΣP and Σ proportional to |Σ|^−(TM+1)/2. Koop et al. (2008) have shown that the noninformative prior yields proper marginal posteriors, which are Inverse-Wishart distributions, for ΣP and Σ.
The prior for ψ is a uniform distribution,

ψ ∼ U(−1, 1).

The uniform prior for ψ is restricted to (−1, 1) in order to maintain stationarity of the zt process. Alternatively, one may use as a noninformative prior for ψ the reference prior 1/(2π√(1 − ψ²)). Yang and Berger (1994) have shown that both priors lead to proper marginal posteriors for ψ.
8
2.3. MCMC Sampling
The Bayesian inference for any unknown parameter, for example θ, is based on the posterior distribution obtained by combining the likelihood function with the prior density: p(θ|YT) ∝ L(θ; YT)p(θ). Bayesian estimators can then be obtained from the posterior distribution. For example, the Bayesian estimator of a function h(θ) of the parameter vector under squared error loss is the posterior mean

E[h(θ)|YT] = ∫Θ h(θ) p(θ|YT) dθ.

Unfortunately, this integral cannot be computed analytically in our framework. To overcome this limitation, we develop a Markov chain Monte Carlo algorithm for the posterior exploration. More specifically, a total of G samples of each parameter is generated via a Gibbs sampling approach, sequentially drawing from the full conditional posterior distributions of all unknown parameters. A parameter estimate is obtained by averaging over the iterates remaining after J burn-in iterations, for example the mean θ̄ = (∑_{j=J+1}^{G} θ(j))/(G − J), where θ(j) is the jth draw of the unknown parameter vector θ.
To carry out the MCMC sampling procedure, we follow two sequential steps. In the first step, 〈1〉 to 〈3〉 below, we sample the parameters (θ, ΣP, xT) from their full conditional distributions:
〈1〉. Sample θ from its full conditional posterior distribution,

(θ|Ω, xT, YT) ∼ N(Mθ, Cθ), (11)

where

Mθ = Cθ(XT′Ω^−1YT + (1/Mθ)bθ),
Cθ = (XT′Ω^−1XT + (1/Mθ)Im×m)^−1.
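A single Gibbs draw from (11) can be sketched as follows. This is a minimal sketch, not the authors' code: `XT` is the stacked design matrix, `Omega_inv` the precision of the stacked errors, and the toy data at the end are purely illustrative.

```python
import numpy as np

def draw_theta(XT, Omega_inv, YT, b_theta, M_theta, rng):
    """One Gibbs draw of theta from N(M, C), with
    C = (XT' Omega^{-1} XT + (1/M_theta) I)^{-1} and
    M = C (XT' Omega^{-1} YT + (1/M_theta) b_theta)."""
    m = XT.shape[1]
    C = np.linalg.inv(XT.T @ Omega_inv @ XT + np.eye(m) / M_theta)
    M = C @ (XT.T @ Omega_inv @ YT + b_theta / M_theta)
    L = np.linalg.cholesky(C)  # sample via M + L * N(0, I)
    return M + L @ rng.standard_normal(m)

# Toy check with simulated data (illustrative dimensions only)
rng = np.random.default_rng(1)
XT = rng.normal(size=(200, 2))
YT = XT @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(200)
draw = draw_theta(XT, np.eye(200), YT, np.zeros(2), 1e10, rng)
```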
〈2〉. Let Ω = IT×T ⊗ ΣP. We sample ΣP from the full conditional distribution

(ΣP|θ, xT, YT) ∼ IW(Σ1, v1), (12)

where Σ1 = Σ0 + ∑_{t=1}^{T} εPt(εPt)′, v1 = v0 + T, and εPt = (εP1t, ..., εPmt)′ = Yt − Xtθ.
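The inverse-Wishart draw in (12) is available directly in `scipy.stats`. In this sketch the residual matrix is a random placeholder for the εPt computed from the current values of θ and xT:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
T, m = 100, 2
resid = rng.standard_normal((T, m))  # placeholder for eps^P_t = Y_t - X_t theta
Sigma0 = 1e-10 * np.eye(m)           # vague prior scale
v0 = m                               # prior degrees of freedom
Sigma1 = Sigma0 + resid.T @ resid    # posterior scale: Sigma0 + sum_t e_t e_t'
v1 = v0 + T                          # posterior degrees of freedom
SigmaP_draw = invwishart.rvs(df=v1, scale=Sigma1, random_state=3)
```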
〈3〉. Sample the series xT using the FFBS scheme. Given the initialization x0 ∼ N(m0, C0) from the prior specification, with m0 = 0 and C0 = M0, we compute and save the updating distributions xt ∼ N(mt, Ct), for all t = 1, ..., T, for later sampling use, taking the one-step-ahead prior xt ∼ N(at, Rt) with

at = Gmt−1 and Rt = Ct−1G² + 1.

The updating distribution is then xt ∼ N(mt, Ct), where

mt = at + Atet, Ct = Rt − AtFRt, et = Yt − ft,
At = RtF′Qt^−1, ft = Fat, and Qt = FRtF′ + ΣPt.

We then sample (xT|YT, θ, Ω) as follows:

〈3a〉. Sample xT from the updating distribution xT ∼ N(mT, CT).
〈3b〉. For t = T − 1, ..., 1, sample (xt|xt+1, YT, θ, ΣPt) from the normal distribution

N( (Ct/(G²Ct + 1))(Gxt+1 + mt/Ct), Ct/(G²Ct + 1) ).
The FFBS block sampling is generally preferable because it samples the highly correlated latent states jointly rather than one xt at a time. Empirical studies have shown that the Markov chain converges faster when joint proposals are used in the Gibbs sampler for correlated parameters of the posterior distribution (Shephard, 1994).
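Steps 〈3〉-〈3b〉 can be sketched for the scalar state as follows. This is a minimal FFBS sketch under the paper's setting (G = 1, unit state variance, time-invariant measurement covariance), not the authors' implementation; note the moderately diffuse default C0, used here to avoid numerical cancellation in a naive implementation.

```python
import numpy as np

def ffbs(Y, F, SigmaP, m0=0.0, C0=1e4, G=1.0, rng=None):
    """Forward-filter, backward-sample a scalar state x_t (Carter-Kohn style)."""
    if rng is None:
        rng = np.random.default_rng()
    T = Y.shape[0]
    m_f, C_f = np.zeros(T), np.zeros(T)  # filtered means m_t and variances C_t
    m_prev, C_prev = m0, C0
    for t in range(T):
        a = G * m_prev                   # a_t = G m_{t-1}
        R = G * G * C_prev + 1.0         # R_t = C_{t-1} G^2 + 1 (unit state variance)
        f = F * a                        # forecast f_t = F a_t
        Q = R * np.outer(F, F) + SigmaP  # Q_t = F R_t F' + Sigma^P
        A = R * np.linalg.solve(Q, F)    # gain A_t = R_t F' Q_t^{-1}
        e = Y[t] - f                     # forecast error e_t
        m_f[t] = a + A @ e               # m_t = a_t + A_t e_t
        C_f[t] = R - R * (A @ F)         # C_t = R_t - A_t F R_t
        m_prev, C_prev = m_f[t], C_f[t]
    x = np.zeros(T)                      # backward sampling pass
    x[T - 1] = rng.normal(m_f[T - 1], np.sqrt(C_f[T - 1]))
    for t in range(T - 2, -1, -1):
        H = C_f[t] / (G * G * C_f[t] + 1.0)          # conditional variance
        mean = H * (G * x[t + 1] + m_f[t] / C_f[t])  # conditional mean
        x[t] = rng.normal(mean, np.sqrt(H))
    return x

# Toy check: recover a simulated random-walk trend (illustrative values)
rng = np.random.default_rng(4)
F, SigmaP = np.array([1.21, 1.22]), np.eye(2)
x_true = np.cumsum(rng.standard_normal(100))
Y = np.outer(x_true, F) + rng.standard_normal((100, 2))
x_draw = ffbs(Y, F, SigmaP, rng=np.random.default_rng(5))
```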
In the next step, 〈4〉 to 〈5〉 below, we sample the parameters (ψ, φ, Σ, zT) after computing Yct = Yt − Yt−1, the first difference of the time series.

〈4〉 Sample φ, zT, and Σ using the same scheme as for θ, xT, and ΣP, respectively. The full conditional posterior distributions are given in the Appendix.
〈5〉 Sample (ψ|YcT, φ, Σ, zT) using the following Metropolis-Hastings method:

〈5a〉. Generate u from a uniform distribution on (0, 1).

〈5b〉. Draw ψ(prop) from N(h, c) truncated to (−1, 1), and compute the ratio

r = (1 − (ψ(prop))²)^(1/2) / (1 − (ψ(g−1))²)^(1/2),

where c = (∑_{t=1}^{T−1} zt²)^−1 and h = c ∑_{t=1}^{T} ztzt−1.

〈5c〉. If r > u, accept the proposal; otherwise retain ψ(g−1).
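Step 〈5〉 can be sketched as follows. This is a minimal sketch: the truncated-normal proposal is drawn by simple rejection, and `z` is an assumed vector holding the current cycle draws z0, ..., zT.

```python
import numpy as np

def draw_psi(z, psi_old, rng):
    """One Metropolis-Hastings draw of psi given cycle draws z = (z_0, ..., z_T)."""
    c = 1.0 / np.sum(z[:-1] ** 2)   # proposal variance c
    h = c * np.sum(z[1:] * z[:-1])  # proposal mean h = c * sum_t z_t z_{t-1}
    while True:                     # N(h, c) truncated to (-1, 1) by rejection
        psi_prop = rng.normal(h, np.sqrt(c))
        if -1.0 < psi_prop < 1.0:
            break
    # Acceptance ratio r from the stationary initial-state term
    r = np.sqrt(1.0 - psi_prop ** 2) / np.sqrt(1.0 - psi_old ** 2)
    return psi_prop if rng.uniform() < r else psi_old

# Toy check on a simulated AR(1) cycle with psi = 0.5 (illustrative)
rng = np.random.default_rng(6)
z = np.zeros(501)
for t in range(1, 501):
    z[t] = 0.5 * z[t - 1] + rng.standard_normal()
psi = 0.0
for _ in range(200):
    psi = draw_psi(z, psi, rng)
```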
3. Numerical examples
This section presents four numerical examples to illustrate the Bayesian approach for ex-
tracting common stochastic trends and common cyclical components using the developed
MCMC sampling scheme. The first example reports simulation results using artificial data generated from 2- and 20-dimensional nonstationary panel-data models, respectively. The second example features the prototypical bivariate integrating system with quarterly expenditure datasets. The third example studies metropolitan housing-price panel data. The last example analyzes a high-dimensional integrating system with daily stock prices of the internet industrial sector.
3.1 Simulation Results
We simulated two samples from 2- and 20-dimensional panel data models with true parameters chosen arbitrarily as follows. For the case of m = 2, the elements of θ and φ are θ = (1.21, 1.22)′ and φ = (1.51, 1.52)′. The diagonal elements of Σ are 5.2 and 7.1, while the off-diagonal elements of Σ are restricted to zero for simplicity of exposition. For the case of m = 20, the elements of θ and φ are θ = (1.21, ..., 1.40)′ and φ = (1.51, ..., 1.70)′ with an increment of 0.01. The diagonal elements of Σ start at 5.2 and end at 7.1 with an increment of 0.1, while all off-diagonal elements of Σ are again restricted to zero. These choices characterize panel data with small differences among the elements of the regression coefficients and among the elements of the covariance matrix. The parameter ψ is 0.2 for both cases. The number of time points, T, is fixed at 100 in order to examine the efficiency of the Bayesian analysis when the number of cross-section units m becomes large relative to T.
In computing the Bayesian estimates, we ran G = 20,000 MCMC iterations with 15,000 burn-in iterations to sample the posterior distribution. The simulation work for the high-dimensional posterior distribution of the panel data was heavy, taking around 12 hours for 20,000 samples on a Pentium 4 2.1 GHz PC. However, the simulations converged quite quickly due to the block sampling. There is little difference in the posterior means of the elements of both the regression coefficients and the error covariance matrix when the Markov chain is shortened from 20,000 to 15,000 iterations with 10,000 burn-in iterations.
Table 1: Simulation Study (m = 2 and T = 100)

Parameter   True     Posterior Mean   0.025 Quantile   0.975 Quantile
θ1          1.210    1.227            1.045            1.850
θ2          1.220    1.234            1.050            1.848
φ1          1.510    1.599            1.033            2.221
φ2          1.520    1.854            1.219            2.522
σ²1         5.200    5.523            4.120            6.314
σ²2         5.300    5.593            4.129            6.489
ψ           0.200    0.191            0.138            0.338
Figure 1: (a) theta; (b) phi; (c) sigma. Here m = 20 and T = 100. The points on the straight solid line are the true values, the points on the solid curve are the posterior means of the Bayesian estimates, and the broken lines denote the 95% credible intervals. [Three panels, (a)-(c), plot parameter index 1-20 against the estimates; plot data omitted.]
Table 1 and Figures 1.1-1.3 report the Bayesian estimates for the cases of m = 2 and m = 20, respectively, for the regression coefficients θ and φ, the diagonal elements of the error covariance matrix Σ, and the 95% posterior probability bands. While all parameters are closely replicated for m = 2, the estimation uncertainty increases, with wider 95% posterior probability bands, when the number of cross-section units increases from m = 2 to m = 20. For the case of m = 20, the posterior means of θ and φ are slightly overestimated, within a reasonable range of at most -0.2 to 0.2, while the posterior means of the diagonal elements of the error covariance matrix are moderately better. The results reflect the capacity of the Monte Carlo simulation to capture the estimation uncertainty of the unknown parameters in a high-dimensional posterior distribution. Nevertheless, all true parameter values lie within the pointwise 95% credible intervals. The posterior mean ψ = 0.201 is very close to the true value ψ = 0.2. The results confirm the effectiveness of the Bayesian method developed.
Figure 1.4: Simulation study of x (m = 2, T = 100). x_true: dashed; x_hat: solid; 95% quantiles: dotted. [Plot omitted.]
Figure 1.5: Simulation study of z (m = 2, T = 100). z_true: dashed; z_hat: solid; 95% quantiles: dotted. [Plot omitted.]
Figure 1.6: Simulation study of x (m = 20, T = 100). x_true: dashed; x_hat: solid; 95% quantiles: dotted. [Plot omitted.]
Figure 1.7: Simulation study of z (m = 20, T = 100). z_true: dashed; z_hat: solid; 95% quantiles: dotted. [Plot omitted.]
While Figures 1.4-1.5 present the unobserved components extracted from the simulation study for m = 2, Figures 1.6-1.7 present those for m = 20. In general, the sequences of the latent common trend and cycle are closely replicated within the 95% pointwise credible intervals in both cases, although the values at the peak and trough turning points tend to be underestimated, by at most about 0.5. The underestimation at turning points is reasonable and natural given the unpredictability of the random walk process at these extreme points. Further modeling of the dynamics in the system equations (2) and (3) may dampen the underestimation, but that is beyond this study and would divert attention from how the Bayesian methodology works for extracting the latent common dynamic factors. Interested researchers may refer to Zellner et al. (1990), Harding and Pagan (2002), and Harvey et al. (2007) for further studies. In particular, they suggest that for very noisy time series, such as daily stock prices, it is a good idea to split the nonstationary time series and extract a smoother latent dynamic factor for each period, provided the peak and trough turning points can be identified before the extraction. In this study, we exploit this property in the dot-com daily stock price example, in which we can directly identify the peak and trough turning points from an auxiliary general financial market index. This provides a natural way to compare differences in the growth rates of trends, as well as the magnitude and persistence of cycles, across different economic stages.
3.2 Empirical Applications
In line with the simulation, we present a bivariate example on the quantity theory of money in Section 3.2.1 and analyze a 20-cross-section-unit panel dataset of metropolitan housing prices in Section 3.2.2. In Section 3.2.3, we investigate a daily dataset of dot-com stock prices by dividing the series into growth, recession, and recovery periods according to the salient turning points of the business cycle. For all empirical applications, we assume that the time series in each case are nonstationary and cointegrated, which is common for most macroeconomic variables, as Nelson and Plosser (1982) and Engle and Granger (1987), among many other macroeconomists, have documented.
3.2.1 Quantity Theory of Money
This example features the prototypical bivariate integrating system with quarterly nonstationary time series. We examine the positive relationship between the nominal value of expenditures, GNP, and the quantity of money, M2, in circulation, based on the quantity theory of money:
MV = PY,
where M is the average amount of money in the economy at any one time, and PY is GNP, the sum of the values of a specific group of goods and services. In its most basic form, the quantity theory of money assumes that the velocity of circulation V is constant, or at least stationary in the short run. The theory then implies that log(GNP) and log(M) are cointegrated in nominal terms. Engle and Granger's (1987) Nobel-prize-winning work documents the cointegrating relationship between nominal expenditure GNP and the money stock M2 using data before 1985.
In this application, we revisit the theory in order to extract the latent dynamic factors embedded in the integrating system from the first quarter of 1959 through the first quarter of 2008. The seasonally adjusted data series, obtained from the Federal Reserve Economic Data (FRED) at the Federal Reserve Bank of St. Louis, comprise 197 quarterly observations of nominal GNP and 591 monthly observations of nominal M2. We take the three-month average of M2 as the average amount of money in the economy for each quarter.
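The quarterly averaging of the monthly M2 series can be sketched as follows; the numbers below are synthetic placeholders, not the FRED data.

```python
import numpy as np

# 591 monthly M2 observations (synthetic placeholder for the FRED series)
rng = np.random.default_rng(7)
m2_monthly = 280.0 + np.cumsum(rng.normal(5.0, 1.0, size=591))

# Averaging within each quarter gives 197 quarterly observations, matching GNP
m2_quarterly = m2_monthly.reshape(197, 3).mean(axis=1)
```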
Figure 2.1: The common stochastic trend of GNP and M2 (20 realizations). [Plot omitted.]
Figure 2.2: The common cyclical component of GNP and M2 (20 realizations). [Plot omitted.]
The posterior mean of ψ = 0.975, within the interval (0.949, 0.997), implies that the business cycle is strongly persistent and close to a random walk. We therefore report 20 realizations to examine the smoothness of the latent common stochastic trend and cyclical component, rather than the posterior mean, which shows only prolonged up-and-down movements after 15,000 burn-in iterations. Figures 2.1-2.2 display the 20 realizations of the latent common dynamic factors embedded in the integrating system. The extracted common stochastic trend seems to resemble the nonstationary growth of GNP and M2.
The relationship between log(GNP) and log(M2), based on the estimates of the cointegration vector θ and the cycle-equation coefficients φ, is

log(GNPt) = 0.371 (0.002) xt + 0.004 (0.004) zt + ε1t,
log(M2t) = 0.343 (0.002) xt − 0.005 (0.004) zt + ε2t,

with posterior standard errors in parentheses.
The relationship implies that the national income elasticity with respect to the money stock is around 0.371/0.343 ≈ 1.081, which provides supporting evidence for the positive causation from money supply to economic outcomes asserted by the leading monetarist, Milton Friedman (Friedman and Schwartz, 1963). The common cyclical component we obtain resembles the US business cycle, with a relatively long up-and-down cycle in each period. The movement of the common cyclical component in this study particularly resembles the pattern of smooth cycles in Harvey et al. (2007), who model the business cycle using GDP over the recent two decades by specifying a higher serial-correlation order in the cycle equation of the system (3). The consistent result illustrates the effectiveness of the Bayesian method we developed.
3.2.2 Housing Price Dynamics
The second real-data example features the integrating system by studying housing prices
for 20 metropolitan areas in which both housing volumes and median housing prices are
above the national average. Del Negro and Otrok (2007) use state-level housing-price data to show that housing prices have tended to converge over the recent decade. The convergence implies integration of the nonstationary housing-price panel data. While they extract the common cycles of state-level housing prices, we use the Bayesian method developed here to extract both a single common trend and a common cycle, in order to explore the trend-and-cycle determinants of housing-price dynamics.
The housing study has been a growing research topic for two primary reasons. First, housing, which serves as both consumption and investment for households, accounts for more than 20% of the national wealth. Second, the influence of housing-market volatility on the financial system has drawn policy makers' attention, particularly over recent decades, to real asset pricing, to the risk-hedging strategies of mortgage-backed securities, and to the regulatory policies of related financial markets. These issues have been of heightened interest during the internet collapse of the early 2000s and the recent financial industry crisis, which saw a common housing-price downturn across the nation.
Currently, there are two popular datasets for regional housing prices: the Median Home Prices (MHP) produced by the National Association of Realtors (NAR), and the conventional mortgage Home Price Indexes (HPI) produced by Freddie Mac and Fannie Mae and their safety regulator, the Office of Federal Housing Enterprise Oversight (OFHEO). This paper uses the HPI across 20 metropolitan areas from the first quarter of 1980 to the fourth quarter of 2007. The recent literature on housing prices normally uses the HPI instead of the MHP. The benefits of using the HPI are twofold. First, the HPI is constructed by the weighted repeat-sales method, described in Case and Shiller (1989), which holds housing quality constant by tracking sales of the same properties over time. Second, its coverage is broad, spanning more than 150 metropolitan areas over a reasonably long time span.
Based on the posterior estimates of the cointegration vector θ and the cycle-equation
coefficients φ, the vector of the 20 metropolitan housing-price series satisfies
y_t = θ x_t + φ z_t + ε_t, with the elements of θ and φ estimated as follows (standard
errors in parentheses):

Metro area        θ                φ
San Jose          0.428 (0.014)    -0.008 (0.027)
Seattle           0.418 (0.014)     0.008 (0.046)
Dallas            0.107 (0.022)     0.044 (0.044)
Phoenix           0.314 (0.025)    -0.070 (0.037)
Washington DC     0.412 (0.027)     0.140 (0.068)
Oakland           0.438 (0.028)     0.105 (0.046)
San Diego         0.306 (0.022)     0.102 (0.028)
Atlanta           0.281 (0.017)     0.094 (0.035)
Los Angeles       0.166 (0.021)    -0.068 (0.031)
Minneapolis       0.268 (0.023)    -0.108 (0.027)
Boston            0.321 (0.034)     0.028 (0.022)
San Francisco     0.237 (0.023)     0.011 (0.024)
Chicago           0.234 (0.012)    -0.20 (0.032)
New York City     0.322 (0.023)     0.120 (0.062)
Indianapolis      0.238 (0.013)     0.115 (0.043)
Buffalo           0.106 (0.024)     0.104 (0.022)
Salt Lake         0.221 (0.018)     0.087 (0.022)
Houston           0.166 (0.013)    -0.118 (0.021)
Detroit           0.38 (0.023)     -0.128 (0.017)
Milwaukee         0.467 (0.012)     0.222 (0.022)
All elements of the estimated cointegration vector θ are significant, with standard errors
less than 0.04. Since the magnitude of the estimated cointegration vector θ is relatively
small, with posterior means less than 0.5, the long-run growth relationship is weak and
there is a slow mean-reversion component in the housing market. Case and Shiller (1989)
and Del Negro and Otrok (2007), among others, explain such results by the heterogeneity
of the housing market. This proposition is captured in the large differences among the
elements of the estimated cycle-equation coefficients φ.
The posterior mean of ψ = 0.93, within the interval (0.85, 0.99), implies that the up-and-
down cycle of housing prices is relatively persistent, in the sense that housing-price
information can be used to predict prices by evaluating the autocorrelation function and
analyzing local housing market information. A similar result has been documented in the
established housing literature, such as Case and Shiller (1989) and Clayton (1998), among
others, providing evidence of market inefficiency in price correction. The inefficiency is
interpreted as irrational expectations driving prices downward over a prolonged period
whenever there is an unexpected shock to the economy.
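The persistence ψ = 0.93 has a simple quantitative reading: under an AR(1) cycle z_t = ψ z_{t-1} + η_t, a shock decays geometrically as ψ^k, so its half-life solves ψ^h = 0.5. A minimal sketch of this calculation (the function name is ours, not the paper's):

```python
import math

def shock_half_life(psi: float) -> float:
    """Periods until an AR(1) shock decays to half its initial size.

    The cycle z_t = psi * z_{t-1} + eta_t propagates a unit shock as
    psi**k after k periods; solving psi**h = 0.5 gives the half-life.
    """
    return math.log(0.5) / math.log(psi)

# Posterior mean psi = 0.93 from the housing application; data are quarterly.
print(round(shock_half_life(0.93), 1))  # about 9.6 quarters
```

At this persistence a shock to the common housing cycle takes roughly two and a half years to decay by half, consistent with the prolonged adjustment discussed above.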
Figure 3.1: The Common Stochastic Trend (x_t), 20 Metropolitan Housing Prices, 1980-2007

Figure 3.2: The Common Cyclical Component (z_t), 20 Metropolitan Housing Prices, 1980-2007
The prolonged downturn of housing prices is due to the information imperfections of housing
markets, for example, individual area-housing heterogeneities, large transaction costs,
and different local tax programs.
Figures 3.1 and 3.2 report the latent housing-price dynamic factors. The extracted common
stochastic trend reflects the growth tendency of the housing market, a prolonged boom
followed by the recent bust. The common cyclical component we obtain resembles the latent
housing dynamic factor extracted from state-level housing price data by Del Negro and
Otrok (2007).
3.2.3 Dot-com Cycles
The third example applies the high-dimensional posterior integration to daily stock prices
in the internet industrial sector. We extract the common latent dynamic factor driving the
dot-com companies through the growth, recession, and recovery periods4 of the past two
decades. As in Nelson and Plosser's (1982) study, the common stochastic trend may help us
understand the relationship between venture capital accumulation and internet industry
development, while the common cyclical component may help us understand the relationship
between dot-com sector business cycles and monetary policy.
4Note that the recovery period after 2003 is a relative description with respect to the internet collapse
of 2000-2003. In fact, the period after 2003 still witnessed recession in internet industries.
Figure 4: The NASDAQ Index from January of 1994 to December of 2007, with the internet
growth, recession, and recovery periods marked.
The public advent of the Mosaic web browser and the nascent World Wide Web drew the
general public's attention in 1994. Web-based commerce then attracted vast amounts of
venture capital and intrigued millions of bright young people. Its pervasive development
during the late 1990s coined the term "digital economy" and created a new business model.
During the exuberant growth period from 1994 to March of 2000, dot-com companies relied
on cash raised through public offerings on the stock exchange to build market share, but
dismissed standard business models by operating at a loss, inflating the dot-com bubble.
The bubble burst in the spring of 2000, and the collapse ran at full speed through
2001-2003. Venture capital related to dot-com industries lost more than a hundred billion
dollars, as the NASDAQ index dropped from 5000 to 1000, as can be seen in Figure 4.
It has been recognized that advances in the high-tech industry nationwide, directly and
indirectly spilling into other industry sectors, have been shaping the characteristics of
the US economy over past decades (see DeVol et al., 1999). According to the new growth
theory of Romer (1990) and Barro (1991), high-tech industry development can enhance
economic growth and accelerate economic convergence through the diffusion of technology
and the creation of new ideas. In recent studies, the positive linkage between economic
outcomes and dot-com industry development has been documented in Gordon (2000) and Litan
(2001), among others. The long-run growth trend of the high-tech economy has been one of
the most important indicators of the strength of the economy, whereas the cyclical
volatility of the high-tech economy has been one of the most important indicators of its
stability.
In this study, we examine eight dot-com companies that survived the era of internet
industry development: Amazon.com Inc. (AMZN), CMGI Inc. (CMGI), Earthlink Inc. (ELNK),
Interactive New (IACID), Open Text Corp. (OTEX), Skillsoft PLC (SKIL), Yahoo Inc. (YHOO),
and Zix Corp. (ZIXI). With weekends and holidays omitted, we obtain 2623 daily
observations from Yahoo! Finance (http://finance.yahoo.com). The dataset is divided into
three periods in terms of the NASDAQ benchmark movement: the growth before March 2000,
the recession between April 2000 and December 2003, and the recovery after December 2003,
as can be seen in Figure 4. For the very noisy daily stock prices, this division provides
a natural way to compare the differences in the growth strength of trends, the magnitude
and persistence of cycles, and how the dot-com industry economic structure changes across
economic stages.
Based on the estimates of the cointegration vector θ and the cyclical coefficients φ, the
vector of dot-com stock prices satisfies y_t = θ x_t + φ z_t + ε_t, estimated separately
for the growth, recession, and recovery periods (standard errors in parentheses):

θ estimates:
Stock   Growth          Recess.         Recov.
AMZN    0.081 (0.002)   0.058 (0.001)   0.074 (0.003)
CMGI    0.127 (0.003)   0.061 (0.001)   0.055 (0.002)
ELNK    0.075 (0.002)   0.042 (0.001)   0.043 (0.002)
IACID   0.078 (0.007)   0.070 (0.002)   0.070 (0.004)
OTEX    0.056 (0.002)   0.048 (0.001)   0.058 (0.002)
SKIL    0.079 (0.002)   0.048 (0.001)   0.036 (0.001)
YHOO    0.074 (0.002)   0.049 (0.001)   0.067 (0.002)
ZIXI    0.065 (0.001)   0.039 (0.001)   0.019 (0.001)

φ estimates:
Stock   Growth           Recess.          Recov.
AMZN    -0.043 (0.003)   -0.035 (0.003)   -0.014 (0.003)
CMGI    -0.050 (0.003)   -0.048 (0.003)   -0.0173 (0.004)
ELNK    -0.030 (0.003)   -0.023 (0.003)   -0.0125 (0.003)
IACID   -0.014 (0.002)   -0.020 (0.002)   -0.009 (0.003)
OTEX    -0.014 (0.003)   -0.018 (0.002)   -0.009 (0.003)
SKIL    -0.010 (0.009)   -0.026 (0.003)   -0.007 (0.003)
YHOO    -0.042 (0.003)   -0.037 (0.001)   -0.012 (0.003)
ZIXI    -0.015 (0.003)   -0.025 (0.003)   -0.014 (0.004)
and the cycle equations, based on the estimates of ψ in the three economic states, are

  z_t^growth    = 0.092 (0.055) z_{t-1}^growth    + η_t,
  z_t^recession = 0.088 (0.045) z_{t-1}^recession + η_t,
  z_t^recovery  = 0.050 (0.077) z_{t-1}^recovery  + η_t.
As can be seen, the elements of the cointegration vector θ for the dot-com stock prices
in the growth and recovery periods are relatively larger than those in the recession
period, in the sense that the positive long-run growth relationship becomes stronger in
good economic states. On the other hand, all of the cycle-equation coefficients φ are
small and negative, which implies that the short-run adjustment towards the long-run
equilibrium is weak in the dot-com industry market.
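The contrast between the small daily-frequency ψ estimates above (0.05-0.09) and the quarterly housing estimate ψ = 0.93 can be read through the stationary variance of an AR(1) cycle, σ²/(1-ψ²). A minimal sketch illustrating this, with our own function names and a unit shock variance as an illustrative assumption:

```python
import random

def simulate_ar1(psi, sigma=1.0, T=1000, seed=0):
    """Simulate the cycle z_t = psi * z_{t-1} + eta_t, eta_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    z, path = 0.0, []
    for _ in range(T):
        z = psi * z + rng.gauss(0.0, sigma)
        path.append(z)
    return path

def stationary_variance(psi, sigma=1.0):
    """Long-run variance of a stationary AR(1): sigma^2 / (1 - psi^2)."""
    return sigma ** 2 / (1.0 - psi ** 2)

# A near-zero psi (the daily dot-com cycles) barely inflates the shock variance...
print(round(stationary_variance(0.05), 3))  # 1.003
# ...while psi = 0.93 (the quarterly housing cycle) inflates it about 7.4-fold.
print(round(stationary_variance(0.93), 2))  # 7.4
```

The pure-Python simulator is for illustration only; any array library would serve equally well.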
Figures 3.1.1 and 3.1.2 report the latent common components extracted during the dot-com
growth period from 1997 to 2000. The latent common stochastic trend reflects the surge of
the dot-com industry, and the latent common cyclical component captures the moderate
volatility of stock prices. Figures 3.2.1 and 3.2.2 report the latent common components
extracted during the dot-com recession period over 2000 to 2003. The latent common
stochastic trend drops sharply, and the latent common cyclical component reflects the
high volatility of the stock prices. Figures 3.3.1 and 3.3.2 report the latent common
components extracted during the dot-com recovery period from 2004 to 2007. The latent
common stochastic trend fluctuates around the initial price level, capturing the weak
condition of the dot-com industry after the collapse shocks, and the latent common
cyclical component shows the moderate volatility of stock prices in the relatively stable
economic condition.
The example shows that the Bayesian implementation works better if we can distinguish the
peak and trough turning points of the time series. In this way, we can clearly disentangle
the common trend driving the long-run growth from the common cycle reflecting the
volatility of parallel integrating time series.
Figure 3.1.1: The Common Stochastic Trend and 95% Bands, Dot-com Stocks During Growth (1997:5-2000:3)

Figure 3.1.2: The Common Cyclical Component and 95% Bands, Dot-com Stocks During Growth (1997:5-2000:3)

Figure 3.2.1: The Common Stochastic Trend and 95% Bands, Dot-com Stocks During Recession (2000:3-2003:12)

Figure 3.2.2: The Common Cyclical Component and 95% Bands, Dot-com Stocks During Recession (2000:3-2003:12)

Figure 3.3.1: The Common Stochastic Trend and 95% Bands, Dot-com Stocks During Recovery (2004:01-2007:12)

Figure 3.3.2: The Common Cyclical Component and 95% Bands, Dot-com Stocks During Recovery (2004:01-2007:12)
4. Conclusion
This paper proposes a Bayesian approach to extract a single latent common stochastic trend
and cycle from integrating time series panel data. For this purpose, we develop a Markov
chain Monte Carlo algorithm that explores the high-dimensional posterior distribution of
the state-space model. Numerical simulation shows that the Bayesian estimates based on
this algorithm are effective. To illustrate the potential of our approach, we apply our
method to panel data on quarterly national expenditures, metropolitan housing prices, and
dot-com daily stock prices. The results demonstrate that our method handles the
high-dimensional posterior distribution effectively, both in estimating the elements of
the regression coefficients and error variance matrix and in extracting the latent
sequences. The empirical results show that the stronger the long-run growth, the higher
the cyclical volatility. Our Bayesian implementation works better if we can distinguish
the peak and trough turning points of the time series.
Appendix
1. The joint posterior distribution of (θ, Ω, x_T) is the following product:

\[
\begin{aligned}
p(\theta, \Omega, x_T \mid Y_T) \;\propto\; & \, |\Omega|^{1/2} \exp\Big\{-\tfrac{1}{2}(Y_T - X_T\theta)'\,\Omega\,(Y_T - X_T\theta)\Big\} \\
& \times \exp\Big\{-\tfrac{1}{2M_\theta}(\theta - b_\theta)'(\theta - b_\theta)\Big\} \\
& \times |\Sigma^P|^{-\frac{v_0+m+1}{2}} \exp\Big\{-\mathrm{trace}\big(\Sigma_0 (\Sigma^P)^{-1}\big)/2\Big\} \\
& \times \prod_{t=1}^{T} \exp\Big\{-\tfrac{1}{2}(x_t - x_{t-1})^2\Big\} \times \exp\Big\{-\tfrac{1}{2M_0}(x_0 - \alpha_0)^2\Big\}.
\end{aligned}
\]
a. Let \(\Omega = I_{T\times T} \otimes (\Sigma^P)^{-1}\). The full conditional density of \(\Sigma^P\) is given by:

\[
\begin{aligned}
p(\Sigma^P \mid Y_T, \theta, x_T) \;\propto\; & \, |\Sigma^P|^{-\frac{T}{2}} \exp\Big\{-\tfrac{1}{2}\,\mathrm{trace}\Big(\textstyle\sum_{t=1}^{T} \varepsilon_t^P (\varepsilon_t^P)' (\Sigma^P)^{-1}\Big)\Big\} \\
& \times |\Sigma^P|^{-\frac{v_0+m+1}{2}} \exp\Big\{-\mathrm{trace}\big(\Sigma_0 (\Sigma^P)^{-1}\big)/2\Big\} \\
\propto\; & \, |\Sigma^P|^{-\frac{v_0+m+T+1}{2}} \exp\Big\{-\mathrm{trace}\Big(\big(\Sigma_0 + \textstyle\sum_{t=1}^{T} \varepsilon_t^P (\varepsilon_t^P)'\big)(\Sigma^P)^{-1}\Big)/2\Big\},
\end{aligned}
\]

thus the full conditional distribution of \(\Sigma^P\) is an inverse Wishart distribution, Inv-Wishart\((\Sigma_1, v_1)\), with \(\Sigma_1 = \Sigma_0 + \sum_{t=1}^{T} \varepsilon_t^P (\varepsilon_t^P)'\) and \(v_1 = v_0 + T\).
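The Inv-Wishart(Σ₁, v₁) full conditional above translates directly into one Gibbs step. A sketch, assuming SciPy's `invwishart` sampler is available; the function and variable names are our own, not the paper's code:

```python
import numpy as np
from scipy.stats import invwishart

def draw_sigma(residuals, Sigma0, v0, rng=None):
    """One Gibbs draw of the error covariance from its inverse-Wishart
    full conditional, Inv-Wishart(Sigma0 + sum_t e_t e_t', v0 + T)."""
    T = residuals.shape[0]                    # residuals: (T, m) array, rows e_t'
    scale = Sigma0 + residuals.T @ residuals  # Sigma0 + sum_t e_t e_t'
    return invwishart.rvs(df=v0 + T, scale=scale, random_state=rng)

rng = np.random.default_rng(0)
res = rng.standard_normal((200, 3))           # stand-in residuals for illustration
Sigma_draw = draw_sigma(res, np.eye(3), v0=5, rng=rng)
print(Sigma_draw.shape)  # (3, 3)
```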
b. The full conditional distribution of \(\theta\) is then given by

\[
\begin{aligned}
p(\theta \mid \Omega, x_T, Y_T) \;\propto\; & \, \exp\Big\{-\tfrac{1}{2}(Y_T - X_T\theta)'\,\Omega\,(Y_T - X_T\theta)\Big\} \times \exp\Big\{-\tfrac{1}{2M_\theta}(\theta - b_\theta)'(\theta - b_\theta)\Big\} \\
\propto\; & \, \exp\Big\{-\tfrac{1}{2}\,\theta'\big(X_T'\Omega X_T + \tfrac{1}{M_\theta} I_{m\times m}\big)\theta + \theta'\big(X_T'\Omega Y_T + b_\theta\big)\Big\},
\end{aligned}
\]

which is proportional to the density of a multivariate normal distribution, \(N(M_\theta, C_\theta)\), with \(M_\theta = C_\theta (X_T'\Omega Y_T + b_\theta)\) and \(C_\theta = (X_T'\Omega X_T + \tfrac{1}{M_\theta} I_{m\times m})^{-1}\).
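The Gaussian full conditional N(M_θ, C_θ) likewise yields a direct Gibbs draw via a Cholesky factor of C_θ. A sketch under the same notation, with our own variable names (X is n×m, Omega an n×n precision matrix):

```python
import numpy as np

def draw_theta(X, Omega, y, b_theta, M_theta, rng):
    """One Gibbs draw of theta from N(C(X'Omega y + b_theta), C),
    with C = (X'Omega X + I/M_theta)^{-1}."""
    m = X.shape[1]
    prec = X.T @ Omega @ X + np.eye(m) / M_theta  # posterior precision
    C = np.linalg.inv(prec)                       # posterior covariance C_theta
    mean = C @ (X.T @ Omega @ y + b_theta)        # posterior mean M_theta
    # Draw mean + L u, u ~ N(0, I), where L is the Cholesky factor of C.
    return mean + np.linalg.cholesky(C) @ rng.standard_normal(m)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = X @ np.ones(4) + 0.1 * rng.standard_normal(50)  # true coefficients = 1
theta = draw_theta(X, np.eye(50), y, np.zeros(4), 100.0, rng)
print(theta.shape)  # (4,)
```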
2. The joint posterior distribution of (ψ, φ, z_T) is the following product:

\[
\begin{aligned}
p(\psi, \phi, \Sigma, z_T \mid Y_T^c) \;\propto\; & \, |\Sigma|^{-\frac{T}{2}} \exp\Big\{-\tfrac{1}{2}(Y_T^c - Z_T\phi)'(I_{T\times T}\otimes\Sigma^{-1})(Y_T^c - Z_T\phi)\Big\} \\
& \times \exp\Big\{-\tfrac{1}{2M_\phi}(\phi - b_\phi)'(\phi - b_\phi)\Big\} \\
& \times |\Sigma|^{-\frac{v_0+m+1}{2}} \exp\Big\{-\mathrm{trace}\big(\Sigma_0 \Sigma^{-1}\big)/2\Big\} \\
& \times \prod_{t=1}^{T} \exp\Big\{-\tfrac{1}{2}(z_t - \psi z_{t-1})^2\Big\} \times (1-\psi^2)^{1/2} \exp\Big\{-\tfrac{1-\psi^2}{2}\, z_0^2\Big\}.
\end{aligned}
\]
a. The full conditional density of \(\Sigma\) is given by:

\[
\begin{aligned}
p(\Sigma \mid Y_T^c, \phi, \psi, z_T) \;\propto\; & \, |\Sigma|^{-\frac{T}{2}} \exp\Big\{-\tfrac{1}{2}\,\mathrm{trace}\Big(\textstyle\sum_{t=1}^{T} \varepsilon_t \varepsilon_t' \Sigma^{-1}\Big)\Big\} \times |\Sigma|^{-\frac{v_0+m+1}{2}} \exp\Big\{-\mathrm{trace}\big(\Sigma_0 \Sigma^{-1}\big)/2\Big\} \\
\propto\; & \, |\Sigma|^{-\frac{v_0+m+T+1}{2}} \exp\Big\{-\mathrm{trace}\Big(\big(\Sigma_0 + \textstyle\sum_{t=1}^{T} \varepsilon_t \varepsilon_t'\big)\Sigma^{-1}\Big)/2\Big\},
\end{aligned}
\]

thus the full conditional distribution of \(\Sigma\) is an inverse Wishart distribution, Inv-Wishart\((\Sigma_2, v_2)\), with \(\Sigma_2 = \Sigma_0 + \sum_{t=1}^{T} \varepsilon_t \varepsilon_t'\) and \(v_2 = v_0 + T\).
b. The full conditional distribution of \(\phi\) is given by

\[
\begin{aligned}
p(\phi \mid \psi, \Sigma, z_T, Y_T^c) \;\propto\; & \, \exp\Big\{-\tfrac{1}{2}(Y_T^c - Z_T\phi)'(I_{T\times T}\otimes\Sigma^{-1})(Y_T^c - Z_T\phi)\Big\} \times \exp\Big\{-\tfrac{1}{2M_\phi}(\phi - b_\phi)'(\phi - b_\phi)\Big\} \\
\propto\; & \, \exp\Big\{-\tfrac{1}{2}\,\phi'\big(Z_T'(I_{T\times T}\otimes\Sigma^{-1})Z_T + \tfrac{1}{M_\phi} I_{m\times m}\big)\phi + \phi'\big(Z_T'(I_{T\times T}\otimes\Sigma^{-1})Y_T^c + b_\phi\big)\Big\},
\end{aligned}
\]

which is proportional to the density of a multivariate normal distribution, \(N(M_\phi, C_\phi)\), with \(M_\phi = C_\phi(Z_T'(I_{T\times T}\otimes\Sigma^{-1})Y_T^c + b_\phi)\) and \(C_\phi = (Z_T'(I_{T\times T}\otimes\Sigma^{-1})Z_T + \tfrac{1}{M_\phi} I_{m\times m})^{-1}\).
c. The full conditional distribution of \(\psi\) is:

\[
\begin{aligned}
p(\psi \mid Y_T^c, \Sigma, \phi, z_T) \;\propto\; & \, (1-\psi^2)^{\frac{1}{2}} \exp\Big\{-\tfrac{1}{2}\Big[\textstyle\sum_{t=1}^{T}(z_t - \psi z_{t-1})^2 + (1-\psi^2) z_0^2\Big]\Big\}\, 1_{(-1,1)}(\psi) \\
\propto\; & \, (1-\psi^2)^{\frac{1}{2}} \exp\Big\{-\tfrac{1}{2}\psi^2 \textstyle\sum_{t=1}^{T-1} z_t^2 + \psi \textstyle\sum_{t=1}^{T} z_t z_{t-1}\Big\}\, 1_{(-1,1)}(\psi),
\end{aligned}
\]

which is, up to a constant, the product of \((1-\psi^2)^{1/2}\) and a Gaussian density, \(N(h, c)\), truncated to the interval \((-1, 1)\), where \(c = (\sum_{t=1}^{T-1} z_t^2)^{-1}\) and \(h = c \sum_{t=1}^{T} z_t z_{t-1}\).
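Since (1-ψ²)^{1/2} ≤ 1, this full conditional can be sampled exactly by rejection: propose ψ* from N(h, c) restricted to (-1, 1) and accept with probability (1-ψ*²)^{1/2}. A sketch of this step (our own implementation, not the authors' code):

```python
import numpy as np

def draw_psi(z, rng, max_tries=1000):
    """Rejection draw from the full conditional of psi: the product of
    sqrt(1 - psi^2) and a N(h, c) density truncated to (-1, 1)."""
    c = 1.0 / np.sum(z[1:-1] ** 2)    # c = (sum_{t=1}^{T-1} z_t^2)^{-1}
    h = c * np.sum(z[1:] * z[:-1])    # h = c * sum_{t=1}^{T} z_t z_{t-1}
    sd = np.sqrt(c)
    for _ in range(max_tries):
        psi = rng.normal(h, sd)       # propose from the untruncated N(h, c)
        # keep proposals in (-1, 1); accept w.p. sqrt(1 - psi^2) <= 1
        if -1.0 < psi < 1.0 and rng.uniform() < np.sqrt(1.0 - psi ** 2):
            return psi
    raise RuntimeError("rejection sampler failed to accept")

rng = np.random.default_rng(2)
z = np.zeros(2001)                    # simulate a persistent cycle, psi = 0.9
for t in range(1, 2001):
    z[t] = 0.9 * z[t - 1] + rng.standard_normal()
psi = draw_psi(z, rng)
print(-1.0 < psi < 1.0)  # True
```

The rejection step is exact because the acceptance probability is bounded by one, so no tuning is required; with a long series the proposal concentrates near the AR(1) least-squares coefficient and acceptance is fast.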
References
Barro, R. J., (1991), “Economic Growth in a Cross Section of Countries,” The Quarterly
Journal of Economics, MIT Press, vol. 106(2), 407-43.
Carmeci, G. (2005), “A Bayesian State Space Approach to Cointegration Panel Data
Models,” Working paper at http://www.cide.info/conf/papers/1128.pdf.
Carter, C. K. and Kohn, R. (1994), “On Gibbs sampling for state space models,” Biometrika,
81, 3, 541-553.
Case, K. E. and Shiller, R. J. (1989), “The Efficiency of the Market for Single-Family
Homes,” American Economic Review, 79(1), 125-37.
Chang, Y., Miller, J.I. and Park, J.Y. (2008), “Extracting a Common Stochastic Trend:
Theories with Some Applications,” Journal of Econometrics, forthcoming.
Clayton, J. (1998), “Further Evidence on Real Estate Market Efficiency,” Journal of Real
Estate Research, 15(1/2), 41-57.
DeVol, R. C., and Wong, P., Catapano, J. and Robitshek, G. (1999), America’s High-Tech
Economy: Growth, Development, and Risks for Metropolitan Areas, Milken Institute.
Engle, R. F. and Granger, C. W. J. (1987), “Co-integration and error-correction: Repre-
sentation, estimation and testing,” Econometrica, 55, 251-276.
Friedman, M. and Schwartz, A. J. (1963), “Money and Business Cycles,” Review of Eco-
nomics and Statistics, 45, 32-64.
Fruhwirth-Schnatter, S. (1994), “Data Augmentation and Dynamic Linear Models,” Journal
of Time Series Analysis, 15, 183-202.
Gamerman, D. and Lopes, H. (2006), Markov Chain Monte Carlo: Stochastic Simulation
for Bayesian Inference, 2nd ed., Chapman & Hall/CRC, Taylor & Francis Group.
Gordon, Robert J. (2000), “Does the ’New Economy’ Measure up to the Great Inventions
of the Past?” Journal of Economic Perspectives, 14(4), 49-74.
Harding, D. and Pagan, A. (2002), “Dissecting the cycle: a methodological investigation,”
Journal of Monetary Economics, 49, 365-381.
Harvey, A.C., Trimbur, T.M. and van Dijk, H.K. (2007), “Trends and Cycles in Economic
Time Series: A Bayesian Approach,” Journal of Econometrics, 140, 618-649.
Kim, C.-J. and Nelson, C.R. (1998), “Business Cycle Turning Points, a New Coincident
Index, and Tests of Duration Dependence Based on a Dynamic Factor Model with
Regime Switching,” Review of Economics and Statistics, 80, 188-201.
Kim, C.-J. and Nelson, C.R. (1999), State-Space Models with Regime Switching, Cambridge,
MA, MIT Press.
Koop, G., León-González, R. and Strachan, R. W. (2008), “Bayesian inference in a cointe-
grating panel data model,” Advances in Econometrics, forthcoming.
Litan, R. E. (2001), “Projecting the Economic Impact of the Internet,” American Eco-
nomic Review, Papers and Proceedings, 91(2).
Litterman, R.B. (1986),“Forecasting with Bayesian Vector Autoregressions - Five Years’
of Experience,” Journal of Business and Economic Statistics, 4, 2-38.
Nelson, C.R. and Plosser, C. (1982), “Trends and Random Walks in Macroeconomic Time
Series: Some Evidence and Implications,” Journal of Monetary Economics, 10, 139-
162.
Del Negro, M. and Otrok, C. (2007), “99 Luftballons: Monetary Policy and the House
Price Boom across U.S. States,” Journal of Monetary Economics, 54, 1962-1985.
Park, J.K. (1990),“Disequilibrium Imputs Analysis,” mimeographed, Department of Eco-
nomics, Cornell University.
Romer, P. M. (1990),“Endogenous technological change,” Journal of Political Economy,
Vol. 98, 71-101.
Shephard, N. (1994), “Partial non-Gaussian State Space,” Biometrika, 81, 115-131.
Sims, C.A. (1980), “Macroeconomics and Reality,” Econometrica, 48, 1-48.
Stock, J.H. and Watson, M.W. (1991),“A Probability Model of the Coincident Economic
Indicators,” in Leading Economic Indicators: New Approaches and Forecasting Records,
ed. Lahiri, K. and Moore, G.H., Cambridge University Press, 63-89.
Urga, G. (2007),“Common Features in Economics and Finance: An Overview of Recent
Developments, ” Journal of Business and Economic Statistics, Vol. 25, No. 1.
Yang, R. and Berger, J.O. (1994), “A Catalog of Noninformative Priors,” Duke University,
http://www.stat.duke.edu/~berger/papers/catalog.html.
Zellner, A., Hong, C., and Gulati, G. (1990), “Turning points in economic time series,
loss structures, and Bayesian forecasting,” in Geisser, S., et al. (Eds.), Bayesian and
Likelihood Methods in Statistics and Econometrics, North-Holland, Elsevier Science
Publishers, Amsterdam.