Forecasting with Regression Models

Ka-fu Wong, University of Hong Kong



Linear regression models

Endogenous variable: the variable to be forecast.

Exogenous variables (explanatory variables): the regressors, taken as given.

Rule, rather than exception: all variables are endogenous.
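For concreteness, a minimal sketch of such a model (generic notation, not from the slides):

y_t = β_0 + β_1 x_t + ε_t,  ε_t ~ iid N(0, σ²)

Here y is the endogenous variable to be forecast and x is an exogenous explanatory variable.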


Conditional forecasting

The h-step-ahead forecast of y given some assumed h-step-ahead value of the exogenous variable, x_{T+h}. Because the forecast rests on an assumed path for the exogenous variables, it is also called scenario analysis or contingency analysis.
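Under the simple model above, a sketch of the conditional forecast, writing x*_{T+h} for the assumed scenario value (notation introduced here):

ŷ_{T+h|T} = β_0 + β_1 x*_{T+h}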


Uncertainty of Forecast

Specification uncertainty / error: our models are only approximations (since no one knows the truth). E.g., we adopt an AR(1) model but the truth is AR(2). Almost impossible to account for via a forecast interval.

Parameter uncertainty / sampling error: parameters are estimated from a data sample, so the estimates always differ from the truth; the difference is called sampling error. It can be accounted for via a forecast interval if we do the calculation carefully.

Innovation uncertainty: errors that cannot be avoided even if we know the true model and true parameters. This is unavoidable, and standard software routinely accounts for it via a forecast interval.


Quantifying the innovation and parameter uncertainty

Consider the very simple case in which x has a zero mean:
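The slide's equations were images; a standard reconstruction of this case (my notation) is:

y_t = β x_t + ε_t,  ε_t ~ iid N(0, σ²)

Point forecast: ŷ_{T+h} = β̂ x*_{T+h}
Forecast error: y_{T+h} − ŷ_{T+h} = ε_{T+h} + (β − β̂) x*_{T+h}
Error variance: σ² + (x*_{T+h})² var(β̂), where var(β̂) = σ² / Σ_{t=1..T} x_t²

The first term is innovation uncertainty; the second is parameter uncertainty.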


Density forecast that accounts for parameter uncertainty

Combining the two sources of uncertainty above, the density forecast (reconstructed from the setup of the previous slide) is

y_{T+h} ~ N( β̂ x*_{T+h}, σ̂² [1 + (x*_{T+h})² / Σ_{t=1..T} x_t²] )


Interval forecasts that do not acknowledge parameter uncertainty
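A sketch of such an interval, using the model above and dropping the parameter-uncertainty term (z_{α/2} ≈ 1.96 for 95% coverage):

β̂ x*_{T+h} ± z_{α/2} σ̂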


Interval forecasts that do acknowledge parameter uncertainty

The closer x*_{T+h} is to its mean, the smaller the prediction-error variance.
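The corresponding sketch of an interval that does include the parameter-uncertainty term:

β̂ x*_{T+h} ± z_{α/2} σ̂ √(1 + (x*_{T+h})² / Σ_{t=1..T} x_t²)

The (x*_{T+h})² term is what makes the interval narrowest when x*_{T+h} is at its (zero) mean.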


Unconditional Forecasting Models

Forecasts are based on an auxiliary model for x as well, say, by assuming x follows an AR(1).
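A sketch: if x_t = φ x_{t-1} + ν_t, then x̂_{T+h|T} = φ^h x_T, and with y_t = β_0 + β_1 x_t + ε_t the unconditional forecast replaces the assumed scenario value with this model-based forecast:

ŷ_{T+h|T} = β_0 + β_1 φ^h x_T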


h-step-ahead forecast without modeling x explicitly (based on unconditional forecasting models)

Standing at time T, with observations (x_1, y_1), (x_2, y_2), …, (x_T, y_T):

1-step-ahead: y_t = β_0 + β_1 x_{t-1} + ε_t, giving ŷ_{T+1} = β̂_0 + β̂_1 x_T

2-step-ahead: y_t = β_0 + β_1 x_{t-2} + ε_t, giving ŷ_{T+2} = β̂_0 + β̂_1 x_T

…

h-step-ahead: y_t = β_0 + β_1 x_{t-h} + ε_t, giving ŷ_{T+h} = β̂_0 + β̂_1 x_T

Each horizon uses its own regression of y_t on x_{t-h}, so the forecast at every horizon depends only on the observed x_T.
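A minimal sketch of this direct h-step regression in Python, on synthetic data (all names illustrative, not from the slides):

import numpy as np

rng = np.random.default_rng(0)
T, h = 200, 3
x = rng.standard_normal(T)
# Synthetic y_t that depends on x_{t-h} (np.roll wraps the first h values,
# so those observations are excluded from the regression below).
y = 1.0 + 0.5 * np.roll(x, h) + 0.3 * rng.standard_normal(T)

# OLS of y_t on a constant and x_{t-h}, over the usable sample t = h, ..., T-1.
X = np.column_stack([np.ones(T - h), x[:-h]])
beta, *_ = np.linalg.lstsq(X, y[h:], rcond=None)  # (beta0_hat, beta1_hat)

# The h-step-ahead forecast from time T uses only the observed x_T.
y_hat = beta[0] + beta[1] * x[-1]
print(beta, y_hat)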


h-step-ahead forecast without modeling x explicitly (based on unconditional forecasting models)

Special case: the model contains only time trends and seasonal components. Because these components are perfectly predictable, no auxiliary forecast of the regressors is needed.
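A sketch of such a model for monthly data (notation mine):

y_t = β_0 + β_1 t + Σ_{s=2..12} γ_s D_{s,t} + ε_t

Both the trend t and the seasonal dummies D_{s,t} are known exactly for any future period T+h, so ŷ_{T+h} requires no forecast of the regressors.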


Distributed Lags

y depends on a distributed lag of past x's.

Parameters to be estimated: β_0, δ_0, δ_1, …, δ_{N_x}
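The model itself, reconstructed in the standard form (the slide's equation was an image):

y_t = β_0 + δ_0 x_t + δ_1 x_{t-1} + … + δ_{N_x} x_{t-N_x} + ε_t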


Polynomial Distributed Lags

Parameters to be estimated: β_0, a, b, c
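In the polynomial distributed lag, the lag weights δ_i of the model above are restricted to a low-order polynomial; in the quadratic case (a standard reconstruction):

δ_i = a + b·i + c·i²,  i = 0, 1, …, N_x

so only a, b, c (and β_0) are estimated instead of all N_x + 1 lag weights.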


Rational Distributed Lags

Example: A(L) = a_0 + a_1 L, B(L) = b_0 + b_1 L. The model is y_t = [A(L)/B(L)] x_t + ε_t; multiplying through by B(L):

b_0 y_t + b_1 y_{t-1} = a_0 x_t + a_1 x_{t-1} + b_0 ε_t + b_1 ε_{t-1}

y_t = [−b_1 y_{t-1} + a_0 x_t + a_1 x_{t-1} + b_0 ε_t + b_1 ε_{t-1}] / b_0

y_t = [−b_1/b_0] y_{t-1} + [a_0/b_0] x_t + [a_1/b_0] x_{t-1} + ε_t + [b_1/b_0] ε_{t-1}


Regression model with AR(1) disturbance
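Reconstructed in standard form (the slide's equations were images):

y_t = β_0 + β_1 x_t + u_t
u_t = φ u_{t-1} + ε_t,  ε_t ~ iid N(0, σ²),  |φ| < 1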


An ARMA(p,q) model is equivalent to a regression model with only a constant regressor and ARMA(p,q) disturbances.
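In symbols (notation mine): y_t = c + u_t with ARMA(p,q) disturbance u_t, i.e. Φ(L) u_t = Θ(L) ε_t, describes exactly the same process as the ARMA(p,q) model Φ(L)(y_t − c) = Θ(L) ε_t.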


Transfer function models

A transfer function is a mathematical representation of the relation between the input and output of a system.
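In this context (a reconstruction of the slide's image), the transfer function model combines a rational distributed lag in x with a rational lag in the disturbance:

y_t = [A(L)/B(L)] x_t + [C(L)/D(L)] ε_t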


Vector Autoregressions, VAR(p): allows cross-variable dynamics

VAR(1) of two variables.

The variable vector consists of two elements.

Regressors consist of the variable vector lagged one period only.

The innovations are allowed to be correlated.
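Written out, with notation the later slides refer back to (a reconstruction):

y_{1,t} = φ_{11} y_{1,t-1} + φ_{12} y_{2,t-1} + ε_{1,t}
y_{2,t} = φ_{21} y_{1,t-1} + φ_{22} y_{2,t-1} + ε_{2,t}

where cov(ε_{1,t}, ε_{2,t}) may be nonzero.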


Estimation of Vector Autoregressions

Run OLS regressions equation by equation.

OLS estimation turns out to have very good statistical properties when each equation has the same regressors, as in standard VARs. Otherwise, a more complicated estimation procedure called seemingly unrelated regression, which explicitly accounts for correlation across equation disturbances, would be needed to obtain estimates with good statistical properties.
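A sketch of this equation-by-equation OLS approach in Python using statsmodels, whose VAR class estimates in exactly this way (synthetic data; names illustrative):

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Two synthetic stand-in series (e.g., starts and completions).
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((300, 2)), columns=["y1", "y2"])

model = VAR(data)
res = model.fit(4)     # VAR(4): each equation is OLS on 4 lags of both variables
print(res.summary())

# Iterated multi-step forecasts, built recursively as on a later slide.
fc = res.forecast(data.values[-4:], steps=10)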


The choice of order in estimation of Vector Autoregressions

Use AIC and SIC.
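One standard multivariate form of these criteria, for a VAR with k estimated parameters, sample size T, and residual covariance estimate Σ̂ (a textbook definition, not from the slides):

AIC = ln|Σ̂| + 2k/T
SIC = ln|Σ̂| + k ln(T)/T

Fit VARs of order p = 1, 2, … and choose the p that minimizes the criterion.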


Forecasting with estimated Vector Autoregressions

Given the parameters, or parameter estimates, forecasts are built up recursively, each step feeding the next:

y_{1,T}, y_{2,T} → ŷ_{1,T+1}, ŷ_{2,T+1}
ŷ_{1,T+1}, ŷ_{2,T+1} → ŷ_{1,T+2}, ŷ_{2,T+2}
ŷ_{1,T+2}, ŷ_{2,T+2} → ŷ_{1,T+3}, ŷ_{2,T+3}


Predictive Causality

Two principles:
Cause should occur before effect.
A causal series should contain information useful for forecasting that is not available in the other series.

Predictive Causality in a VAR

In the bivariate VAR(1) written out above, y2 does not cause y1 if φ_12 = 0.

In a bivariate VAR, noncausality in 1-step-ahead forecast will imply noncausality in h-step-ahead forecast.


Predictive Causality

In a VAR of higher dimension, noncausality in the 1-step-ahead forecast need not imply noncausality in the h-step-ahead forecast. Example:

Variable i may 1-step-cause variable j.
Variable j may 1-step-cause variable k.
Variable i then 2-step-causes variable k even though it does not 1-step-cause variable k.


Impulse response functions

All covariance-stationary univariate ARMA(p,q) processes can be written in moving-average form:

We can always normalize the innovations with a constant m:
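Reconstructing the two equations (shown as images on the slide): the MA(∞), or Wold, form is

y_t = Σ_{i=0..∞} b_i ε_{t-i}, with b_0 = 1 and ε_t white noise with variance σ²,

and multiplying and dividing each term by a constant m leaves y_t unchanged:

y_t = Σ_i (b_i m)(ε_{t-i} / m) = Σ_i b′_i ε′_{t-i}, where b′_i = b_i m and ε′_t = ε_t / m.

Taking m = σ puts the innovations ε′_t in standard-deviation units, which the next slide exploits.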


Impulse response functions

A one-unit increase in ε′_t is equivalent to a one-standard-deviation increase in ε_t.

A one-unit increase in ε′_t has impact b′_0 on y_t.

A one-standard-deviation increase in ε_t thus has impact b′_0 = b_0 σ on y_t, b′_1 = b_1 σ on y_{t+1}, etc.

Impact of ε_t on y_{t+j}: b′_j = b_j σ.


AR(1)
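A sketch for the AR(1) case, y_t = φ y_{t-1} + ε_t: its MA(∞) form is y_t = Σ_i φ^i ε_{t-i}, so b_j = φ^j and the response of y_{t+j} to a one-standard-deviation shock is φ^j σ, decaying geometrically in j.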


VAR(1)
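The analogous sketch for the VAR(1), y_t = Φ y_{t-1} + ε_t, with y_t and ε_t vectors and Φ a coefficient matrix: y_t = Σ_i Φ^i ε_{t-i}, so the response at horizon j is the matrix Φ^j, whose (i,k) element traces the effect of a shock to variable k on variable i.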


Normalizing the VAR by the Cholesky factor

If y1 is ordered first,

Example: y1 = GDP, y2 = Price level

An innovation to GDP has effects on both current GDP and the current price level. An innovation to the price level has effects only on the current price level, not on current GDP.
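The standard construction behind this (not spelled out on the slide): factor the innovation covariance as Σ = P P′ with P lower triangular, and define normalized shocks e_t = P⁻¹ ε_t, which have identity covariance. The lower-triangular structure of P is what prevents e_{2,t} (here, the price-level shock) from moving y1 (GDP) contemporaneously when y1 is ordered first.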


Features of Cholesky decomposition

The innovations of the transformed system are in standard deviation units.

The current innovations in the normalized representation can have non-unit coefficients.

The first equation has only one current innovation, e1,t. The second equation has both current innovations.

The normalization yields a zero covariance between the innovations.


Normalizing the VAR by the Cholesky factor

If y2 is ordered first,

Example: y1 = GDP, y2 = Price level

An innovation to the price level has effects on both current GDP and the current price level. An innovation to GDP has effects only on current GDP, not on the current price level.


Impulse response functions

With a bivariate autoregression, we can compute four sets of impulse-response functions:

y1 innovations (ε_{1,t}) on y1
y1 innovations (ε_{1,t}) on y2
y2 innovations (ε_{2,t}) on y1
y2 innovations (ε_{2,t}) on y2


Variance decomposition

How much of the h-step-ahead forecast error variance of variable i is explained by innovations to variable j, for h=1,2,…. ?

With a bivariate autoregression, we can compute four sets of variance decompositions:

y1 innovations (ε_{1,t}) on y1
y1 innovations (ε_{1,t}) on y2
y2 innovations (ε_{2,t}) on y1
y2 innovations (ε_{2,t}) on y2
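Both exercises in Python with statsmodels, continuing the VAR sketch from the estimation slide (res is the fitted model there):

irf = res.irf(36)       # impulse-response functions, 36 steps ahead
irf.plot(orth=True)     # orthogonalized (Cholesky) responses, in the fitted variable order

fevd = res.fevd(36)     # forecast-error variance decomposition
print(fevd.summary())   # share of each h-step forecast-error variance due to each shock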


Example: y1 = Housing starts, y2 = Housing completions (1968:01 – 1996:06)

Observation #1: Seasonal pattern.

Observation #2: Highly cyclical, moving with the business cycle.

Observation #3: Completions lag starts.

group fig112 starts comps
freeze(Figure112) fig112.line(d)


Correlogram and Ljung-Box Statistics of housing starts (1968:01 to 1991:12)

freeze(Table112) starts.correl(24)


Correlogram and Ljung-Box Statistics of housing starts (1968:01 to 1991:12)


Correlogram and Ljung-Box Statistics of housing completions (1968:01 to 1991:12)

freeze(Table113) comps.correl(24)


Correlogram and Ljung-Box Statistics of housing completions (1968:01 to 1991:12)


Starts and completions, sample cross-correlations

freeze(Figure115) fig112.cross(24) starts comps


VAR regression by OLS (1)

equation Table114.ls starts c starts(-1) starts(-2) starts(-3) starts(-4) comps(-1) comps(-2) comps(-3) comps(-4)



VAR regression by OLS (2)

equation Table116.ls comps c starts(-1) starts(-2) starts(-3) starts(-4) comps(-1) comps(-2) comps(-3) comps(-4)



Predictive causality test

group tbl108 comps starts
freeze(Table118) tbl108.cause(4)


Impulse response functions (responses to one-standard-deviation innovations)

var fig1110.ls 1 4 starts comps
freeze(Figure1110) fig1110.impulse(36,m)


Variance decomposition

freeze(Figure1111) fig1110.decomp(36,m)


Starts: History, 1968:01-1991:12; Forecast, 1992:01-1996:06


Starts: History, 1968:01-1991:12; Forecast and Realization, 1992:01-1996:06


Completions: History, 1968:01-1991:12; Forecast, 1992:01-1996:06


Completions: History, 1968:01-1991:12; Forecast and Realization, 1992:01-1996:06


End