9bWJ4riXFBGGECh12 Autocorrelation


  • 8/3/2019 9bWJ4riXFBGGECh12 Autocorrelation

    1/17

    Gujarati(2003): Chapter 12


    Definition of Autocorrelation

We assumed of the CLRM's errors that Cov(u_i, u_j) = 0 for i ≠ j, i.e. the errors are not serially correlated, or: no autocorrelation.

    This is essentially the same as saying there is no pattern in the errors.

    Obviously we never have the actual u's, so we use their sample counterpart, the residuals (the û's).

    If there are patterns in the residuals from a model, we say that they are autocorrelated.

    Some stereotypical patterns we may find in the residuals are given on the next three slides.


    Positive Autocorrelation

[Figure: û_t plotted against û_{t−1} and against time. Under positive autocorrelation the residuals form long runs of the same sign, changing sign only occasionally.]


    Negative Autocorrelation

[Figure: û_t plotted against û_{t−1} and against time. Under negative autocorrelation the residuals tend to alternate in sign from one period to the next.]


    No pattern in residuals

    No autocorrelation

[Figure: û_t plotted against û_{t−1} and against time. With no autocorrelation the residuals are randomly scattered, with no discernible pattern.]


Definition: First-Order Autocorrelation, AR(1)

    For the model Y_t = β1 + β2 X_2t + u_t, t = 1, …, T, suppose Cov(u_t, u_s) = E(u_t u_s) ≠ 0 for t ≠ s, and

    u_t = ρ u_{t−1} + v_t

    where −1 < ρ < 1 (ρ: rho) and v_t ~ iid(0, σ_v²) (white noise), i.e.

    E(v_t) = 0, var(v_t) = σ_v², cov(v_t, v_s) = 0 for t ≠ s.

    This scheme is called first-order autocorrelation and is denoted AR(1).

    Autoregressive: u_t can be explained by itself lagged one period.

    ρ (rho): the first-order autocorrelation coefficient, or coefficient of autocovariance.
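The AR(1) error scheme can be simulated directly; a minimal sketch in Python (NumPy assumed available), with ρ, σ_v, and the sample size chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
T, rho = 500, 0.7                 # rho: first-order autocorrelation coefficient (illustrative value)
v = rng.normal(0.0, 1.0, T)       # white noise: v_t ~ iid(0, sigma_v^2), here sigma_v = 1
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + v[t]  # AR(1): u_t = rho * u_{t-1} + v_t

# The sample correlation between u_t and u_{t-1} should be close to rho.
r1 = np.corrcoef(u[1:], u[:-1])[0, 1]
```

With ρ near 1 the series shows the long same-signed runs pictured in the positive-autocorrelation slide; with ρ near −1 it alternates.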


Autocorrelation AR(1):

    Cov(u_t, u_{t−1}) > 0  ⇒  0 < ρ < 1 (positive autocorrelation)
    Cov(u_t, u_{t−1}) < 0  ⇒  −1 < ρ < 0 (negative autocorrelation)


Example of serial correlation:

    Consumption_t = β1 + β2 Income_t + error_t

    Year | Consumption | Income | Error term
    1973 |   230 |  320 | u_1973
     …   |   …   |  …   |  …
    1995 |   558 |  714 | u_1995
    1996 |   699 |  822 | u_1996
    1997 |   881 |  907 | u_1997
    1998 |   925 | 1003 | u_1998
    1999 |   984 | 1174 | u_1999
    2000 |  1072 | 1246 | u_2000

    The error term represents other factors that affect consumption, for example the tax rate. The current year's tax rate may be determined by the previous year's rate:

    TaxRate_2000 = ρ TaxRate_1999 + v_2000

    i.e. u_t = ρ u_{t−1} + v_t, with v_t ~ iid(0, σ_v²).


The consequences of serial correlation (same as those of heteroscedasticity):

    1. The estimated coefficients are still unbiased: E(β̂_k) = β_k.

    2. The variance of β̂_k is no longer the smallest, so the OLS estimators are not BLUE.

    3. The standard error of the estimated coefficient, Se(β̂_k), will be biased; therefore the t and F tests are not valid.

    Therefore, when the regression has AR(1) errors, the estimators are not BLUE and the t and F tests are invalid.


    Detecting Autocorrelation:

    The Durbin-Watson Test

The Durbin-Watson (DW) test is a test for first-order autocorrelation, i.e. it assumes that the relationship is between an error and the previous one:

    u_t = ρ u_{t−1} + v_t   (1)

    where v_t ~ N(0, σ_v²). The DW test statistic actually tests

    H0: ρ = 0  against  H1: ρ ≠ 0

    The test statistic is calculated as

    DW = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ_{t=1}^{T} û_t²
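The DW statistic is straightforward to compute from a residual series; a minimal sketch (the function name is my own):

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum_{t=2}^T (e_t - e_{t-1})^2 / sum_{t=1}^T e_t^2."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
```

A constant residual series gives DW = 0 (extreme positive autocorrelation), while a strictly alternating series pushes DW toward 4.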


    Detecting Autocorrelation:

    The Durbin-Watson Test

We can also write

    DW ≈ 2(1 − ρ̂)   (2)

    where ρ̂ is the estimated correlation coefficient. Since ρ̂ is a correlation, −1 ≤ ρ̂ ≤ 1.

    Rearranging for DW from (2) would give 0 ≤ DW ≤ 4.

    If ρ̂ = 0, DW = 2. So, roughly speaking, do not reject the null hypothesis if DW is near 2, i.e. there is little evidence of autocorrelation.

    Unfortunately, DW has two critical values, an upper critical value (du) and a lower critical value (dL), and there is also an intermediate region where we can neither reject nor not reject H0.


Durbin-Watson test (cont.)

    DW = Σ_{t=2}^{T} (û_t − û_{t−1})² / Σ_{t=1}^{T} û_t² ≈ 2(1 − ρ̂)   (d)

    d = 2(1 − ρ̂)  ⇒  ρ̂ = 1 − d/2

    Since −1 ≤ ρ̂ ≤ 1, this implies 0 ≤ d ≤ 4.
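The rearrangement d = 2(1 − ρ̂) gives a one-line conversion from the DW statistic to the implied ρ̂; a sketch (function name my own):

```python
def rho_hat_from_dw(d):
    """Invert d = 2 * (1 - rho_hat): rho_hat = 1 - d / 2."""
    return 1.0 - d / 2.0
```

At d = 2 this returns ρ̂ = 0 (no autocorrelation); at d = 0 it returns ρ̂ = 1, and at d = 4 it returns ρ̂ = −1.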


    The Durbin-Watson Test

Conditions which must be fulfilled for DW to be a valid test:
    1. Constant term in the regression
    2. Regressors are non-stochastic
    3. No lags of the dependent variable


Ex: How to detect autocorrelation? Gujarati (2003), Table 12.4, p. 460.


Run OLS on û_t = ρ û_{t−1} + v_t and check the t-value of the coefficient.

    From the regression result, DW = 0.1229, so ρ̂ ≈ 1 − 0.1229/2 = 0.9385.
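The auxiliary regression of û_t on û_{t−1} (no intercept) has a closed-form OLS slope; a minimal sketch (function name my own):

```python
import numpy as np

def estimate_rho(resid):
    """OLS slope of resid_t on resid_{t-1} without an intercept:
    rho_hat = sum(u_t * u_{t-1}) / sum(u_{t-1}^2)."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
```

In practice one would also inspect the t-value of this slope, as the slide suggests, rather than the point estimate alone.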


From the OLS regression result: d, or DW* = 0.1229.

    Check the DW statistic table (at the 5% level of significance, k = 1, n = 40):

    dL = 1.246, du = 1.344

    H0: no autocorrelation (ρ = 0)
    H1: autocorrelation exists (ρ ≠ 0, or ρ > 0: positive autocorrelation)

    Durbin-Watson autocorrelation test:

    0 < DW* = 0.1229 < dL = 1.246 < du = 1.344 < 2

    Since DW* = 0.1229 < dL = 1.246, it falls in the "reject H0" region: positive autocorrelation.
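The decision rule with the two critical values can be written out explicitly; a sketch (function name and return labels my own), covering the positive-autocorrelation side and its mirror image at 4 − d for the negative side:

```python
def dw_decision(dw, d_lower, d_upper):
    """Classify a DW statistic against the lower/upper critical values."""
    if dw < d_lower:
        return "reject H0: positive autocorrelation"
    if dw < d_upper:
        return "inconclusive"
    if dw <= 4.0 - d_upper:
        return "do not reject H0"
    if dw < 4.0 - d_lower:
        return "inconclusive"
    return "reject H0: negative autocorrelation"
```

With the slide's values (dL = 1.246, du = 1.344), `dw_decision(0.1229, 1.246, 1.344)` returns "reject H0: positive autocorrelation".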


    Example 2:

ÛM_t = 23.1 − 0.078 CAP_t − 0.146 CAP_{t−1} + 0.043 T_t
           (15.6)   (2.0)         (3.7)          (10.3)

    R² = 0.78, F = 78.9, σ̂_u = 0.677, RSS = 29.3, DW = 0.23, n = 68

    (i) k = 3 (number of independent variables)
    (ii) n = 68, significance level α = 0.05 or 0.01
    (iii) α = 0.05: dL = 1.525, du = 1.703
          α = 0.01: dL = 1.372, du = 1.546

    Since DW = 0.23 < dL at both significance levels, reject H0: positive autocorrelation exists.