
    M320

    Cross Section and Panel Data Econometrics

    Topic 2: Generalized Method of Moments

    Part II: The Linear Model

    Dr. Melvyn Weeks

    Faculty of Economics and Clare College

    University of Cambridge


    Outline

    Outline I

1   IV and OLS Estimators: Revision
    Small Sample Issues

2   IV and 2SLS Estimators
    Just Identified
    Over Identified: The Generalised IV Estimator

3   GMM Estimators for the Linear Model
    Moment-Based Estimation
    Types of GMM Estimators

4   Summary
    GMM Estimators in the Linear Model


    Outline

    Readings

Cameron, A. C., and P. K. Trivedi (2005). Microeconometrics: Methods and Applications. Cambridge University Press. Chapter 6.

Cameron, A. C., and P. K. Trivedi (2009). Microeconometrics Using Stata. Stata Press. URL: http://www.stata.com/bookstore/mus.html

Hayashi, F. (2000). Econometrics. Princeton University Press, Princeton.

Wooldridge, J. M. (2001). Applications of Generalized Method of Moments Estimation. Journal of Economic Perspectives, 15(4), 87-100.


    Outline

    Part II: GMM and Linear Models

    1 The Linear Model

    IV: Finite and Large Sample Properties (Review)

IV as MOM
2SLS [Generalised IV (GIVE)] as GMM


    Outline

    Road Map

In Generalized Method of Moments - Part II: The Linear Model, we consider gmm as a canonical estimator.

In doing this we show how ols, iv and 2sls are special cases of the more general gmm estimator.

We show how just-identified and over-identified models may be represented as moment estimators. In the just-identified case we also explicitly show that the gmm criterion function may be set to zero, whereas in the over-identified case we minimise the distance from zero.

We also provide some background material on the small-sample properties of the iv and 2sls estimators.


    IV and OLS Estimators: Revision

We first provide a brief overview of the ols and iv estimators.

We consider the unbiasedness of the ols estimator under certain conditions and the small sample bias of the iv estimator.

We show why it is difficult to obtain the small-sample mean of the iv estimator.

This leads us to the potential small (and large) sample bias that might be induced by weak instruments.


    IV and OLS Estimators: Revision

    OLS

    Proposition

If $E[X'\varepsilon] = 0$, ols is an unbiased estimator of $\beta$.

    Proof.

\[
\begin{aligned}
\hat{\beta} &= (X'X)^{-1}X'y \\
E[\hat{\beta}] &= \beta + E[(X'X)^{-1}X'\varepsilon] \\
&= \beta + (X'X)^{-1}E[X'\varepsilon] \qquad (1)
\end{aligned}
\]


    IV and OLS Estimators: Revision

    Proposition

If $E[X'\varepsilon] \neq 0$, the ols estimator is a biased and inconsistent estimator of $\beta$.

    Proof.

\[
\begin{aligned}
\hat{\beta} &= (X'X)^{-1}X'y \\
E[\hat{\beta}] &= \beta + E[(X'X)^{-1}X'\varepsilon] \\
&= \beta + E[(X'X)^{-1}X'\tau(X)] \neq \beta
\end{aligned}
\]
for $E[\varepsilon\,|\,X] = \tau(X)$.

Question: why can $E[(X'X)^{-1}X'\tau(X)]$ not be factored, as in (1)?
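The bias can be made visible in a small simulation of our own (the data-generating process and all names below are illustrative, not from the lecture): with an endogenous regressor the OLS slope does not centre on the true $\beta$.

```python
# Illustrative simulation (not from the lecture): OLS bias when E[eps | X] != 0.
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0          # true slope
N, R = 500, 2000    # sample size, Monte Carlo replications

ols_estimates = []
for _ in range(R):
    # Endogenous regressor: X and eps share a common component v,
    # so E[eps | X] is a nonzero function of X.
    v = rng.normal(size=N)
    x = v + rng.normal(size=N)
    eps = v + rng.normal(size=N)
    y = beta * x + eps
    ols_estimates.append(x @ y / (x @ x))   # OLS slope (no intercept in the DGP)

print("true beta:", beta)
print("mean OLS estimate:", np.mean(ols_estimates))  # noticeably above 1
```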


    IV and OLS Estimators: Revision

    The IV Estimator

\[
\begin{aligned}
\hat{\beta}_{IV} &= (Z'X)^{-1}Z'(X\beta + \varepsilon) \\
&= (Z'X)^{-1}Z'X\beta + (Z'X)^{-1}Z'\varepsilon
\end{aligned}
\]

    Proposition

    The IV estimator is biased in small samples.

Can we utilise the same sort of proof as used for the unbiasedness of the ols estimator?

\[
\begin{aligned}
E[\hat{\beta}_{IV}] &= E[(Z'X)^{-1}Z'X\beta + (Z'X)^{-1}Z'\varepsilon] \\
&= \beta + E_{X,Z,\varepsilon}[(Z'X)^{-1}Z'\varepsilon] \qquad (2) \\
&= \beta + E_{X,Z}\big[(Z'X)^{-1}Z'\,E[\varepsilon\,|\,Z,X]\big] \qquad (3) \\
&\neq \beta + (Z'X)^{-1}E[Z'\varepsilon]
\end{aligned}
\]


    IV and OLS Estimators: Revision

Note that the unconditional expectation wrt $E_{X,Z,\varepsilon}[\cdot]$ in (2) is obtained by first taking expectations wrt $\varepsilon$ given $Z, X$ in (3).

What if we imposed $E[\varepsilon\,|\,Z,X] = 0$?

But this is no use since it implies $E[\varepsilon\,|\,X] = 0$, thereby negating the requirement for an instrument in the first place.

What if we exploit
\[
\hat{\beta}_{IV} = \beta + \left(N^{-1}Z'X\right)^{-1}N^{-1}Z'\varepsilon
\]


    IV and OLS Estimators: Revision

    In what way might large sample arguments help us here?

Let's consider the following simple case, where we make the assumption that $E[\varepsilon_i\,|\,X_i] \neq 0$.

A single instrument $Z_i$ is available.

\[
Y_i = \beta_1 + \beta_2 X_i + \varepsilon_i
\]
\[
\hat{\beta}_{IV,2} = \frac{\sum_{i=1}^N (Z_i - \bar{Z})(Y_i - \bar{Y})}{\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})}
\]

Proposition: $\hat{\beta}_{IV,2}$ is consistent provided that $\sigma_{ZX}$ is nonzero.
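The consistency claim is easy to visualise in a short simulation (our own illustrative data-generating process, not part of the lecture): the ratio-of-covariances estimator above settles near $\beta_2$ as $N$ grows, provided the instrument is correlated with $X$ but not with $\varepsilon$.

```python
# Illustrative check (not from the lecture): the ratio-of-covariances IV
# estimator is consistent when sigma_Zeps = 0 and sigma_ZX != 0.
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2 = 0.5, 2.0   # true intercept and slope

def simulate(n):
    z = rng.normal(size=n)                    # instrument: independent of eps
    v = rng.normal(size=n)                    # endogeneity component
    x = 0.8 * z + v + rng.normal(size=n)      # X correlated with Z and with eps
    eps = v + rng.normal(size=n)
    y = beta1 + beta2 * x + eps
    return y, x, z

def iv_slope(y, x, z):
    # beta_IV,2 = sum (Z - Zbar)(Y - Ybar) / sum (Z - Zbar)(X - Xbar)
    zc = z - z.mean()
    return zc @ (y - y.mean()) / (zc @ (x - x.mean()))

for n in (100, 1_000, 100_000):
    y, x, z = simulate(n)
    print(n, iv_slope(y, x, z))   # approaches 2.0 as n grows
```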

    IV and OLS Estimators: Revision

\[
\begin{aligned}
\hat{\beta}_{IV,2} &= \frac{\sum_{i=1}^N (Z_i - \bar{Z})(Y_i - \bar{Y})}{\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})} \\
&= \frac{\sum_{i=1}^N (Z_i - \bar{Z})\big([\beta_1 + \beta_2 X_i + \varepsilon_i] - [\beta_1 + \beta_2\bar{X} + \bar{\varepsilon}]\big)}{\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})} \\
&= \frac{\sum_{i=1}^N \big(\beta_2 (Z_i - \bar{Z})(X_i - \bar{X}) + (Z_i - \bar{Z})(\varepsilon_i - \bar{\varepsilon})\big)}{\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})} \\
&= \beta_2 + \frac{\sum_{i=1}^N (Z_i - \bar{Z})(\varepsilon_i - \bar{\varepsilon})}{\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})}
\end{aligned}
\]

Nothing much can be said about the distribution of $\hat{\beta}_{IV,2}$ in small samples.

However, we observe that the iv estimator is equal to the true value plus an error, which under certain conditions will vanish as $N$ becomes large.


    IV and OLS Estimators: Revision

Divide the numerator and denominator by $N$ so that they both have limits; then we can take plims:
\[
\text{plim}\,\hat{\beta}_{IV,2}
= \beta_2 + \frac{\text{plim}\,\frac{1}{N}\sum_{i=1}^N (Z_i - \bar{Z})(\varepsilon_i - \bar{\varepsilon})}{\text{plim}\,\frac{1}{N}\sum_{i=1}^N (Z_i - \bar{Z})(X_i - \bar{X})}
= \beta_2 + \frac{\sigma_{Z\varepsilon}}{\sigma_{ZX}}
\]

Slutsky's theorem allows us (as opposed to the situation when taking expectations) to split the problem: we can then evaluate the limits of the numerator and denominator separately.


    IV and OLS Estimators: Revision Small Sample Issues

    Small Sample Issues

The iv estimator requires $\sigma_{\varepsilon Z} = 0$. In small samples $\varepsilon$ and $Z$ may be weakly correlated.

In this instance the iv estimator can have a large (asymptotic) bias even if the correlation is moderate.

To see this we write the probability limit of the iv estimator as

\[
\begin{aligned}
\text{plim}\,\hat{\beta}_{IV,2} &= \beta_2 + \frac{\text{Cov}(Z,\varepsilon)}{\text{Cov}(Z,X)} \qquad (4)\\
&= \beta_2 + \frac{\text{Corr}(Z,\varepsilon)}{\text{Corr}(Z,X)}\times\frac{\sigma_\varepsilon}{\sigma_X} \qquad (5)\\
&= \beta_2 + \frac{0}{\text{Cov}(Z,X)} \quad \text{if } \text{Cov}(Z,\varepsilon)=0 \qquad (6)
\end{aligned}
\]
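A small simulation of our own (illustrative parameter values, not from the lecture) makes the point: a modest correlation between $Z$ and $\varepsilon$ produces a large bias when the instrument is weak.

```python
# Illustrative weak-instrument simulation (not from the lecture).
import numpy as np

rng = np.random.default_rng(2)
beta2 = 1.0
N, R = 2_000, 1_000

def typical_iv(pi, rho):
    """Median IV slope when X = pi*Z + noise and eps = rho*Z + noise."""
    est = []
    for _ in range(R):
        z = rng.normal(size=N)
        eps = rho * z + rng.normal(size=N)     # instrument is mildly invalid
        x = pi * z + rng.normal(size=N)        # small pi => weak instrument
        y = beta2 * x + eps
        zc = z - z.mean()
        est.append(zc @ (y - y.mean()) / (zc @ (x - x.mean())))
    # median rather than mean: the small-sample distribution has heavy tails
    return np.median(est)

print("strong instrument (pi = 1.0):", typical_iv(1.0, 0.02))    # close to 1
print("weak instrument   (pi = 0.05):", typical_iv(0.05, 0.02))  # far from 1
```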


    IV and OLS Estimators: Revision Small Sample Issues

    Comments

1. In small samples it is not possible to say much about the distribution of $\hat{\beta}_{IV,2}$.

2. From (6) we observe that if $Z$ is distributed independently of $\varepsilon$ then $\hat{\beta}_{IV,2}$ is consistent for $\beta_2$.

3. In small (or even large) samples $\varepsilon$ and $Z$ may be weakly correlated.

   In this instance the iv estimator can have a large (asymptotic) bias even if the correlation is moderate.

   This will obviously depend upon the correlation between $Z$ and $X$; this is the problem of weak instruments.


    IV and OLS Estimators: Revision Small Sample Issues

    Comments - cont.

4. Even in the context of large sample results, it is not unequivocally better to use iv rather than ols.

   To see this compare (5) with the plim of the ols estimator $\hat{\beta}_{OLS,2}$, which we write as
   \[
   \text{plim}\,\hat{\beta}_{OLS,2} = \beta_2 + \text{Corr}(X,\varepsilon)\,\frac{\sigma_\varepsilon}{\sigma_X} \qquad (7)
   \]
   By finding a relationship between $\text{plim}\,\hat{\beta}_{OLS,2}$ and $\text{plim}\,\hat{\beta}_{IV,2}$, under what circumstances is iv preferred to ols on asymptotic grounds?
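One way to organise the comparison is to take the ratio of the two asymptotic biases implied by (5) and (7):
\[
\frac{\text{plim}\,\hat{\beta}_{IV,2} - \beta_2}{\text{plim}\,\hat{\beta}_{OLS,2} - \beta_2}
= \frac{\text{Corr}(Z,\varepsilon)}{\text{Corr}(Z,X)\,\text{Corr}(X,\varepsilon)},
\]
so that, on asymptotic grounds, iv is preferred to ols whenever $|\text{Corr}(Z,\varepsilon)| < |\text{Corr}(Z,X)|\,|\text{Corr}(X,\varepsilon)|$.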


    IV and 2SLS Estimators

Below we briefly review the iv approach to identification in the presence of exactly identified and over-identified models.

In the following section we show that the gmm estimator for the linear model is canonical in that it nests the iv (mom) and 2sls estimators as special cases.

We also show that in the case of an exactly identified model, the gmm criterion function can be set exactly to zero, just as the sum of residuals for the OLS estimator is exactly zero.

In this instance the sample realisation of the criterion function is not a random variable and it is not possible to test the exogeneity of instruments.


    IV and 2SLS Estimators

    Consider a first stage regression based on a linear combination of instruments

\[
\begin{aligned}
X &= Z\delta + u \qquad (8)\\
\hat{\delta} &= (Z'Z)^{-1}Z'X \\
\hat{X} &= P_Z X = Z(Z'Z)^{-1}Z'X
\end{aligned}
\]
Using $\hat{X}$ as instruments,
\[
\hat{\beta}_{IV} = (X'P_Z X)^{-1}X'P_Z y = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y \qquad (9)
\]
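A sketch of (8)-(9) in code (simulated data with our own variable names, not from the lecture): run the first stage, form $\hat{X} = P_Z X$, and use it as the instrument; the result matches the single matrix expression in (9).

```python
# Illustrative computation of the 2SLS / GIVE estimator (not from the lecture).
import numpy as np

rng = np.random.default_rng(3)
N = 1_000
beta = np.array([1.0, 2.0])                     # true coefficients (const, slope)

z1, z2 = rng.normal(size=N), rng.normal(size=N) # two instruments, one endogenous regressor
v = rng.normal(size=N)
x1 = 0.7 * z1 + 0.5 * z2 + v + rng.normal(size=N)
eps = v + rng.normal(size=N)
X = np.column_stack([np.ones(N), x1])           # k = 2 (includes constant)
Z = np.column_stack([np.ones(N), z1, z2])       # M = 3 > k: over-identified
y = X @ beta + eps

# First stage: Xhat = P_Z X = Z (Z'Z)^{-1} Z'X
delta = np.linalg.solve(Z.T @ Z, Z.T @ X)
Xhat = Z @ delta

# Second stage, two equivalent forms of (9)
b_proj = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)
A = X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
b_9 = np.linalg.solve(A, X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ y))

print(b_proj)                      # close to [1.0, 2.0]
print(np.allclose(b_proj, b_9))    # True
```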


    IV and 2SLS Estimators Just Identified

Just-identified: $M = k$. The estimator with $M = k$ ($M$ the number of columns of $Z$) is often called the Instrumental Variable Estimator.

$X'Z$ is a square matrix since $M = k$.

$(X'Z(Z'Z)^{-1}Z'X)^{-1}$ can then be decomposed as
\[
(Z'X)^{-1}(Z'Z)(X'Z)^{-1}, \qquad (10)
\]
such that the iv estimator may then be rewritten as
\[
\hat{\beta}_{IV} = (Z'X)^{-1}(Z'Z)(X'Z)^{-1}X'Z(Z'Z)^{-1}Z'y = (Z'X)^{-1}Z'y.
\]
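This collapse can be verified numerically; the sketch below (a simulated just-identified design, our own names, not from the lecture) evaluates both expressions.

```python
# Illustrative check (not from the lecture): with M = k the 2SLS formula (9)
# collapses to the simple IV estimator (Z'X)^{-1} Z'y.
import numpy as np

rng = np.random.default_rng(4)
N = 500
z = rng.normal(size=N)
v = rng.normal(size=N)
x = z + v + rng.normal(size=N)
eps = v + rng.normal(size=N)
X = np.column_stack([np.ones(N), x])          # k = 2
Z = np.column_stack([np.ones(N), z])          # M = 2 = k: just identified
y = X @ np.array([1.0, 2.0]) + eps

b_simple = np.linalg.solve(Z.T @ X, Z.T @ y)  # (Z'X)^{-1} Z'y
A = X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
b_2sls = np.linalg.solve(A, X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ y))

print(np.allclose(b_simple, b_2sls))          # True
```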


    IV and 2SLS Estimators Just Identified

In the just-identified case we observe that the weighting matrix $P_Z = Z(Z'Z)^{-1}Z'$ falls out.

In situations where the parameters $\beta$ are exactly identified we have just enough moment conditions to estimate $\beta$.

Consequence: the minimum of a generalised distance measure (GMM) is exactly zero - all sample moments can be set to zero by appropriate choice of $\beta$.

There is therefore no need to weight the individual moments in order to minimise a weighted sum.


    IV and 2SLS Estimators Over Identified: The Generalised IV Estimator

Over-identified: $M > k$. The estimator is called the 2sls or Generalised Instrumental Variable Estimator (give).

$X'Z$ is not a square matrix since $M > k$, so $(X'Z(Z'Z)^{-1}Z'X)^{-1}$ cannot be decomposed as in (10).

The iv estimator is then given by
\[
\hat{\beta}_{IV} = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y
\]


    GMM Estimators for the Linear Model Moment-Based Estimation

    IV as Moment Estimators

We have motivated the iv estimator based on a transformed linear regression model of the form $Z'y = Z'X\beta + Z'\varepsilon$.

Alternative derivation: minimise a quadratic form of vector moments (functions of parameters and data).

The gmm estimator is obtained by minimising a quadratic form in the analogous sample moments: $N^{-1}[Z'y - Z'X\beta]$.

Ignoring $N^{-1}$, the gmm estimator is defined as
\[
\hat{\beta}^{*} = \arg\min_{\beta}\;\big[(Z'y - Z'X\beta)'\,C_N\,(Z'y - Z'X\beta)\big] \qquad (11)
\]
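To fix ideas, the sketch below (simulated over-identified data, our own names, not from the lecture) minimises the quadratic form in (11) numerically and compares the result with the closed-form solution derived later in the slides.

```python
# Illustrative minimisation of the GMM criterion (11) (not from the lecture).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
N = 2_000
z1, z2, z3 = (rng.normal(size=N) for _ in range(3))
v = rng.normal(size=N)
x = 0.6 * z1 + 0.4 * z2 + 0.3 * z3 + v + rng.normal(size=N)
eps = v + rng.normal(size=N)
X = np.column_stack([np.ones(N), x])            # k = 2
Z = np.column_stack([np.ones(N), z1, z2, z3])   # M = 4 > k
y = X @ np.array([1.0, 2.0]) + eps

C = np.linalg.inv(Z.T @ Z / N)                  # a valid p.d. weighting matrix

def Q(beta):
    g = Z.T @ (y - X @ beta)                    # M-vector of sample moments
    return g @ C @ g                            # quadratic form in (11)

def Q_grad(beta):
    g = Z.T @ (y - X @ beta)
    return -2 * X.T @ Z @ C @ g                 # analytic gradient of Q

# Closed-form minimiser (derived later in the slides)
XZ = X.T @ Z
b_closed = np.linalg.solve(XZ @ C @ XZ.T, XZ @ C @ Z.T @ y)

# Direct numerical minimisation of the quadratic form
b_numeric = minimize(Q, x0=np.zeros(2), jac=Q_grad).x

print(b_closed, b_numeric)    # essentially identical, both near [1.0, 2.0]
```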


    GMM Estimators for the Linear Model Moment-Based Estimation

Solving (11) exactly or by minimising a weighted quadratic function depends on whether the system of equations is over- or exactly identified.

$C_N$ is an $M \times M$ positive definite symmetric weighting matrix; it tells us how much weight to attach to which (linear combinations of the) sample moments.

In general $C_N$ will depend upon the sample size $N$, because it may itself be an estimate. Again we note the use of the Generalised prefix in gmm.

Given that we cannot set individual sample moments equal to population counterparts, we utilise a weighting matrix $C_N$, which weights each moment such that the sample moments are as close as possible to zero.


    GMM Estimators for the Linear Model Types of GMM Estimators

A class of GMM estimators will depend on what is assumed about the distribution of $\varepsilon_i$.

IV: $M = K$, errors homoscedastic; $\text{Var}(\varepsilon\,|\,Z) = \sigma^2 I_N$.
\[
\hat{\beta}_{MOM} = (Z'X)^{-1}(X'Z)^{-1}X'Z\,Z'y = (Z'X)^{-1}Z'y
\]

2SLS: $M > K$, errors homoscedastic: $\text{Var}(\varepsilon\,|\,Z) = \sigma^2 I_N$.
\[
\hat{\beta}_{2SLS} = [X'Z(Z'Z)^{-1}Z'X]^{-1}X'Z(Z'Z)^{-1}Z'y
\]

GMM: $M > K$, errors not restricted.
\[
\hat{\beta}_{GMM} = [X'Z\,C_N\,Z'X]^{-1}X'Z\,C_N\,Z'y
\]


    GMM Estimators for the Linear Model Types of GMM Estimators

    2SLS

If $\varepsilon \sim (0, \sigma^2 I)$, the covariance matrix of the moment conditions is
\[
\text{Var}(Z'(y - X\beta)) = \text{Var}(Z'\varepsilon) = \sigma^2 Z'Z.
\]
An optimal weighting matrix is then
\[
C_N = \left[\frac{1}{N}\sum_{i=1}^N z_i z_i'\right]^{-1}
\]
The resulting GMM estimator is
\[
\hat{\beta}_{2SLS} = [X'Z(Z'Z)^{-1}Z'X]^{-1}X'Z(Z'Z)^{-1}Z'y
\]
This estimator is often referred to as the Generalised Instrumental Variables Estimator (give).


    GMM Estimators for the Linear Model Types of GMM Estimators

    Below we consider the following two propositions:

Proposition I

If $\text{Var}(Z'\varepsilon) = \sigma^2 Z'Z$, then
\[
\hat{\beta}_{GMM} = \hat{\beta}_{2SLS} = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y.
\]

Proposition II

If $M = K$, then $\hat{\beta}_{GMM} = \hat{\beta}_{IV}$ and $Q_C(\beta)$ can be set to 0.


    GMM Estimators for the Linear Model Types of GMM Estimators

    Proposition

\[
\hat{\beta}_{GMM} = \hat{\beta}_{2SLS} = (X'Z(Z'Z)^{-1}Z'X)^{-1}X'Z(Z'Z)^{-1}Z'y.
\]

Proof.

\[
\begin{aligned}
Q_C(\beta) &= (y - X\beta)'P_Z(y - X\beta) \qquad (12)\\
&= (y - X\beta)'Z(Z'Z)^{-1}Z'(y - X\beta) \\
&= [Z'(y - X\beta)]'(Z'Z)^{-1}[Z'(y - X\beta)] \qquad (13)\\
&= [Z'(y - X\beta)]'C_N[Z'(y - X\beta)] \qquad (14)\\
&= y'P_Z y + \beta'X'P_Z X\beta - 2\beta'X'P_Z y \qquad (15)
\end{aligned}
\]

\[
\begin{aligned}
\frac{\partial Q_C(\beta)}{\partial \beta} &= \frac{\partial\,\beta'X'P_Z X\beta}{\partial \beta} - \frac{\partial\,2\beta'X'P_Z y}{\partial \beta} \\
&= 2X'P_Z X\beta - 2X'P_Z y = 0 \qquad (16)\\
&= 2X'Z\,C_N\,Z'X\beta - 2X'Z\,C_N\,Z'y = 0. \qquad (17)
\end{aligned}
\]

    GMM Estimators for the Linear Model Types of GMM Estimators

    (16) and (17) are derived, respectively, from (13) and (14)

Both (16) and (17) represent systems of $k$ equations in the $k$ unknowns $\beta$, where $X'Z$ is $k \times M$.

The solution to (16) is given by
\[
\begin{aligned}
\hat{\beta}_{GMM} &= (X'P_Z X)^{-1}X'P_Z y \\
&= \big[X'Z(Z'Z)^{-1}Z'X\big]^{-1}X'Z(Z'Z)^{-1}Z'y \\
&= \hat{\beta}_{2SLS}
\end{aligned}
\]


    GMM Estimators for the Linear Model Types of GMM Estimators

Just Identified: If $M = k$ we may solve (11) exactly:
\[
\hat{\beta}_{GMM} = \hat{\beta}_{IV} = \hat{\beta}_{MOM} = (Z'X)^{-1}Z'y
\]

Proposition

$\hat{\beta}_{GMM} = \hat{\beta}_{IV}$ and $Q_C(\beta)$ can be set to 0.

Proof.

Substituting $\hat{\beta}_{IV}$ into (15),
\[
\begin{aligned}
Q_C(\hat{\beta}_{IV}) &= y'P_Z y + \hat{\beta}_{IV}'X'P_Z X\hat{\beta}_{IV} - 2\,\hat{\beta}_{IV}'X'P_Z y \\
&= y'Z(Z'Z)^{-1}Z'y \\
&\quad + y'Z(X'Z)^{-1}X'Z(Z'Z)^{-1}Z'X(Z'X)^{-1}Z'y \\
&\quad - 2\,y'Z(X'Z)^{-1}X'Z(Z'Z)^{-1}Z'y \\
&= y'Z(Z'Z)^{-1}Z'y + y'Z(Z'Z)^{-1}Z'y - 2\,y'Z(Z'Z)^{-1}Z'y = 0
\end{aligned}
\]
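This can also be checked numerically; the sketch below (simulated just-identified data, all names our own, not from the lecture) evaluates $Q_C$ at $\hat{\beta}_{IV}$ and obtains zero up to floating-point error.

```python
# Illustrative check (not from the lecture): in the just-identified case the
# GMM criterion evaluated at beta_IV is zero (up to rounding error).
import numpy as np

rng = np.random.default_rng(6)
N = 400
z = rng.normal(size=N)
v = rng.normal(size=N)
x = z + v + rng.normal(size=N)
y = 1.0 + 2.0 * x + v + rng.normal(size=N)
X = np.column_stack([np.ones(N), x])
Z = np.column_stack([np.ones(N), z])            # M = k = 2

b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)        # (Z'X)^{-1} Z'y
g = Z.T @ (y - X @ b_iv)                        # sample moments at beta_IV
C = np.linalg.inv(Z.T @ Z)
print(g @ C @ g)                                # ~0 (floating-point noise)
```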


    GMM Estimators for the Linear Model Types of GMM Estimators

    Two Step GMM

The most efficient (feasible) GMM estimator based upon $E(z_i\varepsilon_i) = 0$ uses weight matrix, say, $V^{-1}$.

$\hat{V}$ is constructed using fitted residuals, obtained from a first step using a consistent estimator of $\beta$, which is often the 2SLS estimator:
\[
\hat{V} = \frac{1}{N}\sum_{i=1}^N z_i\,(y_i - x_i'\hat{\beta}_{2sls})(y_i - x_i'\hat{\beta}_{2sls})'\,z_i'
\]
where $V$ is given by
\[
V = \underset{N\to\infty}{\text{plim}}\;\frac{1}{N}\sum_{i=1}^N z_i\,\varepsilon_i\varepsilon_i'\,z_i'
\]
This gives the two-step GMM estimator
\[
\hat{\beta}_{2sGMM} = [X'Z\,C_N\,Z'X]^{-1}X'Z\,C_N\,Z'y
\]
where $C_N = \hat{V}^{-1}$.
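As an illustration of the two steps, the sketch below (simulated heteroskedastic data, all names our own, not from the lecture) computes the first-step 2sls estimate, builds $\hat{V}$ from its residuals, and re-estimates with $C_N = \hat{V}^{-1}$.

```python
# Illustrative two-step GMM (not from the lecture): 2SLS in the first step,
# heteroskedasticity-robust weight matrix in the second.
import numpy as np

rng = np.random.default_rng(7)
N = 5_000
z1, z2 = rng.normal(size=N), rng.normal(size=N)
v = rng.normal(size=N)
x = 0.6 * z1 + 0.4 * z2 + v + rng.normal(size=N)
eps = (1 + z1**2) * (v + rng.normal(size=N))    # conditionally heteroskedastic
X = np.column_stack([np.ones(N), x])
Z = np.column_stack([np.ones(N), z1, z2])       # M = 3 > k = 2
y = X @ np.array([1.0, 2.0]) + eps

def gmm(C):
    XZ = X.T @ Z
    return np.linalg.solve(XZ @ C @ XZ.T, XZ @ C @ Z.T @ y)

# Step 1: 2SLS, i.e. GMM with C_N = (Z'Z/N)^{-1}
b_2sls = gmm(np.linalg.inv(Z.T @ Z / N))

# Step 2: robust weight matrix from first-step residuals
u = y - X @ b_2sls
V = (Z * u[:, None]**2).T @ Z / N               # (1/N) sum z_i z_i' u_i^2
b_2step = gmm(np.linalg.inv(V))

print(b_2sls, b_2step)                          # both near [1.0, 2.0]
```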


    GMM Estimators for the Linear Model Types of GMM Estimators

The efficiency gains of the GMM estimator relative to the traditional IV/2SLS estimator derive from the overidentifying restrictions of the model, the use of the optimal weighting matrix, and the relaxation of the i.i.d. assumption.

For an exactly identified model, the efficient GMM and traditional IV/2SLS estimators coincide.

Under the assumptions of conditional homoskedasticity and independence, the efficient GMM estimator is the traditional IV/2SLS estimator.


    Summary GMM Estimators in the Linear Model

    Summary: GMM Estimators in the Linear IV Model

\[
\begin{aligned}
\text{GMM:} \quad & \hat{\beta}_{GMM} = [X'Z\,C_N\,Z'X]^{-1}X'Z\,C_N\,Z'y \\
\text{2SLS, GIVE:} \quad & \hat{\beta}_{2SLS} = [X'Z(Z'Z)^{-1}Z'X]^{-1}X'Z(Z'Z)^{-1}Z'y \\
\text{IV:} \quad & \hat{\beta}_{IV} = [Z'X]^{-1}Z'y
\end{aligned}
\]
