
Page 1

Dealing with data

• All variables ok? / getting acquainted
• Base model
• Final model(s)
• Assumption checking on final model(s)
• Conclusion(s) / Inference

Better models

Better variables (interaction, transformations)

Assumption checking

Outliers and influential cases

Creating subsets of the data

Page 2

Finding help

Page 3

Stata manuals

You have all of these as PDFs! Check the folder /Stata12/docs

Page 4

ASSUMPTION CHECKING AND OTHER NUISANCES

• In regression analysis with Stata

• In logistic regression analysis with Stata

NOTE: THIS WILL BE EASIER IN Stata THAN IT WAS IN SPSS

Page 5

Assumption checking in “normal” multiple regression with Stata

Page 6

Assumptions in regression analysis

• No multi-collinearity
• All relevant predictor variables included
• Homoscedasticity: all residuals are from a distribution with the same variance
• Linearity: the “true” model should be linear
• Independent errors: having information about the value of a residual should not give you information about the value of other residuals
• Errors are distributed normally

Page 7

FIRST THE ONE THAT LEADS TO NOTHING NEW IN STATA (NOTE: SLIDE TAKEN LITERALLY FROM MMBR)

Independent errors: having information about the value of a residual should not give you information about the value of other residuals

Detect: ask yourself whether it is likely that knowledge about one residual would tell you something about the value of another residual. Typical cases:
- repeated measures
- clustered observations (people within firms / pupils within schools)

Consequences: as for heteroscedasticity. Usually, your confidence intervals are estimated too small (think about why that is!).

Cure: use multi-level analyses (part 2 of this course).

Page 8

The rest, in Stata:

Example: the Stata “auto.dta” data set: sysuse auto

corr (correlation)
vif (variance inflation factors)
ovtest (omitted-variable test)
hettest (heteroscedasticity test)
predict e, resid (save the residuals)
swilk (test for normality)
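
Taken together, a minimal worked sequence could look like this (the regression of price on mpg and weight is only an illustrative choice, not part of the slide):

sysuse auto, clear
regress price mpg weight       // example model
corr mpg weight                // correlations among the predictors
vif                            // variance inflation factors (after regress)
ovtest                         // Ramsey omitted-variable test
hettest                        // Breusch-Pagan heteroscedasticity test
predict e, resid               // save the residuals
swilk e                        // Shapiro-Wilk test for normality of the residuals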

Page 9

Finding the commands

• “help regress”
• “help regress postestimation”

and you will find most of them (and more) there.

Page 10

Multi-collinearity: a strong correlation between two or more of your predictor variables

You don’t want it, because:
1. It is more difficult to get a high R²
2. The importance of predictors can be difficult to establish (b-hats tend to go to zero)
3. The estimates of the b-hats are unstable under slightly different regression attempts (“bouncing betas”)

Detect:
- look at the correlation matrix of the predictor variables
- calculate VIF factors while running the regression

Cure: delete variables so that the multi-collinearity disappears, for instance by combining them into a single variable (see the sketch below).
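
A hedged sketch of the “combine them into a single variable” cure, assuming (purely for illustration) that weight and length in auto.dta are the strongly correlated predictors:

sysuse auto, clear
corr weight length                 // suppose these two are highly correlated
egen zweight = std(weight)         // standardize both predictors
egen zlength = std(length)
gen size = (zweight + zlength)/2   // simple composite variable
regress price mpg size             // use the composite instead of the two originals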

Page 11

Stata: calculating the correlation matrix (“corr” or “pwcorr”) and VIF statistics (“vif”)
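
For example (again on auto.dta; the variable list is only an illustration):

pwcorr mpg weight length, sig    // pairwise correlations with significance levels
regress price mpg weight length
vif                              // VIF values above about 10 are a common warning sign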

Page 12

Misspecification tests (replaces: all relevant predictor variables included [Ramsey])

Also run “ovtest, rhs” here. Both tests should be non-significant.

Note that there are two ways to interpret “all relevant predictor variables included”
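
In commands, after fitting the regression (the model itself is again only illustrative):

regress price mpg weight
ovtest            // Ramsey RESET using powers of the fitted values
ovtest, rhs       // RESET using powers of the right-hand-side variables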

Page 13

Homoscedasticity: all residuals are from a distribution with the same variance

Consequences: Heteroscedasticity does not necessarily lead to biases in your estimated coefficients (b-hat), but it does lead to biases in the estimate of the width of the confidence interval, and the estimation procedure itself is not efficient.

THIS CAN BE DONE IN STATA TOO (CHECK FOR YOURSELF)

Page 14

Testing for heteroscedasticity in Stata

• Your residuals should have the same variance for all values of Y: hettest
• Your residuals should have the same variance for all values of X: hettest, rhs
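
For example, after the regression (the model is illustrative; in newer Stata versions the same test is also available as “estat hettest”):

regress price mpg weight
hettest           // Breusch-Pagan test against the fitted values of Y
hettest, rhs      // the same test against all right-hand-side (X) variables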

Page 15

Errors distributed normally

Errors should be distributed normally (just the errors, not the variables themselves!)

Detect: look at the residual plots, test for normality, or save residuals and test directly

Consequences: as a rule of thumb, if n > 600 there is no problem. Otherwise, confidence intervals are wrong.

Cure: try to fit a better model (or use more difficult ways of modeling instead - ask an expert).

Page 16

Errors distributed normally

First calculate the residuals (after regress):
predict e, resid

Then test for normality:
swilk e
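
The same check with two graphical inspections added (qnorm and kdensity are standard Stata commands; they are not on the slide but correspond to “look at the residual plots” above):

regress price mpg weight
predict e, resid
swilk e                 // Shapiro-Wilk test for normality
qnorm e                 // quantile-normal plot of the residuals
kdensity e, normal      // kernel density of the residuals with a normal overlay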

Page 17

Assumption checking in logistic regression with Stata

Note: based on http://www.ats.ucla.edu/stat/stata/webbooks/logistic/chapter3/statalog3.htm

Page 18

Assumptions in logistic regression

• Y is 0/1
• Independence of errors (as in multiple regression)
• No cases where you have complete separation (Stata will try to remove these cases automatically)
• Linearity in the logit (comparable to “the true model should be linear” in multiple regression) – “specification error”
• No multi-collinearity (as in m.r.)

Think!

Page 19

Think!
• What will happen if you try “logit y x1 x2” in this case? [The slide shows a small data set in which all cases with x == 1 also have y == 1.]

Page 20

This!

Because all cases with x==1 lead to y==1, the weight of x should be +infinity. Stata therefore rightly disregards these cases.

Do realize that, even though you do not see them in the regression, these are extremely important cases!
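
A toy sketch that reproduces the situation (the data are made up here purely to illustrate complete separation: every case with x1 == 1 has y == 1):

clear
set obs 100
set seed 12345
gen x1 = _n > 50                          // second half of the sample has x1 == 1
gen x2 = rnormal()
gen y = (x1 == 1) | (runiform() < 0.3)    // x1 == 1 always implies y == 1
logit y x1 x2
* Stata notes that x1 predicts success perfectly, drops those cases and omits x1,
* exactly as described above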

Page 21

(Checking for) multi-collinearity

• In regression, we had “vif”
• Here we need to download a user-written command: “collin” (try “findit collin” in Stata)
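
A sketch, assuming collin has already been located and installed via “findit collin” (the variable list is only an illustration):

findit collin                  // locate and install the user-written command
collin mpg weight length       // collinearity diagnostics (VIF, tolerance, condition number)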

Page 22

(Checking for) specification error

• The equivalent of “ovtest” is the command “linktest”
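
For example, after the logistic regression (the model is illustrative):

logit y x1 x2
linktest
* linktest refits the model on the linear prediction (_hat) and its square (_hatsq);
* _hatsq should be non-significant if the model is specified correctly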

Page 23

(Checking for) specification error – part 2

Page 24

Further things to do:

• Check for useful transformations of variables, and interaction effects
• Check for outliers / influential cases (see the sketch below):
  1) using a plot of stdres (against n) and dbeta (against n)
  2) using a plot of ldfbetas (against n)
  3) using regress and diag (but don’t tell anyone that I suggested this)
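
A sketch of option 1, using the standard logistic postestimation statistics on auto.dta (the model and variable names are illustrative):

sysuse auto, clear
logistic foreign mpg weight
predict stdres, rstandard      // standardized Pearson residuals
predict db, dbeta              // Pregibon's dbeta influence statistic
gen n = _n                     // case number
scatter stdres n, mlabel(n)    // standardized residuals against case number
scatter db n, mlabel(n)        // dbeta against case number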

Page 25

Checking for outliers / influential cases

… check the file auto_outliers.do for this …

Page 26

Dealing with data

• All variables ok? / getting acquainted
• Base model
• Final model(s)
• Assumption checking on final model(s)
• Conclusion(s) / Inference

Better models

Better variables (interaction, transformations)

Assumption checking

Outliers and influential cases

Creating subsets of the data

Page 27

Example analyses on ideas.dta

Page 28

For next week: improve the logistic regression you had.

Annotated output, as if you were writing an exam assignment:
1. Create a do-file with comments in it
2. Run it and add further comments on the outcomes in the log file
3. Submit the do-file and the log file

Use your own assignment, and the skills you mastered today.

Deadline: coming Wednesday

Page 29

Online also: the taxi tipping data