Week 5: Simple Linear Regression

Brandon Stewart¹

Princeton

October 10, 12, 2016

¹These slides are heavily influenced by Matt Blackwell, Adam Glynn and Jens Hainmueller. Illustrations by Shay O'Brien.

Where We've Been and Where We're Going...

Last Week
- hypothesis testing
- what is regression

This Week
- Monday:
  - mechanics of OLS
  - properties of OLS
- Wednesday:
  - hypothesis tests for regression
  - confidence intervals for regression
  - goodness of fit

Next Week
- mechanics with two regressors
- omitted variables, multicollinearity

Long Run
- probability → inference → regression

Questions?

Macrostructure

The next few weeks:

- Linear Regression with Two Regressors
- Multiple Linear Regression
- Break Week
- Regression in the Social Sciences
- What Can Go Wrong and How to Fix It, Week 1
- What Can Go Wrong and How to Fix It, Week 2 / Thanksgiving
- Causality with Measured Confounding
- Unmeasured Confounding and Instrumental Variables
- Repeated Observations and Panel Data

A brief comment on exams, midterm week, etc.

1. Mechanics of OLS
2. Properties of the OLS estimator
3. Example and Review
4. Properties Continued
5. Hypothesis tests for regression
6. Confidence intervals for regression
7. Goodness of fit
8. Wrap Up of Univariate Regression
9. Fun with Non-Linearities

The population linear regression function

The (population) simple linear regression model can be stated as the following:

$$r(x) = E[Y \mid X = x] = \beta_0 + \beta_1 x$$

This (partially) describes the data generating process in the population.

- $Y$ = dependent variable
- $X$ = independent variable
- $\beta_0$, $\beta_1$ = population intercept and population slope (what we want to estimate)

The sample linear regression function

The estimated or sample regression function is:

$$\hat{r}(X_i) = \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$$

- $\hat{\beta}_0$, $\hat{\beta}_1$ are the estimated intercept and slope
- $\hat{Y}_i$ is the fitted/predicted value
- We also have the residuals, $\hat{u}_i$, which are the differences between the true values of $Y$ and the predicted values:

$$\hat{u}_i = Y_i - \hat{Y}_i$$

You can think of the residuals as the prediction errors of our estimates.

Overall Goals for the Week

Learn how to run and read regression.

- Mechanics: how do we estimate the intercept and slope?
- Properties: when are these good estimates?
- Uncertainty: how will the OLS estimator behave in repeated samples?
- Testing: can we assess the plausibility of no relationship ($\beta_1 = 0$)?
- Interpretation: how do we interpret our estimates?

What is OLS?

An estimator for the slope and the intercept of the regression line.

We talked last week about ways to derive this estimator, and we settled on deriving it by minimizing the squared prediction errors of the regression, or in other words, minimizing the sum of the squared residuals.

Ordinary Least Squares (OLS):

$$(\hat{\beta}_0, \hat{\beta}_1) = \arg\min_{b_0, b_1} \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2$$

In words, the OLS estimates are the intercept and slope that minimize the sum of the squared residuals.
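
As a quick sanity check, here is a minimal sketch (not from the slides; data are simulated and SciPy is assumed available) showing that numerically minimizing this objective recovers the same intercept and slope as the closed-form formulas derived below:

```python
# Minimal sketch: minimize the sum of squared residuals directly.
# Simulated data; the true intercept is 1.0 and the true slope is 2.0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

def ssr(b):
    """The least squares objective S(b0, b1) from the slides."""
    b0, b1 = b
    return np.sum((y - b0 - b1 * x) ** 2)

res = minimize(ssr, x0=[0.0, 0.0])
print(res.x)  # approximately (1.0, 2.0), matching the closed-form OLS estimates
```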

Graphical Example

How do we fit the regression line $\hat{Y} = \hat{\beta}_0 + \hat{\beta}_1 X$ to the data?

Answer: We will minimize the sum of squared residuals.

[Figure: scatterplot with a fitted line. The residual $\hat{u}_i = Y_i - \hat{Y}_i$ is the "part" of $Y_i$ not predicted; OLS chooses $\hat{\beta}_0, \hat{\beta}_1$ to minimize $\sum_{i=1}^{n} \hat{u}_i^2$.]

Deriving the OLS estimator

Let's think about n pairs of sample observations: $(Y_1, X_1), (Y_2, X_2), \dots, (Y_n, X_n)$.

Let $\{b_0, b_1\}$ be possible values for $\{\beta_0, \beta_1\}$.

Define the least squares objective function:

$$S(b_0, b_1) = \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i)^2.$$

How do we derive the LS estimators for $\beta_0$ and $\beta_1$? We want to minimize this function, which is actually a very well-defined calculus problem.

1. Take partial derivatives of $S$ with respect to $b_0$ and $b_1$.
2. Set each of the partial derivatives to 0.
3. Solve for $\{b_0, b_1\}$ and replace them with the solutions.

To the board we go!
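
For reference, here is a sketch of the board work for the three steps (a standard derivation, not transcribed from the lecture). Setting the two partial derivatives of $S$ to zero gives the first order conditions:

$$\frac{\partial S}{\partial b_0} = -2 \sum_{i=1}^{n} (Y_i - b_0 - b_1 X_i) = 0 \quad \Rightarrow \quad b_0 = \bar{Y} - b_1 \bar{X}$$

$$\frac{\partial S}{\partial b_1} = -2 \sum_{i=1}^{n} X_i (Y_i - b_0 - b_1 X_i) = 0$$

Substituting $b_0 = \bar{Y} - b_1 \bar{X}$ into the second condition and using the centering identities $\sum_i X_i (Y_i - \bar{Y}) = \sum_i (X_i - \bar{X})(Y_i - \bar{Y})$ and $\sum_i X_i (X_i - \bar{X}) = \sum_i (X_i - \bar{X})^2$ yields

$$b_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2},$$

exactly the estimators on the next slide.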

The OLS estimator

Now we're done! Here are the OLS estimators:

$$\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}$$

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}$$
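
A minimal sketch of these formulas in code (simulated data, same setup as the earlier sketch; not from the slides):

```python
# Closed-form OLS estimates computed directly from the formulas above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)  # true intercept 1.0, slope 2.0

beta1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta0_hat = y.mean() - beta1_hat * x.mean()
print(beta0_hat, beta1_hat)  # close to (1.0, 2.0)
```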

Intuition of the OLS estimator

The intercept equation tells us that the regression line goes through the point $(\bar{X}, \bar{Y})$:

$$\bar{Y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{X}$$

The slope of the regression line can be written as the following:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\text{sample covariance between } X \text{ and } Y}{\text{sample variance of } X}$$

- The higher the covariance between $X$ and $Y$, the higher the slope will be.
- Negative covariances → negative slopes; positive covariances → positive slopes.
- What happens when $X_i$ doesn't vary?
- What happens when $Y_i$ doesn't vary?
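
A one-line check of the covariance-over-variance form (simulated data; the $1/(n-1)$ factors cancel in the ratio):

```python
# The OLS slope equals the sample covariance of (X, Y) over the sample variance of X.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # np.cov normalizes by n-1 by default
print(slope)  # identical to the closed-form slope estimate
```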

A Visual Intuition for the OLS Estimator

[Figure: a sequence of scatterplots with candidate regression lines; residuals above the line are marked + and residuals below are marked −.]

Mechanical properties of OLS

Later we'll see that under certain assumptions, OLS will have nice statistical properties. But some properties are mechanical, since they can be derived from the first order conditions of OLS.

1. The residuals will be 0 on average:

$$\frac{1}{n} \sum_{i=1}^{n} \hat{u}_i = 0$$

2. The residuals will be uncorrelated with the predictor ($\widehat{\text{cov}}$ is the sample covariance):

$$\widehat{\text{cov}}(X_i, \hat{u}_i) = 0$$

3. The residuals will be uncorrelated with the fitted values:

$$\widehat{\text{cov}}(\hat{Y}_i, \hat{u}_i) = 0$$
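
These hold for any sample to which we fit OLS with an intercept; a minimal numerical check (simulated data):

```python
# Verify the three mechanical properties, up to floating-point error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
fitted = b0 + b1 * x
resid = y - fitted

print(resid.mean())                 # ~0: residuals are 0 on average
print(np.cov(x, resid)[0, 1])       # ~0: residuals uncorrelated with the predictor
print(np.cov(fitted, resid)[0, 1])  # ~0: residuals uncorrelated with fitted values
```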

OLS slope as a weighted sum of the outcomes

One useful derivation is to write the OLS estimator for the slope as a weighted sum of the outcomes:

$$\hat{\beta}_1 = \sum_{i=1}^{n} W_i Y_i$$

where the weights $W_i$ are:

$$W_i = \frac{X_i - \bar{X}}{\sum_{j=1}^{n} (X_j - \bar{X})^2}$$

This is important for two reasons. First, it will make later derivations much easier. And second, it shows that $\hat{\beta}_1$ is just a weighted sum of random variables, and therefore it is also a random variable.

To the board!
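
A minimal check that the weighted-sum form reproduces the closed-form slope (simulated data):

```python
# The slope as a weighted sum of the outcomes: beta1_hat = sum_i W_i * Y_i.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)

w = (x - x.mean()) / np.sum((x - x.mean()) ** 2)  # the weights W_i
slope_weighted = np.sum(w * y)
slope_closed = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(slope_weighted, slope_closed)  # the two agree
```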

Sampling distribution of the OLS estimator

Remember: OLS is an estimator. It's a machine that we plug data into, and we get out estimates.

Sample 1: $\{(Y_1, X_1), \dots, (Y_n, X_n)\} \overset{\text{OLS}}{\longrightarrow} (\hat{\beta}_0, \hat{\beta}_1)_1$
Sample 2: $\{(Y_1, X_1), \dots, (Y_n, X_n)\} \overset{\text{OLS}}{\longrightarrow} (\hat{\beta}_0, \hat{\beta}_1)_2$
$\vdots$
Sample $k-1$: $\{(Y_1, X_1), \dots, (Y_n, X_n)\} \overset{\text{OLS}}{\longrightarrow} (\hat{\beta}_0, \hat{\beta}_1)_{k-1}$
Sample $k$: $\{(Y_1, X_1), \dots, (Y_n, X_n)\} \overset{\text{OLS}}{\longrightarrow} (\hat{\beta}_0, \hat{\beta}_1)_k$

Just like the sample mean, sample difference in means, or the sample variance, the OLS estimator has a sampling distribution, with a sampling variance/standard error, etc.

Let's take a simulation approach to demonstrate:

- Pretend that the AJR data represents the population of interest
- See how the line varies from sample to sample
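
The AJR data aren't reproduced here, so the sketch below uses a synthetic "population" to make the same point: repeated samples give different fitted lines, and the spread of the slopes across samples is the sampling variability of $\hat{\beta}_1$.

```python
# Simulation sketch: treat a large simulated dataset as the population,
# draw repeated samples, refit OLS, and inspect the spread of the slopes.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                      # synthetic population size
X_pop = rng.normal(size=N)
Y_pop = 1.0 + 2.0 * X_pop + rng.normal(size=N)  # "population" with slope 2.0

def ols_slope(x, y):
    return np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

slopes = []
for _ in range(1000):                           # 1000 repeated samples of size 50
    idx = rng.choice(N, size=50, replace=False)
    slopes.append(ols_slope(X_pop[idx], Y_pop[idx]))

print(np.mean(slopes), np.std(slopes))  # centered near 2.0; spread = sampling SE
```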


Simulation procedure

1 Draw a random sample of size n = 30 with replacement using sample()
2 Use lm() to calculate the OLS estimates of the slope and intercept
3 Plot the estimated regression line
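A minimal R sketch of this procedure, assuming the AJR data sit in a data frame called ajr with columns logmort (log settler mortality) and loggdp (log GDP per capita) — both names are placeholders for this sketch, not the actual replication file's variables:

set.seed(1234)                      # for reproducibility
n.samples <- 1000
intercepts <- numeric(n.samples)
slopes     <- numeric(n.samples)

for (s in 1:n.samples) {
  # 1. draw n = 30 rows with replacement, treating ajr as the population
  rows <- sample(nrow(ajr), size = 30, replace = TRUE)
  # 2. OLS estimates of intercept and slope on this sample
  fit <- lm(loggdp ~ logmort, data = ajr[rows, ])
  intercepts[s] <- coef(fit)[1]
  slopes[s]     <- coef(fit)[2]
}

# 3. plot the data once, then overlay the first 50 estimated lines
plot(ajr$logmort, ajr$loggdp,
     xlab = "Log Settler Mortality", ylab = "Log GDP per capita")
for (s in 1:50) abline(intercepts[s], slopes[s], col = "grey")

# histograms of the simulated sampling distributions
hist(intercepts, main = "Sampling distribution of intercepts")
hist(slopes,     main = "Sampling distribution of slopes")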


Population Regression

[Figure: scatterplot of log GDP per capita growth against log settler mortality for the full AJR data, with the population regression line drawn through it.]

Randomly sample from AJR

[Figure: the same scatterplot of log GDP per capita growth against log settler mortality, with the regression line re-estimated on repeated random samples of the AJR data; each sample produces a different line.]

Sampling distribution of OLS

You can see that the estimated slopes and intercepts vary from sample to sample, but that the "average" of the lines looks about right.

[Figure: histogram of the simulated intercepts (β̂0, roughly 6 to 14) and histogram of the simulated slopes (β̂1, roughly −1.5 to 0.5).]

Is this unique?


Assumptions for unbiasedness of the sample mean

What assumptions did we make to prove that the sample mean was unbiased?

E[X̄] = µ

Just one: a random sample.

We'll need more than this for the regression case.


Our goal

What is the sampling distribution of the OLS slope?

β̂1 ∼ ?(?, ?)

We need to fill in those ?s.

We'll start with the mean of the sampling distribution. Is the estimator centered at the true value, β1?

Most of our derivations will be in terms of the slope, but they apply to the intercept as well.


OLS Assumptions Preview

1 Linearity in Parameters: The population model is linear in its parameters and correctly specified.
2 Random Sampling: The observed data represent a random sample from the population described by the model.
3 Variation in X: There is variation in the explanatory variable.
4 Zero Conditional Mean: The expected value of the error term is zero conditional on all values of the explanatory variable.
5 Homoskedasticity: The error term has the same variance conditional on all values of the explanatory variable.
6 Normality: The error term is independent of the explanatory variables and normally distributed.


Hierarchy of OLS Assumptions

[Figure: stacked diagram of what each set of assumptions buys. Variation in X alone: identification / data description. Adding Random Sampling, Linearity in Parameters, and Zero Conditional Mean: unbiasedness and consistency. Adding Homoskedasticity: Gauss–Markov (BLUE) and asymptotic inference (z and χ²). Adding Normality of Errors: the classical linear model (BUE) and small-sample inference (t and F).]

OLS Assumption I

Assumption (I. Linearity in Parameters)

The population regression model is linear in its parameters and correctly specified as:

Y = β0 + β1X1 + u

Note that it can be nonlinear in the variables:

- OK: Y = β0 + β1X + u, or Y = β0 + β1X² + u, or Y = β0 + β1 log(X) + u
- Not OK: Y = β0 + β1²X + u, or Y = β0 + exp(β1)X + u

β0, β1: Population parameters — fixed and unknown

u: Unobserved random variable with E[u] = 0 — captures all other factors influencing Y other than X

We assume this to be the structural model, i.e., the model describing the true process generating Y.
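A quick illustration in R of why the distinction matters: a model that is nonlinear in the variables is still fit by lm() after transforming the regressor, while nonlinearity in the parameters puts the model outside OLS. The data here are made up for the sketch:

set.seed(7)
x <- runif(50, 1, 10)
y <- 2 + 3 * log(x) + rnorm(50)

lm(y ~ log(x))    # linear in parameters, nonlinear in x: OLS works
lm(y ~ I(x^2))    # likewise for a squared regressor

# Y = b0 + exp(b1)*X is nonlinear in the parameter b1, so it cannot be
# fit by lm(); a nonlinear least squares routine such as nls() would be
# needed instead.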


OLS Assumption II

Assumption (II. Random Sampling)

The observed data:

(yi, xi) for i = 1, . . . , n

represent an i.i.d. random sample of size n following the population model.

Data examples consistent with this assumption:

A cross-sectional survey where the units are sampled randomly

Potential violations:

Time series data (regressor values may exhibit persistence)

Sample selection problems (sample not representative of the population)


OLS Assumption III

Assumption (III. Variation in X; a.k.a. No Perfect Collinearity)

The observed data:

xi for i = 1, . . . , n

are not all the same value.

Satisfied as long as there is some variation in the regressor X in the sample.

Why do we need this?

β̂1 = ∑ᵢ (xi − x̄)(yi − ȳ) / ∑ᵢ (xi − x̄)², with sums over i = 1, . . . , n

This assumption is needed just to calculate β̂, i.e., to identify β.

In fact, this is the only assumption needed for using OLS as a pure data summary.
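To make the formula concrete, here is a small sketch (with made-up data) computing the slope and intercept by hand and checking them against lm():

set.seed(42)
x <- rnorm(100)
y <- 1 + 0.5 * x + rnorm(100)

# slope from the formula above; if x had no variation, the
# denominator would be zero and b1 would be undefined
b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
b0 <- mean(y) - b1 * mean(x)

c(b0, b1)         # hand-computed estimates
coef(lm(y ~ x))   # same numbers from lm()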


Stuck in a moment

Why does this matter? How would you draw the line of best fit through this scatterplot, which violates this assumption?

[Figure: scatterplot in which every observation has the same value of X; with no variation in X, no unique line of best fit exists.]


OLS Assumption IV

Assumption (IV. Zero Conditional Mean)

The expected value of the error term is zero conditional on any value of the explanatory variable:

E[u|X] = 0

E[u|X] = 0 implies the slightly weaker condition Cov(X, u) = 0

Given random sampling, E[u|X] = 0 also implies E[ui|xi] = 0 for all i

Violations:

Recall that u represents all unobserved factors that influence Y

If such unobserved factors are also correlated with X, then Cov(X, u) ≠ 0

Example: Wage = β0 + β1education + u. What is likely to be in u?

→ It must be assumed that E[ability | educ = low] = E[ability | educ = high]


Violating the zero conditional mean assumption

How does this assumption get violated? Let's generate data from the following model:

Yi = 1 + 0.5Xi + ui

But let's compare two situations:

1 Where the mean of ui depends on Xi (they are correlated)
2 No relationship between them (satisfies the assumption)


Violating the zero conditional mean assumption

[Figure: two scatterplots of the simulated data, X from −3 to 3 and Y from −2 to 5. Left panel: "Assumption 4 violated". Right panel: "Assumption 4 not violated".]

Unbiasedness (to the blackboard)

With Assumptions 1–4, we can show that the OLS estimator for the slope is unbiased, that is, E[β̂1] = β1.

Unbiasedness of OLS

Theorem (Unbiasedness of OLS)

Given OLS Assumptions I–IV:

E[β̂0] = β0 and E[β̂1] = β1

The sampling distributions of the estimators β̂0 and β̂1 are centered about the true population parameter values β0 and β1.
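A quick Monte Carlo check of the theorem (Python sketch; illustrative only — the true values β0 = 1 and β1 = 0.5 are made up):

    # Draw many samples from a model satisfying Assumptions I-IV and
    # average the OLS slope estimates; the average should sit near beta1.
    import numpy as np

    rng = np.random.default_rng(1)
    beta0, beta1, n = 1.0, 0.5, 50
    slopes = []
    for _ in range(5000):
        x = rng.normal(size=n)
        y = beta0 + beta1 * x + rng.normal(size=n)
        slopes.append(np.polyfit(x, y, 1)[0])
    print(np.mean(slopes))  # close to 0.5, i.e. E[beta1_hat] = beta1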

Where are we?

Under Assumptions 1–4, we know that

β̂1 ∼ ?(β1, ?)

That is, we know that the sampling distribution is centered on the true population slope, but we don't yet know its sampling variance.


Sampling variance of estimated slope

To derive the sampling variance of the OLS estimator, we use the four assumptions we already have plus one new one (Homoskedasticity):

1. Linearity
2. Random (iid) sample
3. Variation in X_i
4. Zero conditional mean of the errors
5. Homoskedasticity


Variance of OLS Estimators

How can we derive Var[β̂0] and Var[β̂1]? Let's make the following additional assumption:

Assumption (V. Homoskedasticity)

The conditional variance of the error term is constant and does not vary as a function of the explanatory variable:

Var[u|X] = σ²_u

This implies Var[u] = σ²_u

→ all errors have an identical error variance (σ²_{u_i} = σ²_u for all i)

Taken together, Assumptions I–V imply:

E[Y|X] = β0 + β1·X

Var[Y|X] = σ²_u

Violation, Var[u|X = x1] ≠ Var[u|X = x2], is called heteroskedasticity.

Assumptions I–V are collectively known as the Gauss-Markov assumptions.
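The contrast is easy to see in simulation. A minimal Python sketch (illustrative; the variance function x² is an arbitrary choice):

    # Homoskedastic vs. heteroskedastic errors.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 2000
    x = rng.uniform(0, 2, size=n)

    u_homo = rng.normal(0, 1, size=n)        # Var[u|X = x] = 1 for every x
    u_hetero = rng.normal(0, 1, size=n) * x  # Var[u|X = x] = x^2: Assumption V fails

    lo, hi = x < 0.5, x > 1.5
    print(u_homo[lo].std(), u_homo[hi].std())      # roughly equal spreads
    print(u_hetero[lo].std(), u_hetero[hi].std())  # spread grows with x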


Deriving the sampling variance

Var[β̂1 | X1, …, Xn] = ??


Variance of OLS Estimators

Theorem (Variance of OLS Estimators)

Given OLS Assumptions I–V (the Gauss-Markov assumptions):

Var[β̂1 | X] = σ²_u / Σ_{i=1}^n (x_i − x̄)² = σ²_u / SST_x

Var[β̂0 | X] = σ²_u · [ 1/n + x̄² / Σ_{i=1}^n (x_i − x̄)² ]

where Var[u | X] = σ²_u (the error variance).
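The slope formula is easy to verify numerically. A Python sketch (illustrative only; the design, σ_u = 2, and replication count are arbitrary), holding the x's fixed across replications because the theorem conditions on X:

    # Analytic Var[beta1_hat | X] vs. the empirical variance of the slope
    # over repeated draws of the errors.
    import numpy as np

    rng = np.random.default_rng(2)
    n, beta0, beta1, sigma_u = 30, 1.0, 0.5, 2.0
    x = rng.uniform(-3, 3, size=n)                 # fixed design
    sst_x = np.sum((x - x.mean()) ** 2)

    analytic = sigma_u ** 2 / sst_x                # sigma^2_u / SST_x
    slopes = [np.polyfit(x, beta0 + beta1 * x + rng.normal(0, sigma_u, n), 1)[0]
              for _ in range(20000)]
    print(analytic, np.var(slopes))                # the two should agree closely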

Understanding the sampling variance

Var[β̂1 | X1, …, Xn] = σ²_u / Σ_{i=1}^n (X_i − X̄)²

What drives the sampling variability of the OLS estimator?

- The higher the variance of Y_i (equivalently, the error variance σ²_u), the higher the sampling variance
- The lower the variance of X_i, the higher the sampling variance
- As we increase n, the denominator grows while the numerator stays fixed, so the sampling variance shrinks to 0


Estimating the Variance of OLS Estimators

How can we estimate the unobserved error variance Var[u] = σ²_u?

We can derive an estimator based on the residuals:

û_i = y_i − ŷ_i = y_i − β̂0 − β̂1·x_i

Recall: the errors u_i are NOT the same as the residuals û_i.

Intuitively, the scatter of the residuals around the fitted regression line should reflect the unseen scatter about the true population regression line.

We can measure scatter with the mean squared deviation:

MSD(û) ≡ (1/n) Σ_{i=1}^n (û_i − û̄)² = (1/n) Σ_{i=1}^n û_i²

(the second equality holds because OLS residuals have mean zero when the model includes an intercept)

Intuitively, which line is likely to be closer to the observed sample values of X and Y: the true line y_i = β0 + β1·x_i, or the fitted regression line ŷ_i = β̂0 + β̂1·x_i?


Estimating the Variance of OLS Estimators

By construction, the regression line is closer, since it is drawn to fit the actual sample we have

Specifically, the regression line is drawn so as to minimize the sum of the squares of the distances between it and the observations

So the spread of the residuals, MSD(û), will slightly underestimate the error variance Var[u] = σ²_u on average

In fact, we can show that with a single regressor X:

E[MSD(û)] = ((n − 2)/n)·σ²_u (hence the degrees-of-freedom adjustment)

Thus, an unbiased estimator for the error variance is:

σ̂²_u = (n/(n − 2))·MSD(û) = (n/(n − 2))·(1/n) Σ_{i=1}^n û_i² = (1/(n − 2)) Σ_{i=1}^n û_i²

We plug this estimate into the variance estimators for β̂0 and β̂1, as in the sketch below.
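A Python sketch of the whole plug-in procedure for one simulated sample (illustrative only; the data-generating values are made up):

    # Estimate sigma^2_u from the residuals and plug it into the
    # variance formulas for beta0_hat and beta1_hat.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n)

    b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
    resid = y - (b0 + b1 * x)                     # u_hat_i
    sigma2_hat = np.sum(resid ** 2) / (n - 2)     # unbiased: divide by n - 2
    sst_x = np.sum((x - x.mean()) ** 2)

    se_b1 = np.sqrt(sigma2_hat / sst_x)
    se_b0 = np.sqrt(sigma2_hat * (1 / n + x.mean() ** 2 / sst_x))
    print(b1, se_b1, b0, se_b0)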


Where are we?

Under Assumptions 1–5, we know that

β̂1 ∼ ?(β1, σ²_u / Σ_{i=1}^n (X_i − X̄)²)

Now we know the mean and sampling variance of the sampling distribution.

Next time: how does this compare to other estimators for the population slope?


Where We’ve Been and Where We’re Going...

Last WeekI hypothesis testingI what is regression

This WeekI Monday:

F mechanics of OLSF properties of OLS

I Wednesday:F hypothesis tests for regressionF confidence intervals for regressionF goodness of fit

Next WeekI mechanics with two regressorsI omitted variables, multicollinearity

Long RunI probability → inference → regression

Questions?

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 44 / 103


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities


Example: Epstein and Mershon SCOTUS data

Data on 27 justices from the Warren, Burger, and Rehnquist courts (can be interpreted as a census)

Percentage of votes in the liberal direction for each justice in a number of issue areas

Segal-Cover scores for each justice

Party of appointing president


[Figure, built up over several slides: scatterplot of each justice's liberal voting percentage in civil liberties cases (CLlib, y-axis, roughly 20–90) against their Segal-Cover score (SCscore, x-axis, 0–1), with the 27 justices labeled from Black through Thomas. A regression line y = β̂0 + β̂1·x is overlaid, its slope read as rise over run; the fitted line is y = 27.6 + 41.2·x, and adding the residuals gives y = 27.6 + 41.2·x + û.]

How to get β̂0 and β̂1

β̂0 = ȳ − β̂1·x̄

β̂1 = Σ_{i=1}^n (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^n (x_i − x̄)²
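These two formulas translate directly into code. A Python sketch (the toy numbers are made up for illustration):

    # OLS intercept and slope from the closed-form formulas.
    import numpy as np

    x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # hypothetical scores
    y = np.array([30.0, 35.0, 50.0, 55.0, 70.0])  # hypothetical liberal vote %

    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    print(b0, b1)

    # Same answer from a library fit (np.polyfit returns slope first):
    print(np.polyfit(x, y, 1))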


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities


Where are we?

[Flow diagram: what each block of assumptions buys us.
- Identification / Data Description: Variation in X
- Unbiasedness / Consistency: Variation in X + Random Sampling + Linearity in Parameters + Zero Conditional Mean
- Gauss-Markov (BLUE), Asymptotic Inference: the above + Homoskedasticity
- Classical LM (BUE), Small-Sample Inference (t and F): the above + Normality of Errors]


Where are we?

Under Assumptions 1–5, we know that

β̂1 ∼ ?(β1, σ²_u / Σ_{i=1}^n (X_i − X̄)²)

Now we know the mean and sampling variance of the sampling distribution.

How does this compare to other estimators for the population slope?

OLS is BLUE :(

Theorem (Gauss-Markov)

Given OLS Assumptions I–V, the OLS estimator is BLUE, i.e. the

1. Best: lowest variance in its class
2. Linear: among linear estimators
3. Unbiased: among linear unbiased estimators
4. Estimator.

Assumptions I–V are the "Gauss-Markov assumptions."

The proof is detailed and doesn't yield insight, so we skip it. (We will explore the intuition some more in a few slides; see also the comparison sketch below.)

The theorem fails to hold when the assumptions are violated!
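To see what "Best" buys us, compare OLS with another linear unbiased estimator. A Python sketch (illustrative; the "endpoint" estimator — the slope through the two extreme design points — is just one arbitrary member of the class):

    # Both estimators are linear in Y and unbiased for beta1, but by
    # Gauss-Markov the OLS slope has the smaller sampling variance.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 30
    x = np.sort(rng.uniform(-3, 3, size=n))   # fixed design
    ols, endpoints = [], []
    for _ in range(20000):
        y = 1.0 + 0.5 * x + rng.normal(size=n)
        ols.append(np.polyfit(x, y, 1)[0])
        endpoints.append((y[-1] - y[0]) / (x[-1] - x[0]))
    # Both means are near 0.5 (unbiased); the OLS variance is smaller.
    print(np.mean(ols), np.mean(endpoints))
    print(np.var(ols), np.var(endpoints))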


Gauss-Markov Theorem

[Figure: nested classes of estimators — all estimators, the linear ones, and the linear unbiased ones — with OLS the most efficient member of the innermost class. Source: Econometrics Laboratory, University of California at Berkeley, 22–26 March 1999.]

OLS is efficient in the class of unbiased, linear estimators.

OLS is BLUE — best linear unbiased estimator.

Where are we?

Under Assumptions 1–5, we know that

β̂1 ∼ ?(β1, σ²_u / Σ_{i=1}^n (X_i − X̄)²)

And we know that σ²_u / Σ_{i=1}^n (X_i − X̄)² is the lowest variance of any linear estimator of β1.

What about the last question mark? What's the form of the distribution? Uniform? t? Normal? Exponential? Hypergeometric?


Large-sample distribution of OLS estimators

Remember that the OLS estimator is a sum of independent r.v.'s:

β̂1 = Σ_{i=1}^n W_i·Y_i

Mantra of the Central Limit Theorem: "the sums and means of r.v.'s tend to be Normally distributed in large samples."

True here as well, so we know that in large samples:

(β̂1 − β1) / SE[β̂1] ∼ N(0, 1)

We can also replace the SE with an estimate:

(β̂1 − β1) / ŜE[β̂1] ∼ N(0, 1)
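A Python sketch of the approximation (illustrative; σ_u = 1 is taken as known here so the true SE can be used):

    # The standardized slope should behave like a standard Normal draw.
    import numpy as np

    rng = np.random.default_rng(5)
    n, beta1 = 500, 0.5
    x = rng.normal(size=n)                           # fixed design
    se = 1.0 / np.sqrt(np.sum((x - x.mean()) ** 2))  # true SE with sigma_u = 1
    z = np.array([(np.polyfit(x, 1.0 + beta1 * x + rng.normal(size=n), 1)[0] - beta1) / se
                  for _ in range(10000)])
    # Mean ~ 0, sd ~ 1, and about 5% of |z| exceed 1.96, as N(0, 1) predicts.
    print(z.mean(), z.std(), np.mean(np.abs(z) > 1.96))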


Where are we?

Under Assumptions 1-5 and in large samples, we know that

$\hat{\beta}_1 \sim N\!\left(\beta_1,\ \frac{\sigma^2_u}{\sum_{i=1}^n (X_i - \bar{X})^2}\right)$

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 56 / 103


Sampling distribution in small samples

What if we have a small sample? What can we do then?

We can't get something for nothing, but we can make progress if we make another assumption:

1 Linearity
2 Random (iid) sample
3 Variation in $X_i$
4 Zero conditional mean of the errors
5 Homoskedasticity
6 Errors are conditionally Normal

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 57 / 103


OLS Assumptions VI

Assumption (VI. Normality)

The population error term is independent of the explanatory variable, $u \perp\!\!\!\perp X$, and is normally distributed with mean zero and variance $\sigma^2_u$:

$u \sim N(0, \sigma^2_u)$, which implies $Y \mid X \sim N(\beta_0 + \beta_1 X,\ \sigma^2_u)$

Note: this implies homoskedasticity and zero conditional mean.

Together, Assumptions I–VI are the classical linear model (CLM) assumptions.

The CLM assumptions imply that OLS is BUE (i.e., minimum variance among all linear or non-linear unbiased estimators).

Non-normality of the errors is a serious concern in small samples. We can partially check this assumption by looking at the residuals (see the sketch below).

Variable transformations can help bring the errors closer to normality.

We don't need the normality assumption in large samples.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 58 / 103
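As a concrete residual check, here is a small R sketch (not from the slides; y and x stand in for your own data vectors):

# Partial check of Assumption VI via the residuals; y and x are
# hypothetical data vectors.
fit <- lm(y ~ x)
qqnorm(resid(fit))               # points should hug the reference line
qqline(resid(fit))               # if the errors are approximately Normal
hist(resid(fit), breaks = 30)    # a histogram gives a coarser second look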


Sampling Distribution for $\hat{\beta}_1$

Theorem (Sampling Distribution of $\hat{\beta}_1$)

Under Assumptions I–VI,

$\hat{\beta}_1 \sim N\!\left(\beta_1,\ \mathrm{Var}[\hat{\beta}_1 \mid X]\right)$ where $\mathrm{Var}[\hat{\beta}_1 \mid X] = \frac{\sigma^2_u}{\sum_{i=1}^n (x_i - \bar{x})^2}$

which implies

$\frac{\hat{\beta}_1 - \beta_1}{\sqrt{\mathrm{Var}[\hat{\beta}_1 \mid X]}} = \frac{\hat{\beta}_1 - \beta_1}{SE(\hat{\beta}_1)} \sim N(0, 1)$

Proof.

Given Assumptions I–VI, $\hat{\beta}_1$ is a linear combination of the i.i.d. normal random variables:

$\hat{\beta}_1 = \beta_1 + \sum_{i=1}^n \frac{(x_i - \bar{x})}{SST_x} u_i$ where $u_i \sim N(0, \sigma^2_u)$.

Any linear combination of independent normals is normal, and we can standardize any normal random variable into a standard normal by subtracting off its mean and dividing by its standard deviation.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 59 / 103


Sampling distribution of OLS slope

If $Y_i$ given $X_i$ is distributed $N(\beta_0 + \beta_1 X_i,\ \sigma^2_u)$, then we have the following at any sample size:

$\frac{\hat{\beta}_1 - \beta_1}{SE[\hat{\beta}_1]} \sim N(0, 1)$

Furthermore, if we replace the true standard error with the estimated standard error, then we get the following:

$\frac{\hat{\beta}_1 - \beta_1}{\widehat{SE}[\hat{\beta}_1]} \sim t_{n-2}$

The standardized coefficient follows a t distribution with $n - 2$ degrees of freedom. We take off an extra degree of freedom because we had to estimate one more parameter than just the sample mean.

All of this depends on Normal errors! We can check to see if the errors do look Normal, as in the residual sketch above; the simulation below shows the $t_{n-2}$ result in action.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 60 / 103
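A short simulation makes the n - 2 degrees of freedom visible (a sketch with hypothetical seed and parameter values): with Normal errors and n = 10, the standardized slope rejects too often against Normal cutoffs but at close to the nominal rate against t with 8 degrees of freedom:

# Hypothetical simulation: with an estimated SE and Normal errors, the
# standardized slope follows t with n - 2 df, not N(0,1).
set.seed(1)
n <- 10
t.stats <- replicate(10000, {
  x <- rnorm(n)
  y <- 1 + 2 * x + rnorm(n)
  s <- coef(summary(lm(y ~ x)))
  (s["x", "Estimate"] - 2) / s["x", "Std. Error"]
})
mean(abs(t.stats) > qnorm(0.975))      # well above the nominal 0.05
mean(abs(t.stats) > qt(0.975, n - 2))  # close to 0.05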


The t-Test for Single Population Parameters

$SE[\hat{\beta}_1] = \frac{\sigma_u}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}$ involves the unknown population error variance $\sigma^2_u$.

Replace $\sigma^2_u$ with its unbiased estimator $\hat{\sigma}^2_u = \frac{\sum_{i=1}^n \hat{u}_i^2}{n-2}$, and we obtain:

Theorem (Sampling Distribution of the t-value)

Under Assumptions I–VI, the t-value for $\hat{\beta}_1$ has a t-distribution with $n - 2$ degrees of freedom:

$T \equiv \frac{\hat{\beta}_1 - \beta_1}{\widehat{SE}[\hat{\beta}_1]} \sim \tau_{n-2}$

Proof.

The logic is perfectly analogous to the t-value for the population mean: because we are estimating the denominator, we need a distribution that has fatter tails than $N(0, 1)$ to take into account the additional uncertainty.

This time, $\hat{\sigma}^2_u$ contains two estimated parameters ($\hat{\beta}_0$ and $\hat{\beta}_1$) instead of one, hence the degrees of freedom $= n - 2$.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 61 / 103


Where are we?

Under Assumptions 1-5 and in large samples, we know that

$\hat{\beta}_1 \sim N\!\left(\beta_1,\ \frac{\sigma^2_u}{\sum_{i=1}^n (X_i - \bar{X})^2}\right)$

Under Assumptions 1-6 and in any sample, we know that

$\frac{\hat{\beta}_1 - \beta_1}{\widehat{SE}[\hat{\beta}_1]} \sim t_{n-2}$

Now let's briefly return to some of the large sample properties.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 62 / 103


Large Sample Properties: Consistency

We just looked formally at the small sample properties of the OLS estimator, i.e., how $(\hat{\beta}_0, \hat{\beta}_1)$ behaves in repeated samples of a given $n$.

Now let's take a more rigorous look at the large sample properties, i.e., how $(\hat{\beta}_0, \hat{\beta}_1)$ behaves when $n \to \infty$.

Theorem (Consistency of OLS Estimator)

Given Assumptions I–IV, the OLS estimator $\hat{\beta}_1$ is consistent for $\beta_1$ as $n \to \infty$:

$\operatorname{plim}_{n \to \infty} \hat{\beta}_1 = \beta_1$

Technical note: We can slightly relax Assumption IV,

$E[u \mid X] = 0$ (any function of $X$ is uncorrelated with $u$),

to its implication,

$\operatorname{Cov}[u, X] = 0$ ($X$ is uncorrelated with $u$),

for consistency to hold (but not unbiasedness).

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 63 / 103


Large Sample Properties: Consistency

Proof.

Similar to the unbiasedness proof:

$\hat{\beta}_1 = \frac{\sum_{i=1}^n (x_i - \bar{x}) y_i}{\sum_{i=1}^n (x_i - \bar{x})^2} = \beta_1 + \frac{\sum_{i=1}^n (x_i - \bar{x}) u_i}{\sum_{i=1}^n (x_i - \bar{x})^2}$

$\operatorname{plim} \hat{\beta}_1 = \operatorname{plim} \beta_1 + \operatorname{plim} \frac{\sum_{i=1}^n (x_i - \bar{x}) u_i}{\sum_{i=1}^n (x_i - \bar{x})^2}$  (Wooldridge C.3 Property i)

$= \beta_1 + \frac{\operatorname{plim} \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x}) u_i}{\operatorname{plim} \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2}$  (Wooldridge C.3 Property iii)

$= \beta_1 + \frac{\operatorname{Cov}[X, u]}{\operatorname{Var}[X]}$  (by the law of large numbers)

$= \beta_1$  (since $\operatorname{Cov}[X, u] = 0$ and $\operatorname{Var}[X] > 0$)

OLS is inconsistent (and biased) unless $\operatorname{Cov}[X, u] = 0$.

If $\operatorname{Cov}[u, X] > 0$ the asymptotic bias is upward; if $\operatorname{Cov}[u, X] < 0$ the asymptotic bias is downward.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 64 / 103
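A quick R sketch of what consistency looks like in simulation (hypothetical seed and parameter values): the spread of the slope estimates around the truth shrinks as n grows, mirroring the collapsing densities in the Wooldridge figure on the next slide.

# Hypothetical simulation: the sampling distribution of the OLS slope
# tightens around beta1 = 2 as n grows.
set.seed(42)
slope <- function(n) {
  x <- rnorm(n)
  y <- 1 + 2 * x + rnorm(n)
  coef(lm(y ~ x))[2]
}
sapply(c(25, 100, 400, 1600), function(n) sd(replicate(2000, slope(n))))
# each quadrupling of n roughly halves the spread (SE ~ 1/sqrt(n))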

Large Sample Properties: Consistency

[Scanned excerpt from Wooldridge, Chapter 5 (Multiple Regression Analysis: OLS Asymptotics). The recoverable gist: although not all useful estimators are unbiased, virtually all economists agree that consistency is a minimal requirement for an estimator. As the Nobel Prize-winning econometrician Clive W. J. Granger remarked, "If you can't get it right as n goes to infinity, you shouldn't be in this business." If $\hat{\beta}_j$ is consistent, its distribution becomes more and more tightly concentrated around $\beta_j$ as the sample size grows, collapsing to the single point $\beta_j$ as $n$ tends to infinity.]

[Figure (Wooldridge Figure 5.1): sampling distributions of $\hat{\beta}_1$ for sample sizes $n_1 < n_2 < n_3$]

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 65 / 103

Large Sample Properties: Asymptotic Normality

For statistical inference, we need to know the sampling distribution of $\hat{\beta}$ when $n \to \infty$.

Theorem (Asymptotic Normality of OLS Estimator)

Given Assumptions I–V, the OLS estimator $\hat{\beta}_1$ is asymptotically normally distributed:

$\frac{\hat{\beta}_1 - \beta_1}{\widehat{SE}[\hat{\beta}_1]} \overset{\text{approx.}}{\sim} N(0, 1)$

where

$\widehat{SE}[\hat{\beta}_1] = \frac{\hat{\sigma}_u}{\sqrt{\sum_{i=1}^n (x_i - \bar{x})^2}}$

with the consistent estimator for the error variance:

$\hat{\sigma}^2_u = \frac{1}{n} \sum_{i=1}^n \hat{u}_i^2 \ \overset{p}{\to}\ \sigma^2_u$

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 66 / 103

Large Sample Inference

Proof.

The proof is similar to the small-sample normality proof:

$\hat{\beta}_1 = \beta_1 + \sum_{i=1}^n \frac{(x_i - \bar{x})}{SST_x} u_i$

$\sqrt{n}(\hat{\beta}_1 - \beta_1) = \frac{\sqrt{n} \cdot \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x}) u_i}{\frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2}$

where the numerator converges in distribution to a normal random variable by the CLT. Rearranging the terms then yields the formula given in the theorem.

For a more formal and detailed proof, see Wooldridge Appendix 5A.

We need homoskedasticity (Assumption V) for this result, but we do not need normality (Assumption VI).

The result implies that asymptotically our usual standard errors, t-values, p-values, and CIs remain valid even without the normality assumption! We just proceed as in the small sample case where we assume normality.

It turns out that, given Assumptions I–V, the OLS asymptotic variance is also the lowest in class (asymptotic Gauss-Markov).

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 67 / 103


Testing and Confidence Intervals

Three ways of drawing statistical inferences from a regression:

1 Point Estimation: Consider the sampling distribution of our point estimator $\hat{\beta}_1$ to infer $\beta_1$
2 Hypothesis Testing: Consider the sampling distribution of a test statistic to test hypotheses about $\beta_1$ at the $\alpha$ level
3 Interval Estimation: Consider the sampling distribution of an interval estimator to construct intervals that will contain $\beta_1$ at least $100(1 - \alpha)\%$ of the time

For 2 and 3, we need to know more than just the mean and the variance of the sampling distribution of $\hat{\beta}_1$. We need to know the full shape of the sampling distribution of our estimators $\hat{\beta}_0$ and $\hat{\beta}_1$.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 68 / 103


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 69 / 103


Null and alternative hypotheses review

Null: $H_0: \beta_1 = 0$
  - The null is the straw man we want to knock down.
  - With regression, this is almost always the null of no relationship.

Alternative: $H_a: \beta_1 \neq 0$
  - The claim we want to test
  - Almost always "some effect"
  - Could do a one-sided test, but you shouldn't

Notice these are statements about the population parameters, not the OLS estimates.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 70 / 103


Test statistic

Under the null of $H_0: \beta_1 = c$, we can use the following familiar test statistic:

$T = \frac{\hat{\beta}_1 - c}{\widehat{SE}[\hat{\beta}_1]}$

As we saw in the last section, if the errors are conditionally Normal, then under the null hypothesis we have:

$T \sim t_{n-2}$

In large samples, we know that $T$ is approximately (standard) Normal, but we also know that $t_{n-2}$ is approximately (standard) Normal in large samples too, so this statement works there as well, even if Normality of the errors fails.

Thus, under the null, we know the distribution of $T$ and can use that to formulate a rejection region and calculate p-values, as in the sketch below.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 71 / 103
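In R, this T for the c = 0 null is exactly the "t value" that summary(lm()) reports; for any other c it takes one line by hand. A sketch (y and x are hypothetical data vectors):

# The t statistic by hand for a general null beta1 = c.
s  <- coef(summary(lm(y ~ x)))     # y, x: hypothetical data vectors
c0 <- 0                            # null value; c0 = 0 reproduces summary()
T.obs <- (s["x", "Estimate"] - c0) / s["x", "Std. Error"]
T.obs                              # equals s["x", "t value"] when c0 = 0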


Rejection region

Choose a level of the test, $\alpha$, and find rejection regions that correspond to that value under the null distribution:

$P(-t_{\alpha/2, n-2} < T < t_{\alpha/2, n-2}) = 1 - \alpha$

This is exactly the same as with sample means and sample differences in means, except that the degrees of freedom on the t distribution have changed.

[Figure: standard Normal density with two-sided rejection regions; retain between $-t = -1.96$ and $t = 1.96$, reject in each 0.025 tail]

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 72 / 103
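The cutoffs come straight from the t quantile function, and they approach the familiar 1.96 of the Normal as n grows. A minimal R sketch:

# Two-sided critical values at alpha = 0.05 for different sample sizes.
alpha <- 0.05
qt(1 - alpha / 2, df = 10 - 2)      # n = 10: about 2.31
qt(1 - alpha / 2, df = 1000 - 2)    # n = 1000: about 1.96
qnorm(1 - alpha / 2)                # Normal benchmark: 1.96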


p-value

The interpretation of the p-value is the same: the probability of seeing a test statistic at least this extreme if the null hypothesis were true.

Mathematically:

$P\left(\left|\frac{\hat{\beta}_1 - c}{\widehat{SE}[\hat{\beta}_1]}\right| \geq |T_{obs}|\right)$

If the p-value is less than $\alpha$, we would reject the null at the $\alpha$ level.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 73 / 103
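Computationally this is one call to the t CDF. A sketch in R (the observed statistic and sample size are hypothetical numbers for illustration):

# Two-sided p-value from an observed t statistic (hypothetical values).
T.obs <- 2.4; n <- 32
2 * pt(-abs(T.obs), df = n - 2)   # about 0.023, so reject at alpha = 0.05
# This is the "Pr(>|t|)" column that summary(lm(...)) prints for c = 0.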


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 74 / 103


Confidence intervals

Very similar to the approach with sample means. By the sampling distribution of the OLS estimator, we know that we can find t-values such that:

P\left( -t_{\alpha/2,n-2} \leq \frac{\hat{\beta}_1 - \beta_1}{\widehat{SE}[\hat{\beta}_1]} \leq t_{\alpha/2,n-2} \right) = 1 - \alpha

If we rearrange this as before, we can get an expression for confidence intervals:

P\left( \hat{\beta}_1 - t_{\alpha/2,n-2}\,\widehat{SE}[\hat{\beta}_1] \leq \beta_1 \leq \hat{\beta}_1 + t_{\alpha/2,n-2}\,\widehat{SE}[\hat{\beta}_1] \right) = 1 - \alpha

Thus, we can write the confidence intervals as:

\hat{\beta}_1 \pm t_{\alpha/2,n-2}\,\widehat{SE}[\hat{\beta}_1]

We can derive these for the intercept as well:

\hat{\beta}_0 \pm t_{\alpha/2,n-2}\,\widehat{SE}[\hat{\beta}_0]

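A quick check that the formula matches R's built-in interval (reusing the hypothetical x, y, and fit objects from the p-value sketch above):

```r
## 95% CI for the slope, by hand and via confint()
est  <- coef(summary(fit))["x", "Estimate"]
se   <- coef(summary(fit))["x", "Std. Error"]
tval <- qt(0.975, df = fit$df.residual)
c(est - tval * se, est + tval * se)
confint(fit, "x", level = 0.95)   # same interval
```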

CIs Simulation Example

Returning to our simulation example, we can simulate the sampling distributions of the 95% confidence interval estimates for \hat{\beta}_0 and \hat{\beta}_1.

[Figure: one simulated dataset with its fitted line, and the sampling distributions of the 95% interval estimates for \hat{\beta}_0 and \hat{\beta}_1 across repeated samples.]


When we repeat the process over and over, we expect 95% of the confidence intervals to contain the true parameters. Note that, in a given sample, one CI may cover its true value and the other may not.

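A compact simulation in this spirit (a sketch: the true values β0 = 1 and β1 = −1, the design, and the sample size are invented for illustration):

```r
## Coverage of 95% CIs for the slope across repeated samples
set.seed(2)
covered <- replicate(2000, {
  x  <- runif(50, 0, 10)
  y  <- 1 - 1 * x + rnorm(50, sd = 2)   # true beta0 = 1, beta1 = -1
  ci <- confint(lm(y ~ x))["x", ]
  ci[1] <= -1 && -1 <= ci[2]
})
mean(covered)  # should be close to 0.95
```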

Prediction error

How do we judge how well a line fits the data?

One way is to find out how much better we do at predicting Y once we include X in the regression model.

Prediction errors without X: the best prediction is the mean, so our squared errors, or the total sum of squares (SS_tot), would be:

SS_{tot} = \sum_{i=1}^{n} (Y_i - \bar{Y})^2

Once we have estimated our model, we have new prediction errors, which are just the sum of the squared residuals, or SS_res:

SS_{res} = \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2

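Both quantities are one-liners in R (again reusing the hypothetical x, y, and fit from the earlier sketch):

```r
## Total and residual sums of squares for the fitted model
ss_tot <- sum((y - mean(y))^2)   # prediction error using the mean alone
ss_res <- sum(residuals(fit)^2)  # prediction error using the fitted line
c(SStot = ss_tot, SSres = ss_res)
```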

Sum of Squares

[Figures: scatterplots of log GDP per capita growth against log settler mortality; one marks the total prediction errors (vertical deviations from the mean of Y), the other marks the residuals (vertical deviations from the fitted line).]


R-square

By construction, the residuals have to be smaller, in total squared terms, than the deviations from the mean, so we might ask the following: how much lower is the SS_res compared to the SS_tot?

We quantify this question with the coefficient of determination, or R^2:

R^2 = \frac{SS_{tot} - SS_{res}}{SS_{tot}} = 1 - \frac{SS_{res}}{SS_{tot}}

This is the fraction of the total prediction error eliminated by providing information on X.

Alternatively, this is the fraction of the variation in Y that is "explained by" X.

R^2 = 0 means no linear relationship.

R^2 = 1 implies a perfect linear fit.

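Continuing the running sketch, the definition reproduces what lm() reports:

```r
## R-squared from its definition, compared with lm()'s own calculation
r2_manual <- 1 - ss_res / ss_tot                     # ss_res, ss_tot from above
c(manual = r2_manual, lm = summary(fit)$r.squared)   # identical
```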

Is R-squared useful?

[Figures: scatterplots of y against x with fitted lines and R-squared values of 0.66, 0.96, and 0.27, followed by a four-panel plot of quite different X–Y patterns on common axes, suggesting that R-squared alone is a poor summary of how well a line describes the data.]

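R's built-in anscombe data makes the point concrete: four x–y pairs with nearly identical R-squared values, whose plots look nothing alike.

```r
## Anscombe's quartet (built into R): nearly identical R-squared,
## very different data patterns -- plot() each pair to see
sapply(1:4, function(j) {
  f <- lm(anscombe[[paste0("y", j)]] ~ anscombe[[paste0("x", j)]])
  summary(f)$r.squared
})
## all four values are close to 0.67
```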

Why r^2?

To calculate r^2, we need to think about the following two quantities:

1 TSS: Total sum of squares
2 SSE: Sum of squared errors

TSS = \sum_{i=1}^{n} (y_i - \bar{y})^2

SSE = \sum_{i=1}^{n} \hat{u}_i^2

r^2 = 1 - \frac{SSE}{TSS}


[Figures: scatterplots of CLlib against SCscore for Supreme Court justices from Black through Thomas; the first marks the deviations that make up the TSS, the second adds the SSE around the fitted line, with 1 − SSE/TSS = 0.45.]


Derivation

\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} \{\hat{u}_i + (\hat{y}_i - \bar{y})\}^2

= \sum_{i=1}^{n} \{\hat{u}_i^2 + 2\hat{u}_i(\hat{y}_i - \bar{y}) + (\hat{y}_i - \bar{y})^2\}

= \sum_{i=1}^{n} \hat{u}_i^2 + 2\sum_{i=1}^{n} \hat{u}_i(\hat{y}_i - \bar{y}) + \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2

= \sum_{i=1}^{n} \hat{u}_i^2 + \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2

(the cross term drops out because the OLS residuals sum to zero and are uncorrelated with the fitted values)

TSS = SSE + RegSS

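The decomposition is easy to verify numerically (hypothetical x, y, and fit as in the earlier sketch):

```r
## Numerical check of TSS = SSE + RegSS
tss   <- sum((y - mean(y))^2)
sse   <- sum(residuals(fit)^2)
regss <- sum((fitted(fit) - mean(y))^2)
all.equal(tss, sse + regss)  # TRUE
```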

Coefficient of Determination

We can divide each side by the TSS:

\frac{SSE}{TSS} + \frac{RegSS}{TSS} = \frac{TSS}{TSS}

\frac{SSE}{TSS} + \frac{RegSS}{TSS} = 1

\frac{RegSS}{TSS} = 1 - \frac{SSE}{TSS} = r^2

r^2 is a measure of how much of the variation in Y is accounted for by X.


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities


OLS Assumptions Summary

(Originally a diagram of nested targets; each goal below requires the assumptions listed, building cumulatively.)

Identification / Data Description:
  Variation in X

Unbiasedness / Consistency:
  Variation in X; Random Sampling; Linearity in Parameters; Zero Conditional Mean

Gauss–Markov (BLUE) / Asymptotic Inference (z and χ²):
  Variation in X; Random Sampling; Linearity in Parameters; Zero Conditional Mean; Homoskedasticity

Classical LM / Small-Sample Inference (t and F):
  Variation in X; Random Sampling; Linearity in Parameters; Zero Conditional Mean; Homoskedasticity; Normality of Errors


What Do the Regression Coefficients Mean Substantively?

So far, we have learned the statistical properties of the OLS estimator.

However, these properties do not tell us what types of inference we can draw from the estimates.

Three types of inference:

1 Descriptive inference:
  - Summarizing sample data by drawing the "best fitting" line
  - No inference about the underlying population intended
  - Assumption required: III (variation in X) only

2 Predictive inference:
  - Inference about a new observation coming from the same population
  - Example: Wage (Y) and education (X): "What's my best guess about the wage of a new worker who only has high school education?" (a short R sketch follows this discussion)
  - Assumptions required: III and II (random sampling)
  - Assumption desired: I (linearity)


3 Causal inference:
  - Inference about counterfactuals, i.e. hypothetical interventions to the same units
  - Example: Wage (Y) and education (X): "What would my current wage be if I only had high school education?"
  - Assumptions required (under the current framework): I, II, III and IV (zero conditional mean)
  - In this sequence we will continue to discuss causal identification assumptions

Notice in the wage example how the omission of unobserved ability from the equation does or does not affect each type of inference.

Implications:
  - When Assumptions I–IV are all satisfied, we can estimate the structural parameters β without bias and thus make causal inference.
  - However, we can make predictive inference even if some assumptions are violated.

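As promised above, a minimal R sketch of predictive inference; the wage and education numbers are entirely made up for illustration.

```r
## Hypothetical wage (dollars/hour) and education (years) data
educ <- c(10, 12, 12, 14, 16, 16, 18, 20)
wage <- c(11, 14, 13, 17, 21, 20, 24, 28)
wfit <- lm(wage ~ educ)

## Best guess for a new worker with a high school education (12 years)
predict(wfit, newdata = data.frame(educ = 12))
```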

OLS as a Best Linear Predictor (Review of BLUE)

Suppose that we want to predict the values of Y given observed X values.

Suppose further that we've decided to use a linear predictor β0 + β1X (but not necessarily to assume a true linear relationship in the population).

How do we choose a good predictor? A popular criterion is mean squared error:

MSE = E[(Y_i - \hat{Y}_i)^2] = E[(Y_i - \beta_0 - \beta_1 X_i)^2] = E[u_i^2]

The smaller a predictor makes the MSE, the better.

Now, note that the sample version of the MSE is \frac{1}{n}\sum_{i=1}^{n} \hat{u}_i^2.

Recall how we got the OLS estimator: we minimized \sum_{i=1}^{n} \hat{u}_i^2!

This implies that OLS is the best linear predictor in terms of MSE.

Which assumptions did we use to get this result?
  - Needed: Assumptions II (random sampling) and III (variation in X)
  - Not needed: Assumptions I (linearity) and IV (zero conditional mean)

Note that Assumption I would make OLS the best, not just the best linear, predictor, so it is certainly desired.

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 90 / 103

Page 391: Week 5: Simple Linear Regression - Princeton University · Week 5: Simple Linear Regression Brandon Stewart1 Princeton October 10, 12, 2016 1These slides are heavily in uenced by

OLS as a Best Linear Predictor (Review of BLUE)Suppose that we want to predict the values of Y given observed X values

Suppose further that we’ve decided to use a linear predictor β0 + β1X (butnot necessarily assume a true linear relationship in the population)

How to choose a good predictor? A popular criterion is mean squared error:

MSE = E[(Yi − Yi )

2]

= E[(Yi − β0 − β1Xi )

2]

= E[u2i

]The smaller a predictor makes MSE , the better.

Now, note that the sample version of MSE = 1n

∑ni=1 u

2i

Recall how we got the OLS estimator; we minimized∑n

i=1 u2!

This implies that OLS is the best linear predictor in terms of MSE

Which assumptions did we use to get this result?

I Needed: Assumptions II (random sampling) and III (variation in X )I Not needed: Assumptions I (linearity) and IV (zero cond. mean)

Note that Assumption I would make OLS the best, not just best linear,predictor, so it is certainly desired

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 90 / 103

Page 392: Week 5: Simple Linear Regression - Princeton University · Week 5: Simple Linear Regression Brandon Stewart1 Princeton October 10, 12, 2016 1These slides are heavily in uenced by

OLS as a Best Linear Predictor (Review of BLUE)Suppose that we want to predict the values of Y given observed X values

Suppose further that we’ve decided to use a linear predictor β0 + β1X (butnot necessarily assume a true linear relationship in the population)

How to choose a good predictor? A popular criterion is mean squared error:

MSE = E[(Yi − Yi )

2]

= E[(Yi − β0 − β1Xi )

2]

= E[u2i

]The smaller a predictor makes MSE , the better.

Now, note that the sample version of MSE = 1n

∑ni=1 u

2i

Recall how we got the OLS estimator; we minimized∑n

i=1 u2!

This implies that OLS is the best linear predictor in terms of MSE

Which assumptions did we use to get this result?

I Needed: Assumptions II (random sampling) and III (variation in X )I Not needed: Assumptions I (linearity) and IV (zero cond. mean)

Note that Assumption I would make OLS the best, not just best linear,predictor, so it is certainly desired

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 90 / 103

Page 393: Week 5: Simple Linear Regression - Princeton University · Week 5: Simple Linear Regression Brandon Stewart1 Princeton October 10, 12, 2016 1These slides are heavily in uenced by

OLS as a Best Linear Predictor (Review of BLUE)Suppose that we want to predict the values of Y given observed X values

Suppose further that we’ve decided to use a linear predictor β0 + β1X (butnot necessarily assume a true linear relationship in the population)

How to choose a good predictor? A popular criterion is mean squared error:

MSE = E[(Yi − Yi )

2]

= E[(Yi − β0 − β1Xi )

2]

= E[u2i

]The smaller a predictor makes MSE , the better.

Now, note that the sample version of MSE = 1n

∑ni=1 u

2i

Recall how we got the OLS estimator; we minimized∑n

i=1 u2!

This implies that OLS is the best linear predictor in terms of MSE

Which assumptions did we use to get this result?

I Needed: Assumptions II (random sampling) and III (variation in X )I Not needed: Assumptions I (linearity) and IV (zero cond. mean)

Note that Assumption I would make OLS the best, not just best linear,predictor, so it is certainly desired

Stewart (Princeton) Week 5: Simple Linear Regression October 10, 12, 2016 90 / 103


State Legislators and African American Population

Interpretations of increasing quality:

> summary(lm(beo ~ bpop, data = D))

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)  -1.31489    0.32775  -4.012 0.000264 ***
bpop          0.35848    0.02519  14.232  < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.317 on 39 degrees of freedom
Multiple R-squared: 0.8385, Adjusted R-squared: 0.8344
F-statistic: 202.6 on 1 and 39 DF, p-value: < 2.2e-16

1 "African American population is statistically significant (p < 0.001)"

2 "Percent African American legislators increases with African American population (p < 0.001)"

3 "A one percentage point increase in the African American population causes a 0.35 percentage point increase in the fraction of African American state legislators (p < 0.001)."

4 "A one percentage point increase in the African American population is associated with a 0.35 percentage point increase in the fraction of African American state legislators (p < 0.001)."
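A short sketch of how interpretation 4 can be built directly from the fitted object, so the reported numbers cannot drift from the output; it assumes the data frame D with columns beo and bpop shown above.

fit <- lm(beo ~ bpop, data = D)
est <- coef(summary(fit))["bpop", ]   # Estimate, Std. Error, t value, Pr(>|t|)
sprintf("A one percentage point increase in the African American population is associated with a %.2f percentage point increase in the fraction of African American state legislators (p %s).",
        est["Estimate"],
        if (est["Pr(>|t|)"] < 0.001) "< 0.001" else sprintf("= %.3f", est["Pr(>|t|)"]))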

Ground Rules: Interpretation of the Slope

1 Give a short but precise interpretation of the exact meaning of the value of the slope coefficient, referring to the concepts, units, direction, and magnitude.

I The estimate suggests that one additional hour of reading the textbook is associated with 10 additional points on the exam (a sketch of this follows below).

2 Do not resort to unwarranted causal language: say "predicts", "associated with", "expected difference", or "correlated with" instead of "causes", "leads to", or "affects".

3 Give a short but precise interpretation of statistical significance.

4 Give a short but precise interpretation of practical significance. You want to discuss the magnitude of the slope in your particular application.
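A tiny sketch of rule 1 in practice, using hypothetical exam data (the data frame exams and its columns score and hours are invented for illustration): the sentence is generated from the model, so the units and magnitude come straight from the fit.

fit <- lm(score ~ hours, data = exams)   # exams, score, hours are hypothetical
b1  <- coef(fit)["hours"]
sprintf("The estimate suggests that one additional hour of reading the textbook is associated with %.1f additional points on the exam.", b1)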


Reporting Statistical Significance

A reasonable way to think about statistical significance is to think about the precision of the estimates

If the slope is large substantively but just barely fails to reach conventional levels of significance, it may still be interesting.

Examples:

I We reject the null hypothesis that the slope is zero at the .05 level
I The slope coefficient suggests that a one unit change in X is associated with a 10 unit change in Y (p < .02).
I The slope coefficient is fairly precisely estimated, with the 95% confidence interval ranging from 8 to 10 (a sketch of this follows below).
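A minimal sketch of the precision-focused phrasing in the last example, assuming a fitted model object fit such as the one from the state-legislators slide.

ci <- confint(fit, level = 0.95)["bpop", ]   # 95% confidence interval for the slope
sprintf("The slope is fairly precisely estimated; the 95%% confidence interval runs from %.2f to %.2f.",
        ci[1], ci[2])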


Reporting Substantive Significance

Statistical significance and substantive significance are not the same: with a large enough sample size, even truly microscopic differences can be statistically significant!

To comment on substantive magnitude you should set up a "plausible" contrast, keeping in mind (1) the distributions of the variables and (2) the substantive context (a code sketch follows the examples).

Examples:

Turnout on education: The slope estimate suggests that a high school dropout (10th percentile of the education distribution) on average has a .3 lower probability of voting compared to a college graduate (75th percentile of schooling). The average probability of voting among HSDs is .2, so this corresponds to a 150% increase for an average HSD.

Democracy on GDP: Going from the 25th to the 75th percentile of the GDP distribution (e.g. comparing Ghana and Spain) is associated with a 10 point increase in the average polity index, which corresponds to an increase from the 25th to the 52nd percentile of the democracy distribution.

Earnings on schooling: The standard deviation is 2.5 years for schooling and $50,000 for annual earnings. Thus, the slope estimates suggest that a one standard deviation increase in schooling is associated with a .8 standard deviation increase in earnings.
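A hedged sketch of the percentile-contrast idea behind these examples (all names here — dat, turnout, educ — are hypothetical): compare the predicted outcome at the 25th and 75th percentiles of the regressor rather than at an arbitrary one-unit change.

fit  <- lm(turnout ~ educ, data = dat)       # hypothetical data frame and columns
q    <- quantile(dat$educ, c(0.25, 0.75))    # a plausible contrast in X
pred <- predict(fit, newdata = data.frame(educ = q))
unname(diff(pred))                           # the substantive contrast, in units of Y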

Next Week

OLS with two regressors

Omitted variables and multicollinearity

Dummy variables, interactions, polynomials

Reading:
I Fox Chapter 5.2.1 (Least Squares with Two Variables)
I Fox Chapter 7.1-7.3 (Dummy-Variable Regression, Interactions)


1 Mechanics of OLS

2 Properties of the OLS estimator

3 Example and Review

4 Properties Continued

5 Hypothesis tests for regression

6 Confidence intervals for regression

7 Goodness of fit

8 Wrap Up of Univariate Regression

9 Fun with Non-Linearities


Fun with Non-Linearities

The linear regression model can accommodate non-linearity in X (but not in β)

We do this by first transforming X appropriately

A useful transformation when variables are positive and right-skewed is the (natural) logarithm

The log transformation changes the interpretation of β1 (a sketch follows below):

I Regress log(Y) on X −→ 100 × β1 approximates the percent increase in Y associated with a one unit increase in X
I Regress Y on log(X) −→ β1/100 approximates the increase in Y associated with a one percent increase in X
I Note that these approximations work only for small increments
I In particular, they do not work when X is a discrete random variable
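A minimal sketch with simulated, right-skewed data (the data-generating process is invented for the demo) showing the two fits and their percent interpretations, including the exact version of the log-level one.

set.seed(2)
x <- rexp(500, rate = 1 / 5)                     # positive, right-skewed regressor
y <- exp(0.5 + 0.02 * x + rnorm(500, sd = 0.3))  # positive outcome

loglevel <- lm(log(y) ~ x)
loglog   <- lm(log(y) ~ log(x))

100 * coef(loglevel)["x"]             # approx. percent change in Y per unit of X
100 * (exp(coef(loglevel)["x"]) - 1)  # exact percent change; close here only because the slope is small
coef(loglog)["log(x)"]                # approx. percent change in Y per percent change in X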


Example from the American War Library

[Scatterplot omitted: X = Numbers of American Soldiers Killed in Action, Y = Numbers of American Soldiers Wounded in Action, one labeled point per conflict, ranging from small engagements (e.g. the Barbary Wars, Grenada) to the Korean War, the Civil War, World War I, and World War II.]

β1 = 1.23 −→ One additional soldier killed predicts 1.23 additional soldiers wounded on average


Wounded (Scale in Levels)

[Dot plot omitted: Number of Wounded by conflict on a linear scale from 0 to 6e+05; the largest conflicts (World War II, the Civil War, World War I, Vietnam, Korea) dominate, and the rest are compressed near zero.]

Wounded (Logarithmic Scale)

[Dot plot omitted: Log(Number of Wounded) by conflict, with log values from 2 to 12, i.e. roughly 10 to 1,000,000 wounded; the conflicts now spread evenly across the axis.]

Regression: Log-Level

[Scatterplot omitted: X = Numbers of American Soldiers Killed in Action, Y = log(Numbers of American Soldiers Wounded in Action), one labeled point per conflict.]

β1 = 0.0000237 −→ One additional soldier killed predicts a 0.0023 percent increase in the number of soldiers wounded on average (see the sketch below on when this approximation holds)
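A quick arithmetic check of the small-increment caveat, using the slope reported on this slide: the exact percent change implied by an increase of d in X is 100 × (exp(β1 d) − 1), which 100 × β1 d approximates well only when β1 d is small.

b1 <- 0.0000237
100 * b1 * 1               # one additional death: about 0.002%, approximation is fine
100 * (exp(b1 * 1e5) - 1)  # 100,000 additional deaths, exact: ~970%
100 * b1 * 1e5             # same increment, linear approximation: 237% (far off)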


Regression: Log-Log

[Scatterplot omitted: X = Log(Numbers of American Soldiers Killed in Action), Y = Log(Numbers of American Soldiers Wounded in Action), one labeled point per conflict; the relationship is close to linear on this scale.]

β1 = 0.797 −→ A one percent increase in deaths predicts a 0.797 percent increase in the wounded on average

