
Page 1: Dr Gary Marks - Australian Catholic University

Teacher effects on student outcomes; What we know from Australian and Overseas studies

Gary N Marks

Tuesday, 11 October 2016


Page 2: Dr Gary Marks - Australian Catholic University

Teachers have a strong influence on student outcomes

There are “effective” and “not-so-effective” teachers

“Effective” and “not-so-effective” teachers have stable effects on students

No student heterogeneity in their response to teachers.

The characteristics and practices of “effective” teachers can be identified statistically.

This information can be used to improve the teaching of “not-so-effective” teachers thereby substantially improving student outcomes for all students.

The characteristics and practices of “effective” teachers are well-established.


Prevailing Assumptions

Page 3: Dr Gary Marks - Australian Catholic University

Age, gender and other demographic characteristics

Personal characteristics, e.g. charisma, ability to maintain discipline

Test scores, ATAR rank, performance at university etc. (teacher ability)

Qualifications in subject area

Post-graduate degrees

Experience (fledgling vs. burnt-out teachers)

Pay and conditions

Pedagogical style

Professional Development


Plethora of Possible Teacher Effects

Page 4: Dr Gary Marks - Australian Catholic University

Higher ATAR cut-offs

Reform of teaching courses to focus more on pedagogy and subject knowledge rather than other areas

Increase teacher salaries

Pay bonuses for “effective” teachers

Higher qualifications (e.g. master’s degrees)

Greater authority for principals or systems to dismiss “ineffective” teachers

Smaller classes

Teaching assistants

More professional development

Improved facilities in schools (offices), administrative support


Policy Prescriptions

Page 5: Dr Gary Marks - Australian Catholic University

1. Outcome measure?

1. Grades (assumed to be independent of teachers)

2. Comparison with normed tests (e.g. PATMATHS)

3. Cognitive Test scores (e.g. PISA, TIMSS, NAPLAN)

4. Year 12 performance (study score in particular subject)

2. Most studies do not have sufficient numbers of students to identify teacher effects within schools; the exception is NAPLAN

3. Identifying “effective” teachers more difficult than identifying “effective” schools


Difficulties with Identifying Teacher Effects 1

Page 6: Dr Gary Marks - Australian Catholic University

1. For NAPLAN in primary schools, which is administered in May every two years, which teacher is of most interest?

1. Teacher of current year (but only February to May)

2. Teacher of in-between year

3. Teacher two years earlier, May to December

4. More complex if more than one teacher each school year (teachers leave, take time off etc.)

2. For high school students, it is even more complicated to identify teacher effects

1. Year 7 NAPLAN is only 3 months into high school

2. Reading, Spelling, Writing & Grammar are not formally taught; the tests assess foundations

3. Different teachers for different subjects


Difficulties with Identifying Teacher Effects 2

Page 7: Dr Gary Marks - Australian Catholic University

1. Teacher effects must be isolated from academic characteristics of their students, so cannot simply compare student outcomes between teachers

2. Need an appropriate research design

1. Control for SES

2. Control for prior achievement

3. Hierarchical linear models (3-level model: students, classes, schools)

4. Fixed Effects for teachers, and/or students

5. Random assignment of students to different classes

6. Twin studies, identical twins in different classes

7. Distinguish between class (e.g. peer) and teacher effects

Difficulties with Identifying Teacher Effects 3

Page 8: Dr Gary Marks - Australian Catholic University

1. Students’ test scores (e.g. NAPLAN performance) are very stable, over-time correlations range from 0.5 to 0.8.

1. More stable at higher levels of schooling

2. Very difficult to isolate teacher (and school) effects in this context

3. Inherent differences between students

1. Ability, personality, motivation

2. Twin and other family studies have established large genetic components to student achievement

3. Shared genes across different subjects, as well as subject-specific genes for, say, mathematics

4. In addition, genetic component to student growth


Difficulties with Identifying Teacher Effects 4

Page 9: Dr Gary Marks - Australian Catholic University

1. Need an appropriate research design

1. Control for SES

2. Control for prior achievement

3. Fixed Effects for teachers, and/or students

4. Random assignment of students to different classes

5. Twin studies, identical twins in different classes

6. Distinguish between class (e.g. peer) and teacher effects

2. The irony is that if prior student achievement is not controlled, positive teacher (or, for that matter, school) effects are more likely to be identified

3. The implication is that observed and adjusted teacher effects may generate contradictory conclusions, e.g. for a class of high achievers.


Difficulties with Identifying Teacher Effects 5

Page 10: Dr Gary Marks - Australian Catholic University

Student growth percentile (SGP) models

Value-added models (VAMs).

The two differ in estimation method and control variables

Teacher effects from value-added and SGP models are highly correlated; Wright (2010) provides an estimate of 0.9.

For Australia the correlation was 0.95 (Houng & Justman, 2013, p. 6). Different VAM models produce very similar but not identical results (Goldhaber, Walch, & Gabele, 2013).


Two Ways of Identifying Teacher Effects: SGP & VAM

Page 11: Dr Gary Marks - Australian Catholic University

SGPs are generated using a nonparametric quantile regression model in which student achievement in one period is assumed to be a function of one or more years of prior achievement (Koenker, 2005).

SGPs are descriptive measures of growth, designed to provide an easily-communicated metric to teachers and stakeholders.

Do not include SES-type covariates

Do not make strong causal claims, but prior achievement is a strong control


Student growth percentile (SGP)
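A much-simplified sketch of the idea (hypothetical code, not a real SGP system): a student's growth percentile is their current-score percentile rank among students with similar prior achievement. Real implementations condition on prior scores via nonparametric quantile regression (Koenker, 2005); here the conditioning is approximated by crude decile binning.

```python
# Simplified SGP sketch: bin the cohort by prior-achievement decile
# (a stand-in for quantile regression), then report the student's
# percentile rank for the current score within their bin.
from bisect import bisect_left

def growth_percentile(student, cohort, n_bins=10):
    """student = (prior, current); cohort = list of (prior, current)."""
    priors = sorted(p for p, _ in cohort)

    def bin_of(prior):  # decile of prior achievement within the cohort
        rank = bisect_left(priors, prior)
        return min(n_bins - 1, rank * n_bins // len(priors))

    peers = sorted(c for p, c in cohort if bin_of(p) == bin_of(student[0]))
    return round(100 * bisect_left(peers, student[1]) / len(peers))

# Example cohort: prior scores 0..99, each with a spread of current scores
cohort = [(p, p + g) for p in range(100) for g in (-2, -1, 0, 1, 2)]
low = growth_percentile((50, 49), cohort)
high = growth_percentile((50, 53), cohort)
print(low, high)  # same prior score, higher growth => higher percentile
```

Because the metric is a percentile within a prior-achievement group, it is easy to communicate ("grew faster than X% of similar students") without any causal claim about the teacher.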

Page 12: Dr Gary Marks - Australian Catholic University

Value-added models are statistical models that generally try to isolate the contributions to student test scores by individual teachers or schools from factors outside the school’s or teacher’s control. Such factors may include prior test scores, SES, poverty, and race.

Value-added models have long been used by economists focusing on assessing the effects of schooling attributes (class size, teacher credentials, etc.) on student achievement (Hanushek, 2011).

In VAMs, teacher performance estimates are generally derived in one step, and unlike SGPs, performance estimates from VAMs are often intended to be treated as causal because they are estimated based on models that often include student covariates.

Arguments about these models (Darling-Hammond et al. 2012)


Value-added Models (VAM)

Page 13: Dr Gary Marks - Australian Catholic University

Hill and Rowe (1996) concluded that 60% of the variance in student outcomes can be attributed to class (teacher?) effects

Overseas studies also tend to find large variance components attributable to teachers

Nye et al.’s (2004) meta-analysis calculated an average effect size of 0.32 for teacher (class) effects using fixed effects for teachers. This is a large effect.


What is Known: Variance Components

Page 14: Dr Gary Marks - Australian Catholic University

Few value-added Australian teacher studies

Hill and Rowe (1996) concluded that 60% of the variance in student outcomes can be attributed to class (teacher?) effects

Leigh and Mead (2005) argue that moving the average teacher to the 90th percentile would increase student achievement by an average of 0.12 of a standard deviation.

Leigh’s study of Queensland teacher effects estimated that a one standard deviation difference in teacher effectiveness translates to a 0.10 standard deviation difference in test scores (Leigh, 2010).

He concludes that a student with a highly effective teacher (as measured by a value-added metric) could achieve in three-quarters of a year what a student with a less effective teacher could in a full year.


What is Known: Potential Improvement (Australia)

Page 15: Dr Gary Marks - Australian Catholic University

A US study of value-added models (VAM) of teacher effects suggests that a one standard deviation increase in the distribution of teacher effectiveness (for example, from the median to the 84th percentile) translates to an increase of about 0.15 to 0.25 standard deviations of student achievement; magnitudes consistent with findings in other studies (Goldhaber, Walch, & Gabele, 2013).


What is Known: Potential Improvement (US)
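The percentile translation on this slide is just the standard normal CDF: a teacher one standard deviation above the median sits at roughly the 84th percentile of the effectiveness distribution. A one-line check, using the standard error-function formula:

```python
# Standard normal CDF via math.erf: Phi(1) ≈ 0.84, i.e. one standard
# deviation above the median corresponds to about the 84th percentile.
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(100 * normal_cdf(1.0)))  # → 84
```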

Page 16: Dr Gary Marks - Australian Catholic University

Not high; similar to school effects.

Using a large dataset of elementary and middle school math tests in Florida, McCaffrey et al. (2009) estimate several VAMs and find year-to-year correlations generally ranging from 0.2 to 0.5 in elementary schools and 0.3 to 0.6 in middle schools.

Teacher effects tend to fade out (Raudenbush, 2014)

Weak relationships with evaluations by principals and other measures of teaching (Jacob & Lefgren, 2007).


Stability of Teacher Effects

Page 17: Dr Gary Marks - Australian Catholic University

Higher teacher test scores are associated with higher student math and reading achievement, with far larger effects for math than for reading (Clotfelter et al., 2007).

Darling-Hammond (2000, p. 3) notes that the relationships between academic ability and teacher effectiveness are most often small and are often not statistically significant.

Where teacher test scores are found to be (moderately) important, they are scores based on knowledge and skills acquired through teacher education courses and certification tests (e.g. Clotfelter et al., 2007), not academic performance for college entry.


Cognitive Skills

Page 18: Dr Gary Marks - Australian Catholic University

Greater experience is associated with stronger teacher effects.

Most of the gains occur within the first few years (Clotfelter, Ladd, & Vigdor, 2007)

Reasonably established finding that novice teachers are less effective.

The OECD (2012) argues that beginning teachers require more support which would reduce attrition as well as improve their performance.


Experience

Page 19: Dr Gary Marks - Australian Catholic University

Rivkin et al. (2005) summarize the research on the predictive power of master’s degree completion and find little consistent evidence that graduate degree attainment can identify effective teachers.

Similar results are reported in other studies.

In the large North Carolina dataset, having a graduate degree exerts no statistically significant effect on student achievement, and in some cases the coefficient is negative (Clotfelter et al., 2007).

Negative effect for professional development (Harris & Sass, 2011)


Qualifications & PD

Page 20: Dr Gary Marks - Australian Catholic University

Theoretically, better teachers produce better outcomes

Large between-teacher differences but no consensus on what makes an “effective” teacher.

Very difficult to accurately estimate teacher effects

Not very stable

The only established finding is that neophytes are less effective

No strong evidence for what makes an effective teacher

Much anecdotal evidence

No clear policy direction from the literature


Summary

Page 21: Dr Gary Marks - Australian Catholic University

Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2007). Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review, 26, 673-682.

Darling-Hammond, L. (2000). Teacher Quality and Student Achievement: A Review of State Policy Evidence. Education Policy Analysis Archives, 8(1).

Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation: Popular modes of evaluating teachers are fraught with inaccuracies and inconsistencies, but the field has identified better approaches. Phi Delta Kappan, 93(6), 8–15.

Goldhaber, D., Walch, J., & Gabele, B. (2013). Does the Model Matter? Exploring the Relationship Between Different Student Achievement-Based Teacher Assessments. Statistics and Public Policy, 1(1), 28-39. doi: 10.1080/2330443X.2013.856169

Hanushek, E. A. (2011). The economic value of higher teacher quality. Economics of Education Review, 30, 466-479. doi:http://dx.doi.org/10.1016/j.econedurev.2010.12.006

Harris, D. N., & Sass, T. R. (2011). Teacher training, teacher quality and student achievement. Journal of Public Economics, 95, 798-812.

Hill, P., & Rowe, K. J. (1996). Multilevel modelling in school effectiveness research. School Effectiveness and School Improvement, 7, 1-34.

Houng, B., & Justman, M. (2013). Comparing Least-Squares Value-Added Analysis and Student Growth Percentile Analysis for Evaluating Student Progress and Estimating School Effects. Melbourne Institute Working Paper Series. Melbourne: Melbourne Institute of Applied Economic and Social Research.

Jacob, B., & Lefgren, L. (2007). Principals as agents: Subjective performance assessment in education. Journal of Labor Economics, 26, 101-136.

Koenker, R. (2005). Quantile Regression. Econometric Society Monographs. Cambridge: Cambridge University Press.

Leigh, A. (2010). Estimating teacher effectiveness from two-year changes in students' test scores. Economics of Education Review, 29(3), 480-488.

Leigh, A., & Mead, S. (2005). Lifting Teacher Performance. Policy Report, April 2005. Progressive Policy Institute.

McCaffrey, D. F., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The Intertemporal Variability of Teacher Effect Estimates. Education Finance and Policy, 4(4), 572-606. doi:10.1162/edfp.2009.4.4.572

Nye, B., Konstantopoulos, S., & Hedges, L. V. (2004). How Large Are Teacher Effects? Educational Evaluation and Policy Analysis, 26(3), 237-257, doi:10.3102/01623737026003237.

OECD (2012). Education at a Glance: OECD Indicators 2012. Organisation for Economic Co-operation and Development.

Raudenbush, S. W. (2014). What do we know about the long-term impacts of teacher value added? Carnegie Foundation for the Advancement of Teaching Knowledge Brief. Retrieved from: http://www.carnegieknowledgenetwork.org/briefs/long-term-impacts/

Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, Schools, and Academic Achievement. Econometrica, 73(2), 417-458

Wright, S. P. (2010). An Investigation of Two Nonparametric Regression Models for Value-Added Assessment in Education. SAS Institute Inc.

References