Value-added assessment
2. Where we are:
Adequate Yearly Progress (AYP)
NCLB measures school performance based on AYP, which calculates the
percentage of various student populations that annually meet or
exceed the state's academic standards, toward a universal fixed
point.
3. No Child Left Behind
Benefits
Set measurable goals
Reduce achievement gap among subpopulations
Federal funding provided
All students should reach proficiency standards in reading and math
within 12 years
4. With NCLB
States can choose their own testing.
Standardized tests measure achievement levels, but not
change.
Systematic evaluation is necessary to determine which educational
approaches are working and which are not.
5. Assumptions
IF . . .
good schools are those that have high test scores . . .
THEN . . .
6. Assumptions
Bad schools are those that have low test scores.
7. Assumptions
Students with higher scores are learning more.
HOWEVER. . .
Schools that admit students with low scores and raise them to
average are better schools than those that admit high scoring
students and graduate them at the same level.
8. Comparison
Under NCLB we make comparisons across cohorts.
Example: This year's 5th-grade reading scores are higher than last
year's. Does that show improvement, or were this year's students
simply ahead of last year's students?
Value-added measures the progress of each child from where they are
as they move through school.
Value-added measures change, not just achievement.
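The cohort-versus-growth distinction can be sketched with made-up numbers (all scores and student names below are hypothetical, for illustration only):

```python
from statistics import mean

# Cohort comparison (NCLB-style): last year's 5th graders vs. this year's.
# These are DIFFERENT students, so the difference may reflect the cohorts,
# not the school.
last_year_5th = [61, 64, 58, 70]
this_year_5th = [68, 72, 66, 74]
print(mean(this_year_5th) - mean(last_year_5th))  # cross-cohort snapshot

# Value-added view: follow the SAME students from 4th to 5th grade
# and look at each child's change.
fourth_grade = {"ana": 66, "ben": 71, "cam": 63, "dee": 72}
fifth_grade = {"ana": 68, "ben": 72, "cam": 66, "dee": 74}
growth = {s: fifth_grade[s] - fourth_grade[s] for s in fourth_grade}
print(growth)  # per-student change, not a cross-cohort comparison
```

The cohort difference and the per-student growth answer different questions; only the second tracks change for individual children.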
9. Purpose of Value-Added Data
The value-added approach focuses on changes in test scores over
time, rather than on a single test score at a given moment.
Inequities of standardized-test results:
Some schools doing a good job of teaching severely disadvantaged
students could be sanctioned.
Some poor schools shepherding top students could be imitated.
Some excellent schools using effective strategies to help high-risk
students may NOT be imitated due to below-average test scores.
(Crane, 2002)
10. Value added
Value-added Assessment
Analysis of student achievement that uses longitudinal student
achievement data to obtain estimates of the impact schools or
teachers have on student learning as measured by test scores. It
measures individual student progress from a relative starting
point.
11. Value-added
Levels the playing field,
Calculates a projected test score for a student in a given grade
and subject, and
Bases the projected score on the student's prior academic
achievement.
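A minimal sketch of the projected-score idea, not the actual EVAAS/TVAAS model: project each student's current score from prior achievement with a simple least-squares line, then treat the gap between actual and projected as the value added. All scores are hypothetical.

```python
from statistics import mean

prior = [55, 60, 70, 80, 90]     # last year's scores (hypothetical)
current = [58, 64, 72, 83, 91]   # this year's scores (hypothetical)

# Fit a least-squares line: projected = intercept + slope * prior.
xbar, ybar = mean(prior), mean(current)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(prior, current))
         / sum((x - xbar) ** 2 for x in prior))
intercept = ybar - slope * xbar

# Residual = actual minus projected: positive means the student grew
# more than prior achievement alone would predict.
projected = [intercept + slope * x for x in prior]
value_added = [round(y, 2) for y in
               (c - p for c, p in zip(current, projected))]
print(value_added)
```

Averaging these residuals over a teacher's students is the intuition behind a teacher-level estimate; the operational models use far more data and mixed-model machinery.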
12. Value-added
Is a way of analyzing test data that measures both teaching and
learning,
Allows us to see how teachers add to a student's knowledge over and
above what their families and communities do, and
Separates student effects from school effects.
13. Value-added
IF
Achievement test data is available for each student each
year,
Highly correlated scales of curricular objectives are used,
The scales have sufficient stretch to measure progress of
previously low and high scoring students, and
Scales have appropriate repeatabilities,
THEN
All data that meet these conditions can be used in a value-added
assessment system, regardless of test source. (Crane, 2002)
14. School effectiveness
Differences in schooling effectiveness are the dominant factor
affecting the speed at which students move up the ramp of the
curriculum. (Sanders, 2000)
15. Teacher quality
An accountability system based on the academic progress of student
populations is one that will hold people accountable for things
over which they have control, rather than for things over which
they do not. Teachers do have primary control of the rate of
academic progress of their students. (Sanders, 2000)
16. Achievement vs. growth
Achievement is best predicted by family income.
Growth is best predicted by the quality of instruction. Teachers
have control over the quality of their instruction.
17. Classroom patterns
Disproportionate attention is given in classrooms to bubble kids:
those who score just below the state standard for NCLB. Students
further from the standard get less help as a result.
Research from Tennessee revealed three classroom patterns: shed,
reverse shed, and tepee. The names reflect the shape of the
slope.
Even though these patterns can be found in any classroom, they
occur disproportionately in some circumstances.
Data are based on three-year averages.
18. Shed pattern
Typically found in low-income communities.
Elementary teachers focus on the low achievers in order to get high
gains, while previously high-achieving students get low gains.
19. Shed pattern
If this pattern continues through elementary, there will be few
high achieving children by middle school.
20. Reverse shed pattern
Opposite focus found in high income communities.
Teachers concentrate on their highest performers.
21. Reverse shed pattern
Low achievers get low gains while previous high achievers continue
to get high gains.
22. Tepee pattern
Prevalent in most classrooms in most communities.
Teachers focus on the average student; also called teaching to the
middle.
23. Tepee pattern
This results in previously average achievers getting high gains and
both previously low and high achievers getting low gains.
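The three patterns can be sketched as a simple classifier: bucket students by prior achievement (low, middle, high) and compare the mean gain of each group. The function name and the gain figures below are hypothetical; the labels follow the slides.

```python
def gain_pattern(low_gain: float, mid_gain: float, high_gain: float) -> str:
    """Label the slope shape of mean gains by prior-achievement group."""
    if low_gain > mid_gain > high_gain:
        return "shed"          # focus on low achievers
    if low_gain < mid_gain < high_gain:
        return "reverse shed"  # focus on high achievers
    if mid_gain > low_gain and mid_gain > high_gain:
        return "tepee"         # teaching to the middle
    return "mixed"

# Hypothetical mean gains for low / middle / high prior achievers:
print(gain_pattern(8.0, 5.0, 2.0))  # shed
print(gain_pattern(2.0, 5.0, 8.0))  # reverse shed
print(gain_pattern(3.0, 7.0, 3.0))  # tepee
```

A flat profile, where all three groups gain equally, would fall outside these labels; the point of the slides is that instruction rarely produces one.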
24. Teaching to the Middle
The biggest impediment to higher achievement is the years in which
individual students aren't making realistic growth. A pattern can
exist whereby lower-scoring students are given opportunities to
make progress, but earlier high-achieving students are held to the
same pace and place as their lower-achieving peers. As this pattern
repeats over the years, it becomes a self-fulfilling prophecy that
these students lose ground. (Sanders, 2002)
25. Teaching to the Middle
Value-added provides valuable data for teachers to use in focusing
their instructional practices with planned intentionality.
As educators, we need to remember that high-achieving students come
from housing projects, remote rural areas, and million-dollar
homes.
26. Teacher variability
Once again, data suggests that teacher quality may be the single
most important in-school factor determining how much students
learn.
(Crane, 2002; Sanders & Rivers, 1996; Jordan, Mendro, &
Weerasinghe, 1997; Haycock, 1998)
27. Teacher Quality
The Sanders and Rivers study found that good teachers raised
students' math scores at least 2-3 years into the future
(regardless of class size).
The Sanders study (1991-1995) found that the top 1/5 of teachers
raised students' achievement test scores 39 percentile points more
than teachers in the bottom 1/5. (These effects were consistent
across heterogeneous and homogeneous grouping, and for excelling as
well as struggling students.)
28. Teacher Quality
Layered model: all student achievement data for 5 years are used
simultaneously over all subjects tested, linking each year's data
to the current and previous teachers. Teachers are given credit for
teaching beyond the standards assessed, and are not held
accountable for one year's poor performance due to student illness,
drug involvement, etc.
Accommodates real-world situations: different modes of instruction
such as self-contained classrooms and team teaching, fractured
student records, and data from non-vertically scaled tests.
29. Teacher Variability
Struggling students aren't randomly distributed into classrooms.
They are found disproportionately in classrooms where they receive
poor instruction.
30. Teacher variability
The best way to improve test scores is to improve teacher quality.
This can be done by applying lessons learned from value-added
analysis to teacher education and professional development.
(Crane, 2002)
31. Teacher variability
Personnel decisions can also be informed by value-added data:
Incentives could be offered.
Help could be offered to teachers who raised test scores the least.
Teachers who raised scores the most could be promoted to
instructional leadership positions.
Over time, the lowest-performing teachers would leave the
profession.
32. Teacher effectiveness
"Of all the factors we study (class size, ethnicity, location,
poverty), these pale to triviality in the face of teacher
effectiveness." (Sanders' address to the Metropolitan School Board
in Nashville, 2010)
33. Sustained growth over time
By having sustained growth over time, we will begin to close the
gap and all students will still be making gains.
34. Sustained Academic Growth
Data is not the enemy but a valuable tool in an educator's toolbox,
one that can be used to fine-tune instruction to provide the best
opportunity for every child. (Sanders, 2000)
35. Advantages of value-added
Tests don't have to be nearly so closely aligned as they do when
judgments are made from a single year of test results.
Reporting of simple test averages is singularly inaccurate.
Disaggregation by socioeconomic strata is an improvement on
raw-score reporting.
Use of regression models is sometimes labeled as value-added.
(Sanders, 2000)
36. Additional information
A value-added analysis is statistical, and there is always a margin
of error; such analyses distinguish performance at either end of
the distribution more accurately than in the middle.
Distinguishing between true learning and teaching to the test: the
latter's effects can be limited by changing the tests frequently.
37. Local example of value-added
Dallas measures individual student progress from a relative
starting point. It measures only the knowledge that the
school/teacher is responsible for imparting, calculating the value
added by each school. (Toch, 2005)
38. Benefits of Value-Added
Provide a more accurate picture of schools', districts', and
states' success or failure.
Generate objective measures of teacher performance that could be
used to raise teacher quality.
Provide useful data for evaluating school reform programs (Crane,
2002).
Use of up to five years' data for each student, with different
subjects tested and item analysis (40+ items per subtest), could
result in over 1,000 items collectively contributing to the
information array for that student; not a snapshot in time
(Sanders, 2000).
39. Provides data
To improve data-driven decision making
To build professional learning communities
To differentiate instruction
To measure through growth, not just achievement
40. Resources
Crane, J. (2002). The promise of value-added testing. Policy
report. http://www.ppionline.org
Doran, H., & Fleischman, S. (2005). Challenges of value-added
assessment. Educational Leadership, 63(3). http://www.ascd.org
Hershberg, T. (2004). Value-added assessment. The Center for
Greater Philadelphia. http://www.cgp.upenn.edu
Holland, R. (2010). How to build a better teacher. Hoover
Institution, Stanford University.
http://www.hoover.org/publications/policy-review/article/7841
Sanders, W. (2000). Value-added assessment from student achievement
data: Opportunities and hurdles. Journal of Personnel Evaluation in
Education, 14(4). http://www.sas.com/govedu/edu/research.html
Schneider, C. (2002). Can value-added assessment raise the level of
student accomplishment? Peer Review. http://www.aacu.org
Toch, T. (2005). Measure for measure. Washington Monthly.
http://www.washingtonmonthly.com/featues/2005/0510.toch.html
Zurawsky, C. (2004). Teachers matter: Evidence from value-added
assessments. Research Points, 2(2).
http://www.aera.researchpoints.com
Working Group on Teacher Quality. (2007). Roundtable discussion on
value-added analysis of student achievement: A summary of findings.
http://www.tapsystem.org/pubs/value_added_roundtable_08.pdf