Test Validity – Revisited Again
Virginia Association of Test Directors Conference, Richmond, VA
October 29, 2008, 1:30 – 2:30
David Mott, Tests for Higher Standards, ROSworks, LLC


Page 1: Test Validity – Revisited Again

Test Validity – Revisited Again

Virginia Association of Test Directors Conference, Richmond, VA
October 29, 2008, 1:30 – 2:30

David Mott
Tests for Higher Standards

ROSworks, LLC

Page 2: Test Validity – Revisited Again

The title of this talk is deliberately repetitive.

It’s like:

“The Department of Redundancy Department”,

or

“Déjà vu, all over again”,

or

“Mount Fujiyama”.

Page 3: Test Validity – Revisited Again

Why is this?

Because validity is seen as the most important quality that a test or assessment can have.

Page 4: Test Validity – Revisited Again

Abstract

Discussions about test validity have been going on for years. One simple prepositional phrase needs to be added to make the ongoing discussion meaningful — “FOR . . .” Test scores can never be either valid or invalid unless the space after FOR is filled in. This is a presentation and a discussion of what needs to go after the FOR, within the world of AYP and NCLB, and why it matters. Data will be presented and there will be a test.

Page 5: Test Validity – Revisited Again

What about me?

• Began as a psychologist.
• Worked at the VA DOE for 17+ years as “Supervisor of various things” in research and testing:
  – Including Supervisor of Test Development
  – Testing Supervisor in charge of VSAP
• Took some time off to decide what I wanted to do with the rest of my life. Then I started writing tests.
• Continued writing tests until we had K–11 in four subject areas: TfHS.
• Then discovered we needed a way to score them and report them, so we created one: ROS.

Page 6: Test Validity – Revisited Again

Back to Validity

• Validity means you are measuring what you think you’re measuring.

• Validity also means that the way you are measuring something supports the purpose you have in measuring — and what inferences you wish to make.

This is the FOR in the abstract.

Page 7: Test Validity – Revisited Again

Two Basic Types of Validity

• The first type is correlational validity:
  – Concurrent validity
  – Predictive validity
  – Convergent validity

• The second type is non-correlational validity (for lack of a better term):
  – Face validity
  – Content validity
  – Diagnostic validity

Page 8: Test Validity – Revisited Again

The correlational types of validity have become less relevant as more attention has been paid to the “non-correlational” validities.

I will call this (modestly), “Mott’s Law of Validities.”

Page 9: Test Validity – Revisited Again

Correlational Validities

Remember:

Page 10: Test Validity – Revisited Again

Concurrent validity

Concurrent validity refers to the degree to which the test scores correlate with other measures of the same underlying thing, measured at the same time. For example, scores on tests administered to students should correlate with their grades, and scores on one reading test should correlate with scores on another reading test.
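
To make this concrete, here is a minimal sketch (with invented scores, not real data) of computing a concurrent validity coefficient as a Pearson correlation between two reading tests taken by the same students:

```python
# Hypothetical illustration of concurrent validity: correlate two tests
# taken by the same students at about the same time. Scores are made up.
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

test_a = [72, 85, 90, 64, 78, 88]   # scores on one reading test
test_b = [70, 82, 94, 60, 75, 91]   # scores on another test, same students
print(f"concurrent validity coefficient: {pearson_r(test_a, test_b):.2f}")
```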

It can have a useful function for us.

Page 11: Test Validity – Revisited Again

Predictive Validity

Predictive validity is concurrent validity with a sense of time: how well does one score predict another in the future? How well will this benchmark test predict scores on the SOL test?
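
As a hedged illustration, predictive validity evidence often takes the form of a least-squares prediction line from the earlier test to the later one. The benchmark and SOL numbers below are invented for the sketch:

```python
# A minimal sketch of predictive validity: fit a least-squares line that
# predicts spring SOL scaled scores from fall benchmark percent-correct.
# All numbers are invented for illustration.
from statistics import mean

benchmark = [55, 62, 70, 48, 80, 66]        # fall benchmark percent-correct
sol       = [380, 410, 450, 350, 510, 425]  # spring SOL scaled scores

mb, ms = mean(benchmark), mean(sol)
slope = (sum((b - mb) * (s - ms) for b, s in zip(benchmark, sol))
         / sum((b - mb) ** 2 for b in benchmark))
intercept = ms - slope * mb

# Predicted SOL score for a student who scores 60 on the benchmark:
print(f"predicted SOL score: {intercept + slope * 60:.0f}")
```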

This is what divisions ask my company and others to supply when they ask us for “the validity evidence” for our tests.

Problems with this?

Page 12: Test Validity – Revisited Again

Convergent Validity

Convergent validity refers to the degree to which a measure is correlated with many other measures that it is theoretically predicted to correlate with. This is sort of concurrent validity on steroids.

Reading ability is the best example of this: DRP units or Lexiles.

We have long had a hint of this because all decent reading tests correlate pretty highly with each other.
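
A minimal sketch of what checking convergent validity might look like, assuming Python 3.10+ for statistics.correlation; the measures and scores are hypothetical:

```python
# Hypothetical convergent-validity check: a new reading test should
# correlate with several established reading measures it is theoretically
# predicted to track. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

new_test = [68, 74, 85, 58, 95]          # invented scores on a new test
established = {
    "DRP units":      [45, 52, 60, 38, 70],
    "Lexile":         [620, 700, 810, 540, 950],
    "Classroom test": [71, 78, 86, 62, 93],
}
for name, scores in established.items():
    print(f"{name}: r = {correlation(new_test, scores):.2f}")
```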

Page 13: Test Validity – Revisited Again

Face Validity

Face validity: If it looks like a duck, and walks like a duck, and quacks like a duck — It’s a . . .

It’s a non-statistical type of validity.

Page 14: Test Validity – Revisited Again

Content Validity

Content validity is the systematic examination of test content to determine whether it covers a good sample of the domain the test is intended to measure. For standards-related tests: Does the test cover all the desired standards? Does it cover them evenly, or in the proportions it should? Does its coverage avoid other standards? How about matching the cognitive levels?
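
One way to make such a coverage check concrete is a simple blueprint tally. This is only a sketch; the standard codes, target proportions, and item counts are hypothetical:

```python
# Hypothetical content-validity tally: compare the proportion of items
# written to each standard against the test blueprint's targets.
blueprint = {"SOL 4.1": 0.25, "SOL 4.2": 0.25, "SOL 4.3": 0.50}  # targets
item_standards = ["SOL 4.1"] * 5 + ["SOL 4.2"] * 10 + ["SOL 4.3"] * 25

n = len(item_standards)
for std, target in blueprint.items():
    actual = item_standards.count(std) / n
    flag = "" if abs(actual - target) <= 0.05 else "  <-- off blueprint"
    print(f"{std}: target {target:.0%}, actual {actual:.0%}{flag}")
```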

Page 15: Test Validity – Revisited Again

Diagnostic Validity

Diagnostic validity is the ability of a test to discriminate accurately between the skills, abilities, etc. an individual has attained versus those not attained. The overall score is not really important, except as context.

Okay, I made up this term, but it is akin to what Jim Popham has called “instructional sensitivity”. His point is that many of our tests don’t have it.

Page 16: Test Validity – Revisited Again

Diagnostic Validity

Why is diagnostic validity important?

Because it is what most testing on the formative end of the spectrum is used for.

This type is not well described statistically. There are no well-known statistics used to measure it.

And there are always questions about how specific we should be, or can be, in our diagnoses:

• Total test score
• Reporting category
• Standard
• Substandard?

Page 17: Test Validity – Revisited Again

Moving back to the FOR

Some common reasons for testing:

• Selection or sorting

• Summative

• Formative

• Motivational

Page 18: Test Validity – Revisited Again

Formative Assessment

Arguments about what is formative vs. summative go on and on. However, most of the research showing that formative assessment is highly effective is based on assessment that is deeply integrated into learning. These are techniques teachers use in the midst of teaching. The teacher and the student are in a very short feedback loop.

Page 19: Test Validity – Revisited Again

From Assessment Manifesto
Rick Stiggins, 2008

“ . . . evidence gathered from dozens of studies conducted around the world consistently reveals a half to a full standard deviation gain in student achievement attributable to the careful management of the classroom assessment process, with the largest gains accruing for struggling learners.” (Black and Wiliam, 1998; Hattie and Timperley, 2007).

Page 20: Test Validity – Revisited Again

Notice!

This really does not mean benchmark tests, mid-semester exams, end-of-course tests, and the like. It may not even mean teacher-made pop quizzes and so on.

It does mean tests in which the student finds out what he does and doesn’t know and can use the information, together with his teacher, to drive his own learning.

Page 21: Test Validity – Revisited Again

M o t i v a t i o n !

This is usually not the stated purpose for assessment, but both the good things and the bad things that come out of testing often relate to this function.

It is the “stakes” part of the high-stakes vs. low-stakes dichotomy.

“High-stakes for whom?” is a good question to ask.

Page 22: Test Validity – Revisited Again

Motivation — The Key Factor

Stiggins makes a number of points about motivation in his Manifesto.

• Tests cannot motivate students if they have given up.

• Too-hard tests can make students give up.

• Too-hard tests (not properly targeted) can perpetuate the cycle of failure because they reinforce the student’s loss of self-efficacy.

Page 23: Test Validity – Revisited Again

So . . .

• If we give too-hard (mistargeted) tests we are directly part of the problem.

• The tests are invalid for any use because the students don’t try.

• We must avoid this vicious cycle.

I say we because I mean me, my company, you, your division, your teachers, all of us.

Page 25: Test Validity – Revisited Again

How Can We Tell?

• Christmas trees on answer sheets
• A B C D A B C D A B C D
• B B B B B B B B B B B B
• Getting finished too fast
• Talking, etc.
• Scores near the chance point (<30% correct; see the sketch after this list)
• Ask the students
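
A rough sketch of how two of these flags could be automated; the 30% chance point and the run-length threshold are illustrative choices, not fixed standards:

```python
# Hypothetical answer-sheet screening: flag chance-level scores and long
# runs of the same answer choice as signs a student wasn't really trying.
def flag_sheet(responses, key, chance=0.30, max_run=8):
    """Return reasons to suspect a student wasn't making a real effort."""
    reasons = []
    pct = sum(r == k for r, k in zip(responses, key)) / len(key)
    if pct < chance:
        reasons.append(f"score near chance ({pct:.0%} correct)")
    run = longest = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    if longest >= max_run:
        reasons.append(f"same answer {longest} times in a row")
    return reasons

key = list("ABCDABCDABCD")
print(flag_sheet(list("BBBBBBBBBBBB"), key))  # the all-Bs sheet above
```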

Page 26: Test Validity – Revisited Again

How Can We Tell #2?

• Distracter choices that don’t make sense.

• Response times for CBTs (a more sophisticated approach).

Let’s look at this last one, because it started me off on this tack.
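
Before the data, a simplified sketch in the spirit of the Wise and DeMars response-time work: treat any answer given faster than some per-item threshold as a rapid guess rather than a real attempt. The threshold and times below are invented:

```python
# Hypothetical response-time screen for CBTs: count an item as an
# effortful attempt only if the student spent at least `threshold`
# seconds on it; the rest look like rapid guesses.
def response_time_effort(times, threshold=5.0):
    """Fraction of items the student spent at least `threshold` seconds on."""
    return sum(t >= threshold for t in times) / len(times)

times = [42.0, 3.1, 38.5, 2.4, 55.0, 1.9]   # seconds spent on six items
print(f"effortful responses: {response_time_effort(times):.0%}")  # 50% here
```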

Page 27: Test Validity – Revisited Again

Response Times for Six Items

Page 28: Test Validity – Revisited Again

Implications

• Much of the clever psychometric work of the last 100 years is heavily contaminated by invalid data.

• Many students are only partially motivated.

• Some questions look hard, even if they aren’t.

• We may be able to use this data to better uncover what students don’t know.

• Now that we know this we can watch for it, especially with CBT.

Page 29: Test Validity – Revisited Again

What can we do about these motivational issues?

• Give properly targeted tests.
  – That means we can’t always give the same tests to all the students. Not a problem with formative assessment, because the tests are used not for evaluation but for learning. (Could we call this “Differentiated Assessment”?)

• Correct misunderstandings early, before they multiply.

• Encourage students to admit when they don’t know.

Page 30: Test Validity – Revisited Again

A Technique to Encourage Students to Admit When They Don’t Know

Try a different method of scoring: tell the students that they get 1 point for every answer they get right, 1/3 of a point for every answer they omit, and 0 points for every answer they get wrong. This removes the incentive for guessing, and it lets you know explicitly where students need help.
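
A minimal sketch of that scoring rule (1 for right, 1/3 for an omit, 0 for wrong); the answer key and responses are made up:

```python
# The scoring rule just described: full credit for a right answer,
# partial credit for an honest omission, nothing for a wrong answer.
def formula_score(responses, key):
    score = 0.0
    for r, k in zip(responses, key):
        if r is None:
            score += 1 / 3        # omitted item: partial credit, no guessing
        elif r == k:
            score += 1.0          # correct answer
    return score

key = ["A", "C", "B", "D", "A"]
responses = ["A", None, "B", "B", None]          # two right, two omits, one wrong
print(f"{formula_score(responses, key):.2f}")    # 2 + 2/3 = 2.67
```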

Discussion . . .

Page 31: Test Validity – Revisited Again

Comes Full Circle

So it comes full circle. Assessment is about learning. Assessment, used correctly, is really all FOR learning.

The test that follows this session is really a test of us.

Page 32: Test Validity – Revisited Again

Resources

Stiggins, R. (2008). Assessment Manifesto: A Call for the Development of Balanced Assessment Systems. Portland, OR: ETS Assessment Training Institute. [www.ets.org/ati]

Popham, W. J. (2008). All about Assessment: A Misunderstood Grail. Educational Leadership, 66(1), 82-83.

Popham, W. J. (2008). Transformative Assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Wise, S. L., & DeMars, C. E. Response-time studies at the Center for Assessment & Research Studies (CARS), JMU School of Education.

Page 33: Test Validity – Revisited Again

Discussion . . .

Contact David Mott
[email protected]
Markel Road, Suite 104
Richmond, VA 23230