Questionnaire Development: Measuring Validity & Reliability
James A. Pershing, Ph.D., Indiana University

Page 1

Questionnaire Development

Measuring Validity & Reliability

James A. Pershing, Ph.D., Indiana University

Page 2

Definition of Validity

The instrument measures what it is intended to measure:
- Appropriate
- Meaningful
- Useful

Enables a performance analyst or evaluator to draw correct conclusions

Page 3

Types of Validity
- Face
- Content
- Criterion
  - Concurrent
  - Predictive
- Construct

Page 4

Face Validity
- "It looks OK": the instrument looks to measure what it is supposed to measure
- Look at items for appropriateness
  - By the client
  - By sample respondents
- Least scientific validity measure: "Looks good to me"

Page 5

Content-Related Validity
- Organized review of the format and content of the instrument, by subject matter experts
- Comprehensiveness
  - Adequate number of questions per objective
  - No voids in content
- Balance
(Diagram: Definition, Sample, Content, Format)

Page 6

Criterion-Related Validity

Subject   Instrument A            Instrument B
          (Task Inventory)        (Observation Checklist)
John      yes                     no
Mary      no                      no
Lee       yes                     no
Pat       no                      no
Jim       yes                     yes
Scott     yes                     yes
Jill      no                      yes

- How one measure stacks up against another
- Independent sources that measure the same phenomena
- Concurrent = measured at the same time; Predictive = now and in the future
- Usually expressed as a correlation coefficient; seeking a high correlation (0.70 or higher is generally accepted as representing good validity)
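The correlation for the yes/no table above can be computed directly by coding yes = 1 and no = 0 and taking the Pearson correlation (equivalent to the phi coefficient for binary data). A minimal Python sketch using the seven subjects shown:

```python
# Criterion-related validity: correlate two instruments' yes/no responses.
# Data from the table above, coded yes = 1, no = 0 (John through Jill).
import math

task_inventory =        [1, 0, 1, 0, 1, 1, 0]  # Instrument A
observation_checklist = [0, 0, 0, 0, 1, 1, 1]  # Instrument B

def pearson_r(x, y):
    """Pearson correlation coefficient (phi coefficient for binary data)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

r = pearson_r(task_inventory, observation_checklist)
print(f"r = {r:.2f}")  # r = 0.17 -- well below 0.70, so poor criterion validity
```

For this sample data the two instruments agree only weakly (r ≈ 0.17), illustrating a case where criterion-related validity would be judged inadequate.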

Page 7

Construct-Related Validity
- A theory exists explaining how the concept being measured relates to other concepts
- Look for positive or negative correlation
- Often over time and in multiple settings
- Usually expressed as a correlation coefficient (0.70 or higher is generally accepted as representing good validity)

(Diagram: THEORY leading to Prediction 1, Prediction 2, Prediction 3, ... Prediction n, each confirmed)

Page 8

Definition of Reliability
- The degree to which measures obtained with an instrument are consistent measures of what the instrument is intended to measure
- Sources of error
  - Random error = unpredictable error, primarily affected by sampling techniques; reduce it by selecting more representative samples and larger samples
  - Measurement error = performance of the instrument

Page 9

Types of Reliability
- Test-Retest
- Equivalent Forms
- Internal Consistency
  - Split-Half Approach
  - Kuder-Richardson Approach
  - Cronbach Alpha Approach

Page 10

Test-Retest Reliability
- Administer the same instrument twice to the exact same group after a time interval has elapsed
- Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores
  - r of +.51 to +.75 = moderate to good
  - r over +.75 = very good to excellent
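The test-retest coefficient is an ordinary Pearson correlation between the two administrations' scores. A minimal sketch with hypothetical scores for five respondents, interpreted against the thresholds above:

```python
# Test-retest reliability: correlate scores from two administrations of the
# same instrument to the same group. Scores below are hypothetical.
import math

time1 = [10, 12, 14, 16, 18]  # first administration
time2 = [11, 13, 13, 17, 19]  # same respondents, after a time interval

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

r = pearson_r(time1, time2)
if r > 0.75:
    rating = "very good to excellent"
elif r > 0.51:
    rating = "moderate to good"
else:
    rating = "poor"
print(f"r = {r:.2f} ({rating})")  # r = 0.96 (very good to excellent)
```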

Page 11

Equivalent Forms Reliability
- Also called alternate or parallel forms
- Two instruments administered to the same group at the same time
- Vary:
  - Stem: order, wording
  - Response set: order, wording
- Calculate a reliability coefficient (r) to indicate the relationship between the two sets of scores
  - r of +.51 to +.75 = moderate to good
  - r over +.75 = very good to excellent

Page 12

Internal Consistency Reliability
- Split-Half
  - Break the instrument or its sub-parts in half, like two instruments
  - Correlate scores on the two halves
- Kuder-Richardson (KR)
  - Treats the instrument as a whole
  - Compares the variance of total scores with the sum of item variances
- Cronbach Alpha
  - Like the KR approach, but for data that are scaled or ranked
- Best to consult a statistics book and a consultant, and use computer software to do the calculations for these tests
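The KR-style comparison of total-score variance with summed item variances is the core of Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical data (five respondents rating four items on a 1-5 scale):

```python
# Cronbach's alpha from first principles.
# Hypothetical data: rows = 5 respondents, columns = 4 items on a 1-5 scale.

scores = [
    [3, 3, 4, 3],
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 2, 1],
]

def variance(values):
    """Population variance."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(rows):
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # transpose: columns = per-item scores
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")  # alpha = 0.98 for this made-up, highly consistent data
```

In practice, as the slide advises, a statistics package would be used for these calculations; the sketch only shows what the formula is doing.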

Page 13

Reliability and Validity

(Diagram: five bullseye targets)
1. So unreliable as to be invalid
2. Fair reliability and fair validity
3. Fair reliability but invalid
4. Good reliability but invalid
5. Good reliability and good validity

The bulls-eye in each target represents the information that is desired. Each dot represents a separate score obtained with the instrument. A dot in the bulls-eye indicates that the information obtained (the score) is the information the analyst or evaluator desires.

Page 14

Comments and Questions