
Reliability and Validity
Criteria for Good Measurement

Sushant Kumar Sinha
Sushovan Bej

Criteria for Good Measurement

“The use of a better instrument will ensure more accuracy in results, which, in turn, will enhance the scientific quality of the research.”

There are three major criteria for evaluating a measurement tool:

1. Validity
2. Reliability
3. Sensitivity

Validity

Validity is the ability of an instrument to measure what it is supposed to measure.

That is, when we ask questions in the hope of tapping a concept, how can we be reasonably certain that we are indeed measuring the concept we set out to measure and not something else?

Establishing Validity

Researchers have attempted to assess validity in many ways. They attempt to provide some evidence of a measure's degree of validity by answering a variety of questions.

The four basic approaches to establishing validity are widely classified as follows:

1. Face Validity
2. Content Validity
3. Criterion-Related Validity
4. Construct Validity

Face Validity

Face validity is considered a basic and very minimum index of content validity. It indicates that the items intended to measure a concept do, on the face of it, look like they measure the concept.

For example, few people would accept a measure of college students' math ability that asked: 2 + 2 = ? On the face of it, this is not a valid measure of college-level math ability.

Nevertheless, face validity is a subjective agreement among professionals that a scale logically appears to reflect accurately what it is supposed to measure. When it appears evident to experts that the measure provides adequate coverage of the concept, the measure has face validity.

Clear, understandable questions such as “How many children do you have?” are generally agreed to have face validity. But face validity becomes more difficult to assess for more complicated business phenomena.

For instance, consider the concept of customer loyalty. Does the statement “I prefer to purchase my groceries at Delavan Fine Foods” appear to capture loyalty? How about “I am very satisfied with my purchases from Delavan Fine Foods”? What about “Delavan Fine Foods offers very good value”? While the first statement appears to capture loyalty, it can be argued that the second captures not loyalty but satisfaction. What does the third statement reflect? Does it look like a loyalty statement?

Content Validity

The content validity of a measuring instrument is the extent to which it provides adequate coverage of the investigative questions guiding the study. If the instrument contains a representative sample of the universe of subject matter of interest, its content validity is good.

To put it differently, content validity is a function of how well the dimensions and elements of a concept have been delineated.

Consider the concept of feminism, which implies a person's commitment to a set of beliefs creating full equality between men and women in the arts, intellectual pursuits, family, work, politics, and authority relations. Does this definition provide adequate coverage of the different dimensions of the concept?

Suppose we then use the following two questions to measure feminism:

1. Should men and women get equal pay for equal work?
2. Should men and women share household tasks?

These two questions do not cover all the dimensions delineated earlier, so the measure falls short of adequate content validity for measuring feminism.

Criterion-Related Validity

Criterion validity uses some standard or criterion to indicate a construct accurately. The validity of an indicator is verified by comparing it with another measure of the same construct in which the researcher has confidence.

There are two subtypes of this kind of validity:
1. Concurrent Validity
2. Predictive Validity

Concurrent Validity

To have concurrent validity, an indicator must be associated with a preexisting indicator that is judged to be valid.

For example, suppose we create a new test to measure intelligence. For it to be concurrently valid, it should be highly associated with existing IQ tests (assuming the same definition of intelligence is used): most people who score high on the old measure should also score high on the new one, and vice versa.

The two measures may not be perfectly associated, but if they measure the same or a similar construct, it is logical for them to yield similar results.
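As a rough illustration, concurrent validity is often checked with a simple correlation between the new measure and the established one. The sketch below uses hypothetical scores (none of the numbers come from the source):

```python
import numpy as np

# Hypothetical scores: an established IQ test and a new intelligence test
# administered to the same ten respondents.
old_iq = np.array([95, 110, 102, 130, 88, 115, 99, 121, 105, 92])
new_iq = np.array([97, 108, 100, 127, 90, 118, 96, 124, 103, 95])

# Pearson correlation between the two measures; a value close to +1
# supports the new test's concurrent validity.
r = np.corrcoef(old_iq, new_iq)[0, 1]
print(f"Concurrent validity correlation: r = {r:.2f}")
```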

Predictive Validity

Criterion validity whereby an indicator predicts future events that are logically related to a construct is called predictive validity. It cannot be used for all measures: the measure and the action predicted must be distinct from, but indicate, the same construct. Predictive measurement validity should not be confused with prediction in hypothesis testing, where one variable predicts a different variable in the future.

For example, consider the scholastic assessment tests given to candidates seeking admission in different subjects. These are supposed to measure candidates' scholastic aptitude: their ability to perform in the institution as well as in the subject. If the test has high predictive validity, candidates who get high test scores will subsequently do well in their subjects. If students with high scores perform the same as students with average or low scores, the test has low predictive validity.
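A minimal sketch of that check, assuming hypothetical admission-test scores and later grades (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical admission-test scores and the same candidates' subsequent
# course grades (percentages) a year later.
test_scores = np.array([710, 520, 680, 450, 600, 480, 650, 590])
later_grades = np.array([88, 61, 84, 52, 70, 58, 79, 68])

# Predictive validity: do high scorers later outperform low scorers,
# and do scores correlate with subsequent performance?
high = later_grades[test_scores >= np.median(test_scores)]
low = later_grades[test_scores < np.median(test_scores)]
r = np.corrcoef(test_scores, later_grades)[0, 1]
print(f"mean grade: high scorers {high.mean():.1f}, low scorers {low.mean():.1f}")
print(f"score-grade correlation: r = {r:.2f}")
```

If the two group means were about equal and r were near zero, the test would have low predictive validity.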

Construct Validity

Construct validity is for measures with multiple indicators.

It addresses the question: if the measure is valid, do the various indicators operate in a consistent manner? It requires a definition with clearly specified conceptual boundaries. To evaluate construct validity, we consider both the theory and the measuring instrument being used.

There are two subtypes of this kind of validity:
1. Convergent Validity
2. Discriminant Validity

Convergent Validity

Convergent validity means that multiple measures of the same construct hang together or operate in similar ways.

For example, suppose we measure the construct "education" by asking people how much education they have completed, looking at their institutional records, and asking them to complete a test of school-level knowledge. If the measures do not converge (e.g., people claim to have a college degree but have no record of attending college, or those with college degrees perform no better than high-school dropouts on the test), then our measure has weak convergent validity and we should not combine all three indicators into one measure.
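One way to inspect convergence is a correlation matrix across the indicators. The sketch below assumes three hypothetical indicators of education for eight people (the variable names and data are illustrative, not from the source):

```python
import numpy as np

# Hypothetical indicators of the construct "education" for eight people:
# self-reported years of schooling, years found in institutional records,
# and score on a school-level knowledge test.
self_report = np.array([12, 16, 10, 18, 14, 12, 16, 11])
records = np.array([12, 16, 9, 17, 14, 11, 16, 10])
test_score = np.array([61, 82, 48, 90, 70, 58, 85, 50])

# Pairwise correlations among the three indicators; if they all converge
# (high positive correlations), combining them into one measure is defensible.
indicators = np.vstack([self_report, records, test_score])
print(np.round(np.corrcoef(indicators), 2))
```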

Discriminant Validity

Also called divergent validity, discriminant validity is the opposite of convergent validity. It means that the indicators of one construct hang together or converge, yet also diverge from, or are negatively associated with, opposing constructs. If two constructs A and B are very different, then measures of A and B should not be associated.

For example, suppose we have 10 items that measure political conservatism, and people answer all 10 in similar ways. We have also put 5 questions in the same questionnaire that measure political liberalism. Our measure of conservatism has discriminant validity if the 10 conservatism items hang together and are negatively associated with the 5 liberalism items.
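A small simulation makes both halves of that check concrete. Everything below is invented: a latent trait drives the 10 conservatism items positively and the 5 liberalism items negatively, and we then verify that the conservatism items cohere while the two scales diverge:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert responses for 100 respondents: a latent
# conservatism trait raises the conservatism items and lowers the
# liberalism items.
trait = rng.normal(0, 1, 100)
conservatism = np.clip(np.round(3 + trait[:, None] + rng.normal(0, 0.7, (100, 10))), 1, 5)
liberalism = np.clip(np.round(3 - trait[:, None] + rng.normal(0, 0.7, (100, 5))), 1, 5)

# Discriminant validity: conservatism items hang together (high mean
# inter-item r), and the conservatism total correlates negatively with
# the liberalism total.
inter_item = np.corrcoef(conservatism.T)
mean_r = inter_item[np.triu_indices(10, k=1)].mean()
cross_r = np.corrcoef(conservatism.sum(1), liberalism.sum(1))[0, 1]
print(f"mean inter-item r (conservatism): {mean_r:.2f}")
print(f"conservatism vs. liberalism: r = {cross_r:.2f}")
```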

Reliability

The reliability of a measure indicates the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument.

In other words, the reliability of a measure is an indication of the stability and consistency with which the instrument measures the concept, and it helps to assess the “goodness” of a measure.

Stability

The ability of a measure to remain the same over time, despite uncontrollable testing conditions or the state of the respondents themselves, is indicative of its stability and low vulnerability to changes in the situation.

This attests to its "goodness" because the concept is stably measured, no matter when it is done.

Two tests of stability are test-retest reliability and parallel-form reliability.

Test-Retest Reliability

The test-retest method of determining reliability involves administering the same scale to the same respondents at two separate times to test for stability. If the measure is stable over time, the test, administered under the same conditions each time, should obtain similar results.

For example, suppose a researcher measures job satisfaction and finds that 64 percent of the population is satisfied with their jobs. If the study is repeated a few weeks later under similar conditions, and the researcher again finds that 64 percent of the population is satisfied with their jobs, it appears that the measure has repeatability.

A high correlation or consistency between the measures at time 1 and at time 2 indicates a high degree of reliability. This was at the aggregate level; the same exercise can be applied at the individual level. When the measuring instrument produces unpredictable results from one testing to the next, the results are said to be unreliable because of error in measurement.
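At the individual level, the stability coefficient is simply the correlation between the two administrations. A minimal sketch with hypothetical satisfaction scores (the data are invented):

```python
import numpy as np

# Hypothetical job-satisfaction scores (0-100) for the same ten employees
# measured at time 1 and again a few weeks later at time 2.
time1 = np.array([64, 72, 55, 81, 60, 47, 90, 66, 58, 75])
time2 = np.array([66, 70, 57, 79, 62, 50, 88, 64, 60, 73])

# The test-retest (stability) coefficient is the correlation between the
# two administrations; values near +1 indicate a stable measure.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")
```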

Parallel-Form Reliability

When responses on two comparable sets of measures tapping the same construct are highly correlated, we have parallel-form reliability. It is also called equivalent-form reliability.

Both forms have similar items and the same response format; the only changes are the wording and the order or sequence of the questions. What we try to establish here is the error variance resulting from the wording and ordering of the questions. If two such comparable forms are highly correlated, we may be fairly certain that the measures are reasonably reliable, with minimal error variance caused by wording, ordering, or other factors.
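The computation mirrors test-retest reliability, except the correlation is taken between the two forms rather than two points in time. A sketch with hypothetical form totals (names and numbers are illustrative):

```python
from scipy.stats import pearsonr

# Hypothetical totals from two comparable forms of the same scale,
# differing only in wording and question order, answered by ten people.
form_a = [34, 28, 41, 22, 37, 30, 45, 26, 39, 33]
form_b = [33, 30, 40, 24, 36, 29, 44, 27, 38, 34]

# A high correlation between the forms indicates parallel-form
# (equivalent-form) reliability, with little error variance from
# wording or ordering.
r, p = pearsonr(form_a, form_b)
print(f"Parallel-form reliability: r = {r:.2f} (p = {p:.3f})")
```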

Internal Consistency of Measures

The internal consistency of a measure is indicative of the homogeneity of the items in the measure that tap the construct.

In other words, the items should 'hang together as a set' and be capable of independently measuring the same concept, so that respondents attach the same overall meaning to each of the items. This can be seen by examining whether the items and subsets of items in the measuring instrument are highly correlated. Consistency can be examined through inter-item consistency reliability and split-half reliability.

Inter-Item Consistency Reliability

This is a test of the consistency of respondents' answers to all the items in a measure. To the degree that items are independent measures of the same concept, they will be correlated with one another.
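The statistic most commonly used for inter-item consistency is Cronbach's coefficient alpha (not named above, but standard for this purpose). A minimal sketch, assuming hypothetical Likert responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses of six people to four items that are
# meant to tap the same concept.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```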

Split-Half Reliability

Split-half reliability reflects the correlation between two halves of an instrument. The estimates can vary depending on how the items in the measure are split into two halves.

The technique of splitting halves is the most basic method for checking internal consistency when measures contain a large number of items. In the split-half method, the researcher takes the results obtained from one half of the scale items (e.g., odd-numbered items) and checks them against the results from the other half (e.g., even-numbered items). A high correlation tells us there is similarity (or homogeneity) among the items.
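A sketch of the odd/even split on hypothetical data, finished with the Spearman-Brown correction, a standard step (not discussed above) that projects the half-test correlation onto the full-length scale:

```python
import numpy as np

# Hypothetical 1-5 Likert responses of six people to an eight-item scale.
responses = np.array([
    [4, 5, 4, 4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4, 4, 5, 4],
    [1, 2, 1, 2, 2, 1, 2, 1],
])

# Split into odd- and even-numbered items and total each half.
odd_half = responses[:, 0::2].sum(axis=1)
even_half = responses[:, 1::2].sum(axis=1)

# Correlate the halves, then apply the Spearman-Brown correction to
# estimate the reliability of the full-length scale.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown full-test = {r_full:.2f}")
```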

Reliability vs Validity

Reliability is a necessary but not a sufficient condition for validity. A reliable scale may not be valid.

For example, a purchase-intention measurement technique may consistently indicate that 20 percent of those sampled are willing to purchase a new product. Whether the measure is valid depends on whether 20 percent of the population indeed purchases the product. A reliable but invalid instrument will yield consistently inaccurate results.
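A small simulation makes the point concrete. It assumes (purely for illustration) a true purchase rate of 35 percent and a biased question that elicits "yes" from only about 20 percent of respondents:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed true purchase rate vs. the rate a biased question elicits.
true_rate, biased_rate, n = 0.35, 0.20, 1000

# Repeating the biased survey five times gives very similar (reliable)
# estimates, yet each one is far from the true rate (invalid).
for wave in range(1, 6):
    estimate = rng.binomial(n, biased_rate) / n
    print(f"wave {wave}: measured {estimate:.1%} vs. true {true_rate:.0%}")
```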