4. Evaluation of measuring tools: item analysis Psychometrics. 2011/12. Group A (English)


Page 1: 4.  Evaluation of measuring tools: item analysis

4. Evaluation of measuring tools: item analysis

Psychometrics. 2011/12. Group A (English)

Page 2: 4.  Evaluation of measuring tools: item analysis

Introduction

• Items can adopt different formats and assess cognitive variables (skills, performance, etc.) where there are right and wrong answers, or non-cognitive variables (attitudes, interests, values, etc.) where there are no right answers.

• The statistics that we present are used primarily with attitudinal or performance items.

Page 3: 4.  Evaluation of measuring tools: item analysis

• To carry out item analysis, the following must be available:
– A data matrix with the subjects' responses to each item.

• To analyze test scores and the responses to the correct alternative, the matrix takes the form of ones (hits) and zeros (mistakes).

• To analyze the incorrect alternatives, the matrix must record the specific option selected by each subject.

• The analysis of the correct alternative (which offers the most information about the quality of the test) allows us to obtain the indices of difficulty, discrimination, reliability and validity of the item.

Page 4: 4.  Evaluation of measuring tools: item analysis

• Empirical difficulty of an item: proportion of subjects who answer it correctly.

• Discriminative power: the ability of the item to distinguish between subjects with different levels of the measured trait.

• Both statistics are directly related to the mean and variance of total test scores.

• The reliability and validity of the items are related to the standard deviation of the test and indicate the possible contribution of each item to the reliability and validity of total scores of the test.

Page 5: 4.  Evaluation of measuring tools: item analysis

Items difficulty

Page 6: 4.  Evaluation of measuring tools: item analysis

• Proportion of subjects who have responded correctly to the item:
– One of the most popular indices to quantify the difficulty of dichotomous or dichotomized items.

• The difficulty thus considered is relative because it depends on:
– The number of people who attempt to answer the item.
– Their characteristics.

ID = A / N

A: number of subjects who hit the item.
N: number of subjects who attempt to answer the item.

• It ranges between 0 and 1.
– 0: No subject has hit the item. It is difficult.
– 1: All subjects have answered the item correctly. It is easy.

Page 7: 4.  Evaluation of measuring tools: item analysis

• Example: A performance item in mathematics is applied to 10 subjects, with the following result:

Subject  a b c d e f g h i j
Answers  1 1 1 1 0 1 0 1 1 0

ID = 7 / 10 = 0.70

• The obtained value does not indicate whether the item is good or bad. It represents how hard the item has been for the sample of subjects who attempted to answer it.

Page 8: 4.  Evaluation of measuring tools: item analysis

• The ID is directly related to the mean and variance of the test. In dichotomous items:

– The sum of all scores obtained by the subjects on the item equals the number of hits. Therefore, the item difficulty index is equal to its mean:

ID_j = Σ X_j / N = X̄_j

X_j: 1 or 0 according to success or failure on the item, so Σ X_j = A (hits).

• If we generalize to the total test, the mean of the test scores is equal to the sum of the difficulty indices of the items.

Page 9: 4.  Evaluation of measuring tools: item analysis

• The relationship between difficulty and the variance of the test is also direct. In dichotomous items:

S_j² = p_j · q_j

p_j: proportion of subjects who answer the item correctly (the ID).
q_j = 1 − p_j: proportion of subjects who fail it.

• In item analysis, a relevant question is to find the value of p_j that maximizes the variance of the items.
– Maximum variance is achieved by an item when its p_j is 0.5.
– An item is appropriate when it is attempted by different subjects and elicits different answers from them.

Page 10: 4.  Evaluation of measuring tools: item analysis

Correction of hits by chance
• Hitting an item depends not only on whether subjects know the answer, but also on the luck of subjects who, without knowing it, choose the correct alternative.

• The higher the number of distractors, the less likely it is that subjects hit the item randomly.

• It is advisable to correct the ID:

ID_c = (A − E/(K−1)) / N = p − q/(K−1)

A = hits
E = mistakes
p = proportion of hits
q = proportion of mistakes
K = number of item alternatives
N = number of subjects who attempt to answer the item

Page 11: 4.  Evaluation of measuring tools: item analysis

Example. Test composed by items with 3 alternatives

Subjects Item 1 Item 2 Item 3 Item 4 Item 5

A 1 1 1 1 1

B 1 0 1 0 1

C 1 1 0 1 0

D 1 0 0 1 0

E 0 1 0 1 1

F 1 0 0 1 0

G 0 1 1 1 0

H 1 0 0 1 0

I 1 1 0 0 0

J 0 0 0 1 1

ID

IDc

Calculate ID and IDc for each item.

Page 12: 4.  Evaluation of measuring tools: item analysis

Results

Subjects Item 1 Item 2 Item 3 Item 4 Item 5

A 1 1 1 1 1

B 1 0 1 0 1

C 1 1 0 1 0

D 1 0 0 1 0

E 0 1 0 1 1

F 1 0 0 1 0

G 0 1 1 1 0

H 1 0 0 1 0

I 1 1 0 0 0

J 0 0 0 1 1

ID 0.7 0.5 0.3 0.9 0.4

IDc 0.55 0.25 −0.05 0.85 0.1

Items that have undergone the largest correction are those that proved most difficult.

Page 13: 4.  Evaluation of measuring tools: item analysis

Recommendations
• Items with extreme difficulty-index values for the population they target should be eliminated from the final test.

• In aptitude tests we will get better psychometric results if the majority of items are of medium difficulty.
– Easy items should be included, preferably at the beginning (to measure less competent subjects).
– And difficult items too (to measure more competent subjects).

Page 14: 4.  Evaluation of measuring tools: item analysis

Discrimination

Page 15: 4.  Evaluation of measuring tools: item analysis

• Logic: Given an item, subjects with good test scores have been successful in a higher proportion than those with low scores.

• If an item is not useful to differentiate between subjects based on their skill level (does not discriminate between subjects) it should be deleted.

Page 16: 4.  Evaluation of measuring tools: item analysis

Index of discrimination based on extreme groups (D)

• It is based on the proportions of hits in the extreme skill groups (the 25% or 27% above and below, taken from the total sample).
– The top 25% or 27% is formed by the subjects who scored above the 75th or 73rd percentile in the total test.

• Once the groups are formed, we calculate the proportion of correct answers to a given item in both groups and apply the following equation:

D = p_s − p_i

p_s: proportion of hits in the upper group.
p_i: proportion of hits in the lower group.

Page 17: 4.  Evaluation of measuring tools: item analysis

• The D index ranges between −1 and 1.
– 1 = all people in the upper group hit the item and all people in the lower group fail it.
– 0 = the item is hit equally in both groups.
– Negative values = less competent subjects hit the item more than the most competent ones (the item confuses the more skilled subjects).

Page 18: 4.  Evaluation of measuring tools: item analysis

Example
• Answers given by 370 subjects to the 3 alternatives (A, B, C) of an item in which B is the correct option. Rows show the frequencies of subjects who scored in the top 27%, the central 46%, and the bottom 27% of the sample in the total test.

• Calculate the corrected index of difficulty and the discrimination index.

A B* C

Upper 27% 19 53 28

Intermediate 46% 52 70 48

Lower 27% 65 19 16

Page 19: 4.  Evaluation of measuring tools: item analysis

Result

• Proportion of correct answers: p = (53 + 70 + 19)/370 = 0.38
• Proportion of mistakes: q = 228/370 = 0.62

ID_c = p − q/(k − 1) = 0.38 − 0.62/(3 − 1) = 0.07

D = p_s − p_i = 53/100 − 19/100 = 0.34

(Each extreme group contains 100 subjects: 19 + 53 + 28 = 65 + 19 + 16 = 100.)
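A Python sketch of the same computation, using the frequencies from the table (note the slide's 0.07 comes from rounding p and q to two decimals first; at full precision ID_c ≈ 0.076):

```python
# Corrected difficulty and extreme-group discrimination for the
# 370-subject item (B is the correct alternative, k = 3).
upper = {"A": 19, "B": 53, "C": 28}   # top 27% of the sample
middle = {"A": 52, "B": 70, "C": 48}  # central 46%
lower = {"A": 65, "B": 19, "C": 16}   # bottom 27%
k = 3

N = sum(upper.values()) + sum(middle.values()) + sum(lower.values())
hits = upper["B"] + middle["B"] + lower["B"]
p = hits / N                          # 142/370
q = 1 - p
id_c = p - q / (k - 1)                # corrected difficulty index

# D uses only the two extreme groups.
d = upper["B"] / sum(upper.values()) - lower["B"] / sum(lower.values())
print(round(id_c, 3), round(d, 2))
```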

Page 20: 4.  Evaluation of measuring tools: item analysis

Interpretation of D values (Ebel, 1965)

Values Interpretation

D ≥ 0.40 The item discriminates very well

0.30 ≤ D ≤ 0.39 The item discriminates well

0.20 ≤ D ≤ 0.29 The item discriminates slightly

0.10 ≤ D ≤ 0.19 The item needs revision

D < 0.10 The item is useless

Page 21: 4.  Evaluation of measuring tools: item analysis

Indices of discrimination based on the correlation

• If an item discriminates adequately, the correlation between the scores obtained by subjects on that item and those obtained on the total test will be positive.
– Subjects who score high on the test are more likely to hit the item.

• Definition: correlation between the subjects' scores on the item and their scores on the test (Muñiz, 2003).

• The total score of each subject on the test is calculated discounting the item score.

Page 22: 4.  Evaluation of measuring tools: item analysis

1. Correlation coefficient Φ
• When the test score and the item score are strictly dichotomous.

• It allows us to estimate the discrimination of an item against some criterion of interest (e.g. fit/unfit, sex, etc.).

• First, we have to sort the data into a 2x2 contingency table.
– 1 = item is hit / criterion is passed.
– 0 = item is failed / criterion is not passed.

Φ = (p_xy − p_x · p_y) / √(p_x · q_x · p_y · q_y)

Page 23: 4.  Evaluation of measuring tools: item analysis

Example
• The following table shows the sorted results of 50 subjects who took the last psychometrics exam.

                 Item 5 (X)
Criterion (Y)    1    0
Fit              30   5     p_y = 35/50 = 0.7
Not fit          5    10    q_y = 15/50 = 0.3
                 p_x = 35/50 = 0.7    q_x = 15/50 = 0.3    N = 50

p_xy = 30/50 = 0.6

Page 24: 4.  Evaluation of measuring tools: item analysis

Result

• There is a high correlation between the item and the criterion. That is, subjects who hit the item usually pass the psychometrics exam.

Φ = (0.6 − 0.7·0.7) / √(0.7·0.3·0.7·0.3) = 0.11/0.21 = 0.52
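The phi coefficient for this table can be verified with a few lines of Python (cell labels are ours; the counts are those of the example):

```python
import math

# Phi coefficient for the 2x2 table: rows = criterion (fit / not fit),
# columns = item 5 (hit = 1 / fail = 0).
table = {("fit", 1): 30, ("fit", 0): 5, ("notfit", 1): 5, ("notfit", 0): 10}
N = sum(table.values())

p_xy = table[("fit", 1)] / N                           # hit item and passed
p_x = (table[("fit", 1)] + table[("notfit", 1)]) / N   # hit the item
p_y = (table[("fit", 1)] + table[("fit", 0)]) / N      # passed the criterion
phi = (p_xy - p_x * p_y) / math.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))
print(round(phi, 2))
```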

Page 25: 4.  Evaluation of measuring tools: item analysis

2. Point-biserial correlation
• When the item is a dichotomous variable and the test score is continuous.

• Remove the item score from the test score.

r_bp = ((X̄_A − X̄_T) / S_x) · √(p/q)

X̄_A = test mean of the subjects who hit the item
X̄_T = test mean
S_x = test standard deviation
p = proportion of subjects who hit the item
q = proportion of subjects who fail the item

Page 26: 4.  Evaluation of measuring tools: item analysis

Example
• The following table shows the responses of 5 subjects to 4 items. Calculate the point-biserial correlation of the second item.

Items Total

Subjects 1 2 3 4 X (X-i)

A 0 1 0 1 2 1

B 1 1 0 1 3 2

C 1 1 1 1 4 3

D 0 0 0 1 1 1

E 1 1 1 0 3 2

Page 27: 4.  Evaluation of measuring tools: item analysis

Result
• Subjects who have hit the item are A, B, C and E, so their mean is:
– (1 + 2 + 3 + 2)/4 = 2

• The total mean is:
– (1 + 2 + 3 + 1 + 2)/5 = 1.8

• The standard deviation of the test scores is:
– √((1² + 2² + 3² + 1² + 2²)/5 − 1.8²) = √0.56 = 0.75

• The proportion of subjects who have hit item 2 is 4/5 = 0.8, and the proportion who have failed it is 1/5 = 0.2.

r_bp = ((2 − 1.8)/0.75) · √(0.8/0.2) = 0.53
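The same computation in Python, working on the X − i column of the table (variable names are ours):

```python
import math

# Point-biserial discrimination of item 2 against the test score with
# the item removed (the X - i column on the previous slide).
item2 = [1, 1, 1, 0, 1]     # subjects A..E
rest = [1, 2, 3, 1, 2]      # total score minus item 2

n = len(item2)
p = sum(item2) / n
mean_hit = sum(r for r, x in zip(rest, item2) if x == 1) / sum(item2)
mean_all = sum(rest) / n
sd = math.sqrt(sum(r * r for r in rest) / n - mean_all ** 2)
r_bp = (mean_hit - mean_all) / sd * math.sqrt(p / (1 - p))
print(round(r_bp, 2))
```

Small differences with hand-rounded slide values (e.g. using 0.75 instead of √0.56 = 0.7483) are due to intermediate rounding.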

Page 28: 4.  Evaluation of measuring tools: item analysis

3. Biserial correlation
• When both the item and the test score are inherently continuous variables, although one has been dichotomized.

• We can find values greater than 1, especially when one of the variables is not normal.

r_b = ((X̄_A − X̄_T) / S_x) · (p/y)

y: height of the normal curve corresponding to the typical score that leaves beneath it a probability equal to p (see table).

Page 29: 4.  Evaluation of measuring tools: item analysis

Example

• Biserial correlation of item 3.

• Because the value p = 0.4 does not appear in the first column of the table, we look for its complement (0.6), which is associated with an ordinate y = 0.3863.

r_b = ((2.5 − 1.8)/0.75) · (0.4/0.3863) = 0.97
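Instead of the printed table, the ordinate y can be obtained from the standard normal distribution: it is the density at the z-score below which a proportion 0.6 of the curve lies. A sketch with the standard library:

```python
from statistics import NormalDist

# Biserial correlation of item 3; y is the normal-curve ordinate at the
# quantile of p's complement (0.6), which the slide reads from a table.
p = 0.4
y = NormalDist().pdf(NormalDist().inv_cdf(1 - p))  # ~0.3863
r_b = (2.5 - 1.8) / 0.75 * (p / y)
print(round(y, 4), round(r_b, 2))
```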

Page 30: 4.  Evaluation of measuring tools: item analysis

Discrimination in attitude items
• There are no right or wrong answers; the subject must be placed on a continuum according to the degree of the measured attribute.

• Correlation between item scores and test scores.
– Because the items are not dichotomous: Pearson correlation coefficient.
• That coefficient can be interpreted as a homogeneity index (IH). It indicates the degree to which the item measures the same dimension or attitude as the rest of the items in the scale.

Page 31: 4.  Evaluation of measuring tools: item analysis

• Items whose IH is below 0.20 should be eliminated.
• Correction: deduct the item score from the total score, or apply the formula below:

R_jx = (N·ΣJX − ΣJ·ΣX) / √([N·ΣJ² − (ΣJ)²][N·ΣX² − (ΣX)²])

N = sample size
J = sum of the subjects' scores on item J
X = sum of the subjects' scores on the scale
R_jx = correlation between the scores obtained by the subjects on item J and on the scale

Corrected correlation, discounting item J from the total:

R_j(x−j) = (R_jx·S_x − S_j) / √(S_x² + S_j² − 2·R_jx·S_x·S_j)

Page 32: 4.  Evaluation of measuring tools: item analysis

• Other procedures:
– Useful but less efficient than the previous one, because it does not use the entire sample.
– Determine whether the item mean for the subjects with the highest scores on the total test is statistically higher than the mean for those with the lowest scores. It is common to use the 25% or 27% of subjects with the best and worst scores.

– Once the groups are identified, we test whether the mean difference is statistically significant with a Student's t test:

T = (X̄_sj − X̄_ij) / √( [((n_s − 1)·S²_sj + (n_i − 1)·S²_ij) / (n_s + n_i − 2)] · (1/n_s + 1/n_i) )

X̄_sj = item mean of the 25% of subjects who obtained the highest test scores.
X̄_ij = item mean of the 25% of subjects who obtained the lowest test scores.
S²_sj = item variance of the 25% of subjects who obtained the highest test scores.
S²_ij = item variance of the 25% of subjects who obtained the lowest test scores.
n_s and n_i = number of subjects in the higher and lower groups, respectively.

Ho: the means of both groups are equal.

Page 33: 4.  Evaluation of measuring tools: item analysis

Example
• Answers of 5 people to 4 attitude items. Calculate the discrimination of item 4 with the Pearson correlation and that of item 2 with Student's t test.

Subjects  X1  X2  X3  X4  XT  X4·XT  X4²  XT²
A         2   4   4   3   13  39     9    169
B         3   4   3   5   15  75     25   225
C         5   2   4   3   14  42     9    196
D         3   5   2   4   14  56     16   196
E         4   5   2   5   16  80     25   256
Σ                     20  72  292    84   1042

Page 34: 4.  Evaluation of measuring tools: item analysis

Solution
• The correlation or IH between item 4 and the total score of the test will be:

R_jx = (5·292 − 20·72) / √([5·84 − 20²][5·1042 − 72²]) = 0.88

• This result is inflated because the item 4 score is included in the total score. Correction:
– Standard deviations of item 4 and of the total score:

S_x4 = √((3² + 5² + 3² + 4² + 5²)/5 − 4²) = √0.80 = 0.89

S_xT = √((13² + 15² + 14² + 14² + 16²)/5 − 14.4²) = √1.04 = 1.02

R_j(x−j) = (0.88·1.02 − 0.89) / √(1.04 + 0.80 − 2·0.88·1.02·0.89) = 0.01
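A Python check of both values. At full precision the corrected correlation is essentially 0 (the 0.01 above comes from rounding the intermediate values to two decimals), which makes the point about inflation even more starkly:

```python
import math

# Item-total correlation for item 4 and its corrected (item-removed)
# version, computed at full precision.
x4 = [3, 5, 3, 4, 5]
xt = [13, 15, 14, 14, 16]
n = len(x4)

def mean(v): return sum(v) / len(v)
def sd(v): return math.sqrt(sum(a * a for a in v) / len(v) - mean(v) ** 2)

cov = sum(a * b for a, b in zip(x4, xt)) / n - mean(x4) * mean(xt)
r = cov / (sd(x4) * sd(xt))                  # inflated item-total r
r_corr = (r * sd(xt) - sd(x4)) / math.sqrt(
    sd(xt) ** 2 + sd(x4) ** 2 - 2 * r * sd(xt) * sd(x4)
)
print(round(r, 2), round(r_corr, 3))
```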

Page 35: 4.  Evaluation of measuring tools: item analysis

• The big difference after applying the correction is due to the small number of items used in the example.
– As the number of items increases, that effect decreases, because the influence of each item score on the total score becomes smaller. With more than 25 items, the corrected and uncorrected results are very close.

Page 36: 4.  Evaluation of measuring tools: item analysis

• To calculate the discrimination of item 2 with the Student's t test, we have to form groups with extreme scores. For didactic reasons, we are going to use just 2 subjects to form each group.

High group (XT)   X2
E (16)            5
B (15)            4
Mean: 4.5; S²: 0.25

Low group (XT)    X2
A (13)            4
C (14)            2
Mean: 3; S²: 1

T = (4.5 − 3) / √( [((2−1)·0.25 + (2−1)·1) / (2 + 2 − 2)] · (1/2 + 1/2) ) = 1.5/0.79 = 1.9

Page 37: 4.  Evaluation of measuring tools: item analysis

• Confidence level: 95%. 2 df (2 + 2 − 2). Critical t: 2.92.
• Since 1.9 < 2.92, we accept Ho: the item does not discriminate properly.
• To apply Student's t test, item scores and total scale scores should be normally distributed and have equal variances. If not, we have to use a nonparametric test.
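The pooled-variance t statistic for the two extreme groups can be sketched as follows (variances are computed with the n divisor, as in the slide's hand calculation):

```python
import math

# Pooled-variance t statistic for the item 2 extreme groups
# (high group: 5 and 4; low group: 4 and 2), compared with the
# critical value t(2 df, 95%) = 2.92.
high = [5, 4]
low = [4, 2]

def mean(v): return sum(v) / len(v)
def var(v):  # population variance, matching the slide's S² values
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

ns, ni = len(high), len(low)
pooled = ((ns - 1) * var(high) + (ni - 1) * var(low)) / (ns + ni - 2)
t = (mean(high) - mean(low)) / math.sqrt(pooled * (1 / ns + 1 / ni))
print(round(t, 1), t < 2.92)   # t below the critical value: accept Ho
```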

Page 38: 4.  Evaluation of measuring tools: item analysis

Factors that affect discrimination

Page 39: 4.  Evaluation of measuring tools: item analysis

Variability
• Relation between test variability and item discrimination:

S_x = Σ S_j · r_jx

S_x = standard deviation of the test
S_j = standard deviation of item j
r_jx = discrimination index of item j

• If the test is composed of dichotomous items:

S_x = Σ √(p_j·q_j) · r_jx ;  S_x² = (Σ √(p_j·q_j) · r_jx)²

• To maximize the discriminative ability of a test we have to consider jointly both the difficulty (p_j) and the discrimination (r_jx) of its items.
– The maximum is achieved when discrimination is maximum (r_jx = 1) and the difficulty is medium (p_j = 0.5).
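The identity S_x = Σ S_j·r_jx can be verified numerically, for example on the 5-subject, 4-item dichotomous matrix from the point-biserial example (here r_jx is the Pearson correlation of each item with the uncorrected total):

```python
import math

# Numerical check of S_x = sum_j S_j * r_jx on a small dichotomous matrix.
items = [
    [0, 1, 1, 0, 1],   # item 1, subjects A..E
    [1, 1, 1, 0, 1],   # item 2
    [0, 0, 1, 0, 1],   # item 3
    [1, 1, 1, 1, 0],   # item 4
]
totals = [sum(col) for col in zip(*items)]   # test score per subject

def mean(v): return sum(v) / len(v)
def sd(v): return math.sqrt(sum(a * a for a in v) / len(v) - mean(v) ** 2)

def pearson(a, b):
    cov = sum(x * y for x, y in zip(a, b)) / len(a) - mean(a) * mean(b)
    return cov / (sd(a) * sd(b))

rhs = sum(sd(j) * pearson(j, totals) for j in items)
print(round(sd(totals), 4), round(rhs, 4))   # both sides agree
```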

Page 40: 4.  Evaluation of measuring tools: item analysis

Item difficulty• An item reaches its maximum discriminative

power when its difficulty is medium.• To optimize the discrimination we must take into

account the item difficulty.

Page 41: 4.  Evaluation of measuring tools: item analysis

Dimensionality of the test

• Number of constructs that we are measuring.
• When a test is constructed, we normally try to make it measure one single construct (unidimensionality).

• In multidimensional tests, item discrimination should be estimated considering only the items that are associated with each dimension.

Page 42: 4.  Evaluation of measuring tools: item analysis

Test reliability
• If discrimination is defined as the correlation between the subjects' item scores and their test scores, then reliability and discrimination are closely related.
– It is possible to express the Cronbach alpha coefficient in terms of the discrimination of the items:

α = (n/(n − 1)) · (1 − ΣS_j²/S_x²) = (n/(n − 1)) · (1 − ΣS_j²/(Σ S_j·r_jx)²)

• Small values of item discrimination are typically associated with unreliable tests.

• As the mean discrimination of the test increases, so does the reliability coefficient.

Page 43: 4.  Evaluation of measuring tools: item analysis

Indices of item reliability and validity

Page 44: 4.  Evaluation of measuring tools: item analysis

Reliability index
• It quantifies the degree to which an item is accurately measuring the attribute of interest.

• When a correlation coefficient is used to calculate the discrimination of the items:

IF_j = S_j · r_jx

S_j = standard deviation of the scores on the item
r_jx = item discrimination index

• The squared sum of the items' IF values matches the variance of the subjects' scores on the test.

• To the extent that we select items with higher IF, the better the reliability of the test will be.

Page 45: 4.  Evaluation of measuring tools: item analysis

Validity index
• The validity of an item involves correlating the scores obtained by a sample of subjects on the item with the scores obtained by the same subjects on some external criterion of interest.
– It serves to determine how much each item of a test contributes to making successful predictions about that external criterion.

• When the criterion is a continuous variable and the item is dichotomous, we will use the point-biserial correlation; but here it is not necessary to subtract the item score from the criterion score, because it is not included in it.

• Test validity can be expressed as a function of the items' IV. The higher the items' IV, the better the validity of the test will be.

IV_j = S_j · r_jy

Page 46: 4.  Evaluation of measuring tools: item analysis

• The validity of the test can be estimated from the discrimination of each item (r_jx), its validity (r_jy) and its difficulty (S_j = √(p_j·q_j)):

r_xy = Σ S_j·r_jy / Σ S_j·r_jx = Σ IV_j / Σ IF_j

• Paradox in the selection of items: if we want to select items that maximize the reliability of the test we have to choose items with a high IF, but this would reduce the validity coefficient of the test, because validity increases when the IV are high and the IF are low.

Page 47: 4.  Evaluation of measuring tools: item analysis

Analysis of Distractors

Page 48: 4.  Evaluation of measuring tools: item analysis

• It involves investigating the distribution of subjects across the distractors, for example to detect possible reasons for the low discrimination of an item, or to see whether some alternatives are not selected by anyone.

• In this analysis, the first step is to:
– Check that all the incorrect options are chosen by a minimum number of subjects. If possible, they should be equiprobable.
• Criterion: each distractor should be selected by at least 10% of the sample, without much difference between them.

– Check that the test performance of the subjects who selected each incorrect alternative is lower than that of the subjects who selected the correct one.

– It is expected that as the subjects' skill level increases, the percentage who select incorrect alternatives decreases, and vice versa.

Page 49: 4.  Evaluation of measuring tools: item analysis

Equiprobability of distractors
• Distractors are equiprobable if they are selected by a minimum of subjects and if they are equally attractive to those who do not know the correct answer.

• X² test:

X² = Σ (FT − FO)² / FT

FT = theoretical frequencies
FO = observed frequencies

– Degrees of freedom: K − 1, where K is the number of incorrect alternatives.

– Ho: FT = FO (for subjects who do not know the correct answer, every distractor is equally attractive).

Page 50: 4.  Evaluation of measuring tools: item analysis

Example

• Determine if the incorrect alternatives are equally attractive.

A B* C

27% higher 19 53 28

46% intermediate 52 70 48

27% lower 65 19 16

Total 136 --- 92

Page 51: 4.  Evaluation of measuring tools: item analysis

Solution
• FT: (136 + 92)/2 = 114
– Under Ho, each distractor should be selected by 114 subjects.

• FO is given in the last row of the table.

X² = (114 − 136)²/114 + (114 − 92)²/114 = 968/114 = 8.49

• X² table, for 1 df and 95% CL: 3.84.
• Since 8.49 > 3.84, the incorrect alternatives are not equally attractive to all subjects, even though each is selected by at least 10% of the sample.
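The chi-square computation in Python, using the distractor totals from the table:

```python
# Chi-square test of distractor equiprobability: 136 subjects chose
# alternative A and 92 chose C (B is the correct option).
observed = {"A": 136, "C": 92}
ft = sum(observed.values()) / len(observed)   # 114 expected per distractor

chi2 = sum((ft - fo) ** 2 / ft for fo in observed.values())
critical = 3.84                               # chi2 for 1 df at the 95% level
print(round(chi2, 2), chi2 > critical)        # distractors are not equiprobable
```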

Page 52: 4.  Evaluation of measuring tools: item analysis

Discriminative power of distractors
• A good distractor is expected to correlate negatively with test scores.
• To quantify the discriminative power of the incorrect alternatives we use correlation. Depending on the kind of variables, we will use the biserial, point-biserial, phi or Pearson coefficient.

Page 53: 4.  Evaluation of measuring tools: item analysis

Example
• Answers of 5 subjects to 4 items. Brackets show the alternative selected by each subject, and the asterisk marks the correct alternative. Calculate the discrimination of distractor b in item 3.

Items Total

Subjects 1(a*) 2(b*) 3(a*) 4(c*) X (X-i)

A 0 (b) 1 0 (b) 1 2 2

B 1 1 0 (b) 1 3 3

C 1 1 1 1 4 3

D 0 (c) 0 (a) 0 (b) 1 1 1

E 1 1 1 0 (b) 3 2

Page 54: 4.  Evaluation of measuring tools: item analysis

Solution
• Subjects who selected alternative b (an incorrect one) in item 3 are A, B and D. Their mean on the test, after eliminating the score of the analyzed item, is:

X̄_A = (2 + 3 + 1)/3 = 2

• Total mean of the test, subtracting the item 3 score from each subject's score:

X̄_(T−i) = (2 + 3 + 3 + 1 + 2)/5 = 2.2

• Standard deviation of the (X − i) scores:

S_(T−i) = √((2² + 3² + 3² + 1² + 2²)/5 − 2.2²) = √0.56 = 0.75

• The proportion of subjects who hit the item is 2/5 = 0.4, and the proportion who failed it is 3/5 = 0.6.

Page 55: 4.  Evaluation of measuring tools: item analysis

• The point-biserial correlation between the incorrect alternative "b" and the test scores, discounting the item score, is:

r_bp = ((2 − 2.2)/0.75) · √(0.4/0.6) = −0.22

– Since the subjects who chose the incorrect alternative score 0 on the item, there is no need to remove anything from their total test scores.

• The distractor discriminates in the opposite direction to the correct alternative. It is a good distractor.
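A Python sketch of the distractor's point-biserial correlation, using the X − i column from the table:

```python
import math

# Point-biserial correlation of distractor "b" in item 3 with the test
# score minus the item (X - i column: subjects A..E = 2, 3, 3, 1, 2).
chose_b = [1, 1, 0, 1, 0]   # A, B and D selected distractor b
item_hit = [0, 0, 1, 0, 1]  # C and E hit item 3
rest = [2, 3, 3, 1, 2]

n = len(rest)
mean_b = sum(r for r, c in zip(rest, chose_b) if c) / sum(chose_b)
mean_all = sum(rest) / n
sd = math.sqrt(sum(r * r for r in rest) / n - mean_all ** 2)
p = sum(item_hit) / n       # proportion who hit the item
r_bp = (mean_b - mean_all) / sd * math.sqrt(p / (1 - p))
print(round(r_bp, 2))       # negative: the distractor works as intended
```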

Page 56: 4.  Evaluation of measuring tools: item analysis

• Visual inspection of the distribution of the subjects' answers across the various alternatives:

– Proportion of subjects who selected each option: p
– Test mean of the subjects who selected each alternative: Mean
– Discrimination index of all the options: rbp

                    A      B      C*
Skill level  High   20     25     55
             Low    40     35     25
Statistics   p      0.28   0.5    0.22
             Mean   5      10     9
             rbp    −0.20  0.18   0.29

Page 57: 4.  Evaluation of measuring tools: item analysis

• Positive discrimination index: the correct alternative is mostly chosen by the competent subjects.

• Distractor A:
– Has been selected by an acceptable minimum of subjects (28%) and is selected in a higher proportion by the less competent subjects.

– The test mean of the subjects who selected it is lower than the test mean of those who selected the correct alternative (consistent with its negative discrimination index).

Page 58: 4.  Evaluation of measuring tools: item analysis

• Distractor B should be revised:
– It is chosen as correct by the subjects with the best test scores.
– It has been the most selected option (50%), its discrimination is positive, and the mean of the subjects who selected it is higher than that of the subjects who chose the correct alternative.

Page 59: 4.  Evaluation of measuring tools: item analysis

• In distractor analysis we can go further and use statistical inference. The test mean of the subjects who choose the correct alternative should be higher than the test mean of the subjects who choose each distractor.
– ANOVA. IV or factor: the item, with as many levels as answer alternatives. DV: the subjects' raw score on the test.