
ON THE EFFICACY OF A COMPUTER-BASED PROGRAM TO TEACH VISUAL BRAILLE READING

MINDY C. SCHEITHAUER

LOUISIANA STATE UNIVERSITY

JEFFREY H. TIGER

UNIVERSITY OF WISCONSIN–MILWAUKEE

AND

SARAH J. MILLER

LOUISIANA STATE UNIVERSITY

Scheithauer and Tiger (2012) created an efficient computerized program that taught 4 sighted college students to select text letters when presented with visual depictions of braille alphabetic characters and resulted in the emergence of some braille reading. The current study extended these results to a larger sample (n = 81) and compared the efficacy and efficiency of the instructional program using 2 different response modalities. One variation of the program required a response in a multiple-choice format, and the other variation required a keyed response. Both instructional programs resulted in increased braille letter identification and braille reading. These skills were maintained at a follow-up session 7 to 14 days later. The mean time needed to complete the program was 22.8 min across participants. Implications of these results for future research, as well as practical implications for teaching the braille alphabet, are discussed.

Key words: braille, computer-based instruction, maintenance, matching to sample, multiple choice, reading, selection, teacher training, undergraduates, visual impairment

More than 29,000 students between the ages of 3 and 21 years, with visual impairment as their predominant disability, received special education services under the Individuals with Disabilities Education Act (IDEA) during the 2008–2009 school year (U.S. Department of Education, 2011). IDEA requires each child with visual impairments to have an individualized education plan with a consideration of teaching braille reading and writing. Despite the mandates from IDEA, the availability of braille instruction for children with visual impairments in schools has been limited for quite some time, and it appears to have been trending down for several years (Braille Institute, 2010; National Federation of the Blind, 2009). Qualifications to be a braille instructor are based on the National Certification in Literary Braille (NCLB; Bell, 2010), with some variation across states. Approximately 140 individuals in the United States currently have the NCLB (National Blindness Professional Certification Board [NBPCB], 2012). With this insufficient number of qualified braille instructors for the approximately 29,000 visually impaired students, the responsibility of promoting literacy with visually impaired children has largely fallen on general education teachers with limited braille exposure. Teachers without familiarity with braille may rely more heavily on alternative technologies.

Even in cases in which technology can produce appropriate braille materials for a teacher (e.g., some programs have the capability of scanning text material and printing braille equivalents), these technologies fail to provide a substitute for a teacher who can provide immediate prompting and feedback during braille instruction (Kelly & Smith, 2011). Therefore, general education teachers charged with the responsibility of braille instruction may benefit from learning to read braille. Unfortunately, there is a lack of empirically validated procedures to teach these teachers the basic skills needed for early braille instruction.

Unlike visually impaired students who read braille tactually, teachers can be taught to read braille depictions visually. Scheithauer and Tiger (2012) used a computer-based training program to teach four college students the correspondence between each letter of the English alphabet and its braille counterpart in a match-to-sample format. This program presented the braille character as a sample stimulus and presented five or six English text letters as comparison stimuli in a multiple-choice format. The program sequentially introduced letters in groups of five or six until the participants responded with greater than 95% accuracy across two consecutive sessions; the program then intermixed new targets along with mastered letters. Participants completed computer-based training in a mean duration of 24.4 min (range, 18.5 min to 37.8 min) and demonstrated a rudimentary braille-reading repertoire after training. The efficiency of this training program, along with the ability to generate braille reading, may make computer-based training appealing to teachers who are responsible for the education of children with visual impairments. However, it is premature to assume generality of the outcomes described by Scheithauer and Tiger given the small sample of participants.

The current study was a replication of Scheithauer and Tiger (2012) but included two important extensions. First, we evaluated their instructional program with a larger group of participants to provide a better estimate of the generality of their results. Second, we compared the efficacy and efficiency of the training program by incorporating two response modalities. That is, in addition to the multiple-choice format of our previous study, we also required some participants to engage in a keyed response (i.e., typing the target response on a keyboard in lieu of selecting the target from a restricted stimulus array).

JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2013, 46, 436–443 NUMBER 2 (SUMMER 2013)

doi: 10.1002/jaba.48

Address correspondence to Jeffrey H. Tiger, Department of Psychology, University of Wisconsin–Milwaukee, P.O. Box 413, Milwaukee, Wisconsin 53201 (e-mail: [email protected]).

METHOD

Participants, Setting, and Apparatus

We recruited 84 undergraduate students as participants. Three participants were excluded because they did not meet the prerequisite requirement (described below). Therefore, 81 participants completed the study and were included in our data analysis. Participants' ages ranged from 18 to 33 years (M = 20). They were predominately female (74%), and the majority self-identified as Caucasian (77%), followed by African American (16%), Asian American (6%), and Hispanic (1%). The participants were recruited through a psychology department research pool; they received course credit as compensation. Each participant signed up for two time slots via an online portal. The first time slot was used for the initial instruction and assessment, and the second time slot (7 to 14 days later) was used for a maintenance assessment.

All sessions were conducted in a small office on a university campus on a Hewlett Packard mini-laptop computer equipped with the Microsoft Windows XP operating system. The software was programmed in PracticeMill software (Peladeau, 2000) and was identical to the software described by Scheithauer and Tiger (2012), except where otherwise noted. After each participant arrived in the office, he or she was told the purpose of the study. He or she then completed consent and demographic forms prior to beginning a series of three preinstruction probes.

Procedure

Preinstruction probes. Three preinstruction probes were conducted to determine the extent to which participants (a) read standard English text, (b) read braille text, and (c) named braille characters.

Prerequisite text-reading assessment. The participant was given a paper copy of a passage from the sixth-grade oral reading fluency (ORF) subtest of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment in English (Good & Kaminski, 2002). The experimenter instructed the participant to read as much of the passage as he or she could in 1 min. We required a score of 125 words read correctly (indicative of low risk for reading delays at the sixth-grade level, which is the highest level included in this standardized, criterion-referenced assessment) for continuation in the study. The purpose of this assessment was to ensure that participants were able to read text visually, which we believe to be a prerequisite for the emergent braille reading skills assessed in this study.

Pretest: Braille reading. We translated a first-grade passage from the DIBELS assessment with all letters represented as lowercase braille characters (all punctuation remained in normal text). The experimenter instructed the participant to read as much of the passage (printed on plain white computer paper) as possible within 5 min or to say that he or she could not read the passage (we did not assess tactile braille reading at any point in this study). We set stop criteria at 5 min, a verbal statement of being unable to read the passage, or no words read correctly on the first line of the passage. We used this passage as a pretest of braille reading fluency.

Pretest: Braille letter identification. Participants completed one pretest probe via computer administration in which each letter was presented as a braille sample once (i.e., 26 total trials). We developed the program to present each trial in a multiple-choice format; one trial consisted of the presentation of one braille character and five or six English text letters that served as comparison stimuli. The program provided instructions to click, using the mouse or touchpad, on the one of the multiple-choice English letters that matched the sample braille character.

Braille letter instruction. We randomly assigned participants to one of two instructional groups using a random number generator. For the multiple-choice group (n = 40), braille stimuli were presented as samples along with five or six comparison text letters. Participants responded on each trial by clicking on the radio button next to a comparison stimulus and pressing the space bar to register their response. For the keyed-response group (n = 41), braille stimuli were presented as sample stimuli, but there were no comparisons. Instead, the participant responded by pressing a key on the keyboard and then pressing the space bar to register the response. Other than the response mechanism, both groups experienced identical instruction procedures.

The instructional program introduced letter sets in sequential units. We considered each letter presentation to be a trial. In each unit, the program presented every letter from the new letter set three times and each previously mastered letter once in each session. For example, in Unit 1 we presented the first letter set (O, G, K, A, Y) three times. In Unit 2 we presented each letter from the Unit 1 set (O, G, K, A, Y) once and each letter in the new set (D, V, S, H, T) three times. Therefore, the number of trials presented during each session increased across each unit (i.e., sessions during Units 1, 2, 3, 4, and 5 consisted of 15, 20, 25, 30, and 38 trials, respectively). We defined unit mastery as two consecutive sessions with 95% accuracy or higher. The program automatically calculated the accuracy for each session after completion and advanced participants across units.

The program provided feedback to the participants after each response. If the participant answered correctly, the program presented the visual feedback "Great!" and the participant hit the space bar to move on to the next item. If the participant did not answer correctly, the program presented corrective feedback (e.g., "No, the correct answer is 'K'"). The participant then hit the space bar, and the program repeated the same stimuli (i.e., the item was repeated until the participant provided the correct answer). The program recorded only the first response to each item in the data analysis for each session.

Posttest and maintenance probes: Braille reading and letter identification. We administered these probes immediately after completion of the braille instruction program and identically to the manner of administration during pretest probes, with both the keyed and multiple-choice groups using a multiple-choice format for the letter-identification assessment. During the posttest reading probe, we presented the same braille passage as in the pretest; during the maintenance probe, we presented an alternate passage from the first-grade ORF probe to prevent practice effects. We conducted the posttest probe immediately after completion of the final unit. The mean delay between the training and maintenance sessions was 10.0 days for the multiple-choice group and 9.8 days for the keyed-response group (range, 7 to 14 days for both groups). In total, 82% (n = 33) of individuals in the multiple-choice group and 83% (n = 34) of individuals in the keyed-response group completed the maintenance probe; the remaining participants failed to attend their maintenance appointment and could not be contacted to reschedule.

Acceptability. We administered an acceptability questionnaire after the maintenance probes. This measure consisted of seven items: (a) This would be an acceptable intervention for teaching a school teacher the braille alphabet. (b) I would rather study the braille alphabet on my own than complete this program. (c) Most people would find this training acceptable. (d) I had trouble paying attention and staying alert during training. (e) I feel this training was completed in a reasonable amount of time. (f) I would suggest the use of this training to somebody interested in learning the braille alphabet. (g) I like the procedures used in this training. Participants rated each item on a 6-point scale with 1 = strongly disagree and 6 = strongly agree. We reverse coded items (b) and (d) such that scores of 1 indicated low social validity on each item and scores of 6 indicated high social validity on each item. We computed the mean for all seven items to create an acceptability score.
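The unit and trial structure described above can be sketched in Python. Only the Unit 1 and Unit 2 letter sets are named in the article; the last three sets below are hypothetical placeholders, but their sizes (5, 5, 5, 5, and 6 letters, 26 in total) are implied by the reported session lengths of 15, 20, 25, 30, and 38 trials.

```python
# Sketch of the sequential-unit structure. The last three letter
# sets are placeholders (not given in the article); only their
# sizes are implied by the reported trial counts.
LETTER_SETS = [
    ["O", "G", "K", "A", "Y"],           # Unit 1 (named in the text)
    ["D", "V", "S", "H", "T"],           # Unit 2 (named in the text)
    ["B", "C", "E", "F", "I"],           # Units 3-5: placeholders
    ["J", "L", "M", "N", "P"],
    ["Q", "R", "U", "W", "X", "Z"],
]

def session_trials(unit_index):
    """Each new letter is presented 3 times per session; each
    previously mastered letter is presented once."""
    new_letters = LETTER_SETS[unit_index]
    mastered = [x for s in LETTER_SETS[:unit_index] for x in s]
    return 3 * len(new_letters) + len(mastered)

def mastered_unit(session_accuracies, criterion=0.95):
    """Unit mastery: two consecutive sessions at or above 95% accuracy."""
    return any(a >= criterion and b >= criterion
               for a, b in zip(session_accuracies, session_accuracies[1:]))

print([session_trials(i) for i in range(5)])  # [15, 20, 25, 30, 38]
```

Reproducing the reported per-session trial counts (15, 20, 25, 30, 38) is a useful consistency check on the implied set sizes.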

Measurement and Data Analysis

The computer program calculated the following dependent variables for each participant: (a) correct responding during the pretest letter-identification test, (b) training duration from the onset of instruction to meeting mastery criterion in the last unit, (c) correct and incorrect responses during the instruction phase, (d) correct responding during the posttest letter-identification assessment, and (e) correct responding during the maintenance letter-identification assessment. An undergraduate or graduate research assistant collected data on oral reading during the prerequisite text-reading assessment and the pretest, posttest, and maintenance braille reading assessments by following along on another copy of the passage (we transcribed braille passages into English text for data collection) and placing a line through any word that was read incorrectly. At the end of the reading period, the experimenter told the participant to stop and placed a bracket after the last word attempted within the time limit. The experimenter then subtracted the number of words read incorrectly from the total number of words read to calculate the number of words read correctly within the time limit. Each dependent variable underwent an independent-groups t test that compared the means of the multiple-choice and keyed-response groups' performance.
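The two scoring steps described above, calculating words read correctly within the time limit and comparing group means with an independent-groups t test, can be sketched as follows. This is a minimal pure-Python illustration, not the authors' analysis code; the example scores are invented, and a pooled-variance t test is assumed because the article does not specify the variant.

```python
import math

def words_read_correctly(total_attempted, words_incorrect):
    """Words read correctly = words attempted within the time limit
    minus the words scored as incorrect (the lined-through words)."""
    return total_attempted - words_incorrect

def independent_t(group1, group2):
    """Independent-groups t test with pooled variance.
    Returns (t, df); a p value would come from the t distribution."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Invented example: words-read scores for two small groups.
t, df = independent_t([26, 30, 22, 28], [25, 29, 23, 27])
print(words_read_correctly(30, 4))  # 26
```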

Interobserver Agreement

The software automatically scored and recorded the responses made using the computer program and the duration of instructional sessions, so no calculation of interobserver agreement was necessary for these measures. We conducted calibration tests before and after the study, and both tests were in 100% agreement. An independent observer, either simultaneously or from an audiotaped version of the session, scored 28% of the prerequisite text-reading assessments, 30% of the pretest braille-reading assessments, 31% of the posttest braille-reading assessments, and 26% of the maintenance assessments. Each record was compared on a word-by-word basis. Interobserver agreement was calculated for each passage by summing the number of words scored in agreement and dividing this number by the total number of agreements and disagreements. If both observers scored zero words as read correctly, an agreement of 100% was reported. We then converted this quotient to a percentage, which yielded a mean of 98% agreement (range, 89% to 100%).
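The word-by-word agreement calculation described above amounts to the following (a minimal sketch; the observer records are invented, with one True/False correct-word score per word in the passage):

```python
def interobserver_agreement(obs1, obs2):
    """Word-by-word IOA: agreements / (agreements + disagreements),
    expressed as a percentage. If both observers scored zero words
    as read correctly, agreement is reported as 100% (per the rule
    described in the text)."""
    if not any(obs1) and not any(obs2):
        return 100.0
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

# Invented records: whether each word was scored as read correctly.
primary   = [True, True, False, True, True]
secondary = [True, True, False, False, True]
print(interobserver_agreement(primary, secondary))  # 80.0
```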

RESULTS

All participants completed the program successfully. The multiple-choice group completed the program in a mean of 21.9 min (range, 12.8 min to 36.1 min). The keyed-response group also completed the program quickly, although they took slightly longer than the multiple-choice group (M = 23.6 min; range, 14.3 min to 43.5 min); this difference was not statistically significant, t(79) = 1.24, p = .24. Pretest, posttest, and maintenance letter-identification test scores are depicted for both groups in Figure 1. All participants in the multiple-choice and keyed-response groups responded at chance levels during the pretest letter-identification probe (M = 23.7%, range, 7.7% to 38.5%, and M = 24.7%, range, 11.5% to 46.2%, respectively) and achieved near-perfect accuracy on the posttest letter-identification probe (M = 99.6%, range, 92.3% to 100%, for the multiple-choice group and M = 99.6%, range, 96.2% to 100%, for the keyed-response group). As shown for the 67 participants who attended maintenance sessions, the mean letter-identification accuracy on the maintenance probe remained high for both the multiple-choice group (M = 89.5%) and the keyed-response group (M = 83.5%), and the group differences were not statistically significant, t(65) = 1.95, p = .056.

Figure 1. Mean percentage correct for the pretest, posttest, and maintenance probes for letter identification. Error bars represent the standard deviation for each group.

Braille reading scores from the pretest, posttest, and maintenance test for both groups are depicted in Figure 2. Participants in both groups also increased the number of braille words read at posttest relative to their pretest (all participants scored 0 words read at pretest), but there was notable variability in posttest reading, with a mean of 26.3 words for the multiple-choice group (range, 1 to 65 words) and a mean of 25.8 words for the keyed-response group (range, 4 to 65 words). Group differences were not statistically significant, t(79) = 0.17, p = .79. The multiple-choice group read a mean of 22.8 words (range, 1 to 64 words) and the keyed-response group read a mean of 16.9 words (range, 1 to 59 words) on the maintenance probe; this difference also was not statistically significant, t(65) = 4.29, p = .11. (See Supporting Information online on Wiley Online Library or contact the second author for individual participant results.) However, the multiple-choice group made fewer errors (M = 35.3, SD = 16.0) than did the keyed-response group (M = 55.3, SD = 24.9), t(79) = 4.3, p < .001, during instruction, a difference that was statistically significant even after a Bonferroni correction for inflated Type I error rate due to multiple tests.

Figure 2. Mean number of words read correctly during the pretest, posttest, and maintenance probes for braille reading. Error bars represent the standard deviation for each group.

Finally, both groups rated the procedure as highly acceptable on the survey. In contrast to the performance measures, which slightly favored the multiple-choice group, the keyed-response group rated their teaching procedures as more acceptable (M = 5.5) than did the multiple-choice group (M = 5.0). No items on the acceptability scale differed notably from this mean score, suggesting that all items were sufficiently acceptable.

DISCUSSION

The current study extended the results of Scheithauer and Tiger (2012) by (a) replicating the training program with a larger sample of college students (n = 81) to evaluate the generality of the findings and (b) comparing multiple-choice and keyed-response methods during the training program on the amount of time to complete the program, the number of errors made during instruction, the posttest letter-identification accuracy and braille words read, the maintenance of letter-identification accuracy and braille words read, and the acceptability of both instructional methods. This evaluation yielded a number of important findings that suggest the utility of this computer-based instructional program.

First, similar to the results of Scheithauer and Tiger (2012), participants in the current study completed the training program quickly and successfully identified all braille characters with near-perfect accuracy after instruction. Although computer training taught only the skill of matching braille characters to text letters, the emergence of rudimentary braille reading was also noted, with a mean of 26 words read during the posttest assessment across both groups. Further, these gains were maintained at fairly high levels during the maintenance assessment. It is important to note that the skill of braille reading was never targeted. Therefore, the emergence of these skills may be considered an emergent transitive relation (Sidman & Tailby, 1982) established through the stimulus equivalence paradigm via relations described by Scheithauer and Tiger.

Second, most performance measures for the multiple-choice and keyed-response groups were similar. One notable difference, however, was the number of errors made by each group. The multiple-choice group made a mean of 35.3 errors, whereas the keyed-response group made a mean of 55.3 errors. This outcome is somewhat expected, because a larger array provides more opportunities to make an incorrect response by chance (Rodriguez, 2005). It is interesting to note that we did not see reductions in maintenance scores given the higher error rates exhibited by the keyed-response group; previous research suggests that performance on posttests may suffer when more training errors are made (Nordvik, Schanke, & Landro, 2011; Terrace, 1963). Increased error rates also did not adversely affect training time, in that completion time was similar between the two groups. This likely occurred because each error was followed only by a brief repetition of the item, which the participant could quickly correct.

Results should be considered preliminary due to the sample of participants. College students served as an appropriate proxy given similar demographic characteristics to teachers in training (e.g., similar age range, higher proportion of women), but they do differ in some potentially important ways that limit the generality of these results to more seasoned teachers (e.g., educational level, teaching experience, and potential for prior exposure to braille). Teachers who are responsible for teaching braille are also likely to have higher motivation to learn to read braille than undergraduate students with potentially unrelated majors, so it will be interesting to see how their performance differs from the participants in this study. We believe the brevity of this program will be appealing to teachers; given a mean of about 23 min, they could complete the program during a lunch break or planning hour.

Results also must be considered preliminary because our participants were not fluent braille readers at the conclusion of this study. As noted by Scheithauer and Tiger (2012), the current program taught only the initial skill of braille letter identification, with some collateral effects on reading alphabetic braille. We are currently incorporating and evaluating a number of important extensions, such as units for numerals, punctuation, contractions, and fluency building. We will also evaluate longer maintenance periods.

Finally, teachers who complete this training and who continue to develop braille reading fluency will still not be considered competent braille instructors. The NCLB test requires a number of other skills, including braille writing using different methods, proofreading braille material, and correct use of braille formatting and grammar rules (NBPCB, 2012). Braille reading, when fully developed, is an important skill for braille instructors, but by no means is it the only skill in which they will need to be trained. Development of comprehensive and efficient training programs remains an important area for continued research.


REFERENCES

Bell, E. (2010). U.S. National Certification in Literary Braille: History and current administration. Journal of Visual Impairment & Blindness, 104, 489–498. Retrieved from http://www.afb.org/jvib/jvib_main.asp

Braille Institute. (2010). Facts about sight loss and definitions of blindness. Retrieved from http://www.brailleinstitute.org/facts_about_sight_loss5

Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement. Retrieved from http://dibels.uoregon.edu/

Kelly, S. M., & Smith, D. W. (2011). The impact of assistive technology on the educational performance of students with visual impairments: A synthesis of the research. Journal of Visual Impairment & Blindness, 105, 73–83. Retrieved from http://www.afb.org/jvib/jvib_main.asp

National Blindness Professional Certification Board. (2012). Search for certified NCLBs. Retrieved from http://www.nbpcb.org/pages/nclb_lookup.php

National Federation of the Blind. (2009). The braille literacy crisis in America: Facing the truth, reversing the trend, empowering the blind. Baltimore, MD: Author. Retrieved from http://www.nfb.org/images/nfb/documents/word/The_Braille_Literacy_CrisisIn America.doc

Nordvik, J. E., Schanke, A., & Landro, N. I. (2011). Errorless learning and working memory: The impact of errors, distractors, and memory span load on immediate recall in healthy adults. Journal of Clinical and Experimental Neuropsychology, 33, 587–595. doi: 10.1080/13803395.2010.543886

Peladeau, N. (2000). PracticeMill (Version 2.03) [Computer software]. Montreal, Quebec: Provalis Research.

Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24, 3–13. doi: 10.1111/j.1745-3992.2005.00006.x

Scheithauer, M. C., & Tiger, J. H. (2012). A computer-based program to teach braille reading to sighted individuals. Journal of Applied Behavior Analysis, 45, 315–327. doi: 10.1901/jaba.2012.45-315

Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37, 5–22. doi: 10.1901/jeab.1982.37-5

Terrace, H. S. (1963). Discrimination learning with and without "errors." Journal of the Experimental Analysis of Behavior, 6, 1–27. doi: 10.1901/jeab.1963.6-1

U.S. Department of Education, National Center for Education Statistics. (2011). Digest of education statistics, 2010 (NCE 2011-015). Retrieved from http://nces.ed.gov/programs/digest/d10/tables/dt10_045.asp?referrer=list

Received April 23, 2012
Final acceptance November 4, 2012
Action Editor, John Borrero
