Psychology in the Schools, Vol. 48(1), 2011. © 2010 Wiley Periodicals, Inc.
View this article online at wileyonlinelibrary.com. DOI: 10.1002/pits.20542
ASSESSING FOR GENERALIZED IMPROVEMENTS IN READING COMPREHENSION
BY INTERVENING TO IMPROVE READING FLUENCY
CHRISTINE E. NEDDENRIEP, ABIGAIL M. FRITZ, AND MIRANDA E. CARRIER
University of Wisconsin-Whitewater
The relationship between reading fluency and comprehension was evaluated in five 4th-grade students. These students were identified as being at risk of not meeting yearly goals in reading fluency and comprehension based on fall benchmark assessment data. A brief intervention assessment was used to determine which intervention components would be essential to improving reading fluency across the five participants. As a result, the combination of repeated practice with performance feedback and error correction was implemented using instructional-level reading materials twice per week for 30-minute sessions, with progress monitored weekly using AIMSweb measures of oral reading fluency and comprehension. Empirical, single-case designs were used to evaluate the impact of the program across these five students with assessed, generalized improvements in comprehension. Results indicated increased rate of words read correctly per minute with generalized increases in comprehension for four of five participants. Implications for practice and directions for future research are discussed. © 2010 Wiley Periodicals, Inc.
As school psychologists are increasingly working within a changing model of service delivery,
Response to Intervention (RtI; Brown-Chidsey & Steege, 2005), they require valid and reliable mea-
sures to assess students’ progress within the curriculum and their response to changes in instruction.
Curriculum-based measurement (CBM) is a valid and reliable system developed at the University
of Minnesota more than 30 years ago to be part of a problem-solving approach for special educators
evaluating students’ progress toward Individualized Education Program (IEP) goals and objectives
(Deno & Mirkin, 1977). Increasingly, school psychologists are using this measurement technology
to assess general education students’ proficiency in basic skill areas (e.g., reading, math, written
expression, spelling) and to monitor students’ progress within the curriculum. CBM is well-matched to this task as these procedures are brief, requiring 1 to 3 minutes to administer; grade appropriate,
resembling the typical tasks (e.g., reading aloud) and materials (e.g., reading passages) used in
instruction; repeatable, providing alternate forms of equivalent difficulty; and sensitive, reflecting
small changes in performance over time (Shapiro, 2004). As school psychologists work within a
problem-solving model, CBM procedures yield essential data to inform their decision making about
students’ growth in response to instruction (Deno, Espin, & Fuchs, 2002).
CBM is described as a general outcome measure, meaning that these test procedures do
not measure all aspects of a child’s academic performance but serve as indicators of academic
proficiency (Deno, 1985). The most frequently used and researched CBM of reading proficiency
(R-CBM) assesses oral reading fluency. When R-CBM is assessed, students are asked to read aloud from grade-appropriate passages for 1 minute. Substitutions, omissions, and errors in pronunciation
are noted, and the number of correctly read words in 1 minute is recorded. This rate measure
reflects both the speed and accuracy of reading grade-appropriate materials. Numerous studies have
demonstrated that this rate measure is reliable, sensitive to changes in performance over time, and
related to established norm-referenced and criterion-referenced measures of reading. In addition,
it discriminates between higher and lower performing students (see Marston, 1989, and Wayman,
Wallace, Wiley, Ticha, & Espin, 2007 for reviews).
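The R-CBM scoring rule just described (words read aloud in 1 minute, minus errors) can be sketched in code. The function name and sample values below are illustrative only and do not come from the study:

```python
def score_rcbm(words_attempted, errors, seconds=60):
    """Score an R-CBM probe as words read correctly per minute (WCPM).

    words_attempted: total words the student read during the timing
    errors: substitutions, omissions, and mispronunciations noted
    seconds: length of the timing (standard probes use 60 seconds)
    """
    words_correct = words_attempted - errors
    # Prorate to a per-minute rate in case a timing ran short.
    return round(words_correct * 60 / seconds)

# Hypothetical probe: 110 words attempted with 8 errors in 1 minute.
print(score_rcbm(110, 8))  # -> 102
```

Because the score is a rate, it captures both speed (words attempted) and accuracy (errors subtracted) in a single number, which is what makes it sensitive to small week-to-week changes.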
Whereas the technical adequacy of R-CBM has been established, individuals have questioned
the value of assessing oral reading fluency as an indicator of reading proficiency (e.g., Mehrens &
Correspondence to: Christine E. Neddenriep, Psychology Department, University of Wisconsin-Whitewater,
800 West Main Street, Whitewater, WI 53190. E-mail: [email protected]
Clarizio, 1993; Paris, 2005; Yell, 1992). One of the reasons for the skepticism concerns the
assertion that fluency also reflects comprehension, the goal of reading. This relationship between
fluency and comprehension has been established by researchers correlating R-CBM measures of
reading fluency with established, norm-referenced measures of reading comprehension (e.g., Bain &
Garlock, 1992; Deno, Mirkin, & Chiang, 1982; Fuchs & Deno, 1992; Fuchs, Fuchs, & Maxwell, 1988; Jenkins & Jewell, 1993; Marston, 1989; Reschly, Busch, Betts, Deno, & Long, 2009; Shinn, Good,
Knutson, Tilly, & Collins, 1992). Reported correlations between R-CBM and comprehension are
moderate to strong, ranging from .54 to .93, and have been found to be stronger than those correlations
between more typical measures of comprehension (e.g., question answering) and norm-referenced
measures of comprehension. Although the criterion-related validity of R-CBM is supported by these
group studies, school psychologists assisting teachers in addressing the reading deficits of their
students may be more concerned with the relationship between reading fluency and comprehension
at the individual student level (Markell & Deno, 1997; Wayman et al., 2007).
At the individual level, the relationship between reading fluently and understanding what one
reads has been described theoretically. One such theory explains that, as students become more skilled in decoding and identifying words, their recognition becomes more automatic. This automaticity
allows the reader to spend less time and effort sounding out words and to retain more cognitive
resources for understanding what is being read (LaBerge & Samuels, 1974). This theory supports
the positive correlation between reading fluency and comprehension (Fuchs, Fuchs, Hosp, & Jenkins,
2001; Markell & Deno, 1997; Marston, 1989), but this relationship is not causal. Reading fluency
has been identified as one of several factors necessary but not sufficient for comprehension (National
Institute of Child Health and Human Development [NICHHD], 2000; Pikulski & Chard, 2005; Snow,
Burns, & Griffin, 1998). As a result, when educators implement interventions to improve fluency,
subsequent changes in comprehension can be predicted but not guaranteed (Paris, 2005).
Markell and Deno (1997) directly assessed changes in comprehension affected by changes in oral reading fluency at the individual level. Specifically, 42 third-grade students were presented with
progressively more difficult passages to read. Each student was presented with three reading passages
at the second-, fourth-, and sixth-grade levels. The students then completed literal comprehension
questions regarding each passage and a Maze passage developed based on the passages. The indi-
vidual analyses revealed that the amount of change in reading fluency was an important factor when
making predictions about general improvements in reading proficiency, including comprehension.
These changes needed to be sufficiently large (e.g., 15–20 words) to reliably predict changes in
comprehension. If a student’s oral reading fluency increased at a rate of 1 – 2 words correct per week
on average, then changes in comprehension would be evidenced in approximately 10–20 weeks of
instruction. Also, Markell and Deno found that a minimum criterion of reading 90 words correctly
per minute was required for students in the study to be able to answer most literal comprehension
questions (70%). Whereas this minimum criterion of 90 words read correctly per minute does not
guarantee comprehension for all students, Markell and Deno asserted that it provided a useful guide
for instructional decision making.
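Markell and Deno’s guideline lends itself to a simple projection: given a student’s current fluency and an assumed constant weekly rate of improvement (ROI), one can estimate how many weeks of instruction are needed to reach the 90-WCPM criterion. The sketch below is illustrative; the function and sample values are hypothetical, not from the study:

```python
import math

def weeks_until_criterion(current_wcpm, roi_per_week, criterion_wcpm=90):
    """Estimate weeks of instruction until a fluency criterion is reached,
    assuming a constant weekly rate of improvement (ROI).
    criterion_wcpm defaults to Markell and Deno's 90-WCPM guideline.
    """
    gap = criterion_wcpm - current_wcpm
    if gap <= 0:
        return 0  # Criterion already met.
    return math.ceil(gap / roi_per_week)

# Hypothetical student reading 60 WCPM, gaining 1.5 words per week:
print(weeks_until_criterion(60, 1.5))  # -> 20
```

This mirrors the arithmetic in the text: a gain of 15–20 words at 1–2 words per week implies roughly 10–20 weeks of instruction before changes in comprehension would be expected.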
The purpose of the current study was to further our understanding of the relationship between
changes in reading fluency and associated changes in comprehension at the individual level. Whereas
Markell and Deno (1997) had manipulated reading fluency by exposing students to differing levels
of text within a single session, the current study altered performance in reading fluency across time
and assessed corresponding changes in comprehension. Given Markell and Deno’s (1997) assertion
that large differences in reading aloud are necessary before changes in reading comprehension can be
demonstrated, the current study used evidence-based instructional components to affect the reading
fluency of 5 fourth-grade students across a total of 15 weeks and to assess generalized improvements
in comprehension during the same time period. The current study also evaluated Markell and Deno’s
finding of a minimum criterion necessary for comprehension. The results have important implications
for individual progress monitoring and for designing interventions for students with reading fluency
deficits.
METHOD
Participants and Setting
The participants included 5 general education fourth-grade students (2 boys and 3 girls), ranging in age from 9 to 10 years and attending an elementary school in the Midwestern United
States. These students were nominated by their teachers for participation based on the results of
fall benchmark assessment data, which were collected in September, approximately 2 weeks after
the start of the school year using three AIMSweb R-CBM Fall Benchmark Assessment Passages.
The median scores for each student were 31, 42, 43, 48, and 61 words correct per minute (WCPM),
reflecting a frustration level in fourth-grade material according to Deno and Mirkin’s (1977) instructional-level criteria (70–100 WCPM). In comparison, the average fourth-grade student at the same school read at an instructional level, 85 WCPM. Thus, these students were performing below the
25th percentile and were not currently receiving additional services or supports.
The school was located in a rural setting, with approximately 42% of the students receiving free
or reduced lunch. The racial makeup of the school was predominately White, as were the participants
in the study. Latino students made up 28% of the school population, Asian students 3%, and Black
students 2%. All procedures were conducted in separate rooms a short distance from the students’
classrooms. Procedures were conducted after the school day, 2 days per week across a total of 15
weeks.
Materials
AIMSweb R-CBM (Shinn & Shinn, 2002a) and Maze (Shinn & Shinn, 2002b) passages from
the AIMSweb Progress Monitoring and RtI System (www.aimsweb.com) were used to determine the
participants’ initial level of reading performance (fluency and comprehension) as well as to assess
the impact of the reading fluency intervention. AIMSweb Grade 4 Standard Progress Monitoring
Reading Assessment Passages contain 350 words and are written as a story with a beginning and an
end. Thirty passages are available of equivalent difficulty at Grade 4, as determined by the Fry (1968)
readability formula. Alternate-form reliability for the passages was reported to be .85 (Howe & Shinn,
2002). AIMSweb Grade 4 Maze Assessment Passages also contain 350-word stories. Beginning with
the second sentence of each story, approximately every seventh word is omitted and replaced with
three words inside parentheses. One word correctly completes the sentence maintaining the meaning.
The two alternative words are distracters—one is a near distracter, a word of the same part of speech
(e.g., noun, verb, adverb) as the correct word, but does not make sense or preserve the meaning of
the sentence; the other distracter is a far distracter, a word randomly selected from the story that
does not make sense. Thirty alternative passages of equivalent difficulty are available for continuous
assessment. The Maze task has been found to be a reliable measure of reading comprehension for
students in elementary, middle, and high school (Brown-Chidsey, Davis, & Maya, 2003). As well,
the concurrent and criterion-related validity of the Maze task has been well-established (Fuchs &
Fuchs, 1992; Jenkins & Jewell, 1993).
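The Maze construction rule described above can be sketched in code: starting with the second sentence, approximately every seventh word is replaced with three parenthesized choices, the correct word, a near distracter (same part of speech but wrong in context), and a far distracter drawn at random from the story. In this simplified sketch the near distracter is supplied by a caller-provided lookup rather than derived by part-of-speech tagging, and the formatting of the choices is an assumption; all names and the sample passage are hypothetical:

```python
import random

def build_maze(passage, near_distracters, nth=7, seed=0):
    """Build a simplified Maze passage: starting with the second
    sentence, every nth word is replaced with three parenthesized
    choices (correct word, near distracter, far distracter)."""
    rng = random.Random(seed)
    sentences = passage.split(". ")
    first, rest = sentences[0], ". ".join(sentences[1:])
    words = rest.split()
    out = []
    for i, word in enumerate(words, start=1):
        if i % nth == 0:
            # Near distracter: same part of speech, does not fit the
            # sentence; here looked up from a caller-supplied dict.
            near = near_distracters.get(word.lower(), word)
            # Far distracter: a word drawn at random from the story.
            far = rng.choice(words)
            choices = [word, near, far]
            rng.shuffle(choices)
            out.append("(" + " / ".join(choices) + ")")
        else:
            out.append(word)
    return first + ". " + " ".join(out)

passage = "The fox ran. It ran fast over the quiet green hill and then it slept."
maze = build_maze(passage, {"green": "angry"})
print(maze)
```

A production Maze generator would also need to handle punctuation, avoid distracters identical to the target, and verify reading level, which this sketch omits.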
Passages and Sight Phrases from the Great Leaps Elementary Program (Grades 3–5; Campbell,
2005) were used within the intervention. The Sight Phrases component includes progressively more
difficult pages of phrases, including high-frequency words found in the English language. Each page
of phrases includes an increasing total number of words grouped in three-word phrases, designed to
be read aloud in 1 minute with no errors to demonstrate fluency. The Passages component includes
progressively more difficult stories with an increasing total number of words designed to be read
aloud in 1 minute with two or fewer errors. The Great Leaps Program also includes graphs to chart
students’ progress. In addition, stopwatches and kitchen timers were used.
Dependent Variables

Three dependent measures of reading proficiency were assessed: oral reading fluency (the rate
of words read correctly per minute in R-CBM passages), errors per minute (the rate of errors made
per minute in R-CBM passages), and responses correct per 3 minutes (the rate of correctly selected
words per 3 minutes in Maze passages).
Procedural Conditions
Brief Intervention Assessment. Based on the benchmark data mentioned earlier in this article
indicating that these five students were reading at a frustration level in fourth-grade materials, a brief
intervention assessment (Witt, Daly, & Noell, 2000) was conducted to determine which instructional
component(s) might be essential to improving their reading fluency. Several instructional components have been found to be effective in increasing reading fluency, including practice, modeling, error
correction, contingent reinforcement, and performance feedback. Repeated reading is an evidence-
based strategy that incorporates the primary component of practice. This intervention has been
shown to improve students’ speed, accuracy, and understanding of the passage read (NICHHD,
2000). Therrien (2004) found that, when repeated reading is used to improve students’ overall
reading fluency and comprehension, several components are essential: the
student reads aloud to an adult; the adult corrects errors to ensure accurate practice; and the adult
provides feedback regarding performance to ensure mastery.
Given these essential components, each participant was exposed to stacked conditions of re-
peated practice, performance feedback, and error correction following a baseline condition within the brief intervention assessment. When the students were given the opportunity to practice, they
read the passage three times. When the students were provided with performance feedback in ad-
dition to practice, they were told how many words they had read in 1 minute previously, and they
were asked to read the current passage three times. They were told how many words they had read in
1 minute on that passage in comparison. When they were provided with error correction in addition
to practice and performance feedback, they were told which words they had mispronounced or
omitted, prompted to read each phrase with the error word corrected three times, and then asked
to reread the passage two more times. They were again told how many words they had read in
1 minute on that passage in comparison to the previous passage. The participants’ response to each
condition was assessed using four different AIMSweb R-CBM passages. The number of words read correctly and errors made per minute were graphed and compared to a baseline level of performance to determine which component(s) may be essential to improving the performance of each participant.
Extended Assessment. Following the collection of three additional baseline data points across
both AIMSweb measures of R-CBM and Maze, the combination of practice, performance feedback,
and error correction was implemented using the Passages and Sight Phrases from the Great Leaps
Elementary Program (Grades 3–5; Campbell, 2005) described earlier (see the Materials section).
The five students were grouped based on their similar reading levels into two pairs (Ethan and Maggie;
Laura and Allie) and a single student (Glen) working with three adults for 30 minutes 2 days a week
across 12 weeks of intervention. Students repeatedly practiced reading the Sight Phrases and Passages
aloud to the adult until they were able to successfully read the Sight Phrases with no errors in 1 minute
and the Passages with 2 or fewer errors in 1 minute. Errors were corrected following each reading,
and feedback regarding their performance was provided and graphed visually. Progress was assessed
weekly using AIMSweb measures of R-CBM and Maze.
Design and Analysis
Empirical, single-case designs (Skinner, 2004) were used to demonstrate the change in fluency over time between baseline and treatment conditions and to evaluate the concurrent change in
comprehension over the same time period at the individual level. Whereas this design does not allow
us to make a causal inference regarding the intervention, it does allow us to demonstrate the change
in both measures for each participant. The data were graphed and visually inspected comparing
baseline to intervention levels for changes in level and trend (i.e., mean level of performance and
rate of improvement). The percentage of change and standardized effect sizes were also calculated
to determine the difference between the intervention and baseline levels. Percentage of change was
calculated by subtracting the mean of the baseline observations from the mean of the intervention observations, dividing the result by the baseline mean, and multiplying by 100. The standardized effect size was calculated by subtracting the baseline mean from the intervention mean and dividing the result by the standard deviation of
the baseline observations (Shernoff, Kratochwill, & Stoiber, 2002).
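The two summary statistics defined above translate directly into code. The following sketch uses the standard library; the sample score series are hypothetical and not from the study:

```python
from statistics import mean, stdev

def percent_change(baseline, intervention):
    """Percentage of change: (intervention mean - baseline mean),
    divided by the baseline mean, multiplied by 100."""
    return (mean(intervention) - mean(baseline)) / mean(baseline) * 100

def effect_size(baseline, intervention):
    """Standardized effect size: the mean difference divided by the
    standard deviation of the baseline observations
    (Shernoff, Kratochwill, & Stoiber, 2002)."""
    return (mean(intervention) - mean(baseline)) / stdev(baseline)

# Hypothetical weekly WCPM scores:
base = [60, 64, 62]
treat = [70, 75, 80, 85]
print(round(percent_change(base, treat), 1))  # -> 25.0
print(round(effect_size(base, treat), 2))     # -> 7.75
```

Note that `stdev` computes the sample standard deviation; with very few and tightly clustered baseline points, the effect size can become large even for modest absolute gains, which is one reason visual inspection of level and trend is used alongside these statistics.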
Integrity of Experimental Procedures and Inter-Scorer Agreement
Experimenters completed checklists containing the steps pertaining to all experimental pro-
cedures to record procedural integrity data. These data showed that the experimenter implemented
tutoring procedures with 100% integrity across all sessions. A second observer independently
recorded the number of words read correctly per minute and scored the number of correct responses
made per 3 minutes across 20% of the progress-monitoring sessions. Inter-observer agreement was
calculated as the number of agreements divided by the number of agreements plus disagreements and multiplied by 100. Average inter-scorer agreement was 99.6% (range = 96–100) for words read correctly and 99.2% (range = 94–100) for responses correctly made per 3 minutes.
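The agreement formula above is straightforward to express in code. The sketch below applies it to item-level records as an illustration; the scorer vectors are hypothetical:

```python
def interobserver_agreement(scorer_a, scorer_b):
    """Point-by-point agreement between two independent scorers:
    agreements / (agreements + disagreements) * 100."""
    agreements = sum(a == b for a, b in zip(scorer_a, scorer_b))
    disagreements = len(scorer_a) - agreements
    return agreements / (agreements + disagreements) * 100

# Hypothetical word-by-word scoring records (1 = correct, 0 = error):
a = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]
print(interobserver_agreement(a, b))  # -> 90.0
```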
RESULTS
The results of the brief intervention assessment for the five participants are summarized in
Table 1. Across all five participants, the addition of performance feedback and practice was effective
in increasing the number of words read correctly per minute over baseline and practice alone
conditions. The addition of error correction led to a higher rate of words read correctly for three of
the five participants with five or fewer errors for four of the five participants. Thus, the addition of
error correction was judged beneficial in supporting fluent (fast and accurate) practice.
Table 1
WCPM and Errors per Minute (EPM) across Participants and Conditions within the Brief Intervention Assessment

             Baseline      Practice      Practice + Performance   Practice + Performance Feedback +
Participant  WCPM (EPM)    WCPM (EPM)    Feedback WCPM (EPM)      Error Correction WCPM (EPM)
Ethan        60 (8)        58 (14)       94 (7)                   104 (8)
Maggie       82 (6)        81 (6)        119 (2)                  133 (3)
Laura        74 (6)        73 (5)        130 (7)                  109 (5)
Allie        60 (2)        52 (9)        120 (0)                  114 (5)
Glen         35 (6)        39 (8)        61 (4)                   63 (5)
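The study selected components by visually comparing graphed data; as a rough numeric analogue of that comparison, one can rank the Table 1 conditions by WCPM, breaking ties in favor of fewer errors. The selection rule below is a simplification for illustration, not the authors’ actual decision procedure, and only a subset of the Table 1 values is shown:

```python
# WCPM (errors per minute) pairs from Table 1, keyed by condition.
data = {
    "Ethan": {"baseline": (60, 8), "practice": (58, 14),
              "practice+pf": (94, 7), "practice+pf+ec": (104, 8)},
    "Glen":  {"baseline": (35, 6), "practice": (39, 8),
              "practice+pf": (61, 4), "practice+pf+ec": (63, 5)},
}

def best_condition(conditions):
    """Pick the condition with the highest WCPM, breaking ties
    in favor of fewer errors per minute."""
    return max(conditions.items(), key=lambda kv: (kv[1][0], -kv[1][1]))[0]

for student, conditions in data.items():
    print(student, "->", best_condition(conditions))
```

For both students shown, the full package (practice with performance feedback and error correction) wins on this heuristic, consistent with the condition ultimately implemented in the extended assessment.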
The results of the implementation of practice, performance feedback, and error correction on
the participants’ reading fluency and assessed generalization to comprehension are displayed in
Figures 1–5 with summary data included in Table 2. Ethan initially read an average of 78 WCPM
across baseline sessions. During the 12 weeks of intervention, he read an average of 100 WCPM
FIGURE 1. Ethan’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.
FIGURE 2. Maggie’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.
(range = 75–125), reflecting a 27% increase over his baseline performance (average gain of 22 words) and an
effect size of 1.19, resulting in his reading at a mastery level across the last three consecutive weeks
of intervention (105, 125, and 122 WCPM). During the same period of time, Ethan’s comprehension
increased, reflecting an average rate of improvement of 1 word correctly selected per week in
FIGURE 3. Laura’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.
assessed Maze passages (see Figure 1 and Table 2). Sustained improvements in comprehension
became evident after 5 weeks of fluency intervention.
Maggie initially read an average of 86 WCPM across baseline sessions. During the 12 weeks
of intervention, she read an average of 97 WCPM, reflecting a 13% increase over her baseline
FIGURE 4. Allie’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.
performance (average gain of 11 words) and an effect size of .65, resulting in her reading at a
mastery level across the last three consecutive weeks of intervention (110, 131, and 110 WCPM).
During the same period of time, Maggie’s comprehension increased, reflecting an average rate of
improvement of .86 words correctly selected per week in assessed Maze passages (see Figure 2 and
FIGURE 5. Glen’s reading fluency assessed across baseline and intervention conditions with assessed generalization to comprehension.
Table 2). Sustained improvements in comprehension became evident after just 4 weeks of fluency
intervention.
Laura initially read an average of 74 WCPM across baseline sessions. During the 12 weeks
of intervention, she read an average of 88 WCPM, reflecting an 18% increase over her baseline
Table 2
Summarized Changes in Reading Fluency and Comprehension across the Five Participants

             Average Gain in   Percent Change in   Effect   ROI in          Instructional
Participant  Number of Words   Reading Fluency     Size     Comprehension   Level
Ethan        22                27%                 1.19     1.0             Mastery
Maggie       11                13%                 .65      .86             Mastery
Laura        14                18%                 1.17     .56             Instructional
Allie        15                23%                 2.14     .8              Mastery
Glen         13                46%                 1.08     .02             Frustration
performance (average gain of 14 words) and an effect size of 1.17, resulting in her consistently
reading at or above an instructional level throughout the intervention phase. During the same period
of time, Laura’s comprehension increased, reflecting an average rate of improvement of .56 words correctly selected per week in assessed Maze passages (see Figure 3 and Table 2). Laura’s improved
comprehension became more stable after 6 weeks of fluency intervention.
Allie initially read an average of 66 WCPM across baseline sessions. During the 12 weeks
of intervention, she read an average of 81 WCPM, reflecting a 23% increase over her baseline
performance (average gain of 15 words) and an effect size of 2.14, resulting in her reading at a
mastery level across the last two consecutive weeks of intervention (111 and 103 WCPM). During the
same period of time, Allie’s comprehension increased, reflecting an average rate of improvement of
.8 words correctly selected per week in assessed Maze passages (see Figure 4 and Table 2). Sustained
improvements in comprehension became evident after just 4 weeks of fluency intervention.
Glen initially read an average of 28 WCPM across baseline sessions. During the 11 weeks of
intervention (due to absences), he read an average of 41 WCPM, reflecting a 46% increase over
his baseline performance (average gain of 13 words) and an effect size of 1.08; however, Glen was
continuing to read at a frustration level in fourth-grade materials. During the same period of time,
Glen’s comprehension remained consistently low, reflecting little to no rate of improvement (ROI = .02 words correctly selected per week in assessed Maze passages; see Figure 5 and Table 2).
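A weekly rate of improvement like those reported for each participant is commonly computed as the ordinary least-squares slope of scores regressed on week number. The sketch below shows that calculation; the sample weekly Maze scores are hypothetical:

```python
def rate_of_improvement(weekly_scores):
    """Ordinary least-squares slope of scores against week number:
    one common way to express a rate of improvement (ROI) in words
    (or responses) gained per week."""
    n = len(weekly_scores)
    weeks = range(1, n + 1)
    mean_x = sum(weeks) / n
    mean_y = sum(weekly_scores) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(weeks, weekly_scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

# Hypothetical weekly Maze scores (responses correct per 3 minutes):
print(round(rate_of_improvement([10, 11, 13, 12, 14, 15]), 2))  # -> 0.94
```

A slope-based ROI is less sensitive to a single unusually high or low week than a simple endpoint difference, which matters when only 11 to 12 weekly data points are available per phase.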
DISCUSSION
The current study used evidence-based instructional components to affect the reading fluency
of 5 fourth-grade students across a total of 15 weeks and to assess generalized improvements in
comprehension during the same time period. Given Markell and Deno’s (1997) assertion that large
differences in reading aloud are necessary before changes in reading comprehension can be demon-
strated, the current study used a brief intervention assessment to determine the essential components
necessary to increase participants’ reading fluency. During the 12 weeks that repeated practice with
performance feedback and error correction were implemented, participants demonstrated an average
increase of 25% over baseline levels of performance, representing an average gain of 15 words
from baseline to intervention and an average effect size of 1.25. Four of the five participants also
demonstrated meaningful gains in comprehension at a rate exceeding the realistic growth rate for
fourth-grade students (ROI = .39; Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993). Whereas
Markell and Deno (1997) had asserted that a minimum gain of 15–20 words was necessary to
predict changes in comprehension, only two of the four students met this minimum criterion (Ethan
and Allie; see Table 2). Rather, the quality of the change appeared more important in predicting
change in comprehension. The four students who demonstrated growth in comprehension had grown
in reading fluency such that they were reading at an instructional or mastery level. Glen, despite
having increased his reading fluency by 46%, was continuing to read at a frustration level. Thus,
these results support Markell and Deno’s recommended use of 90 WCPM as a minimum fluency
criterion for literal comprehension. Whereas this criterion may not be sufficient for comprehension, it
provides a guideline for defining what is “necessary” for comprehension in setting goals for fluency.
Limitations and Directions for Future Research
The current study adds to the literature regarding changes in reading comprehension affected
by changes in reading fluency at the individual level. Several limitations should be noted, however.
First, the use of empirical case designs allowed us to describe the changes in both measures over
time, but we were not able to demonstrate a functional relationship between our implementation of
the intervention components (repeated practice, error correction, and performance feedback) and the
resulting change in reading fluency. Using an experimental case design, such as a multiple baseline
design across participants, would have allowed us to draw a cause–effect relationship. Given the
limited time available, using a multiple baseline design across five participants was not possible.
Future researchers may use this design to demonstrate experimental control.
A second limitation is the lack of maintenance and follow-up data across both measures.
Although changes in reading fluency and comprehension appeared to coincide in four of the five
participants, without continued follow-up we do not know if these gains were maintained over time.
Future researchers should continue to collect data after the intervention has been discontinued to
determine if these gains are maintained over time.
Finally, to assess generalized gains in reading comprehension relative to a fluency intervention,
only an intervention for reading fluency was implemented. This is not to say that the participants
were not also receiving comprehension strategies within their classroom instruction during the
same time that the fluency intervention was applied. The comprehension strategies, however, would
have occurred across the baseline conditions as well. To determine the relative gain of adding comprehension strategies to the fluency intervention, an additional phase would have been required.
Future researchers may consider adding comprehension strategies after demonstrating an increase
in fluency and generalized gains in comprehension to determine the added benefit.
Implications for Practice
Students referred for school psychology services most often display reading skill deficits
(Reschly, 2008). As recent data attest, a significant number of fourth-grade students (i.e., 37%)
are performing below the basic level (National Center for Education Statistics, 2001), a level
at which they are unable to read and to comprehend grade-level material. This lack of reading
proficiency predicts poor future outcomes for these students. As school psychologists work within
an RtI model to address the reading skill deficits of these at-risk students, the implementation
of reading fluency interventions is essential. Given our understanding of the relationship between
reading fluency and comprehension, we would expect that, as reading fluency increases, so too
would reading comprehension (Reschly et al., 2009). Data from this study add support to the
assertion that reading fluency is necessary but not sufficient for comprehension. Adequate
reading fluency may be minimally defined in terms of Fuchs and Deno's instructional-level criteria
in grade-level materials. Even when large gains are made in fluency, those gains may not be sufficient
to support comprehension of grade-level materials unless fluency at least reaches an instructional level.
As school psychologists work to operationally define and address students' reading deficits,
they may find that, although the referral concern is comprehension, the student's reading fluency
has not yet reached the minimum level at which gains in comprehension can be expected. Given the
limited instructional time available for supplemental interventions in the classroom, a reading fluency intervention may be
both an effective and efficient method to achieve gains in fluency and comprehension if fluency is
increased to an instructional level in grade-level materials. Thus, assessment of fluency is essential
to addressing comprehension deficits given the relationship between the two (Baker, Gersten, &
Grossen, 2002). As well, the instructional level criteria may be an especially useful standard for goal
setting with regard to the fluency intervention, with instructional level reflecting a minimum leveland mastery level reflecting an optimal level of fluency achieved.
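The two-tier goal logic just described, instructional level as a minimum goal and mastery level as an optimal goal, can be made concrete in a short sketch. The WCPM cut points below are hypothetical placeholders chosen for illustration only; they are not the criteria used in this study nor Fuchs and Deno's actual values.

```python
# Illustrative sketch of instructional-level goal setting for a fluency
# intervention. The thresholds are hypothetical placeholders, not the
# criteria from this study.

FRUSTRATION_MAX = 69    # at or below: frustration level (hypothetical cut)
INSTRUCTIONAL_MAX = 99  # 70-99 WCPM: instructional level (hypothetical cut)
                        # 100+ WCPM: mastery level (hypothetical cut)

def placement(wcpm):
    """Classify words correct per minute (WCPM) into a placement level."""
    if wcpm <= FRUSTRATION_MAX:
        return "frustration"
    if wcpm <= INSTRUCTIONAL_MAX:
        return "instructional"
    return "mastery"

def fluency_goals(baseline_wcpm):
    """Two-tier goal: instructional level as the minimum goal,
    mastery level as the optimal goal."""
    return {
        "current_level": placement(baseline_wcpm),
        "minimum_goal_wcpm": FRUSTRATION_MAX + 1,
        "optimal_goal_wcpm": INSTRUCTIONAL_MAX + 1,
    }

print(fluency_goals(58))
```

Under this scheme, a student reading 58 WCPM would be placed at the frustration level, with 70 WCPM as the minimum (instructional-level) goal and 100 WCPM as the optimal (mastery-level) goal.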
REFERENCES
Bain, S. K., & Garlock, J. W. (1992). Cross-validation of criterion-related validity for CBM reading passages. Assessment for Effective Intervention, 17, 202–208.
Baker, S., Gersten, R., & Grossen, B. (2002). Interventions for students with reading comprehension problems. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 731–754). Washington, DC: National Association of School Psychologists.
Brown-Chidsey, R., Davis, L., & Maya, C. (2003). Sources of variance in curriculum-based measures of silent reading. Psychology in the Schools, 40, 363–377.
Brown-Chidsey, R., & Steege, M. W. (2005). Response to intervention: Principles and strategies for effective practice. New York: Guilford.
Campbell, K. U. (2005). Great leaps reading program (5th ed.). Gainesville, FL: Diarmuid.
Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.
Deno, S. L., Espin, C. A., & Fuchs, L. S. (2002). Evaluation strategies for preventing and remediating basic skill deficits. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 213–242). Washington, DC: National Association of School Psychologists.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional Children.
Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.
Fry, E. (1968). A readability formula that saves time. Journal of Reading, 11, 513–516, 575–578.
Fuchs, L. S., & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232–243.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21, 45–59.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9(2), 20–28.
Howe, K. B., & Shinn, M. M. (2002). Standard reading assessment passages (RAPs) for use in general outcome measurement: A manual describing development and technical features. Eden Prairie, MN: Edformation.
Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and Maze. Exceptional Children, 59, 421–432.
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323.
Markell, M. A., & Deno, S. L. (1997). Effects of increasing oral reading: Generalization across reading tasks. The Journal of Special Education, 31, 233–250.
Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18–78). New York: Guilford.
Mehrens, W. A., & Clarizio, H. F. (1993). Curriculum-based measurement: Conceptual and psychometric considerations. Psychology in the Schools, 20, 241–254.
National Center for Education Statistics, Office of Educational Research and Improvement, & U.S. Department of Education. (2001). The nation’s report card: Fourth-grade reading 2000 (NCES Publication No. 2001-499). Washington, DC: U.S. Government Printing Office.
National Institute of Child Health and Human Development (NICHHD). (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.
Paris, S. G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly, 40(2), 184–202.
Pikulski, J. J., & Chard, D. J. (2005). Fluency: Bridge between decoding and reading comprehension. The Reading Teacher, 58, 510–519.
Reschly, A. L., Busch, T. W., Betts, J., Deno, S. L., & Long, J. D. (2009). Curriculum-based measurement oral reading as an indicator of reading achievement: A meta-analysis of the correlational evidence. Journal of School Psychology, 47, 427–469.
Reschly, D. J. (2008). School psychology paradigm shift and beyond. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (5th ed., pp. 3–17). Washington, DC: National Association of School Psychologists.
Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford Press.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-based interventions in school psychology: An illustration of task force coding criteria using single-participant research design. School Psychology Quarterly, 17, 390–422.
Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. L. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459–479.
Shinn, M. M., & Shinn, M. R. (2002a). AIMSweb training workbook: Administration and scoring of reading curriculum-based measurement (R-CBM) for use in general outcome measurement. Bloomington, MN: Pearson, Inc.
Shinn, M. R., & Shinn, M. M. (2002b). AIMSweb training workbook: Administration and scoring of reading maze for use in general outcome measurement. Bloomington, MN: Pearson, Inc.
Skinner, C. H. (2004). Single-subject designs: Procedures that allow school psychologists to contribute to the intervention evaluation and validation process. Journal of Applied School Psychology, 20(2), 1–10.
Snow, C. E., Burns, S. M., & Griffin, P. (1998). Preventing reading difficulties in young children. Washington, DC: National Academies Press.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial and Special Education, 25, 252–261.
Wayman, M. M., Wallace, T., Wiley, H. I., Ticha, R., & Espin, C. A. (2007). Literature synthesis on curriculum-based measurement in reading. The Journal of Special Education, 41(2), 85–120.
Witt, J. C., Daly, E. J., III, & Noell, G. H. (2000). Functional assessments: A step-by-step guide to solving academic and behavior problems. Longmont, CO: Sopris West.
Yell, M. L. (1992). Barriers to implementing curriculum-based measurement. Diagnostique, 18, 99–112.
Psychology in the Schools DOI: 10.1002/pits