
Journal of Educational and Psychological Consultation, 22:314–329, 2012

Copyright © Taylor & Francis Group, LLC

ISSN: 1047-4412 print/1532-768X online

DOI: 10.1080/10474412.2012.731292

Implementation as a Focus of Consultation to Evaluate Academic Tutoring Services in an Urban School District: A Case Study Example

JULIE Q. MORRISON and SARAH BAKER ENGLISH
University of Cincinnati

This article describes a multiagency initiative to evaluate academic tutoring services by focusing on the processes that contribute to effective program implementation. Community-based tutoring service providers serving students in the Cincinnati Public Schools (OH) partnered to initiate a "Seal of Approval" process for promoting evidence-based practices among tutoring providers, assigning merit to effective programs, and eliminating or remediating ineffective practices. The consultant-driven process evaluation was designed to be fair and equitable among an array of tutoring service providers. This case example has implications for consultants, tutoring program directors, and school district administrators seeking to establish an accountability system for tutoring service providers.

Correspondence should be sent to Julie Q. Morrison, School Psychology Program, College of Education, Criminal Justice, and Human Services, School of Human Services, University of Cincinnati, P.O. Box 210068, Cincinnati, OH 45221–0068. E-mail: [email protected]

As an urban school district with 22,702 students living in poverty (69.8% of the student population), Cincinnati Public Schools (OH) relies on strong partnerships with business, nonprofit, and civic organizations to tackle some of the district's most pressing challenges and maximize students' success. Ensuring the quality of the initiatives conceived through these multiple, and often competing, partnerships requires a level of accountability and collaboration that exceeds the capacity of a high-need, urban school district. It was in this context that the SMART Tutoring Network was born. Program directors from local agencies that provided tutoring to students in the Cincinnati Public Schools gathered together around the common stated goal of providing awareness of and access to a network of trained, high-quality, caring, and supportive academic tutors for Cincinnati Public Schools students in Grades K–8. With tens of thousands of dollars in Supplemental Educational Services at stake, program directors also wanted to ensure their program would get a fair share of the district's federal funds. Frank discussions among the program directors participating in the SMART Tutoring Network led to the recognition that the directors had no way of judging whether their programs were effective and that an accountability process was needed to promote the quality programs and weed out the charlatans. With significant implications for tutoring service providers across the nation, the members of the SMART Tutoring Network posed the question, "What does it mean to be a quality tutoring program?"

The opening scenario describes the point at which tutoring program directors participating in the SMART Tutoring Network determined the need for a "Seal of Approval" process for promoting effective practices among tutoring providers, assigning merit to effective programs, and eliminating ineffective programs. With this process in place, program directors envisioned a future point in time when the school district would be able to provide families with a list of tutoring programs that had received the "Tutoring Seal of Approval." But first, program directors needed a model for judging implementation quality.

Assessing program implementation has been put forth as a "core issue" in the future of evaluation (N. L. Smith et al., 2011, p. 574). Implementation refers to what a program consists of when it is delivered in a particular setting (Durlak & DuPre, 2008). Implementation or process evaluation is essential for (a) establishing the internal and external validity of a program, (b) testing the theory driving the program, (c) understanding the crucial importance of different intervention components, and (d) monitoring implementation so as to identify targets for program improvement (Durlak & DuPre, 2008). Just as economic pressures, restricted resources, and a demand for evidence of impact have given rise to an emphasis on accountability, program implementation and a clarification of activities integral to achieving positive outcomes are increasingly being prioritized as targets for evaluation (N. L. Smith et al., 2011).

This article describes a multiagency consultative initiative to promote high-quality school-based and community-based tutoring through accountability in an urban school district. Using a multimethod, multiinformant process evaluation that focused primarily on implementation quality, we illustrate a number of factors critical to the success of the initiative and highlight the many challenges inherent in ensuring a fair and equitable evaluation of tutoring programs that varied in size, personnel qualifications, instructional approaches, and academic content addressed. We hope the lessons we learned will provide a valuable starting point for other consultants, tutoring program directors, and school district administrators seeking to establish an accountability system for tutoring service providers.


THE LEGISLATIVE CONTEXT

Schools and school districts have long partnered with public and private organizations to recruit volunteer tutors for the provision of supplemental academic support to struggling students. Historically, tutoring services were provided by community-based and faith-based organizations and private businesses and were coordinated at the school level without significant funding. Consistent with recent trends toward the privatization of services within education, tutoring services have joined the ranks of other market-based services. However, efforts to evaluate the quality of tutoring services are lagging (Burch, Steinberg, & Donovan, 2007).

Supplemental Educational Services (SES) was established by the No Child Left Behind Act (NCLB; 2001), the reauthorization of the Elementary and Secondary Education Act, as a "consequence" or "corrective action" for schools that failed to make adequate yearly progress. SES includes remedial instruction and tutoring in reading and mathematics. However, because of the widespread, federally mandated use of SES (with more than $2.5 billion available for funding), SES providers have been shown to vary widely in their instructional strategies, curricula, tutor qualifications, and hourly tutoring fees billed to school districts (Anderson & Laguarda, 2005; Burch et al., 2007; Casserly, 2004; Steinberg, 2006; Vegari, 2007). A limited number of studies have examined the impact of SES tutoring on student achievement (e.g., Chicago Public Schools, 2007; Rickles & Barnhart, 2007; Ryan & Fatani, 2005; Zimmer, Gill, Razquin, Booker, & Lockwood, 2007), yet no research to date has explored the processes that contribute to effective implementation of SES to ensure quality academic tutoring services across a broad array of providers.

THE LOCAL CONTEXT

At the time the evaluation process was conceived, the school district had a nominal plan in place to assess legal compliance among the SES-funded tutoring providers. According to the law, school districts are required to ensure that the content and educational practices of SES are aligned with the state's academic content standards (and applicable federal, state, and local health, safety, and civil rights laws [Section 1116(e)(12)(C)]), and districts must withdraw approval from SES providers that fail for 2 years to increase student academic achievement.

The Tutoring Seal of Approval initiative was developed as a process evaluation to assess the quality of academic tutoring services and provide recommendations for improved service delivery for SES-funded and nonfunded tutoring providers. Traditionally, process evaluations are designed to address three major questions: (a) What is the program intended to be? (b) What is delivered, in reality? and (c) Why are there gaps between program plans and program delivery? (Scheirer, 1994). However, within the local context of this case study, tutoring service providers had difficulty defining what the program intended to be in a manner that could be applied across a variety of tutoring programs. A conceptual framework was needed to structure a process evaluation of tutoring programs that would be valid, fair, and linked to student learning outcomes.

CONCEPTUAL FRAMEWORKS

The conceptual framework used in this process evaluation of tutoring services was developed by the National Implementation Research Network (NIRN; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). The NIRN's synthesis of research on successfully implemented practices and programs (i.e., evidence-based practices or practices within evidence-based programs) resulted in the identification of processes that facilitate high-fidelity practitioner behavior, known as implementation drivers. Each of the implementation drivers has been operationally defined consistent with a behavioral theoretical orientation. The four implementation drivers that comprise the staff competence domain are selection, training, coaching, and performance assessment/fidelity (Blase, Van Dyke, & Fixsen, 2010). The last driver in the competence domain, performance assessment/fidelity, overlaps with the organization supports domain, a domain that also includes the decision support data system, facilitative administration, and systems intervention. The leadership domain encompasses both technical and adaptive leadership (Heifetz & Laurie, 1997). Implementation drivers function in an integrated and compensatory fashion to maximize their influence on practitioner behavior and organizational culture. Weaknesses in one of the implementation drivers (e.g., limited capacity for selecting highly competent tutors) can be overcome by strengths in other implementation drivers (e.g., high-quality training of tutors, coaching, and performance assessment/fidelity; Fixsen et al., 2005).

The Implementation Drivers Framework (Blase et al., 2010; Fixsen et al., 2005) incorporates three traditional aspects of implementation described by Dane and Schneider (1998): (a) fidelity, or the extent to which the innovation corresponds to the originally intended program (also known as adherence, compliance, or integrity); (b) dosage, or how much of the original program has been delivered (i.e., quantity, intervention strength); and (c) quality, or how well different program components have been conducted (e.g., are the main program elements delivered clearly and correctly?). Two additional aspects of implementation not addressed by the Implementation Drivers Framework are (a) participant responsiveness, which refers to the degree to which the program stimulates the interest or holds the attention of participants (e.g., are students attentive during program lessons?), and (b) program differentiation, or the extent to which a program's theory and practices can be distinguished from other programs (Dane & Schneider, 1998).

RESEARCH LINKING IMPLEMENTATION TO OUTCOMES

In their review of the research on the influence of implementation on program outcomes, Durlak and DuPre (2008) examined studies on prevention and health promotion programs for children and adolescents related to physical health and development; academic performance; drug use; and various social and mental health issues such as violence, bullying, and positive youth development. Among the five meta-analyses conducted between 1986 and 2005, the overall magnitude of the difference favoring programs with better implementation was marked, with effect sizes that were, in general, 2 to 3 times higher (Derzon, Sale, Springer, & Brounstein, 2005; DuBois, Holloway, Valentine, & Cooper, 2002; J. D. Smith, Schneider, Smith, & Ananiadou, 2004; Tobler, 1986; Wilson, Lipsey, & Derzon, 2003). In their own meta-analysis of 59 additional studies, Durlak and DuPre (2008) found that 76% of the studies had a significant positive relationship between the level of implementation and at least half of the program outcomes. The majority of these studies (69%) assessed only one aspect of implementation (fidelity, dosage, or quality). Taken together, Durlak and DuPre (2008) concluded that there was "credible and extensive evidence that the level of implementation affects program outcomes" (p. 334).

RESEARCH REGARDING THE IMPACT OF TUTORING SERVICES ON STUDENT ACHIEVEMENT

Critical elements of effective tutoring to remediate reading and mathematics skill deficits have been examined rigorously in the research literature (e.g., Foorman, Francis, Fletcher, Schatschneider, & Metha, 1998; Fuchs & Fuchs, 1986; Greenwood, 1991; Vaughn et al., 2009). Data-based decision making is essential and includes establishing a specific, measurable goal, monitoring student progress toward that goal, and making changes to the tutoring intervention in a timely fashion as informed by the progress-monitoring data. Goal setting and progress monitoring are strongly linked to positive outcomes for the tutor and tutoring agency by adding a self-correcting feature to intervention (Hixson, Christ, & Bradley-Johnson, 2008). Gathering and analyzing progress-monitoring data (i.e., formative evaluation) has been shown to result in improved student learning outcomes (Fuchs & Fuchs, 1986).
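To make the goal-setting and progress-monitoring cycle concrete, the sketch below implements a simple aim line and decision rule for weekly progress-monitoring data. The growth assumptions, the invented weekly scores, and the four-consecutive-points decision rule are common illustrative conventions, not the procedures used by the tutoring providers in this study.

```python
# A minimal sketch of data-based decision making with weekly curriculum-based
# measurement (CBM) scores; all numbers below are hypothetical.

def aim_line(baseline, goal, weeks):
    """Expected score each week if the student grows linearly toward the goal."""
    step = (goal - baseline) / weeks
    return [baseline + step * w for w in range(weeks + 1)]

def needs_change(scores, expected, run_length=4):
    """Flag an intervention change after `run_length` consecutive scores below aim."""
    below = 0
    for actual, target in zip(scores, expected):
        below = below + 1 if actual < target else 0
        if below >= run_length:
            return True
    return False

# Hypothetical student: baseline of 40 words read correctly per minute, goal of 70 in 10 weeks.
expected = aim_line(baseline=40, goal=70, weeks=10)
observed = [40, 42, 41, 43, 44, 44, 45, 46]      # invented weekly CBM scores
print(needs_change(observed, expected))           # True -> adjust the tutoring plan
```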

Although many evaluative studies have examined the effects of out-of-school-time tutoring programs on student achievement, relatively few are rigorous in their research design and methodological approach. Lauer et al. (2006) conducted a synthesis of the published research, selecting only those studies (N = 35) using control or comparison groups to estimate effect sizes (e.g., gain scores). Using meta-analytic techniques, Lauer et al. examined the relationship between program focus, duration, time frame, student grouping, and grade level and program outcomes. Their results provide evidence that out-of-school-time tutoring programs can have a positive effect on student achievement (relative to at-risk students who did not participate), although the effects were not likely to be large enough to close the achievement gap between at-risk and more advantaged students.

Only one study (Vandell et al., 2005) to date has incorporated measures of implementation quality among after-school tutoring programs in an evaluation of tutoring effects. Tutoring program quality was operationally defined as evidencing eight "promising practices," which included supportive relations with adults, supportive relations with peers, level of engagement, opportunities for cognitive growth, appropriate structure, opportunities for autonomy, opportunity for leadership, and mastery orientation. The results of this study suggest positive effects on test scores for elementary school students highly active in high-quality programs. No statistically significant program effects were identified for middle school students (Vandell et al., 2005).

In summary, the research literature provides a solid foundation for the critical importance of evaluating implementation. The literature also provides support for the effectiveness of tutoring on student learning outcomes if evidence-based practices are implemented with fidelity. Given the significant funding for SES and the prevalence and variety of tutoring providers seeking to meet the needs of students, a process for evaluating the implementation of tutoring service delivery is crucial. The purpose of this case study is to expand the consultation literature regarding approaches to evaluating community-based tutoring services with a focus on implementation, as operationalized by the Implementation Drivers Framework (Blase et al., 2010; Fixsen et al., 2005). Specifically, the case study was designed to answer this question: To what extent could the implementation drivers be used to structure a process evaluation of tutoring programs that would be valid, fair, and linked to student learning outcomes?

THE TUTORING SEAL OF APPROVAL PROCESS

Participants

Tutoring providers. Ten tutoring providers voluntarily participated in the process evaluation during the first 2 years of the initiative. These provider participants included 4 nonprofit community agencies, 3 for-profit corporations, 1 university-affiliated program, and 2 programs operated by the local chapters of a national organization. The providers varied in the number of students they served, the number of tutors they engaged, and whether the tutors were paid (with SES funds or other financial sources) or unpaid volunteers. All 8 of the SES-funded tutoring providers delivered an intervention program based on preassessment, remedial instruction, practice with error correction and feedback, and postassessment of concepts included on the state-mandated achievement test. Providers C, D, E, G, H, I, and J provided intervention in reading and mathematics; Provider F provided tutoring in mathematics only. Provider A offered a reading intervention program that involved students in small-group settings viewing videos of phonics-based instruction and responding to written exercises, with feedback and error correction provided by a paid tutor/facilitator. Provider B recruited volunteer members of the community to serve as tutors, providing additional opportunities for students to read aloud or be read to in a one-on-one arrangement.

Students. Students eligible for SES-funded tutoring were identified by the school district and rank ordered by level of need (i.e., economic disadvantage, prior achievement on the state-mandated achievement test) by an organization under contract with the district. Families of SES-eligible students were notified by the district of the availability of tutoring and given a list of approved SES providers. Students participating in tutoring services that were not SES funded were referred by their classroom teacher.

Evaluation Design and Procedures

The Tutoring Seal of Approval process was a multimethod, multiinformant evaluation using descriptive research methods. The evaluation was conducted in three phases within a specified time frame to establish uniformity and fairness of the process; in other words, the first phase was completed for all 10 tutoring providers before the second phase began. In the first phase of the evaluation process, a face-to-face interview and document review were conducted individually with each of the program directors. In the second phase of the process evaluation, a direct observation of a tutoring session was conducted; the tutoring site and tutor observed were selected by the program director, and arrangements for the observation were made directly with the tutor. The third phase of the process involved the administration of an online tutor survey. Data collection was completed across 6 months, with the interviews and document reviews conducted from February to April and all direct observations conducted prior to the administration of the state-mandated achievement test in May. The tutor survey was conducted in June.


Instruments

Tutoring Seal of Approval (TSoA) Evaluation Tool. The TSoA Evaluation Tool provided the structure for the process evaluation. The instrument comprised 13 evaluation questions developed to measure implementation progress on six of the NIRN's implementation drivers: selection, training, coaching, performance assessment, decision support data system, and facilitative administration (Fixsen & Blase, 2007). The structure and format required multimethod, multiinformant data collection over a span of time based on clearly delineated, research-based features of the innovation.

Program Director Interview Protocol and document review. The Program Director Interview Protocol was a component of the TSoA Evaluation Tool. This semistructured interview comprised 11 open-ended items based upon the evaluation questions identified in the TSoA Evaluation Tool. The interview questions were developed to gather evidence of implementation progress on six of the NIRN's implementation drivers. All of the interviews and document reviews were conducted by the primary author at the community agency. Interviews were audiorecorded and transcribed to ensure accuracy in the data collection activity.

Observation protocol. An observation protocol was developed and used to conduct a direct observation of a sample of tutoring sessions. Observations were conducted by the second author, a graduate student with prior experience as a special education teacher, for each of the tutoring programs. The observation protocol consisted of three sections. The first section used momentary time sampling across 60 15-s intervals to measure student engagement. Student behavior was coded as active engaged time (e.g., writing, reading aloud, answering questions verbally) or passive engaged time (e.g., eyes on tutor or instructional materials, reading silently), as operationally defined in the Behavior Observation of Students in Schools (Shapiro, 2004). Partial-interval time sampling was used to measure three categories of off-task behavior: off-task motor (e.g., out of seat, playing with materials or other objects), off-task verbal (e.g., talking unrelated to the instructional task), and off-task passive (e.g., staring off), as operationally defined by Shapiro (2004). The second section consisted of a five-item checklist measuring the presence of the elements of strong interventions (Lentz, Allen, & Ehrhardt, 1996). The third section was structured as a four-item checklist measuring the presence of selected performance indicators identified in the Ohio Department of Education's Supplemental Educational Services (SES) Effectiveness Report (Ohio Department of Education, 2008). Two graduate students were trained in the reliable use of the observation protocol and achieved the minimum criterion of 80% interobserver agreement. Interobserver agreement data were gathered for one of the seven observations to ensure the accuracy and reliability of the observations. Interobserver agreement was calculated as (a) the number of agreements of occurrence plus the number of agreements of nonoccurrence, divided by the total number of intervals, for Section 1 (86% agreement attained) and (b) the number of agreements of occurrence plus the number of agreements of nonoccurrence, divided by the total number of items on the checklists, for Sections 2 and 3 (100% agreement attained).
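The interobserver agreement figure described above is a point-by-point percentage agreement across intervals (or checklist items). The short sketch below illustrates that arithmetic; the interval codings are invented for illustration and are not data from the study.

```python
def percent_agreement(observer_a, observer_b):
    """Agreements on occurrence plus agreements on nonoccurrence,
    divided by the total number of intervals (or checklist items)."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must code the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

# 60 intervals of 15 s each (a 15-min observation window), coded True when the
# student was engaged (actively or passively) at the sampled moment.
obs_a = [True] * 52 + [False] * 8
obs_b = [True] * 50 + [False] * 6 + [True] * 4   # hypothetical second observer
print(f"{percent_agreement(obs_a, obs_b):.0f}% agreement")   # 90% agreement
```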

Tutor survey. A tutor survey, designed to gather information about aspects of the agency's tutoring service delivery from the perspective of the tutors, was the third component of the process evaluation. The survey was composed of seven items that used a yes/no or yes/no/not sure response format, three items structured as a 6-point Likert-type rating scale, two open-ended questions, one item structured as a 4-point categorical rating scale, and one checklist. One hundred ninety-one tutors responded to the tutor survey, for an overall response rate of 44.4%. The response rate ranged from 26.2% to 100% among the 7 tutoring providers.

Limitations of the Process Evaluation Design and Procedures

The evaluation was designed to serve the functional purpose of providing a fair and valid assessment of providers' quality of tutoring service delivery that could ultimately be linked to student achievement; as such, the evaluation was guided by this goal and not primarily by research goals. Consequently, there are some inherent limitations from an evaluation perspective.

The primary limitation of the process evaluation was that the tutoring sites and tutors observed were selected by the program directors using mixed purposeful sampling (Patton, 2002) rather than being randomly selected. Therefore, the observation data gathered at the tutoring site should be viewed as an exemplar of the provider's services and not necessarily representative of all tutoring services offered by that provider. Furthermore, given that the date and time of the observations were prearranged with each tutor, the tutoring session itself may have been a demonstration of higher quality than would be observed during an unannounced observation. As previously described, mixed purposeful sampling was judged to be appropriate because (a) the evaluation was designed to be formative in nature, with an emphasis on the improvement of tutoring service delivery rather than compliance, and (b) most of the tutoring sessions occurred during after-school hours and on Saturdays, when student safety and building security priorities prohibited unannounced site visits. Purposive sampling has many important uses in school intervention evaluation (e.g., Bohanon et al., 2006).

The second limitation was the decision to focus on implementation quality exclusively and not incorporate measures of intervention fidelity (also known as intervention adherence or treatment integrity) for individual tutoring sessions. Evidence of intervention fidelity, that is, evidence that the tutoring intervention was implemented as planned, is critical to judging a student's response to the intervention and interpreting the resulting student outcomes (McIntyre, Gresham, DiGennaro, & Reed, 2007). Future process evaluations of tutoring services should incorporate a measure of intervention fidelity as additional evidence in the performance assessment of tutors/fidelity standard.

A final limitation was the failure to control for a host of confounding variables that limited the evaluator's ability to conduct an impact analysis linking implementation quality to student achievement. Efforts to determine if implementation quality was linked to student learning outcomes were unsuccessful due to (a) an insufficient number of students meeting the criterion of 30 or more hours of tutoring established by the tutoring provider partners, (b) inconsistent use of progress monitoring (e.g., Dynamic Indicators of Basic Early Literacy Skills [DIBELS], Curriculum-Based Measurement [CBM]) throughout the district, (c) the confounding influence of selection bias for students who attended 30 or more hours of tutoring, and (d) multiple treatment effects for students also receiving school-based and home/community-based intervention services.

PRELIMINARY FINDINGS AND CONSULTATIVE IMPLICATIONS

Collectively, tutoring providers received the highest mean ratings in their implementation of processes for (a) selecting tutors; (b) training tutors (specifically, providing a tutor/volunteer handbook and training tutors to address students' academic needs); (c) implementing an instructional program aligned with state academic content standards and achievement standards; and (d) compliance with health, safety, and civil rights requirements (see Table 1). Tutoring providers received the lowest mean ratings in general in their implementation of (a) coaching, (b) decision support data systems (specifically, establishing student learning goals and progress monitoring), and (c) performance assessment/fidelity.

Each of the tutoring programs was provided with an evaluation report for its program that detailed ratings and highlighted commendations and recommendations for program improvement. To address weaknesses common to the majority of the tutoring providers, a professional development session on the topic of data-based decision making (i.e., goal setting and progress monitoring) was designed and delivered the following year to an audience of program directors.

Ultimately, the value of designing a process evaluation of tutoring service delivery based on the implementation drivers is determined by the degree to which (a) the process evaluation is able to accurately distinguish between tutoring providers with higher and lower levels of implementation quality and (b) student learning outcomes vary by tutoring providers' implementation quality. In this case study, efforts to determine if implementation quality was linked to student learning outcomes were unsuccessful for several reasons. First, the school district was inconsistent in its use of DIBELS to assess early literacy skill fluency in Grades K–3, and CBM reading measures were not used systemically for students in Grades 4–8. The annual administration of the state-mandated Ohio Achievement Test in Reading and Mathematics assessed students in Grades 3–8; however, only 3 tutoring service providers (Providers C, D, and E) had a sufficient number of students receiving 30 or more hours of tutoring to be included in an impact analysis. The 30-hr criterion for inclusion in an impact analysis was agreed upon among tutoring provider partners within the multiagency network based upon the expected number of hours of tutoring provided by SES-funded providers given (a) the school district's per-pupil allocation, (b) the hourly rate for tutoring as established by the provider, and (c) the number of weeks from the time the roster of eligible students was released and tutoring services were provided until the end of the school year.

TABLE 1  Implementation Ratings for Tutoring Providers (N = 10)

Ratings are shown for providers A through J, followed by the mean rating and standard deviation.

Evaluation standard                            A  B  C  D  E  F  G  H  I  J  Mean   SD
Selection of tutors                            4  4  4  4  4  4  4  4  4  4  4.00  0.00
Training of tutors
  Tutor/volunteer handbook                     4  4  4  4  4  4  4  4  4  4  4.00  0.00
  Trained to address academic needs            4  4  4  4  4  4  4  4  4  4  4.00  0.00
  Trained to address behavioral needs          4  2  4  3  4  4  4  4  3  4  3.60  0.70
Coaching                                       3  1  3  2  4  3  3  3  3  4  2.90  0.88
Performance assessment/fidelity                4  3  4  4  4  4  4  2  3  4  3.60  0.70
Decision support data system
  Specific, measurable goals                   2  2  4  4  2  3  4  3  3  4  3.10  0.88
  Progress monitoring                          3  2  4  4  2  3  4  3  3  4  3.20  0.78
  Documenting amount of tutoring and content   3  3  4  4  4  4  4  4  4  4  3.80  0.42
Facilitative administration
  Communication with parents & school          3  3  4  4  4  4  4  4  4  4  3.80  0.42
Instructional alignment with state standards   4  4  4  4  4  4  4  4  4  4  4.00  0.00
Health, safety, and civil rights requirements  4  4  4  4  4  4  4  4  4  4  4.00  0.00
Number of students and grade level reported    1  1  4  4  4  4  4  4  4  4  3.40  1.26
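As a quick arithmetic check, the means and standard deviations reported in Table 1 appear consistent with the sample standard deviation of the ten provider ratings in each row. The sketch below reproduces the Coaching row, with the ratings copied directly from the table.

```python
from statistics import mean, stdev

# Coaching ratings for providers A through J, copied from Table 1
coaching = [3, 1, 3, 2, 4, 3, 3, 3, 3, 4]
print(f"Mean = {mean(coaching):.2f}, SD = {stdev(coaching):.2f}")
# Mean = 2.90, SD = 0.88 (sample standard deviation), matching the Table 1 row
```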

Failing to control for a host of confounding variables (i.e., selection bias for students who attended 30 or more hours of tutoring; multiple treatment effects for students also receiving school-based and home/community-based intervention services; the use of DIBELS/CBM for assessing student learning outcomes at Grades K–3 for tutoring interventions that may have targeted math skill development, test-taking skills, or homework completion) meant that it was not possible to determine the impact of tutoring on student achievement. Future research will need to examine the degree to which student learning outcomes vary by tutoring providers' implementation quality using large sets of achievement data matched to the content standard or academic skill targeted.

What Can Consultants Do to Improve Tutoring Services?

Federal funding for tutoring services has attracted many organizations to the academic remediation market. For many tutoring providers, SES funding has improved their financial situation considerably, enabling them to sustain other services through the income generated through tutoring. Tutoring providers have the potential to have a significant impact on student achievement by extending instruction beyond the school day and providing targeted intervention within a prioritized academic content area. Yet tutoring providers have received only minimal guidelines from the federal government on how to operate a tutoring program in compliance with the law. Consultants with expertise in academic intervention design, implementation, and evaluation have an important role to play in the support of tutoring service providers. This article describes an initial step toward developing a comprehensive approach for the formative and summative evaluation of tutoring programs. With further development, this process has the potential to serve as a much-needed guidebook for a national audience of agency administrators and school district administrators seeking to design and implement highly effective tutoring programs.

Ultimately, the evaluation effort itself is not sufficient to improve tutoring services. To best serve their clients, consultants need to have the capacity to link tutoring service providers to resources for sustained professional development to address areas of weakness identified by the evaluation. This need for training was a primary impetus for the Tutoring Seal of Approval process, and the success of the process will likely be judged by the degree to which tutoring service provider partners perceive themselves to have greater access to resources to strengthen their programs.

Areas Where We Need to Proceed Cautiously

Until the challenges of linking the evaluation of implementation quality to student learning outcomes can be addressed, the Tutoring Seal of Approval process has unknown value. This recent effort provided insufficient evidence to show that an application of the Implementation Drivers Framework to tutoring service providers will yield differential student achievement. As such, it is conceivable that a tutoring program rated highly for its implementation quality could demonstrate unremarkable student achievement outcomes. Even with further research and refinement of an approach such as this, consultants will need to communicate clearly to tutoring service providers that high-quality implementation, although essential for program effectiveness, may not produce the desired effects at the tutor-student level.

Complicating the relationship between implementation and impact is the finding from previous research that positive results have often been obtained with implementation levels around 60% (Durlak & DuPre, 2008). Few studies have reported implementation levels greater than 80%, and no study has documented 100% implementation for all providers (Durlak & DuPre, 2008). As such, consultants need to establish an expectation that full implementation is desired, but not required, to have an impact on student achievement.

What We Still Need to Learn

This article extends the professional literature on evaluating tutoring services by focusing on the processes that contribute to effective implementation linked to student outcomes, using an evaluation process that key stakeholders perceived to be fair and equitable among a broad array of providers. Tutoring services vary widely. Among the 10 tutoring providers participating in this process evaluation, tutoring services included test preparation, homework help, early literacy skill development, and math skill development. Likewise, the professional training and background of the tutors and the tutoring program directors varied widely. SES-funded providers were held to accountability requirements not mandated of non-SES-funded providers. The extent to which a process evaluation like the Tutoring Seal of Approval is valid for use with a wide range of academic support services (e.g., academic enrichment programs for students identified as gifted, mentoring programs with an academic tutoring component) remains unknown.

ACKNOWLEDGMENTS

The process evaluation reported in this article was supported by a grant from the United Way of Greater Cincinnati. Findings and conclusions are those of the authors and do not necessarily reflect the views of the funding agency. We thank David W. Barnett for his contributions to the development of this manuscript.

REFERENCES

Anderson, L. M., & Laguarda, K. G. (2005). Case studies of supplemental services under the No Child Left Behind Act: Findings from 2003–04. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development.

Blase, K. A., Van Dyke, M. K., & Fixsen, D. L. (2010). Implementation drivers: Best practices. Chapel Hill, NC: National Implementation Research Network.

Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Anderson-Harriss, S., Moroz, K. B., ... Pigott, T. D. (2006). Schoolwide application of positive behavioral support in an urban high school: A case study. Journal of Positive Behavioral Interventions, 8, 131–145.

Burch, P., Steinberg, M., & Donovan, J. (2007). Supplemental Educational Services and NCLB: Policy assumptions, market practices, emerging issues. Educational Evaluation and Policy Analysis, 29, 115–133.

Casserly, M. (2004). Choice and supplemental services in America's great city schools. In F. M. Hess & C. E. Finn (Eds.), Leaving no child behind? Options for kids in failing schools (pp. 191–212). New York, NY: Palgrave Macmillan.

Chicago Public Schools. (2007). The 2007 Supplemental Educational Services program: Year 4 summative evaluation. Chicago, IL: Chicago Public Schools Office of Extended Learning Opportunities, Research, Evaluation and Accountability.

Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.

Derzon, J. H., Sale, E., Springer, J. F., & Brounstein, P. (2005). Estimating intervention effectiveness: Synthetic projection of field evaluation results. The Journal of Primary Prevention, 26, 321–343.

DuBois, D. L., Holloway, B. E., Valentine, J. C., & Cooper, H. (2002). Effectiveness of mentoring programs for youth: A meta-analytic review. American Journal of Community Psychology, 30, 157–198.

Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.

Fixsen, D. L., & Blase, K. A. (2007, September). Implementation: Love it or be left behind. Plenary presentation, Virginia Transformation Conference, Virginia DMHMRSAS, Office of Substance Abuse Services, Richmond, VA.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, the National Implementation Research Network.

Foorman, B. R., Francis, D. J., Fletcher, J. M., Schatschneider, C., & Metha, P. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90, 37–55.

Fuchs, L. S., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199–208.

Greenwood, C. R. (1991). Class-wide peer tutoring: Longitudinal effects on the reading, language, and mathematics achievement of at-risk students. Journal of Reading, Writing, & Learning Disabilities International, 7, 105–123.

Heifetz, R. A., & Laurie, D. L. (1997, January–February). The work of leadership. Harvard Business Review, pp. 124–134.

Hixson, M., Christ, T. J., & Bradley-Johnson, S. (2008). Best practices in the analysis of progress-monitoring data and decision making. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 2133–2146). Bethesda, MD: The National Association of School Psychologists.

Lauer, P. A., Akiba, M., Wilkerson, S. B., Athorp, H. S., Snow, D., & Martin-Glenn, M. L. (2006). Out-of-school-time programs: A meta-analysis of effects for at-risk students. Review of Educational Research, 76, 275–313.

Lentz, F. E., Allen, S. J., & Ehrhardt, K. E. (1996). The conceptual elements of strong interventions in school settings. School Psychology Quarterly, 11, 118–136.

McIntyre, L. L., Gresham, F. M., DiGennaro, F. D., & Reed, D. D. (2007). Treatment integrity of school-based interventions with children in the Journal of Applied Behavior Analysis: 1991–2005. Journal of Applied Behavior Analysis, 40, 659–672.

No Child Left Behind Act of 2001, Pub. L. No. 107-110 (H.R. 1).

Ohio Department of Education. (2008). Supplemental Educational Services effectiveness report, school year 2007–08. Columbus, OH: Ohio Department of Education, Center for School Improvement, Office of Federal Programs.

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Rickles, J. H., & Barnhart, M. K. (2007). The impact of Supplemental Educational Services participation on student achievement: 2005–06 (Publication No. 352). Los Angeles, CA: Los Angeles Unified School District Program Evaluation and Research Branch, Planning, Assessment and Research Division.

Ryan, S., & Fatani, S. (2005). SES tutoring programs: An evaluation of the second year. Part one of a two part report (Policy report). Chicago, IL: Office of Research, Evaluation and Accountability, Chicago Public Schools.

Scheirer, M. A. (1994). Designing and using process evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass.

Shapiro, E. S. (2004). Academic skills problems workbook (Rev. ed.). New York, NY: Guilford Press.

Smith, J. D., Schneider, B. H., Smith, P. K., & Ananiadou, K. (2004). The effectiveness of whole-school antibullying programs: A synthesis of evaluation research. School Psychology Review, 33, 547–560.

Smith, N. L., Brandon, P. R., Hwalek, M., Kistler, S. J., Labin, S. N., Rugh, J., ... Yarnall, L. (2011). Looking ahead: The future of evaluation. American Journal of Evaluation, 32, 565–599.

Steinberg, M. S. (2006). Private educational services: Whom does the market leave behind? PolicyMatters, 4, 17–22.

Tobler, N. S. (1986). Meta-analysis of 143 adolescent drug prevention programs: Quantitative outcome results of program participants compared to a control or comparison group. Journal of Drug Issues, 16, 537–567.

Vandell, D. L., Reisner, E. R., Brown, B. B., Dadisman, K., Pierce, K. M., Lee, D., & Pechman, E. M. (2005). The study of promising after-school programs: Examination of intermediate outcomes in year 2. Madison, WI: Wisconsin Center for Education Research.

Vaughn, S., Wanzek, J., Murray, C. S., Scammacca, N., Linan-Thompson, S., & Woodruff, A. (2009). Response to early reading intervention: Examining higher and lower responders. Exceptional Children, 75, 165–183.

Vegari, S. (2007). Federalism and market-based education policy: The Supplemental Educational Services mandate. American Journal of Education, 113, 311–339.

Wilson, S. J., Lipsey, M. W., & Derzon, J. H. (2003). The effects of school-based intervention programs on aggressive behavior: A meta-analysis. Journal of Consulting and Clinical Psychology, 71, 136–149.

Zimmer, R., Gill, B., Razquin, P., Booker, K., & Lockwood, J. R., III. (2007). State and local implementation of the No Child Left Behind Act: Volume I. Title I school choice, Supplemental Educational Services, and student achievement. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development.

Julie Q. Morrison, PhD, is an Assistant Professor in the School Psychology Program at the University of Cincinnati. Her research interests include evaluating the effectiveness of universal and targeted interventions to address the academic and behavioral needs of school-age children and youth, and program evaluation.

Sarah Baker English, MEd, is a graduate student in the School Psychology Program at the University of Cincinnati.

Note: The authors report that to the best of their knowledge neither they nor their affiliated institution have financial or personal relationships or affiliations that could influence or bias the opinions, decisions, or work presented in this article.
