
Journal of School Psychology 51 (2013) 683–700


Instructional and behavior management practices implemented by elementary general education teachers☆

Linda A. Reddy a,⁎, Gregory A. Fabiano b, Christopher M. Dudek a, Louis Hsu a

a Graduate School of Applied and Professional Psychology, Rutgers University, Piscataway, NJ, USA
b University of Buffalo, USA

☆ The research reported here was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A080337 to Rutgers University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.
⁎ Corresponding author at: Rutgers University, 152 Frelinghuysen Road, Piscataway, NJ 08854-8085, USA. Tel.: +1 732 289 1365; fax: +1 732 445 4888. E-mail address: [email protected] (L.A. Reddy).
Action Editor: Renee Hawkins
0022-4405/$ – see front matter © 2013 Published by Elsevier Ltd. on behalf of Society for the Study of School Psychology. http://dx.doi.org/10.1016/j.jsp.2013.10.001

Article history: Received 13 August 2012; Received in revised form 12 October 2013; Accepted 14 October 2013

Abstract

This investigation examined 317 general education kindergarten through fifth-grade teachers' use of instructional and behavioral management strategies as measured by the Classroom Strategies Scale (CSS)-Observer Form, a multidimensional tool for assessing classroom practices. The CSS generates frequency of strategy use and discrepancy scores reflecting the difference between recommended and actual frequencies of strategy use. Hierarchical linear models (HLMs) suggested that teachers' grade-level assignment was related to their frequency of using instructional and behavioral management strategies: Lower grade teachers utilized more clear 1 to 2 step commands, praise statements, and behavioral corrective feedback strategies than upper grade teachers, whereas upper grade teachers utilized more academic monitoring and feedback strategies, content/concept summaries, student focused learning and engagement, and student thinking strategies than lower grade teachers. Except for the use of praise statements, teachers' usage of instructional and behavioral management strategies was not found to be related to years of teaching experience or to the interaction of years of teaching experience and grade-level assignment. HLMs suggested that teachers' grade level was related to their discrepancy scores of some instructional and behavioral management strategies: Upper grade teachers had higher discrepancy scores in academic performance feedback, behavioral feedback, and praise than lower grade teachers. Teachers' discrepancy scores of instructional and behavioral management strategies were not found to be related to years of teaching experience or to the interaction of years of teaching experience and grade-level assignment. Implications of results for school psychology practice are outlined.

© 2013 Published by Elsevier Ltd. on behalf of Society for the Study of School Psychology.

Keywords: Teacher assessment; Teacher behavior

1. Introduction

Teacher accountability is a prominent topic of conversation in educational arenas (Bales, 2006; Reddy, Kettler, & Kurz, submitted for publication). Recent changes in the American education system, including the passage of the No Child Left Behind legislation, have focused attention towards general education teachers and their practices and performance in classrooms. At the same time, Response to Intervention (RtI; Fletcher, Lyon, Fuchs, & Barnes, 2007) and Positive Behavioral Intervention and Support (PBIS; http://www.pbis.org; Sugai & Horner, 2002, 2007) frameworks are being integrated into school systems. Both programs heavily emphasize the role of the general education teacher as a key individual who implements best practice interventions for academic instruction, behavior management, or both.



The current United States Secretary of Education recently underscored this emphasis by stating, "The quality of the teacher in the classroom is the single biggest in-school influence on student learning" (Duncan, Gurria, & van Leeuwen, 2011). Thus, what best practices teachers utilize, how they utilize them, and at what level of quality are critical contributors to learning in elementary classrooms.

Perhaps one reason for the continued emphasis on the practices of general education teachers is that, in their role as a teacher, general educators may choose from a number of potential approaches to help students learn and ultimately achieve. These choices and the degree to which a teacher uses (or does not use) a chosen strategy can have implications for learning in the classroom. For example, one of the most robust predictors of academic achievement is the provision of academic response opportunities. Academic response opportunities represent chances for the student or students to provide answers, apply concepts, or contribute to group discussions on class content. Research has highlighted the number of academic response opportunities present in the classroom to be related to student participation and engagement in learning (e.g., Partin, Robertson, Maggin, Oliver, & Wehby, 2010; Stitcher et al., 2009; Sutherland, Adler, & Gunter, 2003; Sutherland, Wehby, & Yoder, 2002; Taylor, Pearson, Peterson, & Rodriguez, 2003). Current research suggests these opportunities should occur frequently, as many as 3 to 4 times per minute (Englert, 1983; Stitcher et al., 2009). In addition to providing these opportunities to respond, teachers must also offer time for students to think about and process academic material (Stitcher et al., 2009).

An additional strategy teachers may use to help present and integrate academic content is to frequently review lesson content and material through summarizing concepts and lesson content. Concept summaries may include the activation of thinking about prior learning through review, serve as an advance organizer for the present lesson, reinforce learning through summary and repetition, and subsequently improve students' organization and recall of material taught and overall understanding of lesson content (Brophy, 1998; Brophy & Alleman, 1991; Hines, Cruickshank, & Kennedy, 1985; Reddy, Fabiano, Barbarasch, & Dudek, 2012; Rosenshine & Stevens, 1986). Additionally, the quality of academic feedback and the promotion of metacognitive, higher-order thinking (i.e., students' thinking about thinking) can serve as ways of promoting engagement in learning (e.g., Adey & Shayer, 1993; Bangert-Drowns, Hurley, & Wilkinson, 2004; Bender, 2008; Haywood, 2004; Mevarech & Kramarski, 1997; Taylor et al., 2003; What Works Clearinghouse, 2012).

In addition to instruction-related strategies that are proximal to learning, there are classroom management strategies that can also promote effective learning environments (Gable, Hester, Rock, & Hughes, 2009). Multiple studies in the 1960s and 1970s illustrated that teacher attention (following positive behaviors), reprimands (following negative behaviors), and instructions impacted student behavior and rule following (e.g., O'Leary, Kaufman, Kass, & Drabman, 1970). These behaviors include positive attending strategies such as labeled praise or "catching students being good." Multiple studies indicate that such contingent attention results in improved classroom behavior and rule-following (e.g., Hall, Panyan, Rabon, & Broden, 1968; Madsen, Becker, & Thomas, 1968; Thomas, Becker, & Armstrong, 1968; Walker & Buckley, 1968; Ward & Baker, 1968; White, 1975). Likewise, corrective feedback in the form of reprimands, informing the child privately and neutrally of misbehavior, or other methods of redirecting (e.g., prompting and preventing misbehavior through routines) can also improve classroom behaviors (e.g., Abramowitz, O'Leary, & Rosen, 1987; Acker & O'Leary, 1987; O'Leary et al., 1970; Rosen, O'Leary, Joyce, Conway, & Pfiffner, 1984). In addition, clear behaviorally-specific instructions and commands result in higher rates of student compliance and follow-through compared to instructions and commands that are vague or unclear (e.g., Forehand & Long, 1996; Walker & Eaton-Walker, 1991).

Based on this long-standing and considerable research literature, these teacher strategies have clear evidence as effective interventions to promote student behavior and learning. However, this literature is limited in some respects. First, these strategies are typically employed in a reciprocal, recursive, and ongoing fashion in classrooms, with multiple combinations of strategies being necessary and dependent on the content and type of lesson (e.g., White, 1975). Studying any single strategy in isolation ignores the fact that teachers typically employ many of these strategies and some are dependent on one another (e.g., a teacher who issues many vague directives may have to issue more corrective feedback if students are not following the directives). This point is underscored when one considers the ratio of positive, supportive statements and demands or reprimands that occur in the classroom. Recommended ratios of at least three praise statements for every demand or reprimand are often required for improving student behavior and academic performance (e.g., Fabiano et al., 2007; Good & Grouws, 1977; Pfiffner, Rosen, & O'Leary, 1985; Stitcher et al., 2009). Second, there are important developmental considerations that may make some strategies more appropriate for younger ages relative to older ages in school. For example, White (1975) documented the decrease in teachers' use of positive attending strategies starting in the second grade of school. One explanation for this finding could be that as children progress through school and learn routines and expectations, there may be a reduced need for frequent behavior management in some situations (Brophy & Good, 1986). However, it remains unclear how educators' grade-level assignment impacts general instructional and behavioral management practices. In addition, there is a question regarding whether teaching experience may play a role in the use of best practice strategies. Although intuitively it may make sense that more experienced teachers utilize greater amounts of best practices, research findings regarding the effects of teacher experience on strategy use are mixed (Ghaith & Yaghi, 1997; Guskey, 1988), and this area of research is in need of additional study.

This investigation examined general education kindergarten through fifth-grade teachers' use of classroom instructional and behavioral management practices through direct observations with a new teacher assessment tool, the Classroom Strategies Scale (CSS)-Observer Form. One output produced from the CSS-Observer Form is an actual frequency rating of a teacher's use of specific instructional and behavioral management strategies (e.g., providing opportunities to respond; providing corrective feedback to students) as well as a complementary recommended frequency rating of the degree to which the strategy should have been used given the classroom context. To facilitate the development of practice goals, a discrepancy score is calculated between the frequency and recommended frequency rating. Small discrepancy scores indicate practice appropriate for the observed context, whereas large discrepancy scores suggest areas of instructional practice that may need improvement.


To this end, two major research questions were addressed. The first question concerns the frequency with which teachers use commonly employed general education instructional and behavioral management strategies. The second concerns possible effects of two factors on the frequency of use of these strategies and the discrepancy of strategy usage. These factors were (a) grade-level assignment and (b) years of teaching experience. No specific hypotheses were formulated for the first research question due to its descriptive nature. For the second, it was hypothesized that classroom management strategies would be more widely employed at the lower grade levels relative to the upper grade levels. Given the mixed results of previous investigations concerning effects of years of teaching experience (e.g., Ghaith & Yaghi, 1997; Guskey, 1988), we examined the relation between years of teaching experience and the interaction of grade level and years of teaching experience on educators' use of behavioral and instructional strategies. Because of the nesting of teachers (N = 317) within observers (N = 67), the effects of grade level, teachers' years of teaching experience, and of the (grade level × years of experience) interactions on (a) the CSS frequency scores and (b) the discrepancy scores (i.e., |recommended frequency − frequency ratings|) were estimated using hierarchical linear models.
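For readers who want a concrete specification of the analytic setup, the sketch below shows one way such a two-level model could be fit: teachers nested within observers, a random intercept per observer, and fixed effects for grade level, effects-coded years-of-experience group, and their interaction. This is an illustrative reconstruction, not the authors' code; the column names and the synthetic data are assumptions.

```python
# A minimal sketch of a two-level random-intercept model (teachers within observers).
# Column names and synthetic data are illustrative only, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers = 317
df = pd.DataFrame({
    "observer_id": rng.integers(0, 67, n_teachers),           # 67 observers
    "grade": rng.integers(0, 6, n_teachers),                   # 0 = kindergarten ... 5 = fifth grade
    "exp_group": rng.choice(["<=3", "4-9", "10-19", "20+"], n_teachers),
})
df["css_score"] = 60 - 4 * df["grade"] + rng.normal(0, 20, n_teachers)  # fake outcome

# Fixed effects: grade, effects-coded experience group, and their interaction;
# random intercept for each observer (fixed slopes).
model = smf.mixedlm("css_score ~ grade * C(exp_group, Sum)", data=df, groups="observer_id")
result = model.fit()
print(result.summary())

# Intraclass correlation from the variance components:
icc = result.cov_re.iloc[0, 0] / (result.cov_re.iloc[0, 0] + result.scale)
print(f"ICC = {icc:.2f}")
```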

2. Method

2.1. Sample

A sample of 317 general education teachers was observed for the purposes of piloting and validating the CSS version 2.0 as an elementary classroom observation measure. The sample comes from 73 public and private elementary schools located within 39 districts in New Jersey and New York that participated in the 2009 to 2010 school year. School characteristics were collected from the National Center for Educational Statistics Common Core of Data online database for the 2009 to 2010 school year (see Table 1).

Teachers were stratified by grade-level assignment and included 60 kindergarten teachers, 48 first-grade teachers, 64 second-grade teachers, 60 third-grade teachers, 41 fourth-grade teachers, and 44 fifth-grade teachers. The teacher sample included predominantly Caucasian (95%) women (92%) with an average age of 39 years (SD = 11.68 years). Within the sample, the average number of students per classroom was 21 (SD = 3.94). Educational degrees of the participating teachers included 40% with a bachelor's degree and 60% with a master's degree. The average number of years of teaching experience was 11.91 (SD = 8.91). Years of teaching experience was conceptualized into four categories: (a) 3 or fewer years, (b) 4 to 9 years, (c) 10 to 19 years, and (d) 20 or more years. Similar categories were used by the U.S. Department of Education, National Center for Educational Statistics (2010) and the National Education Association (2010) in annual publications relating to public school teacher characteristics.

Observations were conducted by 67 unique individuals who were either school principals or research staff (i.e., graduate students or project staff) from both the New Jersey and New York sites. A total of 44 school principals (66%) filled out the CSS on 168 teachers (53%). The principals were either Caucasian (97%) or Black (3%), and the sample was predominantly composed of women (75%) with an average age of 46 years (SD = 11.40 years). Principals reported the following educational degrees: 3% with a bachelor's degree, 93% with a master's degree, and 4% with a doctoral degree.

Table 1
Characteristics of participating schools across New Jersey and New York.

Characteristic | Percentage

Type of community
Suburb: Large | 68.18%
City: Large | 7.58%
City: Small | 7.58%
Rural: Distant | 3.03%
Rural: Fringe | 7.58%
Town: Fringe | 4.55%
Town: Distant | 1.52%

Type of school
Public | 83.56%
Private | 16.44%

Students receiving free and reduced lunch
0% | 17%
1–24% | 29%
25–49% | 45%
Greater than 50% | 9%

Student demographic distribution
Native American/Alaskan Native | 1.07%
Asian | 12.64%
African American/Black | 15.51%
Hispanic/Latino | 10.40%
White/Caucasian | 59.54%
Hawaiian or Asian Pacific Islander | <1%
Two or more ethnicities | <1%

Note. Percentage values were calculated across all students at each school and then averaged across all schools participating in the study. In accordance with IRB procedures, classroom-level student data was not collected.


The 23 research staff observers (34%) filled out the CSS on 149 teachers (47%). The research staff were composed of 13 undergraduate students, 8 graduate students, and 2 of the study authors. Research staff were predominantly women (74%) who were Caucasian (78%), Black (4.55%), Asian (4.5%), Pacific Islander (4.5%), and Middle Eastern (8.5%). The average age of research staff was 24 years (SD = 5.53 years), and their educational degrees included 43% with an associate degree, 43% with a bachelor's degree, 8.5% with a master's degree, and 4.5% with a doctoral degree.

Table 2
Descriptions of the CSS Part 1 strategy counts and Part 2 strategy rating scales.

Part 1: Strategy counts

Concept summaries: A teacher summarizes or highlights key concepts or facts taught during the lesson. Summarization statements are typically brief and clear. This teaching strategy helps students organize and recall material taught.

Academic response opportunities: A teacher creates opportunities for students to provide verbal academic responses (i.e., answers or responds to lesson content questions, summarizes or repeats key points, generates questions, brainstorms ideas, explains answer).

Clear one or two step commands: A teacher-directed verbal instruction that specifically requests a behavior. These commands are clear and direct, and they provide specific instructions to students. They are declarative statements (not questions), describe the desired behavior, and include no more than two steps.

Vague commands: A teacher-directed verbal instruction that is unclear when requesting a behavior. These commands are vague, may be issued as questions, and often include excess verbalizations or more than two steps.

Praise statements: A teacher issues a verbal or nonverbal statement or gesture to provide feedback for a positive or appropriate behavior.

Corrective feedback: A teacher issues a verbal or nonverbal statement or gesture to redirect inappropriate behavior.

Total: The sum of the frequency of the six teacher behaviors.

Part 2: Instructional strategies scales

Total scale: The Total Instructional Strategies scale reflects the overall use of Instructional Methods and Academic Monitoring/Feedback.

Instructional methods composite scale: How classroom instruction occurs. Measures teachers' use of teacher-directed or student-directed methods. This includes how a teacher incorporates active learning techniques such as hands-on learning and collaborative learning in the presentation of lessons as well as how a teacher delivers academic content to students.

Student focus learning & engagement subscale: Strategies for engaging students in the lesson, creating active learners, and encouraging self-initiative in the learning process. These practices encompass direct experience, hands-on instructional techniques, linking lesson content to personal experiences, and cooperative learning strategies.

Instructional delivery subscale: Methods for conveying information to students and strategies employed while teaching lesson content/concepts. These practices include modeling, advance organizers, summarizing, and other instructional methodology.

Academic monitoring/feedback composite scale: How teachers monitor students' understanding of the material and provide feedback on their understanding. These strategies assess students' thinking and encourage students to examine their own thought processes. Teachers guide students' understanding by encouraging students, affirming appropriate application of the material, and correcting misperceptions.

Promotes student thinking subscale: Practices for stimulating students' metacognitive and higher-order thinking abilities. They encourage students to think critically about the lesson material (why/how analysis), generate new ideas, and examine their own thought processes.

Academic performance feedback subscale: How teachers provide feedback to students on their understanding of the material. These practices assess teacher efforts to explain what is correct or incorrect with student academic performance.

Part 2: Behavioral management strategies scales

Total scale: The Total Behavioral Management Strategies scale reflects the overall use of Proactive Methods and Behavior Feedback.

Behavior feedback composite scale: How teachers respond to students' appropriate and inappropriate behaviors. This includes the usage of praise to encourage positive behaviors and corrective feedback to redirect negative behaviors.

Praise subscale: Verbal and nonverbal strategies teachers use to praise students for specific appropriate behaviors in the classroom.

Corrective feedback subscale: Verbal and nonverbal strategies teachers use to redirect or correct students' inappropriate behavior in the classroom.

Proactive methods composite scale: Strategies teachers use to promote positive behaviors in the classroom and reduce the likelihood of negative behaviors. These strategies include prompts, routines, reviewing rules, and presenting instructions or requests in a clear manner.

Prevention management subscale: Proactive verbal and nonverbal strategies teachers use to promote positive classroom functioning and establish effective learning environments. These practices include taking actions to prevent problem behaviors from occurring, establishing clear and consistent expectations, and creating a positive atmosphere in the classroom.

Directives/Transitions subscale: Strategies teachers use to communicate their behavioral requests to students and manage the movement and behavior of students during class transitions.


2.2. Measure

Teachers' classroom practices were measured using the Classroom Strategies Scale (CSS) Observer Form. Historically, behavioral assessment and intervention approaches have utilized classroom observations to enhance student behavior and teacher performance (e.g., Pelham, Fabiano, & Massetti, 2005; Pelham, Greiner, & Gnagy, 1998; Rosen et al., 1984; White, 1975; Ysseldyke & Burns, 2009; Ysseldyke & Elliott, 1999). The CSS-Observer Form builds on these studies by including empirically supported instructional and behavioral management strategies in a single measure (see Table 2). Grounded in research on instructional and behavioral management practices, the CSS is composed of three parts that include items addressing empirically-supported strategies (e.g., Bender, 2008; Gable et al., 2009; Kalis, Vannest, & Parker, 2007; Kern & Clemens, 2007; Marzano, 1998; Tomlinson & Eidson, 2003; Walker, Colvin, & Ramsey, 1995; What Works Clearinghouse, 2012). The CSS-Observer Form is designed to be administered in an ongoing formative assessment context with the intent of helping educators identify empirically supported teaching strategies in their classroom, facilitate educators' development of practice goals related to these strategies, and monitor progress towards achieving these goals. To facilitate these aims, the CSS-Observer Form includes a three-part assessment that yields complementary and distinct information.

2.2.1. CSS Part 1

During the classroom observation period, the observer completes Part 1 (Strategy Counts). For Part 1, the observer tallies the frequency of six teacher strategies (see Table 2 for a description of these strategies). Observers note each time an instructional or behavior management strategy was used and whether the strategy was used for an individual student or a group of students (i.e., two or more students).
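As a concrete illustration of this tally procedure, the sketch below shows one hypothetical way a Part 1 tally could be recorded; the strategy labels and the record helper are illustrative assumptions, not part of the CSS protocol.

```python
# Hypothetical Part 1 tally sheet: each observed strategy use is counted,
# split by whether it was directed at an individual student or a group.
from collections import Counter

tally = Counter()

def record(strategy: str, audience: str) -> None:
    """Tally one observed strategy use; audience is 'individual' or 'group'."""
    tally[(strategy, audience)] += 1

record("praise statement", "individual")
record("praise statement", "group")
record("clear 1-2 step command", "group")
record("corrective feedback", "individual")

print(tally)
print("praise statement total:",
      sum(n for (s, a), n in tally.items() if s == "praise statement"))
```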

2.2.2. CSS Part 2

Part 2 (Strategy Rating Scales) consists of an Instructional Strategies (IS) scale and a Behavioral Management Strategies (BMS) scale that are completed after a classroom observation period (see Table 2). The IS scale includes 26 items that compose a total scale, two composite scales, and four subscales. Maximum scores represent the highest possible frequency scores for each scale. The Instructional Methods Composite scale (14 items producing a maximum score of 98) consists of the Instructional Delivery subscale (7 items producing a maximum score of 49) and the Student Focus Learning and Engagement subscale (7 items producing a maximum score of 49). The Academic Monitoring/Feedback Composite scale (12 items producing a maximum score of 84) consists of the Promotes Student Thinking subscale (6 items producing a maximum score of 42) and the Academic Performance Feedback subscale (6 items producing a maximum score of 42).

The BMS scale includes 23 items that compose a total scale, two composite scales, and four subscales. The Behavioral Feedback Composite scale (11 items producing a maximum score of 77) consists of the Praise subscale (5 items producing a maximum score of 35) and the Corrective Feedback subscale (6 items producing a maximum score of 42). The Proactive Methods Composite scale (12 items producing a maximum score of 84) consists of the Prevention Management subscale (5 items producing a maximum score of 35) and the Directives/Transitions subscale (7 items producing a maximum score of 49). Table 2 lists IS and BMS scales and definitions.

Both the Part 2 IS and BMS rating scales require observers to fill out a Frequency rating and a Recommended Frequency rating. For the Frequency rating, observers rate how often teachers used specific positive instructional and behavioral management strategies on a 7-point Likert scale (1 = never used, 3 = sometimes used, and 7 = always used). After completing the Frequency rating, observers then rate the Recommended Frequency of the strategies based on the context of the lesson. For the Recommended Frequency, observers rate how often the teachers should have used each strategy on a 7-point Likert scale (1 = never used, 3 = sometimes used, and 7 = always used).
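The scale maxima listed above follow directly from the item counts and the 7-point rating scale (a highest rating of 7 per item). The short sketch below simply reproduces that arithmetic; the dictionary keys are illustrative labels.

```python
# Part 2 scale maxima = number of items x maximum rating (7 per item).
ITEM_COUNTS = {
    "IS total": 26, "Instructional Methods": 14, "Instructional Delivery": 7,
    "Student Focus Learning & Engagement": 7, "Academic Monitoring/Feedback": 12,
    "Promotes Student Thinking": 6, "Academic Performance Feedback": 6,
    "BMS total": 23, "Behavioral Feedback": 11, "Praise": 5,
    "Corrective Feedback": 6, "Proactive Methods": 12,
    "Prevention Management": 5, "Directives/Transitions": 7,
}
MAX_RATING = 7
for scale, n_items in ITEM_COUNTS.items():
    print(f"{scale}: {n_items} items, maximum score {n_items * MAX_RATING}")
# e.g., Instructional Methods: 14 items, maximum score 98
```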

The Part 2 rating scales yield Frequency scores and Discrepancy scores for each scale. Part 2 item/strategy discrepancy scores are computed as follows: |recommended frequency − frequency rating|. In this study, Part 2 discrepancy (absolute) scores were used to assess whether the observer determined that any change (regardless of direction) was needed in the teacher's classroom practices. The larger the average absolute value discrepancy score, the greater the amount of change needed. For this investigation, frequency and discrepancy scores were separately analyzed.
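A minimal sketch of this item-level computation, using made-up ratings on the 7-point scale:

```python
# Item-level discrepancy = |recommended frequency - frequency| (illustrative values).
frequency   = [5, 3, 7, 2]   # observed Frequency ratings
recommended = [6, 6, 7, 4]   # Recommended Frequency ratings
discrepancy = [abs(r - f) for r, f in zip(recommended, frequency)]
print(discrepancy)           # [1, 3, 0, 2] -> larger values signal a greater need for change
```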

2.2.3. CSS Part 3

Part 3 (Classroom Checklist) is completed prior to leaving the classroom. The Classroom Checklist assesses the presence of 10 specific classroom structures and procedures, including the posting and specificity of rules, student accomplishments, and charts for monitoring behavioral or academic progress (see Table 6 for a description).

2.2.4. Administration and scoring

At minimum, a single observation can be used to complete the CSS-Observer Form assessment. In the current study, two observations for each teacher were conducted using the CSS-Observer Form. Subsequently, scores were calculated in accordance with CSS procedures for multiple observations. For Part 1, the six teacher strategies were averaged across observations 1 and 2. For Part 2, both the frequency and absolute value discrepancy scores were first calculated at the item level for the IS and BMS scales, for classroom observations 1 and 2 separately. IS and BMS scale scores were then calculated for observations 1 and 2 separately by summing the scores of the associated items. We then added the respective scale scores from Observation 1 to the corresponding scale scores in Observation 2, and then divided by 2 to obtain the average absolute value discrepancy score across both observations.
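Under the procedure just described, the discrepancy-score aggregation could look like the following sketch (illustrative item values, not study data):

```python
# Item-level absolute discrepancies are summed into a scale score within each
# observation, and the two scale scores are then averaged.
obs1_items = [1, 0, 2, 1, 0, 3]          # |recommended - frequency| per item, observation 1
obs2_items = [0, 1, 1, 2, 0, 1]          # same items, observation 2

scale_obs1 = sum(obs1_items)             # 7
scale_obs2 = sum(obs2_items)             # 5
average_discrepancy = (scale_obs1 + scale_obs2) / 2
print(average_discrepancy)               # 6.0
```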


Aggregates (e.g., sums or means) of absolute values of item discrepancy scores would approximate zero for teachers who consistently use all strategies in a set appropriately, and would gradually increase when teachers deviate from the recommended use of strategies in lessons. This type of information is helpful for identifying teachers who may need professional development and supports. In this study we chose not to aggregate signed discrepancy item scores because of the ambiguity of these aggregates: An item discrepancy score of zero indicates that a teacher's observed use of a strategy matches or equals the recommended use of that strategy. A teacher who obtains an item discrepancy score of approximately zero is using the strategy appropriately. A large positive item discrepancy score indicates that the recommended use of a strategy is much higher than the teacher's observed use of that strategy (i.e., the teacher under-uses that strategy); a large negative item discrepancy score indicates that the recommended use is much below the teacher's observed use (i.e., the teacher over-uses that strategy). An aggregate of signed discrepancy scores, such as a sum or mean of these scores, could therefore be zero or close to zero even when a teacher substantially under-uses some strategies and over-uses others.
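A small numerical illustration of this cancellation, using made-up values:

```python
# Signed item discrepancies can cancel, whereas absolute values preserve the deviation.
signed = [+3, -3, +2, -2]                 # under-use (+) and over-use (-) on four items
print(sum(signed))                        # 0  -> misleadingly suggests no change is needed
print(sum(abs(d) for d in signed))        # 10 -> correctly flags a substantial overall deviation
```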

2.2.5. Evidence supporting CSS

The CSS Observer Form has construct validity evidence supporting it based on extensive school personnel input on directions, items, and scales; feedback from a National Advisory Board composed of experts in the area of instruction and behavior management; decades of evidence-based instructional and behavioral management research; and analysis of its scores (see Reddy, Fabiano, Dudek, & Hsu, 2013a). The Part 2 IS and BMS total scales, composite scales, and subscales are theoretically and factor-analytically derived (using confirmatory factor analysis). The CSS Part 2 IS and BMS total scales have strong internal consistency (Cronbach alpha values of .92–.93).

Inter-rater reliability data were randomly collected on 82 cases (approximately 26% of the current sample) for all three parts of the CSS. Two methods were used to estimate inter-rater reliability: (1) Pearson product moment correlations and (2) percent agreement. Pearson product moment correlations between Observer 1 and Observer 2 were calculated for each of the six teacher strategies on the CSS Part 1 and the Part 1 total score. For the Part 2 rating scales, Pearson product moment correlations between Observer 1 and Observer 2 were calculated for the Part 2 IS and BMS total, composite, and subscale scores. For the Part 3 checklist, a Pearson product moment correlation was calculated for the total checklist score. Percent agreement between Observer 1 and Observer 2 was calculated for the Part 1 Total Strategies, the Part 2 IS and BMS total scale scores, and the Part 3 total score. For Part 1, percent agreement was calculated by counting the total number of cases where Observer 1 and Observer 2 scores were determined to agree and then dividing by the total number of cases in the sample. For the Part 2 IS and BMS total scores, percent agreement was calculated using a similar procedure. First, the absolute value of the difference between Observer 1 and Observer 2 scores on each of the CSS IS and BMS total scales was calculated. Scores were then determined to agree based on a difference threshold of 1 point or less per item in the scale. Other classroom observation validation research (Classroom Assessment Scoring System, CLASS; Pianta, La Paro, & Hamre, 2008) as well as large-scale studies (Kane & Staiger, 2012; NICHD Early Child Care Research Network, 2002a, 2002b) have used a difference score of 1 point or less per item to determine percent agreement. The total number of agreements for both the IS and BMS total scale scores was then divided by the total number of cases in the sample to determine percent agreement. The percent agreement for the Part 3 total score followed the same procedure used in determining the Part 1 calculation.
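The Part 2 agreement rule described above can be sketched as follows; the observer totals are hypothetical and only illustrate the 1-point-per-item threshold (e.g., 26 points for the 26-item IS total scale).

```python
# Two observers "agree" on a total scale when their scores differ by at most
# 1 point per item in that scale.
def percent_agreement(obs1_totals, obs2_totals, n_items):
    threshold = 1 * n_items
    agree = sum(abs(a - b) <= threshold for a, b in zip(obs1_totals, obs2_totals))
    return 100 * agree / len(obs1_totals)

print(round(percent_agreement([120, 98, 133], [131, 105, 160], n_items=26), 1))
# 66.7 -> 2 of the 3 hypothetical observer pairs fall within the 26-point threshold
```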

Using Cicchetti's (1994) guidelines, good inter-rater reliability was found for each of the six Part 1 teacher strategies: Concept Summaries (r = .79, percent agreement 93%), Academic Response Opportunities (r = .93, percent agreement 89%), Clear Commands (r = .92, percent agreement 90%), Vague Commands (r = .80, percent agreement 90%), Praise Statements (r = .86, percent agreement 90%), and Corrective Feedback (r = .97, percent agreement 90%). The Part 1 Total Strategies yielded good inter-rater reliability (r = .94, percent agreement 92%). The Part 2 (Strategy Rating Scales) IS Total and BMS Total scale scores also yielded fair to good inter-rater reliability (r = .80, percent agreement 92% and r = .72, percent agreement 88%, respectively), as did the Part 3 Classroom Checklist (r = .86, percent agreement 91%). Reliability estimates are dependent on the type of measurement, and no objective threshold exists for these estimates, only commonly accepted or used values (e.g., Goodwin & Goodwin, 1999; Knapp & Brown, 1995). The inter-rater reliability estimates reported in the present study align with accepted values for other classroom observation assessments in the field such as the CLASS and the measures used in the Measures of Effective Teaching (MET) Project (Cantrell, 2013; Kane & Staiger, 2012; Pianta et al., 2008). Inter-rater reliability results from the MET Project suggest reliability estimates should aim to exceed a value of .65, and estimates greater than .80 are highly reliable (Cantrell, 2013; Kane & Staiger, 2012).

In addition to inter-rater reliability evidence, test–retest reliability evidence (across approximately 2 to 3 weeks) was found to be fair to good (Cicchetti, 1994). Test–retest reliability data were collected on a sample of 57 classrooms from the current study (approximately 18%). In congruence with the current study, retest sample teachers received two initial observations along with the other participants. Approximately 2 to 3 weeks later, the same principals or research staff who conducted the first administration of the CSS-Observer Form returned to the classroom to conduct an additional two observations (observations 3 and 4) for the same teacher. CSS-Observer Form score calculations for observations 3 and 4 followed the same procedures as observations 1 and 2. The averaged results of observations 1 and 2 were compared to the averaged results of observations 3 and 4 using Pearson product moment correlations and percent agreement. The same procedures used to calculate percent agreement for the inter-rater reliability sample were used for the test–retest sample. Fair to good estimates were found for the Part 1 Total Strategies (r = .70, percent agreement 81%), Part 2 IS and BMS Total scales (r = .86, percent agreement 93% and r = .80, percent agreement 85%, respectively), and Part 3 Classroom Checklist (r = .77, percent agreement 81%). There is also evidence of validity for the CSS scores. The CSS-Observer Form scores were compared to the CLASS, a well-established measure of teacher and classroom quality (Pianta et al., 2008). In a study with 125 teachers where the CSS and CLASS were completed concurrently, the CSS scales and subscales were found to demonstrate good convergent and discriminant validity with the Classroom Assessment Scoring System domains (Reddy, Fabiano, & Dudek, 2013). Preliminary validity studies have found the CSS scores sensitive to change following teacher consultation for improving classroom practices (Reddy & Dudek, in press).


Hierarchical linear modeling revealed that the CSS IS discrepancy scores predict student mathematics and language arts state-wide testing scores (Reddy, Fabiano, Dudek, & Hsu, 2013b). Finally, differential item functioning analyses have revealed that the Part 2 Strategy Rating Scales and items are free of item bias for important teacher demographic variables (i.e., age, educational degree, and years of teaching experience; Reddy et al., 2013a).

2.3. Procedures

Observers and teachers were recruited as part of a larger validation study using the CSS-Observer Form. The central administrative office for each school district was first contacted to obtain permission to conduct the study. Each individual school in a given district was then contacted to obtain permission to recruit participants and conduct the study. School principals and teachers were informed of the study through flyers and school-based presentations. All participating teachers in the current study volunteered to be observed by their participating principal or by CSS research staff. All participating principals in the current sample volunteered to receive observer training and conduct observations using the CSS-Observer Form for the teachers participating at their school. Informed consent was obtained from all participating principals and teachers, and research procedures were approved by the Institutional Review Boards at both sponsoring universities in New York and New Jersey. All participants signed an agreement form indicating that CSS data could not be used for the purposes of evaluating teachers' job performance.

Observers participated in training prior to observing teachers' strategy use and classroom practices using the CSS. Due to the diverse backgrounds and credentials of the observers in the study, multiple training procedures were available to ensure all observers had a basic level of applied practice in classroom observations. To standardize CSS implementation across sites, observers watched a DVD training video that introduced CSS observation procedures, provided an overview of how ratings are completed, and then showed several classroom examples of teachers displaying specific behaviors assessed by the CSS (e.g., praise statements and academic response opportunities). Following presentation of the DVD training video, observers received two didactic training sessions (2 h each) from a CSS Trainer/Master Coder, which included discussion of definitions and criteria, and then observers individually practiced coding elementary general education classroom videos to assess observer reliability. Practice coding results were reviewed by the CSS Trainer/Master Coder, and specific feedback was provided to observers to further orient them to the CSS definitions and criteria.

During these two training sessions, observers were also oriented to the scientific literature guiding the development of the CSS and the recommended frequencies of these strategies to ensure observers operated with the same knowledge base for judging the Recommended Frequency of the CSS Part 2. Training on the Recommended Frequency of strategies was informed by the effective instruction literature that spans over 60 years (e.g., Brophy & Good, 1986; Creemers, 1994; Gage, 1978; Hattie, 1992; Horner, Sugai, Todd, & Lewis-Palmer, 2000; Kounin, 1970; Marzano, 1998; Marzano, Pickering, & Pollock, 2001; Walberg, 1986; Wang, 1991). For example, the academic and behavioral literatures have indicated praise statements should be used frequently and consistently (e.g., Alber, Heward, & Hippler, 1999; Beaman & Wheldall, 2000; Sutherland & Wehby, 2001). In particular, praise should be used at a ratio of 3:1 to corrective feedback (i.e., reprimands). The CSS Academic Response Opportunities strategy, which comes from the opportunity to respond (OTR) literature, should be used at a rate of 3.5 per minute during active instruction (e.g., Partin et al., 2010; Stitcher et al., 2009; Sutherland et al., 2002, 2003).

For the current study, observers scheduled two 30-minute observations within seven school days of one another. Observers completed the first observation using the CSS Part 1 and immediately rated the Part 2 IS and BMS items. These steps were repeated for the second observation, and after both observations were completed, the Part 3 Classroom Checklist was filled out. Observers returned their forms independently to the study coordinators at each site.

2.4. Data analytic plan

Because the 317 teachers were nested under 67 observers (principals and research staff), 2-level hierarchical linear models (HLMs; Level 1 units = 317 teachers; Level 2 units = 67 observers) were first used to estimate and test hypotheses about net effects of observers on frequency and discrepancy scores of instructional and behavioral management strategies. The two-level HLMs included random intercepts and fixed slopes. Large and significant intraclass correlations (ICCs) supported the use of HLMs to estimate and test hypotheses about effects of (a) grade level, (b) years of teaching experience, and (c) interactions of grade level and years of teaching experience. As shown in Tables 7, 8, and 9, 35 HLMs were fitted to the data: 7 for Part 1, 14 for Part 2 IS and BMS scale frequency scores, and 14 for Part 2 IS and BMS scale discrepancy scores. An alpha of .05 was used for all tests of statistical significance. The Dunn–Sidak method (see Kirk, 1982, pp. 110–111) was used to determine the individual test alpha levels required to maintain the family-wise error rate (for the family of the 7 planned tests of fixed effects in each HLM) below conventional alpha levels (i.e., p < .05) within each of the 35 HLMs. In particular, the individual test alpha level controlling the family-wise (FW) error rate was < .0073 (FW < .05). Additionally, effect sizes in the form of half-standardized regression coefficients were computed (Hedges, Laine, & Greenwald, 1994). Half-standardized regression coefficients are used for continuous dependent and independent variables and allow comparisons of effects of the same regressor (e.g., grade level) on different outcome measures (e.g., CSS scales). Lipsey and Wilson's (2001) interpretation of Cohen's guidelines (ES ≤ .20 = small, ES = .50 = medium, and ES ≥ .80 = large) was used. For this investigation we have interpreted ESs > .21 and < .79 as medium.
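The two computations referenced above can be sketched briefly. The Dunn–Sidak individual-test alpha for 7 planned tests reproduces the .0073 criterion; the half-standardized effect size is assumed here to be the unstandardized coefficient divided by the outcome's standard deviation, consistent with Hedges et al. (1994), which reproduces the reported ES magnitude for Praise Statements when the Table 3 SD is used.

```python
# Dunn-Sidak individual-test alpha for k = 7 planned tests per HLM.
alpha_fw, k = 0.05, 7
alpha_individual = 1 - (1 - alpha_fw) ** (1 / k)
print(round(alpha_individual, 4))                    # 0.0073

# Half-standardized effect size: coefficient scaled by the outcome's SD.
def half_standardized_es(b, sd_outcome):
    return b / sd_outcome

print(round(half_standardized_es(-1.78, 8.33), 2))   # -0.21 (magnitude matches the reported ES of .21)
```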


Table 3
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores.

Measures | M | SD | Range | Maximum possible score | % of maximum score | Skewness (Std. E.) | Kurtosis (Std. E.)

Part 1 Strategy counts
Concept summaries | 5.73 | 5.03 | 0–23.5 | | | 1.28 (.14) | 1.20 (.27)
Academic response opportunities | 27.25 | 14.34 | 4–73 | | | 0.76 (.14) | 0.12 (.27)
Clear 1 to 2 step commands | 17.09 | 8.77 | 1.5–51 | | | 0.92 (.14) | 0.84 (.27)
Vague commands | 3.67 | 3.92 | 0–19 | | | 1.55 (.14) | 2.09 (.27)
Praise | 11.36 | 8.33 | 0–58.5 | | | 1.90 (.14) | 6.06 (.27)
Corrective feedback | 8.86 | 6.98 | 0.5–50.5 | | | 2.22 (.14) | 7.55 (.27)
Total | 66.63 | 26.21 | 10–183 | | | 0.86 (.14) | 1.40 (.27)

Part 2 Strategy rating scales
Instructional strategies total scale | 123.71 | 24.99 | 47.50–181 | 182 | 68% | 0.24 (.14) | −0.33 (.27)
Instructional methods composite | 66.19 | 14.45 | 25–98 | 98 | 68% | 0.12 (.14) | −0.43 (.27)
Student focused learning & engagement | 31.31 | 7.79 | 9–49 | 49 | 64% | 0.20 (.14) | −0.56 (.27)
Instructional delivery | 34.81 | 8.13 | 13–49 | 49 | 71% | −0.16 (.14) | −0.59 (.27)
Academic monitoring/feedback composite | 57.47 | 11.90 | 22–84 | 84 | 68% | 0.17 (.14) | −0.38 (.27)
Promotes student thinking | 26.85 | 7.0 | 8–42 | 42 | 64% | 0.13 (.14) | −0.55 (.27)
Academic performance feedback | 30.62 | 6.66 | 10.50–42 | 42 | 73% | −0.28 (.14) | −0.32 (.27)
Behavioral management strategies total scale | 107.97 | 23.83 | 55–159.50 | 161 | 67% | 0.15 (.14) | −0.83 (.27)
Proactive methods composite | 60.35 | 11.85 | 21–84 | 84 | 72% | −0.34 (.14) | −0.28 (.27)
Prevention management | 21.28 | 5.81 | 6.50–35 | 35 | 61% | 0.04 (.14) | −0.59 (.27)
Directives/transitions | 39.03 | 7.35 | 12.50–49 | 49 | 80% | −0.64 (.14) | −0.11 (.27)
Behavior feedback composite | 47.52 | 14.88 | 18–76 | 77 | 62% | 0.12 (.14) | −1.01 (.27)
Praise | 21.38 | 7.99 | 5–35 | 35 | 61% | 0.00 (.14) | −1.02 (.27)
Corrective feedback | 26.09 | 8.03 | 9–42 | 42 | 62% | 0.04 (.14) | −1.10 (.27)


3. Results

3.1. Descriptive statistics of strategy usage

Descriptive statistics for frequency scores from the CSS Part 1 (Strategy Counts) and Part 2 (Strategy Rating Scales) are reported in Table 3. For the CSS Part 1, Academic Response Opportunities were observed as the most frequently used strategy, followed by Clear 1 to 2 Step Commands. Praise Statements was the third most frequently observed strategy, followed by Corrective Feedback. Concept Summaries and Vague Commands were observed as the least frequently used strategies.

Comparatively, for the Part 2 IS and BMS rating scales, it is difficult to determine which scales and subscales occurred more frequently because each scale possesses a different number of items and maximum score. Dividing the average of each scale by its maximum score allowed for the comparison of frequency. Higher percentages suggest increased frequency usage of the items for each scale. As evident in Table 3, teachers possessed similar percent of maximum scores on the IS and BMS total scales (on average 68% and 67%, respectively), suggesting similar frequency usage of instructional and behavioral management strategies. Within the IS scale, the Academic Performance Feedback (73%) and Instructional Delivery (71%) subscales possessed the highest percent of maximum score. Within the BMS scale, Directives/Transitions (80%) had the highest percent of maximum score.
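The percent-of-maximum comparison described above is simple arithmetic on the Table 3 values; the sketch below reproduces it for a few scales (the dictionary labels are illustrative).

```python
# Percent of maximum = 100 * mean / maximum possible score (means and maxima from Table 3).
scales = {
    "Instructional strategies total":  (123.71, 182),
    "Behavioral management total":     (107.97, 161),
    "Academic performance feedback":   (30.62, 42),
    "Instructional delivery":          (34.81, 49),
    "Directives/transitions":          (39.03, 49),
}
for name, (mean, maximum) in scales.items():
    print(f"{name}: {100 * mean / maximum:.0f}% of maximum")
# 68%, 67%, 73%, 71%, and 80%, respectively
```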

As shown in Tables 4 and 5, descriptive statistics were computed for the CSS Part 1 (Strategy Counts) and Part 2 (Strategy Rating Scales) by grade level (i.e., kindergarten through fifth grade) and years of teaching experience (i.e., 3 or fewer years; 4 to 9 years; 10 to 19 years; and 20 or more years). In Table 4, descriptive results suggest trends of increased usage of specific instructional strategies and decreased usage of behavioral management strategies with increased grade-level assignment. In Table 5, descriptive results suggest educators' strategy usage is comparable across years of teaching experience. Table 6 presents a summary of the CSS Part 3 (Classroom Checklist) for the entire sample. In general, the provision and availability of materials were the most commonly observed classroom components (e.g., materials available for completing assignments; presence of tissues and hand sanitizer). Antecedent control approaches (e.g., posting homework assignments) and progress monitoring strategies (e.g., methods for tracking student progress) were observed in fewer classrooms. Overall, when separated by grade level or years of teaching experience, the majority of the classrooms evidenced similar parameters as the entire sample.1

3.2. Strategy frequency scores by grade and years of teaching experience

Table 7 presents a summary of the HLM analyses for grade-level assignment and frequency of strategy usage as measured by the CSS Part 1 (Strategy Counts). Intraclass correlations (ICCs) ranged from .18 to .57, and all were statistically significant. Grade significantly related to educators' frequency of CSS Concept Summaries, Clear 1 to 2 Step Directives, Praise Statements, Corrective Feedback, and Total strategies. Findings indicate that increased grade resulted in more frequent use of Concept Summaries, β = .30, p < .001, ES = .06, and less frequent use of Clear 1 to 2 Step Directives, Praise Statements, Corrective Feedback, and Total strategies, β = −1.5, p < .001, ES = −.17; β = −1.78, p < .001, ES = .21; β = −1.29, p < .001, ES = −.19; and β = −4.69, p < .001, ES = −.18, respectively. For reference, half-standardized regression coefficients (ES) can be interpreted as follows: an increase of one year in grade level is associated with a 0.06 standard deviation increase (on average) in Concept Summaries scores (Hedges et al., 1994). Likewise, an increase of one year in grade level is associated with a .17 standard deviation decrease (on average) in Clear 1 to 2 Step Directives scores. ES results suggest small effects of grade level on CSS Part 1 scores (Lipsey & Wilson, 2001).

1 Descriptive results for the Part 3 Classroom Checklist by grade level or years of teaching experience can be obtained by contacting the first author.


Table 4
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores by grade level.

Measures | K (N = 60) M (SD) | 1 (N = 48) M (SD) | 2 (N = 64) M (SD) | 3 (N = 60) M (SD) | 4 (N = 41) M (SD) | 5 (N = 44) M (SD)

Part 1 Strategy counts
Concept summaries | 5.32 (5.17) | 5.58 (4.27) | 5.30 (4.83) | 5.22 (4.41) | 7.68 (6.56) | 5.93 (4.89)
Academic response opportunities | 26.32 (13.32) | 27.94 (13.94) | 29.74 (15.47) | 28.75 (15.78) | 24.88 (14.13) | 24.33 (12.26)
Clear 1 to 2 step commands | 19.78 (8.76) | 19.33 (10.88) | 18.74 (9.62) | 15.42 (6.80) | 14.73 (6.46) | 13.06 (6.87)
Vague commands | 3.76 (3.68) | 3.63 (4.07) | 3.38 (3.34) | 4.32 (4.51) | 3.40 (4.11) | 3.33 (3.99)
Praise statements | 16.06 (10.54) | 14.47 (8.79) | 11.38 (7.86) | 9.47 (5.73) | 9.12 (6.32) | 6.20 (4.70)
Corrective feedback | 11.69 (8.63) | 11.27 (8.63) | 9.71 (5.23) | 8.03 (6.18) | 6.48 (5.62) | 5.93 (4.46)
Total | 75.41 (29.29) | 74.97 (27.80) | 70.50 (26.28) | 62.57 (21.40) | 59.49 (23.88) | 52.13 (19.00)

Part 2 Strategy rating scales
Instructional strategies total | 121.01 (21.73) | 126.00 (27.67) | 122.48 (26.67) | 119.90 (21.42) | 131.79 (25.93) | 124.45 (26.56)
Instructional methods composite | 65.28 (13.92) | 66.92 (15.99) | 65.74 (14.36) | 63.90 (12.51) | 69.96 (15.79) | 66.93 (14.77)
Student focus learning & engagement | 30.38 (7.30) | 30.90 (7.99) | 30.75 (7.87) | 30.41 (7.33) | 34.28 (8.07) | 32.39 (8.12)
Instructional delivery | 34.91 (8.71) | 35.88 (9.18) | 34.99 (7.87) | 33.27 (6.86) | 35.69 (8.51) | 34.55 (7.86)
Academic monitoring/feedback composite | 55.73 (9.28) | 58.95 (12.76) | 56.74 (13.53) | 55.84 (10.29) | 61.83 (11.50) | 57.52 (13.34)
Promotes student thinking | 23.78 (6.00) | 26.93 (7.25) | 26.77 (7.94) | 26.65 (6.09) | 30.34 (5.66) | 28.16 (7.32)
Academic performance feedback | 31.95 (5.55) | 32.02 (7.31) | 29.98 (7.07) | 29.19 (5.98) | 31.49 (6.42) | 29.36 (7.36)
Behavioral management strategies total | 112.12 (23.22) | 113.96 (27.72) | 107.36 (22.98) | 102.21 (20.88) | 107.09 (24.58) | 105.22 (23.35)
Proactive methods composite | 61.07 (12.23) | 61.66 (13.32) | 60.43 (11.45) | 58.37 (11.32) | 60.11 (12.81) | 60.80 (10.23)
Prevention management | 21.24 (6.17) | 22.26 (6.64) | 21.15 (5.92) | 20.39 (4.99) | 21.24 (5.55) | 21.74 (5.55)
Directives/Transitions | 39.83 (7.45) | 39.16 (7.76) | 39.28 (6.89) | 37.98 (7.43) | 39.06 (6.19) | 39.03 (7.35)
Behavioral feedback composite | 51.05 (13.80) | 51.95 (16.36) | 46.93 (14.87) | 43.96 (12.89) | 46.67 (4.24) | 44.16 (16.23)
Praise | 23.86 (7.05) | 23.72 (8.40) | 20.91 (7.91) | 20.13 (7.72) | 20.54 (7.67) | 18.64 (8.38)
Corrective feedback | 27.19 (7.94) | 28.22 (8.64) | 26.02 (8.11) | 24.03 (6.50) | 25.91 (7.65) | 25.28 (9.11)



Table 5
Descriptive statistics of the CSS Part 1 and Part 2 frequency scores by years of teaching experience.

Measures | 3 or less years (N = 55) M (SD) | 4–9 years (N = 95) M (SD) | 10–19 years (N = 97) M (SD) | 20+ years (N = 65) M (SD)

Part 1 Strategy counts
Concept summaries | 5.32 (4.84) | 6.50 (5.40) | 5.30 (4.74) | 5.89 (5.09)
Academic response opportunities | 28.46 (14.49) | 27.56 (15.79) | 28.61 (14.99) | 23.64 (10.19)
Clear 1 to 2 step commands | 16.52 (9.15) | 16.87 (7.39) | 18.50 (9.95) | 15.59 (8.49)
Vague commands | 4.07 (3.86) | 4.04 (4.23) | 3.40 (3.64) | 3.28 (4.06)
Praise statements | 11.77 (6.88) | 10.97 (7.65) | 11.66 (10.13) | 11.39 (7.80)
Corrective feedback | 10.98 (8.06) | 8.51 (8.06) | 8.83 (6.47) | 7.79 (4.65)
Total | 68.98 (25.66) | 66.37 (25.20) | 69.51 (30.82) | 61.03 (20.23)

Part 2 Strategy rating scales
Instructional strategies total | 123.95 (20.72) | 123.22 (24.22) | 123.26 (27.34) | 126.10 (26.34)
Instructional methods composite | 66.83 (12.07) | 65.18 (14.08) | 66.24 (15.38) | 67.75 (15.60)
Student focus learning & engagement | 31.58 (6.60) | 30.91 (7.87) | 30.81 (8.10) | 32.76 (8.08)
Instructional delivery | 35.25 (7.30) | 34.15 (7.90) | 35.43 (8.50) | 34.85 (8.62)
Academic monitoring/feedback composite | 57.12 (10.02) | 57.98 (11.57) | 57.02 (13.37) | 58.15 (11.85)
Promotes student thinking | 26.16 (6.41) | 27.41 (6.73) | 26.64 (7.59) | 26.86 (7.11)
Academic performance feedback | 30.96 (5.75) | 30.57 (6.61) | 30.38 (7.43) | 31.28 (6.25)
Behavioral management strategies total | 107.88 (21.27) | 105.88 (22.51) | 108.27 (25.53) | 110.92 (25.44)
Proactive methods composite | 59.45 (10.83) | 59.58 (11.70) | 61.20 (12.52) | 61.50 (11.91)
Prevention management | 21.28 (5.24) | 21.03 (7.32) | 21.27 (6.38) | 21.76 (8.53)
Directives/Transitions | 38.16 (7.22) | 38.39 (7.60) | 39.81 (7.34) | 39.74 (7.02)
Behavioral feedback composite | 48.44 (13.46) | 46.24 (13.76) | 46.95 (16.12) | 49.42 (15.98)
Praise | 21.95 (7.09) | 21.03 (7.32) | 20.91 (9.14) | 22.30 (7.98)
Corrective feedback | 26.48 (7.60) | 25.23 (7.99) | 26.04 (7.97) | 27.12 (8.63)


Table 6
Descriptive statistics of the CSS Part 3 Classroom Checklist.

Part 3 Classroom Checklist items | Classrooms with checklist item present

Learning materials/resources
1. Learning materials (e.g., pencils, rulers) and resources (e.g., Internet, encyclopedia, dictionary, books) to complete assignments are available to students. | 98.7%
2. Learning materials and areas in the classroom are labeled. | 64.4%

Classroom structure/organization
3. A procedure or routine exists for students to organize their desks, backpacks, or learning materials. | 86.4%
4. Student work, artwork, and accomplishments are displayed in the classroom. | 81.1%
5. Methods for tracking student academic and/or behavioral progress (e.g., homework tracking chart, rule-following chart, sticker/star chart) are posted. | 65.1%
6. Tissues and hand sanitizers are available to students. | 98.1%
7. Classroom lesson or activity schedule is posted. | 74.1%
8. Assignments (e.g., homework, readings, tests) are clearly posted. | 57.8%

Classroom rules
9. Classroom rules are posted. | 82.6%
10. Classroom rules specify positive behaviors that students should do rather than not do. | 75.7%

Note. Results for the Part 3 Classroom Checklist by grade level or years of teaching experience can be obtained by contacting the first author.


As shown in Table 7, years of teaching experience effects and the grade level by years of teaching experience effects were, in general (but with one exception), not significantly related to teachers' use of CSS Part 1 strategies. The exception was teachers' use of Praise Statements. Frequency of use of praise declined with increasing grade levels across years of teaching experience groups, but the decline was significantly larger for teachers with 10 to 19 years of teaching experience than the average decline for all teaching experience groups.

Table 7
HLM analysis with CSS Part 1 strategy counts.

For each strategy, the Grade effect is followed by the three effects-coded Years of Teaching Experience dummy variables and the three Grade × Years of Teaching Experience interaction terms. Values are β (SE), Z, and p (see notes a–c).

Concept summaries
Grade: 0.30 (0.07), Z = 4.38*, p < .001
Years of teaching experience: −0.99 (0.52), Z = −1.90, p = .057; 0.44 (0.48), Z = 0.91, p = .358; 0.29 (0.46), Z = 0.62, p = .533
Grade × experience: 0.30 (0.18), Z = 1.66, p = .096; 0.09 (0.16), Z = 0.60, p = .544; −0.33 (0.15), Z = −2.14, p = .032

Academic response opportunities
Grade: −0.42 (0.30), Z = −1.38, p = .167
Years of teaching experience: 2.10 (2.30), Z = 0.91, p = .361; −1.44 (2.09), Z = −0.69, p = .488; 1.96 (2.05), Z = 0.95, p = .339
Grade × experience: −0.49 (0.81), Z = −0.60, p = .545; 1.13 (0.69), Z = 1.63, p = .102; −0.71 (0.67), Z = −1.05, p = .291

Clear 1 to 2 step commands
Grade: −1.5 (0.18), Z = −8.52*, p < .001
Years of teaching experience: −0.56 (1.40), Z = −0.40, p = .686; −0.12 (1.26), Z = −0.09, p = .924; 2.42 (1.24), Z = 1.94, p = .052
Grade × experience: 0.45 (0.49), Z = 0.92, p = .356; 0.36 (0.41), Z = 0.87, p = .384; −0.52 (0.40), Z = −1.28, p = .198

Vague commands
Grade: −0.13 (0.07), Z = −1.78, p = .075
Years of teaching experience: 0.74 (0.54), Z = 1.36, p = .173; −0.44 (0.50), Z = −0.89, p = .374; 0.09 (0.48), Z = 0.20, p = .840
Grade × experience: −0.20 (0.19), Z = −1.08, p = .279; 0.21 (0.16), Z = 1.26, p = .204; 0.01 (0.16), Z = 0.08, p = .930

Praise statements
Grade: −1.78 (0.17), Z = −10.31*, p < .001
Years of teaching experience: −1.17 (1.30), Z = −0.90, p = .367; −0.33 (1.17), Z = −0.28, p = .778; 3.08 (1.15), Z = 2.66, p = .008
Grade × experience: 0.64 (0.46), Z = 1.38, p = .165; 0.08 (0.38), Z = 0.21, p = .827; −1.02 (0.38), Z = −2.67*, p = .007

Corrective feedback
Grade: −1.29 (0.15), Z = −8.22*, p < .001
Years of teaching experience: 2.31 (1.93), Z = 1.94, p = .052; 0.21 (1.06), Z = 0.20, p = .838; −0.21 (1.05), Z = −0.20, p = .838
Grade × experience: −0.10 (0.42), Z = −0.24, p = .803; −0.21 (0.35), Z = −0.61, p = .541; −0.07 (0.34), Z = −0.20, p = .840

Total
Grade: −4.69 (0.52), Z = −8.87*, p < .001
Years of teaching experience: 0.99 (3.97), Z = 0.25, p = .802; −0.72 (3.61), Z = −0.20, p = .842; 7.29 (3.54), Z = 2.05, p = .040
Grade × experience: 1.03 (1.40), Z = 0.73, p = .464; 1.24 (1.19), Z = 1.03, p = .300; −2.72 (1.16), Z = −2.34, p = .019

Note.
a p values for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05.
b The three Experience entries correspond to three effects-coded dummy variables representing the Years of Teaching Experience groups. The first β estimates the distance of the intercept of the regression line for teachers with 3 or fewer years of experience from the average intercept of all Years of Teaching Experience groups at Grade = 0 (i.e., kindergarten); the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.
c The three Grade × experience entries correspond to three interaction effects, each involving an effects-coded dummy variable for Years of Teaching Experience. The first β estimates the difference between (a) the effect of Grade Level on the usage of a strategy for teachers with 3 or fewer years of experience and (b) the average effect of Grade Level across all Years of Teaching Experience groups; the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.
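To make the model behind Tables 7–9 concrete, the following is a minimal illustrative sketch (not the authors' code) of an HLM with grade level, effect-coded years-of-teaching-experience dummies, and their interaction. The data file, column names, the clustering variable, the assumed reference group of 20 or more years of experience, and the use of a linear mixed model for a count outcome are all assumptions for illustration; the authors' estimation details may differ.

```python
# Illustrative sketch only; file name, column names, grouping variable, and the
# "20+" reference group are assumptions, and a linear mixed model is used as a
# simplified stand-in for whatever estimator the authors employed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("css_part1_counts.csv")  # hypothetical: one row per teacher

# Effect (deviation) coding: each dummy is 1 for its own experience group,
# -1 for the assumed reference group (20 or more years), and 0 otherwise, so
# each coefficient is that group's distance from the average intercept (note b).
for name, label in [("exp_0_3", "0-3"), ("exp_4_9", "4-9"), ("exp_10_19", "10-19")]:
    df[name] = (df["exp_group"] == label).astype(int) - (df["exp_group"] == "20+").astype(int)

# Grade is coded 0 = kindergarten through 5 = fifth grade, per the table notes.
model = smf.mixedlm(
    "praise_count ~ grade + exp_0_3 + exp_4_9 + exp_10_19"
    " + grade:exp_0_3 + grade:exp_4_9 + grade:exp_10_19",
    data=df,
    groups=df["school_id"],  # hypothetical clustering unit with a random intercept
)
print(model.fit().summary())
```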


However, the decline was significantly larger for teachers with 10 to 19 years of teaching experience than the average decline for all teaching experience groups.

Table 8 presents a summary of the HLM analyses for grade level and strategy usage as measured by the CSS Part 2 (Strategy Rating Scales) frequency ratings. ICCs ranged from .40 to .55, and all were statistically significant. HLM analyses indicated that grade level was significantly related to educators' use of instructional and behavioral management strategies as measured by the CSS Part 2 (Strategy Rating Scales). For the CSS IS scales, findings indicated that increased grade level was associated with higher scores (increased strategy usage) on the Student Focus Learning and Engagement scale (β = .47, p < .001, ES = .06), the Academic Monitoring/Feedback composite (β = .45, p = .006, ES = .04), and its associated Promotes Student Thinking scale (β = .82, p < .001, ES = .11). Increased grade level was also associated with lower scores (decreased strategy usage) on the Academic Performance Feedback scale (β = −.37, p < .001, ES = .06). As evident in Table 8, none of the tests of the effects of years of teaching experience or of the grade level by years of teaching experience interaction were statistically significant.

For the BMS scales, findings indicated that increased grade level was associated with lower scores (decreased strategy usage) on the BMS Total scale (β = −1.67, p < .001, ES = .07), the Behavioral Feedback composite (β = −1.41, p < .001, ES = −.01), and its associated Praise scale (β = −.99, p < .001, ES = −.12) and Corrective Feedback scale (β = −.45, p < .01, ES = −.05). Additionally, the HLM results indicated that the years of teaching experience and the grade level by years of teaching experience interaction dummy variables were not significantly related to strategy usage on the BMS scales. Due to marginal maximum likelihood estimation difficulties encountered with HLMs that included interaction terms for the IS Instructional Delivery and BMS Directives/Transitions scales, the interaction terms were excluded from the HLMs for these two scales. Thus, for these two scales no estimates or hypothesis tests for grade level by years of teaching experience interaction effects are available.
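For reference, ICCs of the kind reported above are typically obtained from a random-intercept (null) model. A common form is shown below; treating the reported values as arising from a two-level model with between-cluster variance and within-cluster residual variance is an assumption, since the specific clustering unit and estimation details are not restated here.

```latex
% Common two-level random-intercept form (an assumption about how the reported
% ICCs were computed, not a restatement of the authors' exact model):
\[
\mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}}
\]
```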

3.3. Strategy discrepancy scores by grade level and years of teaching experience

Table 9 presents a summary of the HLM analyses for grade-level assignment and discrepancy scores as measured by the CSS Part 2 (Strategy Rating Scales). HLM analyses indicated that, in general, grade-level assignment was significantly related to educators' discrepancy scores for instructional and behavioral management strategies. For the CSS IS scales, findings indicated that increased grade-level assignment was associated with higher discrepancy scores (increased need for change) on the Academic Performance Feedback scale (β = .20, p = .006, ES = .06). As evident in Table 9, none of the tests of the effects of years of teaching experience or of the grade level by years of teaching experience interaction were significant.

For the BMS scales, findings indicated that increased grade-level assignment was associated with higher discrepancy scores (increased need for change) on the Behavioral Feedback composite (β = .61, p < .001, ES = .08) and its associated Praise scale (β = .46, p < .001, ES = .10). HLM results also revealed that teachers' discrepancy scores for behavioral management strategies were not related to the years of teaching experience or the grade level by years of teaching experience interaction dummy variables.
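As a concrete illustration of the discrepancy scores analyzed in this section, the minimal sketch below computes the gap between recommended and observed frequency ratings averaged over a scale's items. The item values, the simple averaging rule, and the function name are hypothetical and do not restate the CSS scoring procedure.

```python
# Hypothetical sketch of a discrepancy score: the average gap between the
# recommended and observed frequency ratings across a scale's items.
# Item values and the averaging rule are illustrative, not the CSS algorithm.
def discrepancy(recommended: list[float], observed: list[float]) -> float:
    return sum(r - o for r, o in zip(recommended, observed)) / len(recommended)

print(discrepancy([5, 5, 4], [3, 4, 4]))  # 1.0; larger values = greater need for change
```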

4. Discussion

This investigation examined general education teachers' use of classroom instructional and behavioral management practices in elementary school as measured by the CSS-Observer Form and the relation of educators' grade-level assignment, years of teaching experience, and the interaction of grade and years of teaching experience to strategy usage using HLM analyses. Overall, general education teachers' frequencies of general classroom practices were lower than the rates recommended by the instructional and behavioral management research literature (e.g., Good & Grouws, 1977; Pfiffner et al., 1985; Stitcher, Lewis, Richter, Johnson, & Bradley, 2006). Although years of teaching experience and the interaction between grade level and years of teaching experience did not relate to strategy usage as measured by the CSS (with the exception of Part 1 Praise Statements), educators' grade-level assignment was related to the frequency at which these strategies were employed. Results offer directions for school-based practice.

4.1. Teachers' natural strategy usage

The CSS Part 1 category Academic Response Opportunities encompasses the prompts described in the opportunity to respond (OTR) literature (e.g., Partin et al., 2010; Stitcher et al., 2009; Sutherland et al., 2003). For this CSS Part 1 strategy, an average total of 27.25 prompts per 30 min (a rate of 0.91 prompts per minute) was found. This finding is in contrast to research by Englert (1983) and Sutherland et al. (2003), which recommends that special educators use (as an optimum rate) 3.5 OTR prompts per minute for improving student outcomes. Stitcher et al. (2009) also reported similar results (i.e., 2.61 OTR prompts per minute) in a sample of 35 general education elementary school teachers. The present results suggest that fewer than half the number of prompts occur in general education settings relative to the recommendations for special educators, and this larger sample suggests more modest use of prompts than that reported by Stitcher et al. (2009).
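The rate comparison above can be verified directly; the short check below uses only the figures cited in this paragraph.

```python
# Worked check of the OTR rates cited above.
observed_prompts = 27.25            # average prompts per 30-min observation
observed_rate = observed_prompts / 30.0
recommended_rate = 3.5              # optimum rate cited for special educators
print(round(observed_rate, 2))                      # 0.91 prompts per minute
print(round(observed_rate / recommended_rate, 2))   # about 0.26 of the recommended rate
```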

Overall, Praise Statements were found to occur at a greater frequency than Corrective Feedback. Although this pattern is in the desired direction, the observed ratio of praise to corrective feedback across grade levels (approximately 1:1) was less than the longstanding recommended ratios of 3:1 and 4:1 for improving student behavior and academic performance (e.g., Good & Grouws, 1977; Pfiffner et al., 1985; Stitcher et al., 2009). Comparing the observed praise to the total number of behavioral requests (i.e., Corrective Feedback, Clear 1 to 2 Step Commands, and Vague Commands) showed teachers praising at a ratio of approximately two praise statements for every five demands.


Table 8
HLM analysis with CSS Part 2 strategy rating scales frequency scores.
Entries are β (SE), Z, p. For each scale, "Experience" lists the three effect-coded years-of-teaching-experience dummy variables (3 or fewer, 4 to 9, and 10 to 19 years) and "Grade × experience" lists the corresponding interaction terms; p values are for individual tests (see Notes to Table 8 below).

Part 2 Instructional strategies scales
Total scale
  Grade: 0.74 (0.33), Z = 2.21, p = .027
  Experience: −1.85 (3.34), Z = −0.55, p = .580; −2.23 (3.11), Z = −0.71, p = .474; 1.08 (2.94), Z = 0.36, p = .713
  Grade × experience: 1.01 (1.19), Z = 0.84, p = .397; 1.09 (1.03), Z = 1.05, p = .292; −0.97 (0.98), Z = −0.99, p = .320
Instructional methods composite
  Grade: 0.29 (0.20), Z = 1.44, p = .148
  Experience: −0.69 (2.02), Z = −0.34, p = .733; −2.91 (1.88), Z = −1.54, p = .122; 0.97 (1.78), Z = 0.54, p = .584
  Grade × experience: 0.55 (0.72), Z = 0.75, p = .448; 1.05 (0.62), Z = 1.69, p = .091; −0.62 (0.59), Z = −1.05, p = .290
Student focus learning & engagement
  Grade: 0.47 (0.10), Z = 4.53*, p < .001
  Experience: 0.08 (1.03), Z = 0.08, p = .936; −1.41 (0.96), Z = −1.46, p = .143; −0.19 (0.91), Z = −0.21, p = .830
  Grade × experience: 0.04 (0.37), Z = 0.13, p = .895; 0.51 (0.32), Z = 1.59, p = .111; −0.19 (0.30), Z = −0.64, p = .517
Instructional delivery
  Grade: −0.18 (0.23), Z = −0.78, p = .434
  Experience: 0.33 (0.77), Z = 0.43, p = .661; −0.34 (0.63), Z = −0.54, p = .586; 0.10 (0.63), Z = 0.15, p = .875
  Grade × experience: not estimated (see note d)
Academic monitoring/feedback composite
  Grade: 0.45 (0.16), Z = 2.74*, p = .006
  Experience: −1.13 (1.65), Z = −0.68, p = .493; 0.48 (1.57), Z = 0.31, p = .751; 0.12 (1.45), Z = 0.08, p = .932
  Grade × experience: 0.45 (0.59), Z = 0.76, p = .445; 0.08 (0.50), Z = 0.15, p = .874; −0.35 (0.48), Z = −0.72, p = .466
Promotes student thinking
  Grade: 0.82 (0.09), Z = 8.38*, p < .001
  Experience: −0.66 (0.98), Z = −0.68, p = .497; 0.23 (0.90), Z = 0.26, p = .794; −0.43 (0.86), Z = −0.49, p = .619
  Grade × experience: 0.32 (0.35), Z = 0.93, p = .350; 0.14 (0.30), Z = 0.47, p = .633; −0.08 (0.28), Z = −0.29, p = .770
Academic performance feedback
  Grade: −0.37 (0.10), Z = −3.68*, p < .001
  Experience: −0.42 (1.00), Z = −0.41, p = .675; 0.10 (0.92), Z = 0.11, p = .906; 0.57 (0.88), Z = 0.65, p = .511
  Grade × experience: 0.10 (0.35), Z = 0.29, p = .772; −0.02 (0.30), Z = −0.06, p = .946; −0.27 (0.29), Z = −0.94, p = .343

Part 2 Behavioral management strategies scales
Total scale
  Grade: −1.67 (0.44), Z = −3.79*, p < .001
  Experience: −4.56 (3.30), Z = −1.38, p = .166; −0.63 (3.03), Z = −0.21, p = .834; 1.96 (2.97), Z = 0.65, p = .510
  Grade × experience: 2.13 (1.16), Z = 1.82, p = .068; 0.05 (1.00), Z = 0.05, p = .960; −0.46 (0.97), Z = −0.47, p = .636
Proactive methods composite
  Grade: −0.28 (0.25), Z = −1.12, p = .261
  Experience: −1.97 (1.88), Z = −1.05, p = .293; −0.42 (1.70), Z = −0.25, p = .803; 0.65 (1.69), Z = 0.38, p = .698
  Grade × experience: 0.61 (0.66), Z = 0.92, p = .353; 0.18 (0.56), Z = 0.31, p = .750; −0.18 (0.55), Z = −0.32, p = .742
Prevention management
  Grade: −0.08 (0.12), Z = −0.67, p = .499
  Experience: −1.06 (0.91), Z = −1.16, p = .244; 0.43 (0.83), Z = 0.52, p = .598; −0.12 (0.84), Z = −0.15, p = .878
  Grade × experience: 0.43 (0.32), Z = 1.34, p = .179; p = .640; p = .895
Directives/Transitions
  Grade: −0.18 (0.22), Z = −0.81, p = .416
  Experience: −0.56 (0.73), Z = −0.77, p = .440; −0.07 (0.60), Z = −0.12, p = .903; 0.29 (0.61), Z = 0.48, p = .630
  Grade × experience: not estimated (see note d)
Behavioral feedback composite
  Grade: −1.41 (0.26), Z = −5.41*, p < .001
  Experience: −2.55 (1.95), Z = −1.30, p = .191; 0.06 (1.78), Z = 0.03, p = .972; 0.83 (1.74), Z = 0.47, p = .633
  Grade × experience: 1.47 (0.69), Z = 2.13, p = .033; −0.16 (0.59), Z = −0.28, p = .776; −0.19 (0.57), Z = −0.33, p = .738
Praise
  Grade: −0.99 (0.14), Z = −6.81*, p < .001
  Experience: −1.87 (1.08), Z = −1.72, p = .084; 0.37 (0.99), Z = 0.37, p = .706; 0.67 (0.96), Z = 0.70, p = .483
  Grade × experience: 0.91 (0.38), Z = 2.37, p = .018; 0.00 (0.32), Z = 0.02, p = .979; −0.24 (0.31), Z = −0.77, p = .438
Corrective feedback
  Grade: −0.45 (0.15), Z = −2.90*, p = .004
  Experience: −0.76 (1.15), Z = −0.66, p = .509; −0.39 (1.05), Z = −0.37, p = .711; 0.21 (1.02), Z = 0.20, p = .838
  Grade × experience: 0.59 (0.40), Z = 1.45, p = .147; −0.15 (0.34), Z = −0.44, p = .659; 0.00 (0.33), Z = 0.02, p = .981



Similarly, the ratio of observed feedback (i.e., Praise and Corrective Feedback) to the observed number of OTR prompts (i.e., Academic Response Opportunities, Clear 1 to 2 Step Commands, and Vague Commands) was approximately 1:4, suggesting teachers in this investigation provided feedback for only about 25% of the total OTR prompt opportunities presented. Findings of low rates of observed praise in this study are consistent with prior research (e.g., Gunter & Denny, 1998; Shores, Gunter, & Jack, 1993; Sutherland & Wehby, 2001; Sutherland et al., 2002).

The CSS Part 2 IS and BMS scales assessed the frequency of specific evidence-based instruction and behavior management strategies used. Comparing mean scores on the eight subscales to the maximum score for each revealed that teachers utilized evidence-based strategies between approximately 60% and 70% of the time. This finding is a positive one given that the frequency at which teachers implement the strategies associated with these eight subscales has been linked in the effective instruction literature to positive student outcomes (e.g., Creemers, 1994; Gable et al., 2009; Marzano, 1998; Tomlinson & Edisonson, 2003; Wang, Haertel, & Walhberg, 1993; Wenglinsky, 2002).

4.2. Grade level and years of teaching experience

Using HLM analyses, the present investigation found that grade-level assignment was related to general education kindergarten to fifth-grade teachers' use of some of the instructional and behavioral management strategies. Teachers assigned to lower grades used the CSS Part 1 teacher behavior of praise at a greater frequency than teachers assigned to the upper grades. Further analyses of the CSS Part 2 Praise subscale revealed that teachers in the lower grades exhibited greater frequency of praise than teachers in the upper grades. Similarly, the strategies associated with the IS Part 2 Academic Performance Feedback subscale, which measured aspects of praise specifically related to academics, occurred at greater frequencies for teachers in the lower grades compared to teachers in the upper grades. HLM results also revealed that upper grade teachers had higher discrepancy scores (i.e., greater need for change) in Academic Performance Feedback, Behavioral Feedback, and Praise than lower grade teachers. Overall, findings suggest elementary school teachers use praise statements less as students become older, replicating previous research with a contemporary sample (Brophy & Good, 1986; White, 1975).

Teachers assigned to lower grades were also observed using corrective feedback more often, on both the CSS Part 1 Corrective Feedback category and the Part 2 Corrective Feedback subscale, than those assigned to upper grades. Because praise and corrective feedback are complementary strategies that guide student behavior, it is not surprising that the lower grades showed increased use of both praise and corrective feedback compared to the upper grades. However, in this study, the ratio of praise to corrective feedback was consistent (i.e., 1:1) across grade levels.

Overall, educators in this study were observed delivering higher rates of commands (verbal requests as measured on the CSS Part 1) to students in lower grades relative to students in higher grades. From a developmental perspective, it is possible that younger students had less familiarity with classroom and school routines and may therefore require more verbal guidance from their teachers. These findings may also reflect educators' higher behavioral expectations that older elementary school students navigate more independently through instructional contexts. Consistent with these findings, teachers assigned to lower grades have been found to utilize more developmentally appropriate, "child-focused" practices (i.e., partner activities, small groups, individual learning centers, and experiential learning) compared to higher grade teachers (e.g., Bredekamp, 1989; Buchannan, Burts, White, & Charlesworth, 1998; Stipek & Byler, 1997). Because student behavior rates were not collected as part of this study, it is not possible to know whether decreased usage of BMS strategies is in response to improved student behavior or to some change in teacher expectations and practice. Given that research has found improved student functioning when BMS strategies are employed (Fabiano et al., 2007), a combination of factors may be contributing to these findings.

In this study, teachers' usage of metacognitive and critical thinking strategies, as measured by the Part 2 Promotes Student Thinking subscale, increased with grade level. Thus, teachers implement metacognitive and critical thinking strategies to greater degrees as children become older. In contrast to the current findings, Santuli (1991) did not observe grade-level differences when investigating second-grade and fifth-grade teachers' usage of metacognitive suggestions (e.g., comments that encourage students to reflect on their learning process) versus direct strategies (e.g., goal-oriented activities students can perform to complete a task) during mathematics instruction (Moely, Santulli, & Obach, 1995).

Notes to Table 8
Note.
a p values for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05.
b The three Experience entries correspond to three effects-coded dummy variables representing the Years of Teaching Experience groups. The first β estimates the distance of the intercept of the regression line for teachers with 3 or fewer years of experience from the average intercept of all Years of Teaching Experience groups at Grade = 0 (i.e., kindergarten); the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.
c The three Grade × experience entries correspond to three interaction effects, each involving an effects-coded dummy variable for Years of Teaching Experience. The first β estimates the difference between (a) the effect of Grade Level on the usage of a strategy for teachers with 3 or fewer years of experience and (b) the average effect of Grade Level across all Years of Teaching Experience groups; the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.
d No estimates or hypothesis tests for grade level by years of teaching experience interaction effects are available due to marginal maximum likelihood estimation difficulties encountered with the HLMs.
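Note a in Tables 7–9 refers to the Dunn–Sidak adjustment; a minimal sketch of the per-test alpha it implies is given below, where the number of tests m is illustrative rather than taken from the article.

```python
# Dunn-Sidak adjustment (note a): the per-test alpha that holds the family-wise
# error rate at alpha_fw across m tests; m = 7 here is illustrative only.
def dunn_sidak_alpha(alpha_fw: float, m: int) -> float:
    return 1.0 - (1.0 - alpha_fw) ** (1.0 / m)

print(round(dunn_sidak_alpha(0.05, 7), 4))  # approximately 0.0073
```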


Table 9
HLM analysis with CSS Part 2 strategy rating scales discrepancy scores.
Entries are β (SE), Z, p. For each scale, "Experience" lists the three effect-coded years-of-teaching-experience dummy variables (3 or fewer, 4 to 9, and 10 to 19 years) and "Grade × experience" lists the corresponding interaction terms; p values are for individual tests (see Notes to Table 9 below).

Part 2 Instructional strategies scales
Total scale
  Grade: 0.48 (0.29), Z = 1.62, p = .104
  Experience: 3.92 (2.23), Z = 1.38, p = .165; 0.64 (2.05), Z = 0.31, p = .754; −1.54 (2.00), Z = −0.77, p = .439
  Grade × experience: −0.71 (0.79), Z = −0.90, p = .366; −0.65 (0.68), Z = −0.96, p = .333; 0.37 (0.65), Z = 0.57, p = .566
Instructional methods composite
  Grade: 0.24 (0.17), Z = 1.45, p = .146
  Experience: 1.64 (1.27), Z = −1.29, p = .196; 1.06 (1.17), Z = 0.91, p = .362; −1.26 (1.14), Z = −0.11, p = .265
  Grade × experience: −0.33 (0.45), Z = −0.74, p = .458; −0.56 (0.38), Z = −1.44, p = .147; 0.30 (0.37), Z = 0.80, p = .418
Student focus learning & engagement
  Grade: 0.14 (0.09), Z = 1.53, p = .123
  Experience: 0.55 (0.68), Z = 0.81, p = .413; 0.50 (0.62), Z = 0.80, p = .423; −0.47 (0.61), Z = −0.77, p = .441
  Grade × experience: −0.15 (0.24), Z = −0.64, p = .516; −0.29 (0.20), Z = −1.44, p = .149; 0.15 (0.19), Z = 0.76, p = .446
Instructional delivery
  Grade: 0.10 (0.09), Z = 1.04, p = .294
  Experience: 1.04 (0.71), Z = 1.45, p = .146; 0.67 (0.66), Z = 1.02, p = .306; −0.83 (0.64), Z = −1.30, p = .192
  Grade × experience: −0.16 (0.25), Z = −0.65, p = .509; −0.29 (0.21), Z = −1.35, p = .174; 0.16 (0.21), Z = 0.77, p = .437
Academic monitoring/feedback composite
  Grade: 0.21 (0.14), Z = 1.50, p = .131
  Experience: 1.42 (1.09), Z = 1.30, p = .191; −0.28 (0.99), Z = −0.29, p = .770; −0.29 (0.97), Z = −0.30, p = .760
  Grade × experience: −0.36 (0.38), Z = −0.95, p = .339; −0.15 (0.32), Z = −0.47, p = .638; 0.08 (0.31), Z = 0.26, p = .788
Promotes student thinking
  Grade: 0.01 (0.08), Z = 0.12, p = .901
  Experience: 0.83 (0.64), Z = 1.30, p = .193; 0.01 (0.58), Z = 0.02, p = .983; −0.24 (0.57), Z = −0.42, p = .673
  Grade × experience: −0.17 (0.22), Z = −0.76, p = .442; −0.21 (0.19), Z = −1.21, p = .262; 0.03 (0.18), Z = 0.19, p = .848
Academic performance feedback
  Grade: 0.20 (0.07), Z = 2.71*, p = .006
  Experience: 0.57 (0.57), Z = 0.98, p = .323; −0.28 (0.52), Z = −0.54, p = .585; −0.06 (0.51), Z = 0.11, p = .905
  Grade × experience: −0.19 (0.20), Z = −0.93, p = .352; 0.05 (0.17), Z = 0.33, p = .737; 0.05 (0.16), Z = 0.30, p = .761

Part 2 Behavioral management strategies scales
Total scale
  Grade: 0.56 (0.29), Z = 1.90, p = .056
  Experience: 5.23 (2.22), Z = 2.34, p = .018; −1.35 (2.02), Z = −0.66, p = .503; −2.25 (2.03), Z = −1.11, p = .266
  Grade × experience: −1.44 (0.78), Z = −1.82, p = .067; 0.22 (0.66), Z = 0.33, p = .738; 0.72 (0.66), Z = 1.09, p = .271
Proactive methods composite
  Grade: −0.05 (0.15), Z = −0.31, p = .750
  Experience: 2.76 (1.19), Z = 2.31, p = .020; −0.63 (1.08), Z = −0.58, p = .556; −1.10 (1.09), Z = −1.01, p = .309
  Grade × experience: −0.69 (0.42), Z = −1.62, p = .103; 0.10 (0.35), Z = 0.29, p = .764; 0.37 (0.35), Z = 1.04, p = .297
Prevention management
  Grade: −0.04 (0.07), Z = −0.54, p = .586
  Experience: 1.53 (0.57), Z = 2.67, p = .0074; −0.48 (0.51), Z = −0.94, p = .345; −0.53 (0.51), Z = −1.03, p = .300
  Grade × experience: −0.47 (0.20), Z = −2.34, p = .018; 0.08 (0.17), Z = 0.49, p = .623; 0.23 (0.16), Z = 1.41, p = .157
Directives/Transitions
  Grade: −0.01 (0.10), Z = −0.12, p = .903
  Experience: 1.20 (0.77), Z = 1.55, p = .120; −0.13 (0.70), Z = −0.18, p = .852; −0.51 (0.69), Z = −0.73, p = .463
  Grade × experience: −0.19 (0.27), Z = −0.71, p = .476; 0.01 (0.23), Z = 0.06, p = .946; 0.11 (0.22), Z = 0.49, p = .621
Behavioral feedback composite
  Grade: 0.61 (0.16), Z = 3.76*, p < .001
  Experience: 2.44 (1.23), Z = 1.98, p = .047; −0.67 (1.11), Z = −0.60, p = .544; −1.19 (1.09), Z = −1.08, p = .276
  Grade × experience: −0.74 (0.43), Z = −1.70, p = .088; 0.11 (0.36), Z = 0.30, p = .759; 0.36 (0.36), Z = 1.01, p = .310
Praise
  Grade: 0.46 (0.10), Z = 4.38*, p < .001
  Experience: 1.09 (0.80), Z = 1.35, p = .174; −0.16 (0.72), Z = −0.02, p = .822; −0.66 (0.71), Z = −0.93, p = .350
  Grade × experience: −0.37 (0.28), Z = −1.30, p = .190; −0.01 (0.23), Z = −0.07, p = .943; 0.22 (0.23), Z = 0.96, p = .333
Corrective feedback
  Grade: 0.13 (0.07), Z = 1.73, p = .083
  Experience: 1.32 (0.58), Z = 2.24, p = .024; −0.48 (0.53), Z = −0.90, p = .362; −0.49 (0.52), Z = −0.94, p = .347
  Grade × experience: −0.36 (0.20), Z = −1.74, p = .081; 0.11 (0.17), Z = 0.66, p = .507; 0.14 (0.17), Z = 0.82, p = .406



No differences were found between the second- and fifth-grade teachers' frequency of metacognitive suggestions, although fifth-grade teachers did use more direct strategies. It was also apparent from our results that teachers of higher grades utilized more instructionally focused strategies (e.g., concept summaries) relative to teachers of lower grades. Similarly, Moely et al. (1992) examined the use of metacognitive and memory knowledge instructional practices among 69 kindergarten to sixth-grade teachers. In contrast to the current study and Santuli (1991), Moely et al. (1992) found that teachers used cognitive strategies more often in grades 2 and 3 compared to lower and higher grades. Thus, differences across studies suggest some variation in the use of these strategies; future research that employs multiple measures and multiple grade levels is needed to determine the use of these specific strategies and their contributions to student outcomes.

Interestingly, the present study confirmed teachers' usage of metacognitive strategies in the lower grades, yet it underscores a significant shift in the frequency with which teachers implement metacognitive and critical thinking strategies as grade level increases. Metacognition emerges during preschool and continues to develop throughout adolescence (Fisher, 1987, 1998). As students become older, their metacognitive abilities become stronger; thus, teachers may be more apt to place metacognitive demands on students or to use developmentally appropriate metacognitive learning activities. Another explanation for this shift in educators' practices in the upper grades may be related to state-wide testing requirements. For elementary school, state-wide testing occurs in third grade through fifth grade and, in most states, generally focuses on students' ability to think critically in the academic areas of literacy, mathematics, science, and social studies. Thus, the observed grade level effects could be due to a greater emphasis on teaching students the skills necessary to pass these tests.

Grade level effects were also present on the Student Focused Learning and Engagement subscale, with increased usage associated with increased grade level. These findings are surprising given the research in the late 1980s and 1990s advocating the implementation of developmentally appropriate (i.e., "child-centered") strategies, which focused on the primary grades of kindergarten to third grade and on preschool programs (e.g., Abbot-Shim & Sibley, 1997; Bredekamp, 1989; Bredekamp & Copple, 1997; Goldstein, 1997; Gronlund, 1995). Although the associated body of research notes differences in teachers' use of developmentally appropriate strategies by grade level (Abbot-Shim & Sibley, 1997; Buchannan et al., 1998), with the lower grades utilizing more strategies, this research focuses on the primary grades of kindergarten to grade 3 and on preschool programs.

At first glance, the present findings may seem to contradict this literature, but one must consider that the initial developmentally appropriate and child-centered literature focused exclusively on the primary grades of kindergarten to third grade, in an effort to bring the instructional techniques used in those grades more in line with preschool programs. Therefore, there is insufficient evidence that developmentally appropriate strategies occur at greater frequencies in the primary grades of kindergarten to third grade than in higher grades. Second, the revised developmentally appropriate guidelines set forth by the National Association for the Education of Young Children (Bredekamp & Copple, 1997) addressed the need for teachers to utilize both child-centered learning and traditional techniques (e.g., direct instruction) for kindergarten through eighth grade. Thus, it is well within the spectrum of expectations to find upper grade teachers using child-focused strategies.

Overall, no association was found between years of teaching experience and either (a) the frequency of use of instructional or behavioral management strategies or (b) the appropriateness of use of these strategies. Thus, teachers in this study used instructional and behavioral management strategies at consistent rates across years of teaching experience. Research on the moderating effect of years of teaching experience has primarily focused on student academic outcomes (Monk, 1994; Wang et al., 1993), not teacher professional practice (e.g., Ghaith & Yaghi, 1997; Guskey, 1988). Related to the present study's findings, Guskey (1988) found that teachers' willingness to use new instructional practices was not moderated by their years of experience, whereas Ghaith and Yaghi (1997) found that years of experience was negatively associated with teachers' willingness to adopt new practices.

4.3. Limitations and future directions

The teachers in the study came from only two geographic regions in the Northeast and from kindergarten through fifth grade. Participants also included only general education teachers. The study also did not collect detailed information on educators' prior education, training, or professional development. Thus, these results may not generalize to other geographic regions, grade levels, teachers with particular training or professional development experiences, or special education settings.

Notes to Table 9
Note.
a p values for individual tests; the Dunn–Sidak method was used to maintain the family-wise error rate (FW) below .05; * denotes FW < .05.
b The three Experience entries correspond to three effects-coded dummy variables representing the Years of Teaching Experience groups. The first β estimates the distance of the intercept of the regression line for teachers with 3 or fewer years of experience from the average intercept of all Years of Teaching Experience groups at Grade = 0 (i.e., kindergarten); the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.
c The three Grade × experience entries correspond to three interaction effects, each involving an effects-coded dummy variable for Years of Teaching Experience. The first β estimates the difference between (a) the effect of Grade Level on the usage of a strategy for teachers with 3 or fewer years of experience and (b) the average effect of Grade Level across all Years of Teaching Experience groups; the second and third β are the corresponding estimates for teachers with 4 to 9 and 10 to 19 years of experience, respectively.


Further, these results represent a sampling of teacher behavior: one hour of instructional time split across two lessons. However, it is important to note that the observation procedures used in the current study are consistent with observational practices commonly conducted in schools by elementary school principals and school personnel. The observational data are also limited to the specific operational definitions used in the coding scheme; observational codes with different foci may yield different results.

Teachers were aware of the observer's presence in the classroom, which may have influenced their behaviors due to reactivity or demand characteristics. In this study, procedures were in place to reduce the impact of the observer on teacher behavior (i.e., teachers and observers signed a written agreement that CSS scores could not be used for teacher performance evaluations, observations were announced, and observers did not interact with teachers or students while observing). Despite these efforts, this study did not examine the influence of observers on teacher practices, and thus the results must be viewed in light of this potential limitation.

This study did not assess the relation between teacher behavior (or changes in teacher behavior) and student academic or behavior outcomes. Thus, it remains unknown whether increased use of CSS strategies results in changes in student outcomes. Further, alternative measurement approaches are needed to study whether teacher behaviors intended to modify student behavior (academic or social) actually did so (see, for example, the Student Behavior Teacher Response observation code, which addresses the dependencies between teacher and student behaviors; Pelham, Greiner, & Gnagy, 2008; Vujnovic et al., in press). Finally, this study did not examine why teachers use specific strategies or sets of strategies. Further studies are needed to determine the reasons teachers use or do not use strategies known to be related to effective instruction and behavior management.

4.4. Implications for practice

The present results have implications for school psychologists, general education teachers, general education training programs, and professional development efforts. Results of this investigation suggest good news: in a sample of over 300 general education teachers, there was consistent evidence that general education teachers used best practices in two half-hour samples of their instruction time. On the other hand, rates of use were modest in some cases, and group averages were lower than the rates recommended by research, suggesting that recommendations for teachers made decades ago remain aspirational (e.g., the overall ratio of praise to corrective feedback was close to 1:1 rather than the recommended 3:1 or greater). The reasons for the discrepancy between professional recommendations and observed practice are not clear. It may be that general education teachers are not learning effective strategies for instruction and behavior management in their educational programs, or they may have learned these strategies and drifted from best practice. It is interesting to find that teachers use fewer metacognitive strategies and concept summaries with younger students compared to older students, even though metacognitive instructional strategies and concept summaries have been found useful for all ages, including preschoolers (e.g., Fisher, 1987, 1998). Similarly, teachers provide fewer praise statements as children progress through school, possibly due to increased expectations of student independence and self-management. Yet teachers could praise student behaviors that represent independence and self-management in an effort to shape and promote such behaviors. Educators' modest rates of strategy usage offer opportunities for school psychologists to engage in collaborative consultation aimed at improving teachers' Tier 1 practices. Overall, methods are needed to help teachers use and sustain the use of these strategies within a single school year and across successive school years.

The present study indicated that, across multiple instructional and behavioral management strategies, years of teaching experience overall was not related to strategy usage or to discrepancies in strategy usage. This is an important finding for individuals responsible for professional development. Specifically, professional development efforts will likely need to be targeted across faculty regardless of experience. Findings also suggest that a measure such as the CSS may be a valuable tool for providing individualized teacher feedback and follow-up support tailored to a teacher's repertoire of current practice (Reddy & Dudek, in press; Reddy, Fabiano, Barbarasch, & Dudek, 2012). An additional advantage of a measure such as the CSS is that it can be re-administered in an ongoing fashion to document a teacher's use of specific strategies over time and across content areas, serving as a means of progress monitoring implementation. Additionally, obtaining teachers' input on their use of best practices may promote self-reflection, collaboration, and communication with consultants (Reddy & Dudek, in press).
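As one way to picture the progress-monitoring use described above, the sketch below tabulates a single teacher's strategy counts across repeated observations; the dates, counts, and column names are invented for illustration and are not drawn from the study data.

```python
# Hypothetical progress-monitoring sketch: one teacher's CSS-style counts
# across repeated 30-minute observations (all values invented).
import pandas as pd

obs = pd.DataFrame({
    "date": pd.to_datetime(["2013-10-01", "2013-11-05", "2013-12-10"]),
    "praise_count": [6, 9, 13],
    "corrective_count": [7, 6, 5],
})
obs["praise_to_corrective"] = obs["praise_count"] / obs["corrective_count"]
print(obs)  # ratio moving from about 0.9:1 toward the 3:1 benchmark noted earlier
```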

5. Conclusion

Overall, this investigation presents findings related to kindergarten to fifth-grade general educators' use of instructional and behavior management strategies. Results suggest teachers are using many best practice approaches for promoting student learning and managing their classrooms, yet teachers do have areas that may warrant improvement. This study offers a snapshot of contemporary general education practice, and it yields useful information for principals, school psychologists, and directors of curriculum and instruction charged with ensuring all students receive optimal educational opportunities.

References

Abbot-Shim, M., & Sibley, A. (1997). Developmentally appropriate practices across grade levels. Paper presented at the annual meeting of the American EducationalResearch Association, Chicago, IL.

Abramowitz, A. J., O'Leary, S. G., & Rosen, L. A. (1987). Reducing off-task behavior in the classroom: A comparison of encouragement and reprimands. Journal ofAbnormal Child Psychology, 15, 153–163.

Acker, M. M., & O'Leary, S. G. (1987). Effects of reprimands and praise on appropriate behavior in the classroom. Journal of Abnormal Child Psychology, 15(4),549–557.


Adey, P., & Shayer, M. (1993). An exploration of long-term far-transfer effects following an extended intervention programme in the high school sciencecurriculum. Cognition and Instruction, 11(1), 1–29.

Alber, S. R., Heward, W. L., & Hippler, B. J. (1999). Teaching middle school students with learning disabilities to recruit positive teacher attention. ExceptionalChildren, 65, 253–270.

Bales, B. L. (2006). Teacher education policies in the United States: The accountability shift since 1980. Teaching and Teacher Education, 22, 395–407.Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A

meta-analysis. Review of Educational Research, 74, 29–58.Beaman, R., & Wheldall, K. (2000). Teacher's use of approval and disapproval in the classroom. Educational Psychology, 20, 431–446.Bender, W. N. (2008). Differentiating instruction for students with learning disabilities: Best teaching practices for general and special educators (2nd ed.) CityThousand

Oaks, CA: Corwin Press.Bredekamp, S. (Ed.). (1989). Developmentally appropriate practice in early childhood programs serving children from birth through age 8. Washington, DC: National

Association for the Education of Young Children.Bredekamp, S., & Copple, C. (Eds.). (1997). Developmentally appropriate practice in early childhood programs (Rev. ed.). Washington, DC: National Association for

the Education of Young Children.Brophy, J. (1998). Motivating students to learn. New York, NY: McGraw-Hill.Brophy, J., & Alleman, J. (1991). A caveat: Curriculum integration isn't always a good idea. Educational Leadership, 49(2), 66.Brophy, J. E., & Good, T. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research in teaching (pp. 328–375) (3rd ed.). New

York, NY: Macmillian.Buchannan, D. C., Burts, J. B., White, V. F., & Charlesworth, R. (1998). Predictors of the developmental appropriateness of the beliefs and practices of first, second,

and third grade teachers. Early Childhood Research Quarterly, 13, 459–483.Cantrell, S. (2013). Ensuring fair and reliable measures of effective teaching. Bill & Melinda Gates Foundation.Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological

Assessment, 6, 284–290.Creemers, B. P. M. (1994). The effective classroom. London, UK: Cassell.Duncan, A., Gurria, A., & van Leeuwen, F. (2011). Uncommon wisdom on teaching. Retrieved from the internet July 22, 2011 from http://www.huffingtonpost.

com/arne-duncan/uncommon-wisdom-on-teachi_b_836541.htmlEnglert, C. S. (1983). Measuring special education teacher effectiveness. Exceptional Children, 50, 247–254.Fabiano, G. A., Pelham, W. E., Gnagy, E. M., Burrows-MacLean, L., Coles, E. K., & Robb, J. A. (2007). The single and combined effects of multiple intensities of

behavior modification and multiple intensities of methylphenidate in a classroom setting. School Psychology Review, 36, 195–216.Fisher, R. (Ed.). (1987). Problem solving in primary schools. Oxford, UK: Blackwell.Fisher, R. (1998). Thinking about thinking: Developing metacognition in children. Early Child Development and Care, 141, 1–15.Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York, NY: Guilford Press.Forehand, R., & Long, N. (1996). Parenting the strong-willed child. Chicago, IL: Contemporary Books.Gable, R. A., Hester, P. H., Rock, M. L., & Hughes, K. G. (2009). Back to basics: Rules, praise, ignoring and reprimands revisited. Intervention in School and Clinic, 44,

195–205.Gage, Nathaniel L. (1978). The scientific basis of the art of teaching. New York: Teachers College Press.Ghaith, G., & Yaghi, H. (1997). Relationships among experience, teacher efficacy, and attitudes toward the implementation of instructional innovation. Teaching

and Teacher Education, 13, 451–458.Goldstein, L. S. (1997). Teaching with love: A feminist approach to early childhood education. New York, NY: Peter Lang.Good, T., & Grouws, D. (1977). Teaching effects: A process–product study in fourth grade mathematics classrooms. Journal of Teacher Education, 28, 49–54.Goodwin, L. D., & Goodwin, W. L. (1999). Measurement myths and misconceptions. School Psychology Quarterly, 14, 408–427.Gronlund, N. E. (1995). How to write and use instructional objectives (5th ed.)Englewood Cliffs, NJ: Prentice Hall.Gunter, P. L., & Denny, R. K. (1998). Trends and issues in research regarding academic instruction of students with emotional behavioral disorders. Behavioral

Disorders, 24, 44–50.Guskey, T. R. (1988). Teacher efficacy, self-concept, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 4,

63–69.Hall, R. V., Panyan, M., Rabon, D., & Broden, M. (1968). Instructing beginning teachers in reinforcement procedures which improve classroom control. Journal of

Applied Behavior Analysis, 1, 315–322.Hattie, J. A. (1992). Measuring the effects of schooling. Australian Journal of Education, 36, 5–13.Haywood, H. C. (2004). Thinking in, around, and about the curriculum: The role of cognitive education. International Journal of Disability, Development and

Education, 51(3), 231–252.Hedges, L. V., Laine, R. D., & Greenwald, R. (1994). An exchange: Part 1: Does money matter? A meta-analysis of studies of the effects of differential school inputs

on student outcomes. Educational Researcher, 23, 5–14.Hines, C. V., Cruickshank, D. R., & Kennedy, J. J. (1985). Teacher clarity and its relationship to student achievement and satisfaction. American Educational Research

Journal, 22, 87–99.Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2000). Elements of behavioral support plans: A technical brief. Exceptionality, 8, 205–215.Kalis, T. M., Vannest, K. J., & Parker, R. (2007). Praise counts: Using self-monitoring to increase effective teaching practices. Preventing School Failure, 51, 20–27.Kane, T. J., & Staiger, D. O. (2012). Gathering feedback for teaching: Combining high-quality observations with student surveys and achievement gains. MET

Research Paper. Seattle, Washington: Bill & Melinda Gates Foundation (Retrieved July 16, 2012, from http://www.metproject.org/downloads/MET_Gathering_Feedback_Research_Paper.pdf.)

Kern, L., & Clemens, N. (2007). Antecedent strategies to promote appropriate classroom behavior. Psychology in the Schools, 44, 65–75.Kirk, R. E. (1982). Experimental design (2nd ed.)Belmont, CA: Brooks/Cole Publishing Company.Knapp, T. R., & Brown, J. K. (1995). Ten measurement commandments that often should be broken. Research in Nursing & Health, 18, 465–469.Kounin, J. S. (1970). Discipline and group management in classrooms. New York, NY: Holt, Rinehart, and Winston.Lindsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. CA, Sage: Thousand Oaks.Madsen, C. H., Becker, W. C., & Thomas, D. R. (1968). Rules, praise, and ignoring: Elements of elementary classroom control. Journal of Applied Behavior Analysis, 1,

139–150.Marzano, R. J. (1998). A theory-based meta-analysis of research on instruction. Aurora, CO: Mid-continent Research for Education and Learning (Eric Document

Reproduction Service No. ED 427 087).Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA:

Association for Supervision and Curriculum Development.Mevarech, Z. R., & Kramarski, B. (1997). Improve: A multidimensional method for teaching mathematics in heterogeneous classrooms. American Educational

Research Journal, 34(2), 365–394.Moely, B. E., Hart, S. S., Leal, L., Santulli, K. A., Rao, N., Johnson, T., et al. (1992). The teacher's role in facilitating memory and study strategy development in the

elementary school classroom. Child Development, 63, 653–672.Moely, B. E., Santulli, K. A., & Obach, M. S. (1995). Strategy instruction, metacognition, and motivation in the elementary school classroom. In F. E. Weinert, &

W. Schneider (Eds.), Memory performance and competencies: Issues in growth and development (pp. 301–321). Mahwah, NJ: Erlbaum.Monk, D. (1994). Subject area preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 12, 125–145.National Center for Education Statistics (2010). Common Core of Data. Washington, DC: U.S. Department of Education, Institute of Education Sciences. Retrieved

from http://nces.ed.gov/ccd/districtsearch/


National Education Association (2010). National Education Association. Retrieved from the http://www.nea.org/NICHD Early Child Care Research Network (2002a). Child care and children's development prior to school entry. American Education Research Journal, 39, 133–164.NICHD Early Child Care Research Network (2002b). The interaction of child care and family risk in relation to child development at 24 and 36 months. Applied

Developmental Science, 6, 144–156.O'Leary, K. D., Kaufman, K. F., Kass, R. E., & Drabman, R. S. (1970). The effects of loud and soft reprimands on the behavior of disruptive students. Exceptional

Children, 37, 145–155.Partin, T., Robertson, R., Maggin, D., Oliver, R., & Wehby, J. (2010). Using teacher praise and opportunities to respond to promote appropriate student behavior.

Preventing School Failure, 54, 172–178.Pelham, W. E., Fabiano, G. A., & Massetti, G. M. (2005). Evidence-based assessment of attention-deficit/hyperactivity disorder in children and adolescents. Journal

of Clinical Child and Adolescent Psychology, 34, 449–476.Pelham, W. E., Greiner, A. R., & Gnagy, E. M. (1998). Summer treatment program for ADHD: Program manual. Buffalo, NY: CTADD.Pelham, W. E., Greiner, A. R., & Gnagy, E. M. (2008). Student behavior teacher response observation code manual. Unpublished observation code manual.Pfiffner, L. J., Rosen, L. A., & O'Leary, S. G. (1985). The efficacy of an all-positive approach to classroommanagement. Journal of Applied Behavioral Analysis, 18, 257–261.Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System [CLASS] manual: Pre-K. Baltimore, MD: Brookes Publishing.Reddy, L., & Dudek, C. (in press). Teacher progress monitoring of instructional and behavioral management practices: An evidence-based approach to improving

classroom practices. International Journal of School and Educational Psychology.Reddy, L., Fabiano, G., Barbarasch, B., & Dudek, C. (2012). Behavior management of students with Attention-Deficit/Hyperactivity Disorders using teacher and

student progress monitoring. In L. M. Crothers, & J. B. Kolbert (Eds.), Understanding and managing behaviors of children with psychological disorders: A referencefor classroom teachers (pp. 17–47). New York, New York: Continuum International Publishing.

Reddy, L., Fabiano, G., & Dudek, C. (2013). Concurrent validity of the Classroom Strategies Scale—Observer Form. Journal of Psychoeducational Assessment, 31,258–270.

Reddy, L., Fabiano, G., Dudek, C., & Hsu, L. (2013a). Development and construct validity of the Classroom Strategy Scale-Observer Form. School Psychology Quarterly.Reddy, L. A., Fabiano, G., Dudek, C. M., & Hsu, L. (2013b). Predictive validity of the Classroom Strategies Scale-Observer Form on statewide testing. School

Psychology Quarterly.Reddy, L. A., Kettler, R. J., & Kurz, A. (submitted for publication). School-wide educator evaluation for improving school capacity and student achievement in high

poverty schools: Year 1 of the school system improvement project.Rosen, L. A., O'Leary, S. G., Joyce, S. A., Conway, G., & Pfiffner, L. J. (1984). The importance of prudent negative consequences for maintaining the appropriate

behavior of hyperactive students. Journal of Abnormal Child Psychology, 12, 581–604.Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. C. Witrock (Ed.), Handbook of research on teaching (pp. 376–391) (3rd ed.). New York, NY: Macmillan.Santuli, K. A. (1991). Teachers' role in facilitating students strategic and metacognitive processes during the representational, solution, and evaluation phase of

mathematics problem solving. (Dissertation Abstracts International), 52 (10). (pp. 5559), 5559 (University Microfilms No. AAC92-09661).Shores, R. E., Gunter, P. L., & Jack, S. L. (1993). Classroom management strategies: Are they setting events for coercion? Behavioral Disorders, 2(18), 92–102.Stipek, D. J., & Byler, P. (1997). Early childhood education teachers: Do they practice what they preach? Early Childhood Research Quarterly, 12, 305–325.Stitcher, J. P., Lewis, T. J., Richter, M., Johnson, N. W., & Bradley, L. (2006). Assessing antecedent variables: The effects of instructional variables on student

outcomes through in-service and peer coaching professional development models. Education and Treatment of Children, 29, 665–692.Stitcher, J. P., Lewis, T. J., Whittaker, T. A., Richter, M., Johnson, N. W., & Trussell, J. R. (2009). Assessing teacher use of opportunities to respond and effective

classroom management strategies: Comparisons among high-and low-risk elementary school. Journal of Positive Behavior Interventions, 11, 68–81.Sugai, G., & Horner, R. H. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy, 24, 23–50.Sugai, G., & Horner, R. H. (2007). Is school-wide Positive Behavioral Support an evidence-based practice? Downloaded from the world wide web October 24,

2007, http://pbis.org/files/101007evidencebase4pbs.pdfSutherland, K. S., Adler, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to respond to academic requests on the classroom behavior of

students with EBD. Journal of Emotional and Behavioral Disorders, 11, 239–248.Sutherland, K. S., & Wehby, J. H. (2001). Exploring the relationship between increased opportunities to respond to academic requests and the academic and

behavioral outcomes of students with EBD: A review. Remedial and Special Education, 22, 113–121.Sutherland, K. S., Wehby, J. H., & Yoder, P. J. (2002). Examination of the relationship between teacher praise and opportunities for students with EBD to respond to

academic requests. Journal of Emotional and Behavioral Disorders, 10, 5–13.Taylor, B. M., Pearson, P. D., Peterson, D. S., & Rodriguez, M. C. (2003). Reading growth in high-poverty classrooms: The influence of teacher practices that

encourage cognitive engagement in literacy learning. Elementary School Journal., 104, 3–28.Thomas, D. R., Becker, W. C., & Armstrong, M. (1968). Production and elimination of disruptive classroom behavior by systematically varying teacher's behavior.

Journal of Applied Behavior Analysis, 1, 35–45.Tomlinson, C. A., & Edisonson, C. C. (2003). Differentiation in practice: A resource guide for differentiating curriculum, grades K-5. Alexandria, VA: Association for

Supervision and Curriculum Development.Vujnovic, R. K., Fabiano, G. A., Pelham, W. E., Greiner, A., Waschbusch, D. A., Gera, S., et al. (in press). The Student Behavior Teacher Response (SBTR) System:

Preliminary psychometric properties of an observation system to assess teachers' use of effective behavior management strategies in preschool classrooms.Education and Treatment of Children (in press).

Walberg, H. J. (1986). Synthesis of research on teaching. In M. Wittrock (Ed.), Handbook of Research on Teaching (3rd ed.). New York, NY: Macmillan.Walker, H. M., & Buckley, N. K. (1968). The use of positive reinforcement in conditioning attending behavior. Journal of Applied Behavior Analysis, 1, 245–252.Walker, H. M., Colvin, G., & Ramsey, E. (1995). Antisocial behavior in school: Strategies and best practices. Pacific Grove, CA: Brooks/Cole.Walker, H. M., & Eaton-Walker, J. E. (1991). Coping with noncompliance in the classroom: A positive approach for teachers. Austin, TX: Pro-Ed.Wang, M. C. (1991). Productive teaching and instruction: Assessing the knowledge base. Phi Delta Kappan, 71, 470–478.Wang, M. C., Haertel, G. D., & Walhberg, H. J. (1993). Toward a knowledge base for school learning. Review of Educational Research, 63, 249–294.Ward, M. H., & Baker, B. L. (1968). Reinforcement therapy in the classroom. Journal of Applied Behavior Analysis, 1, 323–328.Wenglinsky, H. (2002, February 13). How schools matter: The link between teacher classroom practices and student academic performance. Education Policy Analysis

Archives, 10(12), (Retrieved November 2, 2005, from: http://epaa.asu.edu/epaa/v10n12/)White, M. A. (1975). Natural rates of teacher approval and disapproval in the classroom. Journal of Applied Behavioral Analysis, 8, 367–372.What Works Clearinghouse (2012). What works clearinghouse. Downloaded from the internet on April 15, 2012 at http://ies.ed.gov/ncee/wwc/Ysseldyke, J., & Burns, M. (2009). Functional assessment of instructional environments for the purpose of making data-driven instructional decisions. In T. Gutkin,

& C. Reynolds (Eds.), The handbook of school psychology (pp. 410–433) (4th ed.). Hoboken, NJ: Wiley.Ysseldyke, J., & Elliott, J. (1999). Effective instructional practices: Implications for assessing educational environments. In C. Reynolds, & T. Gutkin (Eds.),

The handbook of school psychology (pp. 497–518) (3rd ed.). New York, NY: Wiley.

