
The content of this publication does not necessarily reflect the views or policies of the Office of Educational Research and Improvement, U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. This document was produced with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract no. ED-01-CO-0015.

SERVE North Carolina Office: 800-755-3277 (Main Office)

SERVE Florida Office: 800-352-6001

SERVE Georgia Office: 800-659-3204

www.serve.org

THE VISION MAGAZINE
Volume 1, Number 2, 2002


LETTER FROM THE EXECUTIVE DIRECTOR

Dear Colleagues:

In this expanded issue of The Vision, we examine emerging thinking about assessment systems that leads to improved instructional practices, and we discuss the challenges educators and policymakers face when designing such systems. Accountability systems establish responsibility in an effort to raise educators’ and students’ performance to achieve a higher standard of quality. Designing systems that encourage improvement rather than just punish failure, and that promote achieving higher standards rather than just higher test scores, is a daunting task.

In our feature story, we report on the work of the Commission on Instructionally Supportive Assessment to define what a good state assessment system might look like and to offer recommendations for achieving systems that effectively improve teaching and learning. Included is an interview with the Commission’s chairperson, Dr. James Popham, and a response to the Commission’s recommendations by Mr. Lou Fabrizio, the Director of Accountability Services for the North Carolina Department of Public Instruction.

To effectively improve student achievement, district leaders need help in staying focused on improving the quality of learning opportunities their teachers provide students. In the article “Standards of Classroom Practice: Defining a Vision of Quality in the Classroom,” we explore ways to support teachers in their efforts to provide high-quality, interesting, challenging, and purposeful learning experiences for students and discuss what some districts in the Southeast are currently doing to begin to achieve this vision of quality. Lindsay Clare Matsumura of CRESST continues the discussion of classroom quality by describing her organization’s research aimed at developing a method for investigating the quality of students’ learning environments by developing criteria for rating teachers’ assignments.

In this issue’s installment of our continuing series on relevant legislation, we look at the assessment requirements of the No Child Left Behind Act and discuss the implications of those requirements for those designing state assessment systems. Finally, we return our focus to the Southeast with reports from our policy analysts in Florida and South Carolina and a description of our Southern States Seminar, where participants worked to create a regional response to school improvement efforts and state policy recommendations.

John R. Sanders, Ed.D.
Executive Director

Produced by:
SERVE, The Regional Educational Laboratory for the Southeast

Associated with the School of Education, University of North Carolina at Greensboro

Edited by:
Wendy McColskey, Director of Assessment, Accountability, and Standards, SERVE

Donna Nalley, Publications Director, SERVE

Designed by:
Tracy Hamilton, Publications Art Director, SERVE

The Vision Editorial Board:
Diana Bowman, Helen DeCasper, Karen DeMeester, Paula Egelson, Jane Griffin, Kathleen Mooney, Donna Nalley, Beth Thrift, Michael Vigliano, Jean Williams, Rick Basom

Select photographs used in this magazine are from Comstock Klips, Corbis, Creatas, EyeWire Images, Image 100, and/or PhotoDisc.


ABOUT SERVE

SERVE is an education organization with the mission to promote and support the continuous improvement of educational opportunities for all learners in the Southeast. To further this mission, SERVE engages in research and development that address education issues of critical importance to educators in the region and provides technical assistance to SEAs and LEAs that are striving for comprehensive school improvement. This critical research-to-practice linkage is supported by an experienced staff strategically located throughout the region. This staff is highly skilled in providing needs assessment services; conducting applied research in schools; and developing processes, products, and programs that inform educators and increase student achievement.

TABLE OF CONTENTS

VISION FEATURE STORY: Can High-Stakes State Testing Be Improved?

SERVE SPOTLIGHT: Standards of Classroom Practice: Defining a Vision of Quality in the Classroom

GUEST COMMENTARY: Lindsay Clare Matsumura on Taking a Close Look at the Quality of Teachers’ Assignments and Student Work

EXAMPLE FROM THE FIELD: The Southern States Seminar: A Regional Strategy for Success

SERVE SENIOR POLICY RESEARCH ANALYSTS ON REGIONAL ISSUES: Florida Approves New Grading Rule for School Accountability System; South Carolina Addresses African-American Student Achievement

ESEA ISSUES AND UPDATES: New Testing Requirements

SERVE RESOURCES


FEATURE STORY

Recently, a group of major educational professional associations took a proactive step in trying to influence the debate about the nature of state testing programs. These five associations (AASA, NAESP, NASSP, NEA, and NMSA—see the list that follows for complete names and website contact information) were concerned that high-stakes state testing can have unintended negative consequences on the quality of teaching and learning. At the same time, they understood that some policymakers perceive educators as “running from accountability.” Thus, this coalition of organizations turned to an independent commission of nationally recognized experts in assessment, curriculum, and instruction to attract the attention of state policymakers and urge them to examine their testing programs. Their goal was to engage the public in a debate about what a good state assessment system might look like.

The five associations are:

AASA—American Association of School Administrators (www.aasa.org)

NAESP—National Association of Elementary School Principals (www.naesp.org)

NASSP—National Association of Secondary School Principals (www.nassp.org)

NEA—National Education Association (http://nea.org)

NMSA—National Middle School Association (http://nmsa.org)


W. James Popham, a noted professor emeritus of the University of California and author of numerous articles and books on assessment, was invited to serve as the chair of the Commission on Instructionally Supportive Assessment, and he selected members of the Commission from a list of nominees submitted by the associations. The Commission functioned with complete autonomy from the convening organizations. The Commission’s goal was to develop a set of recommendations that could potentially improve state-administered achievement tests so that they might serve the dual purposes of accountability (e.g., grading schools) and improvement of instruction, rather than just the single purpose of satisfying public demands for accountability.

The Commission’s ten members released their report, entitled Building Tests to Support Instruction and Accountability: A Guide for Policymakers, in October 2001. The report, which is available on each of the associations’ websites, advanced nine requirements for states to consider. The following statement accompanied the release of the report:

In the rush to implement testing systems, too few states have tests that are designed to help teachers improve teaching methods and curriculum or are useful in helping children learn higher-level thinking skills. As educators closest to America’s students, we recognize that current tests fall short in the effort to improve teaching and learning. As state-mandated tests continue to be the measure of school quality, it is imperative that quality tests be implemented in a way that helps students achieve and acquire a love of learning. (October 23, 2001 Press Release)

The nine requirements from the Commission’s report are summarized below. An interview with Dr. Popham follows to aid in conceptualizing the thinking behind the requirements. A response to the Commission’s report from a state assessment director continues the dialogue. Finally, a set of discussion questions related to each of the nine requirements is offered.

Building Tests to Support Instruction and Accountability: A Guide for Policymakers

The report outlined the following nine requirements:

1. A state’s content standards must be prioritized to support effective instruction and assessment.

This requirement recommends that states review and prioritize their content standards, resulting in a high-priority set. The purpose of the prioritization is to identify a small number of content standards, suitable for large-scale assessment, that represent the most important skills and knowledge students need to learn in school.

2. A state’s high-priority content standards must be clearly and thoroughly described so that the knowledge and skills students need to demonstrate competence are evident.

This requirement builds on the assumption that better teaching and testing are more likely if educators clearly understand where they need to focus and test developers understand clearly what they are to test. Therefore, states should analyze their high-priority standards and identify what students must do and understand to demonstrate they have achieved standards. The analysis should result in relatively brief, educator-friendly descriptions of each high-priority standard’s meaning. The high-priority standards and their descriptions need to be articulated across grade levels so that they build from grade to grade.

3. The results of a state’s assessment of high-priority content standards should be reported standard-by-standard for each student, school, and district.

This requirement assumes that educators can do little to improve students’ achievement without information about their performance on each high-priority content standard. In other words, standard-by-standard reporting of students’ performance on state tests is critical to the success of standards-based educational reform. The Commission acknowledges that students will need to answer several items for each content standard assessed, which, in turn, means fewer content standards can be assessed by state tests given typical time constraints. They believe that having standard-by-standard classroom, school, or district-level information will enable educators to evaluate the effectiveness of instruction related to each standard and then improve instruction where needed. They point out that information based on just a few items is likely to be less reliable as a measure of an individual student’s true knowledge and skills and suggest that teachers will have to bring additional sources of classroom-based information to their evaluation and intervention decisions for individual students.
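The reliability penalty of short subscores can be made concrete with the Spearman-Brown formula, a standard psychometric result for projecting the reliability of a shortened test; the numbers below are illustrative assumptions, not figures from the Commission’s report. If a full 50-item test has reliability 0.90, a subscore built on only 4 comparable items is projected at

    \[ \rho_4 = \frac{(4/50)(0.90)}{1 + \left((4/50) - 1\right)(0.90)} = \frac{0.072}{0.172} \approx 0.42, \]

far below what is usually considered adequate for decisions about an individual student, which is why the Commission points teachers toward supplementary classroom-based evidence.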

4. A state must provide educators with optional classroom assessment procedures that can measure students’ progress in attaining content standards not assessed by state tests.

This requirement recognizes that statewide tests measure a limited number of high-priority standards and that other important standards, not on the state tests, should receive instructional and assessment attention in the classroom. Therefore, states need to develop optional classroom assessments for these non-state-tested standards to support educators’ efforts to teach a wide range of skills and knowledge. The Commission recommends that states conduct professional development for educators regarding how to best use optional assessments for instructional improvement and also how to design their own classroom assessments to measure students’ progress in meeting state standards. States need to let educators know how results from classroom assessments can be reported alongside state test results to provide parents and policymakers with a complete picture of students’ achievement on all the state’s content standards.

5. A state must monitor the breadth of the curriculum to ensure that instructional attention is given to all content standards and subject areas, including those that are not assessed by state tests.

This requirement obliges states to support educators in curriculum coverage that extends beyond just those prioritized standards tested by the state. The Commission acknowledges that state assessments can inadvertently lead to a narrowing of the curriculum as educators work to ensure that students do well on state tests. A narrowed curriculum means dealing almost exclusively with content assessed on state tests. To prevent narrowing, this recommendation suggests monitoring curricular breadth at the state, district, and/or school levels using quantitative and/or qualitative methods.

6. A state must ensure that all students have the opportunity to demonstrate their achievement of state standards; consequently, it must provide well-designed assessments appropriate for a broad range of students, with accommodations and alternate methods of assessment available for students who need them.

This requirement obliges states to design state assessments or appropriate alternatives that will provide accurate and useful information for the teacher on the extent to which students with special needs have demonstrated the skills and knowledge described in the state content standards. This requirement is consistent with federal laws that obligate states to develop guidelines for districts about how all students participate in the assessment.

7. A state must generally allow test developers a minimum of three years to produce statewide tests that satisfy the Standards for Educational and Psychological Testing and similar test-quality guidelines.

The Commission bases the need for this requirement on the belief that there exists widespread misunderstanding that high-quality achievement tests can be developed in two years or less. Recognizing that there is often pressure to produce tests quickly, they counter with the argument that these tests are far too important to be developed improperly. Experience shows that a minimum of three years is needed to develop a state test to assess high-priority standards in a way that promotes instructional improvement. The steps in developing a good test take time. They include prioritizing standards, determining skills/knowledge students must demonstrate for each standard to be tested, developing sufficient numbers of items, evaluating the items through small-scale pilots or other reviews, formal field-testing of all items, and assembling the final tests. Evidence regarding test quality, as called for in the Standards for Educational and Psychological Testing, is important to assemble after the test is developed.
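As a rough illustration of how those steps can consume the three-year minimum, the sketch below lays out one hypothetical schedule; the phase durations are assumptions for illustration only, not figures from the report.

    # Hypothetical development schedule; phase lengths are illustrative
    # assumptions, not the Commission's recommendations.
    phases = [
        ("Prioritize content standards",               4),
        ("Describe skills/knowledge per standard",     4),
        ("Write item pools for each standard",         8),
        ("Small-scale pilots and expert review",       6),
        ("Formal field-testing of all items",          8),
        ("Assemble final forms and document quality",  6),
    ]
    total = sum(months for _, months in phases)
    assert total >= 36, "schedule should respect the three-year minimum"
    for name, months in phases:
        print(f"{name:<45} {months:>2} months")
    print(f"{'Total':<45} {total:>2} months")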

8. A state must ensure that educators receive professional development focused on how to optimize children’s learning based on the results of instructionally supportive assessments.

This requirement obliges states to educate policymakers and others about the need to provide professional development activities that will promote success in the use of the instructionally supportive assessment systems advocated by these requirements. Educators will need help in learning how to use information for instructional improvement and how to use or develop classroom assessments that supplement state tests.

9. A state should secure evidence that supports the ongoing improvement of its state assessments to ensure those assessments are (a) appropriate for the accountability purposes for which they are used, (b) appropriate for determining whether students have attained state standards, (c) appropriate for enhancing instruction, and (d) not the cause of negative consequences.

The tests that the Commission advances in this report represent a new generation of state tests that provide information for accountability as well as information for improving instruction. The new kind of assessment system is envisioned as a combination of state tests that focus on high-priority content standards combined with results from classroom assessments that focus on non-state-tested standards and together provide a more complete picture. The report suggests that states conduct independent evaluations to find out if state and classroom assessments are working together as envisioned. Other issues for independent review are the degree to which students had sufficient opportunity to learn the standards being tested, the degree to which tests are sensitive to differences in instructional quality, and the nature of any negative, unanticipated consequences of state tests, such as dramatically increased dropout rates.


FEATURE STORY INTERVIEW:

Interview with Dr. James Popham, Chair of the Commission on Instructionally Supportive Assessment

In an effort to understand the logic behind the nine requirements developed by the Commission on Instructionally Supportive Assessment for consideration by policymakers and educators, SERVE interviewed Dr. Popham in December 2001. The interview highlights are summarized below.

Purpose of the Report

Why did these groups feel the need to call this Commission together to develop these requirements?

Popham: The primary impetus was the likelihood that there would be a stringent requirement for testing across the U.S. based on pending (at that time) federal legislation calling for grade 3–8 testing. In addition, many educators feel the current approach to state testing has problems. Both of these factors contributed to the need for the report.

You mentioned in the report’s foreword that all too often educators’ legitimate concerns about state assessment are not heard; do you think this report will be heard?

Popham: If people dismiss the report because they think they already know what these organizations have to say, they will be making a mistake, because what the professional organizations that sponsored the report promised the Commission was that they wouldn’t interfere with what we came up with. They said that if we produced the report and did it without charge (so it would not look as if we were just doing this for a fee), they would not interfere with the content. So the Commission did this report on the condition that whatever we came up with would not be changed by what they wanted, and the sponsoring organizations agreed and followed through. So it’s really an independent commission that they convened.

Each of the organizations recommended a slate of people, but I selected the members. I did the choosing primarily on the grounds that these people would not only represent a diversity of gender, ethnicity, and so on, but also that those selected would be very, very knowledgeable regarding both assessment and instruction, and they were.

Understanding the Nine Requirements

Can you talk a little about the first of the nine requirements, having to do with prioritizing the state’s content standards, and why this is the first requirement?

Popham: What people have to understand is that we have state curricular goals (i.e., content standards), but they’ve come up with an absolute plethora of content standards that realistically cannot be taught in the time teachers have available to them. In addition, because there are so many content standards, they absolutely cannot all be measured and certainly cannot be measured in such a way as to give feedback on a standard-by-standard basis. So the situation we have now with state testing is wrong. We’ve got to do something different to find out how schools are doing, something that gives educators a reasonable chance to improve the quality of instruction.

The hard reality of the current state-testing situation is that you’re not measuring all the standards (because there are too many), teachers can’t teach all the standards (because there are too many), and you can’t give the citizens and teachers good feedback from test results about specifically what is going well and what isn’t.

So one of the keys, according to your report, is that state content standards need to be more focused in terms of which ones a state is going to put its assessment dollars into (the first requirement)?

Popham: Exactly! What states can do is identify a group of curriculum experts (maybe the same people who came up with the content standards for that state) and ask them to put the content standards into three piles: the ones they think are absolutely essential, the ones they think are very desirable, and the ones they think are just desirable. Then, you have them look at just the absolutely essential ones and rank order them from the most important to the least important. Then, this essential list is given to the assessment people, who, given the amount of time they have for the test and the necessity to provide standard-by-standard test results, go as far down the list as they can with their test items, and, all of a sudden, you’ve got a very different kind of test.

What the Commission is saying is that states can’t test all of the content standards that are sitting out there right now, and to pretend to do so is hypocrisy. Therefore, if you can’t test them all, let us prioritize. If states prioritize and have a smaller number of content standards tested, then, of course, you have to ask about the content standards that don’t make it on the state test. We have other requirements that directly emerge from the first requirement. However, our first and most important assumption was that a state test that does not help instruction ought not to be used, which led us to the first requirement, which, in turn, led to the others.

If you have a high-stakes test and do not identify with clarity what is to be assessed, except to say in a general sense that the test will assess state content standards, then teachers really don’t know what will be coming at their students. That is the kind of uncertainty that breeds item-specific teaching (teaching to particular test items).


The teacher may feel he or she can’t win (i.e., I can’t do a good job in teaching these students because I don’t know what’s going to be assessed). Out of fear of having students not do well, the teacher may design teaching guided by specific information about what kind of items will be on the state test. The current situation (of states not communicating exactly what content standards are most essential and assessed by items on state tests) is breeding a level of instructional ineffectiveness and instructional dishonesty that is very undesirable.
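The prioritize-then-fill selection Popham describes can be sketched in a few lines of code. In the sketch below, the standards, the number of items per standard, and the time budget are all hypothetical, chosen only to show the mechanics:

    # Illustrative sketch of the selection logic described above; the
    # standards, item counts, and time budget are hypothetical.
    ITEMS_PER_STANDARD = 6    # items needed for standard-by-standard reporting
    MINUTES_PER_ITEM = 1.5    # assumed average response time per item
    TEST_MINUTES = 60         # assumed available testing time

    # "Absolutely essential" standards, already rank-ordered by the
    # curriculum panel from most to least important.
    ranked_standards = ["reading comprehension", "number operations",
                        "written expression", "measurement", "data analysis",
                        "scientific inquiry", "civic knowledge"]

    budget, selected = TEST_MINUTES, []
    for standard in ranked_standards:
        cost = ITEMS_PER_STANDARD * MINUTES_PER_ITEM
        if budget < cost:
            break             # the next standard no longer fits; stop here
        selected.append(standard)
        budget -= cost

    print("Standards assessed:", selected)   # first 6 of 7 fit in 60 minutes

Standards below the cutoff then fall to the optional classroom assessments and curriculum monitoring of Requirements 4 and 5.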

Implementation of Report Requirements

Do you think just publishing this report will result in changes in the design of state tests?

Popham: According to the folks at the NEA, there have been 14 states already that have expressed some interest in these requirements, and of the 14 that have expressed an interest, two or three might actually do something. So once you see the possibility of two or three states actually working on these requirements, it seems to me there might be some minds changing.

We developed a second report that contains sample language for a Request for Proposal (RFP) that would be issued by a state to create the kind of test that we’re talking about. All a state has to do is get that RFP report and discover how to put language into the RFP that would force commercial test developers to develop a test in line with the recommendations of the Commission. The two documents together are crucial. We didn’t want to come up with recommendations in the first report and then not give people the help on how to pull it off! If you look at some of the language in the RFP example, you will see that the kind of test the Commission envisions would be a very different test from the ones sitting out there now. (See Illustrative Language for an RFP to Build Tests to Support Instruction and Accountability, 2001, online from each convening association; e.g., www.nea.org.)

Finally, I think we tried to describe specific rather than general requirements so that those in a state could easily assess where the state stands on these requirements. Either they do it or they don’t. Let’s say you have standard-by-standard reporting of your test results; where’s the equivocation around that? Does the state do it or not? Does the state have a small set of prioritized content standards that it assesses on a state test, or does it develop a particular state test against a list of 200? Does the state supply optional classroom assessments for teachers; does it monitor the breadth of the curriculum in schools or districts so that students don’t experience a narrowed curriculum that reflects just the focus of the state test; does it allow test developers a minimum of three years to produce statewide tests; does it ensure that educators receive professional development on how to use the results of instructionally supportive assessments? These are specific questions generated by the requirements that people can ask and answer of their state.

Most of the states in the Southeast already have state tests, but they probably don’t produce the kind of standard-by-standard reporting your Commission is suggesting. Is that accurate?

Popham: That is one of the problems we will face with the new ESEA legislation. One of the easiest things to do is to say we’ve got tests in place, but, unfortunately, the tests may not be as good as they could be. The states may have state-developed tests aligned with their content standards, but they still don’t meet the requirements the Commission has outlined. They don’t have standard-by-standard reporting, which we believe is pivotal if you are going to have standards-based assessment that improves the quality of instruction. If you cannot find out from the test which standards kids are getting and which ones they’re not, you can’t help them improve. The Southeast has historically moved more rapidly in state testing than other places, but the kinds of tests they are using right now are not the kinds of tests that the Commission envisions.

Would you envision it costing a state more money to implement some of the requirements that go beyond the actual test development process (e.g., optional classroom assessments and ongoing evidence of effectiveness of state tests)?

Popham: Those kinds of things should be done in any state department and thus, ideally, should not represent new costs. The idea of continually evaluating to see whether your testing program is working (Requirement Nine) is just sensible management. As far as producing classroom assessment examples for teachers, a number of states are doing that right now based on the realization that they have to produce some of the tools necessary for teachers to use in the classroom. The big expense for some states will be building new tests, and, ideally, there will be some federal dollars there.

What can you say about the state and district relationship as regards responsibilities for student assessment?

Popham: If we have a statewide test that measures a smaller number of content standards, as we’re suggesting, and does so honestly, telling teachers the ones that will be measured, then there’s going to be a tendency for curricular reduction or narrowing. So what we’re suggesting to counteract this response at the local level is to 1) come up with some way of monitoring the extent to which the breadth of the curriculum is being taught (Requirement Five) and 2) supply some tools that would make it easier for teachers to pursue those content standards not assessed at the state level (Requirement Four). These functions could be designated as state or district responsibilities.

District-developed assessments would be optional. We have an obligation, especially with the new federal law, to say clearly to districts: here are the content standards that we (the state) really want you to teach, and they will be assessed by our state tests, and we will give you standard-by-standard reporting. However, if you want to go beyond our state assessment focus and assess some other important standards on your own, go for it. At least that way, the district will be able to plan and use their assessment development resources wisely to complement and build on what the state is doing with its assessment program. If I were running a district, I would want to see how we were doing on the small set of essential standards tested by the state, and I’d want to see how we were doing on the other important standards not assessed by the state, but that should be a local call.

Is there an organization that intends to monitor how states begin to consider these requirements?

Popham: The Commission’s report was released at a press conference at the National Press Club on October 23, 2001, in Washington, D.C. At that time, NEA announced that one year after the enactment of the federal law (ESEA), it would begin to annually monitor the extent to which states were creating tests consistent with these requirements. So there is going to be a systematic annual appraisal of states by the professional associations that convened the Commission.

Members of the Commission

Commission Chair: W. James Popham, Professor Emeritus at the UCLA Graduate School of Education and Information Studies and a past president of the American Educational Research Association

Eva L. Baker, Professor of Educational Psychology and Social Research Methods at the UCLA Graduate School of Education and Information Studies and the Co-Director of the National Center for Research on Evaluation, Standards, and Student Testing (CRESST)

David Berliner, Regents’ Professor, School of Education at Arizona State University

Carol Camp Yeakey, Professor of Urban Politics and Policy at the Curry School of Education, University of Virginia

James W. Pellegrino, Liberal Arts and Sciences Distinguished Professor of Cognitive Psychology and Distinguished Professor of Education, University of Illinois at Chicago

Rachel Quenemoen, Senior Fellow for Technical Assistance and Research at the National Center on Educational Outcomes, University of Minnesota

Flora V. Rodriguez-Brown, Professor of Curriculum and Instruction/Reading, Writing, and Literacy, University of Illinois at Chicago

Paul Sandifer, former Director of the South Carolina Department of Education’s Office of Student Performance Assessment

Stephen G. Sireci, Associate Professor in the Center for Education Assessment, School of Education, University of Massachusetts

Martha L. Thurlow, Director of the National Center on Educational Outcomes at the University of Minnesota



FEATURE STORY RESPONSE:

A State Assessment Director Responds to the Suggested Requirements

By Louis M. Fabrizio
(These comments do not necessarily reflect the opinions of the North Carolina Department of Public Instruction or SERVE.)

Editor’s Note: North Carolina is often referred to as a model for what state testing programs should be doing. The state tests were designed primarily for school accountability and, more recently, individual student accountability purposes. Because of his experiences in North Carolina, SERVE asked Lou Fabrizio, the Director of Accountability Services for the North Carolina Department of Public Instruction, for his reactions to the Commission’s recommendations for states.

The report, Building Tests to Support Instruction and Accountability: A Guide for Policymakers, is concise and easy to read, but somewhat idealistic. The authors, all well regarded and nationally known, have pieced together nine requirements for a new generation of state achievement tests, which are hard to argue against. However, the requirements seem to ignore the fact that no single test can do everything, as the comparison below of characteristics of tests designed for different purposes illustrates. It would seem that the authors of this new report believe that a single test can serve both purposes.

Comparison of Assessments Designed for Different Goals

Assessment Designed for Measurement:
● Valid
● Reliable
● Objective
● Cost-efficient
● Time-efficient
● Centrally mandated
● Widely applicable
● Centrally processed
● Multiple-choice
● Machine-scorable
● Delayed feedback
● Used independently
● Formal
● Producing stable scores
● Results designed for external user

Assessment Designed for Instruction:
● Quality judged by effect on instruction
● Design determined by instructional goals
● Instructional raison d’être
● Teacher-mandated
● Adapted to local context
● Test task of instructional value
● Locally scorable
● Immediate feedback
● Used with other information
● Informal
● Results subject to short-term change
● Meaningful to students

Source: Cole, N. S. (1987). A realist’s appraisal of the prospects for unifying instruction and assessment. New York: ETS Invitational Conference.

Two areas that resonate positively, based on my experiences in North Carolina, specify that test developers should have a minimum of three years to produce statewide tests (Requirement 7) and that states should ensure that educators receive professional development focused on how to optimize children’s learning based on the results of instructionally supportive assessments (Requirement 8). Obtaining the resources and staff to provide the professional development has been very difficult for states in the past, however, and professional development is an easy target when budget cuts are needed.

Requiring states to prioritize content standards (Requirement 1) is a good idea in theory, but even the Commission’s authors caution that schools and teachers may resort to focusing their instruction on the high-priority standards and not address all standards. Their response to schools that might not teach all standards is Requirements 4 and 5. These requirements suggest that states provide educators with optional classroom assessments for the lesser priority standards and monitor the breadth of the implemented curriculum to ensure that instructional attention is given to all content standards and subject areas. Again, these requirements sound good in theory, but I question the practicality of offering “optional” classroom assessments (Requirement 4) or asking the state to monitor the breadth of the curriculum (Requirement 5). Specifically, in Requirement 5, the authors recommend “monitoring of curricular breadth at these same three levels [state, district, and school] using quantitative and/or qualitative methods that states and school districts develop.” I believe this recommendation will be difficult to implement. Many state departments of education are facing budget and staff shortages. Securing funds to hire staff to do the monitoring could be an issue.

Requirement 3 suggests that states report test results “standard-by-standard” for each student, school, and state, but in their discussion, the authors caution about the unreliability of making these decisions based on just a few items. The score reporting is further compromised when you consider that the tests would only measure high-priority standards and not represent a sampling of all standards.

The solution of combining ongoing classroom assessment results with the results of state assessments is problematic due to measurement difficulties in combining results from non-mandated classroom assessments throughout the school year with the results of the statewide assessments across the various schools. Moreover, if the classroom assessments are optional, what happens if a school system decides not to use them?

Requirement 6 is a necessary requirement in light of federal statutes, but one that continues to be of major concern to states as they implement alternate assessments and offer accommodations for students with disabilities.

Although many essential topics were covered in the report, some important topics were not mentioned at all or were discussed only briefly. For example, field testing was mentioned very briefly in Requirement 7, but continually developing tests to meet these requirements would be both time consuming and expensive for schools as well as state departments of education. Furthermore, the significant issue of releasing test questions was not addressed. There was not much discussion of the timing of the tests. If the tests are administered at the end of the year, which I assume from my reading of the material, the test format becomes an issue. Are the tests solely multiple-choice, or do they contain constructed response items as well? If the tests contain constructed response items, the amount of testing time increases, and the number of items able to be administered in a typical testing time period decreases. The tests would need to be administered a month or two prior to the end of the school year to ensure that the items can be scored and the results reported to the schools before summer vacation.

At the time the authors wrote this report, there was no way to know what the results of the new reauthorization of the Elementary and Secondary Education Act (ESEA) would involve. That legislation now mandates annual testing in reading and mathematics in grades 3–8 beginning in the 2005–2006 school year and in the area of science beginning in 2007–2008. It will be interesting to see what impact this report has on states’ efforts to meet the requirements of the new ESEA. There seems to be a trade-off between what is tested, how many items per standard will be included, and how long, in terms of minutes and/or days, students will need to take these various tests. The most ambitious part of the new ESEA, however, is the stipulation that all students must be proficient on state assessments by the year 2014. Finding a guide that will help us get to that goal is what we really need.


FEATURE STORY DISCUSSION:

SERVE Offers Discussion Questions Around the Nine Requirements

1. A state’s content standards must be prioritized to support effective instruction and assessment.

● What process will be used to prioritize standards?

● Are current state standards good enough to use as a starting point for prioritization, or will standards need to be rewritten?

● Should the enhanced descriptions of each high-priority standard (in Requirement 2) be developed before the prioritization process so that it is clearer what the standards mean before they are prioritized?

2. A state’s high-priority content standards must be clearly and thoroughly described so that the knowledge and skills students need to demonstrate competence are evident.

● How should prioritized standards be explained to educators so that they understand these are prioritized for the purpose of focusing large-scale testing efforts?

● If high-priority standards are identified for which it is difficult to develop easily scored paper-and-pencil items (perhaps an extensive performance task is more appropriate), should states still develop items for a state test even though the item types used might lead to inappropriate assessment practices in classrooms (as teachers mimic state test items in their classroom assessment)?

● Where do performance standards fit into the picture?

3. The results of a state’s assessment of high-priority content standards should be reported standard-by-standard for each student, school, and district.

● If individual student results are not very reliable standard-by-standard, should they even be reported?

● What kind of information or training would a state have to provide to ensure that individual student standard-by-standard results were not misused?

4. A state must provide educators with optional classroom assessment procedures that can measure students’ progress in attaining content standards not assessed by state tests.

● Who will develop the classroom assessments to measure standards that are not assessed by the state? Who will pilot and validate the assessments as being good for classroom use?

● Will classroom assessments be developed only for grade levels and content areas tested at the state level?

● How will educator expertise in classroom assessment be continuously developed in the state?

5. A state must monitor the breadth of the curriculum to ensure that instructional attention is given to all content standards and subject areas, including those that are not assessed by state tests.

● Who will do this monitoring, and what kinds of resources would be needed?

● What will happen if a school or district is found to be narrowing the curriculum?



6. A state must ensure that all students have the opportunity to demonstrate their achievement of state standards; consequently, it must provide well-designed assessments appropriate for a broad range of students, with accommodations and alternate methods of assessment available for students who need them.

● What is the process for developing standard-by-standard reporting that could be reliably reported out for individuals with special needs?

7. A state must generally allow test developers a minimum of three years to produce statewide tests that satisfy the Standards for Educational and Psychological Testing and similar test-quality guidelines.

● If new forms of tests are developed each year from items found in a large item bank, should each year’s tests be formally field-tested?

● Prior to assembling the final test, is it important to review test drafts in a subject area across the grade progression tested (grades 3–8) so that questions about the relative grade appropriateness of items and the adequacy of representation of the most important concepts to master in each grade level can be addressed in the context of the whole set of tests?

● Should evidence as to the validity of the tests be provided for each annual form of the test?

8. A state must ensure that educators receive professional development focused on how to optimize children’s learning based on the results of instructionally supportive assessments.

● Whose job responsibility in states or districts will it be to ensure all educators (principals, district staff, teachers) are assessment-literate?

● How should an awareness of the need for improvement in assessment literacy be developed in a state or district? What is the state role? What is the district role? The university role?

● Given all the competing demands for the small amount of professional development time built into the school schedule (e.g., to support district and school improvement initiatives that change each year), how can states invite principals and teachers to become more assessment-literate without also addressing the issue of time?

9. A state should secure evidence that supports the ongoing improvement of its state assessments to ensure those assessments are (a) appropriate for the accountability purposes for which they are used, (b) appropriate for determining whether students have attained state standards, (c) appropriate for enhancing instruction, and (d) not the cause of negative consequences.

● Who is the audience for the evidence on how the assessment system can be improved (e.g., an external oversight committee), and how will the results be discussed in public forums with educators who are on the receiving end of state assessment systems so that a well-functioning feedback loop from the classroom to the policymakers and state department is ensured?

● Should information regarding how state tests are being used for individual student promotion/retention decisions be collected from districts, with feedback given to districts about the appropriateness or inappropriateness of their use of state test results for this purpose (given the warnings from measurement experts that a single test should not be used for promotion decisions)?



FEATURE STORY RELATED WORK:

CRESST Defines Standards for State Accountability Systems

By Dr. Joan Herman, Co-Director, CRESST, UCLA

Editor’s Note: The work of the Commission reported on earlier is one attempt to define criteria or guidelines against which states can evaluate their testing programs. The work of the Commission focused on state tests, how they are developed, and how student assessment information from various levels might be brought together to improve instruction. Others in the field are working on standards for state accountability systems. State accountability systems are typically designed to create incentives for students, schools, and districts to focus on improving student achievement. Most states hope to bring some or all students to proficient levels of performance. The measure of proficiency that students or schools are held to is defined differently across states. Definitions of Adequate Yearly Progress are used to identify schools and districts in need of improvement. In other words, state accountability systems typically involve defining and measuring student outcomes, making judgments about the adequacy of the student achievement results, and attaching consequences to the results achieved by students, schools, and districts. The work on standards for accountability systems described below starts from the point of view that accountability is currently a major purpose of state testing.

CRESST, the National Center for Research on Evaluation, Standards, and Student Testing, in partnership with the Consortium for Policy Research in Education (CPRE), with the Education Commission of the States (ECS), and with advice and review from numerous colleagues in research and practice, offers Standards for Educational Accountability Systems. These standards are intended to provide guidance to states and districts in conducting self-reviews of their own systems and to delineate criteria by which developing accountability systems can be judged. The Standards for Educational Accountability Systems represent compiled knowledge developed from sources including the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999), research findings on testing and accountability systems, and studies of best practices.

Because experience with accountability systems is still developing, the standards are intended to help evaluate existing systems and to guide the design of improved procedures. The standards strongly endorse each state’s responsibility to conduct continuing evaluation of its own accountability system. It is not possible at this stage in the development of accountability systems to know in advance how every element of an accountability system will actually operate in practice or what effects it will produce. Evaluations—conducted in-house or by universities, external organizations, or teams of experts—are essential if states are going to learn systematically from one another and the nation is going to judge the effectiveness of its efforts for children. Evaluation results will be essential to the continuing improvement of testing programs and accountability provisions.

In sum, the standards represent models of practice derived from three perspectives: research knowledge, practical experience, and ethical considerations. They should be conceived of as targets for state and local systems and as criteria to judge proposed models of accountability development. (It should be understood that tests included in an accountability system should meet the Standards for Educational and Psychological Testing.) What is highlighted here are criteria that apply especially to accountability systems. It is likely that additional standards will subsequently be developed based on reported evaluations of accountability system effects.

The Standards were developed by Dr. Eva Baker, Dr. Robert Linn, Dr. Joan Herman, and Dr. Daniel Koretz, and are organized into five categories:

1. Standards on System Components

● Accountability expectations should be made public and understandable for all participants in the system.

● Accountability systems should employ different types of data from multiple sources.

● Accountability systems should include data elements that allow for interpretations of student, institution, and administrative performance.

● Accountability systems should include the performance of all students, including subgroups that historically have been difficult to assess.

● The weighting of elements in the system, different test content, and different information sources should be made explicit.

● Rules for determining adequate progress of schools and individuals should be developed to avoid erroneous judgments attributable to fluctuations of the student population or errors in measurement (see the illustrative sketch after this list).

2. Testing Standards

● Decisions about individual students should not be made on the basis of a single test.

● Multiple test forms should be used when there are repeated administrations of an assessment.

● The validity of measures that have been administered as part of an accountability system should be documented for the various purposes of the system.

● If tests are to help improve system performance, there should be information provided to document that test results are modifiable by quality instruction and student effort.


● If test data are used as a basis of rewards or sanctions, evidence of technical quality of the measures and error rates associated with misclassification of individuals or institutions should be published.

● Evidence of test validity for students with different language backgrounds should be made publicly available.

● Evidence of test validity for children with disabilities should be made publicly available.

● If tests are claimed to measure content and performance standards, analyses should document the relationship between the items and specific standards or sets of standards.

3. Stakes

● Stakes for accountability systems should apply to adults and students.

● Incentives and sanctions should be coordinated for adults and students to support system goals.

● Appeal procedures should be available to contest rewards and sanctions.

● Stakes for results and their phase-in schedule should be made explicit at the outset of the implementation of the system.

● Accountability systems should begin with broad, diffuse stakes and move to specific consequences for individuals and institutions as the system aligns.

4. Public Reporting Formats

● System results should be made broadly available to the press, with sufficient time for reasonable analysis and with clear explanations of legitimate and potentially illegitimate interpretations of results.

● Reports to districts and schools should promote appropriate interpretations and use of results by including multiple indicators of performance, error estimates, and performance by subgroup.

5. Evaluation

● Longitudinal studies should be planned, implemented, and reported, evaluating effects of the accountability program. Minimally, questions should determine the degree to which the system
  ● Builds capacity of staff
  ● Affects resource allocation
  ● Supports high-quality instruction
  ● Promotes student equity of access to education
  ● Minimizes corruption
  ● Affects teacher quality, recruitment, and retention
  ● Produces unanticipated outcomes

● The validity of test-based inferences should be subject to ongoing evaluation. In particular, evaluation should address
  ● Aggregate gains in performance over time
  ● Impact on identifiable student and personnel groups
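The caution above about fluctuations of the student population can be made concrete with a small simulation; the school sizes, score scale, and distribution in this sketch are hypothetical, chosen only to show the size of the sampling noise.

    # Hypothetical illustration: with identical underlying performance, a
    # small school's year-to-year mean score swings far more than a large
    # school's, purely from sampling variation in who is enrolled.
    import random

    random.seed(1)

    def school_mean(n_students):
        # Every student's scale score is drawn from the same distribution
        # (mean 150, sd 20), so schools do not differ in true quality.
        return sum(random.gauss(150, 20) for _ in range(n_students)) / n_students

    for n in (25, 500):
        swings = sorted(abs(school_mean(n) - school_mean(n))
                        for _ in range(1000))
        print(f"{n:>3} students: median year-to-year swing "
              f"= {swings[500]:.1f} scale-score points")

A fixed gain threshold applied to both schools would flag the small one far more often, even though nothing about its instruction differs; this is the kind of erroneous judgment the rule above guards against.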

NOTE: A more comprehensive version of the standards, including comments, will be published as a forthcoming CRESST policy brief. See www.cse.ucla.edu.


SERVE SPOTLIGHT

You can hear the frustration in their voices when teachers say: You want us to increase test scores, but you don’t help us figure out how to do it. Or when a high school math teacher says: It’s so frustrating. I teach this Algebra I concept over and over, and they just don’t get it. How will they pass the state end-of-course test? What can I do? Or when a veteran teacher says: I know I’m not teaching in ways that will help my students with their long-term development as learners, but I feel pressured to “drill and kill” for them to do well on state tests because that’s what I’m going to be judged on.


External accountability demands for better test scores arrive at the doors of district leaders. Some districts leverage external accountability demands into strategies that build school capacity for a higher level of professionalism and commitment to the quality of education provided students. They realize that passing along more edicts to schools will not produce higher levels of learning.

Internal accountability means “how a school system hires, evaluates, and supports its staff; how it makes decisions; how it acquires and uses the best knowledge; how it evaluates its own functioning; and how it safeguards student welfare” (Darling-Hammond, 1997, p. 245). Successful districts use external accountability as an opportunity to build internal accountability.


District and school leaders are struggling to find ways to funnel teachers’ frustrations with pressure for higher test scores into a focus on improving the quality of learning opportunities they provide their students. In continuously improving the quality of learning experiences they provide students, teachers need ideas, tools, and feedback from outside experts. They also need help in using each other as resources and critics in designing more interesting, challenging, and purposeful learning experiences for students and in examining the quality of learning that results. Outside or external help, in the form of books, Web resources, purchased materials, training, and consultants, is often available. Internal support, in the form of a highly professional working environment that provides time for the continuous improvement of teaching to enhance student motivation and learning, is often lacking.

The Challenges of Accountability

In the early 1990s, concerns that American students needed to improve their thinking and problem-solving skills for the United States to remain internationally competitive were reflected in the student outcomes described in national and state standards documents. These standards documents (however overloaded or ill-defined in some cases) were commendable attempts to articulate a new vision for student performance. The standards implied a need for more cognitively engaging and rigorous instruction and assessment in classrooms. In the late 1990s, as strong state accountability policies were implemented in some states, the goal of getting more students to achieve grade-level proficiency (sometimes inappropriately defined by a single score on a state test) may have inadvertently supplanted (rather than supplemented) the goal of providing more rigorous and challenging instruction to all students.

The crux of the problem we now face with external accountability and high-stakes testing, according to Tony Wagner, co-director of the Change Leadership Group at Harvard University’s Graduate School of Education, is that “paper-and-pencil, computer-scored state tests do not begin to assess the competencies that large majorities of adults agree are essential today: the ability to comprehend difficult reading material and apply information to the solution of complex problems; the ability to write and speak clearly and thoughtfully; the ability to understand mathematical data and use math and technology as problem-solving tools; the ability to work effectively in teams; and finally, respect for others and an understanding of our roles as citizens.”

In other words, the competencies assessed by state tests can’t represent the total vision schools have for students’ development. One implication of Wagner’s statement is that if students are going to develop competencies that go beyond what can be assessed on a state test, teachers must provide high-quality learning opportunities reflecting the larger vision. The kinds of work one would see students doing in schools and districts focused on developing the skills that Wagner described are very different from the kinds of work one would observe students doing in schools and districts that focus exclusively on preparing students to do well on state tests.

For example, students who only understand reading comprehension to mean getting the right answers to multiple-choice questions about a passage or book (as found on state tests) will be much less able to face the challenges of difficult high school and college courses than students who have come to understand reading comprehension in a broader sense (e.g., summarizing, explaining, discussing, interpreting, evaluating, and, in general, making sense of what they read). Students come to these different understandings of reading comprehension, in part, from the kinds of work they are asked to do in school.

Reducing the Achievement Gap through Challenging Instructional Opportunities

Consider the following research findings that Linda Darling-Hammond reports in her 1997 book, The Right to Learn:

● When students of similar achievement levels are exposed to more and less challenging material, those given the richer curriculum systematically outperform those placed in less challenging classes (Alexander & McDill, 1976; Oakes, 1985; Gamoran & Berends, 1987).

● Teacher interaction with students in lower-track classes is less motivating and less supportive and also less demanding of higher-order reasoning and responses (Good & Brophy, 1986). The interactions are also less academically oriented and more likely to focus on criticisms of students’ behavior, especially for minority students (Eckstrom & Villegas, 1991; Oakes, 1985). Learning tasks are less engaging, and students are less engaged (Rosenbaum, 1976).


A substantial body of research has found that much of the difference in school achievement among students is due to the effects of substantially different school opportunities and, in particular, greatly disparate access to high-quality teachers and teaching. Although most research on the relationship between learning opportunities and outcomes is correlational, experimental studies offer strong evidence that what students learn is substantially a function of the opportunities they are provided.… It is becoming clear that differences in teacher expertise are a major reason for the difference in learning opportunities across schools and classrooms (Darling-Hammond, 1997, p. 271).

Although state test scores provide a glimpse into what students can do, they reveal little about the quality of the learning opportunities students routinely experience over time in a particular district or school.

Low organizational expectations for teachers’ classroom teaching make providing rigorous learning opportunities to all students even more challenging. As Elmore puts it, the perennial critique of American education is that “teaching and learning is in its most common form emotionally flat and intellectually undemanding and unengaging. Every school can point to its energetic, engaged, and effective teachers.… We regularly honor and deify these pedagogical geniuses. But these exceptions are not the rule. For the most part, we regard inspired and demanding teaching as an individual trait of teachers, much like hair color or shoe size, rather than as a professional norm, or an expectation that might apply to any teacher” (Elmore, 1996, p. 299).

To the extent that continuous improvement toward high-quality teaching is not the norm in many schools, district leaders face a challenge in improving the quality of classroom learning opportunities provided students. Leadership and support to help teachers take responsibility for the effectiveness and quality of the learning experiences they design and provide is critical.

High-Quality Learning Opportunities for Students

“If academic outcomes are to change, aggressive action must be taken to change the caliber of learning opportunities students encounter” (Darling-Hammond, 1997, p. 277).

For districts, improving student outcomes must involve continuous work on the quality of learning opportunities provided. Without explicit discussions about what constitutes high-quality classrooms, it is easy for schools to lose focus and get caught up in implementing a myriad of strategies that, although important, may have little impact on the quality of classroom instruction (such as changing schedules, buying technology, involving more parents and businesses, offering more outside tutoring, etc.).

Dr. Phillip Schlechty, founder and CEO of the Center for Leadership in School Reform in Louisville, Kentucky, argues convincingly that schools must focus on what they do to produce results rather than getting preoccupied with obtaining certain results. In other words, “schools do not produce test scores; students produce test scores.” Schools should focus on the quality of the “knowledge work” in which they ask students to engage. He has coined the term “working on the work” (WOW) to describe the importance of teachers creating and constantly improving upon the schoolwork they assign to their students.

For teachers to design high-quality work and experiences for their students, they need to be able to think and talk about the work they give students in terms of how it meets certain criteria. Schlechty defines ten qualities of work (which he calls the WOW Framework) that teachers might use in their professional dialogues and reflections about the instructional program:

1. Content and substance
2. Organization of knowledge
3. Product focus
4. Clear and compelling product standards
5. Protection from adverse consequences for initial failures
6. Affirmation of the significance of performance
7. Affiliation
8. Novelty and variety
9. Choice
10. Authenticity

Schlechty is not suggesting that these attributes need to be applied to every assignment given to students, but that teachers need to be supported as they begin to apply these criteria. His organization has worked with many districts in trying to engage schools in applying these attributes to think about the work students do.

The Case of Community District #2

Richard Elmore, a highly regarded education researcher and writer, has studied the progress of Community District #2 in New York, a district that has engaged in a long-term process of systemwide instructional improvement. He contrasts this district’s approach to other districts that tweak curriculum and instruction through the introduction of new programs and projects that have short life spans. According to Elmore, “this scattershot approach—popular with schools trying to keep up with the latest fads—may be ‘the professional equivalent of yo-yo dieting’” (p. 157). Embedded in the district’s continuous improvement philosophy is the assumption that the district must have a well-thought-out strategy for how it will influence the quality of classroom instruction. Elmore suggests that districts that are most likely to succeed in responding to the external pressure for student performance (state accountability) are those that use this pressure as an opportunity to build internal improvement processes aimed at the classroom.


Community District #2 began its focus on improving the quality of instruction in the late 1980s with teacher study groups, seminars, professional development on the teaching of literacy, and identification of models of good classroom practice (i.e., benchmarking). As a result of this work, the district began to develop “implicit standards of practice” that were commonly held expectations about what good teaching and learning in literacy was supposed to look like. They created a common language and mental model for good teaching in literacy. As time progressed, the district shifted into a more explicit focus on standards of practice in classrooms and on standards for student performance. In this district, explicit beliefs about classroom practice “grew out of the work” on improving instruction.

Quality Learning Opportunities: Continuous Improvement Takes Time

Studies show that teachers think they are providing challenging learning opportunities to students to a greater extent than they actually are. One study of biology teachers’ assessment practices showed that although teachers reported broad thinking and problem-solving goals for their students, their assessment practices, in reality, focused on recognition and recall kinds of skills (Bol & Strage, 1996). Even in college classrooms, those who have studied assessment practices have found that teachers tend to think they are teaching to higher-order thinking goals, but when student assignments are examined, higher-order thinking goals tend to be weakly represented (Angelo & Cross, 1993). Similarly, Spillane and Zeuli (1999), in a study of 24 mathematics teachers who said they were implementing mathematics reform as outlined by the NCTM standards, found that only four of those teachers were giving students work that reflected the broader problem-solving and reasoning goals one would expect to see.

Equalizing the quality of learning opportunities does not come about by just putting out some literature for teachers or conducting a workshop. Surprisingly, it may not come about by adopting an innovative curriculum either, because good instructional materials can be used poorly. In a presentation at a SERVE conference, Dr. Sam Stringfield, from the Center for Research on the Education of Students Placed At Risk, described the turnaround of a low-performing, high-poverty urban school (for more information, go to www.csos.jhu.edu/crespar/reports/report20.pdf). He reported that the school adopted the rigorous curriculum of a well-regarded, elite private school but phased it in slowly (one grade level at a time), with a staff person assigned to work with teachers on implementation and also providing feedback. In other words, experience suggests that the hard work of improving the quality of learning experiences provided students happens teacher by teacher in a professional work environment where hard questions about practice are discussed and individualized feedback to teachers is common practice.

Professional development for teachers should be school-based, preferably embedded in instructional efforts through collaborative analysis of student work. This is contrary to most traditional professional development, such as courses leading to certificates or degrees but unrelated to the specific needs of the school, quick-fix workshops that do not offer consistent feedback, or professional development offered by external trainers to help teachers adopt specific programs. The National Commission on Teaching and America’s Future recommends teachers “develop professional discourse around problems of practice” as a central component of professional development. What is needed, the Commission says, is replacing the isolation of teaching with “forums in which teaching and learning can be discussed and analyzed, and where serious examination of practice, its outcomes, and its alternatives is possible” (Lewis, 2001, p. 22).

— From a report on Using Research to Improve Education for Low-Income and Minority Students

Defining Standards of Classroom Practice

At some point in the process of trying to improve learning opportunities across schools and teachers in a district, educators must define, research, discuss, and refine the concept of “quality learning opportunities.” For example, many districts, as they have taken on the goal of improved literacy instruction over a sustained period of time, have realized a need to define, as best they could, what constitutes good instructional practice in literacy. It’s the “vision question”: what kind of classroom learning environments do we envision for this school or district that will engender the highest levels of learning for all our students?

When experienced educators come together, a common vision for good classroom practice emerges. SERVE has been working with a consortium of districts for the past five years called SERVE-Leads. Twice a year, SERVE convenes a meeting for district leadership teams to discuss, share, and work on issues related to improving student learning. At a meeting in February 2001, SERVE staff members spent a day asking each district team to think about what is important in establishing quality classroom learning environments. Although each district’s list was worded and organized differently, there were strong commonalities. For example, all the teams indicated that engaging students in interesting and challenging dialogue and cognitive work was central to achieving quality in the classroom.

The following Standards of Classroom Practice represent a synthesis of what SERVE-Leads educators believed were the characteristics of a powerful learning environment. The list is short, by design, in that participants were asked to come to agreement on five-to-seven compelling characteristics of a high-quality learning experience. The statements are intended to encourage professional dialogue and self-assessment.


1. Instruction is organized and delivered in well-planned units that focus on clear learning goals and expectations for student learning.

2. Work assigned to students is purposeful, frequently cognitively challenging, and results in useful feedback from the teacher to help students improve.

3. Daily instructional interactions between teacher and student are structured to promote the development of key thinking and communication skills.

4. Persistence and effort on the part of students are nurtured.

5. Strategies are implemented to help students develop responsibility for their progress toward stated goals through self-assessment of the quality of their own work and thinking.

6. Meaningful social and interpersonal relationships are established that honor individual differences and respect for one another as learners with different styles, personalities, and viewpoints.

The SERVE-Leads Standards of Classroom Practice, the Principles of Learning (see text box), the characteristics of “authentic pedagogy” (text box), and Schlechty’s WOW Framework described earlier are all attempts to define a powerful vision of effective classroom practice. Just as we talk about the need for a school vision to frame school improvement planning, teachers also need a classroom vision to focus ongoing dialogue about needed improvements.

Newmann and Wehlage’s Authentic Pedagogy

A five-year study of classrooms identified dimensions of good teaching related to higher levels of achievement, especially in low-income or high-minority schools. According to the researchers, “authentic” pedagogy involves (a) engaging students in higher-order thinking, (b) addressing central ideas thoroughly in order to help students acquire deep knowledge, (c) fostering substantive conversation among students, and (d) connecting student learning to the world beyond the classroom. For more information, see Successful School Restructuring: A Report to the Public and Educators by Newmann and Wehlage, available at www.wcer.wisc.edu/archives/completed/cors/default.htm.

Institute for Learning’s Principles of Learning

The Principles of Learning are condensed theoretical statements summarizing decades of learning research. These principles are designed to help educators analyze the quality of instruction and opportunities for learning that they offer to students.

1. Academic Rigor in a Thinking Curriculum
2. Accountable Talk (SM)
3. Clear Expectations
4. Fair and Credible Evaluations
5. Learning as Apprenticeship
6. Organizing for Effort
7. Recognition of Accomplishment
8. Socializing Intelligence
9. Self-Management of Learning

For more information and an explanation of these Principles of Learning, visit www.instituteforlearning.org/pol3.html.

[Photo: Team members collaborate to define a quality classroom.]

Reports from SERVE-Leads Districts

It would be difficult, even overwhelming, for a school or district to begin working on all their identified Standards of Classroom Practice at one time. The writings of Schlechty and others point to the importance of our second Standard of Classroom Practice: that work assigned to students should be purposeful, cognitively challenging, and result in useful feedback to students on how to improve. Thus, this Standard of Classroom Practice was selected as an initial focus. One of the tools SERVE located to help teachers reflect on the quality of their assignments is a process for collecting “typical” teacher assignments and scoring them on a number of dimensions identified in the research literature as being important for student achievement. (See Taking a Close Look at the Quality of Teachers’ Assignments and Student Work on page 26.)

The following are reports from some of the districts participating in SERVE-Leads that are in the early stages of introducing initiatives to help teachers apply definitions of quality to the work they assign students. It is interesting to note that no two districts took the same approach in beginning these conversations about quality with their teachers and schools.


In Mississippi, Petal School District Asks Teachers to Submit a Quality Assignment

Collect teacher assignments and offer teachers feedback from key district leaders as a first step.

Because it is a small district, Petal School District can move quickly. In the last several years, district leaders have introduced school leaders to the idea of “working on the work” through group study of Dr. Schlechty’s book. Even though the district is a successful district (by state standards anyway—receiving the highest state accreditation rating of 5), leaders realize that in terms of getting all students to the level of functioning that will bring them success in a postsecondary setting, the district has much work to do. Group book studies and other strategies have been used to help school administrators and teachers realize the need to continuously improve.

While participating in a SERVE-Leads work session presented by Lindsay Clare of CRESST on using CRESST’s rubric to score samples of “typical” teacher assignments in reading comprehension and writing, Ione Bond (Assistant Superintendent) and Tricia Bridges (Secondary Instructional Specialist) saw a tool to help implement their district’s strategic plan. They quickly went to work. They wanted first to determine their teachers’ perceptions of what quality work meant and second to gather baseline data in order to measure subsequent teacher growth.

In the spring of 2001, Dr. James Hutto (the superintendent), Bond, and Bridges asked all teachers to provide a list of attributes that characterize quality work. They decided that they would modify the CRESST process and ask teachers to submit a “quality” assignment instead of a “typical” assignment. Then they randomly selected teachers from each grade level and/or content area and asked them to submit an example of a quality assignment from their teaching with accompanying student work. Seventy-six percent of the teachers in the sample turned in assignments (with cover sheets describing the assignment).

Bond and Bridges then scored the assignments submitted using a rubric that a SERVE-Leads group had adapted from the CRESST rubric (see page 29). The rubric they used had seven dimensions—level of thinking required by the assignment, clarity of instructional goals and match to actual task demands, alignment with state/district standards, appropriateness of task demands for age, value beyond school, quality of assessment process, and overall learning value of assignment.

The average rating of all assignments submitted was 2.47 (on a 3-point scale, with 3 designating the highest quality). Considering that they used a 3-point scale rather than the 4-point scale used in the CRESST research, and that they asked teachers for a “quality” rather than a “typical” assignment, the results are not inconsistent with the CRESST findings, in which higher-achieving schools had assignments averaging a 2.2 on a 4-point scale of quality.
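To see why the two numbers can sit side by side, one can linearly rescale Petal’s 3-point mean onto CRESST’s 4-point range by aligning the endpoints of the two scales (an illustrative calculation of ours, not part of either study’s analysis):

```latex
% Illustrative rescaling: map a score x on a 1-3 scale onto a 1-4 scale.
x' = 1 + (x - 1)\cdot\frac{4 - 1}{3 - 1}
\qquad\Rightarrow\qquad
x' = 1 + (2.47 - 1)\cdot 1.5 \approx 3.2
```

On that rescaled basis, Petal’s self-selected “quality” assignments score well above the 2.2 mean that CRESST observed for “typical” assignments, which is the direction one would expect when teachers submit their best work.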

Dr. Hutto, along with Bond and Bridges, then met with the teachers from each school who submitted assignments and gave them qualitative feedback on the strengths and weaknesses of the assignments submitted. Overall, the feedback given to teachers about their assignments was positive because many had submitted product-focused assignments, such as oral presentations and writing assignments that were cross-curricular in nature. One of the positive aspects of this initial effort has been to encourage all teachers to become familiar with the Quality Assignment Rubric used to score their assignments as a guide for future planning and development of assignments.

The feedback shared with the teachers included the following:

● At the lower elementary level, there were few examples of authentic assessments submitted and a lack of clear criteria for assessing the work students produced.

● At the upper elementary level, the review showed that some teachers were giving credit for just attempting the assignment, not holding students accountable and not applying the assessment criteria.

● The greatest weakness at the middle school was not setting high expectations for students, and, in general, the quality of the assignments was weak.

● The high school did not participate well in this district endeavor, which was disappointing; however, it was evident that the vocational teachers and Special Education teachers were doing a good job with quality work.

Having successfully introduced the idea of providing constructive feedback to teachers on the quality of assignments they give students, the next step was to find ways that teachers could begin to offer each other feedback in “safe,” collegial settings. At a SERVE-Leads meeting in the fall of 2001, protocols developed by the Annenberg Institute for School Reform (www.annenberginstitute.org) for assisting teachers in structuring group feedback meetings were provided.

Beginning in late fall of 2001, teachers volunteered to begin working in small groups to test “protocols” for looking at student work. To date, only a few volunteers have shared teacher assignments and student work with their peers. The response from peer groups that have participated has been very positive. One teacher said: “By viewing actual work samples, I was able to connect the definition of quality work to the rubric. The words [in the rubric] came to life in the form of authentic student work.” The volunteers who presented their assignments and samples of student work affirmed the value of this as a learning experience. One teacher presenter said: “Presenting this activity in front of my peers validated my understanding of quality work.”

According to the Petal district leaders, this is still a work in progress.



In Florida, Bay District Begins with a Small Set of Schools

Invite four schools in a K–12 feeder system to participate in a district research and development project.

Bay District Schools in Panama City, Florida, is a mid-size district serving a tourist-based community. It is a district with a history of site-based management. Because of this context, district leaders have a history of initiating district-wide initiatives as research and development efforts in a few schools. If the initiative works in those schools, it is offered to more schools. Because of its long history as a co-developer with SERVE on professional development for teachers in classroom assessment, we decided to continue this collaboration by developing and piloting tools and processes for helping schools address the Standards of Classroom Practice.

The plan for the spring, summer, and fall of 2002 is to work with school leadership teams to collect individual school baseline data and outline a unique approach for each school to begin “working on the work.”

The leadership for this initiative includes newly elected Superintendent James McCalister, Director of Instruction Lendy Willis, and central office Resource Teacher Patricia Schenck, who has in the past headed up the district-wide training in classroom assessment and the development of district guidelines for good classroom assessment. In the fall of 2001, teams from the four pilot schools met with district leaders and SERVE staff to learn about the project. They subsequently explained the project to their respective faculties. SERVE and Bay County developed a process for collecting baseline data for each school on the Standards of Classroom Practice. These visits took place in February 2002. Reports back to the schools in May 2002 focused on understanding and use of state standards, strength of school and classroom vision, teacher working environment in terms of quality of opportunities to work together, and student learning environment, including the quality of work students are asked to do.

Conducting a comprehensive review of classroom quality by using outside observers and teacher artifacts (assignment ratings) will help ground the next phase of professional development planning. The rationale for involving leadership teams from a K–12 feeder system of four schools is that students ultimately are educated across 13 years and many teachers, and there should be continuity in the quality of their opportunities to learn as they move from one level of schooling to the next.

In North Carolina, Lee County Public Schools Begins with Mentors

Start with mentor/mentee pairs of teachers.

Lee County is a mid-sized district in North Carolina with a history of focusing on teacher development. It was one of the first districts in North Carolina to implement a voluntary formative teacher evaluation component in all its schools. The district has done extensive work with the community and strategic planning, and it sees this initiative to improve the quality of work assigned students as supportive of its strategic plan. As with many districts, it has many ongoing initiatives that place demands on teachers, so the leaders decided to start experimenting with the idea of quality work within an existing program. The district’s mentor program was selected. The mentor/mentee program in Lee County provides beginning teachers with non-evaluative and collaborative assistance from an experienced teacher.

District staff participating in a SERVE-Leads Consortium selected a group of ten mentor teachers and ten mentees (beginning teachers) in grades 3–8 who would focus on the quality of reading assignments. By encouraging beginning teachers (who will continue to enter the district’s schools in high numbers) to think about the quality of work they give students, the district leaders believe that they can slowly develop a culture that includes critical collaborative examination of the quality of instruction and assessment.

During the fall and winter of 2001, the ten mentor teachers reviewed and adopted the SERVE-Leads rubric for evaluating teacher work products. During this review, they recognized the need to revisit their own beliefs about what constitutes quality teacher work and the need to include the concept of quality work as part of the mentor/mentee cycle of assistance. They agreed to solicit teacher work from their colleagues and collected about 50 samples from these volunteers. Samples of teacher assignments are submitted with a cover sheet completed by the teacher explaining the context for the assignment. This cover sheet (adapted from the CRESST process) has been developed as an electronic file at the request of the participating mentors in an effort to simplify the gathering of samples from teachers.

The group used these samples to practice applying the “quality” rubric. They will continue to practice with the rubric to enhance their ability to provide effective critiques. As part of the learning process, the group is looking at inter-rater reliability. After several work sessions, the mentors are beginning to feel comfortable with their ability to apply the rubric effectively. Their application of the rubric to the samples uncovered a wide range of quality. Of particular interest is the ongoing group discussion centered on an apparent lack of cognitive challenge in the work submitted.
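Inter-rater reliability can be quantified in several ways; one common statistic for two raters is Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance. The sketch below is a minimal illustration in Python with hypothetical scores, not Lee County’s actual data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a | counts_b) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical scores from two mentors rating the same ten assignments (3-point scale).
mentor_1 = [3, 2, 2, 1, 3, 2, 2, 3, 1, 2]
mentor_2 = [3, 2, 1, 1, 3, 2, 2, 2, 1, 2]
print(f"kappa = {cohens_kappa(mentor_1, mentor_2):.2f}")  # about 0.68
```

Values near 1.0 indicate near-perfect agreement; values near 0 indicate agreement no better than chance, a signal that a group needs more shared practice with the rubric.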

As Lee County begins planning for the next year, the original project group of mentors/mentees will be expanded. Discussion will continue on how using this rubric fits into the reflective process of the mentor/mentee relationship. Lee County’s quality teacher work initiative is an attempt to train and “re-tool” veteran, experienced teachers and to include the concept of quality work for students as part of the teacher induction process.


In Alabama, Mobile County Public School System Starts with a Small Core Group of Teacher Leaders

Start informally with a small group of volunteer teacher leaders.

Mobile County Public School System is a very large urban school district located in Mobile, Alabama. The district is undergoing severe budget cuts from the state and has very little money to spend on professional development, so the district had to start very small in its effort to get teachers involved in looking at the quality of work given to students. Two central office staff, along with a volunteer group of six teachers from the middle and high school, have sought out support from district personnel and made arrangements to work together to seek out methods to improve the quality of assignments teachers give students. They seek to heighten awareness and generate support for the need to improve the quality of classroom practices within their school district. The small group of teachers and central office staff meets regularly to bring work samples to the table and to try to apply rubrics, like that of CRESST, to improve the quality. They have discovered that they need to see more examples of high-quality assignments in order to get a better sense of where their assignments fall short, particularly in terms of “cognitive challenge.” In other words, they have realized they need to see and discuss “exemplar” assignments.

In Georgia, Forsyth County Schools Will Involve Teachers in Examining the Quality of Standards-Based Units

First, build teacher capacity to develop standards-based units and then engage teachers in examining the quality of the units.

Forsyth County School District is located in Cumming, Georgia, in the Atlanta metropolitan area. Currently a district of 18,000-plus students, it has doubled in size in the past five years.

This district is one of the original ten pilot districts in the country involved with Phillip Schlechty’s Center for Leadership in School Reform. The key premise of the reform is that the core business of schools is to provide students with quality knowledge work—work that is engaging and content appropriate. All improvement efforts in the district have reflected this premise. For example, the district has worked on:

● Identifying standards and benchmarks for student performance and collecting assessment data relative to student performance on these standards.

● Implementing personnel evaluation processes (the Professional and Leadership Appraisal Cycles) that reinforce the key premise.

● Developing an effective School Improvement Model.

In the 2001–2002 school year, a new system-wide staff development initiative was launched under the guidance of Curriculum Coordinator Beth Reynolds. In the summer of 2001, each of the 21 district schools was invited to select four-to-six-member teacher leadership teams (called Core Teams) to participate in a yearlong, grant-funded, district-planned professional development program. To date, 115 teachers from the 21 schools have participated in five days of customized staff development. The Core Team strategy provides extensive and ongoing professional development on (a) building quality standards-based units, (b) designing quality standards-based assessments, and (c) aligning classroom assessment, grading, and reporting practices to standards.

These Year One Core Team participants will be provided two additional days of staff development during the summer of 2002 (for a total of seven days for the year) and will be provided district-level support throughout the next school year (Year Two) in the continued development and implementation of standards-based units and assessments. The district will engage a new cadre of Core Team participants in the fall of 2002.

Building on this comprehensive district-wide initiative to improve teacher capacity in implementing standards-based units, Reynolds adapted the Quality Assignment Rubric for use by Forsyth teachers in assessing the quality of their unit designs. Indicators built into the resulting Quality Unit Rubric include the principles of standards-based unit design as well as Schlechty’s WOW Design Qualities. The Quality Unit Rubric, which will be piloted with Core Team participants as they complete their units, is structured to answer the following questions:

● Does the unit engage students in substantive content and provide opportunities to apply highly complex thinking?

● Does the unit exhibit alignment among learning goals, standards, assessment strategies, grading criteria, and instructional strategies?

● Does the unit exemplify a quality assessment environment that will lead students to experience success?

● Does the unit help students to apply and use knowledge beyond the school setting?

● Does the design of the unit support opportunities that engage and motivate students to learn?

● Does the unit show evidence that effective strategies are employed to meet the needs of all students?

Final Thoughts

The district summaries above provide a glimpse of how several districts are going about drawing teachers’ attention to the quality of classroom learning opportunities. Clearly, there is no single right way. We call our research and development work with these SERVE-Leads districts the Standards of Classroom Practice Project. Helping districts develop such standards, assess the degree to which identified standards (their vision for quality) are found in classrooms, and develop learning opportunities for teachers so that they can “work on the work” is a critical aspect of internal accountability. There are many other aspects of district functioning that contribute to high levels of internal accountability (the quality of processes to recruit, hire, induct, and evaluate teachers; develop and evaluate principals; provide assistance to low-performing schools; support standards-based curriculum and assessment; engage the community; etc.). However, defining a vision for quality in classrooms and engaging teachers in working toward this vision is increasingly being recognized as a missing component.

For more information about SERVE’s Standards of Classroom Practice Project, contact Nancy McMunn ([email protected]) or Wendy McColskey ([email protected]).


References

Angelo, T. A., & Cross, P. K. (1993). Classroom assessment techniques (2nd ed.). San Francisco: Jossey-Bass.

Bol, L., & Strage, A. (1996). The contradiction between teachers’ instructional goals and their assessment practices in high school biology courses. Science Education, 80(2), 145–163.

Darling-Hammond, L. (1997). The right to learn: A blueprint for creating schools that work. San Francisco: Jossey-Bass.

Eckstrom, R., & Villegas, A. M. (1991). Ability grouping in middle grade mathematics: Process and consequences. Research in Middle Level Education, 15(1), 1–20.

Elmore, R. E. (1996). Getting to scale with good educational practices. In S. H. Fuhrman & J. A. O’Day (Eds.), Rewards and reform: Creating educational incentives that work. San Francisco: Jossey-Bass.

Elmore, R. E., & Burney, D. (1998). Continuous improvement in Community District #2, New York City. Retrieved from www.lrdc.pitt.edu/hplc/hplc.html

Good, T. L., & Brophy, J. E. (1986). Educational psychology (3rd ed.). White Plains, NY: Longman.

Institute for Learning. (2001). Principles of learning. Retrieved from www.instituteforlearning.org/pol3.html

Lewis, A. (2001). Add it up: Using research to improve education for low-income and minority students. Poverty and Race Research Council ([email protected]).

Newmann, F., & Wehlage, G. (1995). Successful school restructuring: A report to the public and educators. Madison, WI: Center on the Organization and Restructuring of Schools, University of Wisconsin.

Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven, CT: Yale University Press.

Powell, A. G., Farrar, E., & Cohen, D. K. (1985). The shopping mall high school. Boston: Houghton Mifflin.

Rosenbaum, J. E. (1976). Making inequality: The hidden curriculum of high school tracking. New York: Wiley.

Schlechty, P. C. (2001). Shaking up the school house. San Francisco: Jossey-Bass.

Spillane, J. (2000). District leaders’ perceptions of teacher learning (Consortium for Policy Research in Education Occasional Paper Series). Retrieved from www.upenn.edu/gse/cpre

Spillane, J. P., & Zeuli, J. S. (1999). Reform and teaching: Exploring patterns of practice in the context of national and state mathematics reforms. Educational Evaluation and Policy Analysis, 21(1), 1–27.

Wagner, T., & Vander Ark, T. (2001). Education Week. Retrieved from www.edweek.org/ew/ew_printstory.cfm?slug=30wagner.h20



SERVE FORUM

SERVE will host the 2002 Forum on School Improvement at the Birmingham Sheraton in the historic downtown of Birmingham, Alabama, October 20–22. Join us for in-depth courses of study related to Assessment & Accountability, Early Childhood Education, Improvement of Math & Science Education, Literacy, School Reform, Teacher Leadership, Technology, and much more. For more information, please e-mail Rebecca Rhoden Ogletree ([email protected]) or Cynthia Robertson ([email protected]), or call them at 800-352-6001. Watch our website, www.serve.org/forum, for updates as the event draws closer.


GUEST COMMENTARY

Quality of instruction is the heart of the school reform process. Students need qualitatively different learning opportunities if they are to achieve the goals set out for them by many states and districts. Despite the importance of quality instruction, however, limitations in available methodologies have made it difficult to collect information on the quality of classroom practice on a broad scale. For this reason, the nature and quality of students’ learning environments have existed as a “black box” in many large-scale evaluation designs.

The National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at UCLA has been conducting research aimed at developing a method for investigating the quality of students’ learning environments by developing criteria for rating teachers’ assignments and student work (Aschbacher, 1999; Clare, 2000; Clare & Aschbacher, 2001; Clare, Valdés, Pascal, & Steinberg, 2001).

CRESST’s research has focused on developing indicators of instructional quality that can help monitor and support diverse school reform initiatives and that have the potential to direct attention to dimensions of practice that are germane to student learning (Linn & Baker, 1998). Our work so far has centered on language arts instruction in third and seventh grade. This research involved the collection of a number of different language arts assignments from teachers over the past four years, including “typical” writing and reading comprehension assignments, as well as “challenging” assignments. For each assignment submitted, teachers completed a one-page cover sheet describing their learning goals and assessment criteria. Teachers also submitted four samples of student work for each assignment—two of which they considered to be of high quality and two of which they considered to be of medium quality.

CRESST Dimensions of Quality

The framework for defining the quality of classroom assignments was rooted in research focused on the most effective instructional practices for promoting improved student learning (Newmann, Lopez, & Bryk, 1998; Porter & Brophy, 1988; Resnick & Hall, 1998; Slavin & Madden, 1989) and established professional standards for teaching (Danielson, 1996). Raters used a 4-point scale (1 = poor to 4 = excellent) to rate the following six dimensions of quality for each assignment submitted:

Cognitive challenge of the task. This dimension describes the level of thinking required of students to complete the task. Specifically, this dimension describes the degree to which students have the opportunity to apply higher-order reasoning and engage with grade-appropriate academic content material. For example, an assignment given a high score for cognitive challenge might require students to synthesize ideas, analyze cause and effect, and/or analyze a problem and pose reasonable solutions using content-area knowledge (e.g., comparing themes from different books, etc.). An assignment given a low score on this dimension, in contrast, might only require students to recall very basic, factual information.

Clarity of the learning goals. This dimension describes how clearly a teacher articulates the specific skills, concepts, or content knowledge students are to gain from completing the assignment. The primary purpose of this dimension is to describe the degree to which an assignment could be considered a purposeful, goal-driven activity focused on student learning. An assignment given a high score on this dimension would have goals that were very clear, detailed, and specific, and it would be possible to assess whether or not students had achieved these goals.

Clarity of the grading criteria. The purpose of this dimension is to assess the quality of the grading criteria for the assignment in terms of their specificity and potential for helping students improve their performance. Raters consider how clearly each aspect of the grading criteria is defined and how much detail is provided for each of the criteria. An assignment given a high score for this dimension would have grading criteria that clearly detail the guidelines for success and provide a great deal of information to students about what they need to do to successfully complete the task.

Alignment of goals and task. This dimension focuses on the degree to which a teacher’s stated learning goals are reflected in the design of the assignment’s tasks. Specifically, this dimension attempts to capture how well the assignment appears to promote the achievement of the teacher’s goals for student learning. An assignment given a high score on this dimension would involve tasks and goals that overlap completely.

Alignment of goals and grading criteria. This dimension is intended to describe the degree to which a teacher’s grading criteria support the learning goals (i.e., the degree to which a teacher assesses students on the skills and concepts they are intended to learn through the completion of the assignment). Also considered in this rating is whether or not the grading criteria include extraneous dimensions that do not support the learning goals, as well as the appropriateness of the criteria for supporting the learning goals.

Overall quality. This dimension is intended to provide a holistic rating of the quality of the assignment based on its level of cognitive challenge, the specificity and focus of the learning goals, the clarity of the grading criteria, the alignment of the learning goals and the assignment task, and the alignment of the learning goals and the grading criteria.
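For readers who like to see the structure concretely, the sketch below shows one way such ratings could be recorded for analysis. It is our own minimal Python illustration, not CRESST’s actual instrument or code:

```python
from dataclasses import dataclass, fields

@dataclass
class AssignmentRating:
    """One assignment scored on the six dimensions (1 = poor ... 4 = excellent)."""
    cognitive_challenge: int
    clarity_of_learning_goals: int
    clarity_of_grading_criteria: int
    alignment_goals_and_task: int
    alignment_goals_and_criteria: int
    overall_quality: int

    def mean_score(self) -> float:
        # Simple average across the six dimensions, useful for quick summaries.
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# A "basic quality" assignment, scored 2 on most dimensions.
example = AssignmentRating(2, 2, 3, 2, 2, 2)
print(f"{example.mean_score():.2f}")  # 2.17
```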

CRESST’s Findings

Quality of classroom assignments. For the first few years of our research, we focused on elementary and middle schools that served primarily lower-achieving students (as assessed by standardized achievement test scores) from low-income families. In general, we found that the majority of assignments we collected from teachers were considered to be of “basic” quality (i.e., were scored a 2 on a 4-point scale) across the different dimensions (Clare, 2000). In the third year of our study, we scaled up our work to include elementary schools that served more economically advantaged students who were relatively higher achieving. Results (see Table 1) indicated that students from the higher-achieving schools received higher-rated assignments overall, though there were certainly exceptions to this pattern (Clare et al., 2001). In other words, some teachers from the higher-achieving schools provided their students with relatively low-quality assignments, and vice versa.

Reliability and stability of the assignment ratings. Findings so far have indicated that there is an acceptable level of agreement between raters and good internal consistency for the classroom assignment rating scales (Clare, 2000; Clare et al., 2001). Results also indicated that three to four teacher assignments rated by at least three raters appeared to yield a stable estimate of quality. In other words, the estimated variance components based on the teacher assignment ratings showed that most of the variation in ratings was accounted for by differences across teachers and not by differences across raters or assignment type. There is some indication, however, that this may only be true when teachers submit assignments that they themselves created, rather than submitting assignments they created in tandem with assignments they obtained from commercial sources. We are investigating this issue further in our current work.
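The variance-components reasoning behind that stability claim can be illustrated with a small calculation on a teachers-by-raters table of scores. The following Python sketch uses simulated ratings and a standard two-way random-effects decomposition; it illustrates the idea only and is not CRESST’s analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
t, r = 10, 3  # 10 teachers, 3 raters; each rater scores each teacher's assignment once

# Simulated ratings: baseline + teacher effect + rater leniency + noise.
ratings = (2.0
           + rng.normal(0, 0.6, (t, 1))    # differences across teachers
           + rng.normal(0, 0.15, (1, r))   # differences across raters
           + rng.normal(0, 0.3, (t, r)))   # residual (includes rater-by-teacher quirks)

grand = ratings.mean()
ms_teacher = r * ((ratings.mean(axis=1) - grand) ** 2).sum() / (t - 1)
ms_rater = t * ((ratings.mean(axis=0) - grand) ** 2).sum() / (r - 1)
ss_resid = ((ratings - grand) ** 2).sum() - (t - 1) * ms_teacher - (r - 1) * ms_rater
ms_resid = ss_resid / ((t - 1) * (r - 1))

# Random-effects variance components for a fully crossed design.
var_teacher = max((ms_teacher - ms_resid) / r, 0.0)
var_rater = max((ms_rater - ms_resid) / t, 0.0)
total = var_teacher + var_rater + ms_resid
print(f"teacher share of variance: {var_teacher / total:.0%}")
```

A large teacher share means the ratings mostly reflect real differences among teachers’ assignments rather than rater severity or noise, which is the pattern CRESST reports.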

Relation of classroom assignments and observed instruction and student work. To investigate the validity of the classroom assignment rating scales, we looked at the relation of classroom assignment ratings, observed practice, and the quality of student work. Our results so far indicated that the quality of teachers’ assignments was associated with the quality of their observed instruction across a range of learning environments. The quality of teachers’ assignments also was associated with the quality of student work (Clare, 2000; Clare et al., 2001). In other words, teachers who received high ratings on their assignments tended to receive high ratings for their observed practice as well. The quality of student work also tended to be higher for those teachers who submitted highly rated assignments.

Use of the Classroom Assignment Rubric and Future Research

More recently, we have turned our attention to investigating the use of the Classroom Assignment Rubric as part of the Los Angeles Unified School District’s new accountability system. This method was piloted during the 2000–2001 school year with teachers in grades 4, 8, and 10. Future research also will focus on the use of the method for supporting self-reflection on the part of teachers. To this end, we hope to collaborate with SERVE to investigate teachers’ use of the classroom assignment rubric in school-based professional development settings. Specifically, we intend to investigate whether this framework for looking at classroom assignments helps teachers create higher-quality learning environments for students by helping them think about:

● The kinds of skills their assignments promote

● How well their assignments promote their learning goals

● How clear and informative their grading criteria are to students

● The match between their grading criteria and their learning goals

Over the long term, we hope that attention to the quality dimensions of classroom assignments will result in greater success for students on increasingly more challenging tasks.

Table 1: Comparison of Assignment Ratings [1]

Dimension                                  Lower achieving     Higher achieving    P value
                                           (N = 13) M (SD)     (N = 16) M (SD)
Cognitive challenge of the task            1.64 (.44)          2.23 (.61)          0
Clarity of learning goals                  1.92 (.50)          2.32 (.56)          0.007
Clarity of grading criteria                2.37 (1.01)         1.94 (.66)          0.07
Alignment of goals and task                1.83 (.49)          2.17 (.48)          0.013
Alignment of goals and grading criteria    1.81 (.59)          1.71 (.55)          0.52
Overall quality                            1.71 (.43)          2.21 (.48)          0

NOTE: Items were scored on a 4-point scale (1 = poor, 4 = excellent).
[1] Reproduced from Clare et al. (2001).


References

Aschbacher, P. R. (1999). Developing indicators of classroom practice to monitor and support school reform (Center for the Study of Evaluation Technical Report #513). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Brophy, J., & Good, T. L. (1986). Teacher behavior and student achievement. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed.). New York: Macmillan.

Clare, L. (2000). Using teachers’ assignments as an indicator of classroom practice (Center for the Study of Evaluation Technical Report #532). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Clare, L., & Aschbacher, P. R. (2001). Exploring the technical quality of using assignments and student work as indicators of classroom practice. Educational Assessment, 7(1), 39–59.

Clare, L., Valdés, R., Pascal, J., & Steinberg, J. R. (2001). Teachers’ assignments as indicators of instructional quality in elementary schools (Center for the Study of Evaluation Technical Report #545). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Alexandria, VA: Association for Supervision and Curriculum Development.

Linn, R. L., & Baker, E. L. (1998). School quality: Some missing pieces. CRESSTline. Los Angeles: University of California, CRESST.

Newmann, F. M., Lopez, G., & Bryk, A. S. (1998). The quality of intellectual work in Chicago schools: A baseline report. Chicago: Consortium on Chicago School Research.

Porter, A. C., & Brophy, J. (1988). Synthesis of research on good teaching: Insights from the work of the Institute for Research on Teaching. Educational Leadership, 45, 74–85.

Resnick, L. B., & Hall, M. W. (1998). Learning organizations for sustainable educational reform. Daedalus, 127(4), 89–117.

Slavin, R., & Madden, N. (1989). What works for students at risk: A research synthesis. Educational Leadership, 46, 4–13.

Rating a Reading Comprehension Assignment

How would you rate this third-grade reading comprehension assignment?

A teacher asked students to read the book Dr. De Soto, by William Steig, and individually answer the following basic comprehension questions:

● Why did Dr. De Soto have a lot of patients?
● How did Dr. De Soto work on extra-large animals?
● What was wrong with the fox?
● Why do you think the De Sotos decided to treat the fox?
● What does the word woozy mean?

The teacher’s stated learning goals for the assignment were to work on comprehension and answering in complete sentences. The teacher’s stated grading criteria were: I checked for answers that were in complete sentences and whether or not students understood the question and wrote answers clearly. Credit was given to certain students with acceptable answers even if they didn’t use complete sentences.

The assignment received a 2 because the questions did not require any higher-level or inferential thinking on the part of students, and students were required to write only 1–2 sentence responses. Additionally, the level of the book was below what would be expected in third grade. Compared to stated learning goals for other higher-quality assignments, the goals for this assignment were not very elaborate or specific with regard to what aspect or theme of the story the teacher most wanted the students to comprehend. Comprehension seemed to be about just getting the facts of the story right. The teacher’s grading criteria did not seem very clear compared to other higher-quality assignments with very elaborate criteria for what the teacher expected. These grading criteria provided little information to students with regard to what they would need to do to be successful at the task. Also, there is some misalignment between the teacher’s stated learning goals, which emphasized writing in complete sentences, and the grading criteria, which allowed for answers that were not in complete sentences.

Rating a Writing Assignment

How would you rate this seventh-grade writing assignment?

Students were asked to write a five-paragraph essay on their dreams for the future.

The teacher’s stated learning goals were to teach students, step by step, how to write a five-paragraph essay and to demonstrate the creativity and fun involved in essay writing. The teacher’s stated grading criteria were as follows: content is well explained, writers focus on what needs to be talked about, and writing process is completely done.

This assignment received a 2 because students were not required to engage with substantive content material (e.g., research a topic) or apply higher-level thinking skills by, for example, arguing a position or comparing and contrasting different concepts, characters, or experiences in their writing. The focus of the assignment appeared to be more on just writing to a prompt than on writing to learn or to communicate. Keep in mind that teachers were asked to submit a “typical” assignment given to students, and that over time, having students repeatedly work on five-paragraph essays in response to artificial prompts does not represent a high level of cognitive challenge. The teacher’s stated learning goals for the assignment appeared to focus more on the activity of producing the essay than on developing content knowledge, developing students’ thinking skills, or deepening their understanding of specific concepts. The criteria were considered to be of basic quality since the teacher did not clarify for students how she would decide how well they had explained their dreams or focused on what needed “to be talked about.” As such, these criteria provided little information to students about what they would need to do to successfully complete the task.



CRESST Quality Assignment Rubric

(Each dimension is rated on a 4-point scale, from 4 = excellent to 1 = poor.)

COGNITIVE CHALLENGE
4 = Task requires strongly complex thinking as an extensive, major focus of the task. Students also engage with substantive content material.
3 = Task requires complex thinking. Students may also engage with substantive content material.
2 = Task requires some moderately complex thinking. Students may also engage with some substantive content material.
1 = Task does not require any degree of complex thinking and/or does not engage students with substantive content material.

FOCUS OF THE GOALS ON STUDENT LEARNING
4 = Goals are very focused on student learning. Goals are very clear and explicit in terms of what students are to learn from the assignment. Additionally, all the goals are elaborated.
3 = Goals are mostly focused on student learning. Goals are mostly clear and explicit in terms of what students are to learn from the assignment.
2 = Goals are somewhat focused on student learning. Goals are somewhat clear and explicit in terms of what students are to learn from the assignment. Goals may be very broadly stated, or there may be a combination of learning goals and activities.
1 = Goals are not focused on student learning and are not clear and explicit in terms of what students are to learn from the assignment. Or all goals may be stated as activities with no definable objective.

CLARITY OF THE GRADING CRITERIA
4 = Teacher’s grading criteria are very clear, explicit, and elaborated.
3 = Teacher’s grading criteria are mostly clear and explicit.
2 = Teacher’s grading criteria are somewhat clear and explicit.
1 = Teacher does not specify grading criteria, or it is not possible to determine the grading criteria from the teacher’s documents.

ALIGNMENT OF LEARNING GOALS AND TASK
4 = There is exact alignment between the teacher’s stated learning goals for students and what the task requires students to do. The task fully supports the instructional goals. The task and goals overlap completely; neither one calls for something not included in the other.
3 = There is good alignment between the teacher’s stated learning goals and what the task requires students to do. The task supports the instructional goals.
2 = There is only some alignment between the teacher’s stated goals and what the task requires students to do. The task only somewhat supports the instructional goals. Or the goal may be so broadly stated that the task and goal are aligned only at a very general level.
1 = There is very little or no alignment between the teacher’s stated goals and what the task requires students to do. The task does not support the instructional goals.

ALIGNMENT OF LEARNING GOALS AND GRADING CRITERIA
4 = There is exact alignment between the teacher’s stated learning goals for students and the stated grading criteria.
3 = There is good alignment between the teacher’s stated learning goals and the stated grading criteria.
2 = There is only some alignment between the teacher’s stated learning goals and the stated grading criteria.
1 = There is very little or no alignment between the teacher’s stated learning goals and the stated grading criteria.

OVERALL TASK QUALITY
4 = Excellent quality in terms of level of cognitive challenge, clarity, and application of learning goals and grading criteria.
3 = Good quality in terms of level of cognitive challenge, clarity, and application of learning goals and grading criteria.
2 = Limited quality in terms of level of cognitive challenge, clarity, and application of learning goals and grading criteria.
1 = Poor quality in terms of level of cognitive challenge, clarity, and application of learning goals and grading criteria.

Copyright 2000, The Regents of the University of California.



EXAMPLE FROM THE FIELD

At the SERVE Southern States Seminar in Destin, Florida, January 10–11, 2002, approximately 50 mostly state-level administrators from the six SERVE states of Alabama, Florida, Georgia, Mississippi, North Carolina, and South Carolina had the opportunity to interact with researchers and to discuss state efforts to improve schools. The goal was to create a regional response to school improvement efforts and state policy recommendations—a “Southern Strategy for Success.” The drive to achieve higher standards is fundamental to a school reform and improvement strategy. The keynote presenters, Andrew and Hathia Hayes of the University of North Carolina-Wilmington and John Poteat of the North Carolina Public School Forum, discussed the challenges to achieving higher standards, and participants met in groups to discuss the ideas presented.

Andrew and Hathia Hayes kicked off the seminar by presenting the findings from their study of Comprehensive School Reform Demonstration implementation efforts in 70 North Carolina schools. One problem they identified was that structural barriers in state policy and in local policy and practices are slowing down reform in many cases. They found that many of the schools’ activities appeared not to contribute to the overarching goal of school reform and actually took time away from reform activities.

John Poteat, despite being a staunch supporter of the standards movement, discussed the unintended consequences that occur when higher standards are the driving force behind reform. One such consequence is the mounting opposition to the testing movement, which, according to Poteat, is fueled by legitimate concerns such as flaws in testing programs, debates over passing standards, charges of teaching to the test, disappearance of non-tested curriculum, and pressure on teachers and students. As a result, Poteat said, “The wealth of testing data now available to the public has spawned its own unintended consequences: feeding a negative image of public schools, causing a growing dissatisfaction in our minority community, and threatening to fracture the traditional pro-public school lobby.”

Challenges to Reform and Accountability

Seminar participants identified the following factors that undermine efforts to achieve higher standards and, ultimately, successful school reform and improvement:

Policymakers’ and teachers’ interpretations of accountability do not always coincide.

Teachers feel pressured to improve test scores rather than teach to content standards. Teachers indicated, in nearly every interview with the Hayes team, “a concern for penalties they may incur due to low student test scores.” In their research, the Hayes team found that “state testing and high-stakes accountability programs contributed to low levels of reform.” Reform efforts should be contributing to rather than competing with states’ accountability efforts. Comprehensive school reform was meant to assist schools in raising student achievement and, therefore, should assist schools in meeting the requirements of the state accountability system.



Recruitment and retention of teachers is difficult.

Participants reported that many low-performing schools continue to fill positions with teachers working outside their content areas, and others are placing permanent substitutes in classrooms until qualified teachers become available. The Hayes team reported, “In several of these schools, staffing was mainly with initially-licensed teachers or lateral-entry teachers who revealed little evidence that they had the capability to make changes through their own efforts. Many of these teachers were merely trying to make it through each day and week.” To recruit and retain teachers, many school systems are changing the “one-size-fits-all” pay schedule. According to Poteat, more schools are offering signing bonuses, providing higher pay for hard-to-fill jobs, and giving financial incentives to teachers willing to work in low-performing schools. In the future, a regional policy needs to be established that will raise the level of knowledge and capacity of all teachers in the Southeast so that they are better able to meet state standards.

Local and district leaders lack experience in reform.

State representatives noted that low-performing schools often demonstrate a lack of clarity in school vision and a failure to strategically address the needs of schools, especially when school leaders see improvement efforts as a matter of compliance rather than an initiative to change. The Hayes report indicates that principal leadership is essential for successful school reform: “An important proportion of the CSR schools did not have leaders skilled in (or willing to engage in) planning, developing, and communicating new and different visions of their school, nor were they expert in making decisions dependent on valid, goal-related data.” Their study also revealed “a clear link between success and the expertise, experiences, and engagement of the principal, several factors in the history of the school, and the strength of the local education agency support.” State and district leaders need to find ways to provide the professional development school and district leaders need and to support administrators in their efforts as reform leaders.

Resources for professional development are limited.

Unfortunately, many school staff members are not receiving the continuous, focused, and meaningful professional development they need. One participant complained, “There is a failure to align money with the intense program assistance expected in individual schools.” Furthermore, the Hayes team reported that staff development—as workshops primarily intended to introduce staff to models or practices, team-building retreats, and conferences—lacked the necessary intensity, duration, and/or quality, especially considering the degree of change being attempted and the degree of development teachers need. Private and federal resources need to be obtained to provide educators the professional development they need to carry out the states’ efforts to raise student achievement.

Southern States’ Progress

Despite these challenges, states have accomplished much as a result of focusing their efforts on improving low-performing schools. They have gained a clearer understanding of the needs of low-performing schools and a greater awareness of how to use disaggregated test data to improve instruction. Furthermore, they have developed focused school improvement plans, obtained professional development focused on the specific needs of schools and staff members as identified through assessments, exposed teachers to varied instructional procedures, and used resources more effectively. In addition, they have learned the following lessons:

● Ongoing professional development is essential.

● On-site assistance is most effective.

● Flexibility is essential because there is no one best strategy for all schools.

● Strong leadership must be developed at all levels, especially the district level.

● Schools must build capacity for continuous improvement.

Next Steps

In the last session, participants from the various states met in teams to address common concerns and to suggest regional approaches to these issues. Good ideas were generated, and participants agreed to continue to work toward successful strategies for regional success. To encourage the continued development of this regional vision, SERVE has established a website at www.serve.org/LPS. In addition, the SERVE policy analysts housed in each state department of education are partnering with SERVE School Development and Reform staff to plan the agenda for the next meeting. The policy analysts will ensure that participants in the January 2003 seminar will contribute to the creation of a shared regional vision.

“The best-intentioned movement in modern education—the drive for higher standards—has driven public education to the brink. Unintended consequences of the movement could change the face of American education.”

—John Poteat of the North Carolina Public School Forum, at the SERVE Southern States Seminar


SERVE SENIOR POLICY RESEARCH ANALYSTS ON REGIONAL ISSUES

In 1999, the Florida State Legislature passed the Bush/Brogan A+ Education Plan, which established a school grading system in an effort to improve student learning. Under this plan, each school receives a grade of A–F based on its students’ performance on the Florida Comprehensive Assessment Test (FCAT). The FCAT is the primary measure of the Sunshine State Standards, which were developed by the Florida Department of Education (in close consultation with teachers and other educators) to give parents, students, teachers, and administrators a clear understanding of what Florida’s students should know at different stages of their academic careers. Student scores are classified into five achievement levels, with 1 being the lowest and 5 being the highest. On December 18, 2001, as part of the final phase of the Bush/Brogan A+ Plan, Education Commissioner Charlie Crist joined Governor Jeb Bush and fellow Cabinet members in approving changes to the state’s school accountability grading system.

Under the new system, a greater emphasis is put on higher performance standards. Individual student progress is measured by learning gains, and a strong focus is placed on the lowest-performing students in each school. “This more diagnostic approach to measuring success in our schools will provide a better understanding of how well schools and students are performing,” said Crist. “Accountability is the key, and this plan promises to open more doors for students as it will challenge them to work hard and achieve.”

The Governor and Cabinet members who sit as the State Board of Education approved the changes to the school-grading rule—a major component of the Bush/Brogan A+ Plan for Education—by a unanimous vote. A school’s grade will now be determined based on the following three criteria:

● The percentage of students meeting high standards on the FCAT in reading and mathematics in grades 3–10 and in writing in grades 4, 8, and 10.

● The percentage of students demonstrating annual learning gains in FCAT math and reading in grades 3–10. Students demonstrate learning gains by advancing from one achievement level to the next, maintaining satisfactory achievement at level three or higher (on a scale of 1–5), or demonstrating more than one year’s growth within achievement levels one or two (see the sketch following this list).

● The progress in reading of the lowest 25% of students in each grade, aggregated for each school. The minimum requirement for adequate progress is defined as at least 50% of these students making gains. If this requirement is not met, a school’s advisory council must develop a component in its School Improvement Plan to achieve it. If a school is designated as a “B” or “C” school and does not meet this minimum progress for two years in a row, the school’s final grade will be reduced by one letter grade.
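To make the learning-gains definition concrete, here is a minimal sketch in Python of the three qualifying routes. It is illustrative only: the function name and inputs are invented for this example, and the Florida Department of Education’s actual calculation works from FCAT scale scores and business rules not shown here.

    def made_learning_gain(prev_level, curr_level, grew_more_than_one_year=False):
        """One reading of the annual learning-gains definition above."""
        if curr_level > prev_level:
            return True                    # (a) advanced at least one level
        if prev_level >= 3 and curr_level >= 3:
            return True                    # (b) maintained satisfactory level 3-5
        if curr_level <= 2 and grew_more_than_one_year:
            return True                    # (c) more than one year's growth
        return False                       #     within level 1 or 2

    # A student who stays at level 2 but shows more than a year's growth
    # (a scale-score determination, passed in here as a flag) still counts.
    print(made_learning_gain(2, 2, grew_more_than_one_year=True))  # True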

“We have developed an exciting new concept in education that stresses both accountability and achievement, while not losing sight of the basic fundamentals of learning.”

—Charlie Crist, Florida’s Education Commissioner


The revision will provide benefits to students, parents, educators, and administrators. The new system measures the progress of students from year to year and clearly identifies students’ strengths and weaknesses. This identification allows administrators to channel resources more effectively to facilitate and target improvement. “The changes we have made today benefit educators, students, and parents alike,” said Crist. “We can now measure the progress of each individual student and at the same time assess the performance of an entire school as well as place a special focus on those students who need the most help.”

Individual student achievement, a key component of the new grading system, has long been an objective of the A+ Plan. Increased emphasis is placed on the achievement of the lowest-performing students in each school. This is clearly seen in the requirement that, for a school to earn an “A,” at least 50% of its lowest performers must make adequate progress. The difference between reading achievement among the lowest quartile and the reading achievement of the overall population of students tested must also be within ten percentage points for a school to earn an “A.” This focus on reading, a cornerstone of a solid education, aims to ensure that all children have the essential skills to succeed.
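The two reading conditions for an “A” can be expressed as a simple check. The sketch below is a simplification under stated assumptions: all inputs are percentages, the function name is invented for illustration, and the full grading rule also weighs the FCAT criteria listed earlier.

    def meets_a_grade_reading_conditions(pct_lowest25_making_gains,
                                         lowest25_reading, overall_reading):
        """Check the two lowest-quartile reading conditions described above."""
        adequate_progress = pct_lowest25_making_gains >= 50.0
        gap_within_ten = abs(overall_reading - lowest25_reading) <= 10.0
        return adequate_progress and gap_within_ten

    # Example: 58% of the lowest quartile made gains, and the quartile's
    # reading achievement trails the school's overall figure by 8 points.
    print(meets_a_grade_reading_conditions(58.0, 62.0, 70.0))  # True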

In May of 1999, the South Carolina State Superintendent of Education, Inez Tenenbaum, established the African-American Student Achievement Committee to research, study, and develop strategies to close the academic achievement gap between minority and non-minority students in South Carolina. The Committee issued its report on May 30, 2001. The strategies, action steps, and recommendations issued in that report have been used to develop a framework for the work of the African-American Student Achievement Program (AASAP). The Program’s advisory committee is composed of educators from around the state, community leaders, faith leaders, South Carolina Department of Education professionals, and local and state agency representatives.

Some of the recommendations of the African-American Student Achievement Program include:

● Collaborate with various groups to sponsor an annual conference on closing the achievement gap. The conference is planned for this year and seeks to bring leaders together to focus on that goal.

● Provide leadership and professional development training for teachers, administrators, and support staff.

● Ensure that parents and families receive opportunities that foster parental growth.

Additional information about the work of the AASAP can be found on the Department’s website (www.myscschools.com/offices/ssys/youth_services/aasap).


RELEVANT ESEA ISSUES AND UPDATES

The No Child Left Behind Act applies performance standards and consequences not only to Title I but to all major programs. In addition to requiring tough corrective actions for chronically failing schools, it gives students in failing schools the right to either transfer to a better public school or obtain supplemental services.

The new federal law requires states to implement annual reading and math assessments in grades 3–8. States have until the 2005–2006 school year to develop and implement these assessments, and Congress has authorized $490 million for states to develop and administer them.

In addition to the reading and math assessments in grades 3–8, states must incorporate one other academic indicator: for secondary schools, graduation rates; for elementary schools, an academic indicator determined by the state. States and school districts are also free to incorporate additional indicators (including assessments) in the definition of adequate yearly progress. However, the only indicators that can put schools into school improvement, corrective action, or restructuring are the grades 3–8 reading and math assessments.

While states may select and design assessments of their choosing, these state assessments must be aligned with state academic standards; allow student achievement to be compared from year to year; assess objective knowledge; be based on measurable, verifiable, and widely accepted professional assessment standards; and not evaluate or assess personal or family beliefs and attitudes.

Under the new law, states must define adequate yearly progress so that all students are expected to improve and so that, within 12 years, all students achieve at the state-defined “proficient” level on state reading and math academic assessments. Each state’s definition of adequate yearly progress must apply specifically to disadvantaged students, as well as to the overall student population.

States set the starting point, or achievement “bar,” on the path to 100% proficiency and may choose where to set the initial bar based upon the lowest-achieving demographic subgroup or the lowest-achieving schools in the state, whichever is higher. States are free, however, to choose an even higher starting point. Once the initial bar is established, the state is required to “raise the bar” gradually and in equal increments to reach 100% proficiency. The initial bar must be raised after the first two years and then must be raised again at least every three years.
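As a rough illustration of that schedule, the sketch below spaces equal increments across the 12-year window. The raise years used (3, 6, 9, and 12) are merely one timetable consistent with the rule just described (first raise after two years, then at least every three years), not a federally prescribed schedule.

    def ayp_bars(start_pct, raise_years=(3, 6, 9, 12)):
        """Map school years 1-12 to the proficiency bar in effect each year."""
        step = (100.0 - start_pct) / len(raise_years)   # equal increments
        bars, bar = {}, start_pct
        for year in range(1, 13):
            if year in raise_years:
                bar += step
            bars[year] = round(bar, 1)
        return bars

    # A state starting at 40% proficient would raise its bar to 55%, 70%,
    # 85%, and finally 100% by year 12.
    bars = ayp_bars(40.0)
    print(bars[3], bars[12])  # 55.0 100.0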

In order to avoid “over-identification” of failing schools whose students are making significant academic progress, a “safe harbor” is allowed if a subgroup achieves a 10% reduction in the number of students not proficient. For example, if students in a particular subgroup are 30% proficient, a seven-percentage-point increase in the number of proficient students is a 10% reduction in the 70% who are not proficient, so the subgroup would be deemed to have made adequate yearly progress and would not be identified as failing. This provision has the added advantage of requiring larger absolute gains from the subgroups farthest from proficiency while allowing smaller gains for those closer to proficiency (where gains are harder to achieve).
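The safe-harbor arithmetic can be worked directly. This is a minimal sketch that simply restates the 10%-reduction rule; the function name is invented for illustration.

    def safe_harbor_target(pct_proficient):
        """Proficiency rate a subgroup must reach to claim safe harbor."""
        not_proficient = 100.0 - pct_proficient
        return pct_proficient + 0.10 * not_proficient   # cut the gap by 10%

    # The example from the text: 30% proficient leaves 70% not proficient,
    # and a 10% reduction of that 70% is 7 points, so the target is 37%.
    print(safe_harbor_target(30.0))  # 37.0
    # A subgroup already at 80% proficient needs only a 2-point gain:
    print(safe_harbor_target(80.0))  # 82.0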

In order to help turn around poor-performing schools, the new law also increases the current 0.5% set-aside of a state’s total Title I allocation for school improvement activities to 2% for FY 2002–2003; it will increase to 4% for FY 2004–2007. In addition, the bill retains the separate authority for school improvement activities and authorizes it at $500 million in FY 2002 and such sums as may be necessary in FY 2003 through FY 2007. These funds will augment state and local efforts to provide technical assistance and improve schools identified as needing improvement. Technical assistance provided with these funds must be based on scientifically based research.

Requirements for Title I

● States must develop and adopt a single, comprehensive accountability system that will be used to hold districts and schools accountable for all students meeting state academic content standards.

● States must include in their standards specific goals for closing the academic achievement gap between whites and minorities and between rich and poor.

● Adequate Yearly Progress (AYP), the standard for judging educational success or failure, is redefined to ensure that every student reaches the state standard for proficiency within 12 years.

● Schools that do not meet the state standard for adequate progress but show at least a 10% improvement toward the goal of 100% proficiency for each subgroup and make progress on one other indicator will not be identified as in need of improvement.

● States must not only disaggregate their test data based on race and ethnicity, limited English proficient students, income, and disability, but also disaggregate the results for accountability. Each subgroup must meet the state standard for adequate progress in both mathematics and reading; otherwise, the school will be identified as in need of improvement.

● Content standards and assessments for math and reading in key grades (elementary, middle, and high school) must be in place no later than the 2001–2002 school year.


● New annual state assessments in grades 3–8 in mathematics and reading must be developed and implemented by the 2005–2006 school year. Tests must measure students’ progress in achieving proficiency on the state academic content standards.

● Science standards must be developed by the 2005–2006 school year.

● Science assessments in key grades must be developed and implemented by the 2007–2008 school year. Key grades indicate the administration of the science assessment at least once in elementary, middle, and high school.

● States must biennially participate in the National Assessment of Educational Progress (NAEP) in math and reading in the fourth and eighth grades to provide a benchmark for state assessments and standards.

● No less than 95% of all students must participate in the assessments, and the reasons for students not participating in assessments must be reported.

● States and local districts must compile and publish school-level report cards that contain meaningful and useful data about academic performance for parents and the public.

● An additional $500 million is provided to states to help turn around low-performing schools, and states must set aside up to 2% of their new Title I funds in FY 2002 (and up to 4% for the next four years) for the same purpose.

● Schools must meet rigorous school-improvement timelines. Schools will be given deadlines for implementing improvement plans and program reforms and will receive additional technical and financial assistance if they initially fail to make adequate progress.

● Schools that fail for two consecutive years will give parents the option to transfer their child to a higher-performing public school, with transportation costs provided.

● Schools that fail for three consecutive years will give parents the option to receive outside tutoring assistance from a state-approved list of providers that may include private organizations, non-profit organizations, and community-based organizations.

● Schools that fail for four consecutive years must undergo at least one corrective action, such as instituting a new curriculum, replacing the principal and some staff, or reopening the school as a charter school.

● Schools that fail for five consecutive years must continue the actions from the previous year and begin planning for restructuring.

● Schools that fail for six consecutive years must be completely restructured, including instituting a new governance structure (such as a charter school or private management organization) and replacing all relevant staff.

● No later than four years after enactment (January 8, 2006), all teachers’ aides must have (a) completed at least two years of study at an institution of higher education, (b) obtained an associate’s or higher degree, or (c) met a rigorous standard of quality established at the state and local level that includes an assessment of math, reading, and writing. Paraprofessionals hired after January 8, 2002, must meet the new criteria.

● Up to 5% of any increase in Title I funding may be set aside by states to provide rewards to schools (and teachers in such schools) that substantially close the achievement gap between the lowest- and highest-performing students and that have made outstanding yearly progress for two consecutive years. In addition, 5% of any Title I funding increase may be set aside to reward successful teachers in schools that have been identified for improvement. Title II state activity funds may also be used to provide teacher awards.

Requirements for Title II

Teacher Quality

● States must set annual goals for increasing the number of highly qualified teachers and must have all teachers highly qualified by 2006.

● States must hold districts accountable for meeting teacher quality objectives and providing high-quality professional development to teachers and principals.

● Districts that fail to meet annual teacher quality goals for two years will be required to develop improvement strategies. Districts that continue to fail to meet their goals after three years will be forced to implement state-developed improvement strategies.

English Proficiency Requirements

● States must develop English proficiency standards and set annual achievement objectives for the development and attainment of English proficiency by limited English proficient (LEP) students. States must hold local districts accountable for annually assessing the English proficiency of all LEP students.

● Districts that fail to meet annual measurable objectives regarding English proficiency gains and academic achievement for LEP students after two years must develop improvement strategies. After three years of failure, districts must institute a new language instruction program and replace relevant staff.

Requirements for Other Titles

● Recipients of federal funds must set goals for performance in key programs, including Technology, Safe and Drug-Free Schools, Reading, 21st Century Learning Centers, and Innovative Programs.

● States and districts that meet AYP goals will be rewarded by being given greater flexibility over the use of various funds.

● States that fail to meet AYP or their English proficiency goals shall be identified by the Secretary, expected to make improvements, and listed in an annual report to Congress.


RELATED SERVE RESOURCES

Early Childhood Readiness Assessment Products

Schools and other early childhood programs across the region are struggling with questions about how best to assess young children and how to know if children are ready for success in school. Recognizing the critical need for valid and appropriate assessments, SERVE developed two publications to guide schools and other programs in selecting and implementing readiness assessment systems.

The first publication, entitled Assessing Kindergarten Children: What School Systems Need to Know, is a guidebook that presents principles for planning assessment systems and a practical process for designing them. The document combines information from a variety of sources to produce a practical, well-grounded planning guide. Sources of information for the guidebook include principles developed by the National Education Goal Panel Early Childhood Assessments Resource Group, numerous position statements on early childhood assessment, recent developments in early childhood research and theory, and SERVE staff members’ experiences working with states and local districts to develop assessment systems. Assessing Kindergarten Children: What School Systems Need to Know provides helpful suggestions for addressing complex issues related to early childhood assessment and includes a step-by-step guide for schools and programs to follow as they develop assessment systems.

Assessing Kindergarten Children: A Compendium of Assessment Instruments is a companion to the assessment guidebook. Selecting an appropriate assessment instrument is a critical part of designing a successful early childhood assessment system. However, there are myriad possible instruments and, prior to the publication of the Compendium, no centralized place for information on multiple instruments. The Compendium provides basic information on almost 40 assessment instruments that are commercially available. Arranged in an easy-to-use matrix, the Compendium presents information on each instrument’s purpose, target age group, established reliability and validity, training requirements, and other essential information such as the cost of the instrument and where it can be purchased. The Compendium is designed to offer early childhood educators a reference tool for selecting the most suitable assessment instruments.

Taken together, these publications provide essential information for programs in the process of designing assessment systems.

To request a print copy, please contact our Publications Unit at 800-352-6001 or visit us online at www.serve.org/publications.

Assessment Hotspots

SERVE has collaborated with schools, districts, state departments, and higher education institutions interested in improving assessment methods and practices at the classroom level. The primary purpose of this magazine, Assessment Hotspots, is to share the experiences of these educators. It contains eight articles describing reflections on initiatives by several schools and districts, one state department, and a teacher educator.

To request a copy, please contact our Publications Unit at 800-352-6001 or visit us online at www.serve.org/publications.


Competent Assessment of Reading: A Professional Development Module Training for Trainers

Learning to read effectively is an individual journey. Good feedback is an important part of that journey—especially for struggling middle school students trying to improve their reading comprehension skills. Competent Assessment of Reading: A Professional Development Module for Middle School Teachers helps teachers think about how to use assessment to improve students’ reading skills. This new SERVE resource offers a continuous professional development program in reading assessment for fourth- through eighth-grade teachers. After an initial workshop ideally held at the beginning of the school year (for which all materials are included in the Module), teachers begin to implement new assessment practices. Focused attention is given to individualized reading assessment through “literature circles” and “individual reading conferences.” This professional development module provides agendas and materials for follow-up sessions held throughout the school year to give support as needed, provide more in-depth explanations of assessment methods, review evidence, and continue professional growth in reading assessment.

The first Train the Trainer Session on this resource will be offered at the October 2002 SERVE Forum in Birmingham, Alabama. The one-and-a-half-day session is appropriate for any staff developer, administrator, teacher, or other educator who has a strong reading background and knowledge of classroom assessment and who works with teachers on improving reading comprehension.

For more information, or to sign up for training, contact Nancy McMunn at 800-755-3277 or [email protected]. The session is limited to the first 50 who sign up.

The Professional Review Process: SERVE’s Comprehensive System of Teacher Evaluation

Over the last ten years, SERVE has worked with educators in the Southeast to find ways to improve upon existing teacher evaluation systems. The result is The Professional Review Process for career teachers, currently being implemented in districts in North Carolina and Mississippi.

What is different about SERVE’s teacher evaluation model?

● Teachers use the same criteria (the Summative Scoring Matrix) for self-assessment and individual goal setting as principals use to provide feedback to teachers on their performance. The Summative Scoring Matrix provides teachers with guidance on concrete targets for improvement.

● SERVE’s model of teacher evaluation includes both formative (feedback for individual professional growth) and summative (administrator judgment of teacher performance) components, with the expectation that competent teachers will alternate formative years with a summative year so that principals are not evaluating every teacher every year.

● Teachers are actively engaged in examining their performance in areas like planning and assessment that require evidence beyond classroom observations. Having the principal engage the teacher in a final interview at the conclusion of a summative evaluation year creates an opportunity for professional dialogue and teacher reflection.

● The inclusion of Teacher Impact as a category in the Summative Scoring Matrix extends what is expected of teachers to how their teaching impacts students. The expectation that teachers should be able to report on and knowledgeably discuss their impact on student learning promotes a “bottom line” climate of continual learning and improvement in professional practices.

● Clear expectations for teacher performance are defined in 22 performance dimensions within six categories of teaching (i.e., the Summative Scoring Matrix). Districts that choose to implement the SERVE model can modify the Summative Scoring Matrix to reflect their vision of good teaching. Individualizing the Summative Scoring Matrix to fit long-term district professional development goals allows accountability as well as feedback and support for teachers as they work toward implementing district-wide initiatives.

For more information, contact Dr. Barbara Howard at 800-755-3277 or [email protected].


Recent research suggests that student learning is influenced most, not by family income or the level of a parent’s education, but by good teaching. In other words, the quality of instruction in the classroom has the greatest influence on student achievement (Haycock, 2001). Late in 1997, the U.S. Congress passed legislation establishing the Comprehensive School Reform Demonstration (CSRD) program in an effort to enhance student learning. The nine original components of CSRD build on the effective school correlates (Brookover & Lazotte, 1979) and focus on the importance of rigorous curriculum and high standards for all students, effective leadership and efficient school management, ongoing and high-quality staff development, and sustained parental involvement. (For more information on these components, visit www.serve.org/UCR.)

From 1999 to 2001, SERVE conducted research on CSRD schools’ progress toward implementing the nine components of school reform and the changes necessary to raise student achievement. Purposeful sampling was used to select 38 participating schools within our six-state region. Of the 38 schools, 18 had purchased school reform models from outside vendors, and five had developed their own models.

During the two years of research, trained and unbiased observers visited classrooms and completed the School Observation Measure© (SOM) (Ross, Smith, Alberg, & Lowther, 1999) and collected information on teachers’ and professional staff’s perceptions of CSRD program implementation at their school through the use of the Comprehensive School Reform Teacher Questionnaire© (CSRTQ) (Alberg & Ross, 1999). In addition, the School Climate Inventory© (SCI) (Butler & Alberg, 1991), which contains 49 items across seven dimensions, was administered, and principal interviews and teacher focus groups were conducted. The information collected provided SERVE a combination of quantitative and qualitative data from each site.

As the research results unfolded, the following was noted: although nearly all of the CSRD programs being implemented at the school sites promoted a change to student-focused learning, the observations conducted in the classrooms indicated that little change had taken place in instruction. Across schools, the primary instructional orientation observed is best described as didactic and teacher-led, with independent seatwork the most prevalent student activity.

In response to these findings, SERVE, in collaboration with Carole Cooper, Director of InnerActions in Education, bridged the gap between research and practice by creating the professional development workshop Best Practices in Classroom Instruction to assist schools in making the classroom a place where every child has a greater opportunity to be a high achiever. SERVE used its research, along with Best Practice: New Standards for Teaching and Learning in America’s Schools (Zemelman, Daniels, & Hyde, 1998) and Methods That Matter: Six Structures for Best Practice Classrooms (Daniels & Bizar, 1998), to design this workshop to give participants an increased understanding of classroom best practices; increased access to best-practices theory, research, and materials; and an increased understanding of the processes needed to support teachers’ implementation of best practices. The Best Practices in Classroom Instruction workshop and materials have been delivered in a variety of formats from 2000 through 2002. Participants have consistently rated the workshops as highly informative, timely, and useful. In response to participants’ feedback and to research from the site visits indicating that school staff need to learn “how to use data to develop and continuously improve their school reform efforts” (CSRD in the Field, 2000), SERVE staff expanded the Best Practices workshop to include training in the Data-Driven Implementation (DDI) process.


How Teaching Matters: Bringing the Classroom Back into Discussions of Teacher Quality, published by the Milken Family Foundation and Educational Testing Service (ETS), suggests that high-quality professional development is the way to improve teaching and that policymakers need to “stop scratching the surface of teaching and learning through superficial policies and instead roll up their sleeves and dig into the nature of teaching and learning by influencing what goes on in the classroom” (Education Week, October 2000).

“Phenomenal ‘data-driven’ workshop. Have been to AERA, NSTA, NSDC in Canada, CA, IN, FL, NC—none compare to this.”

—2001 Forum Participant, Evaluation Report by Susan S. Harman, Ph.D.

Schools implementing the DDI process will analyze data from their schools, including student work. The data will guide the staff, working in learning teams, to create and carry out an action plan for implementing best practices in classroom instruction. SERVE staff’s role in the implementation of DDI in each of the selected schools includes providing technical assistance via the Web; correspondence through electronic mail, telephone, and fax; and periodic visits that focus on the needs of each of the DDI sites. In addition, SERVE will continue to collect data via the SOM, SCI, principal interviews, and teacher focus groups and to provide an annual report to the schools each year that helps drive the decisions being made toward the goal of increasing student achievement.

SERVE staff members are rolling up their sleeves and planning to use Data-Driven Implementation to improve the quality of instruction and learning in the classroom.

While the number of schools touched by this first pilot program is small, the potential is infinite.

Classrooms (Daniels & Bizar, 1998) to designthis workshop to give participants an in-creased understanding of classroom bestpractices; increased access to best prac-tices theory, research, and materials;and increased understanding of theprocesses needed to support teach-ers’ implementation of best prac-tices. The Best Practices inClassroom Instruction workshop andmaterials have been delivered in a va-riety of formats from 2000 through2002. Participants have consistently ratedthe workshops as highly informative, timely,and useful. In response to participants’ feed-back and research from the site visits indicating thatschool staff need to learn “how to use data to develop andcontinuously improve their school reform efforts” (CSRDin the Field, 2000), SERVE staff expanded the Best Prac-tices workshop to include training in the Data-DrivenImplementation (DDI) process.

Schools implementing the DDI process will analyze datafrom their schools, including student work. The data willguide the staff working in learning teams to create andcarry out an action plan for implementing best practicesin classroom instruction. SERVE staff ’s role in theimplementation of DDI in each of the selected schoolsincludes providing technical assistance via the Web;correspondence through electronic mail, telephone, andfax; and periodic visits that focus on the needs of each of


SERVE
1203 Governor’s Square Blvd.
Suite 400
Tallahassee, FL 32301


THE VISION