
WCER Research Highlights, Fall 2010



Fall 2010, Vol. 22, No. 1


WISCONSIN CENTER FOR EDUCATION RESEARCH • SCHOOL OF EDUCATION • UNIVERSITY OF WISCONSIN–MADISON • WWW.WCER.WISC.EDU

3 Randomizing Single-Case Designs

4 Reclaiming Assessment

6 Teachers' Changing Beliefs about Engineering Learning and Instruction
Comparing Adequacy Across 50 States

Until now, no one has tried to estimate the costs of educational adequacy across all 50 states using a common method applied in a consistent manner. UW-Madison education professor Allan Odden and colleagues have realized that goal.

In a recent report, Odden, Lawrence Picus, and Michael Goetz provide state-by-state estimates of the cost of the evidence-based model, which relies primarily on research evidence when making programmatic recommendations (see text box, p. 3). The approach starts with a set of recommendations distilled from research and best practices. As implementation unfolds, teams of state policymakers, education leaders, and practitioners review, modify, and tailor those core recommendations to their state’s context. The report then compares the resulting cost estimates to each state’s current spending.

Odden and colleagues studied districts and schools that have made substantial gains in student performance. They identified the strategies those schools used, then compared them to the recommendations of the evidence-based model. The research found strong alignment between the strategies and resources recommended by the model and those used by districts and schools that have seen dramatic increases in student learning.

The study found that many states are in a position to provide the full range of what educators claim is needed to improve student performance without spending substantially more money per pupil and, in some states, with fewer dollars than currently in the system.

The report presents state-by-state comparisons according to two measures. One table illustrates costs of the evidence-based model using national average compensation. The other provides estimates using state-by-state estimates of average teacher compensation and national averages for all other positions.

Using the national average compensation figures, the weighted per-pupil estimated cost for adequacy under the evidence-based model is $9,641, an average increase of $566 per student on a national basis. In 30 of the 50 states, additional revenues are needed to reach the estimated cost level. In the remaining 20 states and Washington, D.C., current funding levels are more than enough.



This issue of WCER Research Highlights focuses on new developments in the world of K-12 education. Story topics include school finance policy, class size, math-science-engineering instruction, and the design of interventions for children with educational and psychological challenges.

Allan Odden has long argued that funds allocated to K-12 education could be used more effectively. In a recent report, he and his colleagues propose that many states could provide the full range of what is needed to improve student performance without spending substantially more money per pupil and, in some states, with fewer dollars than currently in the system.

Effective assessment practices are more likely to take place in schools with higher achievement. That’s because these contexts offer explicit connections, through assessments, to varied communities of interest: district, school, teachers, students, and families. These connections increase the likelihood that the assessments meet the needs of these audiences and that they prompt some kind of action. Beth Graue and her research team discovered this as they followed nine SAGE schools in six districts from 2004-2008.

As professional development programs in pre-college engineering proliferate, the creators of these programs need to understand how teachers’ beliefs and expectations about engineering instruction and learning change as they are exposed to more instruction. Mitchell Nathan and L. Allen Phelps document how these views changed as teachers were trained to use a middle- and high-school-level engineering curriculum, Project Lead The Way.

Meeting the needs of students with educational and psychological challenges is often informed by the results of studies using traditional single-case designs. Tom Kratochwill and Joel Levin describe how adding randomization to the structure of single-case designs can augment intervention research in two ways: by strengthening the internal validity of these designs and by allowing the researcher to apply statistical tests based on randomization models that improve the statistical-conclusion validity of the research.

FROM THE DIRECTOR

Adam Gamoran
WCER Director
Professor, Sociology and Educational Policy Studies

If all states were to receive funding at the estimated level of the evidence-based model, the total cost would be $27.0 billion, or a 6.2% increase. However, a politically feasible approach would not redirect the “excess funds” of states currently spending above that level; without that recapture, the total cost rises to $47.2 billion (a 10.9% increase) to fully fund the model’s estimates.

When state level average teacher salaries and benefits are used, the estimated weighted average cost per pupil of the evidence-based model amounts to $9,940. The total cost of this model amounts to $41.3 billion (a 9.5% increase), or $54.6 billion (a 12.6% increase) if excess funds are not recaptured from high-spending states. Although this represents a significant jump in education spending, Odden says, it is not unreasonable over time.
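As a quick consistency check (our arithmetic, not the report's), each reported total cost and its percentage increase imply roughly the same national K-12 spending base of about $434 billion, which suggests the figures were computed against a common baseline:

```python
def implied_base(cost_billions, pct_increase):
    """Spending base implied by a reported added cost and percentage increase."""
    return cost_billions / (pct_increase / 100)

# National-average-compensation estimates
print(round(implied_base(27.0, 6.2)))    # 435 ($ billions)
print(round(implied_base(47.2, 10.9)))   # 433
# State-level teacher salary and benefit estimates
print(round(implied_base(41.3, 9.5)))    # 435
print(round(implied_base(54.6, 12.6)))   # 433
```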

The estimates reported here do not take into account cost differences that result from lack of economies of scale or from decisions to have larger or smaller core class sizes. Odden explains that many states make adjustments in their formulas for small schools. And when implemented in an individual state, carefully documented adjustments are made to prorate resources to smaller schools and adapt the evidence-based model to the actual school conditions found in that state. As a result, Odden says, the costs of adequacy reported in this study likely represent a lower bound of the total costs that might be identified if evidence-based studies were done in each of the 50 states.

Nor does this research address how the funds should be allocated once they are sent to school districts. This is an important point, Odden says, because some states currently spend more than identified in this model, yet do not appear to show the gains in student performance the model suggests are possible.

Finally, in some states, the cost of implementing this, or any other, adequacy model might be quite high, Odden says. “But states do not have to do this all at once,” he says. Instead, they can establish a multiyear implementation plan and fund the increase over time. “If they do that,” he says, “we recommend that they start with the professional development components, including instructional coaches, additional resources for trainers, and additional days for teacher professional development.” Then they can focus on strategies for struggling students, which are relatively low-cost options with larger effect sizes.

Some material adapted from “Leveraging the NSF Broader-Impacts Criterion for Change in STEM Education,” in CHANGE, May–June 2009, and from “Science For The Masses,” NATURE, May 2010

Allan Odden


Randomizing Single-Case Designs

THE EVIDENCE-BASED MODEL AND ADEQUACY

When experts discuss education finance, they sometimes use the term “adequacy.” Odden offers this definition: “Providing a level of resources to schools that will enable them to make substantial improvements in student performance over the next 4 to 6 years, as progress toward ensuring that all, or almost all, students meet their state’s performance standards in the longer term.”

“Substantial improvement in student performance” means that, where possible, the proportion of students meeting a proficiency goal will increase substantially in the short to medium term. Specific targets might vary, depending on the state and a school’s current performance. This goal could be interpreted as raising the percentage of students who meet a state’s proficiency level from 35% to 70%, or from 70% to something approaching 90%, and, in both examples, increasing the percentage of students meeting advanced proficiency standards. There are several approaches to estimating adequacy, including cost functions, professional judgment, successful schools and districts, and the evidence-based approach.

Although traditional single-case designs are useful, they can be transformed into even more scientifically credible randomized single-case interventions, according to UW-Madison education professor Thomas Kratochwill and Joel Levin, a colleague at the University of Arizona, Tucson.

That’s important because, despite many areas of application, traditional single-case designs have typically not met the criteria for a randomized controlled trial. Relative to conventional multiple-unit or multiple-group designs, single-case designs have been less likely to be included in literature reviews to establish evidence.

Professors Kratochwill and Levin propose that adding randomization to the structure of single-case designs can augment intervention research in two ways.

◾ Randomization can strengthen the internal validity of these designs.

◾ Including a randomization scheme in the design allows the researcher to apply various statistical tests based on randomization models that improve the statistical-conclusion validity of the research.
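Kratochwill and Levin's article details the specific randomization schemes; as one illustrative sketch of the general idea (ours, not a procedure taken from the article), consider an AB design in which the intervention start point is chosen at random from the admissible sessions. Because the start point was randomized, a permutation test over all admissible start points yields a valid p-value. The function name, data, and defaults below are hypothetical:

```python
def randomization_test(scores, actual_start, min_phase=3):
    """Permutation test for a single-case AB design whose intervention
    start point was selected at random.

    scores:       observed outcome series, one value per session
    actual_start: index where the intervention actually began
    min_phase:    minimum number of sessions required in each phase
    Returns the one-sided p-value for an increase in phase means.
    """
    n = len(scores)
    # Every start point the randomization scheme could have produced
    possible_starts = range(min_phase, n - min_phase + 1)

    def effect(start):
        # Difference between treatment-phase and baseline-phase means
        baseline = scores[:start]
        treatment = scores[start:]
        return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

    observed = effect(actual_start)
    # p-value: share of admissible start points whose statistic is
    # at least as extreme as the one actually observed
    as_extreme = sum(1 for s in possible_starts if effect(s) >= observed)
    return as_extreme / len(possible_starts)

# e.g. randomization_test([2, 3, 2, 3, 6, 7, 6, 7], actual_start=4,
#      min_phase=2) evaluates the observed jump against 5 possible starts
```

With eight sessions and a minimum phase length of two, only five start points are admissible, so the smallest attainable p-value is 1/5; real applications need enough sessions to make the randomization distribution fine-grained.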

In a recent Psychological Methods article,* Kratochwill and Levin discuss ways to incorporate randomized experimental schemes into various single-case designs. For each design they illustrate how various forms of randomization can be introduced into the basic design structure.

In particular, they recommend that single-case designs include a previously missing component—a “randomized trials” link, modeled after the “clinical trials” phase in medical research. This stage would occur between the intervention’s initial development and testing and its prescription and implementation.

This “randomized trials” stage would draw from a range of educational and psychological randomized experiments, including those with the experimental units composed of individual participants and those consisting of one or more classrooms, schools, or communities.

ABOUT SINGLE-CASE DESIGN

Single-case design is being used with increasing frequency in education research. Like other experimental methods, single-case designs seek to establish causal relationships between an independent variable (here, the intervention) and dependent variables (outcome measures). The importance of single-case design has been emphasized by the U.S. Department of Education’s Institute of Education Sciences. Professional journals publish single-case intervention studies in such fields as clinical psychology, counseling psychology, school psychology, social work, speech and language, and special education. Kratochwill and Levin reviewed the kinds of single-case designs that would be applicable in psychological and educational intervention research.

Tom Kratochwill

* “Enhancing the Scientific Credibility of Single-Case Intervention Research: Randomization to the Rescue,” published in the American Psychological Association’s journal Psychological Methods, 2010, Volume 15, Issue 2 (June), pp. 124–144.

Funding: Institute of Education Sciences, U.S. Department of Education.

Kratochwill and Levin say that, ultimately, the benefits of randomized single-case intervention designs will become evident in the greater scientific credibility of research findings and in the integrative summaries of the professional knowledge base for evidence-based interventions across areas of research in psychology and related fields.

Reclaiming Assessment

Effective assessment practices are more likely to take place in schools with higher achievement. That’s because these contexts offer explicit connections, through assessments, to varied communities of interest: district, school, teachers, students, and families. These connections increase the likelihood that the assessments meet the needs of these audiences and that they prompt some kind of action.

These findings result from a recent study conducted in nine Wisconsin elementary schools participating in a class size reduction program called Student Achievement Guarantee in Education (SAGE). SAGE is a state-supported class size reduction (CSR) program that provides funding to districts to limit class sizes to 15 students and 1 teacher in grades K-3. Almost 500 Wisconsin schools participate.

With fewer students in a class, teachers are thought to have more opportunities for formative and summative assessment, which provides information for more targeted instruction, which in turn improves student achievement.

A research team directed by UW-Madison education professor Beth Graue followed nine SAGE schools in six districts from 2004 to 2008. The diverse sample included schools in urban, rural, and semi-urban locations, and schools affected by poverty and low student achievement.

Study participants reported that assessment was easier in SAGE classrooms because teacher time and attention were spread among fewer students. Teachers said the smaller groups allowed for more effective diagnosis and intervention.

Graue emphasizes that action is the lynchpin of high-quality assessment. In supportive assessment systems, teachers have tools that they understand and can use to improve practice, and that improvement addresses the needs of this year’s students.

By contrast, assessment in lower quality classrooms takes place in disjointed systems that focus primarily on summative rather than formative assessment. In these schools, teachers have tools to find out where students are, but this knowledge is not connected to instructional action.

Earhart Elementary School

Graue and colleagues found an example of what they consider “best practice” at Earhart Elementary School (a pseudonym). The administration, teachers, and students at Earhart together designed an accountability system to meet student needs.

Earhart Elementary is a small, diverse K-5 school in a working-class neighborhood. More than two-thirds of the students are classified as poor; one-third are English Language Learners (Latino and Hmong), one-third are African American, and one-third are white.

Two components characterized Earhart’s assessment practices. The first was a district-designed curriculum and assessment that promoted coherence between district instructional practices and the state and federal accountability system. The second was the school’s professional learning community that created a shared sense of purpose and responsibility. Through these top-down and bottom-up forces Earhart developed its approach to assessment and instruction.

Standards, instruction, assessment, and reporting were critical and related elements of this education system. School staff regularly talked about linking their activities in the classroom with assessments and reporting forms.



CONTEXT FOR INQUIRY

SAGE is a multi-faceted classroom reform, composed of four implementation pillars. In addition to the class size component, Wisconsin’s SAGE legislation requires schools to

◾ provide rigorous curricula,

◾ strengthen home and school links by keeping the school building open for extended hours and connecting families with community resources, and

◾ enhance teacher professional development and evaluation.

This approach recognizes that improving student achievement is a complex challenge that requires equally complex interventions, particularly in communities troubled by racism and poverty. Although initially developed to address concerns about urban poverty, SAGE is open to all Wisconsin schools. Participating schools receive $2,250 per low-income child in grades K-3 to offset costs of implementation. SAGE legislation provides annual funding for program evaluation.

Beth Graue

How they did it

Earhart teacher Tammy Helman worked with 12 first graders. When the results of an assessment diverged from what she thought a child could do, Ms. Helman simply changed her practice. While some might have discounted the results of the assessment, Ms. Helman used the discrepancy between her teaching and assessment experiences to prompt more intensive instruction for the child.

In her third year as Earhart’s principal, Paula Walworth led her staff in a comprehensive school reform process called Professional Learning Communities. PLC emphasizes shared leadership and professional collaboration. It centers on three questions: (a) What do we want our students to learn?, (b) How will we know if they learn it?, and (c) What will we do if they don’t? These questions and their action-oriented premise shaped much of their work.

To encourage collaborative practice, Mrs. Walworth designed schedules so that grade-level teams had weekly shared planning time. These groups met periodically with their respective instructional resource staff.

One example of shared vision related to student learning and assessment was the use of an assessment wall, where staff posted student data across time to understand progress. This visual depiction of learning helped staff see learning in a concrete way—a visual that moved student learning from within one teacher’s head to a shared document that often brought up questions about practice. Assessments were linked to instruction in the classroom to provide a strong community sense of the goals, grounding instruction in sense-making that has purpose.

Mrs. Walworth paired the use of the assessment wall with examination of instructional time between classrooms and Title I reading teachers. This led to changes in teachers’ scheduling and practice to ensure that Title I services were provided in addition to, rather than separate from, regularly scheduled teaching.

Ms. Helman encouraged students to provide feedback on writing. Conversations around peer evaluation were made possible, in part, by the small group size of 12 first graders and Ms. Helman’s modeling of supportive evaluation.

The importance of being aligned

Research at Earhart and other SAGE schools revealed that systematic alignment was perhaps the most striking aspect of constructive assessment practices. Teacher participation in the alignment process was crucial for professional buy-in and made it more likely that instruction connected to assessments. Everything was more difficult in schools that lacked alignment: The system was more chaotic and assessments were considered more of a burden. Where standards, curriculum, instruction, assessment, and reporting tools were all aligned, the focus was never pulled away from the task at hand.

At the heart of the alignment process is the issue of audience. Assessments serve varied audiences for different purposes: some are designed to inform classroom decision making, while others are used to track school efficacy. Many assessments inform multiple audiences, including, but not limited to, state and district administrators, teachers, parents, community members, and the students themselves. When assessments were required for outside audiences and teachers couldn’t see the relationship to their own instruction, they said their practice felt unaligned and the assessments seemed intrusive.

The final aspect of alignment is action, or the degree to which educators feel they can act on assessment information. The practices at Earhart promoted action that linked assessments, instruction, collaboration, and professional development. The systemic nature of the district’s approach facilitated this and was taken up by a school staff eager to play an active role in planning.

Teachers’ Changing Beliefs about Engineering Learning and Instruction

More K-12 educators are participating in science, technology, engineering, and math (STEM) professional development activities. As professional development programs in pre-college engineering proliferate, the creators of these programs need to understand how teachers’ beliefs and expectations about engineering instruction and learning change as they are exposed to more instruction.

When this knowledge is carefully documented, policies and programs for teacher education and professional development can be created and improved, based on a sound empirical foundation.

UW-Madison education professors Mitchell Nathan and L. Allen Phelps examined teachers’ beliefs and expectations about engineering instruction and student learning as they occur at the high school level. They documented how these views changed as teachers were trained to use a high school engineering curriculum, Project Lead The Way (PLTW). The PLTW curriculum integrates engineering, math, science, and technology into middle and high school students’ programs of study. PLTW is well regarded and is one of the most widely used precollege engineering curricula in the U.S.

Nathan and Phelps measured STEM teachers’ baseline views and documented the differences among them, before the teachers formally diverged into two distinct groups. The groups consisted of those teachers who participated in a PLTW summer institute training program, and who went on to actually teach a PLTW course (called the Summer Institute group), and those who did not elect to train and teach a course (called the Control group).


The pretest revealed that teachers generally believed that to become an engineer a student must show high academic achievement in math, science, and technology courses. Teachers also believed that having a parent who is an engineer increases a student’s likelihood of becoming one, as does being male and either white or Asian. However, the pretest also revealed that control teachers and institute teachers started out with some differences in their beliefs and expectations about engineering.

Most differences were not statistically significant, but three were:

Difference 1. Control teachers were less likely than future institute teachers to identify sources of support for engineering in their schools.

Difference 2. Control teachers agreed more strongly than future institute teachers that, to be successful as an engineer, a student needs to demonstrate high scholastic achievement in math, science, and technology. Teachers of these courses consider excellence in academic performance a kind of gatekeeper for engineering. This finding replicates previous results showing differences among high school teachers who place greater emphasis on college preparation (like the control teachers here) or on career readiness (like the institute teachers here).

Difference 3. Even before teaching PLTW courses, institute teachers were more likely than control teachers to claim that science and math content taught in their classes was integrated with engineering content.


After the training, and after the institute group taught their course, Nathan and Phelps re-administered the beliefs survey. As on the baseline survey¹, control teachers were more likely than institute teachers to believe that high academic achievement in science, math, and technology courses was necessary to become an engineer, and this group difference showed no change over time. Teachers in both groups initially reported that they did not strongly address students’ interests and cultural backgrounds when designing classroom instruction. At re-test, regardless of PLTW training, teachers reported attending to student background and interest less than they had at first, essentially adopting a less constructivist attitude toward student learning.

Institute teachers were initially more positive about the institutional support they experienced for engineering at their schools than control teachers, and this difference grew significantly over time. The control teachers remained essentially constant in their views.

Finally, institute teachers believed more strongly than control teachers that the math and science concepts taught in their courses were explicitly connected to engineering. The re-testing of beliefs showed that for this sample of teachers the gap grew. The growth was due to stronger agreement among institute teachers and stronger disagreement among control teachers over time.

The primary change attributable specifically to PLTW training and teaching was an increased belief among institute teachers that they provided instruction to their students that effectively integrates science and math concepts with engineering activities. Nathan and Phelps say this change is important for three reasons:

First, there is a growing recognition of the need to teach STEM content in an integrated manner, and several federal and state policy initiatives advocate this integration across grade levels. If the professional development programs sponsored by PLTW foster greater integration, this is valuable.

Second, a number of recent research studies have examined engineering curricula and classroom instruction to determine the extent to which academic and technical subjects are integrated, and the resulting effect this has on student achievement in science and math.

Finally, engineering education reform recognizes the importance of teachers as agents for informing and implementing educational reform and the key role that professional development can play toward these goals.

The challenge of integrating concepts

Prospective PLTW teachers were more likely than control teachers to identify sources of support for engineering in their schools, to report that science and math concepts were integrated with engineering instruction, and to support greater access to engineering. Over time, teachers from both groups were significantly less inclined to use students’ interests and backgrounds to shape classroom instruction.

Along with a growing urgency for promoting STEM education has come a drive to rethink and integrate the contributing areas of science, technology, engineering, and mathematics. In this light it is critical to acknowledge both the difficulties and promising approaches for integrating concepts from science and math with engineering instruction in a manner that influences student thinking.

This work was funded by a grant from the National Science Foundation and conducted as part of the AWAKEN project, a collaborative research project with the UW–Madison College of Engineering and the School of Education.

¹ Because of a reduced response rate for the second survey administration, comparisons between groups and from June 2008 to January 2009 are presented only for those teachers who provided complete data at both points in time.

More papers on this and related research: http://website.education.wisc.edu/~mnathan/

Mitchell Nathan and L. Allen Phelps


DIRECTOR Adam Gamoran
EDITOR Paul Baker
EDITORIAL CONSULTANTS Rebecca Holmes & Cathy Loeb
PRODUCTION Media Education Resources & Information Technology

WCER Research Highlights is published by the Wisconsin Center for Education Research, School of Education, University of Wisconsin–Madison. WCER is funded through a variety of federal, state, and private sources, including the U.S. Department of Education, the National Science Foundation, and UW–Madison. The opinions expressed in this publication do not necessarily reflect the position, policy, or endorsement of the funding agencies. Fourth-class, bulk-rate postage is paid at UW–Madison, Madison, WI. Send changes of address to WCER, 1025 West Johnson Street, Madison, WI 53706 or call (608) 263-4200. Include the address label from this issue.

No copyright is claimed on the contents of WCER Research Highlights. In reproducing articles, please use the following credit: "Reprinted with permission from WCER Research Highlights, published by the Wisconsin Center for Education Research, UW–Madison School of Education." If you reprint, please send a copy to Research Highlights.

WCER Research Highlights is available on the Web at http://www.wcer.wisc.edu.

ISSN 1073-1822 Vol. 22, No. 1, Fall 2010

This newsletter is archived in PDF form on WCER's website: www.wcer.wisc.edu/publications

WCER Today is a monthly email newsletter reaching more than 1,900 readers at more than 700 organizations. A sample issue and subscription information are available at www.wcer.wisc.edu/publications/index.php.

Nonprofit Organization
U.S. Postage PAID
Madison, Wisconsin
Permit No. 658

Wisconsin Center for Education Research • School of Education • University of Wisconsin–Madison • 1025 West Johnson Street • Madison, WI 53706