
EDITORS’ NOTES

An old Italian proverb reads, “What’s old is new, what’s new is old” (Melfi, 2011). This quote characterizes the story of mixed methods in the evaluation community in that mixed methods have been used by evaluators for many years. Many evaluators intuitively came to the conclusion that evaluations of complex social programs could be enhanced by the use of multiple methods; hence the combination of both quantitative and qualitative data in the same study is nothing new. Attention to mixed methods in evaluation was apparent in the New Directions for Evaluation (NDE) issue edited by Jennifer Greene and Valerie Caracelli in 1997 (Greene & Caracelli, 1997). Since that time, attention to mixed methods has increased exponentially, as evidenced by the launch of the Journal of Mixed Methods Research in 2007, which had an initial impact factor of 2.219 and ranked fifth out of 83 journals in the social sciences, interdisciplinary category, according to the 2010 Journal Citation Reports by Thomson Reuters (2011). The American Evaluation Association (AEA) Topical Interest Group (TIG) on Mixed Methods in Evaluation was founded in 2010 and quickly became one of the largest of AEA’s TIGs. And the Sage Handbook of Mixed Methods in Social and Behavioral Research (Tashakkori & Teddlie, 2010) is in its second edition.

Increasingly, policy makers and funders seek markers of credibility of evidence from evaluators. As the stakes for demonstrating credibility escalate, discussions of the philosophical and methodological issues surrounding the politics of knowledge building become more important. Assisting the pursuit of “credible evidence” is the turn toward evidence-based practices, which rely primarily on quantitatively driven methods such as the randomized controlled trial (RCT), the gold standard by which the credibility of evaluation findings is judged. Based on assumptions associated with the postpositivist paradigm, these tools help the evaluator limit and measure the extent of bias in evaluation findings through a reliance on strict experimental design and measurement procedures. The evaluation community demonstrated its valuing of mixed methods in evaluation in its response to the U.S. Department of Education’s (2003) decision to prioritize scientifically based evaluation methods. AEA responded with a statement praising the department for prioritizing evaluation of its programs, but also cautioned that limiting evaluators to a single, quantitatively focused approach would not be in the best interests of achieving the goal of improved educational experiences (AEA, 2003). AEA’s official statement noted that the rigorous use of mixed methods had the potential to address a broader range of issues as to how and why a program might be effective or not.

As interest in and attention to mixed methods grow in the evaluation community, it seems reasonable to ask about the connection between the call for evidence-based programs and the potential contribution of mixed methods to the creation of credible evidence. The purpose of this issue is to examine the contributions of mixed methods evaluation and its emerging philosophies, theories, and practices that can enhance the credibility of findings from RCTs, as well as to open up the possibility of enhancing credibility with evaluations that start from several paradigmatic stances, such as postpositivism, pragmatism, constructivism, and transformativism. The authors examine a range of truth claims with regard to mixed methods evaluation in general, and they critically appraise the logical truth claims that surround the implementation of mixed methods evaluation designs.

The scope of this issue covers advances in mixed methods evaluation from philosophical, theoretical, and praxis perspectives. In the past, many higher-education evaluation programs prepared students in quantitative and/or qualitative methods; hence, many evaluators are self-taught in mixed methods. Herein, we offer frameworks and strategies for promoting rigor and for harnessing the synergy that the combination of two different methods can create, whereby one method can enable the other to be more effective and, together, the two can provide a fuller understanding of the evaluation focus. Toward this end, this issue provides evaluators with a range of important philosophical and theoretical approaches combined with practice-based mixed methods strategies. The authors address issues evaluators encounter when designing and implementing their mixed methods evaluations and suggest ways to address these issues to enhance the credibility of their evaluations. Hence, we provide the thinking of leaders in the field of mixed methods within the evaluation context as a way of furthering our understanding of how to enhance the credibility of evidence through the use of mixed methods. The chapters in this issue draw on multiple fields, such as education, health care, youth services, and environmental studies.

Issue Overview

This issue identifies the types of synergistic outcomes a mixed methods evaluation design can harness by examining how deploying different methodological perspectives can enhance understanding of what credible mixed methods evaluation is. The authors analyze how the implementation of a mixed methods design can enhance understanding of issues of difference within a given evaluation design. They explore how mixed methods designs can also further issues of social justice and social transformation. A large part of this issue addresses the specific philosophical issues evaluators face in implementing a mixed methods design; therein lies a much deeper challenge: that of crossing paradigmatic divides.

After an introductory chapter, the next three chapters describe the philosophical territories associated with mixed methods. Hall writes about the use of a pragmatic paradigm to justify mixed methods use. Mertens’s chapter focuses on the use of the transformative paradigm as a framing for mixed methods evaluations that explicitly address human rights and social justice goals. The Johnson and Stefurak chapter describes a dialectical stance in mixed methods, arguing that value is added when an evaluator adheres to constructivist and postpositivist assumptions in the qualitative and quantitative phases of a single evaluation, in ways that provide for conversations across the paradigms and bolster the credibility of the evidence produced by both methods. These philosophical chapters include implications for evaluation methods emanating from the stances discussed.

The next set of chapters shifts the focus to the design of evaluations from theoretical and practical perspectives and addresses methodological and methods challenges more explicitly. The Hesse-Biber chapter provides a bridge between philosophy and practice, examining the potential for increasing the credibility of evidence in RCTs when a multimethodology and mixed methods framework is used in the design and interpretation phases of the study. White’s chapter also addresses the use of mixed methods in RCT designs, but from the position that the evaluation issue drives the method decisions; in his view, RCTs are best suited for impact evaluations and can be combined with qualitative data collection to answer process evaluation questions. Frost and Nolas contribute further to the methods used in mixed methods evaluations by examining how multiple methods are used for triangulation; they illustrate their approach through an example of an evaluation of a youth inclusion program. Collins and Onwuegbuzie continue the emphasis on increasing rigor in mixed methods evaluations by critically examining sampling strategies that contribute to enhanced quality and credibility, and they provide multiple illustrations of different mixed methods sampling strategies.

Caracelli and Cooksy address the challenging concept of synthesizing across evaluation studies in order to gain a broader picture of the evidence about an intervention. Their critical examination of how quality is assessed in evaluation syntheses takes into account the role of mixed methods both in the synthesis strategy and in the determination of the quality of the individual studies. Jennifer C. Greene contributes the final chapter, in which she reflects on the issues raised in the preceding chapters and raises questions for evaluators to consider in terms of how mixed methods can or cannot increase the credibility of their findings.


Acknowledgments

In putting together this issue, we benefited from the wisdom and assistance of others along the way. We wish to thank Norman Denzin for encouraging a dialogue among diverse scholars in the qualitative, quantitative, and mixed methods communities. We also benefited greatly from the feedback we received when presenting our initial ideas at the International Congress of Qualitative Inquiry (ICQI) and the American Evaluation Association’s annual meeting. We wish to thank all the authors for their visionary articles, which tackle many thorny issues of utmost importance to the evaluation community. The New Directions for Evaluation editors, Sandra Mathison and Paul Brandon, along with anonymous reviewers, provided us with feedback that enhanced the quality of this issue. Finally, we wish to thank our families, who loved and supported us throughout the process of preparing this issue.

References

American Evaluation Association (AEA). (2003). Response to U.S. Department of Education. Retrieved from http://www.eval.org/doestatement.htm

Greene, J. C., & Caracelli, V. J. (Eds.). (1997). Advances in mixed method evaluation. New Directions for Evaluation, 74.

Melfi, M. (2011). Italy revisited: Folk sayings on aging. Retrieved from http://www.italyrevisited.org

Tashakkori, A., & Teddlie, C. (Eds.). (2010). Sage handbook of mixed methods in social & behavioral research (2nd ed.). Thousand Oaks, CA: Sage.

Donna M. Mertens
Sharlene Hesse-Biber

Editors

DONNA M. MERTENS is a professor in the International Development Program at Gallaudet University and editor of the Journal of Mixed Methods Research.

SHARLENE HESSE-BIBER is professor of sociology at Boston College.