
9. MIXED METHODS DESIGN: AN ALTERNATIVE APPROACH

Joseph A. Maxwell
Diane M. Loomis

The explicit use of both quantitative and qualitative methods in a single study, a combination commonly known as mixed methods research, has become widespread in many of the social sciences and applied disciplines during the past 25 years. Tashakkori and Teddlie (1998, p. 14) dated the explicit emergence of mixed methods research to the 1960s, with this approach becoming common by the 1980s with the waning of the “paradigm wars.” They also identified a subsequent integration of additional aspects of the qualitative and quantitative approaches—not just methods—beginning during the 1990s, which they called “mixed model” studies (p. 16). Such aspects include epistemological assumptions, types of investigation and research design, and analysis and inference strategies.

However, the practice of mixed methods (and mixed models) research has a much longer history than the explicit discussion of the topic. In natural sciences such as ethology and animal behavior, evolutionary biology, paleontology, and geology, the integration of goals and methods that typically would be considered qualitative (naturalistic settings, inductive approaches, detailed description, attention to context, and the intensive investigation of single cases) with those that are generally seen as quantitative (experimental manipulation; control of extraneous variables; formal hypothesis testing; theory verification; and quantitative sampling, measurement, and analysis) has been common for more than a century. In addition, many classic works in the social sciences employed both qualitative and quantitative techniques and approaches without deliberately drawing attention to this. (Several of these classic studies are analyzed in detail later in the chapter.) From this broader perspective, mixed methods research is a long-standing (although sometimes controversial) practice rather than a recent development.

Indeed, a case could be made that mixed methods research was more common in earlier times, when methods were less specialized and compartmentalized and the paradigm wars were less heated. Staw (1992) observed, “When the field of organizational behavior was beginning in the 1950s, there was less of an orthodoxy in method. People observed, participated, counted, and cross-tabulated. There was ready admission that each methodology was flawed” (p. 136). And Rabinowitz and Weseen (2001) argued,

There was a time in psychology when qualitative and quantitative methods were more easily combined than they are today. Famous experimental social psychologists such as Solomon Asch, Stanley Milgram, and Leon Festinger combined both approaches in some of their most famous works, although the qualitative aspects of the pieces tend to get lost in textbook accounts of their work, as well as in the minds of many instructors and researchers. (pp. 15-16)

This widespread but relatively implicit use of methods, approaches, and concepts from both the qualitative and quantitative paradigms makes it important, in understanding mixed methods design, to investigate the actual conduct of the study (insofar as this can be determined from the publications resulting from the research) rather than depending only on the authors’ own characterization of what they did. Kaplan (1964) coined the terms “logic-in-use” and “reconstructed logic” to describe this difference (p. 8). This issue is magnified when mixed model research is considered because aspects of the study other than methods are often less explicitly identified. It is thus important to pay particular attention to the logic-in-use of mixed methods studies in attempting to understand how qualitative and quantitative methods and approaches can be integrated.

In this chapter, we address this issue by presenting an alternative to the usual ways of thinking about mixed methods design. There are two points on which our position differs from most other approaches to mixed methods studies. First, our concept of “design” is different from that employed in most approaches to designing mixed methods studies. The authors of the latter works have typically taken a typological view of research design, presenting a taxonomy of ways to combine qualitative and quantitative methods. In this handbook, for example, the chapters on design by Morse (Chapter 7) and Creswell, Plano Clark, Gutmann, and Hanson (Chapter 8) both focus on the different types of mixed methods research, delineating the dimensions on which such studies can vary and identifying the possible and actual combinations of qualitative and quantitative methods.

We approach the issue of design from a fundamentally different perspective. We see the design of a study as consisting of the different components of a research study (including purposes, conceptual framework, research questions, and validity strategies, in addition to “methods” in a strict sense) and the ways in which these components are integrated with, and mutually influence, one another. We present what Maxwell (1996) called an “interactive” model for research design and apply this model to mixed methods research, showing how the different components of actual mixed methods studies are integrated. The model is termed interactive (systemic would also be appropriate) because the components are connected in a network or web rather than a linear or cyclic sequence.

The second way in which our approach is distinctive is that we base our approach to mixed methods research on a conceptual analysis of the fundamental differences between qualitative and quantitative research (Maxwell, 1998; Maxwell & Mohr, 1999; Mohr, 1982, 1995, 1996). This analysis employs a distinction between two approaches to explanation, which we call variance theory and process theory. The use of this distinction leads to somewhat different definitions of these two types of research from those found in most other works, and thus it leads to a somewhat different idea of what mixed methods research consists of.

Our purpose in this chapter is to provide some tools for analyzing such studies and for developing mixed methods designs. We begin by presenting the contrast between prevalent typological views of design and an interactive approach to research design. We develop the latter approach in detail, explaining the components of the interactive model and the systemic relationships among these components. We then turn to the nature of the qualitative-quantitative distinction, presenting an analysis of this distinction that is grounded in the contrast between two fundamentally different ways of thinking about explanation. This leads to a discussion of paradigms and of whether qualitative research and quantitative research constitute distinct or incompatible paradigms. These two analyses are then combined in a presentation of the ways in which qualitative and quantitative approaches to each of the design components differ and of some of the sources of complementarity that these differences generate. Finally, we apply this approach to a variety of actual studies that combine qualitative and quantitative strategies and methods, providing an in-depth analysis of how the designs of these studies actually functioned and the strengths and limitations of the designs.

In proposing this alternative approach, we are not taking a polemical or adversarial stance toward other approaches to mixed methods design. We see our approach as complementary to others and as providing some tools and insights that other approaches might not as clearly provide. The complementarity that we see between different approaches to design is similar to the complementarity that we advocate in mixed methods research, which Greene and Caracelli (1997) called “dialectical” (p. 8), and we believe that combining typological and systemic strategies for analyzing and creating research designs will be more productive than either used alone.

Existing Approaches to Mixed Methods Design

We stated previously that existing approaches to mixed methods design have been primarily typological. This is not to claim that issues other than typology have been ignored. We believe that these issues have generally been framed within an overall typological approach and that the analysis of mixed methods studies has focused on the classification of these studies in terms of a typology of mixed methods designs. For example, Caracelli and Greene (1997) identified two basic types of mixed methods designs, which they called “component” and “integrated” designs. Component designs are ones in which “the methods are implemented as discrete aspects of the overall inquiry and remain distinct throughout the inquiry,” while integrated designs involve “a greater integration of the different method types” (pp. 22-23). Within these broad categories, they described seven subtypes, based largely on the purposes for combining methods. Patton (1990) presented a different typology, based on qualitative or quantitative approaches to three key stages of a study (design, data, and analysis); he used this to generate four possible mixed designs, involving a choice of qualitative or quantitative methods at each stage (not all sequences were viewed as possible by Patton). Tashakkori and Teddlie (1998) built on Patton’s approach to create a much more elaborate typology. They distinguished mixed methods designs (combining methods alone) from mixed model designs (combining qualitative and quantitative approaches to all phases of the research process) and created an elaborate set of subtypes within these.

Not all work on mixed methods design has been typological. For example, Bryman (1988) focused on identifying the purposes for combining qualitative and quantitative methods, and Brewer and Hunter (1989) took a similar approach, organizing their discussion in terms of the different stages of the research. Creswell (1994) presented three models for mixed methods research but then related these models to each of his “design phases,” which correspond roughly to the different sections of a research proposal.

Typologies are unquestionably valuable. They help a researcher to make sense of the diversity of mixed methods studies and to make some broad decisions about how to proceed in designing such a study. In particular, distinctions based on the sequence or order in which approaches are combined, the relative dominance or emphasis of the different approaches, whether the approaches are relatively self-contained or integrated, and the different purposes for combining methods are particularly important in understanding mixed methods design.

However, typological approaches also have their limitations. First, the actual diversity in mixed methods studies is far greater than any typology can adequately encompass; this point was emphasized by Caracelli and Greene (1997) as well as Tashakkori and Teddlie (1998, pp. 34-36, 42). In particular, the recognition of multiple paradigms (e.g., positivist, realist, constructivist, critical, postmodern) rather than only two, the diversity in the aspects of quantitative and qualitative approaches that can be employed, the wide range of purposes for using mixed methods, and differences in the setting where the study is done and the consequences of this for the design all make the actual analysis of a mixed methods design far more complicated than simply fitting it into a taxonomic framework.

Second, most typologies leave out what we feel are important components of design, including the purposes of the research, the conceptual framework used, and the strategies for addressing validity issues. All of these components are incorporated into the interactive design model presented next. Typologies also tend to be linear in their conception of design, seeing the components as “phases” of the design rather than as interacting parts of a complex whole.

Third, typologies by themselves generally do little to clarify the actual functioning and interrelationship of the qualitative and quantitative parts of a design; the typology presented by Caracelli and Greene (1997) is an exception to this criticism because that typology is based partly on the purposes for which a mixed approach is used. Similarly, Pawson and Tilley (1997, p. 154) argued that a pragmatic pluralism in combining methods leads to no new thinking and does not clarify how to integrate approaches or when to stop.

An Interactive Model of Design

We believe that an interactive approach to research design can help to address these problems. Rather than seeing “design” as a choice from a fixed set of possible arrangements or sequences in the research process, such approaches (e.g., Grady & Wallston, 1988; Martin, 1982; Maxwell, 1996) treat the design of a study as consisting of the actual components of a study and the ways in which these components connect with and influence one another. This approach to design is consistent with the conception of design employed in architecture, engineering, art, and virtually every other field besides research methods in which the term is used: “an underlying scheme that governs functioning, developing, or unfolding” and “the arrangement of elements or details in a product or work of art” (Merriam-Webster, 1984). A good design, one in which the components are compatible and work effectively together, promotes efficient and successful functioning; a flawed design leads to poor operation or failure.

The interactive model presented here has two essential properties: the components themselves and the ways in which these are related. There are five components to the model, each of which can be characterized by the issues that it is intended to address (Maxwell, 1996, pp. 4-5):

1. Purposes

What are the goals of this study? What issues is it intended to illuminate, and what practices or outcomes is it intended to influence? Why is the study worth doing? These purposes can be personal, practical, or intellectual; all three kinds of purposes can influence the rest of the research design.

2. Conceptual Framework

What theories and beliefs about the phenomena studied will guide or inform the research? These theories and beliefs may be drawn from the literature, personal experience, preliminary studies, or a variety of other sources. This component of the design contains the theory that the researcher has developed, or is developing, about the setting or issues being studied.

3. Research Questions

What specifically does the researcher want to understand by doing this study? What questions will the research attempt to answer?

4. Methods

How will the study actually be conducted? What approaches and techniques will be used to collect and analyze the data, and how do these constitute an integrated strategy? There are four distinct parts of this component of the model: (a) the relationship that the researcher establishes with the participants in the study; (b) the selection of settings, participants, times and places of data collection, and other data sources such as documents (what is often called “sampling”); (c) data collection methods; and (d) data analysis strategies and techniques.

5. Validity

How might the conclusions of the study be wrong? What plausible alternative explanations and validity threats are there to the potential conclusions of the study, and how will these be addressed?

These components are not radically different from the ones presented in many other discussions of research design (e.g., LeCompte & Preissle, 1993; Miles & Huberman, 1994; Robson, 1993). What is innovative is the way in which the relationships among the components are conceptualized. In this model, the components form an integrated and interacting whole, with each component closely tied to several others rather than being linked in a linear or cyclic sequence. Each of the five components can influence and be influenced by any of the other components. The key relationships among the components are displayed in Figure 9.1. In this diagram, the most important of these relationships are represented as two-way arrows. There is considerable similarity to a systems model of how the parts of a system are organized in a functioning whole.

While all of the five components can influence other components of the design, the research questions play a central role. In contrast to many quantitative models of design, the research questions are not seen as the starting point or guiding component of the design; instead, they function as the hub or heart of the design because they form the component that is most directly linked to the other four. The research questions need to inform, and be responsive to, all of the other components of the design.


Figure 9.1. An Interactive Model of Research Design
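The systemic character of the model can be made concrete in code. The sketch below is our illustration, not part of the chapter's model: the component names follow the text, but the particular edge list is an assumption reconstructed from the description of Figure 9.1, with research questions linked directly to all four other components.

```python
# A minimal sketch (ours, not the chapter's) of the interactive design model
# as an undirected graph. The exact edge list is an assumption based on the
# text's description of Figure 9.1; the essential property is that the
# research questions link directly to all four other components.
from collections import defaultdict

edges = [
    ("purposes", "conceptual framework"),
    ("purposes", "research questions"),
    ("conceptual framework", "research questions"),
    ("research questions", "methods"),
    ("research questions", "validity"),
    ("methods", "validity"),
]

graph = defaultdict(set)
for a, b in edges:
    # Two-way arrows: each component can influence and be influenced by the other.
    graph[a].add(b)
    graph[b].add(a)

# The hub of the design is the component with the most direct links.
hub = max(graph, key=lambda c: len(graph[c]))
print(hub, "is linked to", sorted(graph[hub]))
```

Run, this prints research questions as the hub, linked to the other four components, which is exactly the structural claim the model makes.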


There are many other factors besides these five components that can influence the design of a study. These include the resources available to the researcher, the researcher’s abilities and preferences in methods, perceived intellectual or practical problems, ethical standards, the research setting, the concerns and responses of participants, and the data that are collected. These additional influences are best seen not as part of the design itself but rather either as part of the environment within which the research and its design exist or as products of the research (Maxwell, 1996, pp. 6-7). Figure 9.2 presents some of the factors in the environment that can influence the design and conduct of a study. The five components of this design model, by contrast, represent issues that are not external to the design of the study but rather are integral parts of it; they represent decisions and actions that must be addressed, either explicitly or implicitly, by the researcher.

Figure 9.2. Contextual Factors Influencing a Research Design

One way in which this design model can be useful is as a tool or template for conceptually mapping the design of a study, either as part of the design process or in analyzing the design of a completed study. This involves filling in the boxes or circles for the five components of the model with the actual components of a particular study’s design. We apply this technique specifically to a variety of mixed methods studies later in the chapter.

The Qualitative-Quantitative Distinction

Because there are so many points of difference between the qualitative and quantitative approaches, there has been considerable variation in how the distinction between the two has been framed. Early work often based this distinction simply on the kind of data employed (textual or numerical). Creswell (1994), by contrast, saw the distinction between inductive and deductive approaches as most important, while Tashakkori and Teddlie (1998, p. 55) distinguished three different stages or dimensions for which the distinction can be made: type of investigation (exploratory or confirmatory), data collection (qualitative or quantitative), and analysis and inference (qualitative or statistical). Guba and Lincoln (1989) made the distinction at a more philosophical level, as a distinction between constructivism and positivism.

In our view, the qualitative-quantitative distinction is grounded in the distinction between two contrasting approaches to explanation, which Mohr (1982) termed variance theory and process theory. Variance theory deals with variables and the correlations among them; it is based on an analysis of the contribution of differences in values of particular variables to differences in other variables. Variance theory, which ideally involves precise measurement of differences on and correlations between variables, tends to be associated with research that employs extensive prestructuring of the research, probability sampling, quantitative measurement, statistical testing of hypotheses, and experimental or correlational designs. As Mohr noted, “The variance-theory model of explanation in social science has a close affinity to statistics. The archetypal rendering of this idea of causality is the linear or nonlinear regression model” (p. 42).
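Mohr's point can be made concrete in a few lines of code. The sketch below uses entirely synthetic data and hypothetical variable names; it shows the archetypal variance-theory move, estimating how differences in one variable account for differences in another through a regression model.

```python
# Variance-theory reasoning in miniature: explain differences in an outcome
# variable by differences in a predictor variable, via ordinary least squares.
# The data and variable names are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
education = rng.normal(14, 2, n)               # years of schooling
income = 2.0 * education + rng.normal(0, 4, n) # built-in slope of 2.0

# Design matrix with an intercept column; solve for the coefficients.
X = np.column_stack([np.ones(n), education])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
print(f"intercept={beta[0]:.2f}, slope={beta[1]:.2f}")
# The "explanation" is the estimated contribution of variance in education
# to variance in income, not an account of the process connecting them.
```

Process theory, taken up next, asks instead how that connection is actually produced in particular cases.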

Process theory, by contrast, deals with events and the processes that connect them; it is based on an analysis of the causal processes by which some events influence others. Because process explanation deals with specific events and processes, it is much less amenable to quantitative approaches. It lends itself to the in-depth study of one or a few cases or a small sample of individuals and to textual forms of data that retain the contextual connections between events. Weiss (1994) provided a concrete example of this strategy:

In qualitative interview studies the demonstration of causation rests heavily on the description of a visualizable sequence of events, each event flowing into the next. . . . Quantitative studies support an assertion of causation by showing a correlation between an earlier event and a subsequent event. An analysis of data collected in a large-scale sample survey might, for example, show that there is a correlation between the level of the wife’s education and the presence of a companionable marriage. In qualitative studies we would look for a process through which the wife’s education or factors associated with her education express themselves in marital interaction. (p. 179)

Mohr (1996) has more recently extended his original distinction between process theory and variance theory to identify two conceptions of causation that he has called “factual causation” and “physical causation.” Factual causation is the traditional mode of reasoning about causes in quantitative research, where the argument for causality is based on the comparison of situations in which the presumed causal factor is present or absent or has different values. Physical causation, by contrast, does not rely on such comparative logic; it is based on a notion of a mechanical connection between a cause and its effect (p. 16). Similar distinctions have been developed by realist philosophers such as Harré (1972; see also Harré & Madden, 1975) and Salmon (1984, 1989, 1998). While factual causation is an appropriate concept for comparative studies with large N’s, physical causation is appropriate for case studies or qualitative interview studies that do not involve formal comparisons.

Maxwell and Mohr (1999) used this distinction to identify two aspects of a study that can be productively denoted by the terms qualitative and quantitative: data and design/analysis.

We define quantitative data as categorical data, with either enumeration or measurement within categories. A conceptual dimension that is itself a category subdivided by measurement, or that is divided into subcategories for enumerative or frequency data, is generally called a “variable,” which is a hallmark of the quantitative approach. Qualitative data, in contrast, are typically textual in nature, consisting of written or spoken words, but may include video recordings and photographs as well as narrative text. (p. 2)

Categorical data lend themselves to aggregation and comparison, and they are easily quantified. Textual data, on the other hand, lend themselves to investigation of the processes by which two events or characteristics are connected.

In addition, we propose that quantitative design/analysis is research design and consequent analysis that rely in a variety of ways on the comparison of frequencies or measurements across subjects or across categories. Such designs focus on identifying differences between groups or correlations between variables. In contrast, qualitative design/analysis is design and analysis that rely in various ways on the treatment of focal entities as singular wholes in context, with an emphasis on the identification of meaning and process.

With these definitions of secondary terms in mind, the two fundamentally distinct ways of understanding the world can be specified as two distinct combinations of types of data on the one hand with types of design/analysis on the other. Thus, a quantitative way of understanding the world is a way that views the world in terms of categorical data, featuring the comparison of frequencies and measurements across subjects and categories. A qualitative way of understanding is a way that views the world in terms of textual data, featuring the treatment of focal entities as singular wholes in context. (p. 2)
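The contrast between categorical and textual data can be illustrated with two hypothetical fragments. The records below are invented for illustration (they do not come from the chapter or from Maxwell and Mohr) and loosely echo Weiss's marriage example above.

```python
# Quantitative data: a variable, i.e., a conceptual dimension divided into
# subcategories for frequency data. Values here are invented.
marital_satisfaction = {"high": 41, "medium": 87, "low": 22}  # case counts

# Qualitative data: text that preserves events in their contextual
# connections within a single case. The excerpt is invented.
interview_excerpt = (
    "After she finished her degree we started talking differently -- "
    "she'd bring home ideas from class and we'd argue them out over dinner."
)

# The categorical form aggregates and compares across cases; the textual
# form keeps the sequence and connection of events within one case.
print(sum(marital_satisfaction.values()), "cases, aggregated and comparable")
print(len(interview_excerpt.split()), "words of context, kept as a whole")
```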

Paradigmatic Unity and Compatibility

This analysis of the qualitative-quantitative distinction reframes the nature of the qualitative and quantitative paradigms but does not address the issue of paradigmatic unity or of the compatibility of different paradigms. This unity is often assumed to be a critical issue in combining methods. For example, Patton (1980, p. 110) emphasized the “integrity” of each approach, and Morse (Chapter 7, this volume) argues,

When using mixed methods, it is important that methodological congruence be maintained, that is, that all of the assumptions of the major method be adhered to and that components of the method (such as the data collection and analytical strategies) be consistent.

However, the need for such paradigmatic integrity cannot be assumed. McCawley (1982) examined the debate between two positions in linguistics, generative semantics and interpretive semantics, that had generally been seen as unitary paradigms. He showed that both of these approaches in fact consisted of two packages of positions on a large number of issues, with each package corresponding to the views of some prominent members of two communities of linguists. However,

neither of these communities was completely homogeneous, no member of the community retained exactly the same set of views for very long, . . . and the relationships among the views that were packaged together as “generative semantics” or as “interpretive semantics” were generally far more tenuous than representative members of either community led people (including themselves) to believe. (p. 1)

Pitman and Maxwell (1990) similarly argued that the supposed paradigmatic unity of one area of qualitative research, qualitative evaluation, is largely illusory and that major figures in this field hold widely divergent and conflicting views on many of the fundamental issues regarding the use of qualitative approaches for program evaluation. On the quantitative side, the recent debate over null hypothesis significance testing has revealed how the development of this approach incorporated fundamentally incompatible assumptions from different schools of statistics.

Such a position does not entail that there is no relationship among the different aspects of each paradigm, as Reichardt and Cook (1979, p. 18) appeared to argue. We agree with Sayer (1992) that there are “resonances” among the different components of each paradigm that “encourage the clustering of certain philosophical positions, social theories, and techniques” (p. 199). The relationship is simply not a necessary or invariant one. Each paradigm constitutes a “loosely bundled innovation” (Koontz, 1976, cited in Rogers, 1995, p. 178), and researchers often resemble the innovation adopters described by Rogers (1995), “struggling to give their own unique meaning to the innovation as it is applied in their local context” (p. 179).

Thus, we do not believe that there exist uniform, generic qualitative and quantitative research paradigms. Despite the philosophical and methodological resonances among the components of each paradigm, both of these positions include a large number of distinct and separable components, and there is disagreement even within each approach over the nature, use, and implications of some of the different components. The classic qualitative approach includes the study of natural real-life settings, a focus on participants’ meanings and context, inductive generation of theory, open-ended data collection, analytical strategies that retain the textual nature of the data, and the frequent use of narrative forms of analysis and presentation. The quantitative approach includes the formulation of prior hypotheses, the use of experimental interventions, a comparison of treatment and control groups, random sampling or assignment, standardization of instruments and data collection, quantitative data, statistical hypothesis testing, and a focus on causal explanation. Each of these (and other variations too numerous to list) is a separable module with its own requirements and implications rather than an integral and inseparable part of a larger methodological and epistemological whole (Maxwell, Sandlow, & Bashook, 1986; Patton, 1990; Pitman & Maxwell, 1992). While the connections among these components are crucial to the overall coherence of a particular research design (Maxwell, 1996), the possible legitimate ways of putting together these components are multiple rather than singular and, to a substantial extent, need to be discovered empirically rather than logically deduced (Maxwell, 1990).

However, we also agree with Kidder and Fine’s (1987) statement, “We share the call for ‘synthesis,’ but at the same time, we want to preserve the significant differences between the two cultures. Instead of homogenizing research methods and cultures, we would like to see researchers become bicultural” (p. 57). Our view of mixed methods design includes the position that Greene and Caracelli (1997) termed “dialectical,” in which differences between the paradigms are viewed as important and cannot be ignored or reconciled. Bernstein (1983), in discussing the differences between Habermas and Derrida, provided a clear statement of what we advocate:

I do not think there is a theoretical position from which we can reconcile their differences, their otherness to each other—nor do I think we should smooth out their “aversions and attractions.” The nasty questions that they raise about each other’s “project” need to be relentlessly pursued. One of the primary lessons of “modernity/postmodernity” is a radical skepticism about the possibility of a reconciliation—an aufhebung, without gaps, fissures, and ruptures. However, together, Habermas/Derrida provide us with a force-field that constitutes the “dynamic, transmutational structure of a complex phenomenon”—the phenomenon I have labeled “modernity/postmodernity.” (p. 225)

From this perspective, the “compatibility” of particular qualitative and quantitative methods and approaches becomes a much more complex issue than either paradigmatic or pragmatist approaches usually suggest. Maxwell (1990) claimed that “the theoretical debate about combining methods has prevented us from seeing the different ways in which researchers are actually combining methods, and from understanding what works and what doesn’t” (p. 507). What we want to do here is use the interactive design model to understand how qualitative and quantitative approaches can productively be combined.

Qualitative and Quantitative Approaches to the Design Components

In this section, we identify the distinctive properties of the quantitative and qualitative approaches to each of the components of design described previously: purposes, conceptual framework, research questions, methods, and validity. The ways in which the two paradigms typically frame each of the components are described briefly and are summarized in Table 9.1. A more detailed discussion of each of the components, focusing mainly on qualitative research but also contrasting this with quantitative research, is provided in Maxwell (1996).

Table 9.1. Quantitative and Qualitative Approaches to the Design Components

PURPOSES

The possible purposes of a study are too numerous and disparate to list, and specific personal and practical purposes are usually not tightly linked to one or the other approach. Intellectual purposes, in contrast, do tend to segregate into qualitative and quantitative categories. Quantitative purposes include precise measurement and comparison of variables, establishing relationships between variables, identifying patterns and regularities that might not be apparent to the people in the settings studied, and making inferences from the sample to some population. Qualitative purposes include understanding the context, process, and meaning for participants in the phenomena studied; discovering unanticipated events, influences, and conditions; inductively developing theory; and understanding a single case.

CONCEPTUAL FRAMEWORK

The conceptual framework for a study consists of the theory (or theories) relevant to the phenomena being studied that inform and influence the research. The key issue for mixed methods studies, then, is the nature of these theories. Are they variance theories, process theories, some combination of these, or theories that do not fit neatly into this dichotomy? A mismatch between the conceptual framework and the research questions or methods used can create serious problems for the research; a variance theory cannot adequately guide and inform a process-oriented investigation and vice versa. Mismatches between the conceptual framework and the purposes or validity strategies are less common but can also be problematic. A mixed methods study is often informed by both variance and process theories, and the main design issue is sorting out specifically how different parts of the conceptual framework are integrated with one another and how they are linked to the other design components.

RESEARCH QUESTIONS

As with conceptual frameworks, research questions can usually be categorized as variance questions or process questions. The research questions in a quantitative study typically are questions about the measurement or analysis of variation—the amount or frequency of some category, the value of some variable, or the relationship between two or more variables. Such questions are usually framed in terms of the values of key variables, and specific hypotheses are often stated. The questions and hypotheses are nearly always specifically formulated (or presented as if they were formulated) in advance of any data collection, and they are frequently framed in “operational” terms, connecting directly to the measurement or data collection strategies. In a qualitative study, by contrast, the research questions typically deal with the verbal description of some event, phenomenon, or process (What is happening here? What are the characteristics of this phenomenon?); its meaning to participants in the setting studied; or the process by which some events or characteristics of the situation influence other events or characteristics. The questions might not be explicitly stated, and when they are, they might include only the broad initial questions with which the study began and not the more focused questions that developed during the research.


METHODS

As described previously, “methods” as a design component include (a) the relationship that the researcher establishes with individuals and groups being studied; (b) the selection of sites, participants, settings, and times of data collection; (c) the methods used for data collection; and (d) the strategies used for data analysis.

Research Relationship. The relationship the researcher has with participants in the study, or with others who control access to these individuals or groups or who may influence the conduct of the study, is a key component of the research design and can have a major impact on the conduct and results of a study. This aspect of design tends to be treated very differently in quantitative and qualitative studies. Quantitative researchers tend to see the research relationship as an extraneous variable—something to be controlled. This can be done either to prevent the relationship from influencing the results or affecting the variables studied or to prevent variance in the relationship from introducing confounding variance in the dependent variables (e.g., standardizing survey interview procedures so that differences in procedures, either within or between interviewers, do not create an additional source of variation in the results). Qualitative studies, on the other hand, typically treat the research relationship not as a variable but rather as a process, one that can have important positive as well as negative consequences for the research. The goal is not to create a standardized relationship but rather to create a relationship that maximizes the understanding gained from each participant interviewed or each situation observed. Such a relationship is often much more personal and informal than is the case in quantitative studies.

Sampling. The two main strengths of quantitative sampling (and for experimental research, this can be extended to include assignment of participants to conditions) are to establish valid comparisons and to allow generalization from the sample to the population of interest. Some form of probability sampling (or random assignment) is usually the preferred method; in the absence of this, post hoc strategies (matching or analytical techniques such as analysis of covariance) can be used to increase comparability and generalizability. Qualitative research normally places less emphasis on formal comparisons, and the usual sampling strategy is some form of purposeful sampling. In this approach, participants are selected because they are most likely to provide relevant and valuable information or to allow the researcher to develop or test particular theoretical ideas (in grounded theory research, the latter strategy is called theoretical sampling).
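A brief sketch may help fix the contrast between the two sampling logics. The participant records and the "information richness" score below are hypothetical stand-ins for whatever substantive criteria a purposeful sampler would actually use.

```python
# Hypothetical illustration of probability versus purposeful sampling.
import random

random.seed(42)
participants = [{"id": i, "info_richness": random.random()} for i in range(100)]

# Probability sampling: every participant has a known, equal chance of
# selection, which supports statistical generalization to the population.
probability_sample = random.sample(participants, 10)

# Purposeful sampling: deliberately choose the cases judged most likely to
# yield relevant, information-rich data for the questions or theory at hand.
purposeful_sample = sorted(participants,
                           key=lambda p: p["info_richness"],
                           reverse=True)[:10]

print("random:    ", [p["id"] for p in probability_sample])
print("purposeful:", [p["id"] for p in purposeful_sample])
```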

Data Collection. Quantitative data collection is typically preplanned, structured, and designed to ensure comparability of data across participants and sites. The data are normally collected in numerical or categorical form, using instruments or procedures that have been designed and tested to ensure reliability and validity. Qualitative data collection is typically more open-ended, flexible, and inductive, and the data are usually textual descriptions, either written notes or recorded verbal data that are converted to textual form by transcribing (increasingly, visual means such as videotaping are being used).

Data Analysis. Quantitative analysis can be descriptive (assigning numbers or category labels to data or aggregating data on particular variables) or relational (investigating the relationship between two or more variables in the sample). Quantitative analysis can also make inferences to the population from which the sample was drawn, either estimating the values of population variables or testing hypotheses about the relationship of variables in the population. In addition, textual data can be converted into categorical or numerical form for analysis. Qualitative analysis is more diverse but typically addresses the goals listed under purposes (meaning, context, process, inductive theory development, and in-depth understanding of single cases). The analysis can involve the categorization (coding) of the textual data, but the purpose is quite different from that of quantitative categorization. Rather than being a preliminary step to counting instances of something or aggregating measurements on some variable, the function of qualitative categorization is to collect all of the instances of some type of phenomenon for further qualitative comparison and investigation. The goals of this strategy are to develop an in-depth description of this phenomenon, to identify key themes or properties, and to generate theoretical understanding. The categories are often inductively developed during the analysis rather than systematically formulated prior to the analysis. Both quantitative and qualitative analysis can be either exploratory (on exploratory quantitative data analysis, see Tukey, 1977) or confirmatory, although qualitative researchers usually do not simply test a prior theory without further developing that theory.
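The difference between coding-to-count and coding-to-collect can be shown in a few lines. The interview segments, the code label, and the keyword rule below are invented for illustration; real qualitative coding is interpretive rather than keyword-driven.

```python
# Qualitative categorization in miniature: coding collects all instances of
# a phenomenon, with their context, rather than reducing them to a count.
segments = [
    ("interview_01", "I stopped going to the clinic after the move."),
    ("interview_02", "The clinic felt welcoming once I knew the nurse."),
    ("interview_03", "We mostly talked about school and the kids."),
]

code = "clinic access"
keywords = ("clinic",)  # crude stand-in for an analyst's judgment

# Collect coded instances with their source context preserved.
coded = [(src, text) for src, text in segments
         if any(k in text.lower() for k in keywords)]

# A quantitative analyst might stop at the frequency; the qualitative
# analyst retrieves the instances themselves for in-depth comparison.
print(len(coded), "instances coded as", repr(code))
for src, text in coded:
    print(src, "->", text)
```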

VALIDITY

Under validity, we include both causal (internal) validity and generalizability (external validity). Quantitative researchers, most notably Campbell and Stanley (1963) and Cook and Campbell (1979), have developed a detailed typology of validity issues, validity threats, and strategies for addressing these threats. In addition to causal validity and generalizability, Cook and Campbell identified statistical conclusion validity (the validity of inferences from the sample to the population sampled) and construct validity (the validity of the theoretical constructs employed) as distinct issues. There is less agreement on classifying validity issues in qualitative research. Maxwell (1992) distinguished four main categories of validity in qualitative research: descriptive validity (the validity of the descriptions of settings and events), interpretive validity (the validity of statements about the meanings or perspectives held by participants), explanatory (or theoretical) validity (the validity of claims about causal processes and relationships, including construct validity as well as causal validity proper), and generalizability.

Inferences about causality are controversial in qualitative research. Some researchers (e.g., Guba & Lincoln, 1989) deny that causality is an appropriate concept in qualitative research, and this view has been widely accepted. In contrast, Sayer (1992, 2000) and Maxwell (1998), taking a critical realist perspective, argue that causal explanation not only is legitimate in qualitative research but is a particular strength of this approach, although it uses a different strategy from quantitative research, based on a process rather than a variance concept of causality. Construct validity is similar for both approaches, although quantitative research may use quantitative means of assessing the construct validity of instruments. Generalizability is also similar (statistical generalization to the population sampled is included under statistical conclusion validity) and is always a matter of transferring the conclusions of a study to other situations, an inherently judgmental process; Guba and Lincoln (1989) referred to this as “transferability.” However, in quantitative research, generalizability is usually seen as a matter of the results of the study being valid in other settings (replicability). Qualitative researchers, by contrast, tend to “generalize to theory” (Yin, 1984, pp. 39-40)—developing a theory and then applying that theory to other settings that may be dissimilar but that can be illuminated by the theory in question, appropriately modified (Becker, 1990).

In Table 9.1, we have tried to summarize the typical features of both quantitative and qualitative research as these involve the five design components of the interactive model.

ANALYSIS OF SELECTED EXAMPLES OF MIXED METHODS DESIGNS

Uncovering the actual integration of qualitative and quantitative approaches in any particular study is a considerably more complex undertaking than simply classifying the study into a particular category on the basis of a few broad dimensions or characteristics. It requires an understanding of each of the five components of the study’s design and of the ways in which each component incorporates quantitative elements, qualitative elements, or both. In addition, as stated previously, it is important to examine the actual conduct of the study rather than simply depending on the author’s assertions about the design. This issue is illustrated by Blumstein and Schwartz’s (1983) study of American couples, which used both survey questionnaires and open-ended interviews. The authors described the results of their study as based entirely on the statistical analysis of the survey data, while the qualitative data were relegated to providing illustrative instances:

we use the phrase “we find . . .” in presenting a conclusion based on statistical analysis of data from the questionnaires. . . . The interview data help us interpret our questionnaire findings, but unless we are using one of the parts of the interview that is readily quantifiable, we do not afford them the same degree of trust we grant to information derived from the questionnaires.

The interviews serve another purpose. We use the interview materials to illustrate both majority patterns and important exceptions. (p. 23)

And the authors characterize the chapters in their book that deal with relationship histories, which are based mainly on the interviews, by stating, “In these chapters, which have nothing to do with our analysis of the data but are included only for their illustrative value . . .” (p. 22).

However, this does not explain why Blumstein and Schwartz (1983) conducted in-depth interviews, lasting 2.5 to 4.0 hours, with both partners, separately and together, for 300 couples; transcribed and coded these interviews; and followed up with questionnaires to fill in any gaps. It also seems inconsistent with the fact that, in addition to their extensive use of quotes in the thematically organized sections of the book, they devoted 213 pages, nearly half of the results section of the book, to detailed case studies of 20 couples’ relationships. A closer analysis of their account reveals that triangulation of methods was an important feature of the study so as to “see couples from several vantage points” (p. 15) and that the case studies “helped to illuminate some of the ways in which money, sex, and work shape the nature of [the partners’] relationships” (p. 332). It appears that the “reconstructed logic” of the design was heavily influenced by a quantitative ideology of what counts as “results,” distorting the study’s logic-in-use and the actual contribution of the qualitative component.


The main purpose of this section is to present in-depth analyses of well-documented, complex examples of mixed model research, illustrating the numerous ways in which qualitative and quantitative approaches to each of the design components can be combined. We discuss these studies in terms of Caracelli and Greene’s (1997) distinction between “component” and “integrated” mixed methods designs, moving from studies that resemble component designs to those that resemble integrated designs. Component designs are those in which the different methods remain discrete throughout the study and only the results of the methods are combined (p. 22). Integrated designs, by contrast, are those in which there is “a greater integration of the different method types” (p. 23); such designs involve the use not of relatively self-contained qualitative and quantitative methods modules but rather of qualitative and quantitative elements or strategies integrated within a single phase or strand of the research; the elements occur concurrently and in constant interaction with one another rather than as conceptually separate enterprises that are later linked together.

Their distinction is most useful when applied to methods; it is less meaningful when applied to the other components of a research design, and in fact the use of both qualitative and quantitative elements of components other than methods seems to have been treated by Caracelli and Greene (1997) as an “integrated” design almost by definition. In addition, Caracelli and Greene’s two types are not categorically distinct; actual studies exhibit a continuum of the amount of integration of methods and also a variety of different strategies for integration. We have nonetheless organized the studies in this order for two reasons. First, doing so provides a clearer organization to this section. Second, it allows us to address the design features of particular types of mixed methods studies as well as the specific studies we describe.

A common approach to using both quantitative and qualitative methods is to use them sequentially. Sutton and Rafaeli (1992) provided an unusually detailed and candid account of such a design, a study of the relationship between expressed emotion and sales in convenience stores (see also Sutton & Rafaeli, 1988). They began their research with a well-developed theory of the expression of emotion by employees, based not only on published literature but also on informal querying of waitresses, clerks, and telephone operators. They had numerous ideas for possible empirical studies, but no actual research in progress, when they unexpectedly gained access to a quantitative data set derived from covert observations of employees and from company sales records, with detailed data on numerous control variables. Although one of the authors had considerable experience with qualitative research, this study was originally designed as a purely quantitative multiple regression analysis of this data set.

Sutton and Rafaeli’s statistical analysis of this data was intended to achieve two main purposes. First, it would support their theory and further develop their scholarly agenda on expressed emotion. Second, it would advance their careers without involving all the work of collecting their own data. Unfortunately, the analysis flatly contradicted their hypotheses; expressed positive emotions had a consistently negative correlation with sales. They tried tinkering with the analysis, but to no avail; they could find no errors, and dozens of runs using different combinations of variables gave the same result. Their validity checks were unable to resolve the contradiction between their theory and their results. It was clear that they needed to revise their conceptual framework.
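What "dozens of runs using different combinations of variables" looks like can be sketched as follows. The data are synthetic and the model deliberately simplified; this is not Sutton and Rafaeli's data set or analysis. The sketch builds in a busy-store confounder of the kind the authors eventually identified, so the focal coefficient stays negative under every specification.

```python
# Synthetic illustration (not Sutton & Rafaeli's data or code) of re-running
# a regression under varying sets of controls and finding the same sign on
# the focal coefficient every time.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n = 500
pace = rng.normal(0, 1, n)                   # store busyness (latent driver)
emotion = -0.6 * pace + rng.normal(0, 1, n)  # busy stores, less displayed warmth
controls = rng.normal(0, 1, (n, 3))          # hypothetical controls
sales = 1.0 * pace + rng.normal(0, 1, n)     # sales track pace, not emotion

negative = 0
runs = 0
for k in range(controls.shape[1] + 1):
    for cols in combinations(range(controls.shape[1]), k):
        X = np.column_stack([np.ones(n), emotion, controls[:, list(cols)]])
        beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
        runs += 1
        negative += beta[1] < 0  # beta[1] is the emotion coefficient
print(f"emotion coefficient negative in {negative} of {runs} runs")
```

The variance analysis can detect such a stubborn association but cannot, by itself, explain it; that is the work the qualitative phase described next had to do.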

Fortunately, a colleague suggested an alternative theory, which came to be called the “Manhattan effect”: that in busy stores, employees did not have time and/or were too harassed to express positive emotions. This theory was consistent with their data, and the authors’ initial inclination was to simply revise their hypotheses and submit the paper for publication, having learned from experienced colleagues that this was common practice in both the natural and social sciences. There were two reasons why they did not do this. First, it would contradict their previously published theoretical work, potentially impairing their career advancement. Second, they wanted to write a paper that conveyed their actual process, believing that, although it would be harder to publish, it would be a better paper. To do this, however, they needed a clearer theoretical understanding of their findings. This led to the qualitative phase of the study, which consisted of interviews with managers and executives, four case studies, informal observations in stores, and one of the authors working for a day as a store clerk. Sutton and Rafaeli (1992) stated,

These qualitative data proved to be essential for helping us to refine our revised conceptual perspective. For example, while we had thought about how a crowded store suppresses the display of positive emotion, we had not thought about the ways in which a slow store supports the display of good cheer. During the day that Bob spent working as a clerk, he learned that customers are an important source of entertainment, and that clerks are more friendly during slow times because they are genuinely pleased to see customers and want to encourage customers to engage in conversation. (p. 123)

Their revised and elaborated theory was used to develop a different hypothesis, which was supported by a further quantitative analysis of the original data set.

This research thus involved two cycles of induction and deduction. The first cycle was typical of quantitative research; it began with informal data collection and literature-based theorizing about how the display of positive emotion influences sales, and it ended with the statistical test of a hypothesis derived from this theory. The failure of the study to support the hypothesis forced the authors into a second cycle, beginning with a colleague’s suggestion and continuing with a diverse array of qualitative data collection and analysis, which eventually led to the inductive development of a new conceptual framework that emphasized the reverse process: how store pace has a negative effect on the display of positive emotion. This conceptual framework was used to generate a new quantitative hypothesis, which was then tested statistically.

In this study, the quantitative and qualitative phases were relatively distinct. The qualitative phase was largely self-contained, and its purpose was nearly exclusively to revise and develop the conceptual framework, incorporating a process model of how the pace of work affects displayed emotion. This framework was then used to generate a variance theory hypothesis that was tested with quantitative data. Figure 9.3 provides a design map of the study.

In other component studies, rather than shifting from one approach to another in sequence, the two approaches are used concurrently, although separately, and integrated only in drawing conclusions. Trend (1978/1979) gave an account of such a study, an evaluation of an experimental federal housing subsidy program involving both quantitative and qualitative data collection and analysis. Trend described the study as a “naturalistic experiment” (p. 69), but it would more accurately be called a “pre-experiment” in Campbell and Stanley’s (1963) typology because it did not involve a control group. Extensive quantitative data were collected on agency activities, expenses, demographic characteristics of clients, and housing quality, mainly through surveys.


In addition, each site had an observer (usually an anthropologist) who prepared a qualitative case study of that site, using field observations, interviews, and documents. The intent was that program outcomes would be determined through analysis of the quantitative data, while the case studies would provide a holistic picture of program process (Trend, 1978/1979, p. 70).

However, this plan began to unravel when the conclusions of an observer in one site directly contradicted the results of the quantitative analysis of program outcomes in that site. While neither side doubted “the facts” produced by the other, the two interpretations of these facts differed radically. The agency conducting the evaluation sided with the quantitative results, and the observer was repeatedly told to rewrite his analysis to fit the quantitative conclusions. Finally, Trend and the observer made a sustained effort to get at what had really been going on, using both the quantitative and qualitative data. They eventually came up with a coherent process explanation for nearly all of the data that went well beyond either the quantitative or the initial qualitative conclusions and that revealed serious shortcomings in both accounts.

Although this study clearly fits into the “component” type of design in that the quantitative and qualitative data were collected and analyzed separately and were combined only in developing conclusions, it also resembles the most developed subtype of integrated design described by Caracelli and Greene (1997), the transformative design. In such designs, the value commitments of different traditions are integrated, giving voice to different ideologies and interests in the setting studied. (In the interactive design model, these value commitments can form part of both the purposes and the conceptual framework.) The quantitative analysts tended to represent the views of the program managers and funders, while the observer was an advocate for the agency staff and clients. These differing value stances, as well as the separation of the quantitative and qualitative strands of the study, led to polarization and conflict; “each side held so tightly to its own views that it was impossible to brush aside the lack of congruence” (Trend, 1978/1979, p. 84).

Trend (1978/1979) concluded that multiple methods might not lead to an easy integration of findings and that “unanimity may be the hallmark of work in which other avenues to explanation have been closed off prematurely” (p. 68). If the discrepancy between the qualitative and quantitative accounts had been discovered earlier, or if the two approaches had been more closely integrated, then it is possible that the observer would have been subtly or overtly coerced into making his conclusions fit the “hard” data (p. 84). Trend thus argued that “the proliferation of divergent explanations should be encouraged” (p. 68) but also that an effort should be made to develop an account that does justice to all of the conflicting perspectives.

A third study that initially appears “component-like,” in that the quantitative and qualitative elements are conceptually distinct phases of the research, is the research described in Festinger, Riecken, and Schachter’s (1956) book, When Prophecy Fails. This was a psychological study of an end-of-the-world cult and the consequences for cult members of the failure of its predictions. The study began with a variable-oriented theory and a hypothesis about the conditions under which disconfirmation of belief will paradoxically be followed by increased commitment. The data were collected entirely through participant observation; a number of researchers pretended to be converts to the cult and covertly amassed detailed descriptive notes on what happened as the day of judgment approached and then passed. However, to test the hypothesis, these observational data were analyzed primarily by categorizing members in terms of the degree of prior commitment and social support (the two key independent variables) and measuring changes in proselytizing (the indicator of subsequent commitment) following disconfirmation. Figure 9.4 depicts the design of the study.

Figure 9.4. Design Map of the Festinger et al. (1956) Study. [Figure panels: Research Question; Conceptual Framework.]

This study differs from a component study such as Sutton and Rafaeli’s in that the “components” are different aspects of a single research design rather than separate quantitative and qualitative strands or phases of a larger study. At first glance, it seems to fit one of Patton’s (1990) types of “methodological mixes”—experimental design, qualitative data, and statistical analysis (p. 191)—and would thus be considered a mixed model design. The main differences from Patton’s type are that the study was a “natural” experiment (more accurately, a quasi-experiment) rather than a manipulated intervention and that the analysis was hypothesis testing, variable focused, quantitative, and based on prior analytical categories but not specifically statistical due to the small number of participants. However, the design is more complex than this categorization suggests, and we want to analyze the study to reveal some of these complexities.

The purposes and explicit research questions for Festinger et al.’s (1956) study were predominantly quantitative—a goal of testing the predictions of their theory of how people with a strongly held belief respond to disconfirmation of that belief; a hypothesis, deductively generated from this theory, about the effect of social support following disconfirmation on the key measure of commitment (proselytizing); and the testing of this hypothesis, with the goal of creating generalizable knowledge. However, their conceptual framework addressed both the process by which the predicted outcome (disconfirmation leads to increased commitment) could occur and the variables that could influence this outcome, and some implicit process questions became apparent in the conclusions section.

In terms of methods, the study could be seen as a quasi-experiment, with a naturally occurring intervention, pre- and post-intervention data collection, and a comparison of two parts of the group that differed in the degree of social support. However, with the detailed qualitative data collection, the logic also resembled a qualitative case study. The research relationships and data collection involved covert participant observation, intensive involvement of the researchers in the cult, and narrative fieldnotes of events. It is unclear what formal qualitative analysis techniques, if any, were used. In the narrative of the study, the researchers made frequent inferences to the meaning of events for participants, and there were rich descriptions of situational influences and processes.

In the concluding chapter of their book, Festinger et al. (1956) first gave a case-by-case analysis of all participants in terms of the hypothesized preconditions (independent variables) and outcomes. Participants were then categorized in terms of these variables, and the authors tallied the confirmations and exceptions to the predictions and compared the two situations that differed in the key independent variable (social support). This argument is essentially quantitative. However, it is extensively supplemented by a process analysis of the sequence of events for each individual; this is used to explain apparent exceptions and to modify the hypotheses to some extent. The authors also made use of unanticipated outcomes (e.g., the persistence of predictions of disaster, the identification of visitors as spacemen) that were relevant to their conclusions.

This was a coherent and workable mixed methods design because the different components were compatible and complementary in this particular situation, not because they derived from a single paradigm or were consistent with a single set of assumptions. Testing Festinger et al.’s (1956) specific hypothesis (the primary aim of the study) would ideally have involved an experimental design. However, the nature of the phenomenon addressed by the theory made an experimental test of the hypothesis impossible. The only real alternative was a kind of “natural experiment,” and one was dropped into the researchers’ laps. The authors noted, somewhat apologetically, that the situation that was available to them precluded the sort of formal standardized methods that “the orthodoxy of social science” would normally require (pp. 248-249); consequently, the sampling and data collection were almost purely qualitative. These consisted mainly of the use of participant observers, who gathered whatever data they could that related to the theory and research questions—including data on the meaning, context, and process of the group’s activities—and produced a detailed narrative account of the events leading up to and following the disconfirmation of the group’s predictions.

The crucial links in making this a coherent design were the analysis and validity procedures employed to connect the authors’ qualitative data to their research questions, hypotheses, theories, and purposes. This was accomplished in two ways. One of these involved quantifying the qualitative data to adapt these to the logical requirements of hypothesis testing. The two groups of believers, which differed in the value of the major independent variable (social support), were compared in terms of the main outcome variable (extent of proselytizing) as well as on other indicators of the strength of commitment (a key mediating variable) both before and after the disconfirmation.

If this were the entire analysis, however, the research results would have been far less convincing than they were given that the number of participants (17) on whom sufficient data existed was quite small. The study’s conclusion that social support was essential to strengthened belief and proselytizing was buttressed by a second qualitative analysis that examined the data on each group member for evidence relevant to the hypothesis and constructed a “mini-case study” of each member. These cases relied heavily on inductive identification of relevant data, attention to meaning and context, and a process account that elucidated the mechanisms by which belief was strengthened or weakened—all features that are characteristic of qualitative research. In addition, most of the report was such a “case study” of the entire phenomenon, revealing in rich detail how the group developed and how it responded to the disconfirmation of its predictions. These analyses reveal (or create) a set of implicit qualitative research questions about the meaning, processes, and context of the events studied that parallel the quantitative hypothesis and connect to qualitative aspects of the authors’ conceptual framework. This dual analysis was facilitated by the conceptual framework for the study, which included both variance and process aspects of the phenomenon.

The validity of Festinger et al.’s (1956) conclusions is vulnerable to the fact that traditional experimental controls were impossible, that data were not collected in a structured way that would ensure reliability and facilitate comparison, and that the sample was quite small and self-selected. The researchers’ main strategy for dealing with these validity issues was to explicitly identify plausible alternative explanations and to use their data to argue that these are not credible explanations for their results. This strategy draws on both qualitative and quantitative approaches.

We believe that few, if any, sequentially “mixed” designs of the type described by Patton (1990) maintain a complete sequential separation of the qualitative and quantitative elements of the research. As in this example, the different components tend to grow “tendrils” backward and forward, integrating both qualitative and quantitative elements into all components of the research. This is understandable given the “resonance” among the components of each approach; qualitative data collection tends to generate qualitative analysis, research questions, conceptualizations, and validity strategies, and the same is true of quantitative components, while a qualitative component of the conceptual framework tends to generate qualitative research questions and methods.

Another approach that blurs the distinction between component and integrated designs is to conduct the quantitative and qualitative data collection strands in parallel, as in the studies by Trend (1978/1979) and Festinger et al. (1956), but to embed these within an overall experimental or quasi-experimental design, one that involves a deliberate intervention as well as establishing experimental and control conditions. This sort of design has been employed by Lundsgaarde, Fischer, and Steele (1981) and by Maxwell et al. (1986), among others. Such studies are classed as integrated designs by Caracelli and Greene (1997, p. 26) and would be considered mixed model designs by Tashakkori and Teddlie (1998) because they go beyond the mixing of methods in a strict sense. However, the actual methods components can range from largely separate (as in the study by Maxwell et al., 1986) to closely integrated (as in the study by Milgram, 1974 [discussed later]). The study by Lundsgaarde et al. (1981), conducted during 1976-1977, illustrates some of the possible complexities of such designs.

These researchers, all anthropologists, carried out what they described as an “ethnographic” study of the effect of a computerized medical information system (known as PROMIS) on the functioning of a hospital ward. They did this by studying two hospital wards prior to the implementation of PROMIS and then continuing this research while PROMIS was introduced on one of the wards, using the other ward as a control group. They described this ethnographic study as one “component” of the PROMIS evaluation; it was designed to complement the other components of the evaluation, which employed a quantitative analysis of medical records to determine the impact of PROMIS on the health care delivery process. The context in which PROMIS was implemented was politically charged, and the developers of the overall evaluation strategy were concerned that variation in human and situational variables might make it difficult to interpret the overall quantitative results. The goals of the ethnographic study were to document the events surrounding the implementation of the PROMIS system, and the experiences of the health care providers using this system, using a more descriptive and inductive approach so as to characterize the context in which the system was developed and demonstrated (Lundsgaarde et al., 1981, pp. 10-11).

However, the “ethnographic” component itself involved a mix of qualitative and quantitative elements. The purposes (described previously) were mainly qualitative but included the explicit comparison of the experimental and control wards so as to determine the effects of the PROMIS implementation on the experimental ward. The conceptual framework for the study was largely drawn from innovation theory (expressed in 20 “propositions” that were a mix of variance and process statements) and the sociology of medical practice. No research questions were explicitly stated; although some process questions can be clearly inferred from the study’s purposes, the overall evaluation was guided by a specific variance hypothesis about the effect of PROMIS on patient care behavior, a hypothesis that was tested by the quantitative components of the evaluation. The ethnographic component relied heavily on participant observation, informal interviewing, and document analysis, and Lundsgaarde et al. (1981) presented an explicit defense of such qualitative methods in terms of the goals of context and meaning (p. 16). However, the study also included a questionnaire, a more structured interview, and a comparative observational assessment (following the introduction of PROMIS) of the amount of time spent generating and using medical records on the two wards, using matched pairs of residents or interns observed on a randomized schedule. (The latter task was required by the funding institution [p. 11] and forced a reallocation of much of the later qualitative data collection time from participant observation to in-depth interviews.) In addition, midway through the study, the observers on the two wards switched settings so as to gain a comparative perspective and to control for biases.

Lundsgaarde et al. (1981) justified this mix of quantitative and qualitative methods as a means of triangulating data and resolving contradictions between data sources (p. 16). The concerns of the evaluation planners were well-founded; the quantitative analysis of medical records found no statistically significant advantages of PROMIS over its manual counterpart, while the ethnographic study showed that “many of the clinicians who were required to use the system were unwilling participants in the experiment and even unsympathetic to many of the goals of those who developed it” and that “many of the human and organizational problems . . . could have been avoided, or at least neutralized, if the developers had paid more attention to contextual social variables affecting system users” (p. 2). The authors stated,

It is the unpredictability of the temporal characteristics of all innovations that presents researchers with the most thorny problems of analysis. The objective measurement of the rate of acceptance, and the estimation of the potential rate of diffusion, has proved the most difficult analytical problem in our study of the PROMIS innovation. (p. 4)

For this reason, they emphasized “the importance of a multifaceted and flexible research design for the study of the many social and operational problems created by the installation of [PROMIS]” (p. 9). The presentation of the results of the study demonstrated a close integration of the quantitative and qualitative elements in drawing conclusions and addressing validity threats. For example, their discussion of the effect of PROMIS on house staff physicians (pp. 61-91) closely integrated the data from participant observations, qualitative interviews, and the systematic time-sampling observations of residents and interns. This presentation embedded the statistical analysis of the quantitative behavioral data in a descriptive account of these activities, one that clarifies the contextual variations in, and influences on, these behaviors. They noted that the quantitative data did not support the widespread perception on the experimental ward that residents and interns spent more time entering medical data into patients’ records, and the authors devoted considerable space to discussing possible reasons for this misperception, drawing on their interviews and observations (pp. 86-90).

While this evaluation superficially resembles a component design, with separate qualitative and quantitative components of the evaluation, a detailed examination reveals a much more integrated design. Some of this integration may initially have been externally imposed but was fully incorporated into the analysis, validity procedures, and conclusions. The triangulation of different methods was the result of not only the different purposes of the evaluation but also the validity concerns that would have threatened a purely quantitative study. The presence of quantitative elements in the ethnographic part of the study was partly the result of an implicit purpose (the researchers’ need to satisfy the external funder) that had little intrinsic connection to the study’s conceptual framework. However, these elements were closely integrated into the study’s analysis, using a validity approach based on both quantitative and qualitative concepts (experimental controls and statistical tests and a process approach to ruling out alternative explanations). Figure 9.5 provides a design map of this study.

Figure 9.5. Design Map of the Lundsgaarde et al. (1981) Study.
Research Question: hypotheses about effect of PROMIS on users’ practices; (implicit) questions about context of innovation and users’ experiences.
Methods: 2 wards, experimental and control; pre/post implementation measures; comparative behavioral observations; structured questionnaire; data from clinical records; statistical analysis; participant observation; interviews; documents.
Validity: experimental controls (pre/post control group design); statistical hypothesis testing; triangulation of methods and data sources; ruling out alternative explanations.

The quantitative and qualitative elements can be even more closely integrated than in this example. Milgram’s (1974) Obedience to Authority is a report of an experimental study (carried out between 1960 and 1963) of how people respond when they are ordered by authorities to inflict pain and possible serious harm on others. Milgram and his associates designed a series of laboratory situations in which participants were deceived into believing that they were part of a study of the effects of punishment on learning and were then told to give increasingly severe electrical shocks to a supposed “subject” who was actually an accomplice of the researchers and who feigned pain and eventual refusal to cooperate. Unlike Festinger et al. (1956), Milgram (1974) explicitly grounded this study in the experimental tradition in social psychology (p. xv). The researchers employed numerous different experimental conditions designed to determine the effect of different variables on the degree of obedience (the dependent variable), and they collected quantitative data about the level of shock that participants administered (the main measure of obedience) in each of the different conditions.

However, the researchers were also concerned with the process by which people responded to the researchers’ directions: how the participants made sense of and reacted to these directions and why they complied with or resisted the orders. In introducing the individual case studies, Milgram (1974) stated,

From each person in the experiment we derive one essential fact: whether he has obeyed or disobeyed. But it is foolish to see the subject only in this way. For he brings to the laboratory a full range of emotions, attitudes, and individual styles. . . . We need to focus on the individuals who took part in the study not only because this provides a personal dimension to the experiment but also because the quality of each person’s experience gives us clues to the nature of the process of obedience. (p. 44)

The researchers covertly recorded the participants’ behavior during the experiment, interviewed some participants at length after the experiment was over to determine their reasons for compliance or refusal, and sent a follow-up questionnaire to all participants that allowed expression of their thoughts and feelings. The analysis of these data is primarily qualitative but is closely integrated with the quantitative data. The results chapters of the book present a fine-grained blending of quantitative tables and graphs with observational notes, excerpts from recorded observations and interviews, and case studies of particular participants’ responses to the experimental situation.

In addition, the theoretical model developed from the study is not a pure “variance” model, restricted to the different variables that affect obedience; as in the study by Festinger et al. (1956), it incorporates extensive discussion of the social processes and subjective interpretations through which obedience and resistance to authority develop. And in discussing potential validity threats to the study’s conclusions, Milgram (1974) used both the quantitative results from the experimental manipulations and qualitative data from the observations to rule out these threats. In this study, experimental intervention, laboratory controls, and quantitative measurement and analysis were integrally combined with qualitative data collection and analysis to answer both qualitative and quantitative research questions. Although Milgram himself said virtually nothing explicitly about the integration of quantitative and qualitative elements in this study, Etzioni (1968) claimed that this research “shows that the often stated opposition between meaningful, interesting humanistic study and accurate, empirical quantitative research is a false one: The two perspectives can be combined to the benefit of both” (cited in Milgram, 1974, p. 201). Figure 9.6 provides a design map of the study.

Figure 9.6. Design Map of the Milgram (1974) Study. [Figure panels: Conceptual Framework; Research Question (largely implicit).]

Conclusions and Implications

In this chapter, we have tried to show the value of a broader and more interactive concept of research design for understanding mixed methods research. We have also argued for a broader and more fundamental concept of the qualitative-quantitative distinction, one that draws on the idea of two different approaches to explanation as well as two different types of data. Through detailed examination of particular studies, we have tried to demonstrate how these tools can be used to attain a better understanding of mixed methods research. We draw several implications from these arguments and examples.

First, the logic-in-use of a study can be more complex, and can more closely integrate qualitative and quantitative elements of the study, than an initial reading of the report would suggest. The studies by Blumstein and Schwartz (1983), Lundsgaarde et al. (1981), Festinger et al. (1956), and Milgram (1974) all involved a greater integration of qualitative and quantitative approaches than one would guess from their explicit descriptions of their methods, and the two other studies presented (Sutton & Rafaeli, 1988, 1992; Trend, 1978/1979) may be exceptions only because the authors had published candid in-depth accounts of their studies’ designs and methods, including aspects rarely addressed in research reports.

Second, the interactive design model that we have presented can be a valuable tool in understanding the integration of qualitative and quantitative approaches and elements in a particular study. For example, the conceptual framework of a study may be largely variance theory (Sutton & Rafaeli, 1988, 1992, Phase 1), largely process theory (Sutton & Rafaeli, 1988, 1992, Phase 2), a combination of both types of theories (Trend, 1978/1979; Lundsgaarde et al., 1981), or an integration of the two in a single theory (Festinger et al., 1956; Milgram, 1974).

Third, there is considerable value in a detailed understanding of how qualitative and quantitative methods are actually integrated in particular studies. For example, the degree of integration of qualitative and quantitative elements in the conceptual framework, analysis, or validity components of a study might not correspond to the integration of data collection methods. The study by Lundsgaarde et al. (1981) has more integration in the methods and validity components than in the conceptual framework, while the study by Festinger et al. (1956) has more integration in the conceptual framework and validity than in methods. In addition, the actual integration among different components of the design is often essential to understanding how a particular combination of quantitative and qualitative elements is or is not coherent. For example, the integrated process/variance conceptual framework of Milgram’s (1974) study played a key role in the integration of methods and analysis.

Fourth, we do not believe that typological models by themselves provide adequate guidance for designing mixed methods research. The examples and analyses of specific studies provided by Greene and Caracelli (1997) and by Tashakkori and Teddlie (1998) are essential complements to their typologies; these provide both a concrete realization of how the types play out in practice and an illustration of aspects of mixed methods design that are not captured in the typology.

Fifth, we also believe, however, that there is no easy generalizability or transferability of the analysis of particular studies; the actual integration of the components of a study is influenced by a wide range of conditions and factors and is not dictated by the category in which it fits. The design model that we have presented is a tool for designing or analyzing an actual study rather than a template for designing a particular type of study. In a sense, we are presenting a more qualitative approach to mixed methods design, emphasizing particularity, context, holistic understanding, and the process by which a particular combination of qualitative and quantitative elements plays out in practice, in contrast to a more quantitative approach based on categorization and comparison. As with quantitative and qualitative approaches in general, we advocate an integration of the two approaches.

References

Becker, H. S. (1990). Generalizing from case studies. In E. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 233-242). New York: Teachers College Press.

Bernstein, R. J. (1992). The new constellation: The ethical-political horizons of modernity/postmodernity. Cambridge, MA: MIT Press.

Blumstein, P., & Schwartz, P. (1983). American couples. New York: Simon & Schuster.

Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of styles. Newbury Park, CA: Sage.

Bryman, A. (1988). Quantity and quality in social research. London: Unwin Hyman.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally. (Reprinted in 1966 as Experimental and quasi-experimental designs for research)

Caracelli, V. J., & Greene, J. C. (1997). Crafting mixed-method evaluation designs. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74, pp. 19-32). San Francisco: Jossey-Bass.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.

Creswell, J. W. (1994). Research design: Quantitative and qualitative approaches. Thousand Oaks, CA: Sage.

Etzioni, A. (1968). A model of significant research. International Journal of Psychiatry, 6, 279-280.

Festinger, L., Riecken, H. W., & Schachter, S. (1956). When prophecy fails. Minneapolis: University of Minnesota Press.

Grady, K. A., & Wallston, B. S. (1988). Research in health care settings. Newbury Park, CA: Sage.

Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. In J. C. Greene & V. J. Caracelli (Eds.), Advances in mixed-method evaluation: The challenges and benefits of integrating diverse paradigms (New Directions for Evaluation, No. 74, pp. 5-17). San Francisco: Jossey-Bass.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.

Harré, R. (1972). The philosophies of science. Oxford, UK: Oxford University Press.


Harré, R., & Madden, E. H. (1975). Causal powers: A theory of natural necessity. Oxford, UK: Basil Blackwell.

Kaplan, A. (1964). The conduct of inquiry. San Francisco: Chandler.

Kidder, L. H., & Fine, M. (1987). Qualitative and quantitative methods: When stories converge. In M. M. Mark & R. L. Shotland (Eds.), Multiple methods in program evaluation: New directions in program evaluation (pp. 57-75). San Francisco: Jossey-Bass.

Koontz, V. (1976). Innovation: Components of bundles. Medical Care, 7(4).

LeCompte, M. D., & Preissle, J. (1993). Ethnography and qualitative design in educational research (2nd ed.). San Diego: Academic Press.

Lundsgaarde, H. P., Fischer, P. J., & Steele, D. J. (1981). Human problems in computerized medicine (Publications in Anthropology, No. 13). Lawrence: University of Kansas.

Martin, J. (1982). A garbage can model of the research process. In J. E. McGrath, J. Martin, & R. Kulka (Eds.), Judgment calls in research (pp. 17-39). Thousand Oaks, CA: Sage.

Maxwell, J. A. (1990). Response to “Campbell’s Retrospective and a Constructivist’s Perspective.” Harvard Educational Review, 60, 504-508.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62, 279-300.

Maxwell, J. A. (1996). Qualitative research design: An interactive approach. Thousand Oaks, CA: Sage.

Maxwell, J. A. (1998). Using qualitative research to develop causal explanations. Working paper, Harvard Project on Schooling and Children, Harvard University.

Maxwell, J. A., & Mohr, L. B. (1999). Quantitative and qualitative: A conceptual analysis. Paper presented at the annual meeting of the American Evaluation Association, Orlando, FL.

Maxwell, J. A., Sandlow, L. J., & Bashook, P. G. (1986). Combining ethnographic and experimental methods in evaluation research: A case study. In D. M. Fetterman & M. A. Pitman (Eds.), Educational evaluation: Ethnography in theory, practice, and politics (pp. 121-143). Newbury Park, CA: Sage.

McCawley, J. (1982). Thirty million theories of grammar. Chicago: University of Chicago Press.

Merriam-Webster. (1984). Webster’s ninth new collegiate dictionary. Springfield, MA: Author.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Milgram, S. (1974). Obedience to authority: An experimental view. New York: Harper & Row.

Mohr, L. (1982). Explaining organizational behavior. San Francisco: Jossey-Bass.

Mohr, L. (1995). Impact analysis for program evaluation (2nd ed.). Thousand Oaks, CA: Sage.

Mohr, L. (1996). The causes of human behavior: Implications for theory and method in the social sciences. Ann Arbor: University of Michigan Press.

Patton, M. Q. (1980). Qualitative evaluation methods. Beverly Hills, CA: Sage.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: Sage.

Pitman, M. A., & Maxwell, J. A. (1992). Qualitative approaches to evaluation. In M. D. LeCompte, W. L. Millroy, & J. Preissle (Eds.), The handbook of qualitative research in education (pp. 729-770). San Diego: Academic Press.

Rabinowitz, V. C., & Weseen, S. (2001). Power, politics, and the qualitative/quantitative debates in psychology. In D. L. Tolman & M. Brydon-Miller (Eds.), From subjects to subjectivities: A handbook of interpretive and participatory methods (pp. 12-28). New York: New York University Press.

Reichardt, C. S., & Cook, T. D. (1979). Beyond qualitative versus quantitative methods. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in program evaluation (pp. 7-32). Beverly Hills, CA: Sage.

Robson, C. (1993). Real world research: A resource for social scientists and practitioner-researchers. Oxford, UK: Blackwell.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.

Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton, NJ: Princeton University Press.

Salmon, W. C. (1989). Four decades of scientific explanation. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation (pp. 3-219). Minneapolis: University of Minnesota Press.

Salmon, W. C. (1998). Causality and explanation. New York: Oxford University Press.

Sayer, A. (1992). Method in social science: A realist approach. London: Routledge.

Sayer, A. (2000). Realism and social science. Thousand Oaks, CA: Sage.

Staw, B. M. (1992). Do smiles lead to sales? Comments on the Sutton/Rafaeli study. In P. Frost & R. Stablein (Eds.), Doing exemplary research (pp. 136-142). Newbury Park, CA: Sage.

Sutton, R. I., & Rafaeli, A. (1988). Untangling the relationship between displayed emotions and organizational sales: The case of convenience stores. Academy of Management Journal, 31, 461-487.

Sutton, R. I., & Rafaeli, A. (1992). How we untangled the relationship between displayed emotions and organizational sales: A tale of bickering and optimism. In P. Frost & R. Stablein (Eds.), Doing exemplary research (pp. 115-128). Newbury Park, CA: Sage.

Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods, No. 46). Thousand Oaks, CA: Sage.

Trend, M. G. (1979). On the reconciliation of qualitative and quantitative analyses: A case study. In T. D. Cook & C. S. Reichardt (Eds.), Qualitative and quantitative methods in evaluation research (pp. 68-86). Newbury Park, CA: Sage. (Original work published 1978)

Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Weiss, R. S. (1994). Learning from strangers: The art and method of qualitative interviewing. New York: Free Press.

Yin, R. K. (1984). Case study research: Design and methods. Beverly Hills, CA: Sage.
