
Schedules of Practical Work for the Analysis of Case Studies of Learning and Development


Journal of the Learning Sciences
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hlns20

Schedules of Practical Work for the Analysis of Case Studies of Learning and Development
Rogers Hall
Published online: 17 Nov 2009.

To cite this article: Rogers Hall (2001) Schedules of Practical Work for the Analysis of Case Studies of Learning and Development, Journal of the Learning Sciences, 10:1-2, 203-222, DOI: 10.1207/S15327809JLS10-1-2_8

To link to this article: http://dx.doi.org/10.1207/S15327809JLS10-1-2_8


COMMENTARY

Schedules of Practical Work for the Analysis of Case Studies of Learning and Development

Rogers Hall
Graduate School of Education, University of California, Berkeley

I have learned something from each of these articles that I can use, either in my own research or in my teaching, so this special issue (for me) is already a success. This is, in part, because the articles provide such a varied and interesting exploration of “intentional learning environments,” which is one focus for this special issue. Across the articles we find (a) teams constructing a virtual world of planetary dynamics in an undergraduate astronomy course (Barab, Hay, & Yamagata-Lynch, 2001, this issue), (b) groups of high-school students exploring the physics of motion using the Interactive Physics™ microworld (Roth, 2001, this issue), (c) an experimental participant solving an open-ended “story problem” in one of the Jasper series of video-anchored macrocontexts (Kulikowich & Young, 2001, this issue), and (d) first graders in a “teaching experiment” working on the concept and related practices of linear measure (Cobb, Stephan, McClain, & Gravemeijer, 2001, this issue). This is a very diverse collection of research projects: each is concerned with an experimental approach to instructional design (all but Cobb et al. cite Brown’s 1992 seminal article), and each involves some form of longitudinal analysis of cases of learning and development.

The main focus of the special issue is to investigate the relation between theory and methods when studying these kinds of environments, and this is where I direct my comments. In particular, I focus on the “practical work of analysis” set out in each of these articles as they attempt to tell us how to go about doing this kind of research. This is important to me for two reasons. First, as Roth (2001, this issue) points out in his article, this work is how researchers in the learning sciences produce accounts of cognition, learning, and teaching. In a trivial sense, this must be the case because “the learning sciences” consists of whatever people in the field do; here are four examples, and so on. However, to the extent that the reflexive intent of these articles is realized, they promise much more for the field.

Specifically, these articles describe new research practices in the learning sciences, and each does this in a way that takes us back to a critical rupture in the field. This rupture started with a pair of landmark studies, each coming from outside¹ the dominant perspective on cognition at the time. One of these was the critical anthropology of cognitive science and respecification of cognition in practice provided by Lave (1988, 1996), and the other was a radical critique of relations among language, computation, and interaction provided by Suchman (1987, 1988). By looking carefully and critically at the work practices of cognitive scientists, Lave and Suchman raised issues that have led the field toward very different theories of learning and cognition. However, they did not, and did not intend to, lay out how to do this new kind of work. The field has been moving quickly since this critical turn, and this special issue is a good occasion to take stock.

Second, attention to the practical work of analysis—as something that can go beyond formulaic specification of “methods” to reveal otherwise hidden layers of work and practical skill—is very important for teaching newcomers how to do research. Because I teach graduate courses in research methods, I have read the articles in this special issue with an eye to piecing together, in each case, a schedule of work that someone would face if they set out to follow the authors’ proposals (i.e., a concrete account of work practices). In the body of my commentary, I want to use these various schedules to underscore some of the more provocative aspects of these articles and to examine puzzles or problems that arise in light of them. More on this in a moment, but this way of reading the papers places them, for me, in the company of a broader collection of other articles that reveal different aspects of the practical work of analysis. I briefly mention those not already cited in the articles as a way of filling in the broader frame.

ACCOUNTING FOR THE PRACTICAL WORK OF ANALYSIS

One part of this collection concerns how to make recordings of human activity. Schatzman and Strauss (1973) made a great case for treating oneself as a recording mechanism that can be coordinated with other, mechanical devices in the field (e.g., audio, video, or even computerized recordings). Goodwin (1981, 1993) described how to arrange audio and video devices to capture, selectively, records of talk in interaction geared to different kinds of research questions. Finally, one of my articles (Hall, 2000; also see Roschelle, 2000, in the same volume) examines how strategies for video recording irreversibly encode theoretical perspectives on learning and teaching.

¹Lave and Suchman both drew from earlier traditions in the human sciences (e.g., activity theory and ethnomethodology, each with its own critical history), but their work has influenced studies of cognition and learning on a scale that is remarkable.

Another part of the collection concerns how to use a set of recordings to identify types of interaction that are typical (Erickson & Schultz, 1977), how to interpret and describe these adequately against a background of ethnographic observation (McDermott, Gospodinoff, & Arons, 1978), and even how to shift among orthographic systems of transcription to selectively highlight different aspects of interaction and language use (Goodwin, 1994; Ochs, 1979). A last part of the collection concerns how to coordinate the work of analysis and writing to generate grounded theoretical categories across a corpus of field materials. I include these articles because every article but Kulikowich and Young (2001, this issue) mentions the work of Strauss and his colleagues. In this tradition, Charmaz (1983) described the process of writing notes and memos (observational and theoretical) that eventually become sections of her published articles. Charmaz’s article, read in parallel with Schatzman and Strauss (1973), starts to make the links between writing field notes and “writing up” visible. In a more programmatic fashion, Strauss (1987) used transcripts from working sessions with his students to describe the process of developing and then writing about grounded theoretical categories. Again, all the articles in this broader collection—and given my purpose, also the articles in this special issue—provide a window into the practical work of linking data collection, inference, and theory development together.

In overview, it is very important to examine the work practices of researchers in this special issue. This is both because some readers will be trying out the approaches described and because work practices tend to be deleted or smoothed over in academic writing (i.e., a traditional view of writing as a later or even final phase of research practice). I have read these articles with an eye to finding or recovering this sense of practical work, and in my commentary, I try to give an account of it for each article. In the sections that follow, I consider each article in turn, first giving a description of what, on my reading, the authors have set out to do. I then extract (with some inferences of my own) a specific schedule of work that may be required to follow their proposals. In terms of this schedule, I consider what is strong in each proposal as well as what I find puzzling or problematic. As a final note before proceeding, these are challenging, provocative articles. They raise issues that are “live” in my own work and teaching, so I hope that my comments will be read as part of an ongoing conversation within the field. Looking back over events of the past 15 years, I see still more critical ruptures ahead for the learning sciences. The articles collected in this special issue help take us forward.


CONSTRUCTING NETWORKS OF ACTIVITY (BARAB, HAY, AND YAMAGATA-LYNCH)

If only in terms of the number of theories drawn together to get an analysis started, this article is the most ambitious of the lot. Social practice theory is pulled in to sketch “knowing about” as a trajectory of participation that is distributed over people and things; activity theory is adopted to draw a boundary around action-relevant episodes that will provide the material for analysis; and actor network theory (ANT) is brought in to follow “knowledge in the making” as a process that builds networks out of these episodes (nodes) that link ideas, people, things, and practices. This entire apparatus is directed toward understanding how students learn in teams as they work on relatively open-ended design projects. As an example, the authors describe a case study of an undergraduate astronomy course where students build virtual reality simulations of planetary dynamics.

The schedule of work, as I read this article, starts by placing observers with video cameras in a classroom where computing-intensive design projects are underway. Students in the projects are also interviewed, and researchers are encouraged to check on the validity of emerging analytic categories by asking for feedback from study participants on an as-needed basis. For field data generated by these arrangements to be usable in subsequent stages of analysis, of course, someone needs to do all this work in a way that is well coordinated (more on this in a moment). Next in the schedule, observations are parsed into a coding scheme that should capture what the authors call a minimal meaningful ontology (also called a context). This ontology, which is developed in advance of the study and wired directly into menu choices in a database entry form used by observers, provides linked categories for issues (e.g., a topic of discussion), who initiates an issue, who participates (other than the initiator), what resources are used (e.g., to include concepts and tools), and what practices are actually carried out by the initiator.

Once episodes or nodes have been coded in this fashion, further codes and subcategories are developed during weekly meetings in which researchers use techniques borrowed from grounded theory (none are mentioned, specifically), and these allow researchers to identify tracers that they will follow systematically through the growing network of episodes. As the network (and its analytic concepts) in their relational database becomes saturated (I am guessing about their use of grounded theory), the researchers use database reporting functions to make graphs that plot Issue and Tracer × Participant × Time (e.g., see Figures 4 and 6). Patterns can be identified in these graphs that allow the researcher to follow the distributed development of knowing about something (e.g., a teacher demonstrates a particular animation technique in one group, then directs another group to consult with the first). Finally, on the basis of finding these patterns and the use of more mundane capabilities for search and frequency analysis, instructional arrangements that appear to support student learning are to be carried forward in future cycles of classroom design experiments.
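To make this schedule concrete for newcomers, it can help to mock up the records involved. The sketch below is my own gloss on the proposal, not the authors’ software: it assumes a hypothetical Episode record mirroring the entry-form ontology (issue, initiator, participants, resources, practices) plus analyst-assigned tracers, and shows how a Tracer × Participant × Time view, of the kind plotted in their network graphs, could be pulled from such records. The names and time indices other than Taro are invented.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Episode:
        """One action-relevant episode (a node), coded with the entry-form ontology."""
        time_index: int                # position in the classroom chronology
        issue: str                     # topic of discussion
        initiator: str                 # who raised the issue
        participants: List[str]        # who else took part
        resources: List[str] = field(default_factory=list)   # concepts and tools used
        practices: List[str] = field(default_factory=list)   # what the initiator actually did
        tracers: List[str] = field(default_factory=list)     # analyst-chosen ideas to follow

    def tracer_timeline(episodes: List[Episode], tracer: str) -> Dict[str, List[int]]:
        """For one tracer, list the time indices at which each person is coded into an episode."""
        timeline: Dict[str, List[int]] = {}
        for ep in sorted(episodes, key=lambda e: e.time_index):
            if tracer in ep.tracers:
                for person in [ep.initiator, *ep.participants]:
                    timeline.setdefault(person, []).append(ep.time_index)
        return timeline

    # Two invented episodes, then a Tracer x Participant x Time view for "eclipse".
    corpus = [
        Episode(6, "shadow rendering", "Taro", ["Mina"], ["animation tool"], ["demonstrating"], ["eclipse"]),
        Episode(65, "eclipse simulation", "teacher", ["Taro", "Mina", "Raj"], ["VR model"], ["directing"], ["eclipse"]),
    ]
    for person, times in tracer_timeline(corpus, "eclipse").items():
        print(person, times)

Printing the timeline for a tracer gives, for each person, the ordered time indices at which they were coded into an episode carrying that tracer, which is the raw material for asking who is central or peripheral to a practice over time.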

This whole schedule of work—along with the package of recording, computing, and representational technologies supporting it—is called the methodology of constructing networks of action-relevant episodes. In my view, the package has both some desirable and some puzzling features. Most desirable, perhaps, is a systematic proposal for “chunking” (Jordan & Henderson, 1995) very dense field materials into episodes that have close relevance for the authors’ theoretical approach to distributed or situated learning. As analytic categories become more refined (i.e., their conditions are established across instances in the corpus), they can be incorporated back into field data collection and are even written into the database entry form used by observers. Also desirable, relatively standard features of a relational database program (specific techniques are not described) are used to facilitate access to and reorganization of what would otherwise be an increasingly unwieldy collection of field data. Software tools for qualitative analysis are becoming increasingly popular, and this article provides an example of their use.

Several things are more puzzling about this approach. First, and acknowledging that one cannot do everything in a 60-page manuscript, I find some sizable gaps within the schedule of work I have just reviewed (well, constructed). One of these concerns is how to coordinate writing computer-based field notes with operating a video camera, in which the quality of records produced by each activity is critical for following the distributed development of knowing about something. In my experience (and that of many others; see Sanjek, 1990), writing field notes is a labor-intensive activity that can be both inspiring and horrifically dull. Sometimes too much is happening to get it all down; at other times, seemingly nothing at all is happening. At the same time (and I really mean the same time, in this case), making a film record of ongoing activity is also a labor-intensive activity. Although something useful may be recorded by a stationary camera with a good quality external microphone (i.e., to include a wireless microphone), my experience is that the interesting stuff usually gets up and moves out of the frame. This means that someone has to operate the camera, and moreover, they need to learn to film complex activity with an eye to the kind of analysis that will later be conducted. Writing field notes and filming each take sustained attention and involve considerable skill. If the coordinations required for doing both together were clearly described, the field would be in better shape.

Another gap that puzzles me (not just in this article, but in general) is how pre-existing codes (e.g., the frame structure for an episode)—or even coding categories that are developed using grounded theory—can span the divide between intended and emergent characteristics of some instructional environment. When methods of analysis are geared to support decisions within or across design experiments, how does the pace of analysis keep up with design concerns, on the one hand, yet still remain open to unexpected developments in a study site, on the other? The opportunistic use of tracers is one interesting response to this problem, and thinking this through leads to questions about how analysis sessions are structured within an ongoing study. For example, would alternating between periods of data collection and periods of close analysis yield adequately dense field materials for longitudinal analysis? I raise this kind of question because I am intimately familiar with the work involved in first “content logging” field materials (both field notes and video recordings) and then selectively transcribing film (or audio) records of talk in interaction. If the study requires bringing categories forward for the next day’s instruction, some kinds of analysis are simply out of the question. What can be managed in this complex tradeoff between the coordinated work of recording, technical preparation of field data, and close analysis of selected episodes is an interesting, open question.

A final gap, or maybe just a criticism, concerns what I read as sharply reductive simplifications of theories concerning social practice, activity, and symmetric analysis of people and things in networks of knowledge in the making. Borrowing the idea that participation follows a trajectory (i.e., from legitimate and peripheral to more central participation), as set out in Lave and Wenger’s (1991) theory of situated learning, seems about right. However, despite a time-ordered analysis that codes the participation status of every participant at every moment (i.e., the network graphs are “full” when anyone in a group engages a tracer), there is no serious attempt to describe who is central or peripheral to some developing practice over time, or whether there are sustained conflicts between participants over how to proceed. For example, at the top of Figure 6 all of the students in two groups of the astronomy class (and, I assume, any others listening) are coded as participants in an episode of the eclipse practice (i.e., all students are coded with the time index, 65). However, following the network backward, I notice that Taro (for example) appears only twice and without full participation in this practice (i.e., nodes ordered 6 and 64). Under this coding scheme, what does it mean for there to be an eclipse practice underway in the classroom and what does it mean for Taro to participate in it?

I find a related theoretical reduction in Barab et al.’s (2001, this issue) use of activity theory to define the minimal meaningful context for an activity. Although they explicitly do not attempt to address a cultural or historical level for activity and its motives in their analysis (e.g., the level that leads Leont’ev’s multilevel approach), they nonetheless recruit bystanders like Taro into something called the eclipse practice. Again, under this coding policy, are the boundaries of what is meaningful simply fixed at the edges of the intended curriculum, regardless of what individuals are doing? Is there any relevant role for conflict, resistance, or ensuing transformation in collective activity? These are central concerns for current work in activity theory (Engeström, 1999), and they could be important for Barab et al.’s analysis, too. Finally, concerning their use of ANT, Barab et al. rule out the possibility that things (nonhumans) can be agents in the networks that are built into knowing about something. One can always simplify. However, particularly in an instructional context with immersive and powerfully dynamic computational media, why is this simplification desirable? More pointedly, after flattening out what is meant by participation (i.e., only and all humans are said to participate), excising any serious analysis of history or culture, and housing individuals in task environments called tracers, how does this schedule of work depart significantly from the collection and analysis of verbal protocols as reports of the contents of mind?

SITUATING COGNITION (ROTH)

This is the most provocative and personal (even idiosyncratic) article in the special issue, in that Roth (2001, this issue) retrospectively sets out to reveal how his own practices of “zooming” across levels of analysis are used to create fully elaborated descriptions of what the individual experiences while doing and learning science. In his view, the results are usually too complex for journal publication, so, as a final part of his schedule of work (discussed later), he needs to simplify and selectively pull the analysis apart so that it will make sense in publication. It is in this provocative sense that Roth (or any researcher) situates or constructs cognition as a selective account of the goings on available in field data. The article is retrospective in that Roth attempts to unite a variety of topics in his prior research around this work of situating cognition (e.g., the role of physical space and artifacts in learning, the appropriation of scientific language by children, and close analysis of relations between gesture and talk as indicative of changing conceptual understanding). Roth’s general proposal is that the analyst needs to zoom across levels of organization that are extended in time so as to follow the distribution (or diffusion) of capabilities across media and learners. The particulars of zooming are illustrated in a study of high-school students who learn about the physics of motion while using a microworld that provides dynamic control over forces acting on idealized physical objects (i.e., Interactive Physics).

The schedule of work in Roth’s illustration starts with entering into a relation of participation with people in some site that allows for broad access to and a lived experience of learning and teaching. In the illustrative case, Roth works as the teacher in three sections of a physics course, but beyond this he teaches for 3 years in the school, so he has extended (and institutionally consequential) relations with participants in the study. Next, Roth makes daily video recordings of each of three groups of students working at computers running the microworld software (see Figure 3). In each case the camera is stationary and (judging from drawings in Figure 5) zoomed (technically, not metaphorically) into the computer screen. Along with these video records, Roth takes time after each physics session to write reflective field notes, collects finished student products in the course (e.g., worked problems and reports on investigations), interacts with and studies these same students in other school contexts, and has access to their full school records.


Roth analyzes these field data in what I read as an alternation between two stances or perspectives on teaching and learning. Inhabiting (literally) the perspective of a teacher, he makes continual rearrangements to instruction on the basis of what he notices day to day in the classroom. Then taking up the perspective of a researcher—an engagement that occurs later and stretches out over “weeks (even months)”—he transcribes all video recordings and repeatedly views these recordings in the context of other documentary materials. The result is a linked textual and graphical environment (e.g., organized as a collection of html documents) through which Roth zooms to examine phenomena of teaching and learning at different levels. In general, these levels reflect articulated dimensions of development for activity, person, and practice (Cole, 1996; Hutchins, 1995; see also Figure 1 in Roth’s article, 2001, this issue). However, for the specific purpose of analyzing conversations over physically inscribed media, Roth arranges these as layers that bridge (somehow, and this is the point of analysis) between media and the school community (see Figure 4). By zooming across these layers, the analyst (Roth or anyone situating cognition in this way) “follows his or her actors in shifting between fields of attention” (p. 31). After what Roth admits may seem like a brute force analysis, he carves the model of cognition at multiple levels into a “separate presentation of each analysis” (p. 33) that meets the space (and programmatic) limitations of journals.
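For readers trying to picture what such a linked corpus might look like in practice, here is a minimal sketch of my own, not Roth’s actual environment: field materials are filed as records tagged with an analytic layer and a position in the chronology, and “zooming” is then just a matter of filtering to one layer and one stretch of time. The level names, days, and content strings are invented for illustration.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Record:
        """One piece of field material: a transcript segment, field note, or student artifact."""
        level: str     # analytic layer, e.g. "ongoing activity", "individual development", "practice"
        day: int       # position in the chronology of the course
        source: str    # e.g. "video transcript", "field note", "student report"
        text: str      # content, or a pointer to the html page that holds it

    def zoom(corpus: List[Record], level: Optional[str] = None,
             start_day: int = 0, end_day: int = 10**6) -> List[Record]:
        """Filter the linked corpus to one layer and one stretch of time."""
        return [r for r in corpus
                if (level is None or r.level == level) and start_day <= r.day <= end_day]

    # Invented entries: pull material filed at the level of ongoing activity in the second week.
    corpus = [
        Record("ongoing activity", 8, "video transcript", "Glen points at the force vector on screen..."),
        Record("individual development", 8, "field note", "Glen's talk is closer to canonical physics today."),
    ]
    for r in zoom(corpus, level="ongoing activity", start_day=6, end_day=10):
        print(r.day, r.source, r.text)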

There are many things to recommend this approach. First, I deeply appreciate that Roth sets out to give an explicit account of how his own work practices construct a version of cognition (doing, learning, teaching). There are selections made at every step, and there is a frank description of what is involved in immersing oneself in field data as an analyst. For example, Roth usually transcribes his own video recordings, and although he does not mention it, there is a good argument that transcription is itself a form of theoretical work that an analyst may not want to farm out to others (Hutchins, 1995; Ochs, 1979). Second, Roth discusses alternating between linked but not identical roles of teacher and researcher in this work, and (by my reading) this divides most of the schedule of work, both in time and in types of activity. For example, Roth writes field notes after teaching each day, and these provide an opportunity for him to test out ideas about students’ understandings and to make corresponding changes in instruction. This experimental attitude is quite similar to what Cobb et al. (2001, this issue) call a form of imagination in instructional design. However, this is not the work of zooming that comes later in Roth’s schedule as a researcher, either in terms of when the work happens, how long it takes, or (I presume) what the prevailing questions are. Looking for productive relations between the work of teaching and research, rather than fusing them into an identity, strikes me as a positive feature of his approach (more on this later). Finally, Roth’s recommendation to follow the lived experience of learners (he describes this as “attention”) is very important. I agree with him (Hall, 1996) that cognitive studies of educational practice, particularly when conceptualized as a design discipline, have to find new ways to break out of a normative observer’s perspective.

There are also some aspects of Roth’s proposal that I find puzzling. First, while alternating between teaching and doing research is used to provide different perspectives on learning, Roth argues that being a teacher–researcher brings with it “all the benefits of ethnographic research” (p. 36). In particular, being a teacher in the classroom is said to allow one to “appropriate the participants’ competence systems” (p. 36). However, to the extent that the ethnographer’s relations with study participants govern what can be observed, how does the sharply asymmetric relation between a teacher and his students (both also in relation to the surrounding organization) either enable or block access to these systems? In thinking about this problem, I am reminded of Eckert’s (1989) ethnographic study of “jocks and burnouts” as symbolic resources for identity formation among high school students. Eckert decided to stay outside classrooms and never to work in teacher-designated spaces because she wanted her relation with students, in which she would inevitably be seen as an adult, to be clearly distinct from the roles of teachers and administrators. Concerning Roth’s proposal, what would students be unlikely to share with their teacher–researcher, how do these things figure in “competence systems,” and what consequences might this have for situating their cognition?

Second, and related to the division of work between teacher and researcher mentioned previously, I wonder what it takes to support downstream zooming with questions that concern phenomena at very different levels of analysis. Moreover, how could teacher–researchers collect field data when they may not even be aware of these questions at the time of recording? For example, I assume the drawings that show gestures at the computer screen (see Figure 5) were produced over the full frame of a video record collected while Roth was teaching (i.e., what can be recovered, visually, is what appears on or at the surface of the computer screen, and nothing else). Knowing about the demands of full-time teaching, and despite Roth’s report that “transcription, data analyses, and other research-related activities were completed at night” (p. 37), I also assume that these drawings and the transcript that (probably) preceded them were created long after Roth taught the course. Given all this, at some later point Roth zooms in to do a study of gesture and talk. It is only at this point that he and other researchers who were not present in the classroom begin to notice and systematically track the deictic and iconic functions of gesture in relation to technical language, as well as the developmental course of their full articulation (e.g., Roth reports that it takes Glen and his peers 2 weeks to point and talk in a way that is “consistent with scientific practice”; p. 54). However, as Roth and his colleagues do this analysis, their attention must be limited to whatever happens to pass in front of the stationary camera that was running while Roth was teaching. In this sense, the analysts have no access to gestural production beyond the edges of the computer screen, and they have very limited information about who is attending to what is assembled at the screen, or how that assembly is produced as a matter of recipient design (i.e., what Roth says is “created for the analyst spectator”; p. 56). My point is certainly not to detract from Roth’s analysis of gesture, which he has effectively cut apart for publication (the work is listed as being in press), but instead to raise two more general issues. One is that important limits on zooming are irrevocably set at the time of recording. The other is that, in principle, the time scales of teaching and doing research may not always match either in sequence or pace (i.e., what can be “completed at night” when a detailed analysis of development extends over 2 weeks?).

LOCATING AN ECOLOGICAL PSYCHOLOGY METHODOLOGY (KULIKOWICH AND YOUNG)

This article is relatively narrow in the kind of environment being analyzed (i.e., an individual attempting to solve a computer- and video-based task). However, in keeping with the other articles in this issue, it is quite broad in its theoretical aspirations. Kulikowich and Young are concerned with building an analytic scheme that will render (or literally inscribe, discussed later) the ecology of information-seeking behavior of a learner faced with an open-ended problem to solve. They have constructed a computer-based planning tool that sits atop one of the Jasper series of video-based problems developed for anchored instruction by the Cognition and Technology Group at Vanderbilt University. Design of this planning tool and their analysis of its use draw from and propose to extend work in ecological psychology, a field that provides theoretical resources for studies of situated cognition and learning.² To support the learner, their planning tool provides a structured template for asking questions and recording facts that may be relevant for solving a problem (e.g., finding sources of fuel on the basis of hints buried in the video context). To support an analyst, on the other hand, their tool is designed to capture dynamic traces of information seeking by collecting “dribble files” as the student (or experimental subject) works with the tool. The tool and proposed method of analysis are illustrated with a case study of a single student (an undergraduate, I assume) who, as recorded in the dribble file, makes two attempts “to successfully complete the task” (p. 185) of piloting a boat along a creek (i.e., how the student describes his approach to the video-based context). By my reading, the case study contrasts what can be discovered by using (a) a cluster analysis over sequences in the dribble file to identify types of information-seeking behavior and (b) a time- or topic-based analysis of interface events to identify qualitative shifts in solution path.

²Lave, Murtaugh, & de la Rocha (1984, pp. 69–72), for example, drew from ideas about staffing and synomorphy in Barker’s studies of behavior settings when working out a unit for “person-acting-in-setting.” More recently, Greeno (1995) has drawn from Gibson’s account of direct perception to describe how individuals learn and are able to transfer what they understand by becoming attuned to “constraints and affordances” in particular types of situations.

Because Kulikowich and Young describe anchored instruction in some detail, I include this as part of the schedule of work that would be involved in their approach. First, they need to find or build problem-solving contexts that are rich or authentic for some particular collection of learners. This involves moving away from worksheets or word problems (e.g., a typical distance, rate, time problem in middle-school mathematics) and toward problem situations that include more extensive descriptions of context, a broader array of information that may be relevant to solution, and thematic material important for the subject matter discipline of instruction. This kind of enriched problem, called a macrocontext, is illustrated in the case study by one of the early videos in the Jasper series, Journey to Cedar Creek (i.e., a distance, rate, and time problem embedded in a complex scenario about buying and piloting a boat; see Bransford, Zech, Schwartz, Barron, & Vye, 1996). Next, Kulikowich and Young adopt what they call an “instrumental interventionist approach” to design a planning tool for learners using the macrocontext. In this case, the tool combines a map (i.e., to navigate through the video, in which narrative time follows a trip along the river), menu-driven prompts that encourage well-structured questions, forms for the student to record information while searching the video, and various assessment items.

Given a tool fitted to the macrocontext, studies are conducted in which problem solvers simultaneously access the macrocontext through the tool and inscribe their problem-solving activity in a dribble (or log) file. Although their case study is of an individual in an experimental setting, the tool and macrocontext could (presumably) also be used by groups in a classroom. The resulting dribble file is analyzed as a protocol of information-seeking utterances and actions. This protocol can be clustered into activity types (e.g., searching the video and finding facts are shown to be related activities), marked up with qualitative descriptions to trace a participant’s goals or intentions, and graphed over time or topic to show patterns in information-seeking behavior (e.g., see Figures 6 and 7). Kulikowich and Young describe these steps as an interplay between quantitative and qualitative analysis. Finally, the results of analysis are fed back into their interventionist stance to design new visualization tools and studies of agent–environment interaction.
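To see what a dribble file gives the analyst to work with, here is a minimal sketch of my own, not the authors’ procedure: the log is treated as a series of time-stamped interface events, and simple tallies stand in for their cluster analysis. The event names and times are invented; the last step flags long gaps of the kind later described as “dwell time.”

    from collections import Counter

    # A dribble (log) file reduced to (seconds_from_start, event_type) pairs; values are invented.
    events = [
        (5, "plan"), (12, "question"), (20, "search_video"), (34, "record_fact"),
        (41, "search_video"), (55, "record_fact"), (210, "plan"), (222, "question"),
    ]

    # 1. Which kinds of interface events tend to follow one another?
    transitions = Counter((a[1], b[1]) for a, b in zip(events, events[1:]))

    # 2. A coarse timeline of activity types, bucketed into one-minute bins.
    timeline = Counter((t // 60, kind) for t, kind in events)

    # 3. Candidate "dwell" moments: long stretches with no interface events at all.
    dwells = [(a[0], b[0]) for a, b in zip(events, events[1:]) if b[0] - a[0] > 60]

    print("common transitions:", transitions.most_common(3))
    print("events per minute bin:", sorted(timeline.items()))
    print("gaps longer than a minute:", dwells)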

Kulikowich and Young start their article with the observation that “there is nothing simple about the theories of knowing that draw on the complex dynamics of agent–environment interactions” (p. 168), and by the end most readers would probably agree. I appreciate their efforts to link together theoretical ideas from ecological psychology with studies of situated learning and action. For example, they explore how a moment of interaction could be understood to emerge from an ontological descent (see Figure 1) that binds together constraints and affordances at multiple levels of organization and development (i.e., what they call “spacetimes” in Figure 2). This is similar to Roth’s argument (also to Hutchins and Cole, mentioned previously), and the whole lot should help direct our attention both to the cultural–historical axis of a learner’s ongoing activity and to the insertion of researchers and designers (both of technology and curriculum) into that trajectory. Entering this expanded theoretical landscape for learning and development also leads to “nothing simple,” of course, but it does lead to interesting problems for the learning sciences. One of these, and another contribution in this article, is how to think about the tight coupling between people and tools. This is not just a process through which the environment determines perception (i.e., welding direct perception to internalization as an explanation for learning), but it is also a process through which people create and externalize the very stuff of the environment.

Theory meets method in this article in ways that I also find deeply puzzling. First, and regarding early phases in the schedule of work, developing authentic (or even interesting) macrocontexts is a labor- and capital-intensive project. By my reading of the literature on how the Jasper series has been developed and used, the video context alone is not what is most important. Instead, there needs to be a carefully elaborated approach to instruction that, outside of experimental settings, involves extensive professional development. Breaking away from word problems sounds like a great idea, but it is a very large first step, at least if one wants to study information-seeking behavior in the context of classroom teaching. Adding in the development of planning tools that fit the content and technical infrastructure of a particular macrocontext, we get a second investment in technical design. Of all the articles in this special issue, this one comes with the highest entry cost. By analogy to centralized projects in high-energy physics, the proposed work is interesting, but it is not entirely clear who could play in this game.

A second puzzle concerns how to work out a complex theory of “perception–conception–action couplings” (i.e., a very nimble set of interpretive phenomena) over data that consists exclusively of a time-stamped series of interface events (i.e., the dribble or log file). These events are precisely and exhaustively recorded, and they may even allow for a kind of playback that can recover what was active on computer and video screens at any given moment. However, these data are not protocols or conversations in any usual sense, despite arguments given in the article. The test, of course, is in the kinds of findings they do provide under analysis, and here the evidence is not convincing. For example, after log files are subjected to cluster analysis, we discover that some menu selections (e.g., planning) are associated with others (e.g., questioning). However, how could it be otherwise if the planning tool has been reasonably well designed? Would a user not need to trace just this hierarchical organization of related menus and modes of action in order to use the environment at all? As a second example, a qualitative analysis of the log files in combination with a time-based graph of activity types shows that the student pauses at two “critical transition points.” These are called moments of “dwell time” and, ironically, they are precisely the moments at which nothing at all shows up in the data that would help to recover, render, or analyze them as consequential moments of interaction. It is in this sense that the dribble files are exactly not conversations. They are a form of data that can show us something, but it seems odd that these data are mute (or at least provide nothing for further analysis) at just these critical moments.

PARTICIPATING IN CLASSROOM MATHEMATICAL PRACTICES (COBB ET AL.)

Of all the articles in this special issue, this is the one that I find most instructive in terms of the broader project set out in the introduction (i.e., considering theory, methods, and environment). Although Cobb et al. have been writing about their constructivist studies of learning in classrooms for about a decade, this article is a welcome overview (and update) of how they approach the design, conduct, and analysis of “teaching experiments” in elementary school mathematics. Joining forces with a parallel program of realistic mathematics education in the Netherlands (i.e., Gravemeijer’s earlier work, in which realistic was to mean realizable by a learner), they describe a developmental approach to instructional design and research that is oriented, at bottom, toward “improving learning” rather than producing grand cognitive theory.³ They illustrate this approach with a case study of the development of mathematical practices for linear measure among elementary students and their teacher. In the analysis, they alternate between social and individual perspectives on learning, arguing (as has Rogoff) that each provides a partial and complementary view of a single phenomenon. The result, in their analysis, is a description of collective practices (dispositions, expectations) and individual acts of reasoning (measuring, challenging) that remain visible and, in theory, make each other up.

The schedule of work for Cobb et al. is a “design-research cycle” that starts with an explicit phase of imagination in which designers (including teachers) anticipate how learners’ diverse activities could give rise to a sequence of linked mathematical practices. This genetic sequence is said to emerge in response to instructional materials, tasks, and the organization of talk in a classroom. So the design challenge becomes one of supporting and organizing the trajectory of practices. In the case study, for example, Cobb et al. expected that students might reason about and participate in body-based measurement practices in two quite different ways. One would understand numbers as references to discrete acts of measuring (e.g., 5 refers to the 5th step in a heel-to-toe measuring scheme), whereas the other would treat numbers as structural properties of the measured object (e.g., 5 refers to an accumulated distance that structures the measured object). The second understanding might arise out of the first in response to changing the embodied units (e.g., using hand or arm spans) and pushing students for conceptual explanations of their activities (e.g., explaining how the spatial extent of an object is the same, even when measured using different units). Next, a teaching experiment lasting several months is organized using these materials, and in the experiment, researchers are encouraged to play an active role by asking students questions about their activity. Clinical interviews are conducted with all students before and after the experiment, then video recordings are made that follow researchers working with target students, and these (or other) researchers write field notes and collect relevant student work. The result is a rapidly expanding corpus of field data.

³I must say, looking back, that Cobb’s work has been more ambitious regarding theory than most in mathematics education, spanning Piagetian, radical constructivist, social practice, and cultural-historical theories of learning and development. The practical humility advocated in this paper, however, looks to be productive by focusing on what emerges among learners as a way to decide what to do next in a teaching experiment.

As the teaching experiment is underway, the research team meets regularly to analyze the accumulating field data. Cobb et al. describe this as a process of merging two lists: One is a chronology annotating critical events or episodes, and the other is a linked list of conjectures about developing practices, refutations, or confirmations based on supporting field data. Working out the relations between these lists requires generating a description of development for each target student, but also generating a description of classroom practices that spans individual students (i.e., alternation between individual and collective perspectives). In this fashion, the research team develops a set of grounded theoretical categories concerning changes in classroom mathematical practices. As these descriptions and categories are produced, the team revisits their initial design decisions, possibly rearranging the instructional sequence and looking for new forms of evidence about theoretical categories as the teaching experiment continues. As any particular iteration of a teaching experiment concludes, the team writes up their analysis with an eye to the usefulness of their reports for different audiences. In particular, one of their strategies for reporting attempts to document instructional sequences in a way that is open to further analysis and refinement (what they call “local instructional theories”) in the context of teacher professional development.
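The two lists are easy to imagine as coordinated records. The sketch below is my own, not the research team’s actual tooling: a chronology of critical episodes and a conjecture log whose entries point to the sessions read as confirming or refuting them. The wording of the example entries is invented; the session numbers echo the tape-marking episodes discussed below.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Event:
        """One entry in the chronology: a critical classroom episode, kept in order."""
        session: int
        note: str

    @dataclass
    class Conjecture:
        """One entry in the conjecture log, linked to the sessions that bear on it."""
        claim: str
        confirming: List[int] = field(default_factory=list)   # sessions read as support
        refuting: List[int] = field(default_factory=list)     # sessions read as counter-evidence

        def standing(self) -> str:
            if self.refuting:
                return "revise or drop"
            return "retain" if self.confirming else "open"

    # Invented entries for illustration.
    chronology = [
        Event(1, "Teacher marks intermediate measuring results with tape."),
        Event(16, "Megan enlists the teacher to mark intermediate results on the adding machine tape."),
    ]
    conjectures = [
        Conjecture("Measure is treated as a property of the measured object.", confirming=[16]),
    ]
    for c in conjectures:
        print(c.standing(), "-", c.claim)

Keeping the links explicit is what lets the team revisit a conjecture’s standing as new sessions are added to the chronology.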

There are many strengths in this article, and I mention two that I find particularly useful. First, their description of field data collection is clear, and it raises an important issue for research based on longitudinal case studies. Cobb et al. describe making daily video records of researchers who follow the activities of five target students. I assume there is another person running the camera, and a remote microphone is carried with the researcher to capture conversations with target children. Under this strategy for recording, and subject to considerable skill and coordination, the video captures a researcher’s selective orientation to how the teaching experiment is progressing. This has interesting consequences (at least in my imagination). Because the camera follows the researcher, the primary record skips across target students and does not capture the ongoing activity of any particular student who participates in an emerging mathematical practice. What this strategy loses, I suppose, are moments that are not produced for (or with) an adult participant. On the other hand, because a researcher writes reflective field notes about their daily experience, the film record is tightly linked to their ongoing analysis, and so is already partly analyzed even before being taken out of the camera. What I find interesting about this arrangement is that it compresses the time required to coordinate notes and film records, and so makes these field materials accessible to the relatively fast-paced (i.e., daily) work of experimentation underway in the research team. This looks like an interesting tradeoff, and it provides another example of how to coordinate writing field notes and collecting video recordings with an eye to particular kinds of analysis.

A second strength is Cobb et al.’s deliberate attention to the difficult work of combining detailed investigation of particular episodes with a broader theoretical analysis of what develops during a teaching experiment. For example, during the first session of the experiment, they notice that the teacher’s use of tape to mark intermediate measuring results becomes a visible (and durable) resource for students as they challenge each other to explain why measuring schemes work. Then when following the progress of a particular student (Megan), they use this same observation about intermediate marks as a mediating resource to help explain her more sophisticated understanding of measured values. In particular, during the 16th classroom session, Megan enlists the teacher to mark off intermediate results on an adding machine tape being measured as if it were a piece of lumber. Her performance is produced as an explanation for another student, and it demonstrates what she “contributed to the emergence of the third [most sophisticated] mathematical practice” (p. 142). Their analysis not only pursues theoretical categories across episodes looking for confirming evidence, but it also threads these conjectures together in a way that helps reveal individual and collective dimensions of development.

An important aspect of this article that I find puzzling is how (a) “normative taken-as-shared” expectations about activity are combined with (b) “the diversity of students’ ways of participating” in these normative practices to (c) make inferences about what develops. This is a mouthful, and it joins Cobb et al.’s theoretical claims to the data they collect. Early in the article, they explain why they prefer taken-as-shared expectations as an alternative to shared when describing collective norms or practices. This leaves open the possibility that individual students may not hold beliefs that conform to collective dispositions, and so they may do or say things that are at odds with classroom norms (i.e., this is what is meant by diversity). Because teachers and researchers insist that students give conceptual explanations with backing for their measurements, this diversity of individual beliefs will become visible when students challenge each other in whole-class discussions. With these assumptions in place, Cobb et al. claim to be in a position to make inferences about individual or collective understandings of particular mathematical practices. Moreover, their inferences sometimes proceed from the absence of talk about these same practices.

For example, in the development of “measuring by iterating a footstrip” (i.e., the second mathematical practice), students initially disagree with a researcher’s suggestion to fold up or cut off the remainder of a measuring strip. Because the researchers’ proposal makes sense from a more sophisticated practice of measuring, the students are said to be involved in a less sophisticated practice (with attendant beliefs and understandings). However, the researcher prevails, and on subsequent days, students are seen “mentally cutting the footstrip” without challenge or further discussion. Now Cobb et al. interpret the absence of challenging talk as evidence for a “crucial advance made by the classroom community” (p. 137) concerning measurement as a property of the measured object (i.e., a taken-as-shared classroom mathematical practice). This seems reasonable, but only under the assumption that as teachers and researchers continue pressing students to challenge statements or activities that “violate norms,” the students will do so.

Later in the article, when tracing the development of a practice called “measuring by iterating the Smurf bar” (p. 142), Cobb et al. appear to reverse the scheme and interpret the presence of a challenge between students as evidence that there is not yet a taken-as-shared practice in the classroom. In this example, one student challenges another’s incorrect observation that a value of 20, questioned in the context of measuring with units of 10, refers only to a second use of the measuring unit and not to the spatial extent of the object being measured. Again, the teacher and researchers are pressing students to challenge each other, but now a challenge is taken as evidence that a collective practice has not yet been established.

What puzzles me about this inference scheme is that, when a student says something that an adult would hear as a mistake, the absence of a challenge by other students is taken as evidence that none (or very few) of them understand some practice in the making. On the other hand, when a student says something that an adult would hear as correct, the absence of a challenge is taken as evidence that students (or many of them) understand an increasingly stable mathematical practice. Even assuming that teachers and researchers push for challenges and that costs associated with challenging one’s peers are negligible for students (see Lampert, Rittenhouse, & Crumbaugh, 1996, for evidence to the contrary), this strikes me as being a very delicate problem of interpretation. Of course, new kinds of delicacy may be required as researchers begin to travel across the reflexive relation between individual and collective.

CONCLUDING REMARKS

I started this commentary looking for the details of new research practices in the learning sciences. Each article in the issue provides interesting and useful material from this perspective. Taking stock, all of the articles are trying to follow learning and development by using new units of analysis that can be linked to a critical turn in the field toward theories of situated action and learning. The unit carried forward, at least in my attempt to pull common elements from each of the articles, treats learning as a trajectory of participation in specific practices that can be found and analyzed in moments of mediated interaction. In a reflexive fashion, these moments of interaction are organized by and achieve those very practices. As Barab et al. demonstrate, moments of interaction involve complex relations between people, physical artifacts, and language. As Roth argues, threading them together into a learning trajectory requires analyses that span levels we have traditionally kept separate (i.e., of ongoing activity, individual change, and cultural and historical development). As Kulikowich and Young find in this regard, even carefully designed instructional contexts do not determine the emergent focus or meaning of activity in the moment (i.e., the outcome of what they call an ontological descent). Finally, as Cobb et al. demonstrate most clearly, individual and collective levels of analysis are both required to follow learning trajectories, with neither providing a privileged perspective.

There are challenges and tradeoffs for any set of methods that simultaneously hope to look closely at moments of interaction and to follow these across place and time in complex instructional environments. Several of these are richly illustrated in the articles, and because I have tried to extract schedules of practical work in each instance, I end by making some general observations in these terms. The first challenge concerns getting access to a site in which a design experiment is being or could be conducted. Across the four articles we find an undergraduate science course at a major university, a private high school, a university laboratory, and a public elementary school classroom. In at least two of the articles, relations between the researchers and the site are long term (i.e., spanning periods from a semester to several years), and all but the study described by Cobb et al. require substantial access to computing. There is very little discussion of how to arrange for these kinds of environments, but by my reading, they need to be in place for something like a design experiment to even begin. To the extent that design experiments conducted in intentional learning environments are big-budget, multiyear undertakings, relatively few people can participate (i.e., either researchers, teachers, or learners). To put the question in its most general form, what relations between infrastructure, innovation, and equity both enable and limit this kind of educational research? Can there be innovation without infrastructure or infrastructure without equity?

A second challenge concerns what gets into the record for analysis, given a focus on following interaction over time (i.e., describing learning trajectories), and what kind of coordination of field methods is required to produce a corpus that is adequate for later (or ongoing) analysis. The articles in this issue provide a wide range of “strategies for recording” (borrowing from Schatzman & Strauss, 1973), and my commentary has focused most closely on how writing field notes may be coordinated with collecting video and audio recordings. Being strategic means keeping an eye on the kinds of questions that will be asked later, so devoting a video camera to a computer monitor while the researcher is teaching produces one kind of record (e.g., Roth’s article), while having someone else use a video camera to follow the observations (and interventions) of a researcher produces another (e.g., Cobb et al.’s article). Neither strategy is necessarily better than the other, but each is selective with respect to what can be a topic for later analysis, and each creates different downstream demands on the time and attention of the analyst. In the limit, there is probably no strategy for recording that can support any (or all) subsequent kinds of analysis (i.e., allowing one to arbitrarily zoom, using Roth’s metaphor). However, there are clearly strategies that allow one to move across levels of analysis.

Third, these articles provide great material for thinking about how to manage the work of analysis as it is (or may be) divided across researchers, developers, and teachers (Kulikowich & Young are most explicit about this). There are complex issues concerning the pace and topic for analysis, and the daily demands of teaching and managing a complex classroom study place limits on what can be found or followed in a growing set of field data. Making field observations according to a limited set of codes (Barab et al. describe this), choosing tracers as the predominant topic for analysis as data are being collected (all the articles but Kulikowich & Young describe this in some form), and arranging a technical or conversational environment so that participants inscribe their understandings more directly (Cobb et al. and Kulikowich & Young describe versions of this) are all ways of compressing the pace of analysis so that it can match, at least in principle, the pace of teaching and design.

Finally, there are interesting challenges concerning who should be considered the clients of a design experiment (e.g., the researcher community, practicing teachers, or students who participate in the studies themselves). I am reminded of this in Brown’s (1992) argument that “a major question facing contemporary designers [i.e., those well past what she called the ‘Dewey effect’] is how to avoid repeating the Cuban–Dewey circle: exhilaration, followed by scientific credibility, followed by disappointment and blame” (p. 172). For Brown, the situation was analogous to a traditional, linear model of software design involving development, testing, and adoption (i.e., she calls these alpha, beta, and gamma phases). If innovations were not widely adopted after supporting structure was removed (i.e., they did not scale up to widespread use in the gamma phase), then they had poor “shelf life” and it was back to the development phase. However, after reading and thinking about the articles in this special issue, I wonder if the issue is not one of scale up at all, but instead one of scale out and sustainability. From this perspective on design, the process of adoption itself involves further design, fitting, and adaptation for circumstances of local use. Cobb et al., for example, describe writing articles and developing case studies that can support “the development of local instructional theories” (p. 156) among teachers in professional development activities. I think this is a very exciting development, one that enlivens the idea of a design experiment with the very same theoretical ideas being pursued in this special issue (i.e., that people learn by the experience of participating in specific practices, including forms of externalization that change those very practices). Furthermore, it connects up with ongoing work on methods for the design of sociotechnical systems in some of the same traditions that provided a point of departure for the current critical turn in the learning sciences (e.g., see Bowker, Star, & Turner, 1997; Engestrom & Middleton, 1998; Suchman, 1995).

REFERENCES

Barab, S. A., Hay, K. E., & Yamagata-Lynch, L. C. (2001). Constructing networks of activity: An in-situ research methodology. The Journal of the Learning Sciences, 10, 63–112.

Bowker, G. C., Star, S. L., & Turner, W. (1997). Social science, technical systems and cooperative work: Beyond the great divide. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Charmaz, K. (1983). The grounded theory method: An explication and interpretation. In R. M. Emerson (Ed.), Contemporary field research: A collection of readings (pp. 109–126). Boston: Little, Brown.

Cobb, P., Stephan, M., McClain, K., & Gravemeijer, K. (2001). Participating in classroom mathematical practices. The Journal of the Learning Sciences, 10, 113–163.

Cole, M. (1996). Cultural psychology: A once and future discipline. Cambridge, MA: The Belknap Press of Harvard University Press.

Eckert, P. (1989). Jocks and burnouts. New York: Teachers College Press.

Engestrom, Y. (1999). Activity theory and individual and social transformation. In Y. Engestrom, R. Miettinen, & U. Punamaki (Eds.), Perspectives on activity theory (pp. 19–38). Cambridge, England: Cambridge University Press.

Engestrom, Y., & Middleton, D. (1998). Cognition and communication at work. Cambridge, England: Cambridge University Press.

Erickson, F., & Schultz, J. (1977). When is a context? Some issues and methods in the analysis of social competence. Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 1, 5–10.

Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic.

Goodwin, C. (1993). Recording human interaction in natural settings. Pragmatics, 3, 181–209.

Goodwin, C. (1994). Professional vision. American Anthropologist, 96, 606–633.

Greeno, J. G. (1995). Understanding concepts in activity. In C. A. Weaver & S. Mannes (Eds.), Discourse comprehension: Essays in honor of Walter Kintsch (pp. 65–95). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Hall, R. (1996). Representation as shared activity: Situated cognition and Dewey’s cartography of experience. The Journal of the Learning Sciences, 5, 209–238.

Hall, R. (2000). Video recording as theory. In D. Lesh & A. Kelley (Eds.), Handbook of research design in mathematics and science education (pp. 647–664). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Kulikowich, J. M., & Young, M. F. (2001). Locating an ecological psychology methodology for situated action. The Journal of the Learning Sciences, 10, 165–202.

Lampert, M., Rittenhouse, P., & Crumbaugh, C. (1996). Agreeing to disagree: Developing sociable mathematical discourse. In D. R. Olson & N. Torrance (Eds.), The handbook of education and human development: New models of learning, teaching and schooling (pp. 731–764). London: Blackwell.

Lave, J. (1988). Cognition in practice. Cambridge, England: Cambridge University Press.

Lave, J. (1996). The savagery of the domestic mind. In L. Nader (Ed.), Naked science (pp. 87–100). New York: Routledge.

Lave, J., Murtaugh, M., & de la Rocha, O. (1984). The dialectic of arithmetic in grocery shopping. In B. Rogoff & J. Lave (Eds.), Everyday cognition: Its development in social context (pp. 67–94). Cambridge, MA: Harvard University Press.

McDermott, R. P., Gospodinoff, K., & Aron, J. (1978). Criteria for an ethnographically adequate description of activities and their contexts. Semiotica, 24, 245–275.

Ochs, E. (1979). Transcription as theory. In E. Ochs & B. B. Schieffelin (Eds.), Developmental pragmatics (pp. 43–72). New York: Academic.

Roschelle, J. (2000). Choosing and using video equipment for data collection. In D. Lesh & A. Kelley (Eds.), Handbook of research design in mathematics and science education (pp. 709–731). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Roth, W.-M. (2001). Situating cognition. The Journal of the Learning Sciences, 10, 27–61.

Sanjek, R. (Ed.). (1990). Fieldnotes: The makings of anthropology. Ithaca, NY: Cornell University Press.

Schatzman, L., & Strauss, A. L. (1973). Field research. Englewood Cliffs, NJ: Prentice Hall.

Strauss, A. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge University Press.

Suchman, L. (1987). Plans and situated actions: The problem of human–machine communication. Cambridge, England: Cambridge University Press.

Suchman, L. (1988). Representing practice in cognitive science. Human Studies, 11, 305–325.

Suchman, L. (1995). Representations of work. Communications of the ACM, 38, 56–64.
