
Cognitive Research: Principles and Implications

Cooperrider et al. Cognitive Research: Principles and Implications (2016) 1:28 DOI 10.1186/s41235-016-0024-5

ORIGINAL ARTICLE Open Access

Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures

Kensy Cooperrider1*, Dedre Gentner2 and Susan Goldin-Meadow1

Abstract

How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants' gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.

Keywords: Analogy, Relational reasoning, Gesture, Complex systems, Spatial cognition

Significance

Many phenomena in the natural and social world involve complex causal systems. Biological organisms, financial markets, ecosystems, and mechanical devices, for instance, all exhibit different types of feedback relationships, in which multiple causal factors in a system change and bring about changes in each other. Feedback systems are central to the STEM fields, as well as to urgent societal issues such as climate change. Here we investigate the possibility that people reason about such causal patterns in part by using spatial analogies—that is, spatial models of the relations involved. To investigate this possibility, we had people explain the differences between positive and negative feedback systems and examined their gestures. Gesture is ubiquitous in communication and classroom instruction, and it sometimes expresses spatial ideas not found in speech. We found that people gestured copiously when explaining the feedback systems, even though such systems are not inherently spatial. For example, people located different elements of the system in different parts of space, showed causal relationships as movements between locations, and modeled the overall behavior of the systems with complex movements. Interestingly, most of these spatial gestures were produced without spatial words. These findings suggest that spatial analogies may be a pervasive part of people's on-the-fly reasoning about complex relational systems, and, further, that gesture may be an important medium through which such analogies are expressed. The pervasiveness of spatial analogy in reasoning about complex phenomena has implications for how such phenomena are taught in a range of fields.

* Correspondence: [email protected]
1 Department of Psychology, University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, USA
Full list of author information is available at the end of the article

© The Author(s). 2016 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Background

Permafrost once covered much of the Arctic, but it is now thawing. This has a number of consequences, but one is especially insidious: as the frozen ground melts it releases methane, which then enters the atmosphere and accelerates the very warming that caused the ground to thaw in the first place (Anthony, 2009). This self-reinforcing pattern is an example of a complex causal system: multiple causal factors—in this case, atmospheric temperature, melting permafrost, and methane release—are changing over time and are causing changes in each other. Such systems are legion in the environmental sciences, but are hardly limited to them. Further examples can be found in financial markets, in the human body, in household appliances, in plant physiology, and beyond. Indeed, complex relational systems underlie phenomena throughout the natural and social world, in all domains and on all scales. Yet, despite the ubiquity of such systems—and their centrality to STEM (Science, Technology, Engineering, and Mathematics) fields and to urgent societal issues—much remains to be learned about the cognitive processes involved in understanding them. To fully grasp the alarming implications of permafrost melt, for example, we not only need to be able to understand the system as it is now, but also to project its behavior into the future. How do we do this?

Current evidence suggests that complex relational reasoning presents challenges even for adults. For example, undergraduates have considerable difficulty detecting higher-order causal patterns such as positive feedback and negative feedback, two important examples of complex relational systems (Rottman, Gentner, & Goldwater, 2012; see also Day, Motz, & Goldstone, 2015) and also the focus of the present paper. Expertise in identifying these and other causal patterns can develop, either through exposure to the same patterns across a range of domains (Rottman et al., 2012) or through a scaffolded process of comparing examples (Goldwater & Gentner, 2015). Considerable work has also examined the kinds of information that people use to make inferences about other types of causal structure in the world (Lagnado, Waldmann, Hagmayer, & Sloman, 2007; Sloman, 2005), and has detailed the challenges posed by complex systems in particular (Chi, Roscoe, Slotta, Roy, & Chase, 2012; Jacobson, Kapur, So, & Lee, 2011). An important open question, however, concerns the nature of the representations that people form as they reason about causal structure. What is the format of these representations and, moreover, how can we gain insight into them?

Possible clues come from research on how people understand complex systems more generally. Much previous research has investigated how people understand mechanical processes with multiple causal components, such as sets of gears and pulleys. A major finding of this line of work is that people often develop mental models of the system that are visuospatial in nature (Gentner & Stevens, 1983; Hegarty, 2004). One line of evidence for the visuospatial character of these models is that, when reasoning about or explaining such systems, people often produce diagrams (Forbus, Usher, Lovett, & Wetzel, 2011; Novick, 2001; Tversky, 2011) or gestures (e.g., Kang, Tversky, & Black, 2014; Nathan & Martinez, 2015; Schwartz & Black, 1996). Based on such observations, it seems plausible that people develop spatial mental models of other types of complex systems, such as the causal patterns under consideration here. However, there is a crucial difference between mechanical systems and positive and negative feedback systems. Positive feedback and negative feedback are consummate abstractions. They are relational patterns that may sometimes be instantiated in mechanical or concretely spatial systems—e.g., a flush valve toilet, which is an example of a negative feedback system—but their relational essence transcends any one concrete instantiation. It might thus seem unhelpful, or even counterproductive, to recruit visuospatial reasoning processes when thinking about such pure abstractions.

At the same time, a separate line of research has investigated how spatial concepts provide a foundation for more abstract ideas. This tendency can be seen in the spatial words and grammatical structures people draw on to talk about abstract domains, including time and others (Brugman, 1988; Clark, 1973; Heine, 1997; Jamrozik & Gentner, 2015; Traugott, 1978). In fact, evidence has now accumulated that this is not just a linguistic phenomenon—people draw on spatial representations when reasoning online about abstract concepts, whether or not language is involved (Boroditsky, 2001; Casasanto & Bottini, 2014). One vivid and naturalistic source of evidence for the use of space in abstract reasoning comes from the gestures people produce (Cienki, 1998; McNeill, 1992; Núñez, 2008). To date, perhaps the best-studied case of abstract spatial analogy in gesture—in which space is used to represent relations that are not inherently spatial—is the representation of a temporal sequence as a line. For instance, an English speaker might reference a future event in speech while locating it in gesture to their right, tacitly implying a relation between it and other unnamed events (for a review, see Cooperrider, Núñez, & Sweetser, 2014). Such temporal gestures use a simple spatial structure—a line—to capture a simple relational structure—order along a dimension—but they raise the possibility that people may create more complex spatial structures in gesture to represent more complex relational structures.

The above observations lead us to a hypothesis about how people reason about complex relational patterns like positive and negative feedback: they may do so, at least in part, by creating abstract spatial models of the relational structures involved—that is, spatial analogies. Furthermore, if this hypothesis is correct, then gesture should provide a powerful window onto this phenomenon. Gesture is well suited to the expression of spatial ideas (Alibali, 2005), and it has been shown to reveal implicit aspects of understanding that people have difficulty verbalizing (Broaders, Cook, Mitchell, & Goldin-Meadow, 2007; Goldin-Meadow, 2003). Moreover, it has been noted that the spatial information revealed in people's abstract gestures sometimes goes beyond what is found in the language co-produced with those gestures (Casasanto & Jasmin, 2012; Cienki, 1998).

In the present studies, we explore this spatial analogy hypothesis by having people read a lesson about two types of complex relational systems—positive and negative feedback—and then explain the key differences between them. In our materials, we attempted to encourage highly abstract representations of such systems, devoid of the kinds of imagery that might prompt concrete spatial gestures. The most interesting possibility is that gesture might nonetheless reveal spatial analogies—that is, spatial models of relational structure—even though the relations involved in positive and negative feedback are not inherently spatial.

What would constitute evidence for such spatial analogies? A first kind of evidence would come from individual gestures that represent abstract relations spatially. These might include gestures representing simple relational structures, such as the use of a downward gesture to represent the notion of a decrease. Though interesting, such gestures do not necessarily go beyond prior observations about gestures representing simple relational structures, such as those described previously in the domain of time. More interestingly, people could represent complex relational structures within the confines of a single gesture. To our knowledge, analogical gestures of this more complex type have not been investigated in detail. A second kind of evidence may come from how gestures cohere over the course of an explanation. People may express complex relational structure—not in a single gesture—but by building up a spatial model of that relational structure over time. Gestural models have been observed in extended descriptions of concrete spatial layouts (Emmorey, Tversky, & Taylor, 2000), but gestural models of purely abstract relationships have not been described previously. Note, however, that the abstract nature of causal systems leaves open the possibility that people might not spatialize much of anything in their explanations. After all, gesture is thought to stem from vivid visuospatial or motoric imagery (e.g., Hostetter & Alibali, 2008), which was left out of our lesson by design.

If analogical gestures do occur, a secondary question concerns how those gestures relate to what people are saying when they produce them. If people's abstract spatial gestures are most often produced along with corresponding spatial language—such as an upward gesture to represent an increase accompanied by "rise", or a circular gesture when describing a feedback "loop"—gesture cannot be considered a uniquely revealing source of evidence for the presence of spatial analogies. Such gestures may be triggered by local lexical effects rather than by a stable, systematic spatial model of the system (see Gentner, 2001). However, to the extent that people's gestures represent spatial ideas not found in speech, this would suggest that gesture plays a vital role in understanding the representations people draw on when reasoning about complex relational systems.

Study 1: Basic causal systems lesson

Methods

The study was approved by the University of Chicago Institutional Review Board, under protocol #IRB13-0248.

Participants

Twenty-three adults from the University of Chicago community participated for course credit or cash. Four participants were excluded from the analyses: three because their gestures were largely occluded on the video; one for producing no gestures at all. In all, data from 19 participants (10 female; mean age = 20.8 years) are reported in the analyses.

Materials and procedure

Familiarization phase After giving consent to participate and to be videotaped, participants carried out a series of activities that served both to familiarize them with causal systems and to assess their understanding of the systems. First, participants completed an adaptation of the Ambiguous Sorting Task (AST) used previously by Rottman et al. (2012) and Goldwater and Gentner (2015). In this type of sorting task, participants are given a set of vignettes printed on index cards and are asked to sort the cards into categories. Each vignette is an example of one of several types of causal systems (e.g., positive feedback) instantiated in one of several domains (e.g., economics). Participants are also given seed cards—vignettes just like those that need to be sorted but which serve as anchors for the categories to be used. A key feature of the task is that the seed cards leave the relevant categories open to interpretation: a participant may correctly categorize the vignettes according to the type of causal system described (e.g., positive feedback) or, more concretely, by the domain in which that system is couched (e.g., economics). In the adaptation of the AST used here, participants were presented with three seed cards: a first unlabeled card describing the phenomenon of stock market bubbles (a positive feedback system, economics domain), a second unlabeled card describing predator-prey relationships (a negative feedback system, natural world domain), and a third card that was blank except for the word 'other.' Participants were then presented with 11 new vignettes and were given five minutes to sort them. The vignettes described a range of phenomena, including internet routers, perspiration, interest rates, and so on, each instantiating a causal system.

After the sorting was complete, the experimenter removed the materials and prompted the participant to explain the main difference between the different categories involved in the sorting task. Most participants at least attempted to characterize differences in causal structure between the categories. This phase is the pre-lesson explanation. Together, the sorting task and the pre-lesson explanation served to familiarize participants with causal systems and perhaps prepared them to get more out of the lesson they would go on to study and explain (e.g., Little & Bjork, 2016).

Causal systems lesson Next, participants were given a one-page written lesson ('Causal Systems Lesson') explaining the differences between positive and negative feedback systems (though without using those labels). The lesson was grounded in the seed cards used in the sorting task. It explained how the stock market and predator-prey vignettes exemplify different types of causal systems (see Appendix 1). The lesson also moved beyond the particular examples, characterizing in more abstract terms how each type of system involves different relationships between causal factors. Importantly, the lesson used no concrete spatial imagery. This decision was made to encourage participants to view the systems as abstractly as possible. It is well known that people gesture to represent concrete imagery (e.g., McNeill, 1992), but our interest is in the less studied question of how people gesture when viewing a phenomenon abstractly in terms of its causal structure. Additionally, the lesson used very little abstract spatial language. Participants were instructed to study the lesson for three minutes and were told that they would later be asked to explain it to another participant.

When the three minutes were up, the experimenter removed the lesson and brought in the 'student' (who was actually a confederate). The experimenter then prompted the participant as follows: "Please explain the lesson you just read. Go into as much detail as possible, but focus on the differences between the two types of causal systems". The instructions made no mention of gesture. This phase is the post-lesson explanation.

Analysis

Videos of participants' pre- and post-lesson explanations were transcribed and analyzed using ELAN video annotation software (https://tla.mpi.nl/tools/tla-tools/elan/). Participants gestured during both explanations. However, during the pre-lesson explanations, participants were highly variable in how much they talked (and thus how much they gestured), in what they talked about, and at what level of detail. The analyses reported here thus focus on the post-lesson explanations, which were more consistent across participants.

A first step in the gesture analysis was to identify all gestures in the post-lesson explanations that were "representational" (e.g., Chu, Meyer, Foulkes, & Kita, 2013). Representational gestures depict some property of a referent (commonly called "iconic" or "metaphoric" gestures), or point to a referent's location (commonly called "deictic" gestures). In the present data, the representational gestures were abstract in nature—that is, they used location, movement, and spatial arrangement to depict ideas that themselves had no concrete location, did not actually move, and had no visible spatial properties. Once representational gestures were identified, they were then categorized into gesture types according to the type of system element they represented. We then classified the properties of these gestures, first considered individually and then considered in relation to other gestures in the explanation. Reliability was assessed by having a second coder classify the representational gestures in five randomly selected explanations (26% of the data), and it is reported with each analysis. The judgments of the primary coder were retained for all analyses.
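The analyses below report percent agreement alongside Cohen's kappa. For readers unfamiliar with this statistic, here is a minimal sketch of how chance-corrected agreement between two coders' category labels can be computed; the function and the example labels are our own illustration, not the authors' code:

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two coders over the same items."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        # Observed agreement: proportion of items assigned the same category.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected chance agreement, from each coder's marginal category frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical example: two coders classifying six gestures into types.
    coder1 = ["factor_ref", "factor_change", "causal_rel", "factor_ref", "whole_system", "other"]
    coder2 = ["factor_ref", "factor_change", "causal_rel", "factor_change", "whole_system", "other"]
    print(round(cohens_kappa(coder1, coder2), 2))  # 0.79 (five of six labels agree)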

Results

Sorting performance

Participants sorted the vignettes according to the appropriate causal structure the majority of the time (M = 0.62, SD = 0.22). The variability in sorting performance was limited, however, with three quarters of participants scoring above 50% (interquartile range = 0.50 to 0.76). This limited variability made it difficult to explore possible relations between participants' sorting performance and their gestures, and such relations are not discussed further.

Spatial ideas in speech

Before analyzing the spatial ideas evident in people's gestures, we first examined the spatial ideas evident in their speech. Based on analogy theory (Gentner, 1983; Gentner & Markman, 1997), as well as on pilot studies involving similar materials, we distinguished four elements of feedback systems that people mention in their explanations: 1) the factors involved in the system; 2) the changes those factors undergo; 3) the causal relations between the factors; and 4) the behavior of the whole system. We analyzed how often people's references to these different aspects of the systems involved overtly spatial words. A word was considered spatial if the first dictionary definition for the word uniquely described a spatial idea, such as location, orientation, size, or motion (based on American Heritage Dictionary (5th edition); see Appendix 3 for further details). On this criterion, words such as "increase," which has a first definition that is not uniquely spatial, were not considered spatial; however, words such as "growth," which has a first definition referring uniquely to a change in size, were. In their post-lesson explanations, participants expressed these four system elements using spatial words an average of 6.4 (SD = 4.9) times per explanation.


Spatial ideas in gesture

Rates and types On average, participants' post-lesson explanations lasted 88.3 seconds (SD = 33.1) and involved 35.0 gestures (SD = 13.0), for a mean rate of 24.1 (SD = 4.4) representational gestures per minute speaking. The abstract nature of the Causal Systems Lesson thus did not stand in the way of eliciting representational gestures. Indeed, these rates are comparable to—or higher than—gesture rates reported in studies using concrete visual stimuli (e.g., Alibali, Heath, & Myers, 2001). The participants' gestures spatialized the four elements of feedback systems in the following ways. First, people located the factors (e.g., the predator and prey populations in the negative feedback example) by placing their gestures in space or by pointing to locations. These we call factor reference gestures (see Fig. 1). Second, people represented changes to the factors (e.g., an increase in the predator population) as movements. These we call factor change gestures. Third, people represented causal relations in the system (e.g., how the change in the predator population causes a change in the prey population) as movements, sometimes between previously established locations. These we call causal relation gestures. Fourth, people used movements to characterize the behavior of the system as a whole (e.g., the equilibrium that is reached in the predator-prey system). These we call whole system gestures. People also gestured in a variety of other ways, such as to spatially contrast the two systems they read about (with the first system in one location and the second in another), or to illustrate an aspect of the system that was not one of the intended causal factors (such as the increase in investor confidence when stock values rise).
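A note on the arithmetic of the rate just reported: 35.0 gestures in 88.3 seconds works out to roughly 23.8 gestures per minute, slightly below the reported mean of 24.1. This is what one would expect if rates were computed per participant and then averaged (a mean of ratios rather than a ratio of means); that reading is our assumption, not something the authors state. A minimal sketch with invented numbers:

    # Hypothetical per-participant data: (gesture count, explanation duration in seconds).
    participants = [(30, 70.0), (45, 110.0), (28, 85.0)]

    # Mean of per-participant rates (gestures per minute) ...
    rates = [60.0 * count / secs for count, secs in participants]
    mean_rate = sum(rates) / len(rates)

    # ... generally differs from the rate implied by the mean count and mean duration.
    mean_count = sum(c for c, _ in participants) / len(participants)
    mean_secs = sum(s for _, s in participants) / len(participants)
    pooled_rate = 60.0 * mean_count / mean_secs

    print(round(mean_rate, 2), round(pooled_rate, 2))  # the two summaries need not match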

Fig. 1 Examples of the different gesture types, taken from two participants' explanations. Factor reference gestures (a, e) represent the factors as locations in space (yellow circles). Factor change gestures (b, f) represent increases and decreases as movements (straight yellow arrows). Causal relation gestures (c, g) represent causation as movement (curved yellow arrows). Whole system gestures (d, h) represent the behavior of the system as a whole and often involve multiple movement phases (multiple yellow arrows)

Some gestures were unclear and could not be definitively classified into one of the four categories or the 'other' category. Of those gestures that both coders judged to be codable (80% of gestures analyzed), 79% (N = 177, Cohen's K = 0.67) were assigned to the same category.

Overall, the majority (476/666, or 71%) of participants' representational gestures fell into one of the four spatial types, with an average of 25.1 (SD = 9.0) such gestures per explanation.1 In other words, participants spatialized the system elements in gesture (25.1 per explanation) almost four times as often as they spatialized them in speech (6.4 per explanation). Further details about the counts and rates observed for each gesture type are given in Table 1. Subsequent analyses focus on those gestures that fit into the categorization scheme.

Properties of gestures considered individually We next analyzed the properties that these four gesture types exhibited, starting with the spatial axis employed in each gesture: left-right, toward-away, up-down, a combination of axes (e.g., left-right and up-down), or no single axis. Factor reference gestures were analyzed for the axis along which the factors were located; factor change gestures, causal relation gestures, and whole system gestures were analyzed for the axis along which movement was represented. Coders agreed 89% (N = 114, Cohen's K = 0.84) of the time in judging the spatial axis involved.

The vast majority of factor reference gestures located the factors on the left-right axis (202/248, or 81%), most often with the first-mentioned factor on the left and the second-mentioned factor on the right.

Table 1 Gesture counts and rates by type

                                              Factor     Factor   Causal    Whole    Other or
                                              reference  change   relation  system   unclear   Totals
Study 1
  Number of gestures                          248        176      32        20       190       666
  Mean per explanation                        13.1       9.3      1.7       1.1      10.0      35.0
  Proportion of participants producing type   1.00       0.95     0.68      0.47     0.79      –
Study 2
  Number of gestures                          301        171      24        72       234       802
  Mean per explanation                        12.5       7.1      1.0       3.0      9.8       33.4
  Proportion of participants producing type   1.00       0.92     0.46      0.83     0.96      –


Factor change gestures were more spatially variable, with 21% (37/176) depicting increases and decreases as movements along the left-right axis, 29% (51/176) along the toward-away axis, 26% (45/176) along the up-down axis, and the remaining 24% (43/176) involving either some combination of these axes (e.g., simultaneously upward and rightward) or a more complex movement. Causal relation gestures most commonly (24/32, or 75%) depicted causation as movement along the left-right axis. The spatial properties of whole system gestures were much more variable (see below), with the majority not representing movement along a single axis (11/20, or 55%).

Factor reference gestures locate abstract entities in distinct spatial locations. They either do this explicitly, as when locating both factors simultaneously (Fig. 1a), or implicitly, as when locating a single factor on one side of space and implying a contrast with an unrepresented factor (Fig. 1e). The other three types of gestures represent abstract relations spatially and thus provide a first type of evidence that spatial analogies are at play. Factor change gestures represent changes in quantity (a relation between an entity at one point in time and the same entity at another point in time) as movements. Causal relation gestures represent a higher-order relation (that a change in one factor results in a change in the other factor) as an action through space. Finally, whole system gestures represent the system's behavior through time as movement.

Importantly, these gestures differ in the complexity of the relations represented. In particular, whole system gestures summarize an entire system over time—not just a single relation such as an increase—and thus would be expected to be more spatially complex. To examine this possibility, we analyzed the type of stroke involved in the different gesture types. The stroke is the part of the gesture that is meant to represent what is being described (McNeill, 1992). The canonical gesture stroke is simple in that it involves a single movement phase proceeding in a single direction, but gesture strokes can also be more complex in several ways. In the case of concrete iconic gestures, complex gesture strokes might occur, for instance, when someone traces the outline of a square, shows a repetitive action such as hammering, or depicts alternating action such as two weights oscillating on a scale (for discussion of different stroke types, see McNeill, 1992, p. 308; Kita, Van Gijn, & Van der Hulst, 1998, 2014). We considered gesture strokes complex if they had multiple movement phases (see Fig. 1d, h). Some strokes were unclear with respect to complexity; for those gestures that both coders considered codable (97% of gestures analyzed), agreement as to whether the stroke was simple or complex was 96% (N = 137, PABAK = 0.91).2
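PABAK (prevalence-adjusted and bias-adjusted kappa) is used here for binary judgments; with two categories it reduces to a simple rescaling of observed agreement, PABAK = 2 * p_o - 1. A minimal sketch, again our own illustration rather than the authors' code:

    def pabak(labels_a, labels_b):
        """Prevalence- and bias-adjusted kappa for a binary judgment.

        With two categories (e.g., simple vs. complex stroke), PABAK reduces
        to 2 * p_o - 1, where p_o is the observed proportion of agreement.
        """
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
        return 2 * p_o - 1

    # Hypothetical example: two coders judging ten strokes, agreeing on nine.
    a = ["simple"] * 9 + ["complex"]
    b = ["simple"] * 8 + ["complex", "complex"]
    print(pabak(a, b))  # 0.8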

As predicted, whole system gestures involved complex strokes in 45% of cases (9/20), compared to 13% (58/456) for the other gesture types (Fig. 2). While factor change gestures, for instance, usually involve single, smooth movements along particular spatial axes, whole system gestures involve incremental upward movements, widening spirals, alternating upward and downward movements, and so on (Fig. 3). We also examined the number of participants who favored simple or complex strokes for the different gesture types. A significant number of participants favored simple strokes for factor reference (19/19, two-tailed exact binomial test, p < 0.001), factor change (17/18, two-tailed exact binomial test, p < 0.001), and causal relation gestures (10/13, two-tailed exact binomial test, p = 0.03). In contrast, a non-significant majority of participants favored complex strokes for their whole system gestures (5/9, two-tailed exact binomial test, p = 0.25).
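The two-tailed exact binomial tests reported here evaluate the number of participants favoring one stroke type against a chance probability of 0.5. A minimal sketch of such a test using scipy for illustration (the authors do not state which software they used):

    from scipy.stats import binomtest

    # E.g., 17 of 18 participants favored simple strokes for factor change gestures.
    result = binomtest(17, n=18, p=0.5, alternative="two-sided")
    print(result.pvalue)  # about 0.00014, i.e., p < 0.001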

Fig. 2 The proportion of gestures of different types that involved complex strokes

Fig. 3 Schematic depictions of whole system gestures used to characterize the dynamics of positive and negative feedback systems

Properties of gestures considered across the explanation We next considered properties of people's gestures across their entire explanation. Our primary interest was whether participants' gestures were integrated into a larger analogical model—in other words, whether people produced an interconnected system of spatial gestures to represent an interconnected system of entities and relations. This kind of parallel interconnected structure is characteristic of analogical mapping (Gentner, 1983). If a participant maintains consistent, contrasting spatial locations for the factors, these gestures would be considered model-integrated. By contrast, if a participant locates the factors erratically from one mention to the next, these gestures would not be considered model-integrated. Further, if a participant produces a gesture representing an increase to a factor that incorporates the location of that factor as used over the course of the explanation, the gesture would be considered model-integrated. If, on the other hand, the increase was depicted in neutral space, the gesture would not be considered model-integrated. Similarly, if a participant produces a gesture representing a causal relation between two factors as a movement between the locations of the two factors as used over the course of the explanation, the gesture would be considered model-integrated (for examples, see Fig. 1c and g). Alternatively, if the gesture represented causation as a movement in neutral space, it would not be considered model-integrated. Note that we did not expect whole system gestures, which depict a high-level summary of the behavior of the system, to be closely integrated with the low-level causal details depicted in the other gestures.3 This gesture type was thus not included in the analysis. Importantly, model-integration goes beyond mere spatial consistency. A speaker could produce spatially consistent gestures when representing, for instance, the notion of increase, without ever integrating those increase gestures into a larger model.

Examples of two different gesture sequences are illustrated in Fig. 4, one with factor change gestures that are model-integrated and one with factor change gestures that are not. For some gestures it was hard to make a judgment about whether or not they were model-integrated, and they were coded as unclear. For those factor reference, factor change, and causal relation gestures that both coders judged to be codable (85% of gestures analyzed), agreement as to whether the gestures were model-integrated or not was 91% (N = 95, PABAK = 0.81).

Overall, 90% (224/248) of participants' factor reference gestures were model-integrated (mean percentage = 90%), as were 49% (86/176) of their factor change gestures (mean percentage = 53%), and 72% (27/32) of their causal relation gestures (mean percentage = 64%). All participants (19/19) produced at least one model-integrated factor reference gesture, 84% (16/19) of participants produced at least one model-integrated factor change gesture, and 47% (9/19) produced at least one model-integrated causal relation gesture. Three participants produced only model-integrated gestures in their explanations: in one case, 14 gestures (three factor reference, nine factor change, two causal relation); in another, 15 gestures (five factor reference, ten factor change); and in another, 12 (all factor reference gestures). Thus, although sometimes gestures expressed unconnected flashes of spatial imagery (see Fig. 4, sequence 2), they often cohered over the course of an explanation into a systematic analogical model.

Fig. 4 Examples of gestures that are model-integrated (sequence 1) and gestures that are not model-integrated (sequence 2) produced during explanations of negative feedback. Factor reference and factor change gestures are indicated in blue for factor A and in green for factor B. Note that the factor change gestures (the gestures with arrows) in sequence 1 (b, c, d) are consistently produced in the locations originally assigned to each factor (the dots in a). In contrast, the factor change gestures in sequence 2 (the arrows in b and d) are not produced in the locations assigned to the relevant factor (the dots in a and c)

Gestures in relation to speech

Finally, we analyzed the relationship between participants' gestures and the language with which they were co-produced. Most often, in 94% of cases (448/476), the gestures represented aspects of the system that were simultaneously mentioned in speech. For example, a participant would produce a factor reference gesture while referring in speech to "the first factor" or a factor change gesture while mentioning an "increase". Interestingly, however, gestures sometimes filled in where speech left off. Of the 20 whole system gestures, three (15%) were produced without actually verbalizing the system behavior. For example, a speaker describing a positive feedback system said, "so it's like this sort of…" trailing off in speech but providing a spatial characterization in gesture. Conveying information in gesture and not in speech occurred numerically less often for the other gesture types: in 5% of cases for factor reference gestures, 5% for factor change gestures, and 9% for causal relation gestures. The higher rate observed for whole system gestures may stem from difficulty in verbalizing the overall system dynamics. However, given the small number of observations, more data are needed to confirm this pattern.

Lastly, we investigated the relationship between the different gesture types and overt spatial language. If the gesture's verbal affiliate—the word or phrase that expresses the same meaning as the gesture—included a spatial word, the gesture was classified as occurring with spatial language; otherwise, it was classified as occurring with non-spatial language. Table 2 provides examples of both spatial and non-spatial language that was co-produced with the different gesture types.

Of the gestures that represented aspects of the system that were simultaneously mentioned in speech, the overwhelming majority (84% (403/448)) were co-produced with non-spatial language. Given the paucity of spatial ideas expressed in speech overall, as described earlier, this result is unsurprising. However, the different gesture types varied in how often they were produced with spatial language. For example, factor reference gestures, though consistently involving the left-right axis, were not once co-produced with a reference to "left" or "right" and were only co-produced with a spatial word in a single instance ("external variable"). The other gesture types did occur with spatial language, although less than half the time: factor change 32%, causal relation 45%, and whole system behavior 35%.

Table 2 Examples of spatial and non-spatial language co-produced with gestures

                    Non-spatial language      Spatial language
Factor reference    "First factor"            "External variable"
                    "Certain variable"        –
                    "Factor A"                –
Factor change       "Increase"                "Rise"
                    "Decrease"                "Go up"
                    "Change"                  "Go down"
Causal relation     "Influences"              "Rebounds"
                    "Causes"                  "Leads to"
                    "Effects"                 "Impacts"
Whole system        "Self-correcting"         "Negative loop"
                    "Regulate each other"     "Seesaw"
                    "Constant increasing"     "Building on each other"

Overall, these results confirm that much of the spatial content in people's explanations comes out not in their words, but in their hands.

Discussion

When explaining positive and negative feedback systems (two examples of complex relational patterns), participants recruited spatial representations in abundance. Elements of these systems were expressed spatially in gesture almost four times as often as they were expressed spatially in speech. These spatial gestures provided several types of evidence that people were drawing extensively on spatial analogy. First, participants produced individual gestures that used space to represent abstract relations. These included both simple relations, as in factor change gestures that represented increases and decreases as movements along axes, and complex relations, as in the whole system gestures used to summarize the behavior of the entire system. Second, considered in relation to each other, participants' gestures were often integrated into larger analogical models, as when depicting a causal relationship as movement between previously established locations. Finally, the spatial analogies as revealed in gesture very often occurred without spatial language and, in some cases, without any language at all. Spatial analogies are thus pervasive in spontaneous gesture during complex relational reasoning, and are not merely a manual echo of spatial speech.

One limitation of Study 1 was that, although the lesson was largely devoid of rich imagistic content, it did include a sprinkling of abstract spatial words. For example, the words "direction" and "reversed" were used to describe changes in the system. Importantly, participants produced the same basic gesture types in their pre-lesson explanations as in their post-lesson explanations—thus, the lesson did not seem to tip participants off to the idea of spatializing aspects of the system. Nonetheless, it remains possible that the presence of spatial language increased participants' propensity to spatialize their mental models of the system. To address this possibility, in Study 2, we removed the remaining spatial language from the Causal Systems Lesson. If the abundant use of analogical gestures in Study 1 was cued by the spatial language in our lesson, these behaviors should be considerably dampened. A second aim of Study 2 is to again examine whole system gestures (which were relatively infrequent in Study 1)—specifically, whether we will again find that they tend to involve complex strokes.

Study 2: Causal systems lesson with spatial language removed

Methods

The study was approved by the University of Chicago Institutional Review Board, under protocol #IRB13-0248.

Participants

Thirty adults from the University of Chicago community participated in exchange for course credit or cash. Six participants were excluded from the analyses for different reasons: three because of irregularities in the experimental procedure; two because many of their gestures were occluded; and one for guessing our interest in gesture during the debriefing. In all, data from 24 participants (16 female, mean age = 21.1 years) are reported in the analysis.

Materials and procedure

The materials and procedure were the same as in Study 1, with the following exceptions to the sorting task and the lesson.

Familiarization phase The structure of the sorting task was modified in two ways from Study 1: first, participants sorted 18 vignettes instead of 11; second, another seed card was added, which was an example of a chain of causally related events. These two changes were aimed at developing a more sensitive measure of causal systems understanding than the measure used in Study 1. Participants were given eight minutes for the expanded sorting task.

Causal systems lesson The written lesson was edited for Study 2 (see Appendix 2). The primary change was that spatial language was removed, at least to the extent possible. Note that some abstract uses of prepositions remain in certain stock uses (e.g., "a change in factor A," "begins to decrease"), as it is not possible to write intelligible English without them. The post-lesson explanation is again the focus of our analyses (see Appendix 4 for basic information about gesture during the pre-lesson explanations).

Analysis

Videos were transcribed and analyzed in the same way as in Study 1, and reliability was assessed by having a second coder classify the gestures in six randomly selected explanations (25% of the data).

Results

Sorting performance

Participants sorted the vignettes according to the appropriate causal structure the majority of the time (M = 0.75, SD = 0.19). As in Study 1, however, the variability in sorting performance was quite limited, with three quarters of participants scoring above 61% (interquartile range = 0.61 to 0.89). Again, because of this limited variability, we do not discuss relations between participants' sorting performance and their gestures.

Spatial ideas in speech

Participants expressed the four system elements spatially in speech an average of 5.5 (SD = 4.2) times per explanation, a rate comparable to Study 1.

Spatial ideas in gesture

Gesture rates and types Participants' explanations lasted 85.5 seconds on average (SD = 33.1) and involved an average of 33.4 gestures (SD = 32.5), for a mean rate of 23.9 (SD = 8.9) representational gestures per minute speaking. These rates are similar to those found in Study 1. Of those gestures that both coders judged to be codable (83% of gestures analyzed), 84% (N = 195, Cohen's K = 0.77) were assigned to the same category.

As in Study 1, the majority (568/802, or 71%) of participants' representational gestures fell into one of the four types. Participants expressed an average of 23.7 (SD = 11.2) system elements spatially in their gestures. In other words, they produced more than four times as many spatial ideas in gesture (23.7 per explanation) as they did in speech (5.5 per explanation). Observed counts and rates of the different gesture types are given in Table 1. Overall, the patterns mirror those observed in Study 1, with the exception that more whole system gestures were produced, and by a greater proportion of the participants. This difference provides an opportunity to revisit provisional observations made about whole system gestures in Study 1.

Properties of gestures considered individually As in Study 1, we first examined how the gestures used space, with coders agreeing about the axis used in 82% (N = 172, Cohen's K = 0.70) of cases. The majority of factor reference gestures located the factors on the left-right axis (227/301, or 75%). Again, factor change gestures were more variable, with 20% (35/171) depicting increases and decreases as movements along the left-right axis, 18% (30/171) along the toward-away axis, 32% (55/171) along the up-down axis, and the remaining 30% (51/171) involving either some combination of these axes or a more complex movement. Causal relation gestures most commonly (16/24, or 67%) depicted causation as movement along the left-right axis. Most whole system gestures did not exhibit movement along a single axis (64/72, or 89%).

Again, we investigated whether spatially complex gestures were used to characterize relationally complex concepts. For those gestures that both coders considered codable (98% of gestures analyzed), agreement as to whether a stroke was simple or complex was 95% (N = 145, PABAK = 0.90). Whole system gestures were complex in 86% of cases (62/72), compared to 15% (74/496) for the other types combined. Thus the pattern seen in Study 1, which was there limited by the small number of observations, proved to be robust in Study 2. A significant number of participants favored simple strokes for factor reference gestures (23/24 participants, two-tailed exact binomial test, p < 0.001) and for factor change gestures (22/22 participants, two-tailed exact binomial test, p < 0.001). Most participants (8/11) also favored simple strokes for causal relation gestures, but this bias did not reach significance (two-tailed exact binomial test, p = 0.16). In contrast, a highly significant number of participants (18/20) favored complex strokes for their whole system gestures (two-tailed exact binomial test, p < 0.001).

Properties of gestures considered across the explanation We next analyzed gestures in relation to other gestures produced in the same explanation. Again, some gestures were considered unclear with respect to model-integration. For those factor reference, factor change, and causal relation gestures that both coders judged to be codable (66% of gestures analyzed), agreement as to whether the gestures were model-integrated or not was 82% (N = 101, PABAK = 0.64). Overall, 85% (256/301) of participants' factor reference gestures (mean percentage = 78%), 44% (75/171) of participants' factor change gestures (mean percentage = 42%), and 63% (15/24) of their causal relation gestures (mean percentage = 64%) were model-integrated. Moreover, 96% (23/24) of participants produced at least one model-integrated factor reference gesture, 75% (18/24) at least one model-integrated factor change gesture, and 38% (9/24) at least one model-integrated causal relation gesture.

Gestures in relation to speech

As in Study 1, the overwhelming majority of gestures represented aspects of the system that were also mentioned in speech (546/568, or 96%), but this differed by gesture type. Gestures without speech were exceedingly rare for factor reference gestures (2%), factor change gestures (1%), and causal relation gestures (0%), but occurred numerically more frequently for whole system gestures (21%). Whole system gestures in the absence of speech were produced by eight participants. Participants turned to gesture when they had difficulty putting complex relational ideas into words.

Finally, paralleling Study 1, we found that the bulk of gestures were produced without overt spatial language (483/546, or 88%). The gesture types were associated with spatial language to differing degrees, as follows: factor reference 0%, factor change 18%, causal relation 21%, and whole system behavior 49%. Examples of spatial and non-spatial references are included in Table 2.

Discussion

In Study 2, we modified the Causal System Lesson to remove spatial language. Despite this change—and mirroring the primary findings of Study 1—participants showed strong evidence of drawing on spatial analogy during their explanations. It appears that participants' use of spatial analogy was spontaneous and not entirely cued by the wording of the lesson. Moreover, Study 2 yielded more cases of whole system gestures, providing stronger evidence that such gestures are more spatially complex than other gesture types.

General discussion

In the two studies we investigated the possibility that people would spontaneously use spatial analogies when reasoning about positive and negative feedback, relational systems that are complex, widespread, and abstract in nature. As a potential window onto such hypothesized analogies, we examined the gestures people produced as they explained the main characteristics of these causal systems and the differences between them. Despite the paucity of concrete visual imagery in the lesson we provided—and, in Study 2, the lack of spatial language—when people explained the systems, they gestured at high rates. These gestures did not represent the actual locations or movements of objects—rather, they used space abstractly to represent the different factors in the system, the changes to those factors, the causal relations between the factors, and the overall dynamics of the system. Abstract system elements were thus spatialized extensively in gesture. Importantly, these elements were not often spatialized in speech, highlighting the value of observing gesture in reasoning tasks.

The gestures we observed provided several strands of evidence for the pervasiveness of spatial analogy in causal systems reasoning. A first kind of evidence comes from the gestures considered individually. Some gestures used space to represent relatively simple abstract relations. For example, factor change gestures used movements to represent changes in magnitude, and causal relation gestures used movements to depict causation as movement between the two factors. Other gestures used space to express relationally complex concepts. Whole system gestures (e.g., illustrating a balance between two changes or a pattern of incremental increase) used spatially complex movements to capture the overall behavior of the system. A second kind of evidence for the presence of spatial analogies comes from gestures considered in relation to other gestures within the same explanation. Gestures in many explanations were not merely isolated fragments of spatial imagery, but were knitted together into larger analogical models. Interestingly, the stability of such models was sometimes directly visible from the fact that people would hold an earlier gesture in place while representing another aspect of the system with the other hand (see Figs. 1 and 4). Such models exhibit a core property of analogy (Gentner, 1983) in that they involve a mapping between an interconnected system of entities and relations in one domain and a parallel system of interconnected spatial locations and movements. Of course, not everyone produced an analogical model that was fully realized (in the sense of involving all gesture types) and fully systematic (in the sense of each gesture being model-integrated). However, most participants produced multiple gesture types and at least some model-integrated gestures, and a few participants were unwaveringly systematic.

Spatial analogies do not depend on spatial language

Research in recent decades has demonstrated a tight relationship between gesture and speech, with the two often "co-expressing" the same idea in two formats at once (McNeill, 1992). Given such observations, we might have guessed that abstract gestures would largely be used along with overt spatial words like "rise" or "loop". Prior work has noted that abstract spatial gestures sometimes occur without any tip-off in the co-occurring speech (e.g., Casasanto & Jasmin, 2012; Cienki, 1998; Cienki & Müller, 2008), but it has been unclear from these observations whether such cases are the exception or the rule. Our results suggest that, at least in explanations of causal systems, spatial language and analogical gesture do not always go hand-in-hand. In fact, gestures were sometimes the only medium in which an idea surfaced at all, as when people trailed off in speech but conveyed a sophisticated spatial analogy in their hands. Moreover, the spatial analogies we observed in gesture did not depend on giving people spatial language in the lesson they were explaining. The lesson we used in Study 1 did include a small amount of abstract spatial language. However, participants produced the full suite of gesture types before ever seeing this lesson, in their pre-lesson explanations. More decisively, in Study 2, we removed the abstract spatial language from the lesson and saw no change in people's propensity for producing spatial analogies in gesture.

Spatial analogies in STEM

Analogies are ubiquitous in STEM fields, and spatial analogies appear to be especially so. Perhaps the best-studied examples to date are maps, scale models, and sketches (e.g., DeLoache, 2004; Forbus et al., 2011; Jee et al., 2014; Kastens & Ishikawa, 2006; Uttal & Wellman, 1989). In such cases, a set of concrete spatial relations in the world is mapped in schematic fashion to some spatial representation of that world. The analogical mapping is thus between one spatial format and another spatial format. By contrast, in the spatial analogies under examination here, the mapping is between a purely abstract set of entities and relations—including factors, changes, causation, and overall system behavior—and a set of spatial locations and spatial relations—locations, movements, movements between locations, and an overall system topology. Prior work has demonstrated that people are able to understand spatial analogies of this abstract type (Gattis & Holyoak, 1996; Gattis, 2001; Novick, 2001; see also the causal diagramming task in Goldwater & Gentner, 2015). The present work takes these prior observations one step further by showing that spatial analogies are also produced in spontaneous behavior.

How might the present findings inform classroom practice in the STEM fields? Analogical gestures are evanescent, and—like gestures generally—are a less overt medium of expression (see Goldin-Meadow, 2003), but they are nonetheless relevant for the classroom in at least two ways. First, what teachers do with their hands when introducing relational abstractions like positive and negative feedback may guide learning. Though not much is known about how analogical gestures are processed, early evidence from the domain of temporal thinking suggests that seeing abstract relational gestures has powerful consequences for downstream reasoning (Jamalian & Tversky, 2012). Second, students’ gestures may help teachers identify outright misconceptions or shaky understandings in their pupils (e.g., Alibali, Flevares, & Goldin-Meadow, 1997). As our data show, analogical gestures very often occur without spatial speech and sometimes without any speech at all. Thus, in a small group interaction, a teacher might notice that a student uses a seesawing gesture to characterize what is, in fact, a positive feedback system and then offer clarification. In sum, analogical gestures may be an important medium through which students learn from their teachers and teachers learn about their students.

An important issue for further research is whether analogical gestures might reveal individual differences in conceptual understanding, as gestures in other domains have been found to do (e.g., Church & Goldin-Meadow, 1986; Perry, Church, & Goldin-Meadow, 1988; Pine, Lufkin, & Messer, 2004). In our studies, we might have expected that students with more expert-like understanding would be more likely to produce whole system gestures or would exhibit more systematic gestural models. We did not find evidence for such relationships, but this may be due to low variability in our sorting task. In other populations, or with more intensive measures of expert-like understanding, such relationships may well emerge. Another way that gesture could reveal expert-like understanding is suggested by work on how students understand emergent processes. Chi et al. (2012) propose that to deeply understand such a system, it is not enough for a student to understand the low-level elements and dynamics of the system (in our studies, the factors, factor changes, and causal relations binding them) and the high-level dynamics of the system (in our studies, the whole system behavior); the student must also understand how the low-level behavior generates the high-level behavior. Future work could examine whether gesture provides evidence that students understand such connections between levels, using feedback systems or other types of complex systems where misconceptions are common (Chi et al., 2012; Jacobson et al., 2011).

A final point is that spatial analogies may play important roles in teaching and learning in STEM whether or not they are expressed in gesture. In the case of causal systems, such analogies may also be expressed and understood through diagrams (Ainsworth, Prain, & Tytler, 2011; Forbus et al., 2011; Goldwater & Gentner, 2015) and they can even be glimpsed—albeit in fragmentary fashion—in everyday language, as when we refer to “feedback loops” or “domino effects”. In fact, analogies may play important roles even when they go completely unexpressed: students working on problems alone, reading texts, or listening to lectures may still be engaging in backstage spatial reasoning. If space is a format in which students intuitively think about complex relational structures, then we may want to design materials, classroom activities, and assessments accordingly.

Why are spatial analogies pervasive?

Why did our participants produce analogical gestures in such abundance? And how might the spatial analogies we see in gesture relate to the ones seen in diagrams, such as those sometimes used in STEM education? One possibility is that people create spatial analogies in gesture because of their familiarity with spatial analogies from diagrams. An alternative possibility is that people create spatial analogies in both gesture and in diagrams for the deeper reason that space is an especially intuitive format for reasoning about relational structure (see Gattis, 2001, for discussion). One reason that space is so intuitive may be that its relational structure is deeply familiar, even at a young age. Of course, these possible relationships between gestures and diagrams are not mutually exclusive: gesture and diagramming could both ultimately spring from an urge to spatialize relational structure while, at the same time, shaping each other to some degree.

The idea that space is an intuitive format for reasoning about relational structure is supported by the fact that spatial analogy crops up routinely in everyday communication. Consider an example from a 2012 public hearing on public transportation funding in San Francisco.4 At issue are proposed budget cuts to a program that subsidizes bus rides for low-income youth. During the hearing, a concerned commenter laments the apparent “tradeoff between maintenance and this program”. As she says “tradeoff” she holds her hands on either side of her body and rocks them up and down. In one complex stroke, the gesture represents the idea that as financial support for one (maintenance) increases, support for the other (the subsidy program) decreases. Other everyday ideas that may be illustrated in analogical gestures include relational abstractions like reciprocity, tit-for-tat, give-and-take, and many others (for additional discussion, see Beaudoin-Ryan & Goldin-Meadow, 2014; Cooperrider & Goldin-Meadow: When gesture becomes analogy, submitted).

The propensity to set up and use spatial analogical models has also been noted in established sign languages (Emmorey, 2014). Such a propensity is evident in the phenomenon known as “spatial modulation” (Senghas & Coppola, 2001; see also Liddell, 1995), which resembles the model-integrated gestures found in our data. In American Sign Language (ASL), for example, a verb may be said to be “spatially modulated” if it incorporates spatial information that was previously established for one or more of its arguments. Thus, to describe a person giving a book to someone else, an ASL signer would assign locations to the giver and to the recipient and then produce the sign GIVE along a path from the former to the latter (see the sketch below). As our data show, hearing gesturers do something similar under the right circumstances (see also So, Coppola, Licciardello, & Goldin-Meadow, 2005). Together, these cases suggest that analogical gestures—and their counterpart, analogical signs—are a broader phenomenon outside the laboratory.
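The GIVE example can be rendered as a toy sketch. The code below is illustrative only; the coordinates, function names, and the idea of returning a movement path are invented for exposition and are not drawn from the sign language literature.

```python
# Hypothetical sketch of spatial modulation: referents are first assigned
# locations in signing (or gesture) space, and a relational sign is then
# produced along the path between those locations. Coordinates are invented.

locations = {}

def assign(referent, xy):
    """Establish a location in space for a referent."""
    locations[referent] = xy

def modulated(verb, source, goal):
    """Return the movement path for a spatially modulated verb."""
    return {"verb": verb, "from": locations[source], "to": locations[goal]}

assign("giver", (-1, 0))       # giver set up on the signer's left
assign("recipient", (1, 0))    # recipient set up on the signer's right
print(modulated("GIVE", "giver", "recipient"))
# {'verb': 'GIVE', 'from': (-1, 0), 'to': (1, 0)}
```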

reasoning, in fact, that people use it automatically. Ana-logy is sometimes thought of as the product of an effort-ful attempt to communicate a complex idea. However,

empirical work has shown that this formulation is oftenmisleading: analogical mapping can occur unintention-ally, without any effort (Day & Gentner, 2007; see alsoSteen, 2008). We suggest that a similar unintentionaldeployment of analogy may be at work in our studies.Participants could have referred to their gestures—or tothe spatial information contained therein—explicitly(e.g., “Imagine the system is like this”), but they did not.Nor did they show signs of engaging in an effortfulprocess of design and development, as might be signaledby restarts or amendments to the spatial structure. Ra-ther, participants appeared to construct their spatial ana-logies fluidly and more or less effortlessly as theyarticulated the relational structure they were describing.Given the seeming automaticity with which analogical

gestures were produced in our explanation task, an intri-guing possibility is that they may have been helpful tothe speaker. Prior work has argued that producing repre-sentational gestures aids conceptualization (e.g., Kita,2000) and learning (e.g., Goldin-Meadow, Cook, &Mitchell, 2009), and has shown that gesturing duringexplanations can reduce cognitive load (e.g., Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001). To ourknowledge, however, cognitive benefits of this sort havenot been shown for analogical gestures of the type de-scribed here, and, if spatial analogy is indeed an effortfulprocess, then producing analogical gestures could in-crease cognitive load rather than lighten it. However,one serendipitous observation lends weight to the possi-bility that gestures helped people reason in our studies:a handful of participants produced gestures while read-ing the lesson to themselves, and these gestures oftenformed extended sequences like those produced duringthe explanations. Other work has described such “co-thought gestures” in the case of concrete spatial reason-ing tasks, such as mental rotation, and has shown thatthat they benefit reasoning (Chu & Kita, 2011; see alsoJamalian, Giardino, & Tversky, 2013). Speculatively, thesilent flurries of spatial gesture we observed may havehelped participants encode the relational structure of thesystems they were reading about.

Conclusion

Complex relational patterns underlie diverse phenomena across the natural and social worlds. While earlier work has demonstrated the difficulties of reasoning about such patterns, much remains to be understood about the representations people bring to bear during such reasoning. Here we provide evidence that spontaneous spatial analogies pervade explanations of causal systems, and that gesture is an important—sometimes the only—medium through which such analogies are expressed. Indeed, evidence for such analogies would have been scarce from a verbal transcript alone. Although we have barely begun to explore this arena, it remains plausible that spatial analogies lie at the core of the human ability to understand complex relational phenomena.

Endnotes

1. Participants also produced these same four gesture types in their pre-lesson explanations (see Appendix 4). However, we do not analyze these explanations in detail because they were shorter and more variable in terms of what participants talked and gestured about.

2. For two of our coding categories—stroke complexity and model integration—we report the Prevalence-Adjusted Bias-Adjusted Kappa, or PABAK. In these cases, one of the two coding categories is much more prevalent than the other, which complicates the interpretation of Cohen’s Kappa (Byrt, Bishop, & Carlin, 1993).
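For readers who wish to compute it, PABAK rescales the observed proportion of agreement p_o: for k coding categories, PABAK = (k·p_o − 1)/(k − 1), which with our two-category codes reduces to 2·p_o − 1 (Byrt, Bishop, & Carlin, 1993). The snippet below is a minimal sketch; the confusion-matrix counts are invented for illustration and are not study data.

```python
# Minimal sketch of the PABAK statistic (Byrt, Bishop, & Carlin, 1993) for
# two coders. The example counts are invented, not taken from the study.

def observed_agreement(table):
    """table[i][j] = items coder 1 put in category i and coder 2 in category j."""
    total = sum(sum(row) for row in table)
    agree = sum(table[i][i] for i in range(len(table)))
    return agree / total

def pabak(table):
    """PABAK = (k * p_o - 1) / (k - 1); with k = 2 categories, 2 * p_o - 1."""
    k = len(table)
    return (k * observed_agreement(table) - 1) / (k - 1)

example = [[90, 4],   # 90 items: both coders chose the prevalent category
           [2, 4]]    # 4 items: both chose the rare category; 6 disagreements
print(round(pabak(example), 2))  # p_o = 0.94, so PABAK = 0.88
```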

3. As noted by other researchers, systems often exhibit behavior at a zoomed-out, macro-level that is qualitatively distinct from the behavior they exhibit at a zoomed-in, micro-level. For discussion, see Chi et al. (2012) and Smith and Gentner (2014).

4. The example comes from an online database, the TV News Archive, and can be viewed at the following link (accessed 16 June 2016): https://archive.org/details/SFGTV_20120422_210000#start/180/end/240.

Appendix 1
Basic Causal Systems Lesson (Study 1)

The brief paragraphs you just read give examples of different types of causal systems. Consider the first one:

When stocks are rising (a bull market), investors commonly believe that further rises are probable, and begin to buy more stocks. This buy-up increases the confidence of other investors and inspires them to buy as well. Many economists believe that this principle is occurring in the current housing boom. People are buying houses expecting the value to keep on rising.

In this example, a change in one factor (stock prices rising) causes a change in another factor (more buying activity) which then leads to a further change (stocks rising even more). More generally, in this type of causal system a change in one element in the system—factor A—causes a change in another element—factor B—which then leads to further change in factor A in the same direction as the original change. Overall, the initial change in factor A is furthered in this type of system.

This type of system contrasts with the one described in the other paragraph:

Predator/prey populations of animals sometimes follow a very predictable pattern. If the prey population increases in number, the predator population has more food and increases as well. At some point the predator population overeats and the prey population begins to decline in number. Consequently, the predator population decreases with the scarcity of available food.

In this example, a change in one factor (growth in the predator population) causes a change in another factor (decline in the prey population) which then leads to a further change (decline in the predator population). More generally, in this type of system a change in one element of the system—factor A—causes a change in another element—factor B—which then leads to change in factor A in the opposite direction to the original change. Overall, the initial change in factor A is reversed in this type of system.
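The contrast between the two system types can also be captured in a few lines of code. The simulation below is a minimal sketch and is not part of the lesson materials: the update rule and the 0.5 coupling strength are invented, and serve only to show how the same causal skeleton yields furthering in one case and reversal in the other.

```python
# Minimal simulation contrasting the two system types described in the
# lesson. The update rule and coupling strength are invented for
# illustration; they are not part of the study materials.

def simulate(sign, steps=5, a=1.0, b=0.0, coupling=0.5):
    """A change in factor A drives a change in factor B, which then feeds
    back on A: sign=+1 pushes A in the same direction (positive feedback),
    sign=-1 pushes A in the opposite direction (negative feedback)."""
    history = [a]
    for _ in range(steps):
        b += coupling * a          # change in A causes change in B
        a += sign * coupling * b   # change in B feeds back on A
        history.append(round(a, 2))
    return history

print(simulate(+1))  # [1.0, 1.25, 1.81, 2.83, 4.55, 7.41]: change is furthered
print(simulate(-1))  # [1.0, 0.75, 0.31, -0.2, -0.67, -0.97]: change is reversed
```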

Appendix 2
Causal Systems Lesson with spatial language removed (Study 2)

The brief paragraphs you just read give examples of different types of causal systems. Consider the first one:

When stock values are increasing (a bull market), investors commonly believe that further increases are probable, and begin to buy more stocks. This increases the price of stocks. As stock prices increase, investors become more confident, and their rate of stock-buying increases. Many economists believe that this pattern is what causes stock market bubbles.

In this example, a change in one factor (stock prices increasing) causes a change in another factor (more buying activity), which then causes an additional change in the first factor (stock prices increasing even more). More generally, in this type of causal system a change in one element in the system—factor A—causes a change in another element—factor B—which then causes additional change in factor A that is the same as the original change. Overall, the initial change in factor A is perpetuated in this type of system.

This type of system contrasts with the one described in the following paragraph:

Predator/prey populations of animals sometimes follow a very predictable pattern. If the prey population increases in number, the predator population has more food and increases as well. At some point the predator population overeats and the prey population begins to decrease in number. Consequently, the predator population decreases with the scarcity of available food.

In this example, a change in one factor (increase in the prey population) causes a change in another factor (increase in the predator population), which then causes an additional change in the first factor (decrease in the prey population). More generally, in this type of system a change in one element of the system—factor A—causes a change in another element—factor B—which then causes a change in factor A that is the opposite of the original change. Overall, the initial change in factor A is corrected in this type of system.

Appendix 3
Spatial language coding procedure

1) To decide whether a gesture was accompanied by spatial language, we examined the lexical or phrasal affiliate of the gesture. For the gestures we are analyzing, the lexical or phrasal affiliate is the part that refers to the factor reference (e.g., “factor A”), the factor change (e.g., “increases”), causation (e.g., “causes”), or the whole system behavior (e.g., “oscillates”). In the case of a phrase, if any of the words is considered spatial (according to the following criteria) the gesture is considered to be “co-produced with spatial language”.

2) Co-produced language was transcribed during the gesture coding. The presence of spatial language was determined from this transcription. However, in cases of ambiguities in usage, the video was examined to consider the fuller context.

3) Words were considered spatial if the first definition listed in the American Heritage Dictionary (5th edition) was overtly and uniquely spatial in that it referred to movement, action, shape, configuration, or size.

Notes:

a) The definition consulted was for the part of speech that was actually used. Some words have spatial definitions when used as one part of speech, but non-spatial definitions (by our criteria) when used as another. For example, “reverse” is overtly spatial when used as a verb but is not when used as a noun.

b) Some words are defined in an abstract way that covers both spatial and non-spatial uses. We only consider words spatial if they do not also have a non-spatial sense given in the same definition—in other words, if they are uniquely spatial. For example, “increase” is defined in some dictionaries as “become or make greater in size, amount, intensity, or degree”. This one definition encompasses both overtly spatial (size) and non-spatial (amount, intensity, and degree) meanings.

c) In some cases, spatial words are used as part of conventional phrases that have holistic meanings. This includes, for instance, “turn” in the phrase “in turn”. These were not considered spatial unless the phrase itself has a spatial meaning. Spatial prepositions (“in,” “on,” “to”, etc.) were not considered spatial when they are part of stock phrases. For example, a “change in” or a “change to” are not considered spatial.

d) In case of ambiguity in the spatiality of the definition, example utterances were consulted.

e) If the American Heritage Dictionary (5th edition) did not list a separate definition for a common phrase, Google’s built-in dictionary was consulted.
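The decision flow above can be summarized in a short sketch. The code below is illustrative only: in the study itself, the spatial status of each word was judged by hand against dictionary definitions and the part of speech actually used, so the tiny lexicon and phrase list here are invented stand-ins.

```python
# Illustrative sketch of the coding decision flow. SPATIAL_WORDS stands in
# for manual dictionary judgments ("first definition uniquely spatial");
# STOCK_PHRASES holds conventional phrases whose words are not counted.

SPATIAL_WORDS = {"rise", "fall", "loop"}
STOCK_PHRASES = {("in", "turn"), ("change", "in"), ("change", "to")}

def phrase_is_spatial(words):
    """A gesture counts as co-produced with spatial language if any word in
    its lexical/phrasal affiliate is uniquely spatial, except where the word
    occurs only inside a conventional stock phrase."""
    stock = set()
    for pair in set(zip(words, words[1:])) & STOCK_PHRASES:
        stock.update(pair)  # words absorbed into a stock phrase don't count
    return any(w in SPATIAL_WORDS and w not in stock for w in words)

print(phrase_is_spatial(["the", "factors", "rise"]))            # True
print(phrase_is_spatial(["a", "change", "in", "factor", "A"]))  # False
```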

Appendix 4

Table 3 Gesture counts and rates by type during pre-lesson explanations

                                     Factor     Factor   Causal     Whole    Other or
                                     reference  change   relation   system   unclear   Totals
Study 1 — Pre-lesson explanations
  Number of gestures                   28         47       1          34       82       192
  Mean per explanation                 1.5        2.5      0.1        1.8      4.3      10.1
  Proportion of participants
    producing type                     0.58       0.74     0.05       0.68     1.00     –
Study 2 — Pre-lesson explanations
  Number of gestures                   98         29       6          81      149       363
  Mean per explanation                 4.1        1.2      0.3        3.4      6.2      15.1
  Proportion of participants
    producing type                     0.75       0.50     0.21       0.79     1.00     –

Acknowledgments
We thank Erin Richard and Ishara Ruffins for research assistance.

Funding
Funding for this study was provided by the NSF Spatial Intelligence and Learning Center (SBE 0541957; DG and SG-M are co-PIs), ONR (N00014-16-1-2613 to DG), and NICHD (R01-HD47450 to SG-M).

Availability of data and materials
Quantitative data based on the video analysis will be made available upon request. Materials not included in the appendices are also available upon request.

Authors’ contributions
All authors developed the study concept. KC supervised the data collection. All authors interpreted the results. KC drafted the manuscript, with critical revisions provided by DG and SG-M. All authors approved the final version of the manuscript.

Competing interests
The authors declare that they have no competing interests.

Author details
1 Department of Psychology, University of Chicago, 5848 S. University Avenue, Chicago, IL 60637, USA. 2 Department of Psychology, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208, USA.

Received: 31 July 2016 Accepted: 27 October 2016


References
Ainsworth, S., Prain, V., & Tytler, R. (2011). Drawing to learn in science. Science, 333, 1096–1097.
Alibali, M. W. (2005). Gesture in spatial cognition: expressing, communicating, and thinking about spatial information. Spatial Cognition & Computation, 5(4), 307–331.
Alibali, M., Flevares, L., & Goldin-Meadow, S. (1997). Assessing knowledge conveyed in gesture: do teachers have the upper hand? Journal of Educational Psychology, 89, 183–193.
Alibali, M. W., Heath, D. C., & Myers, H. J. (2001). Effects of visibility between speaker and listener on gesture production: some gestures are meant to be seen. Journal of Memory and Language, 44, 169–188.
Anthony, K. W. (2009). Methane: a menace surfaces. Scientific American, 301, 68–75.
Beaudoin-Ryan, L., & Goldin-Meadow, S. (2014). Teaching moral reasoning through gesture. Developmental Science, 17(6), 984–990.
Boroditsky, L. (2001). Does language shape thought? Mandarin and English speakers’ conceptions of time. Cognitive Psychology, 43(1), 1–22.
Broaders, S. C., Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2007). Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General, 136(4), 539–550.
Brugman, C. M. (1988). The story of over: polysemy, semantics, and the structure of the lexicon. New York: Garland.
Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and Kappa. Journal of Clinical Epidemiology, 46(5), 423–429.
Casasanto, D., & Bottini, R. (2014). Spatial language and abstract concepts. Wiley Interdisciplinary Reviews: Cognitive Science, 5(2), 139–149.
Casasanto, D., & Jasmin, K. (2012). The hands of time: temporal gesture in English speakers. Cognitive Linguistics, 23(4), 643–674.
Chi, M. T. H., Roscoe, R. D., Slotta, J. D., Roy, M., & Chase, C. C. (2012). Misconceived causal explanations for emergent processes. Cognitive Science, 36(1), 1–61.
Chu, M., & Kita, S. (2011). The nature of gestures’ beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140(1), 102–116.
Chu, M., Meyer, A., Foulkes, L., & Kita, S. (2013). Individual differences in frequency and saliency of speech-accompanying gestures: the role of cognitive abilities and empathy. Journal of Experimental Psychology: General, 143(2), 694–709.
Church, R. B., & Goldin-Meadow, S. (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23, 43–71.
Cienki, A. (1998). Metaphoric gestures and some of their relations to verbal metaphorical expressions. In J.-P. Koenig (Ed.), Discourse and cognition: Bridging the gap (pp. 189–204). Stanford, California: CSLI.
Cienki, A., & Müller, C. (Eds.) (2008). Metaphor and gesture. Philadelphia: John Benjamins.
Clark, H. H. (1973). Space, time, semantics, and the child. In T. E. Moore (Ed.), Cognitive development and the acquisition of language (pp. 28–63). New York: Academic.
Cooperrider, K., Núñez, R., & Sweetser, E. (2014). The conceptualization of time in gesture. In C. Müller, A. Cienki, E. Fricke, S. Ladewig, D. McNeill, & J. Bressem (Eds.), Body-Language-Communication (vol. 2) (pp. 1781–88). New York: Mouton de Gruyter.
Day, S. B., & Gentner, D. (2007). Nonintentional analogical inference in text comprehension. Memory & Cognition, 35(1), 39–49.
Day, S. B., Motz, B. A., & Goldstone, R. L. (2015). The cognitive costs of context: the effects of concreteness and immersiveness in instructional examples. Frontiers in Psychology, 6(Dec), 1–13. http://doi.org/10.3389/fpsyg.2015.01876.
DeLoache, J. S. (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8(2), 66–70.
Emmorey, K. (2014). Iconicity as structure-mapping. Philosophical Transactions of the Royal Society, Biological Sciences, 369, 20130301.
Emmorey, K., Tversky, B., & Taylor, H. A. (2000). Using space to describe space: perspective in speech, sign, and gesture. Spatial Cognition & Computation, 2, 157–180.
Forbus, K., Usher, J., Lovett, A., & Wetzel, J. (2011). CogSketch: sketch understanding for cognitive science research and for education. Topics in Cognitive Science, 3(4), 648–666.
Gattis, M. (Ed.) (2001). Spatial schemas in abstract thought. Cambridge, MA: MIT Press.
Gattis, M., & Holyoak, K. (1996). Mapping conceptual to spatial relations in visual reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(1), 231–239.
Gentner, D. (1983). Structure-mapping: a theoretical framework for analogy. Cognitive Science, 7, 155–170.
Gentner, D. (2001). Spatial metaphors in temporal reasoning. In M. Gattis (Ed.), Spatial schemas in abstract thought (pp. 203–222). Cambridge: MIT Press.
Gentner, D., & Markman, A. B. (1997). Structure mapping in analogy and similarity. American Psychologist, 52(1), 45–56.
Gentner, D., & Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale: Erlbaum.
Goldin-Meadow, S. (2003). Hearing gesture: how our hands help us think. Cambridge: Harvard University Press.
Goldin-Meadow, S., Cook, S. W., & Mitchell, Z. A. (2009). Gesturing gives children new ideas about math. Psychological Science, 20(3), 267–272.
Goldin-Meadow, S., Nusbaum, H. C., Kelly, S. D., & Wagner, S. M. (2001). Explaining math: gesturing lightens the load. Psychological Science, 12, 516–522.
Goldwater, M. B., & Gentner, D. (2015). On the acquisition of abstract knowledge: structural alignment and explication in learning causal system categories. Cognition, 137, 137–153.
Hegarty, M. (2004). Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 8(6), 280–285.
Heine, B. (1997). Cognitive foundations of grammar. Oxford: Oxford University Press.
Hostetter, A. B., & Alibali, M. W. (2008). Visible embodiment: gestures as simulated action. Psychonomic Bulletin & Review, 15(3), 495–514.
Jacobson, M., Kapur, M., So, H.-J., & Lee, J. (2011). The ontologies of complexity and learning about complex systems. Instructional Science, 39(5), 763–783.
Jamalian, A., Giardino, V., & Tversky, B. (2013). Gestures for thinking. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 645–650). Austin: Cognitive Science Society.
Jamalian, A., & Tversky, B. (2012). Gestures alter thinking about time. In N. Miyake, D. Peebles, & R. P. Cooper (Eds.), Proceedings of the 34th Annual Cognitive Science Society (pp. 503–508). Austin: Cognitive Science Society.
Jamrozik, A., & Gentner, D. (2015). Well-hidden regularities: abstract uses of in and on retain an aspect of their spatial meaning. Cognitive Science, 39, 1881–1911.
Jee, B. D., Gentner, D., Uttal, D. H., Sageman, B., Forbus, K., Manduca, C. A., … Tikoff, B. (2014). Drawing on experience: how domain knowledge is reflected in sketches of scientific structures and processes. Research in Science Education, 44(6), 859–883.
Kang, S., Tversky, B., & Black, J. B. (2014). Coordinating gesture, word, and diagram: explanations for experts and novices. Spatial Cognition & Computation, 15(1), 1–26.
Kastens, K. A., & Ishikawa, T. (2006). Spatial thinking in the geosciences and cognitive sciences: a cross-disciplinary look at the intersection of the two fields. Geological Society of America Special Papers, 413, 53–76.
Kita, S. (2000). How representational gestures help speaking. In D. McNeill (Ed.), Language and gesture (pp. 162–185). Cambridge: Cambridge University Press.
Kita, S., Van Gijn, I., & Van der Hulst, H. (1998). Movement phase in signs and co-speech gestures, and their transcriptions by human coders. Gesture and Sign Language in Human-Computer Interaction, Bielefeld Gesture Workshop, September 17–19, 23–35.
Kita, S., van Gijn, I., & Van der Hulst, H. (2014). The non-linguistic status of the Symmetry Condition in signed languages: evidence from a comparison of signs and speech-accompanying representational gestures. Sign Language & Linguistics, 17(2), 215–238.
Lagnado, D. A., Waldmann, M. R., Hagmayer, Y., & Sloman, S. A. (2007). Beyond covariation: cues to causal structure. In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation (pp. 154–172). New York: Oxford University Press.
Liddell, S. K. (1995). Real, surrogate, and token space: grammatical consequences in ASL. In K. Emmorey & J. S. Reilly (Eds.), Language, gesture, and space (pp. 19–41). Hillsdale: Lawrence Erlbaum.
Little, J. L., & Bjork, E. L. (2016). Multiple-choice pretesting potentiates learning of related information. Memory & Cognition, 44, 1085–1101.
McNeill, D. (1992). Hand and mind: what gestures reveal about thought. Chicago: Chicago University Press.
Nathan, M. J., & Martinez, C. V. J. (2015). Gesture as model enactment: the role of gesture in mental model construction and inference making when learning from text. Learning: Research and Practice, 1(1), 4–37.
Novick, L. R. (2001). Spatial diagrams: key instruments in the toolbox for thought. In D. L. Medin (Ed.), The psychology of learning and motivation (Vol. 40, pp. 279–325). San Diego: Academic Press.
Núñez, R. (2008). A fresh look at the foundations of mathematics: gesture and the psychological reality of conceptual metaphor. In A. Cienki & C. Müller (Eds.), Metaphor and gesture (pp. 93–114). Amsterdam: John Benjamins.
Perry, M., Church, R. B., & Goldin-Meadow, S. (1988). Transitional knowledge in the acquisition of concepts. Cognitive Development, 3, 359–400.
Pine, K. J., Lufkin, N., & Messer, D. (2004). More gestures than answers: children learning about balance. Developmental Psychology, 40, 1059–1106.
Rottman, B. M., Gentner, D., & Goldwater, M. B. (2012). Causal systems categories: differences in novice and expert categorization of causal phenomena. Cognitive Science, 36(5), 919–932.
Schwartz, D. L., & Black, J. B. (1996). Shuttling between depictive models and abstract rules: induction and fallback. Cognitive Science, 20(4), 457–497.
Senghas, A., & Coppola, M. (2001). Children creating language: how Nicaraguan Sign Language acquired a spatial grammar. Psychological Science, 12(4), 323–328.
Sloman, S. A. (2005). Causal models: how people think about the world and its alternatives. Oxford: Oxford University Press.
Smith, L. A., & Gentner, D. (2014). The role of difference-detection in learning contrastive categories. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 1473–1478). Austin: Cognitive Science Society.
So, C., Coppola, M., Licciardello, V., & Goldin-Meadow, S. (2005). The seeds of spatial grammar in the manual modality. Cognitive Science, 29, 1029–1043.
Steen, G. (2008). The paradox of metaphor: why we need a three-dimensional model of metaphor. Metaphor and Symbol, 23(4), 213–241.
Traugott, E. C. (1978). On the expression of spatio-temporal relations in language. In J. H. Greenberg (Ed.), Universals of Human Language. Volume 3: Word Structure (pp. 369–400). California: Stanford University Press.
Uttal, D. H., & Wellman, H. M. (1989). Young children’s representation of spatial information acquired from maps. Developmental Psychology, 25(1), 128–138.
