What Brain Imaging Can Tell Us About Embodied Meaning


In press. To appear in M. de Vega, A. Glenberg, & A. Graesser (Eds.), Symbols, embodiment, and meaning. Oxford, UK: Oxford University Press.


Marcel Adam Just
Center for Cognitive Brain Imaging
Department of Psychology
Carnegie Mellon University
Pittsburgh, PA

Abstract: Brain imaging studies of language processing (using fMRI) can indicate under what circumstances the embodied aspects of language representations become activated. In particular, the processing of language is distributed across a number of cortical centers, including not only classic language areas in association cortex (which might be involved in symbolic processing), but also sensory and motor areas. A set of fMRI studies on visual imagery in sentence comprehension reveals both the perceptual-motor and the symbolic aspects of brain function that underlie language processing. Moreover, they indicate some of the conditions under which perceptual or motor representations are most likely to be activated. Another set of studies on word comprehension indicates that the neural signature of certain concrete semantic categories (tools and dwellings) and individual category exemplars can be identified by machine learning algorithms operating on the fMRI data, and that perceptual and motor representations constitute part of the signature.

Visual imagery in sentence comprehension

Many types of thinking, in particular language comprehension, entail the use of mental imagery. Understanding a text on architecture or automobile design seems impossible without mental imagery. Language often refers to perceptually-based information. For example, to evaluate a sentence like The number eight when rotated 90 degrees looks like a pair of spectacles, a reader must process the content of the sentence, retrieve a mental image of the shape of the digit 8, mentally apply a rotation transformation to it, and then evaluate the resulting image. In this case, there seems little doubt that a perceptually-based representation is involved in the comprehension. This perceptually-based representation has sometimes been called the referential representation or the situation model. Our fMRI studies attempt to determine the characteristics of such representations and the conditions under which they are likely to be evoked or activated.

Previous studies have indicated that mental imagery generated by verbal instructions and by visual encoding activates similar cortical regions (Mellet et al., 1996, 1998, 2002; Mazoyer et al., 2002). Several studies examining mental imagery have observed activation of the parietal area (Just et al., 2001; Mellet et al., 1996, 2000; Deiber et al., 1998; Ishai et al., 2000; Kosslyn et al., 1993, 1996, 1999), particularly around the intraparietal sulcus. Our imagery studies attempted to determine the conditions under which such activation occurs during language comprehension.

There is also a possibility that the neural activity underlying the imagery in language processing is affected by the presentation modality of the language (i.e., written vs. spoken). For example, the neural activity elicited in primary visual cortex during mental imagery following verbal vs. visual encoding was different (Mellet et al., 2000); there was less primary visual activation during imagery after visual encoding than after verbal encoding, suggesting that presentation modality may indeed affect later imagery processing. Another study, by Eddy and Glass (1981), examined how the visual processes in reading might be related to the visual imagery processes that a sentence engenders, comparing visual and auditory sentence presentation modes. High-imagery sentences took longer to verify as true or false than low-imagery sentences when the sentences were presented visually, but not when they were presented auditorily. These findings again suggest that the presentation modality of a sentence may affect the processing of the subsequent imagery.

Our imagery studies examined mental imagery processes in the context of a language comprehension task (Just et al., 2004). One of the main goals was to examine the interaction between two somewhat separable neural systems, the mental imagery and language processing systems. In the context of the embodiment debate, these studies ask not whether embodied (perceptual or motor) activation occurs, but under what circumstances it occurs and how it is related to other, more symbolic activation. To accomplish this goal, we used fMRI to measure not only the activation levels but also the functional connectivities of the regions believed to be involved in mental imagery, to determine the relations between the two systems. A second goal was to examine the effect of input modality, comparing the effect on the imagery-related activation when the sentences were either heard or read.

The study examined brain activation while participants read or listened to high-imagery sentences like The number eight when rotated 90 degrees looks like a pair of spectacles or low-imagery sentences, and judged them as true or false. They included sentences requiring various types of spatial transformation or spatial assessment, such as mental rotation (like the spectacles sentence), evaluation of spatial relations (e.g., On a map, Nevada is to the right of California), combination of shapes (e.g., The number nine can be constructed from a circle and a horizontal line, a false example), and comparison of visual aspects of common objects (e.g., In terms of diameter, a quarter is larger than a nickel, which is larger than a dime). Although these sentences generally required that a spatial transformation be mentally performed, pilot studies indicated that understanding a complex spatial description without a transformation produced similar results. The low-imagery sentences could be verified by referring to general knowledge, without the use of imagery (e.g., Although they are now a sport, marathons started with Greek messengers bringing news).

The sentence imagery manipulation affected the activation in regions (particularly the left intraparietal sulcus) that activate in other mental imagery tasks, such as mental rotation. Both the visual and auditory presentation experiments indicated much more activation of the intraparietal sulcus area in the high-imagery condition, as shown in Figure 1, suggesting a common neural substrate for language-evoked imagery that is independent of the input modality. There was more activation in the intraparietal sulcus area in the reading than in the listening condition (probably owing to the attentional demands of directing spatial attention, and possibly eye movements, to particular sentence locations during reading), but the magnitude of the imagery effect was comparable in the two cases.

Figure 1. The activation in the intraparietal sulcus area is higher for high-imagery sentences (particularly in the left hemisphere), and the effect is similar regardless of whether the sentences are presented visually or auditorily.

Functional connectivity and imagery. The various anatomical regions of the cortex involved in processing a task must be able to effectively communicate and synchronize their processes for the system to function. In a language task, this means that the areas responsible for executing subcomponent processes must collaborate to synthesize the information necessary for comprehension. Such collaboration can be measured in functional neuroimaging studies by computing the correlation of the activation time series in a given region with the activation time series of another region. The extent to which the activation levels of two regions rise and fall in tandem is taken as a reflection of the degree to which the two regions are functionally connected, and the term that is widely used to refer to the activation time series correlation is functional connectivity. Previous research has provided some evidence that as task demands increase, functional connectivity also increases, for example, as a function of working memory load (e.g., Diwadkar, Carpenter, & Just, 2000), reflecting the need for tighter coordination in a more demanding condition. Functional connectivity has also been shown to be modulated by comprehension difficulty, and differentially so for people of different working memory capacity (Prat et al., 2007). A relation between functional and anatomical connectivity has been demonstrated in autism, where the functional connectivity between cortical regions is correlated with the size of the corpus callosum segment that anatomically connects them (Just et al., 2007).
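
In computational terms, the measure reduces to the Pearson correlation of the mean signal time courses of two regions. The following minimal sketch (in Python; the ROI names and the synthetic data are hypothetical, purely for illustration) shows the computation:

    import numpy as np

    def functional_connectivity(ts_a, ts_b):
        # Pearson correlation between two regional activation time series
        return np.corrcoef(ts_a, ts_b)[0, 1]

    # Hypothetical example: mean fMRI signal of two ROIs over 120 volumes
    rng = np.random.default_rng(0)
    shared = rng.standard_normal(120)               # common task-driven component
    lips = shared + 0.5 * rng.standard_normal(120)  # left intraparietal sulcus ROI
    lt = shared + 0.5 * rng.standard_normal(120)    # left temporal ROI
    print(functional_connectivity(lips, lt))        # near 1.0 = tightly synchronized

Two regions whose time series share a strong common component yield correlations near 1, the signature of tight coordination; unrelated regions hover near 0.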

In addition to exhibiting higher activation levels during the processing of high-imagery sentences, the left intraparietal sulcus also showed greater functional connectivity in this condition with other cortical regions, particularly with language processing regions, regardless of the input modality.

The imagery manipulation affected the functional connectivity between the left intraparietal sulcus area and other brain areas that are activated in this task. It also affected the functional connectivity between the left superior temporal (Wernicke's) area and other brain areas. These two areas are proposed to be involved, respectively, in the imagery processing of high-imagery sentences (left intraparietal) and in semantic (symbolic) processing (Wernicke's). The five other activated brain areas were left hemisphere areas, including areas centrally involved in language processing (pars opercularis, pars triangularis, dorsolateral prefrontal cortex, frontal eye fields, and inferior parietal lobule). The intraparietal sulcus area had higher functional connectivities with the five activated brain areas when the sentences were high in imagery, whereas the left temporal area had higher functional connectivities to these five areas for low-imagery sentences. This result applies to both the visual and auditory presentation conditions, as shown in Figure 2.

Figure 2. The average functional connectivity of the left intraparietal sulcus ROI (LIPS) and the left temporal ROI (LT) with five left hemisphere ROIs: pars opercularis, pars triangularis, the inferior parietal lobule, dorsolateral prefrontal cortex, and frontal eye fields. The functional connectivity between LIPS and these areas is greater for high-imagery sentences. The opposite is true for LT, which is not involved in imagery processing. The same pattern occurs for both visual and auditory presentation of sentences.

The result provides important converging evidence implicating the intraparietal sulcus in imagery processing in sentence comprehension, and more directly indicates the higher degree of functional interaction between this embodied imagery representation and some of the other key activated regions in the high-imagery condition.

Sentence imagery in autism. In a recent study, we examined the processing of such high- and low-imagery sentences in adults with high-functioning autism, in comparison to age- and IQ-matched controls. This study provides the opportunity to determine whether the use of imagery (or embodied representations) in sentence comprehension might be disrupted in a special neurological population. Can meaning embodiment be disrupted or modulated by a neurological disorder?

First, consider the effect of imagery on the control participants, which is very similar to the effect in the study described above. Figure 3 shows the imagery effect (high imagery minus low), displaying the prominent extra activation in the parietal area, as well as some in the prefrontal and inferior temporal areas.

Figure 3. High minus low imagery effect of visual sentence comprehension in control participants, displaying a large imagery effect in parietal, prefrontal, and inferior temporal areas.

What of people with high-functioning autism? There have been frequent suggestions that spatial/perceptual processing is spared (or sometimes even enhanced) in autism. For example, Shah and Frith (1983, 1993) found that participants with autism have an advantage at certain types of spatial tasks. When a task is amenable to either a visual or a verbal strategy, there is a suggestion that people with autism prefer a visual strategy. There are many informal reports that individuals with autism are predominantly visual thinkers (Grandin, 1995). Temple Grandin, an accomplished professional with high-functioning autism, entitled her book Thinking in Pictures. In an fMRI letter n-back study, Koshino et al. (2005) found more visual coding of letters in autism, compared to a verbal coding strategy in the controls. Similar results were also found in a faces working memory task (Koshino et al., in press). These studies indicate a tendency in people with autism to rely more on visuospatial processing, recruiting posterior brain regions to accomplish varied tasks, including language tasks.

The group with autism showed activation similar to the controls' in the processing of high-imagery sentences (prominent activation in the parietal area, particularly around the intraparietal sulcus). It is reasonable to conclude that in the high-imagery condition, which can be said to make use of embodied (perceptual) representations, the processing of the autism group resembled the processing of the control group. However, the interesting new result was that unlike the control group, the autism group displayed a similar amount of parietal imagery-related activation even in the low-imagery condition. Figure 4 shows the resulting minimal imagery effect (the subtraction of high minus low imagery). It appears that the autism group uses embodied representations even in contexts where control participants do not.

Figure 4. High minus low imagery effect of visual sentence comprehension in people with autism, displaying a negligible imagery effect.

The tentative account for why this occurs is that the cortical connectivity of the neural system is compromised in autism (the main tenet of the underconnectivity theory of autism), particularly affecting the communication between frontal and more posterior areas, resulting in a greater reliance on the more posterior areas. Thus in autism, there is often less frontal activation and more posterior activation in a number of different types of tasks. One might say that cognition is more embodied or concrete in autism, and less abstract.

Visual imagery in the comprehension of novel metaphors. Mason, Eviatar, and Just (under review) compared the comprehension of frozen and novel metaphors, and found an interesting difference that bears on embodiment. During the comprehension of a frozen metaphor passage, the same language processing areas are active as in normal reading (e.g., left middle and superior temporal lobe, left inferior frontal gyrus, dorsolateral prefrontal cortex bilaterally, right middle and superior temporal areas, and superior medial frontal areas). However, it is the contrast with novel metaphors that is central here. The novel metaphors evoked parietal activation around the intraparietal sulcus area bilaterally, suggesting that visual imagery processes were being used to instantiate and/or interpret the novel metaphors. The novel metaphors were generally visual in nature, such as a metaphor comparing a winding road to a ribbon. These results demonstrate the selective use of imagery in metaphor comprehension, suggesting that perceptual representations are used in the comprehension of novel but not frozen metaphors. It may be that the mappings between domains that are needed in novel metaphor comprehension (e.g., the mapping between the solar system and the atom, to take a clearly spatial example) are often mediated through spatial representations. A more general principle of neural function is that a given representation or process can be activated on an as-needed basis. In this view, the embodiment occurs for only those types of metaphors that require it for their appropriate comprehension.

Selectivity of the perceptual representations activated in a haptic imagery task. The imagery-related activation around the intraparietal sulcus can also be evoked in a haptic imagery task. Participants are asked to compare either the geometric (visual) properties of two objects (e.g., Which is larger, a pumpkin or a cucumber?) or their haptic (touch) properties (Which is softer, a lemon or an egg?) (Newman et al., 2005). When the haptic properties (e.g., hardness) of the objects are interrogated via haptic imagery, there is extra activation in the inferior extrastriate region. Additionally, a region in the lateral occipital cortex is activated for either type of probe. Thus, thinking about the object in almost any way evokes some perceptual activation in a number of areas, but the amount of activation increases in areas whose information is probed. So there is selectivity in the perceptually-related activation.

Summary. This set of fMRI studies of imagery provides several lessons about embodiment, particularly highlighting the circumstances under which embodied representations are likely to be in play. First, perceptual (embodied) representations are used when there is some mental action or assessment to be performed on the perceptual representation in order to comprehend the sentence or perform the task. Second, in the processing of sentence imagery, the parietal activation associated with the perceptual representation is synchronized with the symbolic language areas with which it is collaborating. Third, in autism, perceptual representations are used for comprehending even low-imagery sentences, which control participants process without the benefit of perceptual representations. And fourth, perceptual representations are used in the comprehension of novel metaphors but not frozen metaphors. All of these lessons have an overarching theme: the embodied representations are activated more often or to a higher level in those situations in which the perceptual information is particularly useful or salient.

The embodied neural signature of words referring to object concepts

In a multidisciplinary project done in collaboration with Tom Mitchell and our two research groups, particularly involving Robert Mason, Svetlana Shinkareva, Vicente Malave, and Francisco Pereira, we have been using machine learning techniques that can learn what activation pattern defines a cognitive state. Our investigation of the representation of words and sentences attempted to determine the neural signature of a semantic category using an engineering rather than a neuroscience approach. We started with concrete nouns as stimuli, taken from a small number of categories. In one of our studies, we showed participants a total of 14 different words, including seven tools (e.g., hammer, hatchet, pliers, screwdriver) and seven dwellings (e.g., mansion, castle, palace, hut, apartment).

The experiments and the data analyses are unconventional, as are the goals. The experimental tasks attempt to isolate individual thoughts of brief duration, typically about 4 sec. We present each word several times during a scanning session (six times in this experiment). We then train the classifiers on all but one of the presentations, and then try to classify the activation patterns in the left-out presentation, iterating through all the possible ways of leaving one presentation out. A classifier, of which logistic regression is a common example, is a decision system that takes as its input the values of some features, and produces as output a discrete label related to the input values. In this application, the input to the classifier is the activation level of a set of voxels (say 200) measured during the reading of a word, and the output is the category to which the word belongs.
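
A rough sketch of this leave-one-presentation-out procedure, using logistic regression as the example classifier (the data below are random placeholders with the dimensions just described, so accuracy will sit at chance; the actual analysis pipeline differs in its details):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    n_words, n_pres, n_voxels = 14, 6, 200
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_words * n_pres, n_voxels))  # one row per word presentation
    words = np.repeat(np.arange(n_words), n_pres)          # word label for each row
    pres = np.tile(np.arange(n_pres), n_words)             # presentation index for each row

    hits = 0
    for left_out in range(n_pres):                         # leave one presentation out
        train, test = pres != left_out, pres == left_out
        clf = LogisticRegression(max_iter=1000).fit(X[train], words[train])
        hits += (clf.predict(X[test]) == words[test]).sum()
    print(hits / len(words))                               # chance level is 1/14 here

Holding out an entire presentation, rather than random trials, ensures that the classifier is tested on activation patterns it has never seen for any word.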

In our quest for the neural signature of concepts, we have discovered that the neural representations of at least these concepts contain perceptual and motor information that is pertinent to a category. The embodiment of a category, namely its representation of its perceptual and motor attributes, is part of the category's neural signature. For example, the representations of tools include voxels in the motor area and in the somatosensory area.

There are several ways in which our findings resemble those of two other fMRI projects reported in this volume, in the papers of Pulvermüller. First, the neural signatures span a number of areas distributed over the cortex, including all four lobes in both hemispheres as well as the cerebellum. Second, what differentiates the activation for different categories (what makes it a signature) is not which voxels activate, but the intensity level of their activation.

Our results extend beyond just these commonalities with the other studies. In addition, we have been able to classify with some accuracy the category of the word that is being read. Moreover, we have been able to identify the individual word that a person is reading from the neural signature.

The machine learning techniques were able to identify which of the two categories of word was being read by a participant, with a median rank accuracy over 12 participants of 79% (as of December 2005). (Rank accuracy is the percentile rank of the correct word within a list of predictions made by the classifier; chance rank accuracy would be 50%.) For the participant with the best result, the classifier was extremely accurate in identifying the category of the word being read, with a rank accuracy of 93%. The classifier was also able to identify to some extent which of the 14 different words was being read, with a median rank accuracy over 12 participants of 69% (still far above chance); the rank accuracy for the best participant was 76%. The voxels used by the classifier to distinguish among categories and among words are distributed across many regions of the cortex, including the right hemisphere and motor areas. For example, Figure 5 shows a motor region in which several participants had diagnostic voxels, indicating that the motor representation of how a tool is used is part of the meaning of a word that names a hand tool.

Figure 5. Voxels in the motor region used by the classifier for word identification in several participants.

The rank accuracy of the identification (particularly for distinguishing among the 14 words) is measured as follows. For each test input, the trained classifier outputs a rank-ordered list of the 14 words, ordered from most to least probable according to its learned model. Classifier accuracy is then measured by the percentile rank of the correct word within this sorted list: a perfect identification produces a rank accuracy of 1 (the correct answer is ranked as the most likely), and random performance by the classifier produces a rank accuracy of 0.5.
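
Under these definitions, rank accuracy can be computed as in the sketch below (the probability vector is a random placeholder for a classifier's output over the 14 candidate words):

    import numpy as np

    def rank_accuracy(scores, correct_idx):
        # Percentile rank of the correct word in the classifier's sorted output:
        # 1.0 when ranked most probable, 0.5 on average for a random classifier
        order = np.argsort(scores)[::-1]                 # most to least probable
        rank = int(np.flatnonzero(order == correct_idx)[0])
        return 1.0 - rank / (len(scores) - 1)

    probs = np.random.default_rng(0).random(14)          # placeholder classifier output
    print(rank_accuracy(probs, correct_idx=3))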

The classifier that has been most expedient for our research is a Gaussian Naïve Bayes (GNB) classifier. Other classifiers that we sometimes use include artificial neural networks and logistic regression. The results are somewhat similar across classifiers. The input to the classifier is represented as a feature vector, where each feature typically corresponds to the observed MR intensity value at a specific voxel at a specific time, relative to a baseline. Although the classification can be done by providing the classifier with input about all of the voxels, the accuracy is generally higher if only a small subset of all the voxels (say 25-200 out of about 15,000) is used in the classification. We have used a number of different criteria to select the subset of diagnostic voxels, and the most useful criterion has been the consistency of the voxels across presentations in how they respond to the different words.
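
One way such a consistency criterion might be implemented is sketched below. The dimensions follow the study, but the data and the exact scoring metric are stand-ins, and in a real analysis the selection would be computed from the training presentations only:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def stability_scores(data):
        # data: presentations x words x voxels. Score each voxel by how
        # consistently its response profile over the words replicates across
        # presentations (mean pairwise correlation between presentations).
        n_pres = data.shape[0]
        iu = np.triu_indices(n_pres, k=1)
        return np.array([np.corrcoef(data[:, :, v])[iu].mean()
                         for v in range(data.shape[2])])

    rng = np.random.default_rng(0)
    data = rng.standard_normal((6, 14, 15000))  # 6 presentations x 14 words x ~15,000 voxels
    top = np.argsort(stability_scores(data))[::-1][:200]  # keep the 200 most stable voxels

    # Train a Gaussian Naive Bayes classifier on the selected voxels only
    X = data.reshape(6 * 14, -1)[:, top]        # one row per word presentation
    y = np.tile(np.arange(14), 6)               # word label for each row
    clf = GaussianNB().fit(X, y)

Voxels that respond to the word set in the same way on every presentation carry signal rather than noise, which is why consistency is a useful selection criterion.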

The neural signature is based on many voxels, some of which are in motor areas, many in perceptual areas, and some in frontal (conceptual) areas. Some informal sensitivity analyses were done to determine whether some subset of voxels is particularly important or unimportant for accurate classification. So far, no striking systematic differences have been found in how useful the voxels in various locations are. For example, voxels in the motor area are not very different from voxels elsewhere in terms of their contribution to classifying the signature.

Summary. The neural evidence clearly indicates that in at least some cases, perceptual and motor representations are activated during processing that is primarily conceptual. The scientific questions for the future might center around the when and the how of embodied cognition. Brain imaging has provided a venue for addressing these questions, as well as providing substantiating evidence that embodied cognition is much more prevalent than was generally assumed a decade ago.

    Acknowledgments

This research was supported by the National Institute of Mental Health Grant MH029617 and the W. M. Keck Foundation. Address correspondence to Marcel Adam Just, Center for Cognitive Brain Imaging, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213 or [email protected].

    References

Deiber, M-P, Ibanez, V, Honda, M, Sadato, N, Raman, R and Hallett, M (1998). Cerebral processes related to visuomotor imagery and generation of simple finger movements studied with positron emission tomography. NeuroImage, 7, 73-85.

Diwadkar, VA, Carpenter, PA and Just, MA (2000). Collaborative activity between parietal and dorsolateral prefrontal cortex in dynamic spatial working memory revealed by fMRI. NeuroImage, 12, 85-99.

Eddy, JK and Glass, AL (1981). Reading and listening to high and low imagery sentences. Journal of Verbal Learning and Verbal Behavior, 20, 333-45.

    Grandin, T (1995). Thinking in pictures: And otherreports from my life with autism. New York:Doubleday.

Ishai, A, Ungerleider, LG, Martin, A and Haxby, JV (2000). The representation of objects in the human occipital and temporal cortex. Journal of Cognitive Neuroscience, 12, 35-51.

Just, MA, Carpenter, PA, Maguire, M, Diwadkar, VA and McMains, S (2001). Mental rotation of objects retrieved from memory: An fMRI study of spatial processing. Journal of Experimental Psychology: General, 130, 493-504.

Just, MA, Cherkassky, VL, Keller, TA, Kana, RK and Minshew, NJ (2007). Functional and anatomical cortical underconnectivity in autism: Evidence from an fMRI study of an executive function task and corpus callosum morphometry. Cerebral Cortex, 17, 951-961.

Just, MA, Newman, SD, Keller, TA, McEleney, A and Carpenter, PA (2004). Imagery in sentence comprehension: An fMRI study. NeuroImage, 21, 112-24.

Koshino, H, Carpenter, PA, Minshew, NJ, Cherkassky, VL, Keller, TA and Just, MA (2005). Functional connectivity in an fMRI working memory task in high-functioning autism. NeuroImage, 24, 810-21.

Koshino, H, Kana, RK, Keller, TA, Cherkassky, VL, Minshew, NJ and Just, MA (in press). fMRI investigation of working memory for faces in autism: Visual coding and underconnectivity with frontal areas. Cerebral Cortex.

Kosslyn, SM, Alpert, NM, Thompson, WL, Maljkovic, V, Weise, SB, Chabris, CF, Hamilton, SE, Rauch, SL and Buonanno, FS (1993). Visual mental imagery activates topographically organized visual cortex: PET investigations. Journal of Cognitive Neuroscience, 5, 263-87.

Kosslyn, SM, Pascual-Leone, A, Felician, O, Camposano, S, Keenan, JP, Thompson, WL, Ganis, G, Sukel, KE and Alpert, NM (1999). The role of area 17 in visual imagery: Convergent evidence from PET and rTMS. Science, 284, 167-70.


Kosslyn, SM, Shin, LM, Thompson, WL, McNally, RJ, Rauch, SL, Pitman, RK and Alpert, NM (1996). Neural effects of visualizing and perceiving aversive stimuli: A PET investigation. NeuroReport, 7, 1569-76.

Mazoyer, B, Tzourio-Mazoyer, N, Mazard, A, Denis, M and Mellet, E (2002). Neural bases of image and language interactions. International Journal of Psychology, 37, 204-08.

Mellet, E, Bricogne, S, Crivello, F, Mazoyer, B, Denis, M and Tzourio-Mazoyer, N (2002). Neural basis of mental scanning of a topographic representation built from a text. Cerebral Cortex, 12, 1322-30.

Mellet, E, Bricogne, S, Tzourio-Mazoyer, N, Ghaem, O, Petit, L, Zago, L, Etard, O, Berthoz, A, Mazoyer, B and Denis, M (2000). Neural correlates of topographic mental exploration: The impact of route versus survey perspective learning. NeuroImage, 12, 588-600.

Mellet, E, Petit, L, Mazoyer, B, Denis, M and Tzourio, N (1998). Reopening the mental imagery debate: Lessons from functional anatomy. NeuroImage, 8, 129-39.

Mellet, E, Tzourio, N, Crivello, F, Joliot, M, Denis, M and Mazoyer, B (1996). Functional neuroanatomy of spatial mental imagery generated from verbal instructions. Journal of Neuroscience, 16, 6504-12.

Newman, SD, Klatzky, RL, Lederman, SJ and Just, MA (2005). Imagining material versus geometric properties of objects: An fMRI study. Cognitive Brain Research, 23, 235-46.

Prat, C, Keller, TA and Just, MA (2007). Individual differences in sentence comprehension: An fMRI investigation of syntactic and lexical processing demands. Journal of Cognitive Neuroscience, 19, 1950-1963.

Shah, A and Frith, U (1983). An islet of ability in autistic children: A research note. Journal of Child Psychology and Psychiatry, 24, 613-20.

Shah, A and Frith, U (1993). Why do autistic individuals show superior performance on the block design task? Journal of Child Psychology and Psychiatry, 34, 1351-64.