Neuropsychologia xxx (2005) xxx–xxx

Speech and gesture share the same communication system

Paolo Bernardis, Maurizio Gentilucci∗

Dipartimento di Neuroscienze, Università di Parma, via Volturno 39, 43100 Parma, Italy

Received 4 February 2005; received in revised form 12 April 2005; accepted 10 May 2005

Abstract

Humans speak and produce symbolic gestures. Do these two forms of communication interact, and how? First, we tested whether the two communication signals influenced each other when emitted simultaneously. Participants either pronounced words, or executed symbolic gestures, or emitted the two communication signals simultaneously. Relative to the unimodal conditions, multimodal voice spectra were enhanced by gestures, whereas multimodal gesture parameters were reduced by words. In other words, gesture reinforced word, whereas word inhibited gesture. In contrast, aimless arm movements and pseudo-words had no comparable effects. Next, we tested whether observing word pronunciation during gesture execution affected verbal responses in the same way as emitting the two signals. Participants responded verbally to either spoken words, or to gestures, or to the simultaneous presentation of the two signals. We observed the same reinforcement in the voice spectra as during simultaneous emission. These results suggest that spoken word and symbolic gesture are coded as a single signal by a unique communication system. This signal represents the intention to engage a closer interaction with a hypothetical interlocutor, and it may have a meaning different from when word and gesture are encoded singly.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Symbolic gesture; Speech; Arm kinematics; Voice spectra; Social intention

1. Introduction

Gesture is a universal feature of human communication. In every culture speakers produce gestures, although the extent and typology of the gestures produced vary. For some types of gestures, execution is frequently associated with speech production (see Goldin-Meadow, 1999; McNeill, 1992). In particular, speakers frequently pronounce words while they also execute symbolic gestures, which express the same meaning as the word. Consider expressing approbation. While one pronounces "ok", one often forms a circle with the forefinger and thumb in contact at their tips, while the rest of the fingers extend outward.

There are two competing views of the relationship between gesture and speech. The first posits that gesture and speech are separate communication systems (Hadar, Wenkert-Olenik, Krauss, & Soroker, 1998; Krauss & Hadar, 1999; Levelt, Richardson, & La Heij, 1985). According to this view, gesture functions as an auxiliary support when verbal expression is temporarily disrupted or word retrieval is difficult. The second view (McNeill, 1992) posits that gesture and speech form a single system of communication, because they are linked to the same thought processes even if the expression modality differs.

∗ Corresponding author. Tel.: +39 0521 903899; fax: +39 0521 903900. E-mail address: gentiluc@unipr.it (M. Gentilucci).

Neuropsychological and neurophysiological evidence is in accord with the idea that speech and gestures share common neural substrates. It is well known that left cortical lesions producing aphasia are frequently associated with apraxia, especially ideomotor apraxia, a disorder of purposeful actions (Heath, Roy, Black, & Westwood, 2001). Brain-imaging studies show that language areas, such as Broca's area, are activated by imitation (Grezes, Armony, Rowe, & Passingham, 2003; Tanaka & Inui, 2002), by observation of gestures (Decety et al., 1997; Grezes et al., 2003; MacSweeney et al., 2002, 2004) or, more generally, by observation of upper arm actions (Buccino et al., 2004).

From an evolutionary point of view, Stokoe and colleagues (Armstrong, Stokoe, & Wilcox, 1995; see also Corballis, 2002; Hewes, 1973) hypothesized that spoken language derives from an ancient system which used arm gestures in order to communicate. Interestingly, the 18th-century Neapolitan philosopher Giambattista Vico, in La Scienza Nuova (1744/1953), also formulated a theory of the origin of language, proposing that humans were originally mute and communicated by gestures, not by speech. In the same vein, Condillac (1746) proposed that language evolved from manual gestures, and not from animal calls. Recently, Corballis (2002) proposed that spoken language developed as the repertoire of gestures was gradually transferred from arm to mouth. This may have happened because of the existence of double motor commands simultaneously sent to both arm and mouth (Gentilucci, Benuzzi, Gangitano, & Grimaldi, 2001). In addition, there is a strict relationship between early language development in children and several aspects of manual activity, such as communicative and symbolic gestures (for a review see Bates & Dick, 2002; Volterra, Caselli, Capirci, & Pizzuto, 2005). Canonical babbling in children aged 6–8 months is accompanied by rhythmic hand movements (Masataka, 2001). Word comprehension in children between 8 and 10 months and word production between 11 and 13 months are accompanied by deictic and recognition gestures, respectively (Bates & Snyder, 1987; Volterra, Bates, Benigni, Bretherton, & Camaioni, 1979).

From a behavioral point of view, if pronouncing words and executing symbolic gestures with the same meaning are controlled by a single communication system, these two tasks should interact in an experimental setting. Previous studies (Levelt et al., 1985; McNeill, 1992) showed that word pronunciation and gesture execution are temporally coordinated. More generally, Kimura (1973) showed that speaking is frequently associated with free movements of the hand opposite to the speech hemisphere. Conversely, we showed that hand actions guided by an object influence the voice spectra of syllables pronounced simultaneously with the actions (Gentilucci, Santunione, Roy, & Stefanini, 2004; Gentilucci, Stefanini, Roy, & Santunione, 2004). In experiments 1 and 2 we aimed to extend the conclusions of previous studies and to determine whether gesture execution and word pronunciation are functionally related. If the hypothesis of a single communication system is true, the command to execute a meaningful gesture should modify the voice spectra of a word having the same meaning, but not of a meaningless word (i.e. a pseudo-word). Conversely, the command to pronounce a word should modify the kinematics of a meaningful gesture, but not of a meaningless gesture (i.e. a meaningless complex movement).

If the hypothesis about a single communication system is true, this system should be involved in both the execution and the interpretation of messages expressed by words or gestures, or by both. Indeed, a link between actor and observer is as necessary as the link between the sender and the receiver of each message. According to Rizzolatti and Arbib (1998), the "mirror system" may provide the necessary bridge from 'doing' to 'communicating'. Consequently, the observation of both word and gesture should affect the voicing of a verbal response just as the simultaneous emission of the two communication signals can do. This hypothesis arose from our previous findings (Gentilucci, 2003a; Gentilucci, Santunione, et al., 2004; Gentilucci, Stefanini, et al., 2004) showing that the observation of transitive hand actions, such as grasping an object and bringing it to the mouth, affects the voicing of syllables pronounced during action observation. The effects were the same as those found during execution of the actions (Gentilucci, Santunione, et al., 2004; Gentilucci, Stefanini, et al., 2004). In experiments 3 and 4, we compared the voice spectra of the same words used in the first part of the study, repeated after presentation of either the spoken word or the gesture, with those repeated after simultaneous presentation of both communication signals.

2. Experiments 1 and 2

We verified whether gesture execution and word pronunciation influenced each other when simultaneously emitted. We expected that the execution of a meaningful gesture would modify the voice spectra of a word having the same meaning, but not of a meaningless word (i.e. a pseudo-word). Conversely, we expected that the pronunciation of a word would modify the kinematics of the corresponding meaningful gesture, but not of a meaningless gesture.

2.1. Methods

2.1.1. Participants
Twenty-eight Italian male adults (age 21–24 years) were classified as right-handed according to the Edinburgh Inventory (Oldfield, 1971). They were divided into two groups of 14 individuals, each group participating in one of the two experiments. Only male participants were included, to reduce variability in vowel formants (Ferrero, Magno Caldognetto, & Cosi, 1996; Pickett, 1999). The Ethics Committee of the Medical Faculty of the University of Parma approved the study. All participants were naïve to the purpose of the study.

2.1.2. Apparatus and stimuli
Participants sat on a chair in front of a table. Before each trial, their right wrist was placed on a fixed starting position. The stimuli were the three words CIAO (/ˈtʃao/), NO (/nɔ/), and STOP (/stɔp/) printed on a PC monitor. In addition, the string of letters XXX in experiment 1 or the pseudo-word LAO (/lao/) in experiment 2 was presented. LAO was chosen because it contains the same vowels as the other presented words. The stimuli were presented in medium grey (approximately 6 cd/m²) against a black background (1 cd/m²) on a 19 in. SONY LCD monitor controlled by a PC. The monitor was set at a spatial resolution of 1024 × 768 pixels and a temporal resolution of 75 Hz. It was placed in front of the table, at a distance of 2 m from the participant. Each trial started with a BEEP followed by a black screen (300 ms), after which the stimulus was presented (duration 1000 ms).
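The trial timeline just described (a BEEP, a 300 ms black screen, then the printed stimulus for 1000 ms in medium grey on black) is straightforward to script. The sketch below shows one possible rendering in Python with PsychoPy; the paper does not state which presentation software was used, so the library, the tone frequency, and the text size are illustrative assumptions, not the original implementation.

```python
# Hypothetical reconstruction of the trial timeline of experiments 1 and 2.
# PsychoPy is an assumption: the paper only says stimuli were shown on a
# 19 in. monitor at 1024 x 768 / 75 Hz, controlled by a PC.
from psychopy import core, sound, visual

win = visual.Window(size=(1024, 768), color="black", units="pix", fullscr=True)
beep = sound.Sound(value=440, secs=0.2)      # warning tone; frequency is a guess
stimuli = ["CIAO", "NO", "STOP", "XXX"]      # experiment 1 stimulus set

def run_trial(text):
    """One trial: BEEP, 300 ms black screen, 1000 ms printed stimulus."""
    beep.play()
    core.wait(0.3)                           # black screen after the BEEP
    word = visual.TextStim(win, text=text, color="gray", height=48)
    word.draw()                              # "gray" approximates medium grey;
    win.flip()                               # matching 6 cd/m2 would need a photometer
    core.wait(1.0)                           # stimulus visible for 1000 ms
    win.flip()                               # back to black; response follows
```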


Fig. 1. Examples of arm trajectory during execution of meaningful and meaningless gestures in experiment 1.

2.1.3. Procedure
In four successive blocks of trials the participants were required to emit different responses at the end of stimulus (printed word, pseudo-word, or string XXX) presentation. Response 1 was the gesture having the same meaning as the printed word (meaningful gesture, Fig. 1). Response 2 was pronunciation of the word or the pseudo-word. Response 3 was pronunciation of the word and simultaneous execution of the corresponding gesture. Response 4 in experiment 1 was pronunciation of the word and execution of a meaningless arm movement (meaningless gesture). This consisted of up-down oscillatory movements involving the wrist, elbow, and shoulder joints, just as the meaningful gestures did (Fig. 1). This meaningless gesture was also executed when the string XXX was presented. Response 4 of experiment 2 was execution of the gesture corresponding to the printed word and pronunciation of the pseudo-word LAO. To exclude the possibility that silent repetition of the read word influenced gesture execution, response 1 of experiments 1 and 2 and response 4 of experiment 2 were emitted after counting aloud from 1 to 5. We preferred to run two experiments instead of a single longer one to ensure that the task did not become automatic and that participants emitted the responses spontaneously, as in everyday life. Summing up, both experiments 1 and 2 consisted of four experimental conditions, in each of which a different response was required to stimuli presented with the same modality.

The participants were presented with each condition in one block of trials during the experimental session. The blocks of trials were quasi-counterbalanced across participants. Each stimulus was quasi-randomly presented five times during each block of trials. Consequently, each of the four blocks of trials consisted of 15 trials (3 stimuli × 5 repetitions), except the blocks corresponding to condition 1 of experiment 1 and condition 2 of experiment 2, which consisted of 20 trials (4 stimuli × 5 repetitions). Participants were left free to use as much time as needed to complete each trial.

2.1.4. Data recording
The participants wore a lightweight dynamic headset microphone (Shure, model WH20). The frequency response of the microphone ranged from 50 to 15,000 Hz. The microphone was connected to a second PC by a sound card (16 PCI Sound Blaster; CREATIVE Technology Ltd, Singapore). We acquired voice data during word pronunciation using the Avisoft SASLab professional software (Avisoft Bioacoustics, Germany), whereas we calculated the participants' voice parameters using the PRAAT software (http://www.praat.org). We analyzed the time course of formants (F) 1 and 2 of the word vowels. It is well known that F1 and F2 exactly define vowels from an acoustical point of view. Both the formant transition and the pure vowel pronunciation were included in the analysis. Mean F1, F2, pitch, intensity, and vowel duration were calculated. Finally, the beginning of word pronunciation was measured with respect to the end of stimulus presentation.
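As an illustration of these measurements, the sketch below extracts mean F1, F2, pitch, and intensity over a hand-marked vowel interval using parselmouth, a Python interface to the same Praat analyses the authors ran; the file name, the vowel interval, and the analysis settings are assumptions made for the example.

```python
# Sketch of the per-word voice measurements (mean F1, F2, pitch, intensity).
# parselmouth exposes Praat's analyses in Python; the authors used Praat
# itself, so the settings below are library defaults, not theirs.
import numpy as np
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("ciao_trial01.wav")      # hypothetical recording
formants = snd.to_formant_burg(time_step=0.01)   # Burg-method formant tracks
pitch = snd.to_pitch()
intensity = snd.to_intensity()

t0, t1 = 0.10, 0.35              # vowel interval in s (here marked by hand)
times = np.arange(t0, t1, 0.01)
f1 = np.nanmean([formants.get_value_at_time(1, t) for t in times])
f2 = np.nanmean([formants.get_value_at_time(2, t) for t in times])
f0 = np.nanmean([call(pitch, "Get value at time", t, "Hertz", "linear")
                 for t in times])
db = np.nanmean([call(intensity, "Get value at time", t, "cubic")
                 for t in times])
print(f"F1 {f1:.0f} Hz  F2 {f2:.0f} Hz  pitch {f0:.0f} Hz  intensity {db:.1f} dB")
```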

Movements of the participant's arm were recorded using the 3D optoelectronic ELITE system (B.T.S., Milan, Italy). The ELITE consists of two video cameras detecting infrared reflecting markers at a sampling rate of 50 Hz. Recording of arm movement was synchronized with voice recording by means of the PC used to present the stimuli. Movement reconstruction in 3-D coordinates and computation of the kinematics parameters are described in a previous work (Gentilucci, Chieffi, Scarpa, & Castiello, 1992). We used the following three markers: the first was placed on the tip of the participant's index finger, the second was placed on the participant's wrist, and the third, placed on the table plane leftward with respect to the participant's sagittal axis, was used as a reference point. We used the first marker to calculate the following kinematics parameters: gesture time, i.e. the time from when the participant started to lift his index finger tip to when he began to turn it down (see below); maximal height, i.e. the maximal height the index finger tip reached during the whole gesture; mean amplitude of the hand oscillations (i.e. mean distance between the initial and final 3-D position of the index finger) during the CIAO and NO gestures and the meaningless gesture; maximal peak velocity, i.e. the maximal value of the peaks of the tangential velocity during the oscillations of the CIAO and NO gestures and the meaningless gesture; number of velocity peaks, i.e. the number of velocity peaks of the index finger tip during the oscillations of the CIAO and NO gestures and the meaningless gesture; S.D. of the number of velocity peaks (which gave a measure of how much the movement was stereotyped); and, finally, the beginning of the gesture (i.e. of the index finger tip movement) with respect to the end of stimulus presentation. Since the spatial error of the ELITE system is 0.4 mm, the index finger tip was considered to start and to stop in those frames in which the marker displacement was greater than 0.4 mm (Gentilucci, Toni, Chieffi, & Pavesi, 1994). The frame corresponding to movement beginning was selected after verifying that the displacement of the marker on the index finger tip increased along any Cartesian axis in each of the eight forthcoming frames. The frame corresponding to the end of movement was chosen after verifying that, after the peak velocity during the STOP gesture and the last oscillation peak velocity during the CIAO, NO, and meaningless gestures, the marker went down along the vertical axis in each of the four forthcoming frames.

2.1.5. Data analysis
Three series of analyses were carried out. The first series aimed to characterize the voice spectra of the words and the kinematics of the gestures when executed singly. Only data of the two baseline conditions, i.e. when either the word was singly pronounced or the gesture was singly executed, were included. In the ANOVAs the between-subjects factor was experiment (experiment 1 versus experiment 2) and the within-subjects factor was communication signal (i.e. either word or gesture, CIAO versus NO versus STOP). For the S.D. of the number of velocity peaks, the within-subjects factor was gesture (CIAO versus NO versus meaningless gesture). This analysis aimed to discover whether the meaningless gesture was less stereotyped than the NO and CIAO gestures.

The second series of analyses aimed to discover differences between the baseline condition and each of the other three conditions (i.e. word and gesture, word and meaningless gesture, pseudo-word and gesture). We performed the following comparisons:

(a) The first comparison was between the condition involving production of a simultaneous word and congruent gesture and the condition involving only the emission of the word or the meaningful gesture. In the ANOVAs the between-subjects factor was experiment (experiment 1 versus experiment 2) and the within-subjects factors were condition (either word or gesture versus word and meaningful gesture) and communication signal (CIAO versus NO versus STOP).

(b) The second comparison was between the condition involving only the pronunciation of a word and the one involving both a meaningless gesture and a word (experiment 1). In the ANOVAs the within-subjects factors were condition (no gesture versus meaningless gesture) and word (CIAO versus NO versus STOP) for voice parameters, whereas for meaningless gesture parameters the factor was condition (no word versus word).

(c) The third comparison was between the condition involving only gesture and the one involving the pronunciation of the pseudo-word LAO during the execution of meaningful gestures (experiment 2). In the ANOVAs the within-subjects factors were condition (no word versus pseudo-word) and gesture (CIAO versus NO versus STOP) for gesture parameters, whereas for the voice parameters of the pseudo-word LAO the factor was condition (no gesture versus meaningful gesture).

The third series of analyses concerned the temporal relationship between gesture and speech beginnings. When comparing words with congruent gestures, the between-subjects factor was experiment (experiment 1 versus experiment 2) and the within-subjects factors were effector (mouth versus hand) and communication signal (CIAO versus NO versus STOP). In both the comparison of meaningless gestures with words and that of meaningful gestures with pseudo-words, the within-subjects factors were effector (mouth versus hand) and communication signal (CIAO versus NO versus STOP).

In all analyses paired comparisons were performed using the Newman–Keuls procedure. The significance level was fixed at P < 0.05.

Fig. 2. Examples of spectrograms of words and pseudo-word pronounced without simultaneous gesture execution (experiment 2). F1: formant 1; F2: formant 2.
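To make the kinematic criteria of Section 2.1.4 concrete, the sketch below computes tangential velocity, applies a rendering of the 0.4 mm/forthcoming-frames onset rule, and counts velocity peaks for a toy 50 Hz marker trace; the synthetic trajectory and the peak-height threshold are assumptions, not the authors' implementation (which is described in Gentilucci et al., 1992, 1994).

```python
# Sketch of the kinematic measures for one trial: tangential velocity,
# movement onset by the 0.4 mm / forthcoming-frames rule, velocity peaks.
import numpy as np
from scipy.signal import find_peaks

FS = 50.0                       # ELITE sampling rate (Hz)
NOISE = 0.4                     # spatial error of the system (mm)

t = np.arange(0, 2, 1 / FS)     # 2 s toy trial
xyz = np.stack([25 * np.sin(2 * np.pi * 2 * t) * (t > 0.4),   # oscillation (mm)
                np.zeros_like(t),
                60 * np.clip(t - 0.4, 0, 1)], axis=1)         # arm lift (mm)

def tangential_velocity(xyz):
    """Frame-to-frame 3-D speed of the marker, in mm/s."""
    return np.linalg.norm(np.diff(xyz, axis=0), axis=1) * FS

def movement_onset(xyz, n_ahead=8):
    """First frame whose displacement exceeds the noise level and whose
    position keeps increasing along some axis over the next n_ahead frames
    (a rendering of the onset criterion described above)."""
    step = np.linalg.norm(np.diff(xyz, axis=0), axis=1)
    for i in range(len(step) - n_ahead):
        rising = (np.diff(xyz[i:i + n_ahead + 1], axis=0) > 0).all(axis=0)
        if step[i] > NOISE and rising.any():
            return i
    return None

vel = tangential_velocity(xyz)
peaks, _ = find_peaks(vel, height=0.1 * vel.max())   # threshold is a guess
print("onset frame:", movement_onset(xyz),
      "| n velocity peaks:", len(peaks),
      "| max peak velocity (mm/s):", round(vel[peaks].max()))
```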

2.2. Results

2.2.1. Emission of single communication signals (baseline condition)
We first analyzed the words pronounced without performing gestures and, conversely, the corresponding-in-meaning gestures executed without pronouncing words (first series of analyses).

Fig. 2 presents examples of spectrograms when pronouncing the words CIAO, NO, and STOP. For all three words the time course of F1 showed an increasing phase followed by a decreasing phase, whereas the time course of F2 was characterized only by a decreasing phase. However, both mean F1 and mean F2 varied across the three words. F1 of NO was significantly higher than F1 of CIAO, and F1 of CIAO was significantly higher than F1 of STOP (Fig. 3, Table 1). F2 gradually decreased moving from CIAO to NO and STOP (Fig. 3, Table 1). Moreover, intensity, pitch, and vowel duration differed among the three words (Fig. 3, Table 1).

Fig. 1 shows representative examples of the index finger path when participants executed the three gestures. Note that the amplitudes of the hand oscillations were larger for CIAO than for NO. The ANOVAs showed that the gesture times of CIAO and NO were, as expected, longer than that of STOP (Fig. 4, Table 1). The maximal height reached during the NO gesture was lower than that of CIAO and STOP (Fig. 4, Table 1). Finally, hand oscillation amplitude, maximal peak velocity, and number of velocity peaks of the NO gesture decreased compared with those of CIAO (Fig. 4, Table 1). The S.D. of the number of velocity peaks did not differ between meaningful and meaningless gestures (Table 1; meaningful gestures 1.10, meaningless gesture 1.31). In other words, the meaningless movement was as stereotyped as the meaningful gestures.

Table 1
Voice spectra parameters and kinematics parameters in experiments 1 and 2. Results of the statistical comparisons among the words and gestures in the baseline condition

Voice spectra parameters
  F1:              F(2,52) = 83.3, P < 0.00001
  F2:              F(2,52) = 22.8, P < 0.00001
  Intensity:       F(2,52) = 26.0, P < 0.00001
  Pitch:           F(2,52) = 6.2, P < 0.004
  Vowel duration:  F(2,52) = 614.0, P < 0.00001

Kinematics parameters
  Gesture time:                      F(2,52) = 94.8, P < 0.00001
  Maximal arm height:                F(2,52) = 4.2, P < 0.02
  Hand oscillation amplitude:        F(1,26) = 17.8, P < 0.0003
  Maximal peak velocity:             F(1,26) = 16.3, P < 0.0004
  Number of velocity peaks:          F(1,26) = 6.6, P < 0.02
  S.D. of number of velocity peaks:  F(2,24) = 0.80, P = 0.46, n.s.

S.D. of number of velocity peaks refers to comparisons among the gestures CIAO and NO, and the meaningless gesture.
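The first-series design behind the F values in Table 1 is a standard mixed ANOVA: experiment (1 vs. 2) between subjects and communication signal within subjects. A minimal sketch with the pingouin Python package is shown below; the data file and column names are hypothetical, and pingouin is a modern stand-in rather than the software the authors used.

```python
# Sketch of the first-series design: between-subjects factor "experiment"
# (1 vs. 2), within-subjects factor "word" (CIAO/NO/STOP), one dependent
# voice parameter per analysis. The CSV and its column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("baseline_voice.csv")   # long format: one row per subject x word
aov = pg.mixed_anova(data=df, dv="f2", within="word",
                     between="experiment", subject="subject")
print(aov.round(3))                      # F and p for word, experiment, interaction

# The paper's paired follow-ups used Newman-Keuls; pingouin does not
# implement it, so Bonferroni-corrected pairwise tests stand in here:
post = pg.pairwise_tests(data=df, dv="f2", within="word",
                         subject="subject", padjust="bonf")
print(post[["A", "B", "p-corr"]])
```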


Fig. 3. Mean values of voice parameters of words pronounced without simultaneous execution of meaningful or meaningless gesture in experiments 1 and 2. Bars are S.E.

2.2.2. Comparison between words pronounced without and with gestures

In the second series of analyses, we studied the effects of meaningful and meaningless gestures on the pronunciation of words and pseudo-words. Compared with the pronunciation of words alone, F2 increased when CIAO, NO, and STOP were pronounced while executing a gesture of the same meaning (Fig. 5, Table 2). In contrast, a meaningless gesture did not affect F2 of the three words (Table 2). Similarly, the meaningful gestures did not affect pronunciation of the meaningless word LAO (Table 2). The meaningful and meaningless gestures induced an increase in F1 of the word STOP (Fig. 5, Table 2). In contrast, F1 of the pseudo-word was not affected by the meaningful gestures (Table 2). Finally, the pitch of the three words increased when they were pronounced simultaneously with meaningful gestures, but not when pronounced simultaneously with meaningless gestures (Fig. 5, Table 2). The pitch of LAO was not affected by the gestures (Table 2). Neither the intensity nor the duration of the vowels was affected by the execution of meaningful and meaningless gestures.

Table 2
Statistical comparison between voice spectra parameters of words and pseudo-words pronounced without and with execution of gestures in experiments 1 and/or 2

Word, no gesture vs. meaningful gesture:
  F1:    F(2,52) = 6.8, P < 0.002
  F2:    F(1,26) = 18.5, P < 0.002
  Pitch: F(1,26) = 11.6, P < 0.02

Word, no gesture vs. meaningless gesture:
  F1:    F(2,26) = 3.4, P < 0.04
  F2:    F(1,13) = 1.9, P = 0.19, n.s.
  Pitch: F(1,13) = 3.15, P = 0.10, n.s.

Pseudo-word, no gesture vs. meaningful gesture:
  F1:    F(3,39) = 0.80, P = 0.50, n.s.
  F2:    F(3,39) = 0.04, P = 0.99, n.s.
  Pitch: F(3,39) = 1.5, P = 0.23, n.s.

The analyses on intensity and vowel duration are not reported because they were not significant in any of the three conditions.

2.2.3. Comparison between gestures executed with and without pronunciation of words

The second series of analyses also allowed us to study the effects of words and pseudo-words on the execution of meaningful and meaningless gestures. Compared with execution alone, the duration of meaningful gestures was shortened by the pronunciation of words of the same meaning (Fig. 6, Table 3), but not of the meaningless word (Table 3). The duration of the meaningless gesture was not significantly affected by the words (Table 3). The maximal height reached by the hand was lowered only by the pronunciation of the words (Fig. 6, Table 3), which, however, did not affect the maximal height of the meaningless gesture (Table 3). The same results were found for hand oscillation amplitude and maximal peak velocity during the hand oscillations of CIAO and NO (Fig. 6, Table 3), although the word effect on the latter parameter was more evident in experiment 1 than in experiment 2 (Fig. 6; interaction between experiment and condition, Table 3). The number of velocity peaks of both the meaningful (Fig. 6, Table 3) and the meaningless gestures (Table 3) decreased when the word was pronounced simultaneously with the gesture. In contrast, the kinematics of the meaningful gestures was not affected by the pseudo-word. Finally, the S.D. of the number of velocity peaks of meaningful and meaningless gestures was not affected by word and pseudo-word pronunciation.

Fig. 4. Mean values of arm kinematics parameters of gestures executed without simultaneous pronunciation of words or pseudo-word in experiments 1 and 2. Bars are S.E.

Table 3
Statistical comparison between kinematics parameters of meaningful and meaningless gestures executed without and with pronunciation of words and pseudo-words in experiments 1 and/or 2

Meaningful gesture, no word vs. word:
  Gesture time:               F(1,26) = 19.9, P < 0.001
  Maximal height:             F(1,26) = 4.9, P < 0.03
  Hand oscillation amplitude: F(1,26) = 23.7, P < 0.00001
  Maximal peak velocity:      F(1,26) = 9.9, P < 0.004
  Number of velocity peaks:   F(1,26) = 6.3, P < 0.02

Meaningful gesture, no word vs. pseudo-word:
  Gesture time:               F(1,13) = 0.02, P = 0.88, n.s.
  Maximal height:             F(1,13) = 1.5, P = 0.24, n.s.
  Hand oscillation amplitude: F(1,13) = 0.7, P = 0.42, n.s.
  Maximal peak velocity:      F(1,13) = 2.0, P = 0.18, n.s.
  Number of velocity peaks:   F(1,13) = 1.4, P = 0.26, n.s.

Meaningless gesture, no word vs. word:
  Gesture time:               F(3,39) = 0.4, P = 0.75, n.s.
  Maximal height:             F(3,39) = 1.2, P = 0.32, n.s.
  Hand oscillation amplitude: F(2,26) = 0.1, P = 0.91, n.s.
  Maximal peak velocity:      F(2,26) = 0.6, P = 0.54, n.s.
  Number of velocity peaks:   F(2,26) = 10.1, P < 0.001 (8.1 vs. 5.4)

The analyses on S.D. of number of velocity peaks are not reported because they were not significant in any of the three conditions.


Fig. 5. Mean values of voice parameters of words pronounced without and with simultaneous execution of meaningful and meaningless gesture in experiments 1 and 2. Bars are S.E.

2.2.4. Temporal relations between the beginning of speech and gesture

The third series of analyses showed that the pronunciation of words and pseudo-words always started after the beginning of the meaningful and meaningless gestures. In addition, whereas the beginning of speech did not vary across the three words, the NO gesture started in advance compared with the other two meaningful gestures (Fig. 7; interaction between effector and communication signal, F(2,52) = 4.2, P < 0.02; paired comparison P < 0.05). Compared with the interval between word and meaningful gesture, the interval between the beginning of word and meaningless gesture decreased (546.8 versus 375.2 ms, F(1,12) = 7.3, P < 0.03; paired comparison P < 0.05), whereas the corresponding interval between pseudo-word and meaningful gestures increased (517.8 versus 693.1 ms, F(1,12) = 13.2, P < 0.003).
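For clarity, the intervals reported here are simply per-trial differences between the two onset measures of Section 2.1.4, averaged within condition; a minimal worked example with made-up numbers:

```python
# Worked example of the interval measure in this section: per-trial
# difference between voice onset and gesture onset, both measured from
# the end of stimulus presentation. The numbers below are invented.
import numpy as np

gesture_onset_ms = np.array([310.0, 355.0, 330.0, 362.0, 343.0])
word_onset_ms = np.array([850.0, 905.0, 880.0, 910.0, 890.0])

intervals = word_onset_ms - gesture_onset_ms   # positive: voice starts later
print(f"mean word-gesture interval: {intervals.mean():.1f} ms")
```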

3. Experiments 3 and 4

We presented participants with the same communication signals (words and/or gestures) emitted in experiments 1 and 2 and studied the effect of the observation of these stimuli on the pronunciation of the corresponding words. We expected the same effects on the voice spectra as those found for the emission of the communication signals.

3.1. Methods

3.1.1. Participants
Twenty male adults (age 21–24 years) were classified as right-handed according to the Edinburgh Inventory (Oldfield, 1971) and divided into two groups. The first group of 12 individuals and the second group of 8 individuals were assigned to experiments 3 and 4, respectively. All participants were naïve to the purpose of the study.

3.1.2. Apparatus and stimuli
Participants sat on a chair in front of a table. The stimuli (words, or symbolic gestures, or both) were the three communication signals CIAO (/ˈtʃao/), NO (/nɔ/), and STOP (/stɔp/). The modality of stimulus presentation of the communication signals varied in each of four successive blocks of trials. The first modality (1) was the word printed on a PC display. Modality 2 was a short video-clip showing the half body of an actress (face: 3 × 4.5 degrees of visual angle) pronouncing the words; in modality 3 she executed the symbolic gesture, and in modality 4 she simultaneously pronounced the word and executed the gesture. In experiment 4 the three communication signals were presented in three successive blocks of trials according to the following three modalities of stimulus presentation: a video-clip showing the actress (modality 1) pronouncing only the word, or (modality 2) simultaneously executing the gesture and pronouncing the word with the voice emitted spontaneously, or (modality 3) doing so with the voice recorded when pronouncing only the word. In this last condition we synchronized the actress's voice with her labial movements using the FINAL CUT PRO software (Apple, California, USA). The last modality of stimulus presentation was necessary because in experiment 3 the actress's voice parameters of words pronounced simultaneously with gesture execution differed from those when only the word was pronounced (see below). Consequently, possible changes in the participant's voice could be due to imitation of the actress's voice. The printed words and the video-clips were presented on the same monitor with the same settings as in experiments 1 and 2. The distance of the monitor from the participant was 57 cm. Each trial started with a BEEP followed by a black screen (300 ms), after which the stimulus was presented. The stimulus duration was 3000 ms.

Fig. 6. Mean values of arm kinematics parameters of meaningful gestures executed without and with simultaneous pronunciation of words in experiments 1 and 2. Bars are S.E.

Fig. 7. Mean values of the beginning of word pronunciation and of meaningful gesture execution with respect to the end of stimulus presentation in experiments 1 and 2. Bars are S.E.

3.1.3. Procedure
The participants were required to pronounce the word corresponding to the communication signal presented with different modalities (i.e. printed word, spoken word, meaningful gesture, and spoken word and meaningful gesture) at the end of stimulus presentation. Consequently, experiments 3 and 4 consisted of four and three experimental conditions, respectively, in which the same response was required to the stimuli presented, respectively, in four and three successive blocks of trials. The blocks of trials were quasi-counterbalanced across participants. Each stimulus was quasi-randomly presented five times. Consequently, each block of experiments 3 and 4 consisted of 15 trials. Participants were left free to use as much time as needed to complete each trial.

Table 4
Voice spectra parameters of the words pronounced by the actress presented in experiments 3 and 4

                          Pitch (Hz)  Intensity (dB)  F1 (Hz)  F2 (Hz)
CIAO  Word                   210.5        61.7         845.7   1392.7
      Word and gesture       219.1        59.2         901.1   1422.9
NO    Word                   204.3        62.9         964.9   1388.8
      Word and gesture       200.3        60.6         967.4   1387.1
STOP  Word                   240.8        60.5         749.0   1385.3
      Word and gesture       224.8        59.9         809.6   1422.5

3.1.4. Data recording
The participants wore the same microphone, and the data were acquired and analyzed using the same technical equipment as in experiments 1 and 2. The time courses of formants (F) 1 and 2, pitch, intensity, and duration of vowels were analyzed. We also measured the actress's voice parameters; they are presented in Table 4.

3.1.5. Data analysis
In experiments 3 and 4, the voice parameters were analyzed by ANOVAs in which the within-subjects factors were as follows: condition (experiment 3: printed word versus spoken word versus gesture versus gesture and spoken word; experiment 4: spoken word versus gesture and original spoken word versus gesture and non-original spoken word) and communication signal (CIAO versus NO versus STOP). In all analyses paired comparisons were performed using the Newman–Keuls procedure. The significance level was fixed at P < 0.05.

Since more parameters were statistically analysed in experiments 1 and 2 than in experiments 3 and 4, more statistical power was required. This explains the greater sample of participants in experiments 1 and 2.

3.2. Results

3.2.1. Experiment 3
Fig. 8 shows mean F2 of the word vowels recorded in the four conditions. In the ANOVA, the factor condition was significant (F(3,21) = 9.0, P < 0.0005). F2 significantly increased in the spoken word and in the gesture conditions compared with the printed word condition. Further, it significantly increased in the condition of simultaneous spoken word and gesture in comparison with all the other conditions. No other voice parameter reached significance.

3.2.2. Experiment 4
All the participants except one reported that they were unaware that the voice differed between the two conditions of bimodal stimulus presentation. Fig. 8 shows that F2 of the word significantly increased in the two conditions of bimodal presentation as compared with the condition of unimodal presentation (F(2,22) = 4.7, P < 0.05). No other significance was found in the ANOVAs.

Fig. 8. Mean values of formant 2 (F2) of the words pronounced in response to the stimuli presented in experiments 3 and 4. The stimuli (communication signals) were presented with the following modalities: printed word; spoken word*; gesture; spoken word and gesture; spoken word* (with the actress's voice of the spoken word* condition) and gesture. Bars are S.E.

4. Discussion

4.1. General characteristics of word voice spectra and gesture kinematics

Some parameters of the vowel spectra and arm kinematics of the three words and gestures analyzed in the present study may be related to aspects of the meaning of the communication signals. Both F1 and F2 of the word CIAO were higher than those of the word STOP. In other words, the vowels of the word CIAO were open and anterior as compared to the vowel of the word STOP. F2 of the word NO was intermediate between the other two words. Although the sample of words tested in the present study is small, we observed that the type of interaction coded by the meaning of the words could be related to the mouth configuration during vowel pronunciation. When pronouncing the word CIAO, which involves closer interactions, the anterior internal mouth is opened and the tongue is displaced forward (Leoni & Maturi, 2002). The reverse, i.e. anterior internal mouth closure and tongue retraction, occurs when pronouncing the word STOP, which codes avoidance.


This is in accordance with the observation that in non-humans both mouth aperture and tongue protrusion are typical of approaching relationships. For example, lip-smacking and protruding lips precede grooming actions among monkeys (Van Hooff, 1962, 1967). The old theory by Paget (1930), called "schematopoeia", proposed a parallel between sound and meaning in a variety of languages. According to Paget, vowels are open in words coding something that is large, whereas vowels are closed in words coding something that is small. It is possible that the type of interaction coded by the word also influences the degree of aperture and anteriority of the vowel, i.e. an open and anterior vowel in words favoring interaction, and a closed and posterior vowel in words coding avoidance. In addition to the formants, intensity and pitch also differed according to word meaning. The two parameters increased when pronouncing STOP. Since this word expresses a command, it obviously requires higher intensity and pitch. The participants did this automatically, although they were not required to emphasize the voice tone according to word meaning during pronunciation.

The three gestures also differed in their arm kinematics. The maximal arm height of the NO gesture was lower than those of the CIAO and STOP gestures. A possible explanation of this result is that the fulcrum of the CIAO and STOP gestures is usually the hand palm, whereas that of the NO gesture is the index finger tip. The fulcrum of the three gestures was probably placed at the same height (Fig. 5), to send communication signals from the same spatial location in order to facilitate receipt of the signal. The different amplitude, velocity, and number of hand oscillations during the CIAO and NO executions may be explained by both the type of interaction coded by, and the emotional state related to, the hand gestures. CIAO, which favors interaction (and, consequently, verbal communication) and is frequently related to happiness, induces increasing oscillations and velocity of the gesturing hand in order to attract the observer. NO, which is more related to avoidance and to negative emotions, has the opposite effect. Summing up, some features of the voice spectra and hand kinematics may reflect the social intention coded by the two communication signals.

4.2. Mutual interactions between word and gesture emission

Words and the corresponding-in-meaning symbolic gestures influenced each other when they were emitted simultaneously. F2 in the voice spectra was higher when the word was pronounced together with the gesture. This effect might be due to vocalization simultaneous with arm movement. Indeed, it is well known that any movement of a body effector can modify voice emission, due to proprioceptive stimulation. However, no modification in F2 was observed when executing a meaningless arm movement, which involved the same joints as the three meaningful gestures. Conversely, F2 of a pseudo-word was not affected by the meaningful gestures. Thus, it is more plausible that the meaning of the gesture influenced F2 of the words. Higher F2 is related to forward displacement of the tongue (Ferrero, Genre, Boe, & Contini, 1979; Leoni & Maturi, 2002). Consequently, the command to execute the gesture was probably sent also to the tongue. The finding that tongue protrusion instead of retraction was observed in response to the double command to mouth and arm may be explained as a non-specific effect, i.e. a command to move both arm and tongue forward. However, another explanation, congruent with the fact that the effect on voice was present only for meaningful gestures, may be that, independently of the word meaning, mouth articulation was configured (i.e. the tongue was protruded) by the speaker in expectation of a response. In other words, this effect expresses the intention to interact closely with other individuals (see above). This explanation is supported by the results of experiments 3 and 4, which showed that the verbal response to a visible individual pronouncing the word and simultaneously performing the gesture exhibited the same increase in F2 as during simultaneous execution of the two communication signals.

F1 of the word STOP increased when pronounced simultaneously with both the meaningful and the meaningless gestures. The effect observed only on the word STOP can be explained by the initial word consonants, which require quicker release of mouth occlusion. This was enhanced by the arm lifting, producing greater opening of the internal mouth. Indeed, F1 mainly depends on anterior mouth opening. The effect on F1 was also found when the meaningless gesture was executed. Both the hand configuration and the movement of the meaningless gesture were similar to those of the STOP gesture. This may explain the effect of the meaningless gesture on the word STOP. F1 is mainly related to anterior mouth aperture, whereas F2 is mainly related to tongue protrusion (Ferrero et al., 1979). Gesture execution enhanced both of these mouth movements, and consequently the voice parameters. In this regard, gesture reinforced word.

The kinematics of the gestures was influenced by the simultaneous pronunciation of words of the same meaning. The general effect was a shortening of the various movement phases. This result can be due to overlap between the cerebral mechanisms associated with speech and manual activity. Indeed, Kinsbourne and Cook (1971) reported that concurrent speech and manual performance disrupted the latter. Another possibility is a tendency to execute the gesture within the same time as the duration of word pronunciation. However, these interpretations do not explain the lack of effect of the pseudo-word on gestures. Another possibility is that the amount of the gesture information (and, consequently, the duration of the various gesture phases) was reduced because it was partially supplied by the word information of briefer duration. However, the word effect was not as highly specific to meaningful gestures as the effect of meaningful gestures on speech was. Indeed, the number of velocity peaks of the meaningless gesture was also reduced by pronunciation of words. Words such as CIAO and NO may have an effect even on oscillatory movements resembling the corresponding meaningful gestures. In other words, speakers can readily associate the meaning of words with simultaneously executed arm movements (Gentilucci, Benuzzi, Bertolani, Daprati, & Gangitano, 2000; Gentilucci & Gangitano, 1998). Summing up, the fact that the word information of briefer duration reduced the communication time of the gesture information may be interpreted as an inhibition effect.

The comparison of the reciprocal effects between word pronunciation and gesture execution showed that gesture reinforced speech, whereas word inhibited gesture. Gestures are usually performed in the presence of an interlocutor, whereas words can be used to communicate also with invisible or distant interlocutors. Consequently, gesture reinforces speech in order to attract the attention of a hypothetical interlocutor and to engage an interaction with him immediately. On the other hand, speech shortens the execution of the corresponding gesture, probably acting by negative feedback once the word of shorter duration has been emitted.

Although the beginning of voice emission always followed the beginning of gesture, a different temporal relationship was observed for words emitted with meaningful gestures in comparison with both pseudo-words emitted with meaningful gestures and words emitted with meaningless gestures. A specific temporal coupling probably exists between a meaningful gesture and the corresponding word, which is lost when the gesture or the string of phonemes is meaningless. This further supports the idea of a single system, which governs the temporal aspects of arm and mouth kinematics (Levelt et al., 1985).

4.3. Effects of observing communication signals on verbal response

Speech was similarly affected by observing or executing communication signals. Firstly, observation of an actress pronouncing the word or performing the gesture induced an increase in F2 in comparison with the sole reading of the printed word. Since the word was presented by a female actor, whereas the participants were males, it is possible that the participants imitated the voice of the actor. It is well known that formants are higher in females than in males (see also the results of the present study). Another possibility is that the presentation of both visual and acoustical stimuli (spoken words) reinforced F2 of the verbal response with respect to presentation of the sole visual stimuli (printed words). However, neither possibility explains why the effect was found also when the participants observed the silent actress performing the gestures (a visual stimulus). Thus, a more plausible explanation is that the presence of a visible interlocutor signalled the intention of a closer interaction, producing the same effects on voice as those we found on the execution of communication signals (see above). This is confirmed by the finding that a greater effect on voice was found when the participants observed the actress performing the two communication signals together. The results of experiment 4 showed that this effect was not due to imitation of the actress's voice. Thus, the two communication signals simultaneously presented may have been interpreted as suggesting a closer interaction with the interlocutor. The effects of their elaboration on voice were of the same kind as when they were emitted. In other words, the participants, while pronouncing the word, covertly executed (i.e. motorically imagined) the corresponding gesture as well. They repeated what was perceived as a single signal constituted by both word and gesture. It seems that the two individuals (the actress and the male observer) went into resonance.

4.4. Conclusion

The data of the present study suggest that two types of communication signal, word and gesture, are related at the levels of execution and processing. We propose that they are controlled by a single system of communication. Specifically, words spoken while executing a gesture may become a single unit with that gesture. Thus, from a social point of view, they take on a different meaning from when coded in isolation. We have no experimental data to show which cortical region is involved in joining gestures to words. However, neuroimaging data suggest that Broca's area may be involved in this function. Indeed, this area seems to be involved in encoding phonological representations in terms of mouth articulation gestures (Paulesu, Frith, & Frackowiak, 1993; Zatorre, Evans, Meyer, & Gjedde, 1992) and in imitation of arm gestures (Tanaka & Inui, 2002). The suggestion that word and gesture participate in a unique communication system also has implications for explaining language development in terms of evolution. Indeed, various authors (Armstrong et al., 1995; Corballis, 2002; Hewes, 1973) have proposed that speech evolved from a primitive communication system based on gestures. Although the gestures analyzed in the present study were symbolic ones (i.e. apparently not iconic), their effects on speech were similar to those we found when syllables were pronounced simultaneously with observation/execution of actions directed to an object (Gentilucci, 2003a; Gentilucci et al., 2001; Gentilucci, Santunione, et al., 2004; Gentilucci, Stefanini, et al., 2004). Consequently, the system involved in speech production is related to the production of concrete actions aimed at interacting with objects, as well as of abstract gestures acquiring the meaning of words. A system relating actions to syllables might have evolved into a more sophisticated system relating symbolic gestures to words.

Acknowledgments

We thank Claudio Secchi for help in carrying out experiments 1 and 2, and Nicola Bruno for comments on the manuscript. The study was supported by grants from MIUR (Ministero dell'Istruzione, dell'Università e della Ricerca) to M.G.

References

Armstrong, D., Stokoe, W., & Wilcox, S. (1995). Gesture and the nature of language. Cambridge: Cambridge University Press.


Bates, E., & Dick, F. (2002). Language, gesture, and the developing brain. Developmental Psychology, 40(3), 293–310.

Bates, E., & Snyder, L. S. (1987). The cognitive hypothesis in language development. In I. C. Uzgiris & J. McVicker Hunt (Eds.), Infant performance and experience: New findings with the ordinal scales (pp. 168–204). Urbana, IL: University of Illinois Press.

Buccino, G., Lui, F., Canessa, N., Patteri, I., Lagravinese, G., Benuzzi, F., et al. (2004). Neural circuits involved in the recognition of actions performed by nonconspecifics: An fMRI study. Journal of Cognitive Neuroscience, 16(1), 114–126.

Condillac, E. B. (1746). Essai sur l'origine des connaissances humaines. Paris.

Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press.

De Renzi, E. (1990). Apraxia. In F. Boller & J. Grafman (Eds.), Handbook of clinical neuropsychology: Vol. 2 (pp. 245–263). Amsterdam: Elsevier.

Decety, J., Grezes, J., Costes, N., Perani, D., Jeannerod, M., Procyk, E., et al. (1997). Brain activity during observation of actions. Influence of action content and subject's strategy. Brain, 120(Pt. 10), 1763–1777.

Demonet, J. F., Chollet, F., Ramsay, S., Cardebat, D., Nespoulous, J. L., Wise, R., et al. (1992). The anatomy of phonological and semantic processing in normal subjects. Brain, 115(Pt. 6), 1753–1768.

Ferrero, F., Genre, A., Boe, L. J., & Contini, M. (1979). Nozioni di fonetica acustica. Torino, Italy: Edizioni Omega.

Ferrero, F., Magno Caldognetto, E., & Cosi, P. (1996). Sui piani formantici acustici ed uditivi delle vocali di uomo, donna e bambino. Paper presented at the Associazione Italiana di Acustica.

Gentilucci, M. (2003a). Grasp observation influences speech production. European Journal of Neuroscience, 17(1), 179–184.

Gentilucci, M. (2003b). Object motor representation and language. Experimental Brain Research, 153(2), 260–265.

Gentilucci, M., Benuzzi, F., Bertolani, L., Daprati, E., & Gangitano, M. (2000). Language and motor control. Experimental Brain Research, 133(4), 468–490.

Gentilucci, M., Benuzzi, F., Gangitano, M., & Grimaldi, S. (2001). Grasp with hand and mouth: A kinematic study on healthy subjects. Journal of Neurophysiology, 86(4), 1685–1699.

Gentilucci, M., Chieffi, S., Scarpa, M., & Castiello, U. (1992). Temporal coupling between transport and grasp components during prehension movements: Effects of visual perturbation. Behavioural Brain Research, 47(1), 71–82.

Gentilucci, M., & Gangitano, M. (1998). Influence of automatic word reading on motor control. European Journal of Neuroscience, 10(2), 752–756.

Gentilucci, M., Santunione, P., Roy, A. C., & Stefanini, S. (2004). Execution and observation of bringing a fruit to the mouth affect syllable pronunciation. European Journal of Neuroscience, 19(1), 190–202.

Gentilucci, M., Stefanini, S., Roy, A. C., & Santunione, P. (2004). Action observation and speech production: Study on children and adults. Neuropsychologia, 42(11), 1554–1567.

Gentilucci, M., Toni, I., Chieffi, S., & Pavesi, G. (1994). The role of proprioception in the control of prehension movements: A kinematic study in a peripherally deafferented patient and in normal subjects. Experimental Brain Research, 99(3), 483–500.

Goldin-Meadow, S. (1999). The role of gesture in communication and thinking. Trends in Cognitive Sciences, 3(11), 419–429.

Grezes, J., Armony, J. L., Rowe, J., & Passingham, R. E. (2003). Activations related to "mirror" and "canonical" neurones in the human brain: An fMRI study. NeuroImage, 18(4), 928–937.

Hadar, U., Wenkert-Olenik, D., Krauss, R., & Soroker, N. (1998). Gesture and the processing of speech: Neuropsychological evidence. Brain and Language, 62(1), 107–126.

Heath, M., Roy, E. A., Black, S. E., & Westwood, D. A. (2001). Intransitive limb gestures and apraxia following unilateral stroke. Journal of Clinical and Experimental Neuropsychology, 23(5), 628–642.

Hewes, G. W. (1973). Primate communication and the gestural origins of language. Current Anthropology, 14, 5–24.

Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., & Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science, 286(5449), 2526–2528.

Kimura, D. (1973). Manual activity during speaking. II. Left-handers. Neuropsychologia, 11(1), 51–55.

Kinsbourne, M., & Cook, J. (1971). Generalized and lateralized effects of concurrent verbalization on a unimanual skill. Quarterly Journal of Experimental Psychology, 23, 341–345.

Krauss, R., & Hadar, U. (1999). The role of speech-related arm/hand gesture in word retrieval. In L. S. Messing & R. Campbell (Eds.), Gesture, speech, and sign. Oxford: Oxford University Press.

Leoni, F. A., & Maturi, P. (2002). Manuale di fonetica. Roma, Italy: Carocci.

Levelt, W. J. M., Richardson, G., & La Heij, W. (1985). Pointing and voicing in deictic expressions. Journal of Memory and Language, 24, 133–164.

MacSweeney, M., Campbell, R., Woll, B., Giampietro, V., David, A. S., McGuire, P. K., et al. (2004). Dissociating linguistic and nonlinguistic gestural communication in the brain. NeuroImage, 22(4), 1605–1618.

MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C., et al. (2002). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125(Pt. 7), 1583–1593.

Masataka, N. (2001). Why early linguistic milestones are delayed in children with Williams syndrome: Late onset of hand banging as a possible rate-limiting constraint on the emergence of canonical babbling. Developmental Science, 4(2), 158–164.

McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh Inventory. Neuropsychologia, 9(1), 97–113.

Paget, R. (1930). Human speech. London: Kegan Paul, Trench and Co.

Paulesu, E., Frith, C. D., & Frackowiak, R. S. (1993). The neural correlates of the verbal component of working memory. Nature, 362(6418), 342–345.

Pickett, J. M. (1999). The acoustics of speech communication. Boston: Allyn and Bacon.

Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21(5), 188–194.

Tanaka, S., & Inui, T. (2002). Cortical involvement for action imitation of hand/arm postures versus finger configurations: An fMRI study. Neuroreport, 13(13), 1599–1602.

Van Hooff, J. A. R. A. M. (1962). Facial expressions in higher primates. Symposium of the Zoological Society of London, 8, 97–125.

Van Hooff, J. A. R. A. M. (1967). The facial displays of the catarrhine monkeys and apes. In D. Morris (Ed.), Primate ethology (pp. 7–68). London: Weidenfield & Nicolson.

Vico, G. B. (1744/1953). La scienza nuova. Bari: Laterza.

Volterra, V., Bates, E., Benigni, L., Bretherton, I., & Camaioni, L. (1979). First words in language and action: A qualitative look. In E. Bates, L. Benigni, I. Bretherton, L. Camaioni, & V. Volterra (Eds.), The emergence of symbols: Cognition and communication in infancy (pp. 141–222). New York: Academic Press.

Volterra, V., Caselli, M. C., Capirci, O., & Pizzuto, E. (2005). Gesture and the emergence and development of language. In M. Tomasello & D. I. Slobin (Eds.), Beyond nature-nurture: Essays in honor of Elizabeth Bates (p. 392). Mahwah, NJ: Lawrence Erlbaum Associates.

Zatorre, R. J., Evans, A. C., Meyer, E., & Gjedde, A. (1992). Lateralization of phonetic and pitch discrimination in speech processing. Science, 256(5058), 846–849.