For Peer Review
Electrophysiological Signatures of English Onomatopoeia
Journal: Language and Cognition
Manuscript ID LCO-2018-0034.R2
Manuscript Type: Article
Date Submitted by the Author: n/a
Complete List of Authors: Vigliocco, Gabriella (UCL, Psychology); Zhang, Ye (University College London, Experimental Psychology); Del Maschio, Nicola (University Vita-Salute San Raffaele, Centre for Neurolinguistics and Psycholinguistics (CNPL)); Todd, Rosanna (University College London, Psychology and Language Sciences); Tuomainen, Jyrki (University College London, Psychology and Language Sciences)
Keywords: Iconicity, Onomatopoeia, ERP, N400, Word processing
Cambridge University Press
Language and Cognition
Electrophysiological Signatures of English Onomatopoeia
Gabriella Vigliocco1*, Ye Zhang1, Nicola Del Maschio2, Rosie Todd1 & Jyrki Tuomainen1
1. Division of Psychology and Language Sciences, University College London
2. Centre for Neurolinguistics and Psycholinguistics (CNPL), University Vita-Salute San Raffaele, Milano, Italy, 20132
Corresponding author: [email protected]
Abstract
Onomatopoeia are widespread across the world’s languages. They represent a relatively
simple iconic mapping: the phonological/phonetic properties of the word evoke acoustic-related features of referents. Here, we explore the EEG correlates of processing
onomatopoeia in English. Participants were presented with a written cue-word (e.g., leash)
and then with a spoken target-word. The target word was either an onomatopoeia (e.g., bark),
a sound-related but arbitrary word (e.g., melody) or another arbitrary word (e.g., bike).
Participants judged whether the cue and the target word were similar in meaning. We
analysed Event-Related Potentials (ERPs) in different time-windows: (i) early (100-200 and
200-250ms) to assess differences in processing at the form-level; (ii) the N400 time window
(300-500ms) in order to establish if there are differences in semantic processing across our
word-types; (iii) late (600-900ms) to assess post-lexical effects. We found that onomatopoeia
differed from the other words in the N400 time-window: when cue and target were unrelated,
onomatopoeic words led to greater negativity, which can be accounted for in terms of
enhanced semantic activation of onomatopoeia leading to greater salience of the
mismatch. We discuss results in the context of a growing body of literature investigating
iconicity in language processing and development.
Keywords: Iconicity, Onomatopoeia, EEG, ERP, N400, Word processing
Introduction
In a paper entitled ‘The Origin of Speech’ (1960), Charles Hockett claimed that “in a
semantic communicative system, the ties between meaningful message-elements and their
meanings can be arbitrary or nonarbitrary. In language the ties are arbitrary. The word ‘salt’
is not salty nor granular; dog is not ‘canine’; ‘whale’ is a small word for a large object;
‘microorganism’ is the reverse” (Hockett 1960: 6). However, as acknowledged by Hockett
himself, if arbitrariness provides limitless possibilities to linguistic communication in terms
of what can be communicated about, it also “has the disadvantage of being arbitrary”
(Hockett 1960: 6). The lack of an intrinsic connection between the phonological level and
the semantic or conceptual level poses impediments especially to lexical acquisition and
processing: how can we effortlessly learn and use words when no hint to what they stand for
is provided by their shape? And, why isn’t iconicity more widespread across languages?
These and related questions are presently at the core of a growing number of studies that
investigate iconicity and arbitrariness in language development and language processing
(e.g., Lockwood and Dingemanse, 2015; Lupyan and Winter, 2018; Perniss et al., 2010).
Non-arbitrary mappings are generally defined as ‘iconic’-- an ‘icon’ being “a sign which
stands for something merely because it resembles it” (Peirce, 1931-36: §3.362) -- linguistic
‘iconicity’ being a sensory-perceived resemblance between properties of linguistic form and
properties of meaning (Taub, 2001). Although traditionally dismissed as a marginal
phenomenon in structural accounts of language (Chomsky, 1988; De Saussure, 1916;
Hockett, 1960), iconicity plays a central role in recently-developed research programs
grounded on embodied theories of human cognition (Meteyard et al., 2012; Vigliocco et al.,
2014). Such programs adopt a functional view of communication where arbitrariness and
iconicity co-exist to facilitate learning and processing. Arbitrariness is assumed to allow for
the efficient transmission of an unbounded array of concepts; iconicity is deemed
instrumental in mapping language onto human experience as the content of linguistic
communication. In particular, iconic mechanisms have been proposed to link lexical items to
the cognitive representation/re-enactment of the sensorimotor experience associated with
real-world referents, thus facilitating the acquisition and processing of lexical meaning
(Perniss and Vigliocco, 2014).
Iconicity and Onomatopoeia in Languages
Iconicity is more than a marginal exception in linguistic systems (Dautriche et al., 2017;
Dingemanse, 2012; Voeltz and Kilian-Hatz, 2001), although the degree of iconicity varies
across modalities (spoken and signed) and languages. Spoken languages show limited
iconicity, possibly due to the reduced amount of experience that can be iconically represented
in oral-aural modality. However, this is not to say that spoken languages (especially Indo-
European languages) are iconically challenged, as iconicity is readily visible in the gestures
speakers produce in interactional contexts and in their use of prosody (see Vigliocco et al.,
2014 for a discussion).
In speech, a prominent form of iconicity is represented by onomatopoeia, i.e., lexical
representations of acoustic events (e.g. ‘bubble’, ‘crash’, ‘tweet’ in English). Onomatopoeic
words preserve in their form acoustic properties of the environmental sounds they copy.
These iconic words tend to follow the phonotactic/phonological conventions of the language
they belong to, hence, while a rooster in English-speaking countries would go ‘cock-a-
doodle-doo’, a rooster in Italy would say ‘chicchirichì’. In addition to onomatopoeia, the
amount of iconic material in lexica varies greatly (Berlin and O’Neill, 1981; Dingemanse,
2012; Jakobson and Waugh, 2002), with a number of languages outside the Indo-European
family presenting recognizable sets of ‘ideophone’ or ‘mimetic’ words (see Perniss et al.,
2010 for a description). Indo-European languages like English do not have such a
recognizable set of iconic words besides onomatopoeia; they do, however, have other forms of
iconicity embedded in the lexicon, such as indirect iconicity (Sidhu & Pexman, 2017): e.g.,
the link between specific phonemes (e.g., /i/) and semantic features (e.g., small). These types
of indirect iconicity can be near universal: Blasi et al. (2016) compared the forms of 100 basic
terms across 4,298 languages and found, in addition to other patterns, that words for the
concept small tended to include the high-front vowel /i/.
There is evidence that onomatopoeia and more generally iconic words may be easier to learn
(e.g. Kantartzis et al., 2011; Laing, 2014, 2019; Lockwood et al., 2016; Nygaard et al., 2009).
Evidence further suggests that onomatopoeia are among the first words being produced by
babies. Laing (2014) showed that onomatopoeia made up the majority of early vocabulary for
a German baby (< 12mo). Perry et al. (2018) showed that iconic words are used more often by
10- to 26-month-old infants, after which their production decreases. Onomatopoeia are also
common in caregivers’ language early on and then decrease (Vigliocco et al., 2019). In sign
language (British Sign Language, BSL) it has been found that more iconic signs are learnt
before less iconic ones (Thompson et al., 2012).
With regard to word processing, there is far less behavioural evidence supporting any
advantage for onomatopoeic and generally iconic words. Meteyard and colleagues (2015)
showed that onomatopoeia were better recognized as words by aphasic patients than tightly
matched arbitrary words. While Peeters (2016) failed to observe any difference in lexical
decisions for onomatopoeic and arbitrary words in Dutch, Sidhu et al. (in press) found
significant advantage for onomatopoeia in a visual lexical decision task as well as in a task
that explicitly engaged phonology. In sign languages, there are now studies spanning four
different languages (BSL, ASL, NGT and DGL) showing effects of iconicity on sign
recognition using different paradigms (e.g. Grote and Linz, 2003; Ormel et al., 2009;
Thompson et al., 2009, 2012; Vinson et al., 2015). Thus, we have limited results for
onomatopoeia in different populations, tasks and languages suggesting that any of these
factors may affect whether facilitation is present.
Electrophysiological Evidence for a Different Status of Onomatopoeia
Less than a handful of studies have considered the Event-Related Potentials (ERPs) elicited
by iconic words (including onomatopoeia) so far, providing mixed results. Two studies
reported differences between iconic and non-iconic words in early time windows (Lockwood
and Tuomainen, 2015; Peeters, 2016). In Lockwood and Tuomainen (2015) Japanese
speakers were asked to read sentences in which a plausible or implausible target word was
either iconic (a mimetic/ideophone) or an arbitrary word. They found that the P2 component
(252-256ms) was more positive for the iconic words. As P2 is normally associated with
phonological processing and multisensory integration, they inferred that the larger P2 was
triggered by the combination of 1) phonological processing of salient features of Japanese
iconic words (e.g. reduplication), and 2) multisensory integration between word form and
sensory information associated with the mimetic words. In an auditory lexical decision task
with Dutch speakers, Peeters (2016) found that an early component (N2, 150-200ms) showed
decreased negativity for onomatopoeia vs. arbitrary words, suggesting early processing
differences between onomatopoeia and other words. A final study by Egashira et al. (2015),
where iconic words were auditorily presented to Japanese speakers, did not find any
difference for early components.
Results are equally mixed when considering the N400 component. Peeters (2016) found that
onomatopoeic words elicited a less negative N400 (350-550ms) compared with arbitrary
words, suggesting that iconicity may facilitate access to semantic information. A less negative
ERP around 400ms for iconic words was also found by Lockwood and Tuomainen (2015)
and Egashira et al. (2015). However, because of differences in temporal and spatial distribution,
the authors did not consider this component to be an N400. Finally, all three experiments reported a
difference between onomatopoeic and arbitrary words for the Late Positive Complex (LPC), a
component normally associated with post-lexical processing and meta-linguistic decisions.
Nevertheless, while Lockwood and Tuomainen (2015) observed more positive LPC for
iconic words between 400-800ms and Peeters (2016) between 600-800ms, Egashira et al.
(2015) reported more negative early LPC (200-500ms) and middle LPC (500-900ms) for
onomatopoeia vs. arbitrary words.
To sum up, the few available studies that have assessed neurophysiological differences
between processing iconic and less iconic words have provided mixed results, which may
have multiple causes. First, conflicting patterns may come about as a consequence of cross-
linguistic differences, as for example, Japanese mimetic words tend to have repeated moras,
whereas this is not necessarily the case for Dutch onomatopoeia. Second, the results may be
due to semantic differences between onomatopoeia and other iconic words used.
Onomatopoeia refer to sound, however, iconicity is not limited to the acoustic dimension. We
do not know whether this might matter. Finally, different studies used different tasks that
engage semantic processing to different extents.
The Present Study
Here, we focus on onomatopoeia in English. We use a clearly semantic task and we provide a
comparison between onomatopoeia and other, more arbitrary, words also related to sound.
We employed a semantic association task where we asked participants to judge the
relatedness between visually presented words (cues) and spoken target words (onomatopoeia
and arbitrary). The semantic task was selected because we wanted to ensure that both
phonological features and semantic information were processed, so that we could observe the
full electrophysiological signature of onomatopoeia. In contrast to previous experiments
where lexical and sub-lexical differences between onomatopoeia and control words were not
carefully controlled, we compared onomatopoeia with two groups of arbitrary words:
arbitrary words from different semantic classes – matched on a large number of
psycholinguistic dimensions, but crucially also a group of arbitrary words that, just like
onomatopoeia, are rich in sensory features and that refer to sound (e.g., words like ‘music’).
We included this third group to ensure that any difference in electrophysiological responses
to the onomatopoeia cannot be accounted for in terms of semantic differences between these
words and control words, given that in embodied accounts all words referring to sound
should activate sensory features relating to sound (see e.g., Kiefer et al., 2008). Moreover, in
contrast to previous studies, we carefully matched onomatopoeic and arbitrary words for
valence, as studies have found iconic words (especially mimetics in Japanese) to have more
emotional associations than arbitrary words (Iwasaki, Vinson & Vigliocco, 2007) and
emotional valence has been shown to affect N2 and N400 amplitude (e.g. Kanske and Kotz,
2007).
If onomatopoeia involve direct mappings between phonological/phonetic features and
semantic features (Vigliocco and Kita, 2006); or if they involve additional links from
semantic to (multi)sensory areas (Kanero et al., 2014), we should observe differences in the
N400 component of the Event-Related Potentials (ERP) elicited by onomatopoeic vs. other
words. Moreover, if any difference in the phonology of onomatopoeia and arbitrary words
also has processing consequences, we should see differences in earlier time windows,
reflecting the sensory processing of the words. If onomatopoeia and arbitrary words differ in
post-lexical processing, we should see difference in LPC (600-900ms).
Method
Stimulus materials
Fifty target words were selected in each category (onomatopoeic, sound-related arbitrary,
and other arbitrary words). Stimuli were taken from materials used in previous studies
investigating iconicity effects (Meteyard and Vigliocco, 2015) and for which information
about their semantic properties was available (Vinson and Vigliocco, 2008; Brysbaert et al.,
2014; Lynott, 2009, 2013; McRae, 2005; see Supplementary Materials, Table 16 for a full list
of the stimuli). Across the three groups, words were matched for concreteness (Brysbaert et
al. 2014), number of syllables, number of phonemes, number of letters, and log transformed
frequency, orthographic neighbourhood density and phonological neighbourhood density
generated from English Lexicon Project’s HAL corpus (Balota et al. 2007). One-way
ANOVA confirmed that the three categories did not differ in any of these lexical features.
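A matching check of the kind described can be sketched with a one-way ANOVA; the values below are hypothetical stand-ins for one lexical dimension (log frequency), not the paper's actual stimulus norms:

```python
from scipy.stats import f_oneway

# Illustrative log-frequency values for the three word groups
# (hypothetical numbers; the actual stimulus norms are not reproduced here).
onomatopoeia = [2.1, 2.4, 1.9, 2.6, 2.2]
sound_related = [2.3, 2.0, 2.5, 2.2, 2.1]
other_arbitrary = [2.2, 2.6, 2.0, 2.3, 1.8]

# One-way ANOVA: a non-significant F indicates the groups are matched
# on this lexical dimension.
f_stat, p_value = f_oneway(onomatopoeia, sound_related, other_arbitrary)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

The same test would be run once per matched dimension (concreteness, syllable count, neighbourhood densities, and so on).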
For each word, we chose a related and an unrelated word to use as a cue (e.g. ‘leash’/ ‘sad’
for the target onomatopoeia ‘bark’; ‘chord’/‘envy’ for the target sound related word
‘melody’; ‘ride’/ ‘history’ for the target other arbitrary word ‘bike’).
An on-line norming study with 20 monolingual native British speakers was performed to
ensure the difficulty of the task was matched across word categories. Each participant was
presented with either a related or unrelated cue and then the target word, and was asked to
judge the relatedness by pressing yes/no buttons on the screen as fast and as accurately as
possible. ANOVAs showed that RTs and accuracy did not differ significantly across word-
type conditions. However, unrelated pairs were responded to significantly more accurately
(F(1,19) = 12.971, p = .002, ηp² = 0.406) and with longer response times (F(1,19) = 5.632,
p = .028, ηp² = 0.229) than related pairs. Based on these results, we also replaced one target word and 5 cues and
removed one target word, leaving 147 target words in total. It is important to note here that
we did not control for whether the association between cue and target was related to sound or
not. In our final list of items, most of the onomatopoeic words were not related in sound to
the cue words (42/49); however, for the sound-related arbitrary words, 21/49 were related in
sound. We avoided using onomatopoeia as cue words; however, some of the cue words are
iconic according to available iconicity norms.1 A female native speaker of British English
produced the target words at a normal rate, recorded in a soundproof booth. The stereophonic
stimuli had a sampling frequency of 44,100 Hz; their absolute intensity peak was normalized
to 0.3 dB with Praat (v. 6.0.19, Boersma, 2001), and their durations ranged between 450 and 900ms.
Participants
Thirty-one native English speakers (mean age=25) were recruited through the UCL
Psychology Subject Pool. All participants were right-handed (assessed using Edinburgh
Handedness Inventory), with normal or corrected to normal vision, normal hearing and
without neurological disorders. Data from seven participants were excluded due to technical
problems and/or accuracy below 80%. Twenty-four participants were included in the analysis
(Female=11).
Procedure
Participants were seated 100cm in front of a computer in a shielded booth. The experiment
was programmed with Presentation® software (Version 18.0, www.neurobs.com), with
auditory stimuli presented through EEG-compatible headphones. Written instructions
explained to participants that they would first see a written word on the screen, then hear a
word immediately afterwards, and their task was to judge whether the written word and the
following spoken word were related in meaning. Participants were presented with 12 practice
trials before the experiment. Hand of response was counterbalanced across participants.
1 It was not possible to include iconicity of the cue as a control variable in the analyses because too many of our cue words do not have iconicity ratings in existing databases.
Two lists of stimuli were counterbalanced across participants, each containing 147 target
words and 49 fillers plus their corresponding cues (50% related, 50% unrelated), so that no
participant heard a target word twice. The stimuli were divided into six blocks, randomized
within lists and across participants.
In each trial, a fixation cross was presented for 700ms, and was then followed by the written
cue presented for 700ms. 1000ms after the offset of the cue, the auditory target stimulus was
presented. No response deadline was set. The next trial began after a 1000ms inter-trial
interval. Participants took breaks between blocks. The experimental procedure was approved
by the UCL Division of Psychology and Language Science ethics committee.
EEG recording
A BioSemi Active Two system was used for EEG recording. 32 silver-silver chloride (Ag-
AgCl) electrodes were applied following a 10-10 international system layout. Common
reference included the CMS electrode serving as online reference and the DRL electrode serving as
ground. There were four external electrodes in addition to the scalp sites: one below
the left eye (VEOG) and one on the right canthus (HEOG) to measure vertical and horizontal eye
movement; two on the mastoids as reference in offline processing. To check for relative
impedance differences, the electrode offsets were kept between +/-25mV.
The continuous EEG was referenced offline to the average of two mastoids and downsampled
to 256 Hz. Due to the stimulus-onset asynchrony (SOA) in the target stimuli, the EEG
waveform was time-locked to the actual onset of the word in the audio recording, not the
onset of the recording file. To do this, we annotated the SOA in each stimulus with Praat (v. 6.0.29,
Boersma, 2001), and edited the eventlists of each participant by adding SOA to the onset of
the auditory stimuli using EEGLAB (v.14.1.1, Delorme and Makeig, 2004) and ERPLAB
(v.7.0.0, Lopez-Calderon and Luck, 2014). EEG waveform was epoched from -200 to 900ms,
corrected using -200 to 0ms as baseline and filtered with a 0.1-100Hz band-pass filter.
Independent component analysis (ICA; Chaumon et al., 2015) and artefact rejection were
further performed to remove noise from the data (mean rejection rate = 11.09%, SD = 13.73
across participants; rejection rates were similar across the two relatedness levels and three word types).
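Although the preprocessing was carried out in EEGLAB/ERPLAB, the SOA correction, epoching, and baseline-correction steps can be sketched in Python on synthetic one-channel data (the 256 Hz rate and the -200 to 900ms window follow the description above; the 120ms SOA is illustrative):

```python
import numpy as np

FS = 256  # Hz, sampling rate after downsampling (as described above)

def correct_onset(trigger_sample, soa_seconds, fs=FS):
    """Shift the trigger to the word onset inside the audio file by
    adding the annotated SOA, as described for the eventlists."""
    return trigger_sample + int(soa_seconds * fs)

def epoch(continuous, event_sample, tmin=-0.2, tmax=0.9, fs=FS):
    """Cut an epoch from -200 to 900 ms around an event onset."""
    start = event_sample + int(tmin * fs)
    stop = event_sample + int(tmax * fs)
    return continuous[start:stop]

def baseline_correct(ep, tmin=-0.2, fs=FS):
    """Subtract the mean of the -200..0 ms pre-stimulus interval."""
    n_baseline = int(-tmin * fs)
    return ep - ep[:n_baseline].mean()

# Illustrative use on 10 s of synthetic single-channel EEG
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, 10 * FS)
onset = correct_onset(5 * FS, 0.120)  # trigger at 5 s, hypothetical 120 ms SOA
ep = baseline_correct(epoch(eeg, onset))
print(ep.shape)
```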
Based on previous literature (Lockwood & Dingemanse, 2015; Peeters, 2016; Lockwood and
Tuomainen, 2015; Egashira et al., 2015), we focused on N1 (100-200ms), P2 (200-250ms),
N400 (300-500ms) and LPC (600-900ms) responses. The averaged ERP was calculated for
those time intervals from 9 electrodes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4) (Luck, Vogel &
Shapiro, 1996). All of the components above are largest at the midline (Fz, Cz, Pz). Adding the
left- and right-side electrodes adjacent to the midline provides a fair estimate of the scalp
distribution on the anterior/posterior and laterality planes across the different conditions in
our study. However, for completeness, we further carried out analyses including all electrodes
(see supplementary materials).
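Extracting the averaged ERP per time window amounts to taking mean amplitudes over the relevant sample ranges at the nine electrodes; a minimal sketch, assuming epochs are stored as a channels-by-samples array (the window boundaries and electrode names follow the description above; the data are synthetic):

```python
import numpy as np

FS = 256      # Hz
TMIN = -0.2   # epoch start, seconds
WINDOWS = {"N1": (0.100, 0.200), "P2": (0.200, 0.250),
           "N400": (0.300, 0.500), "LPC": (0.600, 0.900)}
ELECTRODES = ["F3", "Fz", "F4", "C3", "Cz", "C4", "P3", "Pz", "P4"]

def mean_amplitude(ep, t0, t1, fs=FS, tmin=TMIN):
    """Average amplitude of one epoch (channels x samples) in a time window."""
    i0 = int(round((t0 - tmin) * fs))
    i1 = int(round((t1 - tmin) * fs))
    return ep[:, i0:i1].mean(axis=1)

# Synthetic epoch: 9 channels, -200..900 ms
n_samples = int(round(1.1 * FS))
epoch = np.ones((len(ELECTRODES), n_samples))
n400 = mean_amplitude(epoch, *WINDOWS["N400"])
print(dict(zip(ELECTRODES, n400)))
```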
Data Analysis
Linear mixed effect regression (LMER) analyses were performed for both behavioural and
EEG data by participant and by item using the lme4 package (Bates et al., 2015) in R version
3.4.1 (RStudio Team, 2016). For both behavioural and EEG data, two analyses were
performed. In the first, we considered iconicity as a categorical variable and we contrasted
three word types (onomatopoeic, sound-related arbitrary and other arbitrary words). In the
second, we considered iconicity as a continuous variable using ratings of iconicity collected
by Lu et al. (in prep). These ratings were obtained online in a survey in which 49 English
speakers were asked to judge the iconicity of 320 words on a 1 (not iconic) to 7 (highly
iconic) scale. Ratings were available for 47/49 onomatopoeic words and 33/49 sound related
words. Thus, this secondary analysis excluded the other arbitrary words.
Behavioural Analysis
We used LMER to analyse response time (RT) data (ms) and Logistic LMER for accuracy
(0/1). The independent variables included word type (onomatopoeic, sound-related arbitrary, other arbitrary),
relatedness (related, unrelated), and relatedness*word type. Further, log frequency (van
Heuven et al., 2014), age of acquisition (Kuperman et al, 2012), concreteness (Brysbaert et
al., 2014) and duration of the stimulus were included as control variables. To capture by-item and
by-participant variance, we included participant and word as random intercepts. All
continuous variables were standardized. Significance of the critical interaction between
relatedness and word type was established by comparing models with and without the
interaction term using R's anova() function, which performs chi-squared tests for nested
LMER models. Significant interactions were further analysed by comparing least-squares
means using the emmeans package in R (Lenth, 2018).
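Comparing nested models with and without the interaction term reduces to a chi-squared test on twice the difference in log-likelihoods; a sketch with hypothetical log-likelihood values (not taken from the reported models):

```python
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, df_diff):
    """Likelihood-ratio test for nested models, of the kind performed when
    comparing LMER models with and without an interaction term."""
    stat = 2 * (loglik_full - loglik_reduced)
    p = chi2.sf(stat, df_diff)
    return stat, p

# Hypothetical log-likelihoods for models without / with the
# relatedness x word-type interaction (2 extra parameters).
stat, p = lr_test(-5231.4, -5224.9, df_diff=2)
print(f"chi2(2) = {stat:.2f}, p = {p:.4f}")
```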
The second analysis was conducted with the same process, with the exception that this
analysis only contrasted onomatopoeic and sound-related arbitrary words (for which we had
ratings of iconicity) and included iconicity as a continuous variable, rather than as a
categorical variable. Here, significance of the interaction iconicity*relatedness was directly
computed by the model without the need to carry out model comparison. We further included
in the random structure iconicity, relatedness and iconicity*relatedness as random slopes.
ERP Analysis
In the first and second LMER analyses of the ERP data (µV), we added three control variables to the
ones listed above for the behavioural analysis: RT of the trial, mean accuracy of the item and
electrode site. The electrode sites were coded as two variables, one for left/right (left=-1,
centre=0, right=1), and one for frontal/parietal (frontal=-1, centre=0, parietal=1) (Emmorey et
al., 2017). All continuous variables were standardized. Significance of interactions was
established as above. Only correct trials were analysed.
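The electrode-site coding described above can be written as a simple lookup over the nine analysed electrodes:

```python
# Scalp-position codes used as control variables in the ERP models:
# one laterality code (left=-1, midline=0, right=1) and one
# anterior/posterior code (frontal=-1, central=0, parietal=1).
LATERALITY = {"F3": -1, "Fz": 0, "F4": 1,
              "C3": -1, "Cz": 0, "C4": 1,
              "P3": -1, "Pz": 0, "P4": 1}
ANT_POST = {"F3": -1, "Fz": -1, "F4": -1,
            "C3": 0, "Cz": 0, "C4": 0,
            "P3": 1, "Pz": 1, "P4": 1}

def code_electrode(name):
    """Return the (laterality, anterior/posterior) pair for one site."""
    return LATERALITY[name], ANT_POST[name]

print(code_electrode("P3"))  # (-1, 1)
```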
Results
Behavioural Results
RTs
Table 1. RT’s Descriptive Statistics (ms).
Condition n M SD
Related onomatopoeia (e.g. leash-bark) 523 1119 481
Unrelated onomatopoeia (e.g. sad-bark) 518 1137 448
Related sound-related (e.g. chord-melody) 527 1166 459
Unrelated sound-related (e.g. envy-melody) 519 1225 451
Related other arbitrary (e.g. ride-bike) 514 1156 616
Unrelated other arbitrary (e.g. history-bike) 528 1231 541
The main LMER analysis found a significant main effect of relatedness (βunrelated = 0.07,
p < .001): judgements for related pairs were faster than for unrelated pairs. There was also a
significant main effect of duration (β = 0.17, p < .001): longer auditory stimuli, not
surprisingly, led to longer RTs. No other effects were significant. See Table 1 for descriptive
statistics.
Similarly, the second analysis found a significant positive effect of duration (β = 0.14,
p < .001) on RT, but the relatedness effect was not significant. No other significant
effect was found. See Table 1-2 in Supplementary Materials for full results.
Accuracy
Table 2. Accuracy’s Descriptive Statistics
Condition n M SD
Related onomatopoeia (e.g. leash-bark) 523 0.860 0.347
Unrelated onomatopoeia (e.g. sad-bark) 518 0.952 0.215
Related sound-related (e.g. chord-melody) 527 0.911 0.285
Unrelated sound-related (e.g. envy-melody) 519 0.969 0.173
Related other arbitrary (e.g. ride-bike) 514 0.887 0.317
Unrelated other arbitrary (e.g. history-bike) 528 0.979 0.143
In the first binary logistic regression, we found a significant main effect of relatedness
(βunrelated = 0.72, p < .001): participants were less accurate for related than unrelated pairs. No
other significant effect was found. See Table 2 for descriptive statistics. The same effect
of relatedness (βunrelated = 0.77, p < .001) was also found in the second analysis, showing a
consistent pattern. No other main effect or interaction was significant. See Tables 3-4,
Supplementary Materials, for full results.
EEG Results
Figure 1 depicts the ERP responses for the nine electrodes analysed in the main analysis with
condition as the predictor variable. Figure 2 shows the ERP response in the second analysis
with iconicity as predictor.
Figure 1. ERP plot of word type * relatedness in N1, P2, N400, and LPC. The plot was
additionally filtered with 30Hz low pass filter for illustration.
Figure 2. ERP plot of iconicity * relatedness in N1, P2, N400, and LPC. The continuous
variable iconicity was split into high and low iconicity categories for illustration only,
containing the 30% of words with the highest and lowest iconicity ratings, respectively. The plot was additionally filtered with
30Hz low pass filter for illustration.
N1 (100-200ms)
The first analysis suggested that unrelated items elicited a more positive N1 (βunrelated = 0.03,
p < .001). Moreover, the comparison of the models with and without the interaction term was
significant (p = .004). Follow-up analyses showed that the relatedness effect was significant
for both onomatopoeia (p < .001) and other arbitrary words (p < .001), but not for sound-related
words (p = .67). The negativity had a more centro-parietal distribution, as the frontal
electrodes were less positive (βfrontal = 0.04, p = .007). In the second analysis, we did not find
the same effect of relatedness (βunrelated = 0.04, p = .15) or interaction between iconicity and
relatedness (βunrelated:iconicity = 0.01, p = .56), but we found a significant effect of reaction time
(ß=-0.03, p=.006). See full results and scalp distribution in Tables 5 to 7 and Figures 1 and 2,
Supplementary Materials.
P2 (200-250ms)
There was a significant relatedness effect in the first analysis of the P2 time window: related
items showed reduced positivity (βunrelated = 0.02, p < .001). Duration also showed a significant
effect (β = -0.06, p = .006): longer words elicited a less positive P2. There was no significant
difference between models with and without the interaction term. The effect was mostly frontal,
with frontal electrodes being more positive than central ones (βfrontal = 0.04, p = .004), and
parietal electrodes less positive than central ones (βparietal = -0.17, p < .001). P2 was strongest
along the midline of the scalp, as both left-hemisphere electrodes (βleft = -0.05, p = .001) and
right-hemisphere electrodes (βright = -0.042, p = .006) were less positive. The second analysis
did not find any main or interaction effect involving relatedness or iconicity. With regard to
control variables, P2 was less positive for words with longer reaction times (β = -0.02, p = .02)
and durations (β = -0.07, p = .03). The effect was more centro-frontally distributed
(βparietal = -0.17, p < .001) compared with the frontal distribution in the initial analysis, but it
was also strongest along the midline, compared with both the left hemisphere (βleft = -0.06, p = .002)
and the right (βright = -0.04, p = .02). See full results in Tables 8 and 9 and scalp distribution in
Figures 3 and 4, Supplementary Materials.
and 4, Supplementary Materials.
N400 (300-500ms)
There was a significant main effect of relatedness (βunrelated = -0.06, p < .001) in the first
analysis: unrelated pairs elicited a larger N400 than related pairs. Moreover, the models'
comparison showed a significant interaction (p < .001). The follow-up analysis showed that
for related pairs, the three word types were not significantly different. However, for unrelated
pairs, onomatopoeia were more negative than sound related words (p=.01) and other arbitrary
words (p=.05). There was no significant difference between the two arbitrary word groups
(p=.70).
The N400 effect was mostly central, as both frontal (ßfrontal =0.05, p=.001) and parietal
(ßparietal =0.08, p=.001) electrodes were less negative. The midline electrode was more
negative than right hemisphere (ßright =0.03, p=.02), but not left hemisphere. This central-left
distribution has been associated with auditory N400 (e.g. Hagoort and Brown, 2000).
We also found a clear N400 effect in the second analysis, with related items significantly less negative (βunrelated = -0.08, p = .001). There was also a significant interaction between relatedness and iconicity (βunrelated:iconicity = -0.04, p = .03): the N400 in unrelated trials was more negative for words with higher iconicity, confirming the results of the first analysis. With regard to the control variables, items with longer RTs also elicited a more negative N400 (β = -0.04, p < .001). This effect was strongest in the centro-frontal electrodes compared with parietal ones (βparietal = 0.09, p < .001). See full results in Tables 10-12 and scalp distributions in Figures 5 and 6, Supplementary Materials.
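The model-comparison logic reported throughout these analyses (a model including the relatedness × word-type interaction tested against one without it) can be illustrated with a toy example. This is a sketch on simulated data using ordinary least squares, not the authors' actual mixed-effects (lme4) models; all variable names, effect sizes and noise levels below are invented.

```python
import numpy as np
from scipy import stats

# Toy sketch of nested-model comparison (not the authors' lme4 analysis):
# fit a linear model with and without a relatedness x word-type interaction
# on simulated amplitudes, and compare the two fits with an F test.
rng = np.random.default_rng(1)
n = 300
related = rng.integers(0, 2, n)       # 0 = related, 1 = unrelated (invented)
word_type = rng.integers(0, 3, n)     # 0 = onomatopoeia, 1 = sound, 2 = other
# Simulated amplitude: unrelated trials more negative, with an extra
# negativity for onomatopoeia (word_type == 0) -- mimicking the pattern above.
y = -0.06 * related - 0.05 * related * (word_type == 0) + rng.normal(0, 0.05, n)

def design(interaction):
    cols = [np.ones(n), related,
            (word_type == 1).astype(float), (word_type == 2).astype(float)]
    if interaction:
        cols += [related * (word_type == 1), related * (word_type == 2)]
    return np.column_stack(cols)

def rss(X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X0, X1 = design(False), design(True)
rss0, rss1 = rss(X0), rss(X1)
df_num = X1.shape[1] - X0.shape[1]    # extra parameters in the fuller model
df_den = n - X1.shape[1]
F = ((rss0 - rss1) / df_num) / (rss1 / df_den)
p = stats.f.sf(F, df_num, df_den)
print(f"F({df_num},{df_den}) = {F:.2f}, p = {p:.4f}")
```

With an interaction built into the simulated data, the fuller model fits reliably better, which is the same logic (likelihood-based comparison of nested models) that the reported interaction p-values rest on.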
LPC (600-900ms)
The first analysis found a significant main effect of relatedness (βunrelated = -0.15, p < .001): related pairs showed a more positive LPC than unrelated pairs. Model comparison showed a significant interaction (p = .02), but follow-up analysis did not find significant differences between conditions in either related or unrelated trials. We also found that words with longer RTs showed a less positive LPC (β = -0.10, p < .001). The positivity increased from frontal to parietal areas, with central electrodes more positive than frontal (βfrontal = -0.10, p < .001) and parietal more positive than central (βparietal = 0.20, p < .001). The effect was strongest in the left
hemisphere, as the left electrodes were more positive than the midline (βleft = 0.04, p = .01), whereas midline and right electrodes did not differ significantly.
The second analysis showed a significant main effect of relatedness (βunrelated = -0.17, p < .001) and a significant negative effect of RT (β = -0.11, p < .001) on the LPC. Frontal electrodes were less positive than central electrodes (βfrontal = -0.11, p < .001), while parietal electrodes were more positive (βparietal = 0.21, p < .001). The left hemisphere was more positive than the midline (βleft = 0.04, p = .04). See full results in Tables 13-15 and scalp distributions in Figures 7 and 8, Supplementary Materials.
Discussion
Vocal iconicity has been argued to play a critical role in language evolution (e.g., Perlman et al., 2015); it has been found to support language acquisition (e.g., Laing, 2014), to render words more resistant to the effects of brain damage (Meteyard et al., 2015) and to facilitate processing in healthy individuals (Sidhu et al., in press). It has also been shown that iconic words (onomatopoeia and mimetic words in Japanese) engage neural networks outside the typical language areas (including right STS; Kanero et al., 2014).
However, the current electrophysiological literature provides mixed results with respect to
whether and when different brain signatures are found for iconic vs. arbitrary words. We
have argued that this may be due in part to the use of typologically different languages that
can be more or less rich in iconicity; differences in the class of iconic words being
investigated, and, finally, differences in the tasks used that may not always fully engage
semantic processing. Here we focused on onomatopoeia, the most direct and imitative form
of iconicity (as they do not require any cross-modal mapping which is typical instead of
many cases of less direct iconicity as seen in mimetic words). Moreover, we focused on the
semantic processing of these words using a task that explicitly required semantic processing
(judging the semantic similarity between two words). The processing of onomatopoeia was compared to that of more arbitrary words, which in a critical condition also referred to sound (e.g., orchestra). Words in the three word classes were carefully matched on a number of psycholinguistic dimensions.
Across both behavioural and EEG data, we found, not surprisingly, strong effects of relatedness, with related pairs processed faster and eliciting a reduced N400. We did not find differences between onomatopoeic and arbitrary words in RTs or accuracy. This result replicates what Peeters (2016) found with onomatopoeia in Dutch in a lexical decision task. However, it should be noted that faster RTs (and more accurate responses) were reported in another study of English onomatopoeia, also using lexical decision (Sidhu et al., in press). The reasons for this discrepancy are unclear: the languages are closely related, all three studies focus on onomatopoeia, and whereas we used a semantic judgement task, the other two studies both used lexical decision (auditory presentation in Peeters, 2016, and visual presentation in Sidhu et al., in press).
Importantly, in our study we found electrophysiological differences in processing across the three word types in the N400 time window (300-500 ms). More specifically, we found that the ERP elicited by correctly rejecting semantically unrelated pairs was more negative for onomatopoeic words than for both sound-related and other arbitrary words. The time window and scalp distribution of the effect (maximal at centro-parietal electrodes) suggest that it is a genuine N400. The semantic processing of onomatopoeia thus seems to differ from that of more arbitrary words, even when these words belong to the same semantic field (words referring to sound), in line with previous findings from Dutch (Peeters, 2016). We also found a relatedness effect for the N1 and P2, with unrelated pairs eliciting more positive ERPs in both time windows, but only in the analysis with iconicity as a categorical variable and not in the analysis with iconicity as a continuous variable. Similarly, there was a significant interaction in
the N1 between relatedness and iconicity, but only in the analysis with iconicity as a categorical variable (word type). It is unclear why we found these effects; however, as we observed them only when iconicity was coded as a categorical (rather than continuous) variable, they could be driven by other (so far unknown) factors.
Finally, no iconicity-driven effect was observed in the time window for post-lexical processing (LPC: 600-900 ms).
EEG signatures of iconicity
EEG studies investigating differences between onomatopoeic/mimetic words and arbitrary words have, in principle, the potential to identify differences in sensory, semantic and post-lexical processes, and therefore provide a powerful tool for understanding how iconic words are processed. With regard to the sensory/phonological analysis of the words, we failed to observe consistent differences in early time windows, in contrast to previous studies. Lockwood and Tuomainen (2015) found differences between Japanese mimetic and arbitrary words at the peak of the P2 (252-256 ms). Peeters (2016) found differences in Dutch between onomatopoeia and arbitrary words in the N2 (150-200 ms) time window (which overlapped with our N1 time window). Lockwood and Tuomainen (2015) explained the larger P2 for mimetic words as linked to phonological differences between mimetic and other words, especially the reduplication of moras, which was present in the mimetic but not in the other words considered in their study. Moreover, mimetic words are not restricted to auditory experience but can refer to all sorts of sensory experience, so the P2 effect might also be linked to multi-modal integration. In our study, we minimized any potential phonological differences between our onomatopoeic and control words, and no multimodal integration was required for the onomatopoeic words (or for the sound-related arbitrary words). It is more difficult to account for the difference in results between our study and the study by Peeters (2016).
Not only was this study on a much closer language (Dutch), but, just as in our study, it focused on onomatopoeia and presented single spoken words. Peeters observed more negativity at 150-200 ms for onomatopoeic than control words, which he attributed to the engagement of non-linguistic sound processing (in line with fMRI findings showing bilateral STS activations for mimetic words; Kanero et al., 2014). However, this explanation does not take into account that the fMRI findings refer to Japanese, where there are important phonology-related differences for mimetics. An important difference between Peeters (2016) and our study is the task used. While Peeters used a lexical decision task, which may be argued to focus more on the form level, here we used a semantic decision task that forced participants to focus on semantics (and on the semantic relatedness between onomatopoeia and other words). This task was also considerably more difficult for our participants, judging from their RTs (about twice as long as standard lexical decision RTs) and their error rates. One may speculate that lexical decision, by focusing more attention on wordform, may capture earlier correlates of iconicity (which indexes a transparent relation between wordform and meaning) than a task such as semantic decision, which instead focuses attention on meaning only.
Importantly, we found a clear difference between onomatopoeic and arbitrary words in the N400 window. More specifically, we found a difference between onomatopoeic words and arbitrary words from different semantic classes, but also between onomatopoeic words and arbitrary words that, just like onomatopooeia, evoke sound-related experiences (e.g., music). fMRI studies have shown that sound-related (arbitrary) words activate networks involved in auditory perception (including posterior STG/MTG; Kiefer et al., 2008; Kiefer et al., 2012), indicating how, in general, words evoke associated sensory experience, in line with embodied views of semantic representation (Barsalou, 1999; Meteyard et al., 2012). Onomatopoeia and sound-related arbitrary words should evoke similar sensory experience; what is special to onomatopoeia
is that such experience is part and parcel of the wordform. The larger N400 for onomatopoeia was observed in the unrelated condition, which can be accounted for in terms of greater semantic activation for onomatopoeia, which in turn leads to increased dissimilarity with the cue. There was no complementary increase in positivity for the related condition. As most of our cue-target pairs in the onomatopoeia condition were not related on the basis of sound, whereas most of the pairs in the sound-related arbitrary word condition were, the lack of an effect for related pairs might be due to the fact that any enhancement of semantic activation for sound-related features (assumed to be greater for onomatopoeia) is simply not relevant to the task when cue and target overlap in features other than sound (e.g., fingers-clap).
We did not observe any difference across word types in the later time window (600-900 ms, LPC), while, again, we observed a strong effect of relatedness. As it is unclear to what extent effects in this time window are genuine rather than carry-over effects from the N400 window, and, moreover, there is no clear prediction for differences across word types in this window (which is considered to index post-lexical decision processes), this result will not be discussed further.
Proposed Mechanisms for Processing Onomatopoeia
There are few accounts of how iconicity effects might come about in language processing (for a review, see Sidhu & Pexman, 2017). Many of these accounts cover not just onomatopoeia but the more general associations between specific phonemes and general semantic features (e.g., the association of 'i' with smallness), and are based on associative mechanisms that can pick up such statistical correspondences. However, onomatopoeia may be a special case. This is because they represent a unimodal type of iconicity (sounds in language mapping onto sounds in the world); they are extremely simple
mappings (imitative) that do not require much learning; they appear to be universal across the world's languages; and they feature heavily in the initial stages of language development.
Onomatopoeia could be represented with both mediated and direct links between wordform and semantic (sensory-motor) features. In standard connectionist architectures for spoken word recognition (e.g., McClelland & Elman, 1986), it is assumed that the link between phonological and semantic features is mediated via intermediate representations. It is argued that this mediation is necessary precisely to implement arbitrary relationships that could not otherwise be learnt. However, onomatopoeia could have, in addition to mediated links, direct connections between phonological/phonetic and semantic features: those that map in an imitative manner between the two levels. This proposal is consistent with the results of the present study: activation of semantic features is enhanced and this, in turn, renders a mismatch with an unrelated cue more salient (hence a larger N400). This account is also in line with the developmental evidence reviewed in the introduction: direct mappings are easier to learn, and therefore onomatopoeia should be among the first words to be learnt (Laing, 2014; Perry et al., 2018; Thompson et al., 2012). According to this view, direct links would develop for onomatopoeia but not for other iconic words. Compatible with this is the finding by Sidhu et al. (in press) that facilitatory effects were found for onomatopoeia regardless of whether the task implicitly (lexical decision) or explicitly (phonological decision) recruited phonology, whereas facilitation was found for other iconic words only when the task explicitly recruited phonology.
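As a toy illustration of this proposal (our sketch, not a model from the paper or from TRACE), one can add a direct form-to-meaning pathway on top of a mediated route and observe that the resulting semantic pattern changes. All weights, dimensions and the random inputs below are arbitrary.

```python
import numpy as np

# Toy sketch (illustrative only): semantic activation arrives via a mediated
# route for all words; for onomatopoeia an additional direct, imitative
# form-to-meaning pathway contributes on top of it.
def semantic_activation(form, W_mediated, W_direct=None):
    hidden = np.tanh(W_mediated["in"] @ form)   # intermediate (lexical) layer
    activation = W_mediated["out"] @ hidden     # mediated route
    if W_direct is not None:                    # direct imitative route
        activation = activation + W_direct @ form
    return activation

rng = np.random.default_rng(2)
form = rng.normal(size=8)                       # phonological feature vector
W_med = {"in": rng.normal(size=(6, 8)) * 0.3,   # arbitrary, learned mapping
         "out": rng.normal(size=(4, 6)) * 0.3}
W_dir = np.abs(rng.normal(size=(4, 8))) * 0.3   # hypothetical imitative mapping

arbitrary = semantic_activation(form, W_med)
onomatopoeic = semantic_activation(form, W_med, W_dir)
# The direct route adds input, so the semantic pattern differs from the
# purely mediated one -- the enhanced activation posited in the text.
print(np.allclose(onomatopoeic, arbitrary))
```

On the account above, this extra direct contribution is what makes the semantic activation of onomatopoeia stronger, and hence a mismatch with an unrelated cue more salient.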
A different mechanism that has been proposed in the literature for all iconic words (not just onomatopoeia) is that their lexical representation encompasses both linguistic and non-linguistic sensory features (sound-related in the case of onomatopoeia). This account was put forward by Kanero et al. (2014) to explain bilateral activation of STS for mimetic but not arbitrary words. Our results can also be accounted for by this proposal.
However, it is worth noting that the traditional divide between the processing of linguistic sounds in the left hemisphere and of non-linguistic sounds in the right has been challenged (see, e.g., Schirmer et al., 2012).
Conclusions
In the present paper, we have shown that making semantic judgements on onomatopoeic vs. arbitrary words (including other words referring to sound) leads to processing differences as indexed by the N400. Crucially, by comparing the processing of onomatopoeia to a set of sound-related words, we could establish that any processing difference in this time window for onomatopoeia cannot be accounted for in semantic terms only, but needs to take into account the special link between form and meaning that is present only in onomatopoeia.
References
Balota, D. A., Yap, M. J., Hutchison, K. A., Cortese, M. J., Kessler, B., Loftis, B., & Treiman, R. (2007). The English lexicon project. Behavior research methods, 39(3), 445-459.
Barsalou, L. W. (1999). Perceptions of perceptual symbols. Behavioral and brain sciences, 22(4), 637-660.
Bates, D., Maechler, M., Bolker, B., Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48.
Berlin, B., & O’Neill, J. P. (1981). The pervasiveness of onomatopoeia in Aguaruna and Huambisa bird names. Journal of Ethnobiology, 1(2), 238-261.
Boersma, P. (2001). Praat, a system for doing phonetics by computer. Glot International, 5(9/10), 341-345.
Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior research methods, 46(3), 904-911.
Chaumon, M., Bishop, D. V., & Busch, N. A. (2015). A practical guide to the selection of independent components of the electroencephalogram for artifact correction. Journal of neuroscience methods, 250, 47-63.
Chomsky, N. (1988). Language and problems of knowledge: The Managua lectures (Vol. 16). MIT press.
Dautriche, I., Mahowald, K., Gibson, E., & Piantadosi, S. T. (2017). Wordform similarity increases with semantic similarity: An analysis of 100 languages. Cognitive science, 41(8), 2149-2169.
de Saussure, F. (1916). Course in general linguistics. New York, NY: McGraw-Hill.
Delorme, A., & Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of neuroscience methods, 134(1), 9-21.
Dingemanse, M. (2012). Advances in the cross‐linguistic study of ideophones. Language and Linguistics compass, 6(10), 654-672.
Egashira, Y., Choi, D., Motoi, M., Nishimura, T., & Watanuki, S. (2015). Differences in Event-Related Potential Responses to Japanese Onomatopoeias and Common Words. Psychology, 6(13), 1653.
Emmorey, K., Midgley, K. J., Kohen, C. B., Sehyr, Z. S., & Holcomb, P. J. (2017). The N170 ERP component differs in laterality, distribution, and association with continuous reading measures for deaf and hearing readers. Neuropsychologia, 106, 298-309.
Grote, K. & Linz, E. (2003). The influence of sign language iconicity on semantic conceptualization. In Muller, W. G. & Fisher, O. (eds.), From sign to signing: Iconicity in language and literature 3, pp. 23–40. Amsterdam: John Benjamins.
Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38(11), 1518-1530.
Hockett, C. F. (1960). The origin of speech. Scientific American, 203, 88-96.
Iwasaki, N., Vinson, D. P., & Vigliocco, G. (2007). How does it hurt, kiri-kiri or siku-siku? Japanese mimetic words of pain perceived by Japanese speakers and English speakers. Applying theory and research to learning Japanese as a foreign language, 2.
Kanero, J., Imai, M., Okuda, J., Okada, H., & Matsuda, T. (2014). How sound symbolism is processed in the brain: a study on Japanese mimetic words. PLoS One, 9(5), e97905.
Kanske, P., & Kotz, S. A. (2007). Concreteness in emotional words: ERP evidence from a hemifield study. Brain research, 1148, 138-148.
Kantartzis, K., Imai, M., & Kita, S. (2011). Japanese sound‐symbolism facilitates word learning in English‐speaking children. Cognitive Science, 35(3), 575-586.
Kiefer, M., Sim, E. J., Herrnberger, B., Grothe, J., & Hoenig, K. (2008). The sound of concepts: four markers for a link between auditory and conceptual brain systems. Journal of Neuroscience, 28(47), 12224-12230.
Kiefer, M., Trumpp, N., Herrnberger, B., Sim, E. J., Hoenig, K., & Pulvermüller, F. (2012). Dissociating the representation of action-and sound-related concepts in middle temporal cortex. Brain and Language, 122(2), 120-125.
Kuperman, V., Stadthagen-Gonzalez, H., & Brysbaert, M. (2012). Age-of-acquisition ratings for 30,000 English words. Behavior Research Methods, 44(4), 978-990.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annual review of psychology, 62, 621-647.
Laing, C. E. (2014). A phonological analysis of onomatopoeia in early word production. First Language, 34(5), 387-405.
Lenth, R. (2018). Emmeans: Estimated marginal means, aka least-squares means. R package version, 1(1).
Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in psychology, 6, 1246.
Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in psychology, 6, 933.
Lockwood, G. F., Hagoort, P., & Dingemanse, M. (2016). How iconicity helps people learn new words: Neural correlates and individual differences in sound-symbolic bootstrapping. Collabra, vol. 2 (1), 7.
Lu, B. H., Healy, M., Binney, R., & Vigliocco, G. (in preparation). Neural correlates of onomatopoeia.
Luck, S. J., Vogel, E. K., & Shapiro, K. L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature, 383(6601), 616-618.
Lupyan, G., & Winter, B. (2018). Language is more abstract than you think, or, why aren't languages more iconic?. Phil. Trans. R. Soc. B, 373(1752), 20170137.
Lopez-Calderon, J., & Luck, S. J. (2014). ERPLAB: an open-source toolbox for the analysis of event-related potentials. Frontiers in human neuroscience, 8, 213.
Lynott, D., & Connell, L. (2009). Modality exclusivity norms for 423 object properties. Behavior Research Methods, 41(2), 558-564.
Lynott, D., & Connell, L. (2013). Modality exclusivity norms for 400 nouns: The relationship between perceptual experience and surface word form. Behavior research methods, 45(2), 516-526.
Jakobson, R., & Waugh, L. R. (2002). The sound shape of language. Walter de Gruyter.
McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive psychology, 18(1), 1-86.
McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37(4), 547-559.
Meteyard, L., Cuadrado, S. R., Bahrami, B., & Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788-804.
Meteyard, L., Stoppard, E., Snudden, D., Cappa, S. F., & Vigliocco, G. (2015). When semantics aids phonology: A processing advantage for iconic word forms in aphasia. Neuropsychologia, 76, 264-275.
Monaghan, P., Shillcock, R. C., Christiansen, M. H., & Kirby, S. (2014). How arbitrary is language?. Phil. Trans. R. Soc. B, 369(1651), 20130299.
Nygaard, L. C., Cook, A. E., & Namy, L. L. (2009). Sound to meaning correspondences facilitate word learning. Cognition, 112(1), 181-186.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2009). The role of sign phonology and iconicity during sign processing: the case of deaf children. Journal of Deaf Studies and Deaf Education, 14(4), 436-448.
Perlman, M., Dale, R., & Lupyan, G. (2015). Iconicity can ground the creation of vocal symbols. Royal Society open science, 2(8), 150152.
Peeters, D. (2016). Processing consequences of onomatopoeic iconicity in spoken language comprehension. In 38th Annual Meeting of the Cognitive Science Society (CogSci 2016) (pp. 1632-1647). Cognitive Science Society.
Peirce, C.S. (1931–36). The Collected Papers. Volumes 1–6. Eds. Charles Hartshorne and Paul Weiss. Cambridge M.A.: Harvard University Press.
Perniss, P., Thompson, R., & Vigliocco, G. (2010). Iconicity as a general property of language: evidence from spoken and signed languages. Frontiers in psychology, 1, 227.
Perniss, P., & Vigliocco, G. (2014). The bridge of iconicity: from a world of experience to the experience of language. Phil. Trans. R. Soc. B, 369(1651), 20130300.
Perry, L. K., Perlman, M., Winter, B., Massaro, D. W., & Lupyan, G. (2018). Iconicity in the speech of children and adults. Developmental Science, 21(3), e12572.
Schirmer, A., Fox, P. M., & Grandjean, D. (2012). On the spatial organization of sound processing in the human temporal lobe: a meta-analysis. Neuroimage, 63(1), 137-147.
Sidhu, D. M., Vigliocco, G., & Pexman, P. M. (in press). Effects of iconicity in lexical decision.
Taub, S. F. (2001). Language from the body: Iconicity and metaphor in American Sign Language. Cambridge University Press.
Thompson, R. L., Vinson, D. P., & Vigliocco, G. (2009). The link between form and meaning in American Sign Language: lexical processing effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(2), 550.
Thompson, R. L., Vinson, D. P., Woll, B., & Vigliocco, G. (2012). The road to language learning is iconic: Evidence from British Sign Language. Psychological science, 23(12), 1443-1448.
Vigliocco, G., & Kita, S. (2006). Language-specific properties of the lexicon: Implications for learning and processing. Language and Cognitive Processes, 21(7-8), 790-816.
Vigliocco, G., Perniss, P., & Vinson, D. (2014). Language as a multimodal phenomenon: Implications for language learning, processing and evolution. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 369(1651).
Vinson, D., Thompson, R. L., Skinner, R., & Vigliocco, G. (2015). A faster path between meaning and form? Iconicity facilitates sign recognition and production in British Sign Language. Journal of Memory and Language, 82, 56-85.
Voeltz, F. E., & Kilian-Hatz, C. (Eds.). (2001). Ideophones (Vol. 44). John Benjamins Publishing.
[Figure: grand-average ERP waveforms at nine electrodes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4), plotted from -200 to approximately 900 ms with the N1, P2, N400 and LPC windows marked, for onomatopoeia, sound-related and other arbitrary words in related and unrelated trials.]
[Figure: grand-average ERP waveforms at nine electrodes (F3, Fz, F4, C3, Cz, C4, P3, Pz, P4), plotted from -200 to approximately 900 ms with the N1, P2, N400 and LPC windows marked, for high- and low-iconicity words in related and unrelated trials.]