Statistical learning effects in musicians and non-musicians: An MEG study

Neuropsychologia 50 (2012) 341– 349

Contents lists available at SciVerse ScienceDirect

Neuropsychologia

journal homepage: www.elsevier.com/locate/neuropsychologia

Statistical learning effects in musicians and non-musicians: An MEG study

Evangelos Paraskevopoulos a, Anja Kuchenbuch a, Sibylle C. Herholz b,c, Christo Pantev a,∗

a Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
b Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
c International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, Canada

a r t i c l e i n f o

Article history:
Received 28 March 2011
Received in revised form 3 November 2011
Accepted 10 December 2011
Available online 19 December 2011

Keywords:
MEG
Musical training
Mismatch negativity
P50
Statistical learning

a b s t r a c t

This study aimed to assess the effect of musical training on statistical learning of tone sequences using Magnetoencephalography (MEG). Specifically, MEG recordings were used to investigate the neural and functional correlates of the pre-attentive ability to detect deviance from a statistically learned tone sequence. The effect of long-term musical training on this ability was investigated by comparing the MMN of musicians to that of non-musicians.

Both groups (musicians and non-musicians) showed a mismatch negativity (MMN) response to the deviants, and this response differed between them neither in amplitude nor in latency. Another interesting finding of this study is that both groups showed a significant difference between the standards and the deviants in the P50 response, and this difference was significantly larger in the group of musicians. The increase of this difference in the group of musicians indicates that intensive, specialized and long-term training can enhance the ability of the auditory cortex to discriminate new auditory events from previously learned ones according to transitional probabilities. A behavioral discrimination task between the standard and the deviant sequences followed the MEG measurement. The behavioral results indicated that the detection of deviance was not explicitly learned by either group, probably due to the lack of attentional resources. These findings provide valuable insights into the functional architecture of statistical learning.

1. Introduction

Intensive, specialized and long-term training in any specific domain modifies relevant cortical functions and representations (Jäncke, 2009a). In particular, music training shapes the auditory cortex at both functional and structural levels, affecting a whole network of brain areas (Münte, Altenmüller, & Jäncke, 2002; Pantev, Ross, Fujioka, & Trainor, 2003; Schlaug, 2001; Zatorre, 2005). Therefore, the musician's brain has been proposed as a model of cortical plasticity (Jäncke, 2009b; Pantev, 2009; Pantev et al., 2003). The investigation of specific processes in which musicians differ from non-musicians constitutes an important tool for the description of this model and enhances our understanding of experience-driven cortical plasticity.

Statistical learning is a process that groups a series of auditory events (such as speech or tones) into chunks. This segmentation process makes use of the statistical information derived from the distribution of patterns of sounds. The contingencies between adjacent events are used to compute transitional probabilities, which are indicators of co-occurrence.
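To make this concrete, here is a minimal sketch (our illustration, not code from the study) that estimates transitional probabilities P(next | current) from adjacent-pair counts in a token stream:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) for every adjacent pair in a token stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream built from the "tone-words" A-D-B and D-F-E: pairs inside a word
# recur often, which is what raises their transitional probabilities.
stream = ["A", "D", "B", "D", "F", "E", "A", "D", "B"]
tp = transitional_probabilities(stream)
```

Applied to a long random stream generated from the sequence sets described in Section 2.2, the same estimate should recover approximately the within-sequence versus boundary contrast reported there.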

∗ Corresponding author. Tel.: +49 2518356885.
E-mail address: [email protected] (C. Pantev).

0028-3932/$ – see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.neuropsychologia.2011.12.007

© 2011 Elsevier Ltd. All rights reserved.

With respect to speech, transitional probabilities of syllables are larger within words than across word boundaries and thus this linguistic attribute guides the segmentation process (Saffran, Aslin, & Newport, 1996). This segmentation process is of great importance in extracting the structure of complex environmental stimuli.

There is empirical evidence that both infants and adults use transitional probabilities in order to structure the incoming sound stream when exposed to a new language (Aslin, Saffran, & Newport, 1998; Saffran, Aslin, et al., 1996; Saffran, Newport, & Aslin, 1996; Saffran, Newport, Aslin, Tunick, & Barrueco, 1997). Saffran, Aslin, et al. (1996) investigated word segmentation by 8-month-old infants using a synthesized speech stream consisting of four trisyllabic pseudo-words presented in random order. The only cues for word boundaries were the transitional probabilities between syllable pairs. After a short period of familiarization, the infants were tested on their listening preferences for words of this artificial language versus novel pseudo-words containing the same syllables reordered. Results indicated that infants were able to distinguish between the newly acquired words and the unfamiliar words based on the transitional probabilities of the syllables.

Similar to language, music is a highly structured system and listeners extract its regularities implicitly (Bigand, 2003; Conway, Bauernschmidt, Huang, & Pisoni, 2010; Tillmann, Bharucha, & Bigand, 2000; Tillmann & McAdams, 2004). Saffran, Johnson, Aslin,
and Newport (1999) showed that statistical learning is not only used in the acquisition of language, but is also used for other auditory stimuli such as sinusoidal tones. In this study, Saffran et al. (1999) used a tone stream of three-tone sequences (tone-words) structured so that the pairs of tones within the sequences occurred more often than the pairs forming the boundaries of the sequences. Additionally, the tone sequences were not constructed in accordance with the rules of any standard tonal system. Results revealed that both adults and infants used statistical learning to structure the stream of tones and to distinguish sequences of this stream from new ones, or even from part-sequences. Part-sequences were three-tone sequences that contained the first two tones of a sequence of the familiarization stream plus a different final tone, or the final two tones of a sequence plus a new first one. In addition, Tillmann and McAdams (2004), using an experimental design similar to the above reported studies, revealed that transitional probabilities play a crucial role in the implicit learning of musical timbres. Statistical learning is also found in the visual domain (Kirkham, Slemmer, & Johnson, 2002), suggesting that it represents a domain-general mechanism.

However, all of the above mentioned studies used behavioral measurements to evaluate this cognitive process. Recent studies have implemented neurophysiological measurements in order to examine the functional and neural correlates of statistical learning, focusing mainly on the domain of language (Cunillera et al., 2009; Cunillera, Toro, Sebastián-Gallés, & Rodríguez-Fornells, 2006; De Balaguer, Toro, Rodriguez-Fornells, & Bachoud-Lévi, 2007; Kooijman, Hagoort, & Cutler, 2005; McNealy, Mazziotta, & Dapretto, 2006; Sanders, Newport, & Neville, 2002; Teinonen, Fellman, Näätänen, Alku, & Huotilainen, 2009). Nevertheless, some studies also reported statistical learning in the domain of tone segmentation (Abla, Katahira, & Okanoya, 2008; Abla & Okanoya, 2008). Abla et al. (2008) assessed the functional correlates of tone segmentation by means of Electroencephalography (EEG). They recorded Event Related Potentials (ERPs) while participants were exposed to the artificial tone stream proposed by Saffran et al. (1999). The tone-sequences were presented in a random order so that only transitional probabilities within or between the sequences could be used for the extraction of the underlying structure of the stream. Results revealed that the tone sequences were segmented and that the onset of the sequences elicited larger ERP components, namely the N100 and the N400, compared to the tones within the sequences. Furthermore, Abla et al. (2008) showed that these two negative potentials could be differentiated in amplitude according to subjects' performance on the segmenting procedure. In a subsequent study, Abla and Okanoya (2008), using near-infrared spectroscopy, identified the left inferior frontal cortex as another structure involved in statistical segmentation of tone sequences (along with the auditory cortex, which is the main generator of the N100 reported in the previous study). Sanders, Ameral, and Seyles (2009) also investigated statistical learning by means of EEG. They constructed a sound stream composed of sounds that could not easily be rehearsed (such as glass breaking or elephant trumpeting) and found that the onset of a sequence of these sounds elicited larger N100 and N400 responses than the sounds within the sequences. In addition, Francois and Schön (2011) presented an artificial sung language to their subjects and afterwards tested their ability to discriminate between the tone sequences used in the familiarization phase and novel part-tone sequences. Their results indicated that during the testing phase part-tone sequences elicited a larger negativity over fronto-central regions when compared with the tone-sequences that were used in the sung language.

Although the results of these studies reflect the process of segmentation during statistical learning in the brain, it remains unknown how the brain discriminates deviant segments. This
is an important aspect of the neural correlates of statistical learning, because segmentation is valuable in order to predict input and to reliably discriminate new events. Behavioral studies have shown that artificial words or tone-sequences are encoded based on statistical learning, and that they can be discriminated from novel combinations, but the neuronal correlate of this discrimination during the learning procedure is still unclear.

A classic event-related component that has been used to assess discrimination of novel sound events from regular or expected input is the mismatch negativity (MMN). The MMN is an auditory evoked response (AER) to a deviant sound in a stream of standard sounds that typically occurs between 100 ms and 250 ms after the onset of the deviant stimulus (Kujala, Tervaniemi, & Schröger, 2007; Näätänen, 1995). In order to detect deviants, any incoming stimulus must be compared to the internal representation of a previously encoded regularity. Therefore, the presence of an MMN can be interpreted as objective evidence that the brain has encoded the regularity violated by the deviance, and it can be used to assess which aspects of sounds and sound sequences are pre-attentively encoded. The MMN can be elicited by violations of simple aspects of the sound such as frequency, timbre or intensity (Näätänen, Paavilainen, Rinne, & Alho, 2007), but also by violations of complex regularities, sequential patterns and abstract relationships of tones (Paavilainen, Arajarvi, & Takegata, 2007; Picton, Alain, Otten, Ritter, & Achim, 2000; Tervaniemi, Maury, & Näätänen, 1994). Consequently, the MMN constitutes a very valuable tool for the evaluation of statistical learning, since its presence in response to a novel stimulus would indicate that the segmentation process has been achieved.

Musical training has been shown to improve the ability to detect deviance in complex regularities and tone patterns (Fujioka, Trainor, Ross, Kakigi, & Pantev, 2004, 2005; Habermeyer et al., 2009; Herholz, Lappe, & Pantev, 2009; Tervaniemi, Rytkönen, Schröger, Ilmoniemi, & Näätänen, 2001; Zuijen, Sussman, Winkler, Näätänen, & Tervaniemi, 2004, 2005). Specifically, Herholz, Lappe, Knief, and Pantev (2009) used MEG recordings in musicians and non-musicians to reveal that a pattern MMN can be elicited based on a global regularity that can only be picked up over a longer time range. In this study a simple sequential pattern (AAAB) was embedded in an oddball paradigm consisting of only two tones (A, B). The probability of occurrence of the tone pattern (AAAB) was only 0.5, whereas other patterns differed in the number of A tones that were presented before the B tone, with decreasing probability of occurrence of the longer patterns (p[AAAAB] = 0.25, p[AAAAAB] = 0.125, etc.). Violations of the predominant AAAB pattern (every fourth A tone in a row) resulted in a mismatch response. These results imply that the probability distribution of possible patterns within a sequence influences the expectations for upcoming tones. Furthermore, musicians showed a larger and more left-lateralized response than non-musicians, revealing an enhancement of the ability to integrate and analyze tone sequences in musical experts.
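The halving pattern probabilities of that paradigm can be generated in a few lines; the sketch below is our illustration of the described distribution, not the original stimulus code:

```python
import random

def make_pattern_stream(n_patterns, rng):
    """Generate A/B tone patterns in which each extra A before the B halves
    the pattern's probability: p(AAAB) = 0.5, p(AAAAB) = 0.25, and so on."""
    stream = []
    for _ in range(n_patterns):
        extra = 0
        while rng.random() < 0.5:  # geometric number of additional A tones
            extra += 1
        stream += ["A"] * (3 + extra) + ["B"]
    return stream

stream = make_pattern_stream(100, random.Random(0))
```

Every pattern ends with a B preceded by at least three A tones, and longer runs of A become exponentially rarer, matching the probabilities quoted above.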

The aim of the present study is to further assess the statistical learning of tone sequences using an MMN paradigm with MEG. Specifically, we investigated the neural correlates of the detection of deviance from a statistically learned set of tone sequences at a pre-attentive level. Furthermore, the effect of long-term musical training on this ability is investigated by comparing musicians to non-musicians. We hypothesize that part-tone (deviant) sequences embedded randomly in a set of standard sequences (oddball paradigm) during an on-line learning procedure elicit an MMN and that this effect is more pronounced in musicians than in non-musicians. Additionally, another pre-attentive component of the AER, with a latency of about 50 ms after tone onset (P50), was
also compared, since it has been shown to be significantly affected by musical training and at the same time to be sensitive to the occurrence of deviants in oddball paradigms (Boutros & Belger, 1999; Kizkin, Karlidag, Ozcan, & Ozisik, 2006). Moreover, explicit discrimination between standard and deviant sequences was assessed in a behavioral testing session following the MEG measurements.

Fig. 1. Illustration of the design. Each square represents a tone. The outline of the squares groups the sequences. The tones with the black, white and dashed black outline belong to the set of standard sequences. The tones with the gray outline belong to the deviant set and the light-gray tone is the deviant tone of the part-sequence. The ISI was kept constant at 35 ms.

2. Materials and methods

2.1. Subjects

Thirty individuals, 15 musicians and 15 non-musicians, participated in the experiment. Musicians (mean age = 26.93; SD = 5.87; 4 males) were students of the Music Conservatory in Münster (mean musical training = 16.82; SD = 3.87). Non-musicians (mean age = 26.47; SD = 2.53; 4 males) had not received any musical education apart from the compulsory lessons in school. All subjects were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971), and had normal hearing as evaluated by clinical audiometry. Subjects provided written consent prior to their participation in the study. The study protocol was approved by the ethics committee of the medical faculty of the University of Münster and the study was conducted according to the Declaration of Helsinki. None of the subjects had absolute pitch according to self-reports.

2.2. Stimuli

Tone sequences were constructed from combinations of 11 pure tones, similar to Saffran et al. (1999). In particular, 11 sinusoidal tones from within one octave were generated (44,100 Hz, 16 bit). The duration of the tones was 300 ms, including 10 ms rise and decay time. Sequences included in tone sequence set 1 were similar to those described by Saffran et al. (1999): ADB, DFE, GG#A, FCF#, D#ED, CC#D. The transitional probabilities within the sequences averaged 0.64 (min = 0.25, max = 1). Across the sequence boundaries the average transitional probability was 0.14 (min = 0.05, max = 0.60). Also, a second set of part-sequences was built. The sequences of the second set served as deviants in the oddball paradigm. This tone sequence set was constructed from the same tones arranged in a different way, so that each part-sequence was made of the first two tones of a sequence of set 1, while the last tone was new. The tones were combined in a way ensuring that transition probabilities between the sequences were much lower than within the tone sequences. The sequences of set 2 were: ADG#, DFF#, GG#D, FCG#, D#EG#, CC#B. The transitional probabilities within sequences of set 2 averaged 0.59 (min = 0.33; max = 1), while between the sequences they averaged 0.13 (min = 0.1; max = 0.45). The frequency range for both tone sequence sets was common: 261.63–493.88 Hz. Moreover, the mean frequencies of the final tones of the 2 sets were compared, in order to avoid systematic spectral differences between standard and deviant tones. The mean frequency of the final tone for set 1 was 370.136 Hz, SD = 81.89, while for set 2 it was 400.571 Hz, SD = 65.9. Moreover, the frequencies of occurrence of the tones used in the standards and the deviants were compared with a paired samples t-test that revealed no significant difference [t(5) = .510, p > .05]. All other characteristics of the tones (duration, loudness, etc.) were common, since all tones derived from the same pool of stimuli.
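The relationship between the two sets can be checked mechanically. In the sketch below (our illustration; tones written as note-name tuples so that sharps parse unambiguously), every set-2 part-sequence keeps the first two tones of its set-1 counterpart and replaces only the final tone:

```python
# Standard sequences (set 1) and deviant part-sequences (set 2) from the text.
set1 = [("A", "D", "B"), ("D", "F", "E"), ("G", "G#", "A"),
        ("F", "C", "F#"), ("D#", "E", "D"), ("C", "C#", "D")]
set2 = [("A", "D", "G#"), ("D", "F", "F#"), ("G", "G#", "D"),
        ("F", "C", "G#"), ("D#", "E", "G#"), ("C", "C#", "B")]

for std, dev in zip(set1, set2):
    # first two tones preserved, final tone replaced
    assert std[:2] == dev[:2] and std[2] != dev[2]
```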

2.3. Design

Sequences from both sets were randomly combined in order to produce one block (Fig. 1). This block consisted of 400 stimuli of set 1, serving as standards, randomly interleaved with 100 stimuli of set 2, serving as deviants, in an oddball paradigm (probability = 0.2) with 2 constraints: (a) at least 3 standard sequences had to occur between presentations of two deviant sequences and (b) the same sequence could not occur in two successive trials. The ISI was set to 35 ms in order to overcome the sound card latency, which was 25 ms. This interval was also embedded within the sequences (between the tones of each sequence) so that it could not be used as an indicator for the segmentation process. The duration of the block was 8.08 min. The subjects were exposed to 3 such blocks, each with a different randomization. Initially, one shorter stream consisting only of sequences of set 1 served for the establishment of basic representations of the standard sequences. The duration of this stream was 1.94 min and included 180 standard sequences.
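One way to realize such a block programmatically is to fix the deviant positions first, so the minimum gap of 3 standards always holds, and then draw concrete sequences while forbidding immediate repetitions. The following is a sketch under our own assumptions, not the authors' stimulus software:

```python
import random

def deviant_layout(n_std=400, n_dev=100, min_gap=3, rng=None):
    """Choose std/dev roles with >= min_gap standards between deviants."""
    rng = rng or random.Random(0)
    # Standards before the first, between consecutive, and after the last deviant.
    gaps = [0] + [min_gap] * (n_dev - 1) + [0]
    for _ in range(n_std - min_gap * (n_dev - 1)):
        gaps[rng.randrange(len(gaps))] += 1  # spread the remaining standards
    roles = []
    for i, g in enumerate(gaps):
        roles += ["std"] * g
        if i < n_dev:
            roles.append("dev")
    return roles

def fill_block(roles, standards, deviants, rng=None):
    """Assign concrete sequences, never repeating one in successive trials."""
    rng = rng or random.Random(0)
    out, prev = [], None
    for role in roles:
        pool = standards if role == "std" else deviants
        seq = rng.choice([s for s in pool if s != prev])
        out.append(seq)
        prev = seq
    return out

standards = ["ADB", "DFE", "GG#A", "FCF#", "D#ED", "CC#D"]
deviants = ["ADG#", "DFF#", "GG#D", "FCG#", "D#EG#", "CC#B"]
block = fill_block(deviant_layout(), standards, deviants)
```

Placing the deviants via the gap list makes constraint (a) hold by construction, so no rejection sampling is needed; constraint (b) is enforced locally at each draw.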

2.4. MEG recordings

Magnetic fields were recorded with a 275 channel whole-head system (OMEGA, CTF Systems Inc., Port Coquitlam, Canada) in a magnetically shielded room. Data were acquired continuously during each presentation block with a sampling rate of 600 Hz. Subjects were seated upright, and their head position was comfortably fixed with pads inside the dewar. Stimuli were delivered via plastic tubes at 60 dB SL above the individual hearing threshold, which was determined with an accuracy of 5 dB for each ear at the beginning of the MEG session. Participants were instructed not to pay attention to the sound stimuli and watched a soundless movie of their own choice that was projected onto a screen placed in front of the subject. The subject's alertness and compliance were verified by video monitoring. No explanation about the stimuli was provided. The subjects listened to the three blocks with short breaks in between.

2.5. Behavioral measurements

For the behavioral assessment, a test containing all possible 36 pairs of one standard and one deviant sequence was conducted. The tone sequences in each trial were separated by 300 ms and the inter-trial interval was 3 s. Subjects had to indicate which of the two sequences of each pair was more familiar to them. The order of the standard and deviant sequence within each pair was counterbalanced. The test was performed after the MEG measurements and in the same room. Subjects continued sitting in the same position while listening to the stimuli and answered via button presses.

2.6. Data analysis

The continuous data were separated into epochs of 400 ms, starting 100 ms before the critical tone (the last tone of each sequence) and ending 300 ms after tone onset. Epochs containing signals larger than 2.5 pT were considered artifacts and were excluded from the averaging. All epochs were baseline-adjusted based on the 100 ms before tone onset. Standards and deviants were averaged separately. The subset of standards directly preceding the deviants was used in the averaging of the standards so that the two conditions (standards and deviants) share a similar number of epochs and signal-to-noise ratio. Measurements of all three blocks were averaged in order to achieve the best signal-to-noise ratio possible. The averaged data were digitally filtered using a high pass filter of 1 Hz and a low pass filter of 30 Hz.
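The pipeline just described (400 ms epochs, 100 ms pre-stimulus baseline, 2.5 pT rejection threshold) can be sketched as follows; the function name, array layout and use of the 600 Hz sampling rate are our assumptions, not code from the study:

```python
import numpy as np

def average_evoked(data, onsets, sfreq=600.0, tmin=-0.1, tmax=0.3,
                   reject=2.5e-12):
    """Epoch continuous MEG data around critical-tone onsets, baseline-correct
    on the 100 ms before onset, drop epochs exceeding 2.5 pT, and average.

    data:   (n_channels, n_samples) continuous recording in tesla
    onsets: sample indices of the last tone of each sequence
    """
    n0, n1 = int(tmin * sfreq), int(tmax * sfreq)   # -60 and +180 samples
    kept = []
    for s in onsets:
        if s + n0 < 0 or s + n1 > data.shape[1]:
            continue                                 # onset too close to an edge
        epoch = data[:, s + n0 : s + n1].copy()
        epoch -= epoch[:, :-n0].mean(axis=1, keepdims=True)  # pre-onset baseline
        if np.abs(epoch).max() <= reject:            # artifact rejection at 2.5 pT
            kept.append(epoch)
    return np.mean(kept, axis=0), len(kept)
```

A real analysis would additionally average standards and deviants separately and equalize their epoch counts, as the text specifies.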

The technique of signal space projection (Tesche et al., 1995) was used for the analysis of the MEG data, reducing the 275 channel data to one source waveform for each hemisphere. This technique provides substantial discrimination against system noise and uncorrelated brain activity, under the assumption of a single time-varying source at a specified brain region. Source analysis using an equivalent current dipole (ECD) model with two dipoles, one in each hemisphere, was applied, a technique justified by the dipolar distribution of the MMN (Csépe, Pantev, Hoke, Hampson, & Ross, 1992). The two dipoles were fitted simultaneously in a spherical volume conductor to each individual's peak of MMN in the averaged data across the standards and deviants. All dipoles included in the analysis explained at least 90% of the magnetic field variance. The same procedure was followed for the analysis of the P50 as well (i.e. two ECDs fitted simultaneously in a spherical conductor to each individual's peaks of P50 in the averaged data across standards and deviants, explaining at least 90% of the field variance). Source space projections for analysis of the P50 and MMN were based on the dipoles fitted on the P50 and on the MMN, respectively, for
both hemispheres. Source waveforms were computed based on averaged deviants and standards. Standard waveforms were subtracted from deviant waveforms in order to obtain the difference waveform, the MMN.

Amplitudes of the MMN and P50 were entered into the statistical analyses. In order to estimate whether the components differed significantly from zero, nonparametric bootstrapping was applied to the group averaged waveforms for both components in both hemispheres. Time windows in which the 95% confidence interval did not include zero were considered to represent significant deflections. Group and hemisphere effects were assessed separately for the P50 and the MMN. Individual peak amplitudes in a time window of 30 ms around the mean peak of the difference waveforms were entered in a repeated measures ANOVA with between-subject (random effects) factor group (musician, non-musician) and within-subject (fixed effects) factors hemisphere (left, right) and condition (standard, deviant). Moreover, the latencies of the individual peaks of the MMN were analyzed and compared across the groups separately for each hemisphere with a one-way ANOVA. In all analyses the alpha level was set at 0.05 and Bonferroni correction was applied.
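The bootstrap criterion can be sketched as follows (our illustration; resampling subjects with replacement and the percentile method are assumptions about the exact procedure):

```python
import numpy as np

def bootstrap_ci(subject_waveforms, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap of a group-averaged waveform: resample subjects
    with replacement, average each resample, take percentile limits per time
    point, and flag time points whose confidence interval excludes zero.

    subject_waveforms: (n_subjects, n_times) array of difference waveforms
    """
    rng = np.random.default_rng(seed)
    n = subject_waveforms.shape[0]
    boots = np.empty((n_boot, subject_waveforms.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample subjects
        boots[b] = subject_waveforms[idx].mean(axis=0)
    lo = np.percentile(boots, 100 * alpha / 2, axis=0)
    hi = np.percentile(boots, 100 * (1 - alpha / 2), axis=0)
    significant = (lo > 0) | (hi < 0)         # CI excludes zero
    return lo, hi, significant
```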

3. Results

3.1. MEG data

3.1.1. Mismatch negativity
The grand averaged source waveforms, obtained from the individual dipole moments of the MMN, are presented in Fig. 2. Both groups show a significant MMN response to the deviants, as revealed by a nonparametric bootstrap analysis of the difference waveforms. For the group of musicians, the peak amplitude of the MMN in the grand-averaged difference waveforms was M = 6.95 nAm, with bootstrapped 95% confidence intervals CI = 9.77–4.69 nAm for the left hemisphere, and M = 6.45 nAm, CI = 8.9–3.44 nAm for the right hemisphere. For the group of non-musicians the peak amplitude was M = 7.84 nAm, CI = 11.44–4.98 nAm for the left hemisphere and M = 7.83 nAm, CI = 5.31–10.7 nAm for the right hemisphere.

It must be noted that the peak of the grand averaged difference waveforms as seen in Fig. 2 is relatively early for an MMN, but the mean latencies of the peaks of the difference waveforms in the individual data sets justify the characterization of this potential as an MMN. Specifically, the mean latencies of the MMN in the individual difference waveforms of the musicians were M = 154.76 ms, bootstrapped confidence intervals CI = 130.24–183.45 ms, for the left hemisphere and M = 153.45 ms, CI = 127.62–187.62 ms, for the right hemisphere; for the group of non-musicians they were M = 138.11 ms, CI = 121.33–162.67 ms, and M = 144.678 ms, CI = 126.44–173 ms, respectively. A one-way ANOVA of the latencies across the groups did not yield significant results, indicating that the two groups did not differ systematically in the latency of their MMN. Additionally, the amplitudes of the peaks of the difference waveforms for each individual MMN were significantly different from the negative peaks of the respective baseline according to paired sample t-tests [musicians left hemisphere t(14) = 3.618, p < .05; musicians right hemisphere t(14) = 3.456, p < .05; non-musicians left hemisphere t(14) = 2.325, p < .05; non-musicians right hemisphere t(14) = 2.972, p < .05].

The amplitudes of the individual peaks of the standard and the deviant waveforms in a time window of 30 ms around the peak of the difference waveform of each subject and each hemisphere were entered in a 2 × 2 × 2 mixed model ANOVA with between-subject factor group (random effects), and within-subject factors (fixed effects) hemisphere and condition (standard and deviant). The results revealed a main effect of the factor condition [F(1,28) = 15.37, p = .001], but all other comparisons did not reach significance, indicating that the amplitude of the MMN of musicians did not differ from that of non-musicians and that this response was not lateralized.

3.1.2. P50
The grand averaged source waveforms obtained from the individual dipole moments of the P50 are presented in Fig. 3. An enhancement of the P50 response to the deviant stimuli is present in both groups, as revealed by a nonparametric bootstrap analysis of the difference waveforms. For the group of musicians, the peak of the grand average difference waveform was M = 6.58 nAm, with bootstrapped 95% confidence intervals CI = 8.04–4.96 nAm for the left hemisphere, and M = 5.61 nAm, CI = 7.88–3.71 nAm for the right hemisphere. For the group of non-musicians the mean was M = 5.42 nAm, CI = 7.01–4.19 nAm for the left hemisphere and M = 4.47 nAm, CI = 7.78–2.97 nAm for the right hemisphere.

The amplitudes of the individual peaks of the deviant and the standard waveforms, in the time window of 40–80 ms after tone onset, were entered in a 2 × 2 × 2 mixed model ANOVA with within-subject factors condition (standard and deviant) and hemisphere, and between-subject factor group. The results revealed a significant main effect of condition [F(1,28) = 65.850, p < .001] and a significant interaction of condition × group [F(1,28) = 4.773, p < .05], indicating a significant enlargement of the P50 difference in the group of musicians. Neither the interactions group × hemisphere and group × hemisphere × condition, nor the main effects of group and hemisphere were significant. Fig. 4 presents the grand averaged difference waveforms of musicians vs. non-musicians for the left and the right hemisphere, respectively.

In order to determine the source of the significant condition × group interaction, we conducted pair-wise post hoc comparisons on the means involved in this interaction using t-tests. For this post hoc analysis, data were averaged across hemispheres, as the factor hemisphere was not involved in any significant interactions. While the means of the two groups differed neither for standards nor for deviants (independent t-tests, both p > .05), the amplitude difference between standards and deviants was significant both in the musicians and in the non-musicians (paired t-tests, both p ≤ .001, which is significant even after conservative Bonferroni correction). Direct comparison of the standard-deviant subtraction between the groups by means of a two-sample t-test confirmed the increased difference between conditions in the group of musicians compared to the non-musicians (t(28) = −2.185, p < .05). Therefore, the results reveal that the source of the significant interaction was the difference between standards and deviants, which was significantly larger in musicians than in non-musicians, while the responses to the standards or the deviants alone were not significantly different across the two groups.

3.2. Behavioral data

In the behavioral testing, the two groups (musicians and non-musicians) did not differ in accuracy [t(28) = 0.489, p > .05; independent sample t-test]. Moreover, neither group's performance was significantly different from chance level [musicians: t(14) = −1.154, p > .05; non-musicians: t(14) = −1.898, p > .05; one-sample t-test], indicating that the participants did not learn to explicitly distinguish between the two sequence sets (standard and deviant) on the behavioral level. The group averages of percent correct responses are shown in Fig. 5.

4. Discussion

The aim of this study was to investigate the effects of musical training on the functional and neural correlates of statistical learning of tone sequences. The neural responses of musicians and non-musicians were studied by means of the MMN using an oddball paradigm. The stimuli consisted of two tone sequence sets, a standard and a deviant, made of 6 different 3-tone-sequences that were mixed and presented randomly. In the deviant sequences, only the last tone of each 3-tone-sequence was altered in comparison with the standard. Both groups (musicians and non-musicians)
Fig. 2. Grand averaged source waveforms, obtained from the individual dipole moment of MMN for musicians (A) and non-musicians (B). Black lines represent the response to the standard and gray lines the response to the deviant stimuli. The difference waveforms, along with the bootstrapped confidence intervals indicated with gray shade, are also presented for each group. The left hemisphere is presented on the left side and the right on the right.
howed an MMN response to the deviants and this response did notiffer between musicians and non-musicians. Another interestingnding of this study is that both groups showed a significant dif-

erence in P50 between the standards and the deviants as well andhis difference was significantly larger in the group of musicians.he behavioral results indicated that the detection of deviance didot reach the level of awareness.

In our stimulus material, there was a difference in proba-ility of occurrence of semitones within or between standards.hereas there were 5 semitone steps within the standards (G-

#-A; D#-ED; C-C#-D), there were only three possibilities ofemitones between triplets (ADB – CC#D; FCF# – GG#A; DFE

FCF#). As the semitone is an important musical interval, itight be argued that this difference could account for some of

he learning during the initial, short presentation of standardsnly. However, this difference in semitones was only present

uring the short initial phase, whereas there was no differenceetween semitone occurrence between and within triplets in theain part of the experiment, due to the additional presentation

f deviant triplets, which resulted in additional probabilities for

ms along with the bootstrapped confidence intervals indicated with gray shade areght on the right.

semitones occurring between triplets. Also, as the initial learningphase was probably too short (1.94 min) to account for learn-ing of the triplets (i.e. 6 min presentation was not sufficientlylong for statistical learning of tone triplets for most participantsin the study of Abla et al., 2008) we believe that most learn-ing occurred during the main part of the experiment. Therefore,although we cannot exclude entirely some influence of individ-ual intervals, we think that learning relied mostly on transitionalprobabilities, not individual musical intervals throughout theexperiment.
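The transitional-probability cue at issue here can be made concrete with a small sketch. This is an illustrative reconstruction, not the study's stimulus code: the triplet inventory is a hypothetical stand-in loosely modeled on the standards named above, and the third triplet's final tone is relabeled "D2" so that every context is unambiguous in the toy example.

```python
import random
from collections import Counter

# Hypothetical triplet inventory (not the study's actual stimuli);
# "D2" is an artificial label that keeps each tone unambiguous here.
TRIPLETS = [("G", "G#", "A"), ("D#", "E", "D"), ("C", "C#", "D2")]

random.seed(0)
# Build a long exposure stream by concatenating randomly chosen triplets.
stream = [tone for _ in range(2000) for tone in random.choice(TRIPLETS)]

pair_counts = Counter(zip(stream, stream[1:]))   # counts of (tone, next tone)
first_counts = Counter(stream[:-1])              # counts of each context tone

def trans_prob(a, b):
    """Estimated first-order transitional probability P(b | a)."""
    return pair_counts[(a, b)] / first_counts[a]

# Within a triplet the successor is fully determined (probability 1.0);
# across a triplet boundary any of the 3 onsets may follow (~1/3 each),
# and this contrast is the statistical cue a listener can exploit.
print(trans_prob("G", "G#"))  # within-triplet transition
print(trans_prob("A", "G"))   # between-triplet transition
```

The drop in transitional probability at triplet boundaries is exactly the regularity that distinguishes "within-sequence" from "between-sequence" tone pairs in the exposure stream.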

The mismatch of the P50 response between the standard and the deviant stimuli in an oddball paradigm is not very surprising. Boutros et al. (1995) first reported that the P50 response decreases in amplitude to the frequent stimuli of an oddball paradigm consisting of two tones of different frequencies, one standard (500 Hz) and one deviant (1500 Hz, probability: 10%). In a later study, Boutros and Belger (1999) replicated this finding using tones of 1000 Hz (standard) and 1500 Hz (deviant, probability 20%) and an ISI of 2000 ms. Also, Kisley, Noecker, and Guinther (2004) reported that, using a 500 ms ISI in an oddball paradigm of two tones, they found a significant difference in the P50 response to the standard stimuli compared to the deviants. These authors report that this difference was not correlated with the MMN amplitude.

Fig. 3. Grand averaged source waveforms, obtained from the individual dipole moment of P50 for musicians (A) and non-musicians (B). Black lines represent the response to the standard and gray lines the response to the deviant stimuli. The difference waveforms along with the bootstrapped confidence intervals are also presented for each group. The left hemisphere is presented on the left side and the right on the right.

Fig. 4. Grand averaged difference waveforms, obtained from the individual dipole moment of P50 for musicians (gray) and non-musicians (black). The left hemisphere is presented on the left side and the right on the right.

In the above-mentioned studies, this attribute of P50 has been interpreted as an indicator of an active sensory gating process. Sensory gating is a process that reflects reduced neural activity (P50) in response to increased stimulus redundancy, typically studied by pairs of clicks (Adler, Olincy, & Waldo, 1998). The gating theory suggests that each stimulus creates local inhibitory activity that will inhibit (and thus gate) the response to a second identical stimulus in order to avoid the overstimulation of higher cortical centers. This is thought to be developed by an inhibitory process in the cortical neurons themselves. The theory of the active gating process suggests that the observed augmentation of the P50 in the deviant stimuli is not simply a loss of inhibition. Instead, the new stimulus activates an additional inhibitory input capable of inhibiting the activity described above and allows the neurons to respond to the deviant stimulus. Sensory gating is thought to be a multipart mechanism involving inhibitory and excitatory mechanisms, and the difference in P50 is attributed to these active mechanisms (Boutros & Belger, 1999).

Fig. 5. Correct responses of musicians and non-musicians in the behavioral test. The dashed line indicates the chance level. Error bars indicate 95% confidence intervals.

Furthermore, our result of a more reduced sensory gating response (i.e. decrease of P50 in the standards) in musicians has also been reported in another study comparing professional musicians to non-musicians. In this study, Kizkin et al. (2006) used a paradigm consisting of pairs of clicks with an ISI of 500 ms and an interval between pairs of 10 s. This paradigm is known to produce the suppression of P50 as response to the second click of the pair and thus produce the sensory gating response. A subsequent study (Wang, Staffaroni, Reid, Steinschneider, & Sussman, 2009) compared P50 amplitude in musicians and non-musicians using an oddball paradigm. This study revealed a decrease of P50 amplitude to the standards but did not reveal any statistical difference between the two groups. It must be noted, though, that in this study the subjects were high-school students and both groups had only minimal exposure to musical training. The criterion for the determination of musicianship was active participation in a music group or course at the time of testing, and thus the groups might not have differed sufficiently to reveal an effect of musical expertise.

The novel finding of the present study is that the observed difference of P50 amplitude between the standards and the deviants is produced by complex stimuli with varying pitch instead of single repetitive tones. This finding challenges previous beliefs about the speed of decoding the transitional probabilities of a structured input in the auditory system, because whereas earlier studies (Abla et al., 2008; Francois & Schön, 2010; Sanders et al., 2009) show effects on the N100 and N400, we show that different processing of standard and deviant tones is reflected already in the P50. In addition, this process is enhanced in the group of musicians, which reveals that it is a process that can be trained through intensive, specialized and long-term exercise. Further research is necessary to shed light on this mismatch response of P50 and its specific attributes.

An alternative interpretation is that the enhanced P50 response to deviant tones could be explained by a chunking procedure, due to Auditory Scene Analysis (Bregman, 1994), on the basis of the different amount of (a) semitone intervals within and between the tone-sequences (i.e. the probability of occurrence of a semitone interval within the sequences is 0.416 while the probability of occurrence of a semitone interval between them is 0.1423) and (b) 3-tone-sequences constructed by 2 subsequent semitone intervals found within the sequences. The second scenario is less likely since the probability of occurrence of 2 subsequent semitone intervals when one is found between the sequences is 0.114 while when both are within the sequence it is 0.075. Due to the material used, such an interpretation cannot be ruled out. However, the results obtained from the MMN response strongly suggest that chunking did not occur in the paradigm used in the present study. If it had occurred, the chunks would be treated as 3-tone patterns and one would expect to find a group difference (between musicians and non-musicians) in the MMN as well, comparable to what has been found for chunking of tone patterns in similar studies (e.g., Herholz, Lappe, & Pantev, 2009). Within this framework it seems more plausible that the enhanced P50 response was based on the encoding of transitional probabilities and that Auditory Scene Analysis and pattern chunking did not have a major influence on the results of the present study.

In our study, musicians did not show an enhancement of the MMN response compared to the non-musicians, an observation that is at odds with previous studies on pattern MMN (Herholz, Lappe, Knief, et al., 2009; Pantev, Engelien, Candia, & Elbert, 2001) and does not confirm our hypothesis. However, the patterns in our previous study (Herholz, Lappe, Knief, et al., 2009) were based on a numerical regularity, whereas the patterns in this study were based on pitch. It has been shown previously that musicians are more sensitive to violations of a numerical regularity than non-musicians (van Zuijen et al., 2005). Still, given the complexity and the number of different pitch patterns in this study, we would have expected a difference nonetheless. An absence of a difference in MMN between musicians and non-musicians has been found for simple acoustic characteristics, rather than pattern or abstract features, that are not enhanced by musical training, such as pitch, intensity, rhythm and meter discrimination (Fujioka et al., 2004; Geiser, Ziegler, Jancke, & Meyer, 2009; Koelsch, Schröger, & Tervaniemi, 1999; Seppänen, Brattico, & Tervaniemi, 2007; Tervaniemi, Just, Koelsch, Widmann, & Schröger, 2005). It seems that the discrimination of these acoustic characteristics is not enhanced by explicit musical training, since everyone is an expert in these capacities (Bigand & Poulin-Charronnat, 2006).

The behavioral results indicated that the encoding of the tone sequence sets did not reach the level of awareness. There is considerable evidence that MMN can be detected even if there is no conscious recognition of the patterns, and this is also in line with previous studies that argue in favor of the necessity of attention in order for statistical learning to be detected behaviorally (Cunillera et al., 2009; Toro, Sinnett, & Soto-Faraco, 2005; Turk-Browne, Scholl, Johnson, & Chun, 2010). Also, the fact that the two groups did not differ in their behavioral results is in line with the study of Francois and Schön (2011). These authors used both behavioral and neurophysiological measurements to investigate the effect of musical expertise on implicit learning. Their results indicated that the effect of musical expertise, despite a trend, was not significant in the behavioral measurements but it was significant in the neurophysiological correlates. Nevertheless, the presence of the functional and neural detection of deviance from the standard sequence-set (revealed by the decrease of P50 in the standards and the MMN) clearly shows that the standard tone sequences were encoded at a pre-attentive level. The above observation implies that the segmentation procedure takes place at a pre-attentive level and that the resulting representation of the standards is available for comparison with new input, but access to this implicit knowledge is possible only if attention is paid to the stimuli. This observation is similar to Lamme's theory about the role of attention in awareness in the visual domain (Lamme, 2003) and is in line with the study of Custers and Aarts (2010) about the role of attention and awareness in predictions based on relations between successive events. Specifically, Custers and Aarts (2010) argue that predictive relations between events are learned and stored in memory depending on whether attention is focused to process these relations as bi- or uni-directional, and this does not necessarily require awareness of the relation itself. Moreover, this study used an explicit learning test, and thus the reflection of a statistical learning effect in an implicit test remains to be studied in the future. Further research is also necessary in order for the role of attention in statistical learning to be fully understood.

The behavioral results of this study indicate that there was no explicit pattern recognition, and the electrophysiological responses were similar to the response to a simple frequency deviance; i.e. the MMN did not differ between the two groups. A cohesive interpretation of these results can be based on the predictive coding framework (Friston, 2005). According to this theory, the brain is regarded as a hierarchically organized system, in which each level tries to attain a compromise between bottom-up information about sensory inputs, provided by the level below, and top-down predictions, provided by the level above. The auditory input is compared with the expected tone and produces a signal that codes this prediction error. When no error occurs, there is a suppression of this signal. In the paradigm applied in our study, due to the complexity of the patterns, the very short ISI and the lack of attentional resources, the sequences seem not to be coded as complete three-tone patterns. Instead, the system seems to generate an expectation of the tone that will follow after the two preceding ones in the part sequences, according to the transitional probabilities and the global characteristics of the tone sequence set. The violation of this expectation probably functions as a signal of a pitch deviating from the one expected, thus producing a pitch deviance response, while the sequence may not be coded as a coherent pattern. The superiority of musicians in sensory gating, which has also been described elsewhere (Kizkin et al., 2006), is proposed as a cause for the difference of the P50 response between the groups.

5. Conclusion

This study provides evidence that the auditory system predicts events based on the transitional probabilities of auditory input for tonal stimuli and uses this ability in order to discriminate new events that need more processing. Moreover, the detection of deviance can be rather fast, as reflected by the mismatch response of P50, and can be enhanced by musical training, confirming its influence on the functional plasticity of the auditory system. Additionally, the present study generates questions about the role of attention in statistical learning and its relation to explicit or implicit use of the extracted information.


Acknowledgments

We would like to thank Dr. T. Fujioka for thoughtful comments on an earlier version of this manuscript and our test subjects for their cooperation. This work was supported by the Deutsche Forschungsgemeinschaft [PA392/12-2 and HE6067-1/1].

References

Abla, D., Katahira, K., & Okanoya, K. (2008). On-line assessment of statistical learning by event-related potentials. Journal of Cognitive Neuroscience, 20(6), 952–964. doi:10.1162/jocn.2008.20058

Abla, D., & Okanoya, K. (2008). Statistical segmentation of tone sequences activates the left inferior frontal cortex: A near-infrared spectroscopy study. Neuropsychologia, 46(11), 2787–2795. doi:10.1016/j.neuropsychologia.2008.05.012

Adler, L., Olincy, A., & Waldo, M. (1998). Schizophrenia, sensory gating, and nicotinic receptors. Schizophrenia Bulletin, 24(2), 189–202.

Aslin, R. N., Saffran, J. R., & Newport, E. L. (1998). Computation of conditional probability statistics by 8-month-old infants. Psychological Science, 9(4), 321–324. doi:10.1111/1467-9280.00063

Bigand, E. (2003). More about the musical expertise of musically untrained listeners. Annals of the New York Academy of Sciences, 999(1), 304–312. doi:10.1196/annals.1284.041

Bigand, E., & Poulin-Charronnat, B. (2006). Are we experienced listeners? A review of the musical capacities that do not depend on formal musical training. Cognition, 100(1), 100–130. doi:10.1016/j.cognition.2005.11.007

Boutros, N. N., & Belger, A. (1999). Midlatency evoked potentials attenuation and augmentation reflect different aspects of sensory gating. Biological Psychiatry, 45(7), 917–922. doi:10.1016/S0006-3223(98)00253-4

Boutros, N. N., Torello, M. W., Barker, B. A., Tueting, P. A., Wu, S. C., & Nasrallah, H. A. (1995). The P50 evoked potential component and mismatch detection in normal volunteers: Implications for the study of sensory gating. Psychiatry Research, 57(1), 83–88. doi:10.1016/0165-1781(95)02637-C

Bregman, A. (1994). Auditory scene analysis: The perceptual organization of sound. The MIT Press.

Conway, C. M., Bauernschmidt, A., Huang, S. S., & Pisoni, D. B. (2010). Implicit statistical learning in language processing: Word predictability is the key. Cognition, 114(3), 356–371. doi:10.1016/j.cognition.2009.10.009

Csépe, V., Pantev, C., Hoke, M., Hampson, S., & Ross, B. (1992). Evoked magnetic responses of the human auditory cortex to minor pitch changes: Localization of the mismatch field. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section, 84(6), 538–548. doi:10.1016/0168-5597(92)90043-B

Cunillera, T., Càmara, E., Toro, J. M., Marco-Pallarés, J., Sebastián-Gallés, N., Ortiz, H., et al. (2009). Time course and functional neuroanatomy of speech segmentation in adults. NeuroImage, 48(3), 541–553. doi:10.1016/j.neuroimage.2009.06.069

Cunillera, T., Toro, J. M., Sebastián-Gallés, N., & Rodríguez-Fornells, A. (2006). The effects of stress and statistical cues on continuous speech segmentation: An event-related brain potential study. Brain Research, 1123(1), 168–178. doi:10.1016/j.brainres.2006.09.046

Custers, R., & Aarts, H. (2010). Learning of predictive relations between events depends on attention, not on awareness. Consciousness and Cognition. doi:10.1016/j.concog.2010.05.011

De Diego Balaguer, R., Toro, J. M., Rodríguez-Fornells, A., & Bachoud-Lévi, A.-C. (2007). Different neurophysiological mechanisms underlying word and rule extraction from speech. PLoS One, 2(11), e1175. doi:10.1371/journal.pone.0001175

Francois, C., & Schön, D. (2010). Learning of musical and linguistic structures: Comparing event-related potentials and behavior. NeuroReport, 1–5. doi:10.1097/WNR.0b013e32833ddd5e

Francois, C., & Schön, D. (2011). Musical expertise boosts implicit learning of both musical and linguistic structures. Cerebral Cortex. Oxford University Press.

Friston, K. J. (2005). A theory of cortical responses. Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences, 360(1456), 815–836. doi:10.1098/rstb.2005.1622

Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., & Pantev, C. (2004). Musical training enhances automatic encoding of melodic contour and interval structure. Journal of Cognitive Neuroscience, 16(6), 1010–1021. doi:10.1162/0898929041502706

Fujioka, T., Trainor, L. J., Ross, B., Kakigi, R., & Pantev, C. (2005). Automatic encoding of polyphonic melodies in musicians and nonmusicians. Journal of Cognitive Neuroscience, 17(10). doi:10.1162/089892905774597263

Geiser, E., Ziegler, E., Jancke, L., & Meyer, M. (2009). Early electrophysiological correlates of meter and rhythm processing in music perception. Cortex, 45(1), 93–102. doi:10.1016/j.cortex.2007.09.010

Habermeyer, B., Herdener, M., Esposito, F., Hilti, C. C., Klarhöfer, M., di Salle, F., et al. (2009). Neural correlates of pre-attentive processing of pattern deviance in professional musicians. Human Brain Mapping, 30(11), 3736–3747. doi:10.1002/hbm.20802

Herholz, S. C., Lappe, C., Knief, A., & Pantev, C. (2009). Imagery mismatch negativity in musicians. Annals of the New York Academy of Sciences, 1169, 173–177. doi:10.1111/j.1749-6632.2009.04782.x

Herholz, S. C., Lappe, C., & Pantev, C. (2009). Looking for a pattern: An MEG study on the abstract mismatch negativity in musicians and nonmusicians. BMC Neuroscience, 10, 42. doi:10.1186/1471-2202-10-42

Jäncke, L. (2009a). The plastic human brain. Restorative Neurology and Neuroscience.

Jäncke, L. (2009b). Music drives brain plasticity. F1000 Biology Reports (doi:10.3410/B1-78, pp. 1–78).

Kirkham, N., Slemmer, J., & Johnson, S. (2002). Visual statistical learning in infancy: Evidence for a domain general learning mechanism. Cognition, 83(2), B35–B42. doi:10.1016/S0010-0277(02)00004-5

Kisley, M. A., Noecker, T. L., & Guinther, P. M. (2004). Comparison of sensory gating to mismatch negativity and self-reported perceptual phenomena in healthy adults. Psychophysiology, 41(4), 604–612. doi:10.1111/j.1469-8986.2004.00191.x

Kizkin, S., Karlidag, R., Ozcan, C., & Ozisik, H. I. (2006). Reduced P50 auditory sensory gating response in professional musicians. Brain and Cognition, 61(3), 249–254. doi:10.1016/j.bandc.2006.01.006

Koelsch, S., Schröger, E., & Tervaniemi, M. (1999). Superior pre-attentive auditory processing in musicians. NeuroReport, 10(6), 1309.

Kooijman, V., Hagoort, P., & Cutler, A. (2005). Electrophysiological evidence for prelinguistic infants' word recognition in continuous speech. Brain Research. Cognitive Brain Research, 24(1), 109–116. doi:10.1016/j.cogbrainres.2004.12.009

Kujala, T., Tervaniemi, M., & Schröger, E. (2007). The mismatch negativity in cognitive and clinical neuroscience: Theoretical and methodological considerations. Biological Psychology, 74(1), 1–19. doi:10.1016/j.biopsycho.2006.06.001

Lamme, V. (2003). Why visual attention and awareness are different. Trends in Cognitive Sciences, 7(1), 12–18. doi:10.1016/S1364-6613(02)00013-X

McNealy, K., Mazziotta, J. C., & Dapretto, M. (2006). Cracking the language code: Neural mechanisms underlying speech parsing. The Journal of Neuroscience, 26(29), 7629–7639. doi:10.1523/jneurosci.5501-05.2006

Münte, T. F., Altenmüller, E., & Jäncke, L. (2002). The musician's brain as a model of neuroplasticity. Nature Reviews Neuroscience, 3(6), 473–478. doi:10.1038/nrn843

Näätänen, R. (1995). The mismatch negativity: A powerful tool for cognitive neuroscience. Ear and Hearing, 16(1), 6.

Näätänen, R., Paavilainen, P., Rinne, T., & Alho, K. (2007). The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology, 118(12), 2544–2590. doi:10.1016/j.clinph.2007.04.026

Oldfield, R. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. doi:10.1016/0028-3932(71)90067-4

Paavilainen, P., Arajarvi, P., & Takegata, R. (2007). Preattentive detection of nonsalient contingencies between auditory features. NeuroReport, 18(2), 159–163. doi:10.1097/WNR.0b013e328010e2ac

Pantev, C. (2009). Part III introduction: Musical training and induced cortical plasticity. Annals of the New York Academy of Sciences, 1169, 131–132. doi:10.1111/j.1749-6632.2009.04867.x

Pantev, C., Engelien, A., Candia, V., & Elbert, T. (2001). Representational cortex in musicians. Annals of the New York Academy of Sciences, 930(1), 300–314. doi:10.1111/j.1749-6632.2001.tb05740.x

Pantev, C., Ross, B., Fujioka, T., & Trainor, L. J. (2003). Music and learning induced cortical plasticity. Annals of the New York Academy of Sciences, 999(1), 4–14. doi:10.1196/annals.1284.001

Picton, T. W., Alain, C., Otten, L. J., Ritter, W., & Achim, A. (2000). Mismatch negativity: Different water in the same river. Audiology and Neurotology, 5(3–4), 111–139. doi:10.1159/000013875

Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274(5294), 1926. doi:10.1126/science.274.5294.1926

Saffran, J. R., Johnson, E. K., Aslin, R. N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70(1), 27–52. doi:10.1016/S0010-0277(98)00075-4

Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606–621. doi:10.1006/jmla.1996.0032

Saffran, J. R., Newport, E. L., Aslin, R. N., Tunick, R. A., & Barrueco, S. (1997). Incidental language learning: Listening (and learning) out of the corner of your ear. Psychological Science, 8(2), 101–105.

Sanders, L. D., Ameral, V., & Sayles, K. (2009). Event-related potentials index segmentation of nonsense sounds. Neuropsychologia, 47(4), 1183–1186. doi:10.1016/j.neuropsychologia.2008.11.005

Sanders, L. D., Newport, E. L., & Neville, H. J. (2002). Segmenting nonsense: An event-related potential index of perceived onsets in continuous speech. Nature Neuroscience, 5(7), 700–703. doi:10.1038/nn873

Schlaug, G. (2001). The brain of musicians. Annals of the New York Academy of Sciences, 930(1), 281–299. doi:10.1111/j.1749-6632.2001.tb05739.x

Seppänen, M., Brattico, E., & Tervaniemi, M. (2007). Practice strategies of musicians modulate neural processing and the learning of sound-patterns. Neurobiology of Learning and Memory, 87(2), 236–247. doi:10.1016/j.nlm.2006.08.011

Teinonen, T., Fellman, V., Näätänen, R., Alku, P., & Huotilainen, M. (2009). Statistical language learning in neonates revealed by event-related brain potentials. BMC Neuroscience, 10(1), 21. doi:10.1186/1471-2202-10-21

Tervaniemi, M., Just, V., Koelsch, S., Widmann, A., & Schröger, E. (2005). Pitch discrimination accuracy in musicians vs nonmusicians: An event-related potential and behavioral study. Experimental Brain Research, 161(1), 1–10. doi:10.1007/s00221-004-2044-5

Tervaniemi, M., Maury, S., & Näätänen, R. (1994). Neural representations of abstract stimulus features in the human brain as reflected by the mismatch negativity. NeuroReport, 5(7), 844–846.

Tervaniemi, M., Rytkönen, M., Schröger, E., Ilmoniemi, R., & Näätänen, R. (2001). Superior formation of cortical memory traces for melodic patterns in musicians. Learning & Memory, 8(5), 295–300. doi:10.1101/lm.39501

Tesche, C., Uusitalo, M., Ilmoniemi, R., Huotilainen, M., Kajola, M., & Salonen, O. (1995). Signal-space projections of MEG data characterize both distributed and well-localized neuronal sources. Electroencephalography and Clinical Neurophysiology, 95(3), 189–200. doi:10.1016/0013-4694(95)00064-6

Tillmann, B., Bharucha, J. J., & Bigand, E. (2000). Implicit learning of tonality: A self-organizing approach. Psychological Review, 107(4), 885–913.

Tillmann, B., & McAdams, S. (2004). Implicit learning of musical timbre sequences: Statistical regularities confronted with acoustical (dis)similarities. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(5), 1131–1142. doi:10.1037/0278-7393.30.5.1131

Toro, J. M., Sinnett, S., & Soto-Faraco, S. (2005). Speech segmentation by statistical learning depends on attention. Cognition. doi:10.1016/j.cognition.2005.01.006

Turk-Browne, N. B., Scholl, B. J., Johnson, M. K., & Chun, M. M. (2010). Implicit perceptual anticipation triggered by statistical learning. Journal of Neuroscience, 30(33), 11177–11187. doi:10.1523/jneurosci.0858-10.2010

Wang, W., Staffaroni, L., Reid, E., Steinschneider, M., & Sussman, E. (2009). Effects of musical training on sound pattern processing in high-school students. International Journal of Pediatric Otorhinolaryngology, 73(5), 751–755. doi:10.1016/j.ijporl.2009.02.003

Zatorre, R. J. (2005). Music, the food of neuroscience? Nature, 434(7031), 312–315. doi:10.1038/434312a

van Zuijen, T. L., Sussman, E., Winkler, I., Näätänen, R., & Tervaniemi, M. (2004). Grouping of sequential sounds—An event-related potential study comparing musicians and nonmusicians. Journal of Cognitive Neuroscience, 16(2), 331–338. doi:10.1162/089892904322984607

van Zuijen, T. L., Sussman, E., Winkler, I., Näätänen, R., & Tervaniemi, M. (2005). Auditory organization of sound sequences by a temporal or numerical regularity—A mismatch negativity study comparing musicians and non-musicians. Brain Research. Cognitive Brain Research, 23(2–3), 270–276. doi:10.1016/j.cogbrainres.2004.10.007