
Dissociations among functional subsystems governing melody recognition after right-hemisphere damage


Cognitive Neuropsychology. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/pcgn20

Dissociations among functional subsystems governing melody recognition after right-hemisphere damage
Willi R. Steinke, Lola L. Cuddy & Lorna S. Jakobson
Published online: 09 Sep 2010.

To cite this article: Willi R. Steinke, Lola L. Cuddy & Lorna S. Jakobson (2001) Dissociations among functional subsystems governing melody recognition after right-hemisphere damage, Cognitive Neuropsychology, 18:5, 411-437

To link to this article: http://dx.doi.org/10.1080/02643290125702

DISSOCIATIONS AMONG FUNCTIONAL SUBSYSTEMS GOVERNING MELODY RECOGNITION AFTER RIGHT-HEMISPHERE DAMAGE

Willi R. Steinke and Lola L. Cuddy
Queen's University at Kingston, Canada

Lorna S. Jakobson
University of Manitoba, Winnipeg, Manitoba, Canada

This study describes an amateur musician, KB, who became amusic following a right-hemisphere stroke. A series of assessments conducted post-stroke revealed that KB functioned in the normal range for most verbal skills. However, compared with controls matched in age and music training, KB showed severe loss of pitch and rhythmic processing abilities. His ability to recognise and identify familiar instrumental melodies was also lost. Despite these deficits, KB performed remarkably well when asked to recognise and identify familiar song melodies presented without accompanying lyrics. This dissociation between the ability to recognise/identify song vs. instrumental melodies was replicated across different sets of musical materials, including newly learned melodies. Analyses of the acoustical and musical features of song and instrumental melodies discounted an explanation of the dissociation based on these features alone. Rather, the results suggest a functional dissociation resulting from a focal brain lesion. We propose that, in the case of song melodies, there remains sufficient activation in KB's melody analysis system to coactivate an intact representation of both associative information and the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist; thus recognition and identification do not occur.

INTRODUCTION

This study is concerned with the recognition of familiar melodies, both song melodies (melodies learned with lyrics) and instrumental melodies (melodies learned without lyrics). We report what we believe to be the first recorded instance of a dissociation between recognition of song versus instrumental melodies secondary to brain injury.

We present our observations on a 64-year-old amateur musician, KB, who suffered a right-hemisphere stroke, and on controls matched for age and level of music training. Despite evidence of impaired performance on a variety of musical tasks, KB displayed remarkably spared recognition of familiar song melodies even when these melodies were presented without accompanying lyrics.

COGNITIVE NEUROPSYCHOLOGY, 2001, 18 (5), 411-437
© 2001 Psychology Press Ltd    http://www.tandf.co.uk/journals/pp/02643294.html    DOI: 10.1080/02643290042000198

Requests for reprints should be addressed to Lola L. Cuddy, Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada (Email: [email protected]).

We thank KB and his wife for their invaluable cooperation. We thank Dr I. Peretz for thoughtful and constructive comments on an earlier draft of this paper. Dr Sylvie Hébert and Dr Peretz provided ideas and insights that inspired the present discussion. We acknowledge detailed and helpful advice from two anonymous referees. Dr Jill Irwin and members of the Acoustical Laboratory at Queen's University provided support and assistance. The research was supported by a Medical Research Council of Canada Studentship to the first author, and by research grants from the Natural Sciences and Engineering Research Council of Canada to the second and third authors.

The results address two interrelated issues in music neuropsychology. The first issue concerns the processes and functions governing melody recognition. Consideration of this issue leads to a second: the nature of the integration of melody and speech in the acquisition, retention, and recognition of songs. To address these issues, we draw upon research results from neurologically intact populations (for reviews, see Dowling & Harwood, 1986; Krumhansl, 1990, 1991) and reports of selective loss and sparing following brain damage (for reviews, see Marin & Perry, 1999; Patel & Peretz, 1997; Peretz, 1993).

Melody recognition

Peretz (1993, 1994) and others (e.g., Zatorre, 1984) have argued against a simple division of music and language each assigned to separate cerebral hemispheres. Rather, both music and language are themselves viewed as divisible into components which may or may not be shared. Division of music in the Western tonal system would include melody, with which this paper is primarily concerned, and also dynamics, grouping, harmony, timbre, and so forth. Within the present concern with melody, there are at least two further components.

Musical melodies can be characterised by a particular set of pitch relations and temporal relations among their notes. It has been suggested that the processing of pitch relations and of temporal relations contributes separately to melody recognition (Peretz & Kolinsky, 1993), a suggestion supported by research with neurologically intact (e.g., Monahan & Carterette, 1985; Palmer & Krumhansl, 1987a, b) and compromised participants (e.g., Peretz & Kolinsky, 1993). Listeners may recognise melodies given pitch or temporal relations alone, although they are able to recognise far more melodies from pitch than from temporal relations (Hébert & Peretz, 1997; White, 1960). There may also be differences in the neural substrates for processing pitch and temporal relations, with the right hemisphere more strongly implicated for pitch, but the evidence to date is not conclusive (Marin & Perry, 1999).

The issue is complicated by indications that each source of information may influence the processing of the other (see, for example, Bigand, 1993; Boltz, 1989; Boltz & Jones, 1986). As pointed out by Kolinsky (1969), the same pattern of pitches can be perceived as two different melodies if the temporal pattern is changed. (Consider, for example, that the first seven pitches of "Rock of Ages" and "Rudolph, the Red-Nosed Reindeer" are identical.) Thus, the extent to which pitch and temporal effects are additive or interact with one another remains in question.

What does appear to be increasingly clear is that the pitch processing involved in melody recognition is separable from verbal abilities (Peretz, 1993; Peretz, Belleville, & Fontaine, 1997; Polk & Kertesz, 1993) and other nonmusic cognitive abilities (Koh, Cuddy, & Jakobson, in press; Steinke, Cuddy, & Holden, 1997). The relation between pitch processing ability and music training is modest and does not account for the dissociation from cognitive abilities (Koh et al.; Steinke et al.).

Song

The case of song processing poses a somewhat different problem, one that necessitates consideration of text in addition to pitch and temporal components. A song, by definition, consists of integrated melody and speech, or text. Yet how or where this integration is achieved is not known. Processing of song and memory for song has received relatively little attention in the neuropsychological literature.

Song is a universal form of auditory expression in which music and speech are intrinsically related. Song thus represents an ideal case for assessing the separability or commonality of music and language. ... In song memory, the musical and the linguistic component may be represented in some combined or common code. (Patel & Peretz, 1997, p. 206)

According to Peretz (1993), melody recognition involves the activation of a stored melody in a melody lexicon. The lexicon may be activated by either or both of two separate kinds of information, resulting from the analysis of pitch and temporal features, respectively. In addition, lyrics and song titles may provide a third route through which song melodies (but not instrumental melodies) are recognised.
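To make this three-route architecture concrete, the toy sketch below is our illustration only: the route names, weights, and threshold are assumptions for exposition, not part of Peretz's model or of this paper. It shows how a song melody could still reach recognition threshold through an intact lyric route even when the pitch and temporal analyses feeding the melody lexicon are degraded, whereas an instrumental melody, which has no lyric route, would fail.

```python
# Toy schematic of the three hypothetical activation routes described
# above (illustration only; the weights and threshold are invented).

def lexicon_activation(pitch_evidence, temporal_evidence, lyric_evidence,
                       threshold=1.0):
    """Sum the evidence reaching a stored melody's lexicon entry and
    report whether it crosses a recognition threshold."""
    total = pitch_evidence + temporal_evidence + lyric_evidence
    return total >= threshold, total

# Intact listener: pitch and temporal analysis alone suffice.
print(lexicon_activation(0.7, 0.5, 0.0))    # (True, 1.2)

# KB-like case: degraded pitch/temporal output. An instrumental melody
# (no lyric route) fails to reach threshold ...
print(lexicon_activation(0.3, 0.2, 0.0))    # (False, 0.5)

# ... while a song melody, boosted by an associated entry in the speech
# lexicon, is still recognised.
print(lexicon_activation(0.3, 0.2, 0.6))    # (True, 1.1)
```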

Others have questioned whether the words and melodies of songs are processed and stored independently and have reported an integration effect (Crowder, Serafine, & Repp, 1990; Samson & Zatorre, 1991; Serafine, Crowder, & Repp, 1984). The experiments in these studies typically employed a forced-choice recognition memory paradigm. Listeners were asked to study and then to recognise melody, lyrics, or both under conditions where melody and lyrics either matched or did not match. Matches between melody and lyrics resulted in higher recognition rates than mismatches. Recognition of melody and lyrics was facilitated for the original song pairing the two components as opposed to a new song mixing equally familiar components. Similar results, using the same paradigm, have been obtained by Morrongiello and Roes (1990) and by Serafine, Davidson, Crowder, and Repp (1986).

Peretz et al. (1994, p. 1298; see also Patel & Peretz, 1997) have pointed out that the use of two-alternative forced-choice paradigms may limit the generality of these findings. They suggest that listeners may have formed integrated representations of melody and lyrics in memory to facilitate recognition of novel songs when unfamiliar, interfering melodies and lyrics appeared among the response choices. In contrast, for the representation of well-known songs, a strategy of encoding melody and text separately may be more effective since, in everyday (i.e., nonexperimental) situations, a given melodic line will typically carry different lyrics or verses. Peretz et al. (1994) conclude that for familiar songs "encoding melody and lyrics independently would be parsimonious and efficient" (p. 1298).

In related research, Besson, Faïta, Peretz, Bonnel, and Requin (1998) asked professional musicians to listen to well-known opera excerpts ending with semantically congruous or incongruous words sung either in or out of key. Event-related potentials recorded for the co-occurrence of semantic and harmonic (musical) violations were well predicted by an additive combination of the results recorded for each violation alone. The authors argue for the view that semantic and harmonic violations are processed independently. On the other hand, Patel (1998) has reported event-related potential data supporting the sharing of linguistic and musical resources where syntactic relations are violated. These recent findings illustrate the complexity of the systems engaged in the acquisition and representation of song.

Purpose of the present report

In the present report, we describe the case of KB, an amateur musician who demonstrates a highly unusual type of amusia. This case provides an opportunity to address questions about the organisation of melody recognition and the relationship of verbal processing to nonverbal processing in song recognition. Following the case report of KB and a description of general procedures, we report a series of test findings in three experiments.

In Experiment 1, we describe the results of tests of pitch perception, rhythm perception, rhythm production, and melody recognition abilities for KB and for a group of neurologically intact controls. Through this testing, we uncovered a remarkable and intriguing dissociation in KB. Although he showed profound deficits in the music tests, he was nonetheless able to recognise the melodies of well-known songs played without lyrics. The prerequisite for successful recognition was that KB had learned the melodies with lyrics premorbidly. Instrumental melodies readily recognised by controls could not, in contrast, be reliably recognised by KB. In Experiment 2, we report the results of additional tests designed to examine the sparing of song memory in greater detail. We replicated the difference in recognition of song versus instrumental melodies for KB. Analyses of the acoustical and musical features of both types of melody led us to discount an explanation of the difference on the basis of these features alone. Experiment 3 addresses the question of whether KB has retained the ability to learn new melodies. The results of this experiment indicated some residual ability to recognise novel melodies learned with lyrics.

CASE REPORT: KB

KB is a right-handed male with 13 years of formal education, including completion of Grade 10, Police College, and a number of additional training courses. He worked as a policeman, prison guard, and process server (of official court documents) until the time of his retirement at age 55. KB reported a moderately extensive background in music. He played trumpet and drums for approximately 3 years in high school. After high school, he sang both in a barbershop quartet and in a number of amateur operettas for about 10 years in total. KB reported limited ability to read music notation and stated that he typically learned his parts by ear. His spouse reports that KB often sang at home and had a collection of classical and popular records that he played regularly. She stated that he had a fine singing voice. Unfortunately no recordings of KB's singing exist.

Neurological history

KB was admitted to hospital in July 1994, at age 64, complaining of left-sided paralysis and speech production difficulties. His speech impairment resolved after a few days but his left-sided weakness has persisted. KB underwent a series of CT scans, the first shortly after admission, and the others 6, 8, and 12 months later. These scans revealed evidence of focal damage in the right fronto-parietal area (see Figure 1), and to a lesser extent in the right cerebellum (both in the posterior-inferior aspect, and in the superior cerebellar peduncle-pontine region). A lacunar infarct was also noted in the right lenticular nucleus (G. Bearalot, personal communication, February, 2000). In addition to focal damage, diffuse brain atrophy consistent with age was noted, although it is highly unlikely that these latter changes could account for KB's amusia (B. Pearse, personal communication, April, 1996).

KB's previous neurological history included a self-report of a "mild stroke" suffered in 1968 when he was 38 years of age. According to the patient and his wife, this event resulted in an isolated, transient memory impairment from which KB recovered completely. No objective diagnostic examinations were undertaken at the time of this event, and recent CT scans did not reveal the presence of an old infarct.

Initial neuropsychological assessment

During the seventh through tenth weeks after his stroke in 1994, KB's intellectual and emotional functioning was assessed by a staff psychologist with a standard neuropsychological test battery (see Table 1). Test results from the Wechsler Adult Intelligence Scale-Revised (WAIS-R) (Wechsler, 1981) revealed a Full Scale IQ in the normal range, but a significantly lowered Performance IQ score. In order to obtain an estimate of premorbid IQ, the North American Adult Reading Test (NAART) (Blair & Spreen, 1989) was administered. The consistency between estimated premorbid Verbal IQ and postmorbid Verbal IQ scores suggests that KB's stroke left his verbal abilities intact. Nonverbal abilities, in contrast, were impaired, as suggested by scores on the Performance subscale of the WAIS-R. KB's low score on the Raven's Coloured Progressive Matrices (Raven, 1965), a nonverbal test used to measure general intellectual functioning, also indicated a decline in nonverbal abilities.

Table 1. Demographic data and initial neuropsychological assessment

Age (years)                                   64
Sex                                           Male
Education (years)                             13
Wechsler Adult Intelligence Scale (Revised)   Full Scale IQ = 92
                                              Verbal IQ = 103
                                              Performance IQ = 80
Wechsler Memory Scale-Form 1                  Memory quotient = 109
Trail Making Test                             <25th percentile
Raven's Coloured Progressive Matrices Test    <25th percentile
Rey-Osterrieth Complex Figure Test            <25th percentile
House-Tree-Person Test                        Indicative of RH stroke
Wisconsin Card Sort                           <25th percentile
Boston Diagnostic Aphasia Examination         No evidence of aphasia
North American Adult Reading Test (NAART)     Full Scale IQ = 107
  (estimated premorbid WAIS-R IQ)             Verbal IQ = 107
                                              Performance IQ = 108
Identification of musical instruments         13/17 correctly identified
Identification of nonspeech sounds            28/29 correctly identified

On the Wechsler Memory Scale-Form 1 (Stone, Girdner, & Albrecht, 1946) KB also scored in the normal range, supporting clinical observation and KB's self-report that memory was unimpaired following the stroke. However, on a number of other tests considered sensitive to brain damage, described below, KB's performance was poor (below the 25th percentile). His low score on the Trail Making Test indicated difficulties in sequencing of symbols, and alternating between alphabetic and numeric symbols. The Wisconsin Card Sort (Heaton, 1981) requires abstraction ability to determine correct sorting principles as well as mental flexibility to accommodate to shifting criterion sets; poor performance is associated with frontal lobe damage. Low scores on the Rey-Osterrieth Complex Figure Test (Rey, 1941) and the House-Tree-Person Test are frequently seen after right posterior damage. KB's performance on the House-Tree-Person Test, in particular, was marked by adoption of a piecemeal approach, perseverative tendencies, left-sided neglect, "working over," and much detail. Signs of emotional lability and impulsivity were also noted.

KB reported no history of hearing loss and no changes in auditory acuity post-stroke. He showed intact ability to identify nonspeech, environmental sounds (compact disc recording Dureco 1150582; see Table 1). In addition, when tested with a recorded set of solo instrumental pieces he was able to identify 13 of 17 musical instruments correctly (including piano, harp, trumpet, french horn, tuba, piccolo, flute, oboe, clarinet, double bass, cello, viola, and violin). His misidentifications included labeling both a bassoon and a saxophone as an oboe, mistaking a guitar for a piano, and being unable to name a harpsichord.

Figure 1. Transverse CT scans obtained from KB 12 months post-stroke. Note that the right side of the brain is shown on the left in each image. The four images, labelled a-d, proceed inferiorly in 10 mm increments. The scans show focal damage in the right fronto-parietal lobe (see text for details).

The results of the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass & Kaplan, 1983) revealed that KB did not meet the criteria for a diagnosis of aphasia. However, the BDAE did reveal impairments in speech prosody, singing, and rhythm reproduction. KB was judged to have an obvious lack of melody of speech in his spontaneous speech (which both the patient and his wife confirmed was not evident prior to his stroke). Deficiencies in singing and rhythm production were also noted in a subtest of the BDAE that required KB to sing a familiar melody of his own choosing and repeat a set of four rhythms tapped by the examiner.

During the period of KB's 10-week, in-patient rehabilitation programme, all stroke patients on the rehabilitation unit of the hospital were being screened by the first author on tests designed to assess sensitivity to tonality in music. The initial set of tests carried out with KB revealed the presence of amusia. Prior to testing, KB had not complained of any musical deficits; nor did he voice any complaints after hearing and rating the melodies in the tests of tonal sensitivity. It was only after singing aloud that KB described himself as sounding "flat." Subsequently, however, KB reported that music no longer had meaning for him, and it was noted that he no longer listened to his record collection and chose not to attend scheduled music activities in his residential care facility.

GENERAL PROCEDURES

Controls

Ten males and 10 females, all right-handed, with no reported history of neurological disease or major hearing loss, participated in Experiments 1 and 2. Participants' age and musical background as singers approximated those of KB. They were recruited from local choirs and vocal ensembles. The ages of the control participants ranged from 59 to 71 years, with an average of 66 years. All were presently participating in singing activities, and most reported singing or playing musical instruments for much of their lives. With the exception of one participant who had been giving piano lessons for approximately 10 years, none derived any income from music activities. Because none of the participants made a living from playing or singing music, all were considered to be amateur musicians.

Controls had a mean of 15 years of formal education (range of 10-20 years). The Shipley Institute of Living Scale (Shipley, 1953) was used to obtain an estimate of overall intelligence. Using a conversion factor, the mean Shipley score for controls was estimated to equal a WAIS-R Full Scale IQ score of 119.9 (range 105-133).

Materials and methods

All music perception tests, with the exception of those in the University of Montreal Musical Test Battery (provided on cassette tape by I. Peretz), were constructed and recorded in the Acoustical Laboratory of the Department of Psychology, Queen's University. Musical tones were synthesised timbres, created by a Yamaha TX81Z synthesiser controlled by a Packard Bell computer running "Finale" music-processing software (Anderson, 1996). Two exceptions were the Probe-tone Melody test (Experiment 1), for which the synthesiser was controlled by a Zenith Z-248 computer running "DX-Score" software (Gross, 1981), and the Chord Progressions test (Experiment 1), for which chords were created on a Yamaha TX802 synthesiser controlled by an Atari 1040ST computer running "Notator" music-processing software (Lengeling, Adam, & Schupp, 1990). To provide variety for participants, different musical timbres were used for different tests. The names of all music tests, and the timbres used, are given in Table 2.

Stimuli were recorded onto Maxell XL II-S cassette tapes using a Panasonic RX-DT707 portable tape player. A Dixon DM1178 microphone was used for recording rhythmic reproductions (Experiment 1) and sung materials (Experiment 3). Test stimuli were reproduced by the tape player in free field at a comfortable loudness, as determined by each participant (about 55 to 70 dB SPL).

KB was tested in several different locations: an interview room at the stroke rehabilitation unit of a local hospital, his bedroom at a skilled nursing facility, and his family residence. Test conditions were uniformly quiet across test locations, with no obtrusive background noises present. Test sessions lasted approximately 30 minutes each and were conducted over a period of 16 months, beginning in August 1994 when KB was 64 years old. KB completed the tests in Experiment 1 first, followed by the tests in Experiment 2 and Experiment 3. All test instructions were read to KB, and all responses were recorded by the experimenter. KB was given as much time as needed to ask questions following practice trials (where applicable), to clarify instructions whenever necessary, and to complete tasks. It may be noted that KB was a cooperative and well-motivated participant throughout testing. KB displayed a good-natured willingness to continue listening and responding to test trials even when it was apparent to both KB and the experimenter that his sensitivity to test stimuli was severely compromised and that he had resorted to guessing.

Controls were tested in a quiet room in the Psychology Department at Queen's University. They were most often tested individually, but occasionally in groups of two to three, depending on the particular experiment and scheduling demands. Test sessions lasted 2 to 3 hours each and were conducted over 6 months. Unless otherwise indicated, all 20 controls provided data for each of the tests described below.

The order of music tests presented in Experiments 1 and 2 was counterbalanced across controls, with three restrictions. The first restriction was that the seven tests in the University of Montreal Musical Test Battery were always presented in the same order and during the same test session with no other tests intervening. This set of tests was considered as a single unit for purposes of counterbalancing. The second restriction concerned time constraints. When insufficient time remained in a test session to complete the next scheduled test, a shorter test was substituted. The third restriction concerned the number of controls present. When two or three controls were present during the same session, tests which required individual recording or taping of responses were not administered and other tests were substituted.

Prior to testing, written consent was obtained from each control. Demographic data were collected first, followed by the Shipley tests (Shipley, 1953) and the music tests. Published test protocols were followed for administration of the Shipley tests. Participants were given written instructions for the music tests, and were given as much time as needed to read the instructions, to become familiar with the response sheets (where applicable), to ask questions after completion of practice trials (where applicable), and to complete the tasks. No feedback was given following test trials, but instructions were clarified whenever necessary. Participants were verbally debriefed at the conclusion of the testing. In lieu of individual compensation, a donation was made to the church or organisation from which participants were recruited.

Table 2. Music tests used in Experiments 1, 2, and 3: Name and timbre

Name of test                                  Timbre

University of Montreal Musical Test Battery
  Contour Discrimination test                 Synthesised Piano
  Scale Discrimination test                   "
  Interval Discrimination test                "
  Rhythm Contour test                         "
  Metric test                                 "
  Familiarity Decision test                   "
  Incidental Memory test                      "
Tests of basic pitch perception and memory
  Pitch Discrimination test                   Grand piano (A1)a
  Pitch Memory test                           New electro (A12)a
Tests of tonality
  Probe-tone Melody test                      Wood piano (A15)a
  Familiar Melodies test                      Pan floot (B12)a
  Novel Melodies test                         Pan floot (B12)a
  Chord Progressions test                     Sine-wave componentsb
Tests of rhythm perception and production
  Rhythm Same/Different test                  Electro piano (A11)a
  Metronome Matching testc                    Metronome
Tests of melody recognition and learning
  Incremental Melodies test                   Electric grand (A5)a
  Famous Melodies test                        New electro (A12)a
  Television Themes test                      Grand piano (A1)a
  Mismatched Songs test                       Voice (W. Steinke)
  Learning testc                              Electro piano (A11)a

a Name and factory preset designation of Yamaha TX81Z synthesizer.
b Produced by Yamaha TX802 synthesizer.
c Test not administered to control participants.

EXPERIMENT 1: TESTS OF BASIC MUSICAL ABILITIES

Experiment 1 administered basic tests of auditory and musical skills to KB and controls. Pitch perception was assessed in different ways, reflecting different subcomponents of musical pitch processing. Pitch tests measured simple pitch discrimination, pitch memory, sensitivity to changes in pitch contour of a novel melody (changes to the sequences of ups and downs in pitch), and sensitivity to scales, intervals, and tonality, part of the "grammar" of music. Tests of rhythm perception and production assessed sensitivity to discrimination of temporal alterations to auditory sequences, sensitivity to meter (or periodic accenting) of a sequence, and motor control of tempo, or rate of events. Finally, participants' ability to recognise well-known melodies, and their capacity for incidental learning of novel melodies, was assessed.

Tests

The 17 tests administered are listed in Table 3 with the source and a brief description of each test. Detailed descriptions of 12 tests can be found in two published reports (Liégeois-Chauvel, Peretz, Babaï, Laguitton, & Chauvel, 1998; Steinke et al., 1997). Detailed procedures for five tests developed for the present experiment may be found in Appendix A.

Table 3. Title, source, and description of tests used in Experiment 1

Title                      Source    Description

Tests of pitch discrimination
  Pitch Discrimination     Newa      Discriminate pitch height of two tones
  Pitch Memory             S,C,&Hb   Rate a test tone as present/not present in a preceding sequence of tones
  Contour Discrimination   UMc       Detect within-scale contour alteration
  Scale Discrimination     UM        Detect outside-scale interval alteration
  Interval Discrimination  UM        Detect within-scale interval alteration
Tests of tonality
  Probe-tone Melody        S,C,&H    Rate 12 chromatic test tones on degree of fit with preceding tonal melody
  Familiar Melodies        S,C,&H    Rate endings of familiar song melodies varying in level of tonality
  Novel Melodies           S,C,&H    Rate endings of novel melodies varying in level of tonality
  Chord Progressions       S,C,&H    Rate degree of expectancy in chord progressions varying in level of tonality
Tests of rhythm perception
  Rhythm Same-Different    New       Discriminate standard from same or altered comparison rhythm
  Rhythm Contour           UM        Detect alterations to duration of a single note in a novel melody
  Metric                   UM        Classify novel melodies as march or waltz
Tests of rhythm reproduction
  Metronome Matching       New       Tap in time with beats produced by metronome
  Rhythm Tap-Back          New       Repeat rhythms tapped by examiner
Tests of melody recognition
  Familiarity Decision     UM        Classify melodies as novel or familiar
  Incremental Melodies     New       Identify well-known melodies after presentation of 2 notes, 3 notes, 4 notes, etc.
  Incidental Memory        UM        Recognition of novel melodies previously heard in UM test battery

a Test constructed for present study.
b Test from Steinke, Cuddy, and Holden (1997).
c Test from University of Montreal Musical Test Battery (Liégeois-Chauvel et al., 1998).

Results

Results of the tests for KB and the present controls are summarised in Table 4. Where available, data from controls from two previous studies are included. Controls from Steinke et al. (1997) were 22 participants who, though younger than KB (age range 20-40), were also amateur musicians. Controls from Liégeois-Chauvel et al. (1998) were 24 participants (mean age 32 years) for whom the age range (14-72 years) included the age of KB and the present controls. These participants, however, had less musical experience; only two participants reported a few years of music training. Despite these differences, the controls from the present and previous studies yielded similar test results, with remarkably similar within-group ranks for mean accuracy as a function of test type.

Table 4. Results of tests of pitch discrimination and memory, tonality, rhythm perception and reproduction, and melody recognition (Experiment 1)

                                         Scorea
Test                         KB          Controls           Other controlsb,c

Tests of discrimination and memory
  Pitch Discrimination       67.3        95.6 (75.0-100)    Not administered
  Pitch Memory               x           70.8 (63.9-81.9)   76.0b (56.9-84.7)
  Contour Discrimination     43.3d       84.8 (73.3-100)    91.0c (70.0-100)
  Scale Discrimination       50.0d       85.3 (70.0-100)    94.6c (86.7-100)
  Interval Discrimination    50.0d       82.3 (70.0-100)    90.6c (70.0-100)
Tests of tonality
  Probe-tone Melody          x           .66 (.13-.93)      .74b (.25-.90)
  Familiar Melodies          x           .74 (.55-.94)      .78b (.36-.88)
  Novel Melodies             x           .79 (.20-.91)      .81b (.69-.92)
  Chord Progressions         x           .58 (.23-.81)      .64b (.11-.77)
Tests of rhythm perception
  Rhythm Same/Different      73.3        96.5 (86.7-100)    Not administered
  Rhythm Contour             43.3d       94.2 (80.0-100)    92.2c (76.7-100)
  Metric                     63.3d       85.6 (53.3-100)    82.2c (66.7-96.7)
Tests of rhythm reproduction
  Metronome Matching         0.0         Not administered   Not administered
  Rhythm Tap-Back            16.6        93.3 (83-100)      Not administered
Tests of melody recognition
  Familiarity Decision test  80.0        96.0 (80-100)      98.0c (90-100)
  Incremental Melodies test  4 (2-5)     4 (2-9)            Not administered
  Incidental Memory test     53.3d       83.3 (56.6-100)    88.3c (73.3-100)

a All scores are percentage correct (KB) and mean percentage correct (controls), except scores on tests of tonality, which are mean Spearman rank-order correlations of participant ratings with music-theoretic predicted levels of tonality, and scores on the Incremental Melodies test, which are median number of notes needed for identification. Ranges are presented in parentheses. x means KB admitted to guessing and failed to complete the test.
b Data from Steinke, Cuddy, and Holden (1997).
c Data from Liégeois-Chauvel et al. (1998).
d Score is not significantly different from chance.

Table 4 reveals that KB performed poorly on or failed to complete most tests of musical pitch perception. The Pitch Discrimination test was the only such test where KB was moderately successful, although on average his score fell below the range of scores obtained by the controls. Further inspection of the data revealed that, whereas controls did equally well across the octave range tested, KB improved across the octave range. For KB, accuracy in the range E1 to E3 was 37.5%, but if one tone of the pair was above E3, and the other below, accuracy rose to 60%. If both tones were above E3, accuracy was 82.3%, a score well above chance and within the range of scores obtained by the controls.1

1 By American convention, the numeral following the note name refers to octave location. C4 is middle C (262 Hz), C3 is the octave below C4, C5 the octave above, and so forth.
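The naming convention in footnote 1 maps directly onto frequency under twelve-tone equal temperament. A minimal sketch, assuming the standard A4 = 440 Hz tuning (our assumption; the paper does not state a tuning reference):

```python
# Note name + octave -> frequency in Hz, assuming twelve-tone equal
# temperament with A4 = 440 Hz (a standard but assumed tuning).

SEMITONE_FROM_C = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                   'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def frequency(note, octave):
    midi = 12 * (octave + 1) + SEMITONE_FROM_C[note]  # MIDI numbering: C4 = 60
    return 440.0 * 2 ** ((midi - 69) / 12)            # A4 = MIDI 69

print(round(frequency('C', 4), 1))  # 261.6 -> middle C, ~262 Hz as in footnote 1
print(round(frequency('E', 3), 1))  # 164.8 -> top of the range where KB scored 37.5%
```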

Limited sparing was also noted for the Rhythm Same/Different test, which assessed the ability to discriminate rhythmic changes. KB's score on the Rhythm Same/Different test was above chance, although well below the range of control scores.

Although he failed to show incidental memory for melodies he had been exposed to during the assessment (the Incidental Memory test), KB performed surprisingly well on two additional tests. In the first, the Familiarity Decision test, participants were asked to classify melodies as familiar or unfamiliar. Familiar melodies included both well-known song and instrumental melodies; unfamiliar melodies were novel. On the Familiarity Decision test, KB stated that 7 of the 10 well-known melodies were familiar, and 9 of the 10 novel melodies were unfamiliar. On average, controls judged 97% of the well-known melodies as familiar and 95% of the novel melodies as unfamiliar. This level of performance from KB was quite remarkable, given his loss of pitch and rhythm processing. Even more remarkable, the results of the Familiarity Decision test suggested that KB was able to recognise the melody lines of well-known songs, but not well-known instrumental pieces. KB recognised all five of the well-known songs in the experimental set on the basis of their melodies alone. He also recognised two instrumental pieces, the Wedding March and the William Tell Overture, but volunteered that he had at one time learned lyrics for these melodies. He failed to recognise the opening theme from Beethoven's Fifth Symphony and Ravel's Bolero (both of which were highly familiar to age-matched controls), and recognised the Blue Danube Waltz simply as a waltz.

The second test of melody recognition on which KB performed well was the Incremental Melodies test. KB and controls correctly identified all 10 song melodies, requiring a median number of only 4 notes to do so (range for KB, 3-5 notes; range for controls, 2-9 notes).

Discussion

The results of this initial set of tests were consistent with those of previous studies (e.g., Peretz, 1994) in which neurological damage has been shown to result in loss or impairment of (previously intact) abilities to discriminate pitch and rhythm, process tonal relationships, and recognise familiar melodies. KB's pitch processing abilities were shown to be somewhat more impaired than two of the three amusic patients described in detail by Peretz and colleagues (CN and GL) and comparable to a third, IR (e.g., Peretz, 1996; Peretz et al., 1997; Peretz et al., 1994). Most pertinently, CN and GL demonstrated marked dissociations between processing of pitch and rhythm (intact) and processing of tonality (impaired). KB and IR, in contrast, were impaired on both pitch and rhythm tasks, yet KB was able to recognise and identify familiar melodies, while IR was above chance when categorising familiar and unfamiliar melodies. In the present study, controls matched for age and music experience were able to complete all of the tests successfully, though they did not score as high on these tests as younger controls from previous experiments. Nevertheless their scores were consistently higher than KB's, a result indicating that ageing alone cannot account for the losses displayed by KB.

In contrast to his verbal skills, which appeared to be largely intact, KB experienced problems with pitch processing and with the perception and reproduction of rhythms. It is likely that these profound problems in basic music processing contributed to KB's failure to show incidental learning of novel melodies. In all, the testing suggested a limited sparing in only two areas: first, in the ability to distinguish two notes on the basis of pitch height, and second, in the ability to discriminate two simple rhythmic patterns presented without melody.

What is intriguing is the observation that, despite his profound loss of pitch processing abilities and sense of tonality, and his obvious rhythmic perception and reproduction problems, KB demonstrated a preserved ability to recognise and name 7 of the 10 well-known melodies included in the Familiarity Decision test. Moreover, the results of the Incremental Melodies test suggested that in KB, as in controls, recognition of familiar song melodies was achieved almost immediately, after the first few notes had been presented. The possibility that this surprising preservation of melodic recognition ability in KB might apply only to song melodies and not to instrumental melodies motivated the next set of tests.

EXPERIMENT 2: MELODY RECOGNITION AND IDENTIFICATION

Experiment 2 reports the results of three tests of melody recognition and identification. Recognition is defined as the judgement that an item or event (in this case a melody) has been previously encountered (Mandler, 1980). Identification, in addition, requires correctly naming the title or lyrics of the melody. The Famous Melodies test sampled song and instrumental melodies from popular, folk, and classical genres. The Television Themes test sampled song and instrumental melodies from theme music drawn from television programmes. The Mismatched Songs test explored recognition and identification of familiar song melodies when the melodies were accompanied by mismatched lyrics.

Famous Melodies test

A number of studies support the widely held notion that adults of all ages have a remarkable capacity to remember familiar songs (Bartlett & Snelus, 1980; Halpern, 1984; Hébert & Peretz, 1997), famous and obscure classical instrumental themes (Java, Kaminska, & Gardiner, 1995), and television themes (Maylor, 1991). Two previous studies, both with university students, have contrasted recognition memory for song and instrumental melodies. Both showed slightly higher recognition rates for song over instrumental melodies (Gardiner, Kaminska, Java, Clarke, & Mayer, 1990; Peretz, Babaï, Lussier, Hébert, & Gagnon, 1995). The first goal of the Famous Melodies test was to document in KB selective sparing of song melody recognition abilities. The second goal was to compare recognition for familiar song versus instrumental melodies within the same sample of older adults.

Materials and methods

The Famous Melodies test consisted of a set of 39 instrumental and 68 song melodies selected to be familiar to a listener of KB's age and cultural background. Song melodies were defined as those originally written with lyrics, most commonly heard as songs on radio or recordings, and most commonly sung to lyrics by amateur singers such as KB and the controls. Instrumental melodies were not associated with lyrics and were defined as melodies originally written as instrumental melodies, most commonly heard as instrumental melodies on radio or recordings, and most commonly played as instrumental melodies by amateur musicians. The melodies were drawn from several sources, including World-famous piano pieces (1943), Jumbo: The children's book (Johns, 1980), The book of world-famous music: Classical, popular, and folk (Fuld, 1995), and the Reader's Digest treasury of popular songs that will live forever (Simon, 1982). Also included in the test were eight novel melodies, composed by the first author in the style of the familiar melodies.

Each melody was limited to the opening phrase and presented as a monophonic melody line, with rhythm, original key, and original tempo left intact. The overall range for song melodies was G3 to A4, and for instrumental melodies was F#3 to A4. The most frequent notes for both types of melody were C4 and D4. Melodies ranged from 7-35 s in duration. Notes were edited to sound equally loud.

The set of 115 song, instrumental, and novel melodies was presented in a single random order to all participants, in the context of a larger set of melodies used for another experiment. After hearing each melody, KB and controls were asked to indicate whether they recognised it or not. If the melody was recognised, they were then asked to identify the melody by stating lyrics, title, or any other identifying information. When a melody was recognised, controls were asked to rate the degree of familiarity they had with the melody on a scale of 1-10. A "1" indicated very little familiarity, while a "10" indicated that the melody was highly familiar to the participant. KB was not asked to rate degree of familiarity because pilot testing revealed he was unable to discriminate levels of familiarity. A 3 s pause was inserted between melodies on the tape, but participants were instructed to pause the tape for a longer period between trials if needed. All responses were recorded by the first author.

Results and discussion

Results are shown in Figure 2. For controls, melody recognition was very high overall for both sets of melodies. They recognised slightly but significantly more song melodies than instrumental melodies, t(19) = 5.22, p < .001, and identified more song melodies than instrumental melodies, t(19) = 22.20, p < .001. Both sets of melodies were judged to be highly familiar, with the average familiarity rating for song melodies higher than that of instrumental melodies: 9.4 vs. 8.7 on the 10-point scale, t(19) = 4.16, p < .001. False recognition of novel melodies occurred on 25% of trials, and false identification occurred only once. Novel melodies were assigned an average familiarity rating of 4.8 on the 10-point scale.
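The comparisons above are paired (within-participant) t-tests over the 20 controls, each of whom contributes both a song score and an instrumental score. A minimal sketch of the computation with made-up scores, since the per-participant data are not reported here:

```python
# Paired t-test of song vs. instrumental recognition, df = 20 - 1 = 19.
# The scores below are fabricated placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
song = rng.uniform(90, 100, size=20)           # % song melodies recognised
instrumental = song - rng.uniform(1, 6, 20)    # slightly lower, as reported

t, p = ttest_rel(song, instrumental)
print(f"t(19) = {t:.2f}, p = {p:.4g}")
```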

Similar to controls, KB's recognition and identification of song melodies was very high (with 88% of song melodies being recognised, and 75% being correctly identified). In marked contrast to controls, however, he was able to recognise only 7 (18%) of the 39 instrumental melodies as familiar. Four of these could not subsequently be identified. Interestingly, KB reported having previously learned comic lyrics to the remaining three instrumental melodies, and was able to supply these lyrics. KB did not recognise any of the novel melodies.

Results from this test lend support to our earlier impression that there exists in KB a dissociation between the ability to recognise melodies learned with lyrics (i.e., songs) and those learned as instrumental pieces.

Figure 2. Mean percentage song (N = 68), instrumental (N = 39), and novel melodies (N = 8) recognised and identified (+SE) by controls (N = 20) and KB on the Famous Melodies test. * Indicates KB obtained a score of 0%.

Television Themes test

The goal of the Television Themes test was to replicate the results of the Famous Melodies test with a different source of musical materials of a similar musical style.

Materials and methods

The test sampled theme music from serialised television programmes, drawn from The TV Fake Book (Leonard, 1995). Themes were shortened and edited to a monophonic melody line, in a manner identical to that described above for the Famous Melodies test, and were recorded on tape.

A pilot set of themes was tested with five controls. The 81 themes included in the pilot set were originally written as either songs (N = 36) or instrumentals (N = 45) and were typically heard only during viewing of the television programmes. The five controls were asked to indicate whether they were familiar with or had previously heard each theme. Themes not recognised by at least three of the five controls were discarded from the final experimental set. The final set included 24 vocal themes and 21 instrumental themes. The overall range for both vocal themes and instrumental themes was F3 to G4, with C4 the most frequent note.

The task for KB and the remaining 15 controls was to indicate whether each melody was recognised or not, and, if recognised, to identify the melody by stating the title, lyrics, or any identifying information that came to mind. Participants were not told in advance that the set of melodies consisted of themes from television programmes.

Results and discussion

As shown in Figure 3, performance scores were generally lower for the Television Themes test than for the Famous Melodies test. In a previous report by Maylor (1991), elderly participants recognised between 68 and 92% of a pool of television themes sometimes or regularly watched (not categorised as song and instrumental themes). Maylor's participants were able to identify as many as 50.4% of melodies from recent television programmes that were regularly watched. The lower scores obtained from the present controls might be attributable, in part, to the fact that controls reported having watched only 63% of the television shows on average (range 35-98%). However, KB and his spouse reported that he had seen all the programmes represented in the test.

Nevertheless, an important result of the Famous Melodies test was replicated: KB showed marked impairment in his ability to recognise instrumental themes (none of the controls recognised as few instrumental themes as KB) while his recognition of song themes, in contrast, was shown to be relatively preserved. As in the Famous Melodies test, controls in the present study recognised significantly more song themes than instrumental themes (73% vs. 55%), t(19) = 6.74, p < .001. Correct identifications were also more common for song themes than for instrumental themes (18% vs. 9%), t(19) = 4.93, p < .001. KB's ability to recognise song themes was markedly superior to his ability to recognise instrumental themes (46% vs. 5%). In addition, although he correctly identified 21% of the song themes, he was unable to identify the one instrumental theme that he had recognised (the theme from "Bonanza").

Figure 3. Mean percentage song (N = 24) and instrumental (N = 21) themes recognised and identified (+SE) by controls (N = 20) and KB on the Television Themes test. * Indicates KB obtained a score of 0%.

Mismatched Songs test

The results of the Famous Melodies test and the Television Themes test together lend support to the notion that songs are processed and/or stored differently from instrumental melodies. One suggestion that has been made is that the words and melodies of songs are processed and stored in a partially or fully integrated form (Crowder et al., 1990; Samson & Zatorre, 1991; Serafine et al., 1984). If lyrics and melodies are stored in an integrated form, then one might predict that a task involving recognition of melody in the presence of mismatched lyrics should prove difficult for both KB and controls. This prediction motivated the next test. The main question examined in the Mismatched Songs test was whether KB and controls could recognise song melodies when they were sung, not to their original lyrics, but to either a set of novel lyrics or to the lyrics of other, highly familiar songs.

Materials and methods

For this test, a set of songs was developed consisting of: (a) novel lyrics set to familiar melodies (N = 5); (b) lyrics of familiar songs set to novel melodies (N = 5); or (c) familiar lyrics set to familiar-but-mismatched melodies (e.g., "My Bonnie" sung to the tune of "Swing Low") (N = 6). Novel melodies and lyrics were composed by the first author with the intent to preserve the style of the familiar songs. The overall range of the songs was G3 to C5, the author's comfortable singing range.

Twenty-one different familiar songs were used. Eleven contributed melody; 10 contributed lyrics; and 1 song contributed both. Two of the songs that contributed lyrics had been found to be familiar to KB and controls in pilot testing. The remaining 19 songs were drawn from the set of 68 songs in the Famous Melodies test according to the following criteria: First, they had all been recognised by KB in that test; second, they were all amenable to mismatching of melody and lyrics; and third, they had all been recognised and rated as highly familiar by the majority of controls. (Fifteen of the melodies had been recognised by all 20 controls, and the remaining four were recognised by 19 controls. The average familiarity rating was 9.71, with 10 melodies receiving the maximum familiarity rating.)

In the interest of time, the familiar songs used in this test were not presented to participants in their original versions. It was evident that recognising the melodies presented with their original lyrics would be a trivial task for all participants, including KB, who had previously demonstrated recognition of the song melodies and had correctly provided the lyrics.

Only the opening melodic and lyric phrases were used. All phrases were sung by the first author and recorded on tape in a single random order. Participants were instructed to listen carefully to both melody and lyrics, to state whether the melody was recognised or not, to state whether the lyrics were recognised or not, and to identify melody and lyrics if recognised. Each trial was played once, and presentation was self-paced. Responses were scored for the correct detection of the mismatch and the correct specification of the type of mismatch (e.g., familiar lyrics set to an unfamiliar tune).

Following the test, controls were shown a list of the titles for songs used and asked to verify their familiarity with each one. In addition, they were asked to describe the strategies employed to recognise the melody and lyrics.

Results and discussion

Table 5 provides percentage detection of mismatch and percentage recognition for lyrics and melodies, presented separately, for KB and controls. In all three conditions, controls were able to detect the presence of a mismatch on virtually every trial: i.e., there was no effect of Type of Mismatch on detection performance, F(2, 57) = 0.62, MSE = 33.37, p = .54, yielding an overall detection accuracy of 98.1% (a value not significantly different from 100%, χ2(59) = 21.58). Controls accurately recognised familiar melodies and familiar lyrics, and rejected novel melodies and novel lyrics. Thus, and most important, controls were able to specify the precise nature of the mismatch in all three conditions.
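The F statistic above is a one-way analysis of variance over the three mismatch types (3 groups of 20 control scores give degrees of freedom 2 and 57). A minimal sketch, again with hypothetical detection scores rather than the study's data:

```python
# One-way ANOVA across the three mismatch conditions, df = (2, 57).
# The detection scores below are hypothetical near-ceiling values.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
novel_lyrics = rng.uniform(90, 100, 20)    # % mismatches detected
novel_melody = rng.uniform(90, 100, 20)
swapped_pairs = rng.uniform(90, 100, 20)

F, p = f_oneway(novel_lyrics, novel_melody, swapped_pairs)
print(f"F(2, 57) = {F:.2f}, p = {p:.2f}")
```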

For controls, recognition of familiar melodieswas not impaired by the presence of mismatched


Identification, however, was affected. The identification rate for the 11 familiar melodies in the Famous Melodies test (when no lyrics were presented) was 86.4%. When familiar melodies in the present test were sung with novel lyrics, identification dropped to 71%, paired-sample t(19) = 2.42, p = .025; when they were sung with the lyrics from other, familiar songs, identification dropped to 74%, paired-sample t = 2.13, p < .05.

Each control reported trying different strategies over the course of the 16 trials. Two strategies were reported by all. The first strategy was to attend to both lyrics and music at the same time as the song was unfolding. The second strategy was to pay attention to either words or music, make a decision as to familiarity, and then switch to the other. A third strategy, reported by five controls, was to make a deliberate decision on the familiarity of the lyrics first, and then, upon completion of the song, "replay" the melody in their minds before making a decision regarding its familiarity and identity.

KB, like the controls, was able to distinguish between novel and familiar lyrics and could also tell that familiar lyrics were not accompanied by their original melodies. On these two judgements he obtained 100% accuracy. However, unlike the controls, KB displayed a total loss of recognition of the familiar melodies that he had previously recognised in the Famous Melodies test. Thus he was unable to determine whether familiar lyrics were accompanied by a novel or familiar melody, and was unable to recognise familiar melodies sung to novel lyrics. The presence of competing lyrics interfered with KB's previously demonstrated ability to recognise song melodies.

Summary

The results from this series of tests indicate that, relative to age-matched controls, KB demonstrates a dissociation between the preserved ability to recognise and identify song melodies and the inability to recognise and identify instrumental melodies. In the first two tests, song recognition was shown to be relatively well preserved, while instrumental melody recognition was severely impaired. The third test, however, revealed some problems with KB's song recognition. Relative to controls, he experienced difficulty recognising familiar song melodies in the presence of competing lyrics. Because he was able to detect instances in which familiar lyrics were accompanied by the "wrong" melody, we may infer that some melodic information was being processed. It is also possible that his ability to detect such mismatches was aided by the presence of temporal differences and stress differences imposed on lyric syllables when sung to different melodies. However, his inability to specify whether the melody heard on those trials was novel or familiar suggests that his melody recognition was seriously impaired under these conditions.

POST-HOC ANALYSIS OF FAMILIAR MELODIES

One possible explanation for KB's differential performance with song and instrumental melodies is that the melodies differed in musical characteristics (e.g., in the number and type of musical intervals which might, in the case of songs, be limited by the range of the human voice).


Table 5. Results of the Mismatched Songs test(a)

                                                  Mismatch             Lyrics               Melody
                                                  (% detected)         (% recognised)       (% recognised)
Type of mismatch                                  KB    Controls       KB    Controls       KB    Controls
Novel lyrics with familiar melody (N = 5)           0   99.0 (80-100)    0   3.0 (0-20)       0   99.0 (80-100)
Familiar lyrics with novel melody (N = 5)         100   97.0 (80-100)  100   100              0   12.0 (0-60)
Familiar lyrics with familiar-but-mismatched
  melody (N = 6)                                  100   98.3 (83-100)  100   99.2 (83-100)    0   97.5 (83-100)

(a) Control participant scores are means. Ranges are presented in parentheses.


To examine this possibility, a number of post-hoc comparisons were carried out on the materials presented in the Famous Melodies test.

Measures of musical characteristics

Pitch measures for each melody included the number of note onsets, the rate of presentation of notes, the range in semitones between the highest and lowest notes, the average size in semitones of the interval between successive notes, and the ratio of the number of direction changes to the number of note onsets. Direction changes were defined as pitch reversals, i.e., the change from an upward frequency movement to a downward frequency movement and vice versa. The total number of direction changes was tallied and divided by the total number of note onsets. The resultant value may be interpreted as a measure of contour complexity which controls for differences in total numbers of note onsets. Values closer to 1.0 and 0.0 indicate more or less frequent pitch reversals, respectively.
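In computational terms, the contour measure reduces to a few lines of code. The sketch below is ours, not the authors'; it assumes a melody is represented as a list of MIDI note numbers:

    def contour_complexity(pitches):
        """Ratio of pitch-direction reversals to note onsets.

        Values near 1.0 indicate frequent reversals (a jagged contour);
        values near 0.0 indicate an even, largely unidirectional contour.
        """
        # Signs of successive pitch movements: +1 up, -1 down.
        # Repeated notes carry no direction and are skipped.
        directions = [
            (1 if b > a else -1)
            for a, b in zip(pitches, pitches[1:])
            if b != a
        ]
        # A reversal occurs wherever the sign of movement flips.
        reversals = sum(
            1 for d1, d2 in zip(directions, directions[1:]) if d1 != d2
        )
        return reversals / len(pitches)

    # Example: an arch-shaped five-note phrase has exactly one reversal.
    print(contour_complexity([60, 62, 64, 62, 60]))  # 1/5 = 0.2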

Rhythm and meter measures for each melody included the total number of different note durations, the percentage of notes accounted for by the two most frequent durations, and the meter, duple or triple.

The tonal strength of the pitch distributions for each melody was determined by a key-finding algorithm (Krumhansl & Schmuckler, cited in Krumhansl, 1990). Correlations were obtained between the distribution of pitches in each sequence and the standardised tonal hierarchy for each of the 24 major and minor keys. The standardised tonal hierarchies for C major and C minor are reported in Krumhansl and Kessler (1982), and the set of probe-tone values are given in Krumhansl (1990, p. 30). Values for each of the other keys were obtained by orienting the set to each of the different tonic notes. For each sequence the highest correlation so obtained was selected to represent the tonal strength of the distribution.
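A minimal sketch of this key-finding step follows, using the Krumhansl and Kessler (1982) probe-tone profiles for C major and C minor. The function names and the use of raw pitch-class counts are our simplifications (the published algorithm is typically run on duration-weighted distributions):

    # Krumhansl & Kessler (1982) probe-tone profiles, C major and C minor.
    MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
    MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def tonal_strength(pitches):
        """Highest correlation between the melody's pitch-class
        distribution and the 24 rotated major/minor key profiles."""
        dist = [0] * 12
        for p in pitches:           # pitches as MIDI note numbers
            dist[p % 12] += 1
        best = -1.0
        for tonic in range(12):     # orient each profile to every tonic
            for profile in (MAJOR, MINOR):
                rotated = profile[-tonic:] + profile[:-tonic]
                best = max(best, pearson(dist, rotated))
        return best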

Analyses of melodic expectancy characteristics determined the extent to which melodies conformed to certain bottom-up principles outlined in Narmour's (1990) Implication-Realisation Theory of melodic expectancy. The principles, as outlined and coded by Krumhansl (1995), were Registral Direction, Intervallic Difference, Proximity, Registral Return, and Closure. An algorithm based on the Krumhansl coding (Russo & Cuddy, 1996) was used to obtain "fulfillment" scores on each principle for each melody. The algorithm computed, for each successive interval beyond the first, whether or not the interval fulfilled the expectancy created by the previous interval according to the specified principle. The fulfillment score for each principle was the ratio of the number of fulfilled intervals to the total number of successive intervals, minus the first. Scores of 0 indicated no conformance to the principle and scores of 1 indicated complete conformance.
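The general shape of the scoring algorithm can be illustrated with one principle. The sketch below is our own simplified, binary reading of Registral Direction (a small implicative interval implies continuation in the same direction; a large one implies a reversal), assuming the conventional small/large boundary of 6 semitones; the Krumhansl (1995) codings are more finely graded than this:

    def fulfillment_score(pitches, fulfils):
        """Proportion of realised intervals (beyond the first) that
        fulfil the expectancy set up by the preceding interval."""
        intervals = [b - a for a, b in zip(pitches, pitches[1:])]
        pairs = list(zip(intervals, intervals[1:]))
        if not pairs:
            return 0.0
        return sum(1 for imp, real in pairs if fulfils(imp, real)) / len(pairs)

    def registral_direction(implicative, realised):
        # Simplified binary coding: a small implicative interval
        # (<= 6 semitones) expects the same direction of movement;
        # a large one expects a change of direction.
        same_direction = implicative * realised > 0
        if abs(implicative) <= 6:
            return same_direction
        return not same_direction

    # Example: two ascending steps followed by a downward leap.
    print(fulfillment_score([60, 62, 64, 57], registral_direction))  # 0.5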

Results

Results of the analyses of musical characteristics are shown in Table 6. Two-sample t-tests were applied to all comparisons between song and instrumental melodies, with the exception of the metric measure. This measure involved a frequency count, so a chi-square test was applied.


Table 6. Musical characteristics of Famous Melodies test: Pitch, rhythm and meter, tonal strength, and melodic expectancy

                                                     Song melodies    Instrumental melodies
                                                     Mean (SD)        Mean (SD)
Pitch
  Number of note onsets*                             41.2 (16.23)     50.4 (22.8)
  Presentation rate (# of notes/s)***                2.3 (0.55)       3.2 (1.65)
  Range (semitones)***                               12.5 (2.76)      16.5 (5.01)
  Interval size (semitones)**                        2.4 (0.52)       2.8 (0.99)
  Ratio of number of direction changes
    to number of notes                               0.43 (0.14)      0.44 (0.11)
Rhythm and meter
  Number of different note durations*                4.9 (1.39)       4.3 (1.70)
  Percentage of notes accounted for by
    two most frequent durations*                     79.6 (11.2)      84.3 (15.3)
  Melodies in duple meter (total)                    44               20
  Melodies in triple meter (total)                   24               19
Tonal strength                                       0.77 (0.14)      0.72 (0.15)
Melodic expectancy
  Registral direction                                0.41 (0.13)      0.43 (0.12)
  Intervallic difference                             0.77 (0.11)      0.76 (0.11)
  Registral return                                   0.44 (0.13)      0.42 (0.16)
  Proximity                                          0.62 (0.13)      0.55 (0.20)
  Closure                                            0.58 (0.12)      0.57 (0.11)

*p < .05; **p < .01; ***p < .001.


The results indicate that song and instrumental melodies did not differ on musical characteristics thought to reflect pitch structure and to influence melodic organization—tonal strength and melodic expectancy. Nor did they differ in contour complexity or in the distribution of duple and triple meter. Instrumental melodies, however, were found to differ from song melodies on six other musical characteristics. Instrumental melodies were slightly longer, faster in presentation rate, larger in pitch range and average interval size, and less complex on two rhythmic measures. The question therefore arises whether these characteristics might be responsible for KB's poorer recognition of instrumental versus song melodies.

To address this question we categorised the melodies into subsets, two for each of the six characteristics. For one subset, the range of measures was below the median for that characteristic; for the other, the range was above the median. Extreme values were dropped so that for each subset the mean for the song melodies did not differ significantly from the mean for the instrumental melodies. Means and standard deviations for the musical characteristics for each subset are given in the first two columns of Table 7.

Next we calculated for each subset KB's recognition score for the song melodies and the instrumental melodies. These scores are given in the third and fourth columns of Table 7, which report the ratio of the number of melodies recognised to the number of melodies retained in the subset. It can be seen that for every subset the recognition score is higher for song than for instrumental melodies. All differences were significant (by tests of proportions) beyond the .0002 level.
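The paper does not specify which test of proportions was used; a standard two-proportion z-test, sketched below, reproduces the reported significance level for the first row of Table 7:

    from math import erf, sqrt

    def two_proportion_z(k1, n1, k2, n2):
        """z-test for the difference between two independent proportions,
        using the pooled estimate; returns z and a two-tailed p-value."""
        p1, p2 = k1 / n1, k2 / n2
        pooled = (k1 + k2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
        return z, p

    # First row of Table 7: song melodies 33/38 vs. instrumental 4/15.
    # Gives z of about 4.3, two-tailed p of about .00002, below .0002.
    print(two_proportion_z(33, 38, 4, 15))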

Thus for the six musical characteristics, the difference in recognition between song and instrumental melodies held for subsets of melodies selected to be statistically equivalent. There is therefore no support for an account of KB's difficulty with instrumental melodies based on these musical characteristics.

This conclusion can be bolstered by logical arguments as well. Given that KB and the controls were typically able to achieve song melody recognition within the first few notes (Experiment 1—Incremental Melodies), differences in the average number of notes provided by a melody may not be relevant to the song/instrumental melody difference. Moreover, the lesser rhythmic complexity of the instrumental melodies might have simplified the recognition task for KB relative to song melodies, but such was not the case. In summary, the post-hoc analyses did not reveal a consistent set of differences in musical characteristics that could explain the sparing of song recognition for KB alongside his marked and selective failure of instrumental melody recognition.


Table 7. Musical characteristics and KB recognition scores for selected subsets from the Famous Melodies test

                                       Mean (SD)                        Recognition score
                                       Song          Instrumental      Song        Instrumental
                                       melodies      melodies          melodies    melodies
Number of note onsets
  14-39                                30.5 (6.4)    30.1 (4.8)        33/38       4/15
  40-98                                53.7 (17.9)   59.2 (12.6)       27/30       3/22
Presentation rate (# of notes/s)
  0.98-2.49                            1.87 (0.03)   2.07 (0.32)       38/42       4/11
  2.50-4.73                            3.12 (0.56)   3.24 (0.60)       22/26       2/24
Range (semitones)
  4-13                                 10.8 (1.79)   10.5 (2.83)       36/42       2/9
  14-16                                14.5 (0.80)   15.1 (0.99)       20/22       3/10
Interval size (semitones)
  0.96-2.4                             2.02 (0.29)   1.92 (0.46)       35/39       4/15
  2.5-3.7                              2.87 (0.31)   3.03 (0.37)       25/29       3/17
Number of different note durations
  3-4                                  3.7 (0.48)    3.5 (0.51)        21/25       4/15
  5-8                                  5.7 (1.19)    6.0 (0.71)        39/43       3/17
Percentage of notes accounted for by two most frequent durations
  50-82                                72.2 (0.09)   67.3 (0.09)       34/38       2/15
  83-97                                88.9 (0.03)   91.1 (0.04)       26/30       2/13

Note: For the selected melody subsets in this table, means for musical characteristics do not differ significantly between song and instrumental melodies. Recognition scores for song melodies are consistently and significantly higher than for instrumental melodies, p < .0002.


EXPERIMENT 3: LEARNING OF SONG AND INSTRUMENTAL MELODIES

Three findings reported above suggest that, during their initial learning, songs (with lyrics) and instrumental melodies are processed and stored in different ways: (1) the remarkable dissociation between song and instrumental melody recognition observed in KB; (2) the consistently higher rates of recognition and correct identification for song as opposed to instrumental melodies reported for control participants in Experiment 2; and (3) the fact that the only instrumental melodies correctly identified by KB in the Famous Melodies test were those to which he had, at one point, learned comic lyrics.

Although KB's capacity to learn new instrumental melodies had been tested incidentally in Experiment 1 and found to be severely impaired (see results of the Incidental Memory test), his capacity for new song learning was unclear. In the experiment described below, we examined the possibility that the learning of new melodies might be possible for KB if those melodies were presented to him in the context of songs with lyrics. It was hypothesised that KB would be able to learn both words and melody for novel songs over time, but would not be able to learn novel melodies sung to "la."

Materials and methods

Initially, KB's ability to learn novel melodies was examined using a paradigm adapted from Samson and Zatorre (1992). In this paradigm, a set of stimuli is presented, followed by pairs of items. Each pair contains one stimulus from the set, and one novel stimulus or foil. The task is to state which member of the pair was presented in the original set. The original set is then presented again, and once more followed by pairs of items, this time with different foils. The procedure is repeated until a designated recognition criterion is reached. Short-term learning is measured by the number of repetitions needed to reach criterion. For our study, comparisons were made between participants' ability to learn two different sets of novel melodies: one set sung with lyrics, and the other sung to "la". Pilot testing indicated that, even after several modifications to simplify the procedure, KB was unable to learn either set of test materials.
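In outline, the paradigm is a study/test loop run to criterion. The sketch below is schematic and entirely ours; present and choose stand in for the experimenter playing the stimuli and recording the participant's two-alternative choice:

    import random

    def repetitions_to_criterion(targets, foil_blocks, present, choose,
                                 criterion=1.0, max_reps=20):
        """Repeat the study/test cycle until the participant reaches the
        recognition criterion; return the number of repetitions needed.

        targets      - the study set of melodies
        foil_blocks  - one fresh list of foils per repetition
        present      - callback that plays the study set
        choose       - callback given a pair, returns the item judged old
        """
        for rep, foils in enumerate(foil_blocks[:max_reps], start=1):
            present(targets)                    # study phase
            correct = 0
            for target, foil in zip(targets, foils):
                pair = [target, foil]
                random.shuffle(pair)            # randomise position in pair
                if choose(pair[0], pair[1]) == target:
                    correct += 1
            if correct / len(targets) >= criterion:
                return rep                      # short-term learning score
        return None                             # criterion never reached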

At this point, a new procedure was introduced, one that allowed extensive exposure to the learning materials over many months. For this test of KB's ability to learn melodies, a set of 12 novel melodies (of similar length and style) was composed by the first author and recorded on tape, in random order, as instrumental pieces with a piano timbre within the range G3 to C4. The melodies were written in the style of North American folk songs, and as such were highly tonal, nonmodulating diatonic melodies outlining simple harmonic progressions, with regular phrase structures and recurring rhythmic and melodic motifs. Next, a second tape of these materials was created in which four of the melodies were sung to novel lyrics, four were sung to "la," and four were left in their original, instrumental versions. The sung renditions were performed and recorded while the vocalist was listening to the piano versions of each melody through headphones. In this way differences in intonation and timing between the sung and instrumental versions were minimised. This latter tape (which included the sung renditions) was played for KB on 26 occasions, approximately once a week over a 6-month period. KB's instruction on each occasion was simply to listen to the tape.

At the end of the 6-month period, KB's ability to recognise the test materials in the exposure set was assessed. The original instrumental versions of the 12 test melodies were combined with a set of 36 additional control melodies. The 36 control melodies consisted of 12 well-known song melodies, 12 familiar melodies that had been melodically altered (included as pilot materials for later experiments), and 12 novel instrumental melodies (foils). The complete set of 48 melodies (each played in the same piano timbre used for the four instrumental melodies in the learning phase of the experiment) was presented to KB in a single, random order. His task was to indicate after each trial whether he recognised the melody, and if so to provide a title, lyrics, or any other associations that came to mind.



Results and discussion

KB recognised all 12 well-known song melodies played in their original versions, and 4 of 12 familiar song melodies that had been melodically altered. He also reported that the melody line was familiar for two of the four songs from the exposure set which had been presented to him in the context of songs with lyrics over the preceding 6-month period. None of the remaining 10 test items, and none of the 12 novel instrumental foils, were recognised. The data are summarised in Figure 4.

KB's inability to perform the short-term learning task most likely resulted from his documented pitch and rhythmic processing deficits. Despite these deficits, KB was able to demonstrate a limited ability to learn new melodies, with repeated exposure over a lengthy period, provided that these melodies were presented in the context of songs with lyrics.

GENERAL DISCUSSION

We have reported a dissociation between song and instrumental melody recognition for KB, an amateur musician who suffered a right-hemisphere stroke with fronto-parietal involvement. Results from a wide array of tests indicated preserved general intelligence and language skills, sparing of recognition of environmental sounds and musical instruments, and limited sparing of simple perceptual judgements of pitch height and rhythmic pattern. Overall, however, KB's musical deficits were sufficiently severe to warrant a diagnosis of amusia. KB's difficulties strongly implicate musical memory—both for discrimination and recognition of novel melodies and for recognition/identification of familiar instrumental melodies. Sparing of song melody recognition and identification in the presence of severe musical loss is therefore the most striking finding of this report.

Two potential accounts of differences between song and instrumental melodies are inadequate to explain the observed dissociation—one based on musical features, the other on relative familiarity. Post hoc tests of musical features revealed certain differences between song and instrumental melodies but, when the features were statistically controlled, scores remained superior for song melody as opposed to instrumental melody recognition.

Three findings contraindicate relative familiarity. First, although controls' familiarity ratings for the Famous Melodies test favoured song over instrumental melodies (as did recognition and identification rates in both the Famous Melodies and the Television Themes tests), the difference was not large compared to the difference shown by KB. Given the similarity between the musical background of KB and the controls, it seems likely that he, too, would have been very familiar with both types of melodies. Second, familiarity alone cannot explain KB's inability to recognise familiar songs presented with competing lyrics (Mismatched Songs test). Third, only melodies presented with lyrics were learned by KB, despite his equivalent exposure to and thus familiarity with melodies sung to "la" and instrumental melodies (Experiment 3).

Our arguments against accounts based on differences in musical characteristics and relative familiarity of song and instrumental melodies would be stronger if another patient were found demonstrating superior recognition and identification of familiar instrumental, as opposed to song, melodies (i.e., if a double dissociation were documented). In the meantime, however, it is instructive to consider alternate explanations for the dissociation observed in KB.


Figure 4. Percentage of Learning test melodies recognised by KB. (* indicates KB obtained a score of 0%.)


In the following discussion we propose that during song acquisition the temporal contiguity of lyrics and melody results in an elaborated representation with special properties. We draw upon the association-by-contiguity hypothesis (Crowder et al., 1990), a class of associative models put forward by Stevens, McAuley, and Humphreys (1998), and Peretz's (1993) model of melody recognition.

An associationist position (Crowder et al., 1990; Stevens et al., 1998) is one of several put forth to account for the integration effect (e.g., Samson & Zatorre, 1992; Serafine et al., 1984). As noted earlier, the paradigm for the integration effect engages novel songs and the results may not directly generalise to well-known songs. Nevertheless, the associationist position may contribute to an explanation of KB's dissociation. The association-by-contiguity hypothesis suggests that temporal contiguity suffices for association; thus, "melody and text are connected in memory, hence they act as recall cues for each other, yet each is stored with its independent integrity intact" (Crowder et al., 1990, p. 474). The class of associative model termed "conjunctive representation" by Stevens et al. (1998) is compatible; they suggest that melody and text are represented both by item information for the separate components and relational information for their pairing. The contiguous presentation of melody and lyrics in song may result in a particularly rich store of relational information. In the case of instrumental music the relational information is less salient because such contexts as the title of the piece are not temporally contiguous with the melody.

Next, in line with Peretz (1993), we propose how such notions might be implemented in the case of KB. When a normal listener hears a familiar song, two distinct but interconnected systems are engaged. One, the melody analysis system, leads to activation of a stored template of the melody in a dedicated tune input lexicon. The other, the speech analysis system, leads to activation of the stored templates of individual words in a dedicated speech input lexicon. Activation of one or both lexicons generates recognition in the listener, a sense of familiarity. Moreover, repeated coincidental activation of the two lexicons during song learning allows for the establishment of direct links between the two representations. Activation in one system influences the level of activity in the other system, producing, through a process of spreading activation, recognition and identification of song.
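To make the proposed mechanics concrete, a toy sketch (entirely ours, with arbitrary link weights and thresholds) shows how co-activation across the two lexicons could lift a degraded melody trace over a recognition threshold, while the same trace heard alone stays below it:

    # Toy spreading-activation sketch: two lexicons joined by learned links.
    TUNE_TO_SPEECH = 0.8     # link strength built up through co-occurrence
    THRESHOLD = 0.5          # activation needed for a sense of familiarity

    def recognise(melody_trace, lyric_trace):
        """Each trace is the bottom-up activation (0-1) delivered by the
        melody or speech analysis system; links let activation spread."""
        tune_node = melody_trace + TUNE_TO_SPEECH * lyric_trace
        speech_node = lyric_trace + TUNE_TO_SPEECH * melody_trace
        return tune_node >= THRESHOLD, speech_node >= THRESHOLD

    # A damaged melody analysis system yields only a weak trace (0.2).
    # Heard alone, as an instrumental melody, it stays sub-threshold:
    print(recognise(melody_trace=0.2, lyric_trace=0.0))   # (False, False)
    # Sung with its own familiar lyrics (strong lyric trace), spreading
    # activation lifts the tune node above threshold:
    print(recognise(melody_trace=0.2, lyric_trace=0.9))   # (True, True)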

We will argue that this proposal has considerable explanatory power for many of our present findings. First, it could account for the observation that controls found song melodies easier to recognise and identify than instrumental pieces, overall. Note that, according to this scheme, during the processing of a familiar instrumental melody there would be no activation of the speech analysis system nor of the speech input lexicon. Thus, the network of information relating to the instrumental piece would be less elaborate than that for a familiar song, given that it would not include lyrics or concepts associated with those lyrics. With less elaboration, it may be inferred that recognition is less likely.

Second, the present proposal could account for the severe disruption to KB's basic music abilities (Experiment 1), the remarkable sparing of his ability to recognise the melody lines of familiar songs (Experiment 2), and his limited residual capacity for learning new songs (Experiment 3). The loss of KB's basic music abilities reflects extensive damage to his melody analysis system. Exposure to a familiar song melody, however, may result in just enough activation to generate a simple, degraded melody template. Recall that in Experiment 1 KB showed limited sparing in two domains: first, in his ability to distinguish two notes in the mid- to high-frequency ranges on the basis of pitch height; and, second, in his ability to discriminate simple rhythmic patterns. These residual capacities might provide the basis for the creation of this simple melody template. This template, while not containing a rich, detailed, and highly accurate mapping of musical events, might nonetheless produce a level of activation sufficient to influence the level of activity in KB's declarative memory and in his speech input lexicon, producing recognition. KB's demonstrated—though severely limited—ability to learn new song melodies could be explained through repeated activation of a similar pathway of associative links.



Finally, the proposal can address the difficulty experienced by KB during the Mismatched Songs test (Experiment 2). The fact that he did not suffer from a language impairment suggests that his speech analysis system, his speech input lexicon, and the representations of word meanings in his declarative memory were intact. Given this, it is not surprising that he was able to specify accurately whether the lyrics he heard were novel or familiar across trials. As well, the activation of the representation of the lyrics to a familiar song would create, through activation of associative links between the two lexicons and memory for relational information, a set of melodic expectancies. If these expectancies were inconsistent with the simple melody template generated by KB in response to the incoming melody, the result would be the observed pattern of correct detection of a mismatch. KB's inability to recognise the melody correctly in this situation might reflect the effects of interference created in the speech input lexicon between two competing patterns of activation—a strong pattern produced through exposure to the lyrics, and a weaker pattern produced through spread of activation from the crude melody template in the tune input lexicon. Controls, being able to exploit the resources of their intact melody analysis systems and to activate a detailed representation of the melodies, would have been able to overcome any such interference and would achieve recognition of both lyrics and melodies.

Although presentation of novel lyrics would not lead to a set of melodic expectancies, it would lead to activation of the representations of individual words and their related meanings. Again, this pattern of activation would be inconsistent with that produced in response to the melody. Moreover, in the case of KB, it would far outweigh any activation produced by the simple melody template, thereby making melody recognition difficult or impossible.

For KB, song recognition has been spared even though the melody analysis system has been damaged. However, this damage is not complete: KB has a residual ability to generate simple, crude representations of familiar melodies. In the case of song melodies, there is sufficient activation in the melody analysis system to co-activate an intact representation both of the relational information and of the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist (unless, of course, as noted earlier, KB has previously associated words to the instrumental melody). In the absence of sufficient relational information and tightly connected associative links between melody and speech lexicons, recognition and identification of instrumental melodies does not occur.

Manuscript received 28 May 1999
Revised manuscript received 15 August 2000

Revised manuscript accepted 19 September 2000

REFERENCES

Anderson, C. (1996). Finale music notation software [Computer software]. Eden Prairie, MN: Coda Music Technology.
Bartlett, J.C., & Snelus, P. (1980). Lifespan memory for popular songs. American Journal of Psychology, 93, 551-560.
Besson, M., Faïta, F., Peretz, I., Bonnel, A.-M., & Requin, J. (1998). Singing in the brain: Independence of lyrics and tunes. Psychological Science, 9, 494-498.
Bigand, E. (1993). The influence of implicit harmony, rhythm and musical training on the abstraction of "tension-relaxation schemas" in tonal musical phrases. Contemporary Music Review, 9, 123-137.
Blair, J.R., & Spreen, O. (1989). Predicting premorbid IQ: A revision of the National Adult Reading Test. The Clinical Neuropsychologist, 3, 129-136.
Boltz, M. (1989). Rhythm and "good endings": Effects of temporal structure on tonality judgments. Perception and Psychophysics, 46, 9-17.
Boltz, M., & Jones, M.R. (1986). Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology, 18, 389-431.
Crowder, R.G., Serafine, M.L., & Repp, B.H. (1990). Physical interaction and association by contiguity in memory for the words and melodies of songs. Memory and Cognition, 18, 469-476.
Dowling, W.J., & Harwood, D.L. (1986). Music cognition. Orlando, FL: Academic Press.
Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover.
Gardiner, J.M., Kaminska, Z., Java, R.I., Clarke, E.F., & Mayer, P. (1990). The Tulving-Wiseman law and the recognition of recallable music. Memory and Cognition, 18, 632-637.


Goodglass, H., & Kaplan, E. (1983). Boston Diagnostic Aphasia Examination (BDAE). Philadelphia: Lee & Febiger. Distributed by Psychological Assessment Resources, Odessa, FL.
Gross, R. (1981). DX-Score [Computer software]. Rochester, NY: Eastman School of Music.
Halpern, A.R. (1984). Organization in memory for familiar songs. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 496-512.
Heaton, R.K. (1981). A manual for the Wisconsin Card Sorting Test. Odessa, FL: Psychological Assessment Resources.
Hébert, S., & Peretz, I. (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory and Cognition, 25, 518-533.
Java, R.I., Kaminska, Z., & Gardiner, J.M. (1995). Recognition memory and awareness for famous and obscure musical themes. European Journal of Cognitive Psychology, 7, 41-53.
Johns, M. (Ed.). (1980). Jumbo: The children's book (3rd ed.). Miami Beach, FL: Hansen House.
Koh, C.K., Cuddy, L.L., & Jakobson, L.S. (in press). Associations and dissociations between music training, tonal and temporal processing, and cognitive skills. Proceedings of the New York Academy of Science: Biological Foundations of Music.
Kolinsky, M. (1969). "Barbara Allen": Tonal versus melodic structure, Part II. Ethnomusicology, 13, 1-73.
Krumhansl, C.L. (1990). Cognitive foundations of musical pitch. New York: Oxford University Press.
Krumhansl, C.L. (1991). Music psychology: Tonal structures in perception and memory. Annual Review of Psychology, 42, 277-303.
Krumhansl, C.L. (1995). Music psychology and music theory: Problems and prospects. Music Theory Spectrum, 17, 53-80.
Krumhansl, C.L., & Kessler, E.J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89, 334-368.
Lengeling, G., Adam, C., & Schupp, R. (1990). C-Lab Notator SL/Creator SL (Version 3.1) [Computer software]. Hamburg, Germany: C-Lab Software GmbH.
Leonard, H. (1995). The TV fake book. Milwaukee, WI: Hal Leonard Corporation.
Liégeois-Chauvel, C., Peretz, I., Babaï, M., Laguitton, V., & Chauvel, P. (1998). Contribution of different cortical areas in the temporal lobes to music processing. Brain, 121, 1853-1867.
Mandler, G. (1980). Recognizing: The judgment of previous occurrence. Psychological Review, 87, 252-271.
Marin, O.S.M., & Perry, D.W. (1999). Neurological aspects of musical perception. In D. Deutsch (Ed.), The psychology of music (2nd ed., pp. 653-724). New York: Academic Press.
Maylor, E.A. (1991). Recognizing and naming tunes: Memory impairment in the elderly. Journals of Gerontology, 46, P207-P217.
Monahan, C.B., & Carterette, E.C. (1985). Pitch and duration as determinants of musical space. Music Perception, 3, 1-32.
Morrongiello, B.A., & Roes, C.L. (1990). Children's memory for new songs: Integration or independent storage of words and tunes? Journal of Experimental Child Psychology, 50, 25-38.
Narmour, E. (1990). The analysis and cognition of basic melodic structures. Chicago: University of Chicago Press.
Palmer, C., & Krumhansl, C.L. (1987a). Independent temporal and pitch structures in determination of musical phrases. Journal of Experimental Psychology: Human Perception and Performance, 13, 116-126.
Palmer, C., & Krumhansl, C.L. (1987b). Pitch and temporal contributions to musical phrase perception: Effects of harmony, performance timing, and familiarity. Perception and Psychophysics, 41, 505-518.
Patel, A.D. (1998). Syntactic processing in language and music: Different cognitive operations, similar neural resources? Music Perception, 16, 27-42.
Patel, A.D., & Peretz, I. (1997). Is music autonomous from language? A neuropsychological appraisal. In I.E. Deliège & J.E. Sloboda (Eds.), Perception and cognition of music (pp. 191-215). Hove, UK: Psychology Press.
Peretz, I. (1993). Auditory agnosia: A functional analysis. In S. McAdams & E. Bigand (Eds.), Thinking in sound: The cognitive psychology of human audition (pp. 199-230). Oxford: Clarendon Press/Oxford University Press.
Peretz, I. (1994). Amusia: Specificity and multiplicity. In I. Deliège (Ed.), Proceedings of the 3rd International Conference on Music Perception and Cognition (pp. 37-38). Liège, Belgium: European Society for the Cognitive Sciences of Music.
Peretz, I. (1996). Can we lose memory for music? A case of music agnosia in a nonmusician. Journal of Cognitive Neuroscience, 8, 481-496.
Peretz, I., Babaï, M., Lussier, I., Hébert, S., & Gagnon, L. (1995). Corpus d'extraits musicaux: Indices relatifs à la familiarité, à l'âge d'acquisition et aux évocations verbales [A repertory of music extracts: Indicators of familiarity, age of acquisition, and verbal associations]. Canadian Journal of Experimental Psychology, 49, 211-238.


Peretz, I., Belleville, S., & Fontaine, S. (1997). Dissociations entre musique et langage après atteinte cérébrale: Un nouveau cas d'amusie sans aphasie [Dissociation between music and language following cerebral haemorrhage: Another instance of amusia without aphasia]. Canadian Journal of Experimental Psychology, 51, 354-368.
Peretz, I., & Kolinsky, R. (1993). Boundaries of separability between melody and rhythm in music discrimination: A neuropsychological perspective. Quarterly Journal of Experimental Psychology, 46A, 301-325.
Peretz, I., Kolinsky, R., Tramo, M., Labrecque, R., Hublet, C., Demeurisse, G., & Belleville, S. (1994). Functional dissociations following bilateral lesions of auditory cortex. Brain, 117, 1283-1301.
Polk, M., & Kertesz, A. (1993). Music and language in degenerative disease of the brain. Brain and Cognition, 22, 98-117.
Raven, J.C. (1965). Guide to using the coloured progressive matrices. London: H.K. Lewis.
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique [Psychological examination in cases of traumatic encephalopathy]. Archives de Psychologie, 28, 286-340.
Russo, F.A., & Cuddy, L.L. (1996). Predictive value of Narmour's principles for cohesiveness, pleasingness, and memory of Webern melodies. In I. Deliège (Ed.), Proceedings of the 3rd International Conference on Music Perception and Cognition (pp. 439-443). Liège, Belgium: European Society for the Cognitive Sciences of Music.
Samson, S., & Zatorre, R.J. (1991). Recognition memory for text and melody of songs after unilateral temporal lobe lesion: Evidence for dual encoding. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 793-804.
Samson, S., & Zatorre, R.J. (1992). Learning and retention of melodic and verbal information after unilateral temporal lobectomy. Neuropsychologia, 30, 815-826.
Serafine, M.L., Crowder, R.G., & Repp, B.H. (1984). Integration of melody and text in memory for songs. Cognition, 16, 285-303.
Serafine, M.L., Davidson, J., Crowder, R.G., & Repp, B.H. (1986). On the nature of melody-text integration in memory for songs. Journal of Memory and Language, 25, 123-135.
Shipley, W.C. (1953). Shipley Institute of Living scale for measuring intellectual impairment. In A. Weider (Ed.), Contributions toward medical psychology: Theory and psychodiagnostic methods (pp. 751-756). New York: The Ronald Press Company.
Simon, W.L. (Ed.). (1982). Reader's Digest popular songs that will live forever. Pleasantville, NY: Reader's Digest Association Inc.
Steinke, W.R., Cuddy, L.L., & Holden, R.R. (1997). Dissociation of musical tonality and pitch memory from nonmusical cognitive abilities. Canadian Journal of Experimental Psychology, 51, 316-334.
Stevens, K., McAuley, J.D., & Humphreys, M.S. (1998). Relational information in memory for music: The interaction of melody, rhythm, text, and instrument. Noetica: Open Forum [On-line], 3(8). Available: http://www.cs.indiana.edu/Noetica/OpenForumIssue8/Stevens.html
Stone, C.P., Girdner, J., & Albrecht, R. (1946). An alternate form of the Wechsler Memory Scale. Journal of Psychology, 22, 199-206.
Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale (Rev. ed.). New York: The Psychological Corporation.
White, B. (1960). Recognition of distorted melodies. American Journal of Psychology, 7, 253-258.
World famous piano pieces. (1943). Toronto, Canada: Frederick Harris.
Zatorre, R.J. (1984). Musical perception and cerebral function: A critical review. Music Perception, 2, 196-221.

APPENDIX A

Experiment 1: Test procedures

Pitch Discrimination test

For the Pitch Discrimination test, a set of 55 pairs of piano tones was generated, 5 for each of 11 intervals ranging in size from unison to the octave. Each pair consisted of two pitches, each of 1 s duration, separated by a silent interval of 1 s duration. A 4 s silence separated trials. Each of the five trials for each interval type was played in a different octave range, from E1 to F7. For each of the 50 nonunison intervals, the pitch direction of the second note, higher or lower, was randomly determined. The total set of 55 interval pairs was recorded in a single random order. The task was to state whether the second note was higher than, lower than, or the same pitch as the first note. Three practice trials were provided, one for each of the three conditions: second note higher, lower, and the same as the first note.



Rhythm Same/Different test

The purpose of the Rhythm Same/Different test was to obtain a measure of KB's ability to discriminate changes within rhythms. A set of 15 rhythmic patterns was constructed, each designed to be easily apprehended by normal listeners. The following construction rules were obeyed. First, the number of notes in each pattern was limited (range = 2-7 notes; mean = 4.6 notes). Second, each pattern contained only one or two different note durations, and only four note durations were employed in total (0.25, 0.5, 0.75, and 1 s durations, corresponding to eighth, quarter, dotted quarter, and half notes, respectively). Six patterns were isochronous and the remaining nine each contained two durations from the above set. Third, a regular beat structure was imposed on the patterns. Notes tended to fall on the beat, with no syncopations.

An altered version of each rhythmic pattern was also generated, intended to be easily discriminable from the original. Altered versions were constructed by either lengthening or shortening the duration of one note in the group of notes (or "figure") within each rhythm. Each pattern was then randomly paired with either the original or altered version of the pattern, yielding a set of 15 pairs. A second set of 15 pairs was then created by pairing each pattern with the version not used in the first set. The two sets were recorded on cassette tape in a single random order, yielding 30 trials.

The patterns in each pair were separated by 1 s of silence; pairs were separated by 3 s. All notes were the same pitch (A4 = 440 Hz), and were recorded at the same loudness level. Participants were instructed to listen carefully to each pair of rhythms and to indicate whether the two rhythms were the same or different.

Metronome Matching test

The Metronome Matching test was not administered to controls. An isochronous sequence of beats was produced by a metronome set in turn to five different rates or tempos, from moderately slow to fast (as indicated on the metronome). KB was asked to imitate the sound of the metronome by tapping on a tabletop with a pencil held in his right hand. The five metronome settings chosen, in beats per minute (bpm), were 120 bpm, 160 bpm, 200 bpm, 260 bpm, and 320 bpm. The corresponding times in seconds between beat onsets were 0.5 s, 0.37 s, 0.3 s, 0.23 s, and 0.19 s. At each setting approximately 10 beats were sounded on the metronome, and then the metronome was stopped. At this point KB was instructed to tap a pencil at the same rate as the beats produced by the metronome. He was instructed to listen carefully to the metronome before beginning to tap, and to continue to tap until asked to stop by the experimenter. The metronome and his tapping were recorded onto tape. The first author and two independent raters (each a highly trained musician) subsequently listened to the tape and judged whether KB matched each metronome rate. Judges were also asked to characterise KB's tapping for each metronome rate in terms of how closely he was able to approximate the metronome rate. Following independent listenings of the tape, raters were found to be in 100% agreement on KB's performance, in terms of both matching and approximation judgements.
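For reference, the inter-onset times quoted above follow directly from the tempo settings (rounded to two decimals in the text):

    IOI (s) = 60 / tempo (bpm); e.g., 60/160 = 0.375, about 0.37 s, and 60/320 = 0.1875, about 0.19 s.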

Rhythm Tap-Back test

For the Rhythm Tap-Back test, KB and five controls were asked to reproduce a set of six short, simple rhythms. Five of the rhythms were constructed in the same manner as described above for the Rhythm Same/Different test. The remaining rhythm was identified to KB, prior to presentation, as the rhythm from the saying "Shave and a haircut—two bits," with which KB was known to be familiar. The experimenter, using a pencil on a tabletop, first tapped each rhythm. The pencil was then handed to the participant and the rhythm was tapped back. This procedure was repeated for each rhythm. All experimenter and participant rhythms were recorded onto tape.

Two sets of judgements were made on the rhythms. First, three independent raters with extensive music training listened to the tape recording for each participant. Each rater assigned a similarity rating, on a 10-point scale, to each pair of experimenter/participant rhythms. A "1" indicated no similarity, and a "10" indicated that the rhythms sounded identical. For the second set of judgements, each rater listened to an edited tape on which each individual rhythm tapped by the experimenter and by each participant was reproduced in a single random order. Each rater transcribed each of the rhythms into music notation. The transcriptions of each pair of experimenter/participant rhythms were then compared.

Incremental Melodies test

The Incremental Melodies test included the following 10 song melodies: Swanee River, Row Your Boat, Home on the Range, Red River Valley, Oh Susannah, Song of the Volga Boatmen, Hail Hail, Bicycle Built for Two, Auld Lang Syne, and Irish Jig. (Irish Jig was included as a song melody because KB had previously learned comic lyrics for this tune.) Each melody was recorded onto cassette tape starting with the first two notes, then the first three notes, first four notes, and so on, until the complete version of the song was played in its original tempo and key. A 2 s pause separated each version of the melody. Participants were instructed to listen to each trial in succession, until they were able to identify each melody correctly by stating the title or the lyrics. At this point the trials for that melody were discontinued, and the number of notes required for identification was recorded. Melodies were recorded in a single random order.




APPENDIX B

Song and instrumental melodies used in the Famous Melodies test

Each entry gives the title and composer; source page number(s) in Johns(a), Fuld(b), and/or W-F(c) follow in parentheses.

Instrumental melodies
1. Chant sans Paroles—P. Tchaikovsky (167)
2. The Sorcerer's Apprentice—P. Dukas (522)
3. Irish Washerwoman Jig—Trad (101, 306)
4. Minute Waltz, Op 64 No. 1—F. Chopin (312, 32)
5. Hungarian Dance #5—J. Brahms (280)
6. Largo, New World Symphony, Op. 95—A. Dvorak (558, 24)
7. Traumerei, Op 15, No. 7—R. Schumann (588, 63)
8. Prelude, Op 28, No. 7—F. Chopin (443, 138)
9. Minuet in G, Series 18, No. 12—L. van Beethoven (370)
10. Minuet (Don Giovanni)—W.A. Mozart (371, 110)
11. Joy of My Soul, Cantata 147—J.S. Bach (94)
12. The Entertainer—Scott Joplin (7)
13. Prelude in C (Well-tempered Clavichord)—J.S. Bach (83)
14. The Stars and Stripes Forever—John Philip Sousa (116, 535)
15. Baby Elephant Walk—Henry Mancini (38)
16. Simple Confession—Francis Thome (501, 35)
17. Anvil Chorus—G. Verdi (102)
18. Spring Song (#30 Lieder ohne Worte)—F. Mendelssohn (524, 39)
19. Nocturne, Op 9, No. 2—F. Chopin (392, 81)
20. March, Nutcracker Suite—P. Tchaikovsky (392, 613)
21. Flower Song—Gustav Lange (101)
22. Barcarolle—J. Offenbach (127, 144)
23. Theme, 1st Mvmt, Piano Con. #1—P. Tchaikovsky (182, 78)
24. Semper Fidelis—John Philip Sousa (58, 489)
25. La Cumparsita Tango—G. Rodriguez (272)
26. Mexican Hat Dance—F. Patrichala (210, 366)
27. Pizzicato (Sylvia)—L. Delibes (430)
28. Für Elise—L. van Beethoven (241, 58)
29. Dance of the Sugar Plum Fairies, Nutcracker Suite—P. Tchaikovsky (313)
30. Chicken Reel—Joseph M. Daly (93, 169)
31. The Harmonious Blacksmith—G.F. Handel (282)
32. Thunder and Blazes (Circus March)—Julius Fucik (373)
33. The Blue Danube—Johann Strauss, Jr. (147, 172)
34. Waltz in A Flat, Op 39, No. 15—J. Brahms (610, 48)
35. Andante (Orpheus)—C.W. von Gluck (62)
36. The Girl with the Flaxen Hair—Claude Debussy
37. Can Can Polka—J. Offenbach (59, 387)
38. Music Box Dancer—F. Mills (21)
39. Valse Lento (Coppelia)—L. Delibes (616, 50)

Song melodies
1. Ode to Joy—L. van Beethoven (563)
2. Blue Tail Fly (Jimmy Crack Corn)—Trad (86, 312)
3. Home on the Range—Trad (233, 273)
4. Red River Valley—Trad (113, 457)
5. In the Good Old Summertime—Evans and Shield (300)
6. It Had to be You—Jones and Kahn



7. Smiles—Callahan and Roberts (507)
8. Song of the Volga Boatmen—Trad (430, 520)
9. Hail, Hail the Gang's All Here—Trad (52, 261)
10. Aura Lee (Love Me Tender)—Trad (78, 117)
11. For He's A Jolly Good Fellow—Trad (61, 231)
12. Ach! Du Lieber Augustin (Have You Ever Seen a Lassie)—Trad (26, 399)
13. Swing Low, Sweet Chariot—Trad (9, 546)
14. While Strolling in the Park One Day—Ed Haley (203, 646)
15. By the Light of the Silvery Moon—Edwards and Madden (155)
16. Greensleeves—Trad (206, 259)
17. Smile Awhile—Egans and Whiting (581)
18. When Irish Eyes Are Smiling—Olcott and Graff (638)
19. I'm Looking over a Four-Leaf Clover—Woods and Dixon (298)
20. Silent Night—Mohr and Gruber (415, 500)
21. Four Strong Winds—Ian Tyson
22. The Flowers that Bloom in the Spring—Gilbert and Sullivan (22)
23. Cockles and Mussels—Trad (209)
24. The Yellow Rose of Texas—Trad (95, 661)
25. April Showers—Silvers and DeSylva (106)
26. I'm a Yankee Doodle Dandy—George M. Cohan (42, 659)
27. Land of Hope and Glory(d)—Music by E. Elgar (Pomp and Circumstance) (438)
28. Heart and Soul—Loesser and Carmichael
29. Hush Little Baby—Trad (87)
30. Scarborough Fair—Trad (82)
31. My Wild Irish Rose—C. Olcott (385)
32. Puff the Magic Dragon—Yarrow and Lipton
33. On the Sunny Side of the Street—Fields and McHugh
34. There Is a Tavern in the Town—Trad (572)
35. English Country Gardens(e)—Trad (33, 187)
36. Carolina in the Morning—Donaldson and Kahn
37. Blowin' in the Wind—Bob Dylan
38. Bicycle Built for Two (Daisy Bell)—Harry Dacre (97, 188)
39. Blow the Man Down—Trad (2, 146)
40. My Bonnie—Trad (248, 381)
41. The Band Played On—Ward and Palmer (123)
42. Happy Days are Here Again—Yellen and Ager (268)
43. Paddle Your Own Canoe—Trad (56)
44. Fascinating Rhythm—George and Ira Gershwin
45. Row, Row, Row Your Boat—Trad (234, 475)
46. Shenandoah—Trad (98)
47. I'm Popeye the Sailor Man—Sammy Lerner (72)
48. American Patrol(f)—Music by W. Meacham, words by Edgar Leslie (14, 97)
49. As Time Goes By—H. Hupfeld (111)
50. Rock of Ages—Trad (345, 469)
51. Cielito Lindo—Carlos Fernandez (327, 121)
52. In My Merry Oldsmobile—Gus Edwards (99, 299)
53. Michael Row the Boat Ashore—Trad (83)
54. In the Shade of the Old Apple Tree—Williams and Van Alstyne (329)
55. Swanee River (Old Folks at Home)—S. Foster (37, 407)




56. Oranges and Lemons—Trad (36)
57. Tea for Two—Caesar and Youmans (572)
58. Ta-ra-ra Boom Der-e—Henry Sayers (97, 570)
59. Auld Lang Syne—Trad (411, 115)
60. On Top of Old Smokey—Trad (239, 416)
61. There'll Be A Hot Time in the Old Town Tonight—Hayden and Metz (26, 278)
62. On Moonlight Bay—Madden and Wenrich
63. I Gave My Love A Cherry—Trad (81)
64. I Left My Heart in San Francisco—Cross and Cory (17)
65. When Johnny Comes Marching Home Again—Trad (212, 639)
66. Oh Susannah—S. Foster (123, 404)
67. When the Saints Go Marching In—Trad (388, 641)
68. Dixie Land—Dan Emmet (39, 196)

(a) In Johns, M. (Ed.). (1980). Jumbo: The children's book (3rd ed.). Miami Beach, FL: Hansen House.
(b) Opening theme listed in Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover.
(c) In World famous piano pieces. (1943). Toronto, Canada: Frederick Harris.
(d) Hymn, lyrics: "Land of hope and glory, mother of the free" [Source: Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover].
(e) Earliest source of this melody was as a song from "Quaker's Opera" in 1728; later known as the song "Vicar of Bray"; later popularised as "Country Gardens" by P. Grainger in 1919 [Source: Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover]. New title "English Country Gardens" and lyrics written by R. Jordan in 1962 [Source: Lax, R., & Smith, F. (1989). The great song thesaurus (2nd ed.). New York: Oxford University Press].
(f) Song by E. Leslie, "We Must Be Vigilant," was sung to this melody in 1942 and during World War II [Source: Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover].
