Steinke, W. R., Cuddy, L. L., & Jakobson, L. S. (2001). Dissociations among functional subsystems governing melody recognition after right-hemisphere damage. Cognitive Neuropsychology, 18(5), 411-437.



Cognitive Neuropsychology
Publication details, including instructions for authors and subscription information:

    http://www.tandfonline.com/loi/pcgn20

Dissociations among functional subsystems governing melody recognition after right-hemisphere damage
Willi R. Steinke, Lola L. Cuddy & Lorna S. Jakobson
Published online: 09 Sep 2010.

To cite this article: Willi R. Steinke, Lola L. Cuddy & Lorna S. Jakobson (2001): Dissociations among functional subsystems governing melody recognition after right-hemisphere damage, Cognitive Neuropsychology, 18:5, 411-437

    To link to this article: http://dx.doi.org/10.1080/02643290125702


DISSOCIATIONS AMONG FUNCTIONAL SUBSYSTEMS GOVERNING MELODY RECOGNITION AFTER RIGHT-HEMISPHERE DAMAGE

Willi R. Steinke and Lola L. Cuddy
Queen's University at Kingston, Canada

Lorna S. Jakobson
University of Manitoba, Winnipeg, Manitoba, Canada

This study describes an amateur musician, KB, who became amusic following a right-hemisphere stroke. A series of assessments conducted post-stroke revealed that KB functioned in the normal range for most verbal skills. However, compared with controls matched in age and music training, KB showed severe loss of pitch and rhythmic processing abilities. His ability to recognise and identify familiar instrumental melodies was also lost. Despite these deficits, KB performed remarkably well when asked to recognise and identify familiar song melodies presented without accompanying lyrics. This dissociation between the ability to recognise/identify song vs. instrumental melodies was replicated across different sets of musical materials, including newly learned melodies. Analyses of the acoustical and musical features of song and instrumental melodies discounted an explanation of the dissociation based on these features alone. Rather, the results suggest a functional dissociation resulting from a focal brain lesion. We propose that, in the case of song melodies, there remains sufficient activation in KB's melody analysis system to coactivate an intact representation of both associative information and the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist; thus recognition and identification do not occur.

    INTRODUCTION

This study is concerned with the recognition of familiar melodies, both song melodies (melodies learned with lyrics) and instrumental melodies (melodies learned without lyrics). We report what we believe to be the first recorded instance of a dissociation between recognition of song versus instrumental melodies secondary to brain injury.

We present our observations on a 64-year-old amateur musician, KB, who suffered a right-hemisphere stroke, and on controls matched for age and level of music training. Despite evidence of impaired performance on a variety of musical tasks, KB displayed remarkably spared recognition of familiar song melodies even when these melodies were presented without accompanying lyrics.

COGNITIVE NEUROPSYCHOLOGY, 2001, 18 (5), 411-437

© 2001 Psychology Press Ltd
http://www.tandf.co.uk/journals/pp/02643294.html DOI: 10.1080/02643290042000198

Requests for reprints should be addressed to Lola L. Cuddy, Department of Psychology, Queen's University, Kingston, Ontario K7L 3N6, Canada (Email: [email protected]).

We thank KB and his wife for their invaluable cooperation. We thank Dr I. Peretz for thoughtful and constructive comments on an earlier draft of this paper. Dr Sylvie Hébert and Dr Peretz provided ideas and insights that inspired the present discussion. We acknowledge detailed and helpful advice from two anonymous referees. Dr Jill Irwin and members of the Acoustical Laboratory at Queen's University provided support and assistance. The research was supported by a Medical Research Council of Canada Studentship to the first author, and by research grants from the Natural Sciences and Engineering Research Council of Canada to the second and third authors.


The results address two interrelated issues in music neuropsychology. The first issue concerns the processes and functions governing melody recognition. Consideration of this issue leads to a second: the nature of the integration of melody and speech in the acquisition, retention, and recognition of songs. To address these issues, we draw upon research results from neurologically intact populations (for reviews, see Dowling & Harwood, 1986; Krumhansl, 1990, 1991) and reports of selective loss and sparing following brain damage (for reviews, see Marin & Perry, 1999; Patel & Peretz, 1997; Peretz, 1993).

Melody recognition

Peretz (1993, 1994) and others (e.g., Zatorre, 1984) have argued against a simple division of music and language, each assigned to separate cerebral hemispheres. Rather, both music and language are themselves viewed as divisible into components which may or may not be shared. Division of music in the Western tonal system would include melody, with which this paper is primarily concerned, and also dynamics, grouping, harmony, timbre, and so forth. Within the present concern with melody, there are at least two further components.

Musical melodies can be characterised by a particular set of pitch relations and temporal relations among their notes. It has been suggested that the processing of pitch relations and of temporal relations contributes separately to melody recognition (Peretz & Kolinsky, 1993), a suggestion supported by research with neurologically intact (e.g., Monahan & Carterette, 1985; Palmer & Krumhansl, 1987a, b) and compromised participants (e.g., Peretz & Kolinsky, 1993). Listeners may recognise melodies given pitch or temporal relations alone, although they are able to recognise far more melodies from pitch than from temporal relations (Hébert & Peretz, 1997; White, 1960).

There may also be differences in the neural substrates for processing pitch and temporal relations, with the right hemisphere more strongly implicated for pitch, but the evidence to date is not conclusive (Marin & Perry, 1999).

The issue is complicated by indications that each source of information may influence the processing of the other (see, for example, Bigand, 1993; Boltz, 1989; Boltz & Jones, 1986). As pointed out by Kolinsky (1969), the same pattern of pitches can be perceived as two different melodies if the temporal pattern is changed. (Consider, for example, that the first seven pitches of "Rock of Ages" and "Rudolph, the Red-Nosed Reindeer" are identical.) Thus, the extent to which pitch and temporal effects are additive or interact with one another remains in question.

What does appear to be increasingly clear is that the pitch processing involved in melody recognition is separable from verbal abilities (Peretz, 1993; Peretz, Belleville, & Fontaine, 1997; Polk & Kertesz, 1993) and other nonmusic cognitive abilities (Koh, Cuddy, & Jakobson, in press; Steinke, Cuddy, & Holden, 1997). The relation between pitch processing ability and music training is modest and does not account for the dissociation from cognitive abilities (Koh et al.; Steinke et al.).

    Song

The case of song processing poses a somewhat different problem, one that necessitates consideration of text in addition to pitch and temporal components. A song, by definition, consists of integrated melody and speech, or text. Yet how or where this integration is achieved is not known. Processing of song and memory for song has received relatively little attention in the neuropsychological literature.

Song is a universal form of auditory expression in which music and speech are intrinsically related. Song thus represents an ideal case for assessing the separability or commonality of music and language. ... In song memory, the musical and the linguistic component may be represented in some combined or common code. (Patel & Peretz, 1997, p. 206)

According to Peretz (1993), melody recognition involves the activation of a stored melody in a melody lexicon. The lexicon may be activated by either or both of two separate kinds of information, resulting from the analysis of pitch and temporal features, respectively. In addition, lyrics and song titles may


    provide a third route through which song melodies(but not instrumental melodies) are recognised.

Others have questioned whether the words and melodies of songs are processed and stored independently and have reported an integration effect (Crowder, Serafine, & Repp, 1990; Samson & Zatorre, 1991; Serafine, Crowder, & Repp, 1984). The experiments in these studies typically employed a forced-choice recognition memory paradigm. Listeners were asked to study and then to recognise melody, lyrics, or both under conditions where melody and lyrics either matched or did not match. Matches between melody and lyrics resulted in higher recognition rates than mismatches. Recognition of melody and lyrics was facilitated for the original song pairing the two components as opposed to a new song mixing equally familiar components. Similar results, using the same paradigm, have been obtained by Morrongiello and Roes (1990) and by Serafine, Davidson, Crowder, and Repp (1986).

Peretz et al. (1994, p. 1298; see also Patel & Peretz, 1997) have pointed out that the use of two-alternative forced-choice paradigms may limit the generality of these findings. They suggest that listeners may have formed integrated representations of melody and lyrics in memory to facilitate recognition of novel songs when unfamiliar, interfering melodies and lyrics appeared among the response choices. In contrast, for the representation of well-known songs, a strategy of encoding melody and text separately may be more effective since, in everyday (i.e., nonexperimental) situations, a given melodic line will typically carry different lyrics or verses. Peretz et al. (1994) conclude that for familiar songs encoding melody and lyrics independently "would be parsimonious and efficient" (p. 1298).

In related research, Besson, Faïta, Peretz, Bonnel, and Requin (1998) asked professional musicians to listen to well-known opera excerpts ending with semantically congruous or incongruous words sung either in or out of key. Event-related potentials recorded for the co-occurrence of semantic and harmonic (musical) violations were well predicted by an additive combination of the results recorded for each violation alone. The authors argue for the view that semantic and harmonic violations are processed independently. On the other hand, Patel (1998) has reported event-related potential data supporting the sharing of linguistic and musical resources where syntactic relations are violated. These recent findings illustrate the complexity of the systems engaged in the acquisition and representation of song.

    Purpose of the present report

In the present report, we describe the case of KB, an amateur musician who demonstrates a highly unusual type of amusia. This case provides an opportunity to address questions about the organisation of melody recognition and the relationship of verbal processing to nonverbal processing in song recognition. Following the case report of KB and a description of general procedures, we report a series of test findings in three experiments.

In Experiment 1, we describe the results of tests of pitch perception, rhythm perception, rhythm production, and melody recognition abilities for KB and for a group of neurologically intact controls. Through this testing, we uncovered a remarkable and intriguing dissociation in KB. Although he showed profound deficits in the music tests, he was nonetheless able to recognise the melodies of well-known songs played without lyrics. The prerequisite for successful recognition was that KB had learned the melodies with lyrics premorbidly. Instrumental melodies readily recognised by controls could not, in contrast, be reliably recognised by KB. In Experiment 2, we report the results of additional tests designed to examine the sparing of song memory in greater detail. We replicated the difference in recognition of song versus instrumental melodies for KB. Analyses of the acoustical and musical features of both types of melody led us to discount an explanation of the difference on the basis of these features alone. Experiment 3 addresses the question of whether KB has retained the ability to learn new melodies. The results of this experiment indicated some residual ability to recognise novel melodies learned with lyrics.


    CASE REPORT: KB

KB is a right-handed male with 13 years of formal education, including completion of Grade 10, Police College, and a number of additional training courses. He worked as a policeman, prison guard, and process server (of official court documents) until the time of his retirement at age 55. KB reported a moderately extensive background in music. He played trumpet and drums for approximately 3 years in high school. After high school, he sang both in a barbershop quartet and in a number of amateur operettas for about 10 years in total. KB reported limited ability to read music notation and stated that he typically learned his parts by ear. His spouse reports that KB often sang at home and had a collection of classical and popular records that he played regularly. She stated that he had a fine singing voice. Unfortunately, no recordings of KB's singing exist.

    Neurological history

KB was admitted to hospital in July 1994, at age 64, complaining of left-sided paralysis and speech production difficulties. His speech impairment resolved after a few days, but his left-sided weakness has persisted. KB underwent a series of CT scans, the first shortly after admission, and the others 6, 8, and 12 months later. These scans revealed evidence of focal damage in the right fronto-parietal area (see Figure 1), and to a lesser extent in the right cerebellum (both in the posterior-inferior aspect, and in the superior cerebellar peduncle-pontine region). A lacunar infarct was also noted in the right lenticular nucleus (G. Bearalot, personal communication, February, 2000). In addition to focal damage, diffuse brain atrophy consistent with age was noted, although it is highly unlikely that these latter changes could account for KB's amusia (B. Pearse, personal communication, April, 1996).

KB's previous neurological history included a self-report of a mild stroke suffered in 1968, when he was 38 years of age. According to the patient and his wife, this event resulted in an isolated, transient memory impairment from which KB recovered completely. No objective diagnostic examinations were undertaken at the time of this event, and recent CT scans did not reveal the presence of an old infarct.

    Initial neuropsychological assessment

During the seventh through tenth weeks after his stroke in 1994, KB's intellectual and emotional functioning was assessed by a staff psychologist with a standard neuropsychological test battery (see Table 1). Test results from the Wechsler Adult Intelligence Scale-Revised (WAIS-R) (Wechsler, 1981) revealed a Full Scale IQ in the normal range, but a significantly lowered Performance IQ score. In order to obtain an estimate of premorbid IQ, the North American Adult Reading Test (NAART) (Blair & Spreen, 1989) was administered. The consistency between estimated premorbid Verbal IQ and postmorbid Verbal IQ scores suggests that KB's stroke left his verbal abilities intact. Nonverbal abilities, in contrast, were impaired, as suggested by scores on the Performance subscale of the WAIS-R. KB's low score on the Raven's Coloured Progressive Matrices (Raven, 1965), a nonverbal test used to measure general intellectual functioning, also indicated a decline in nonverbal abilities.


Table 1. Demographic data and initial neuropsychological assessment

Age (years): 64
Sex: Male
Education (years): 13
Wechsler Adult Intelligence Scale (Revised): Full Scale IQ = 92; Verbal IQ = 103; Performance IQ = 80
Wechsler Memory Scale-Form 1: Memory quotient = 109
Trail Making Test


On the Wechsler Memory Scale-Form 1 (Stone, Girdner, & Albrecht, 1946), KB also scored in the normal range, supporting clinical observation and KB's self-report that memory was unimpaired following the stroke. However, on a number of other tests considered sensitive to brain damage, described below, KB's performance was poor (below the 25th percentile). His low score on the Trail Making Test indicated difficulties in sequencing of symbols, and alternating between alphabetic and numeric symbols. The Wisconsin Card Sort (Heaton, 1981) requires abstraction ability to determine correct sorting principles, as well as mental flexibility to accommodate to shifting criterion sets; poor performance is associated with frontal lobe damage. Low scores on the Rey-Osterrieth Complex Figure Test (Rey, 1941) and the House-Tree-Person Test are frequently seen after right posterior damage. KB's performance on the House-Tree-Person Test, in particular, was marked by adoption of a piecemeal approach, perseverative tendencies, left-sided neglect, working over, and much detail. Signs of emotional lability and impulsivity were also noted.

KB reported no history of hearing loss and no changes in auditory acuity post-stroke. He showed intact ability to identify nonspeech, environmental sounds (compact disc recording Dureco 1150582;


Figure 1. Transverse CT scans obtained from KB 12 months post-stroke. Note that the right side of the brain is shown on the left in each image. The four images, labelled a-d, proceed inferiorly in 10 mm increments. The scans show focal damage in the right fronto-parietal lobe (see text for details).



see Table 1). In addition, when tested with a recorded set of solo instrumental pieces, he was able to identify 13 of 17 musical instruments correctly (including piano, harp, trumpet, French horn, tuba, piccolo, flute, oboe, clarinet, double bass, cello, viola, and violin). His misidentifications included labeling both a bassoon and a saxophone as an oboe, mistaking a guitar for a piano, and being unable to name a harpsichord.

The results of the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass & Kaplan, 1983) revealed that KB did not meet the criteria for a diagnosis of aphasia. However, the BDAE did reveal impairments in speech prosody, singing, and rhythm reproduction. KB was judged to have an obvious lack of melody of speech in his spontaneous speech (which both the patient and his wife confirmed was not evident prior to his stroke). Deficiencies in singing and rhythm production were also noted in a subtest of the BDAE that required KB to sing a familiar melody of his own choosing and repeat a set of four rhythms tapped by the examiner.

During the period of KB's 10-week, in-patient rehabilitation programme, all stroke patients on the rehabilitation unit of the hospital were being screened by the first author on tests designed to assess sensitivity to tonality in music. The initial set of tests carried out with KB revealed the presence of amusia. Prior to testing, KB had not complained of any musical deficits; nor did he voice any complaints after hearing and rating the melodies in the tests of tonal sensitivity. It was only after singing aloud that KB described himself as sounding "flat." Subsequently, however, KB reported that music no longer had meaning for him, and it was noted that he no longer listened to his record collection and chose not to attend scheduled music activities in his residential care facility.

    GENERAL PROCEDURES

    Controls

Ten males and 10 females, all right-handed, with no reported history of neurological disease or major hearing loss, participated in Experiments 1 and 2. Participants' age and musical background as singers approximated those of KB. They were recruited from local choirs and vocal ensembles. The ages of the control participants ranged from 59 to 71 years, with an average of 66 years. All were presently participating in singing activities, and most reported singing or playing musical instruments for much of their lives. With the exception of one participant who had been giving piano lessons for approximately 10 years, none derived any income from music activities. Because none of the participants made a living from playing or singing music, all were considered to be amateur musicians.

Controls had a mean of 15 years of formal education (range of 10-20 years). The Shipley Institute of Living Scale (Shipley, 1953) was used to obtain an estimate of overall intelligence. Using a conversion factor, the mean Shipley score for controls was estimated to equal a WAIS-R Full Scale IQ score of 119.9 (range 105-133).

    Materials and methods

All music perception tests, with the exception of those in the University of Montreal Musical Test Battery (provided on cassette tape by I. Peretz), were constructed and recorded in the Acoustical Laboratory of the Department of Psychology, Queen's University. Musical tones were synthesised timbres, created by a Yamaha TX81Z synthesiser controlled by a Packard Bell computer running Finale music-processing software (Anderson, 1996). Two exceptions were the Probe-tone Melody test (Experiment 1), for which the synthesiser was controlled by a Zenith Z-248 computer running DX-Score software (Gross, 1981), and the Chord Progressions test (Experiment 1), for which chords were created on a Yamaha TX802 synthesiser controlled by an Atari 1040ST computer running Notator music-processing software (Lengeling, Adam, & Schupp, 1990). To provide variety for participants, different musical timbres were used for different tests. The names of all music tests, and the timbres used, are given in Table 2.


Stimuli were recorded onto Maxell XL II-S cassette tapes using a Panasonic RX-DT707 portable tape player. A Dixon DM1178 microphone was used for recording rhythmic reproductions (Experiment 1) and sung materials (Experiment 3). Test stimuli were reproduced by the tape player in free field at a comfortable loudness, as determined by each participant (about 55 to 70 dB SPL).

KB was tested in several different locations: an interview room at the stroke rehabilitation unit of a local hospital, his bedroom at a skilled nursing facility, and his family residence. Test conditions were uniformly quiet across test locations, with no obtrusive background noises present. Test sessions lasted approximately 30 minutes each and were conducted over a period of 16 months, beginning in August 1994 when KB was 64 years old. KB completed the tests in Experiment 1 first, followed by the tests in Experiment 2 and Experiment 3. All test instructions were read to KB, and all responses were recorded by the experimenter. KB was given as much time as needed to ask questions following practice trials (where applicable), to clarify instructions whenever necessary, and to complete tasks. It may be noted that KB was a cooperative and well-motivated participant throughout testing. KB displayed a good-natured willingness to continue listening and responding to test trials even when it was apparent to both KB and the experimenter that his sensitivity to test stimuli was severely compromised and that he had resorted to guessing.

Controls were tested in a quiet room in the Psychology Department at Queen's University. They were most often tested individually, but occasionally in groups of two to three, depending on the particular experiment and scheduling demands. Test sessions lasted 2 to 3 hours each and were conducted over 6 months. Unless otherwise indicated, all 20 controls provided data for each of the tests described below.

The order of music tests presented in Experiments 1 and 2 was counterbalanced across controls, with three restrictions. The first restriction was that the seven tests in the University of Montreal Musical Test Battery were always presented in the same order and during the same test session, with no other tests intervening. This set of tests was considered as a single unit for purposes of counterbalancing. The second restriction concerned time constraints. When insufficient time remained in a test session to complete the next scheduled test, a shorter test was substituted. The third restriction concerned the number of controls present. When two or three controls were present during the same session, tests which required individual recording or taping of responses were not administered and other tests were substituted.

Prior to testing, written consent was obtained from each control. Demographic data were collected first, followed by Shipley tests (Shipley, 1953) and the music tests. Published test protocols were followed for administration of the Shipley tests. Participants were given written instructions for the music tests, and were given as much time as needed to read the instructions, to become familiar


Table 2. Music tests used in Experiments 1, 2, and 3: Name and timbre

University of Montreal Musical Test Battery
  Contour Discrimination test: Synthesised piano
  Scale Discrimination test: Synthesised piano
  Interval Discrimination test: Synthesised piano
  Rhythm Contour test: Synthesised piano
  Metric test: Synthesised piano
  Familiarity Decision test: Synthesised piano
  Incidental Memory test: Synthesised piano
Tests of basic pitch perception and memory
  Pitch Discrimination test: Grand piano (A1)(a)
  Pitch Memory test: New electro (A12)(a)
Tests of tonality
  Probe-tone Melody test: Wood piano (A15)(a)
  Familiar Melodies test: Pan floot (B12)(a)
  Novel Melodies test: Pan floot (B12)(a)
  Chord Progressions test: Sine-wave components(b)
Tests of rhythm perception and production
  Rhythm Same/Different test: Electro piano (A11)(a)
  Metronome Matching test(c): Metronome
Tests of melody recognition and learning
  Incremental Melodies test: Electric grand (A5)(a)
  Famous Melodies test: New electro (A12)(a)
  Television Themes test: Grand piano (A1)(a)
  Mismatched Songs test: Voice (W. Steinke)
  Learning test(c): Electro piano (A11)(a)

(a) Name and factory preset designation of Yamaha TX81Z synthesiser.
(b) Produced by Yamaha TX802 synthesiser.
(c) Test not administered to control participants.


with the response sheets (where applicable), to ask questions after completion of practice trials (where applicable), and to complete the tasks. No feedback was given following test trials, but instructions were clarified whenever necessary. Participants were verbally debriefed at the conclusion of the testing. In lieu of individual compensation, a donation was made to the church or organisation from which participants were recruited.

EXPERIMENT 1: TESTS OF BASIC MUSICAL ABILITIES

Experiment 1 administered basic tests of auditory and musical skills to KB and controls. Pitch perception was assessed in different ways, reflecting different subcomponents of musical pitch processing. Pitch tests measured simple pitch discrimination, pitch memory, sensitivity to changes in pitch contour of a novel melody (changes to the sequences of ups and downs in pitch), and sensitivity to scales, intervals, and tonality, part of the grammar of music. Tests of rhythm perception and production assessed sensitivity to discrimination of temporal alterations to auditory sequences, sensitivity to meter (or periodic accenting) of a sequence, and motor control of tempo, or rate of events. Finally, participants' ability to recognise well-known melodies, and their capacity for incidental learning of novel melodies, was assessed.

    Tests

The 17 tests administered are listed in Table 3 with the source and a brief description of each test. Detailed descriptions of 12 tests can be found in two published reports (Liégeois-Chauvel, Peretz, Babaï, Laguitton, & Chauvel, 1998; Steinke et al., 1997). Detailed procedures for five tests developed for the present experiment may be found in Appendix A.


Table 3. Title, source, and description of tests used in Experiment 1

Tests of pitch discrimination
  Pitch Discrimination (New(a)): Discriminate pitch height of two tones
  Pitch Memory (S,C,&H(b)): Rate a test tone as present/not present in a preceding sequence of tones
  Contour Discrimination (UM(c)): Detect within-scale contour alteration
  Scale Discrimination (UM): Detect outside-scale interval alteration
  Interval Discrimination (UM): Detect within-scale interval alteration
Tests of tonality
  Probe-tone Melody (S,C,&H): Rate 12 chromatic test tones on degree of fit with preceding tonal melody
  Familiar Melodies (S,C,&H): Rate endings of familiar song melodies varying in level of tonality
  Novel Melodies (S,C,&H): Rate endings of novel melodies varying in level of tonality
  Chord Progressions (S,C,&H): Rate degree of expectancy in chord progressions varying in level of tonality
Tests of rhythm perception
  Rhythm Same-Different (New): Discriminate standard from same or altered comparison rhythm
  Rhythm Contour (UM): Detect alterations to duration of a single note in a novel melody
  Metric (UM): Classify novel melodies as march or waltz
Tests of rhythm reproduction
  Metronome Matching (New): Tap in time with beats produced by metronome
  Rhythm Tap-Back (New): Repeat rhythms tapped by examiner
Tests of melody recognition
  Familiarity Decision (UM): Classify melodies as novel or familiar
  Incremental Melodies (New): Identify well-known melodies after presentation of 2 notes, 3 notes, 4 notes, etc.
  Incidental Memory (UM): Recognition of novel melodies previously heard in UM test battery

(a) Test constructed for present study.
(b) Test from Steinke, Cuddy, and Holden (1997).
(c) Test from University of Montreal Musical Test Battery (Liégeois-Chauvel et al., 1998).


    Results

Results of the tests for KB and the present controls are summarised in Table 4. Where available, data from controls from two previous studies are included. Controls from Steinke et al. (1997) were 22 participants who, though younger than KB (age range 20-40), were also amateur musicians. Controls from Liégeois-Chauvel et al. (1998) were 24 participants (mean age 32 years) for whom the age range (14-72 years) included the age of KB and the present controls. These participants, however, had less musical experience; only two participants reported a few years of music training. Despite these differences, the controls from the present and previous studies yielded similar test results, with remarkably similar within-group ranks for mean accuracy as a function of test type.

Table 4 reveals KB performed poorly on or failed to complete most tests of musical pitch perception. The Pitch Discrimination test was the only such test where KB was moderately successful, although on average his score fell below the range of scores obtained by the controls. Further inspection of the data revealed that, whereas controls did equally well across the octave range tested, KB improved across the octave range. For KB, accuracy in the range E1 to E3 was 37.5%, but if one tone of the pair was above E3, and the other below, accuracy rose to 60%. If both tones were above E3, accuracy was


Table 4. Results of tests of pitch discrimination and memory, tonality, rhythm perception and reproduction, and melody recognition (Experiment 1)

Score(a)
Test | KB | Controls | Other controls(b,c)

Tests of discrimination and memory
Pitch Discrimination | 67.3 | 95.6 (75.0-100) | Not administered
Pitch Memory | x | 70.8 (63.9-81.9) | 76.0(b) (56.9-84.7)
Contour Discrimination | 43.3(d) | 84.8 (73.3-100) | 91.0(c) (70.0-100)
Scale Discrimination | 50.0(d) | 85.3 (70.0-100) | 94.6(c) (86.7-100)
Interval Discrimination | 50.0(d) | 82.3 (70.0-100) | 90.6(c) (70.0-100)

Tests of tonality
Probe-tone Melody | x | .66 (.13-.93) | .74(b) (.25-.90)
Familiar Melodies | x | .74 (.55-.94) | .78(b) (.36-.88)
Novel Melodies | x | .79 (.20-.91) | .81(b) (.69-.92)
Chord Progressions | x | .58 (.23-.81) | .64(b) (.11-.77)

Tests of rhythm perception
Rhythm Same/Different | 73.3 | 96.5 (86.7-100) | Not administered
Rhythm Contour | 43.3(d) | 94.2 (80.0-100) | 92.2(c) (76.7-100)
Metric | 63.3(d) | 85.6 (53.3-100) | 82.2(c) (66.7-96.7)

Tests of rhythm reproduction
Metronome Matching | 0.0 | Not administered | Not administered
Rhythm Tap-Back | 16.6 | 93.3 (83-100) | Not administered

Tests of melody recognition
Familiarity Decision test | 80.0 | 96.0 (80-100) | 98.0(c) (90-100)
Incremental Melodies test | 4 (2-5) | 4 (2-9) | Not administered
Incidental Memory test | 53.3(d) | 83.3 (56.6-100) | 88.3(c) (73.3-100)

(a) All scores are percentage correct (KB) and mean percentage correct (controls), except scores on tests of tonality, which are mean Spearman rank-order correlations of participant ratings with music-theoretic predicted levels of tonality, and scores on the Incremental Melodies test, which are median number of notes needed for identification. Ranges are presented in parentheses. "x" means KB admitted to guessing and failed to complete the test.
(b) Data from Steinke, Cuddy, and Holden (1997).
(c) Data from Liégeois-Chauvel et al. (1998).
(d) Score is not significantly different from chance.


82.3%, a score well above chance and within the range of scores obtained by the controls.[1]

Limited sparing was also noted for the Rhythm Same/Different test, which assessed the ability to discriminate rhythmic changes. KB's score on the Rhythm Same/Different test was above chance, although well below the range of control scores.

Although he failed to show incidental memory for melodies he had been exposed to during the assessment (the Incidental Memory test), KB performed surprisingly well on two additional tests. In the first, the Familiarity Decision test, participants were asked to classify melodies as familiar or unfamiliar. Familiar melodies included both well-known song and instrumental melodies; unfamiliar melodies were novel. On the Familiarity Decision test, KB stated that 7 of the 10 well-known melodies were familiar, and 9 of the 10 novel melodies were unfamiliar. On average, controls judged 97% of the well-known melodies as familiar and 95% of the novel melodies as unfamiliar. This level of performance from KB was quite remarkable, given his loss of pitch and rhythm processing. Even more remarkable, the results of the Familiarity Decision test suggested that KB was able to recognise the melody lines of well-known songs, but not well-known instrumental pieces. KB recognised all five of the well-known songs in the experimental set on the basis of their melodies alone. He also recognised two instrumental pieces, the Wedding March and the William Tell Overture, but volunteered that he had at one time learned lyrics for these melodies. He failed to recognise the opening theme from Beethoven's Fifth Symphony and Ravel's Bolero (both of which were highly familiar to age-matched controls), and recognised the Blue Danube Waltz simply as "a waltz".

The second test of melody recognition on which KB performed well was the Incremental Melodies test. KB and controls correctly identified all 10 song melodies, requiring a median number of only 4 notes to do so (range for KB, 3-5 notes; range for controls, 2-9 notes).

    Discussion

The results of this initial set of tests were consistent with those of previous studies (e.g., Peretz, 1994) in which neurological damage has been shown to result in loss or impairment of (previously intact) abilities to discriminate pitch and rhythm, process tonal relationships, and recognise familiar melodies. KB's pitch processing abilities were shown to be somewhat more impaired than two of the three amusic patients described in detail by Peretz and colleagues (CN and GL) and comparable to a third, IR (e.g., Peretz, 1996; Peretz et al., 1997; Peretz et al., 1994). Most pertinently, CN and GL demonstrated marked dissociations between processing of pitch and rhythm (intact) and processing of tonality (impaired). KB and IR, in contrast, were impaired on both pitch and rhythm tasks, yet KB was able to recognise and identify familiar melodies, while IR was above chance when categorising familiar and unfamiliar melodies. In the present study, controls matched for age and music experience were able to complete all of the tests successfully, though they did not score as high on these tests as younger controls from previous experiments. Nevertheless their scores were consistently higher than KB's, a result indicating that ageing alone cannot account for the losses displayed by KB.

In contrast to his verbal skills, which appeared to be largely intact, KB experienced problems with pitch processing and with the perception and reproduction of rhythms. It is likely that these profound problems in basic music processing contributed to KB's failure to show incidental learning of novel melodies. In all, the testing suggested a limited sparing in only two areas: first, in the ability to distinguish two notes on the basis of pitch height, and second, in the ability to discriminate two simple rhythmic patterns presented without melody.

What is intriguing is the observation that, despite his profound loss of pitch processing abilities and sense of tonality, and his obvious rhythmic perception and reproduction problems, KB demonstrated a preserved ability to recognise and name


[1] By American convention, the numeral following the note name refers to octave location. C4 is middle C (262 Hz), C3 is the octave below C4, C5 the octave above, and so forth.


7 of the 10 well-known melodies included in the Familiarity Decision test. Moreover, the results of the Incremental Melodies test suggested that in KB, as in controls, recognition of familiar song melodies was achieved almost immediately, after the first few notes had been presented. The possibility that this surprising preservation of melodic recognition ability in KB might apply only to song melodies and not to instrumental melodies motivated the next set of tests.

EXPERIMENT 2: MELODY RECOGNITION AND IDENTIFICATION

Experiment 2 reports the results of three tests of melody recognition and identification. Recognition is defined as the judgement that an item or event (in this case a melody) has been previously encountered (Mandler, 1980). Identification, in addition, requires correctly naming the title or lyrics of the melody. The Famous Melodies test sampled song and instrumental melodies from popular, folk, and classical genres. The Television Themes test sampled song and instrumental melodies from theme music drawn from television programmes. The Mismatched Songs test explored recognition and identification of familiar song melodies when the melodies were accompanied by mismatched lyrics.

    Famous Melodies test

A number of studies support the widely held notion that adults of all ages have a remarkable capacity to remember familiar songs (Bartlett & Snelus, 1980; Halpern, 1984; Hébert & Peretz, 1997), famous and obscure classical instrumental themes (Java, Kaminska, & Gardiner, 1995), and television themes (Maylor, 1991). Two previous studies, both with university students, have contrasted recognition memory for song and instrumental melodies. Both showed slightly higher recognition rates for song over instrumental melodies (Gardiner, Kaminska, Java, Clarke, & Mayer, 1990; Peretz, Babaï, Lussier, Hébert, & Gagnon, 1995). The first goal of the Famous Melodies test was to document in KB selective sparing of song melody recognition abilities. The second goal was to compare recognition for familiar song versus instrumental melodies within the same sample of older adults.

Materials and methods

The Famous Melodies test consisted of a set of 39 instrumental and 68 song melodies selected to be familiar to a listener of KB's age and cultural background. Song melodies were defined as those originally written with lyrics, most commonly heard as songs on radio or recordings, and most commonly sung to lyrics by amateur singers such as KB and the controls. Instrumental melodies were not associated with lyrics and were defined as melodies originally written as instrumental melodies, most commonly heard as instrumental melodies on radio or recordings, and most commonly played as instrumental melodies by amateur musicians. The melodies were drawn from several sources, including World-famous piano pieces (1943), Jumbo: The children's book (Johns, 1980), The book of world-famous music: Classical, popular, and folk (Fuld, 1995), and the Reader's Digest treasury of popular songs that will live forever (Simon, 1982). Also included in the test were eight novel melodies, composed by the first author in the style of the familiar melodies.

Each melody was limited to the opening phrase and presented as a monophonic melody line, with rhythm, original key, and original tempo left intact. The overall range for song melodies was G3 to A4, and for instrumental melodies was F#3 to A4. The most frequent notes for both types of melody were C4 and D4. Melodies ranged from 7-35 s in duration. Notes were edited to sound equally loud.

The set of 115 song, instrumental, and novel melodies was presented in a single random order to all participants, in the context of a larger set of melodies used for another experiment. After hearing each melody, KB and controls were asked to indicate whether they recognised it or not. If the melody was recognised, they were then asked to



identify the melody by stating lyrics, title, or any other identifying information. When a melody was recognised, controls were asked to rate the degree of familiarity they had with the melody on a scale of 1-10. A "1" indicated very little familiarity, while a "10" indicated that the melody was highly familiar to the participant. KB was not asked to rate degree of familiarity because pilot-testing revealed he was unable to discriminate levels of familiarity. A 3 s pause was inserted between melodies on the tape, but participants were instructed to pause the tape for a longer period between trials if needed. All responses were recorded by the first author.

Results and discussion

Results are shown in Figure 2. For controls, melody recognition was very high overall for both sets of melodies. They recognised slightly but significantly more song melodies than instrumental melodies, t(19) = 5.22, p < .001, and identified more song melodies than instrumental melodies, t(19) = 22.20, p < .001. Both sets of melodies were judged to be highly familiar, with the average familiarity rating for song melodies higher than that of instrumental melodies: 9.4 vs. 8.7 on the 10-point scale, t(19) = 4.16, p < .001. False recognition of novel melodies occurred on 25% of trials, and false identification occurred only once. Novel melodies were assigned an average familiarity rating of 4.8 on the 10-point scale.

Similar to controls, KB's recognition and identification of song melodies was very high (with 88% of song melodies being recognised, and 75% being correctly identified). In marked contrast to controls, however, he was able to recognise only 7 (18%) of the 39 instrumental melodies as familiar. Four of these could not subsequently be identified. Interestingly, KB reported having previously learned comic lyrics to the remaining three instrumental melodies, and was able to supply these lyrics. KB did not recognise any of the novel melodies.

Results from this test lend support to our earlier impression that there exists in KB a dissociation between the ability to recognise melodies learned with lyrics (i.e., songs) and those learned as instrumental pieces.


Figure 2. Mean percentage song (N = 68), instrumental (N = 39), and novel melodies (N = 8) recognised and identified (+SE) by controls (N = 20) and KB on the Famous Melodies test. * Indicates KB obtained a score of 0%.


    Television Themes test

The goal of the Television Themes test was to replicate the results of the Famous Melodies test with a different source of musical materials of a similar musical style.

Materials and methods

The test sampled theme music from serialised television programmes, drawn from The TV Fake Book (Leonard, 1995). Themes were shortened and edited to a monophonic melody line, in a manner identical to that described above for the Famous Melodies test, and were recorded on tape.

A pilot set of themes was tested with five controls. The 81 themes included in the pilot set were originally written as either songs (N = 36) or instrumentals (N = 45) and were typically heard only during viewing of the television programmes. The five controls were asked to indicate whether they were familiar with or had previously heard each theme. Themes not recognised by at least three of the five controls were discarded from the final experimental set. The final set included 24 vocal themes and 21 instrumental themes. The overall range for both vocal themes and instrumental themes was F3 to G4, with C4 the most frequent note.

The task for KB and the remaining 15 controls was to indicate whether each melody was recognised or not, and, if recognised, to identify the melody by stating the title, lyrics, or any identifying information that came to mind. Participants were not told in advance that the set of melodies consisted of themes from television programmes.

Results and discussion

As shown in Figure 3, performance scores were generally lower for the Television Themes test than for the Famous Melodies test. In a previous report by Maylor (1991), elderly participants recognised between 68 and 92% of a pool of television themes sometimes or regularly watched (not categorised as song and instrumental themes). Maylor's participants were able to identify as many as 50.4% of melodies from recent television programmes that were regularly watched. The lower scores obtained from the present controls might be attributable, in part, to the fact that controls reported having watched only 63% of the television shows on average (range 35-98%). However, KB and his spouse reported that he had seen all the programmes represented in the test.

Nevertheless, an important result of the Famous Melodies test was replicated: KB showed marked impairment in his ability to recognise instrumental themes (none of the controls recognised as few instrumental themes as KB) while his recognition of song themes, in contrast, was shown to be relatively preserved. As in the Famous Melodies test, controls in the present study recognised significantly more song themes than instrumental themes (73% vs. 55%), t(19) = 6.74, p < .001. Correct identifications were also more common for song themes than for instrumental themes (18% vs. 9%), t(19) = 4.93, p < .001. KB's ability to recognise song themes was markedly superior to his ability to recognise instrumental themes (46% vs. 5%). In addition, although he correctly identified 21% of the song themes, he was unable to identify the one instrumental theme that he had recognised (the theme from Bonanza).


Figure 3. Mean percentage song (N = 24) and instrumental (N = 21) themes recognised and identified (+SE) by controls (N = 20) and KB on the Television Themes test. * Indicates KB obtained a score of 0%.


    Mismatched Songs test

The results of the Famous Melodies test and the Television Themes test together lend support to the notion that songs are processed and/or stored differently from instrumental melodies. One suggestion that has been made is that the words and melodies of songs are processed and stored in a partially or fully integrated form (Crowder et al., 1990; Samson & Zatorre, 1991; Serafine et al., 1984). If lyrics and melodies are stored in an integrated form, then one might predict that a task involving recognition of melody in the presence of mismatched lyrics should prove difficult for both KB and controls. This prediction motivated the next test. The main question examined in the Mismatched Songs test was whether KB and controls could recognise song melodies when they were sung, not to their original lyrics, but to either a set of novel lyrics or to the lyrics of other, highly familiar songs.

Materials and methods

For this test, a set of songs was developed consisting of: (a) novel lyrics set to familiar melodies (N = 5); (b) lyrics of familiar songs set to novel melodies (N = 5); or (c) familiar lyrics set to familiar-but-mismatched melodies (e.g., "My Bonnie" sung to the tune of "Swing Low") (N = 6). Novel melodies and lyrics were composed by the first author with the intent to preserve the style of the familiar songs. The overall range of the songs was G3 to C5, the author's comfortable singing range.

Twenty-one different familiar songs were used. Eleven contributed melody; 10 contributed lyrics; and 1 song contributed both. Two of the songs that contributed lyrics had been found to be familiar to KB and controls in pilot testing. The remaining 19 songs were drawn from the set of 68 songs in the Famous Melodies test according to the following criteria: First, they had all been recognised by KB in that test; second, they were all amenable to mismatching of melody and lyrics; and third, they had all been recognised and rated as highly familiar by the majority of controls. (Fifteen of the melodies had been recognised by all 20 controls, and the remaining four were recognised by 19 controls. The average familiarity rating was 9.71, with 10 melodies receiving the maximum familiarity rating.)

In the interest of time, the familiar songs used in this test were not presented to participants in their original versions. It was evident that recognising the melodies presented with their original lyrics would be a trivial task for all participants, including KB, who had previously demonstrated recognition of the song melodies and had correctly provided the lyrics.

Only the opening melodic and lyric phrases were used. All phrases were sung by the first author and recorded on tape in a single random order. Participants were instructed to listen carefully to both melody and lyrics, to state whether the melody was recognised or not, to state whether the lyrics were recognised or not, and to identify melody and lyrics if recognised. Each trial was played once, and presentation was self-paced. Responses were scored for the correct detection of the mismatch and the correct specification of the type of mismatch (e.g., familiar lyrics set to an unfamiliar tune).

Following the test, controls were shown a list of the titles for songs used and asked to verify their familiarity with each one. In addition, they were asked to describe the strategies employed to recognise the melody and lyrics.

Results and discussion

Table 5 provides percentage detection of mismatch and percentage recognition for lyrics and melodies, presented separately, for KB and controls. In all three conditions, controls were able to detect the presence of a mismatch on virtually every trial; i.e., there was no effect of Type of Mismatch on detection performance, F(2, 57) = 0.62, MSE = 33.37, p = .54, yielding an overall detection accuracy of 98.1% (a value not significantly different from 100%, χ²(59) = 21.58). Controls accurately recognised familiar melodies and familiar lyrics, and rejected novel melodies and novel lyrics. Thus, and most important, controls were able to specify the precise nature of the mismatch in all three conditions.

For controls, recognition of familiar melodies was not impaired by the presence of mismatched



lyrics. Identification, however, was affected. The identification rate for the 11 familiar melodies in the Famous Melodies test (when no lyrics were presented) was 86.4%. When familiar melodies in the present test were sung with novel lyrics, identification dropped to 71%, paired-sample t(19) = 2.42, p = .025; when they were sung with the lyrics from other, familiar songs, identification dropped to 74%, paired-sample t = 2.13, p < .05.

Each control reported trying different strategies over the course of the 16 trials. Two strategies were reported by all. The first strategy was to attend to both lyrics and music at the same time as the song was unfolding. The second strategy was to pay attention to either words or music, make a decision as to familiarity, and then switch to the other. The third strategy, reported by five controls, was to make a deliberate decision on the familiarity of the lyrics first, and then, upon completion of the song, replay the melody in their minds before making a decision regarding its familiarity and identity.

KB, like the controls, was able to distinguish between novel and familiar lyrics and could also tell that familiar lyrics were not accompanied by their original melodies. On these two judgements he obtained 100% accuracy. However, unlike the controls, KB displayed a total loss of recognition of the familiar melodies that he had previously recognised in the Famous Melodies test. Thus he was unable to determine whether familiar lyrics were accompanied by a novel or familiar melody, and was unable to recognise familiar melodies sung to novel lyrics. The presence of competing lyrics interfered with KB's previously demonstrated ability to recognise song melodies.

    Summary

The results from this series of tests indicate that, relative to age-matched controls, KB demonstrates a dissociation between the preserved ability to recognise and identify song melodies and the inability to recognise and identify instrumental melodies. In the first two tests, song recognition was shown to be relatively well preserved, while instrumental melody recognition was severely impaired. The third test, however, revealed some problems with KB's song recognition. Relative to controls, he experienced difficulty recognising familiar song melodies in the presence of competing lyrics. Because he was able to detect instances in which familiar lyrics were accompanied by the wrong melody, we may infer that some melodic information was being processed. It is also possible that his ability to detect such mismatches was aided by the presence of temporal differences and stress differences imposed on lyric syllables when sung to different melodies. However, his inability to specify whether the melody heard on those trials was novel or familiar suggests that his melody recognition was seriously impaired under these conditions.

POST-HOC ANALYSIS OF FAMILIAR MELODIES

One possible explanation for KB's differential performance with song and instrumental melodies is that the melodies differed in musical characteristics (e.g., in the number and type of musical intervals, which might, in the case of songs, be limited by the


Table 5. Results of the Mismatched Songs test(a)

Type of mismatch | Mismatch detected (%), KB; Controls | Lyrics recognised (%), KB; Controls | Melody recognised (%), KB; Controls
Novel lyrics with familiar melody (N = 5) | 0; 99.0 (80-100) | 0; 3.0 (0-20) | 0; 99.0 (80-100)
Familiar lyrics with novel melody (N = 5) | 100; 97.0 (80-100) | 100; 100 | 0; 12.0 (0-60)
Familiar lyrics with familiar-but-mismatched melody (N = 6) | 100; 98.3 (83-100) | 100; 99.2 (83-100) | 0; 97.5 (83-100)

(a) Control participant scores are means. Ranges are presented in parentheses.


range of the human voice). To examine this possibility, a number of post-hoc comparisons were carried out on the materials presented in the Famous Melodies test.

Measures of musical characteristics

Pitch measures for each melody included the number of note onsets, the rate of presentation of notes, the range in semitones between the highest and lowest notes, the average size in semitones of the interval between successive notes, and the ratio of number of direction changes to number of note onsets. Direction changes were defined as pitch reversals, i.e., the change from an upward frequency movement to a downward frequency movement and vice-versa. The total number of direction changes was tallied and divided by the total number of note onsets. The resultant value may be interpreted as a measure of contour complexity which controls for differences in total numbers of note onsets. Values closer to 1.0 and 0.0 indicate more or less frequent pitch reversals, respectively.
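The contour-complexity ratio described above is straightforward to compute from a note list. The sketch below is a hypothetical Python illustration (not the authors' code), taking a sequence of MIDI note numbers for successive note onsets; repeated pitches are assumed to carry the previous direction forward, so only genuine up/down reversals are counted:

```python
def contour_complexity(pitches):
    """Ratio of pitch-direction changes (reversals) to note onsets.

    `pitches` is a sequence of MIDI note numbers, one per onset.
    Returns a value near 0.0 for melodies that rarely change
    direction and near 1.0 for melodies that zig-zag constantly.
    """
    reversals = 0
    prev_dir = 0  # +1 rising, -1 falling, 0 not yet determined
    for a, b in zip(pitches, pitches[1:]):
        step = b - a
        if step == 0:
            continue  # repeated pitch: no direction change
        direction = 1 if step > 0 else -1
        if prev_dir != 0 and direction != prev_dir:
            reversals += 1
        prev_dir = direction
    return reversals / len(pitches)

# A zig-zag figure reverses often; a scale passage never does.
print(contour_complexity([60, 64, 62, 65, 63]))  # -> 0.6
print(contour_complexity([60, 62, 64, 65, 67]))  # -> 0.0 (ascending scale)
```

Dividing by the number of onsets, as in the text, keeps long and short melodies comparable.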

Rhythm and meter measures for each melody included the total number of different note durations, the percentage of notes accounted for by the two most frequent durations, and the meter, duple or triple.

The tonal strength of the pitch distributions for each melody was determined by a key-finding algorithm (Krumhansl & Schmuckler, cited in Krumhansl, 1990). Correlations were obtained between the distribution of pitches in each sequence and the standardised tonal hierarchy for each of the 24 major and minor keys. The standardised tonal hierarchies for C major and C minor are reported in Krumhansl and Kessler (1982), and the set of probe-tone values are given in Krumhansl (1990, p. 30). Values for each of the other keys were obtained by orienting the set to each of the different tonic notes. For each sequence the highest correlation so obtained was selected to represent the tonal strength of the distribution.
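The key-finding procedure can be sketched in a few lines. The following is an illustrative Python implementation under the assumptions just described (it is not the original algorithm's code): the input is assumed to be a 12-element pitch-class distribution (e.g., total sounded duration of each chromatic pitch class), and the C major/minor probe-tone profiles are those published in Krumhansl (1990):

```python
# Standardised tonal hierarchies for C major and C minor
# (Krumhansl & Kessler, 1982; Krumhansl, 1990, p. 30).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
         2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
         2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def tonal_strength(distribution):
    """Correlate a 12-element pitch-class distribution with the
    profiles of all 24 major and minor keys; return the highest
    correlation (the tonal strength) with its tonic and mode."""
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            # Rotate the C profile so that `tonic` becomes degree 1.
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = pearson(distribution, rotated)
            if best is None or r > best[0]:
                best = (r, tonic, mode)
    return best

# Hypothetical duration weights emphasising the C major scale tones.
dist = [5, 0, 2, 0, 3, 2, 0, 4, 0, 2, 0, 1]
r, tonic, mode = tonal_strength(dist)
```

As in the text, the maximum of the 24 correlations serves as the melody's tonal strength.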

Analyses of melodic expectancy characteristics determined the extent to which melodies conformed to certain bottom-up principles outlined in Narmour's (1990) Implication-Realisation Theory of melodic expectancy. The principles, as outlined and coded by Krumhansl (1995), were Registral Direction, Intervallic Difference, Proximity, Registral Return, and Closure. An algorithm based on the Krumhansl coding (Russo & Cuddy, 1996) was used to obtain fulfillment scores on each principle for each melody. The algorithm computed, for each successive interval beyond the first, whether or not the interval fulfilled the expectancy created by the previous interval according to the specified principle. The fulfillment score for each principle was the ratio of the number of fulfilled intervals to the total number of successive intervals, minus the first. Scores of 0 indicated no conformance to the principle and scores of 1 indicated complete conformance.
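The fulfillment-ratio computation can be illustrated for one principle. The sketch below is hypothetical code (not the Russo & Cuddy, 1996, algorithm) using a deliberately simplified version of the Proximity principle, under the assumption that a realised interval of five semitones or fewer fulfils the expectancy created by the preceding (implicative) interval; the actual Krumhansl (1995) coding is more fine-grained and differs across the five principles:

```python
def proximity_fulfillment(pitches, threshold=5):
    """Fraction of realised intervals fulfilling a simplified
    Proximity expectancy (interval of `threshold` semitones or less).

    `pitches` is a sequence of MIDI note numbers. With n notes there
    are n-1 successive intervals; every interval beyond the first is
    a 'realised' interval following an implicative one, so the
    denominator is (n-1) - 1, as in the text. Returns a score in
    [0, 1]: 0 = no conformance, 1 = complete conformance.
    """
    intervals = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    realised = intervals[1:]  # every interval beyond the first
    if not realised:
        raise ValueError("need at least three notes")
    fulfilled = sum(1 for ivl in realised if ivl <= threshold)
    return fulfilled / len(realised)

# Stepwise motion conforms fully; a melody built of leaps does not.
print(proximity_fulfillment([60, 62, 64, 65, 67]))  # -> 1.0
print(proximity_fulfillment([60, 62, 74, 60, 72]))  # -> 0.0
```

The same ratio scheme, with a different per-interval fulfillment rule, would yield the scores for the other four principles.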

    Results

Results of the analyses of musical characteristics are shown in Table 6. Two-sample t-tests were applied


Table 6. Musical characteristics of Famous Melodies test: Pitch, rhythm and meter, tonal strength, and melodic expectancy

Characteristic | Song melodies, Mean (SD) | Instrumental melodies, Mean (SD)

Pitch
Number of note onsets* | 41.2 (16.23) | 50.4 (22.8)
Presentation rate (no. of notes/s)*** | 2.3 (0.55) | 3.2 (1.65)
Range (semitones)*** | 12.5 (2.76) | 16.5 (5.01)
Interval size (semitones)** | 2.4 (0.52) | 2.8 (0.99)
Ratio of number of direction changes to number of notes | 0.43 (0.14) | 0.44 (0.11)

Rhythm and meter
Number of different note durations* | 4.9 (1.39) | 4.3 (1.70)
Percentage of notes accounted for by two most frequent durations* | 79.6 (11.2) | 84.3 (15.3)
Melodies in duple meter (total) | 44 | 20
Melodies in triple meter (total) | 24 | 19

Tonal strength | 0.77 (0.14) | 0.72 (0.15)

Melodic expectancy
Registral direction | 0.41 (0.13) | 0.43 (0.12)
Intervallic difference | 0.77 (0.11) | 0.76 (0.11)
Registral return | 0.44 (0.13) | 0.42 (0.16)
Proximity | 0.62 (0.13) | 0.55 (0.20)
Closure | 0.58 (0.12) | 0.57 (0.11)

*p < .05; **p < .01; ***p < .001.


to all comparisons between song and instrumental melodies, with the exception of the metric measure. This measure involved a frequency count, so a chi-square test was applied.
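For a 2 × 2 frequency table such as the duple/triple meter counts in Table 6, the chi-square statistic can be computed with the standard shortcut formula. A minimal sketch (no continuity correction; the exact variant the authors used is not stated):

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square for a 2x2 table [[a, b], [c, d]], using the
    # shortcut formula: chi2 = N*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Meter counts from Table 6 (song duple/triple: 44/24; instrumental: 20/19):
chi2 = chi_square_2x2(44, 24, 20, 19)
# chi2 comes out near 1.86, below the .05 critical value of 3.84 (df = 1),
# consistent with the reported lack of a song/instrumental meter difference.
```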

The results indicate that song and instrumental melodies did not differ on the musical characteristics thought to reflect pitch structure and to influence melodic organization: tonal strength and melodic expectancy. Nor did they differ in contour complexity or in the distribution of duple and triple meter. Instrumental melodies, however, were found to differ from song melodies on six other musical characteristics. Instrumental melodies were slightly longer, faster in presentation rate, larger in pitch range and average interval size, and less complex on two rhythmic measures. The question therefore arises whether these characteristics might be responsible for KB's poorer recognition of instrumental versus song melodies.

To address this question we categorised the melodies into subsets, two for each of the six characteristics. For one subset, the range of measures was below the median for that characteristic; for the other, the range was above the median. Extreme values were dropped so that for each subset the mean for the song melodies did not differ significantly from the mean for the instrumental melodies. Means and standard deviations for the musical characteristics for each subset are given in the first two columns of Table 7.
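The subset construction can be sketched as a median split, under the assumption that each melody is represented as a dict of its measured characteristics (the field name here is illustrative, not from the original materials). The sketch omits the subsequent trimming of extreme values that equated the song and instrumental means.

```python
import statistics

def median_split(melodies, characteristic):
    # Split melodies into below-median and at-or-above-median subsets
    # on one musical characteristic (e.g. number of note onsets).
    values = [m[characteristic] for m in melodies]
    cutoff = statistics.median(values)
    below = [m for m in melodies if m[characteristic] < cutoff]
    above = [m for m in melodies if m[characteristic] >= cutoff]
    return below, above

# Example: four melodies split on number of note onsets (median = 42.5).
melodies = [{"note_onsets": n} for n in (20, 35, 50, 70)]
low, high = median_split(melodies, "note_onsets")
# low holds the 20- and 35-note melodies; high holds 50 and 70.
```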

Next we calculated for each subset KB's recognition score for the song melodies and the instrumental melodies. These scores are given in the third and fourth columns of Table 7, which report the ratio of number of melodies recognised to the number of melodies retained in the subset. It can be seen that for every subset the recognition score is higher for song than for instrumental melodies. All differences were significant (by tests of proportions) beyond the .0002 level.
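The reported test of proportions can be illustrated with a pooled two-sample z-test; a minimal sketch (the exact test variant the authors used is not specified):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    # Pooled two-sample z-test for the difference between two proportions.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# First subset in Table 7: songs 33/38 recognised vs. instrumental 4/15.
z = two_proportion_z(33, 38, 4, 15)
# z comes out near 4.3, beyond the two-tailed .0002 criterion (z = 3.72).
```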

Thus for the six musical characteristics, the difference in recognition between song and instrumental melodies held for subsets of melodies selected to be statistically equivalent. There is therefore no support for an account of KB's difficulty with instrumental melodies based on these musical characteristics.

This conclusion can be bolstered by logical arguments as well. Given that KB and the controls were typically able to achieve song melody recognition within the first few notes (Experiment 1, Incremental Melodies), differences in the average number of notes provided by a melody may not be relevant to the song/instrumental melody difference. Moreover, the lesser rhythmic complexity of the instrumental melodies might have simplified the recognition task for KB relative to song melodies, but such was not the case. In summary, the post-hoc analyses did not reveal a consistent set of differences in musical characteristics that could explain the sparing of song recognition for KB with marked and selective failure of instrumental melody recognition.


Table 7. Musical characteristics and KB recognition scores for selected subsets from the Famous Melodies test

                                      Mean (SD)                      Recognition Score
                                      Song           Instrumental    Song       Instrumental
                                      Melodies       Melodies        Melodies   Melodies
Number of note onsets
  14–39                               30.5 (6.4)     30.1 (4.8)      33/38      4/15
  40–98                               53.7 (17.9)    59.2 (12.6)     27/30      3/22
Presentation rate (# of notes/s)
  0.98–2.49                           1.87 (0.03)    2.07 (0.32)     38/42      4/11
  2.50–4.73                           3.12 (0.56)    3.24 (0.60)     22/26      2/24
Range (semitones)
  4–13                                10.8 (1.79)    10.5 (2.83)     36/42      2/9
  14–16                               14.5 (0.80)    15.1 (0.99)     20/22      3/10
Interval size (semitones)
  0.96–2.4                            2.02 (0.29)    1.92 (0.46)     35/39      4/15
  2.5–3.7                             2.87 (0.31)    3.03 (0.37)     25/29      3/17
Number of different note durations
  3–4                                 3.7 (0.48)     3.5 (0.51)      21/25      4/15
  5–8                                 5.7 (1.19)     6.0 (0.71)      39/43      3/17
Percentage of notes accounted for by two most frequent durations
  50–82                               72.2 (0.09)    67.3 (0.09)     34/38      2/15
  83–97                               88.9 (0.03)    91.1 (0.04)     26/30      2/13

For the selected melody subsets in this table, means for musical characteristics do not differ significantly between song and instrumental melodies. Recognition scores for song melodies are consistently and significantly higher than for instrumental melodies, p < .0002.


EXPERIMENT 3: LEARNING OF SONG AND INSTRUMENTAL MELODIES

Three findings reported above suggest that, during their initial learning, songs (with lyrics) and instrumental melodies are processed and stored in different ways: (1) the remarkable dissociation between song and instrumental melody recognition observed in KB; (2) the consistently higher rates of recognition and correct identification for song as opposed to instrumental melodies reported for control participants in Experiment 2; and (3) the fact that the only instrumental melodies correctly identified by KB in the Famous Melodies test were those to which he had, at one point, learned comic lyrics.

Although KB's capacity to learn new instrumental melodies had been tested incidentally in Experiment 1 and found to be severely impaired (see results of the Incidental Memory test), his capacity for new song learning was unclear. In the experiment described below, we examined the possibility that the learning of new melodies might be possible for KB if those melodies were presented to him in the context of songs with lyrics. It was hypothesised that KB would be able to learn both words and melody for novel songs over time, but would not be able to learn novel melodies sung to "la".

    Materials and methods

Initially, KB's ability to learn novel melodies was examined using a paradigm adapted from Samson and Zatorre (1992). In this paradigm, a set of stimuli is presented, followed by pairs of items. Each pair contains one stimulus from the set, and one novel stimulus or foil. The task is to state which member of the pair was presented in the original set. The original set is then presented again, and once more followed by pairs of items, this time with different foils. The procedure is repeated until a designated recognition criterion is reached. Short-term learning is measured by the number of repetitions needed to reach criterion. For our study, comparisons were made between participants' ability to learn two different sets of novel melodies: one set sung with lyrics, and the other sung to "la". Pilot testing indicated that, even after several modifications to simplify the procedure, KB was unable to learn either set of test materials.
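The study-test cycle of this paradigm can be sketched as follows. The function names and the `judge` callback are hypothetical; the sketch captures only the repeat-until-criterion logic, not the auditory presentation itself.

```python
import random

def repetitions_to_criterion(study_set, foil_pool, judge,
                             criterion=1.0, max_reps=20):
    # One repetition = re-present the study set, then test each old item
    # against a fresh foil; stop when the proportion of correct
    # two-alternative choices reaches the criterion.
    pool = list(foil_pool)
    for rep in range(1, max_reps + 1):
        # Draw a new foil for every old item (different foils each cycle).
        foils = [pool.pop(random.randrange(len(pool)))
                 for _ in range(len(study_set))]
        correct = sum(bool(judge(old, foil))
                      for old, foil in zip(study_set, foils))
        if correct / len(study_set) >= criterion:
            return rep  # the short-term learning score
    return None  # criterion never reached (as with KB in piloting)

# A listener who always picks the old item reaches criterion on rep 1.
always_right = lambda old, foil: True
assert repetitions_to_criterion(["m1", "m2"],
                                ["f1", "f2", "f3", "f4"],
                                always_right) == 1
```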

At this point, a new procedure was introduced, one that allowed extensive exposure to the learning materials over many months. For this test of KB's ability to learn melodies, a set of 12 novel melodies (of similar length and style) were composed by the first author and recorded on tape, in random order, as instrumental pieces with a piano timbre within the range G3 to C4. The melodies were written in the style of North American folk songs, and as such were highly tonal, nonmodulating diatonic melodies outlining simple harmonic progressions, with regular phrase structures and recurring rhythmic and melodic motifs. Next, a second tape of these materials was created in which four of the melodies were sung to novel lyrics, four were sung to "la", and four were left in their original, instrumental versions. The sung renditions were performed and recorded while the vocalist was listening to the piano versions of each melody through headphones. In this way differences in intonation and timing between the sung and instrumental versions were minimised. This latter tape (which included the sung renditions) was played for KB on 26 occasions, approximately once a week over a 6-month period. KB's instruction on each occasion was simply to listen to the tape.

At the end of the 6-month period, KB's ability to recognise the test materials in the exposure set was assessed. The original instrumental versions of the 12 test melodies were combined with a set of 36 additional control melodies. The 36 control melodies consisted of 12 well-known song melodies, 12 familiar melodies that had been melodically altered (included as pilot materials for later experiments), and 12 novel instrumental melodies (foils). The complete set of 48 melodies (each played in the same piano timbre used for the four instrumental melodies in the learning phase of the experiment) was presented to KB in a single, random order. His task was to indicate after each trial whether he recognised the melody, and if so to provide a title, lyrics, or any other associations that came to mind.



    Results and discussion

KB recognised all 12 well-known song melodies played in their original versions, and 4 of 12 familiar song melodies that had been melodically altered. He also reported that the melody line was familiar for two of the four songs from the exposure set which had been presented to him in the context of songs with lyrics over the preceding 6-month period. None of the remaining 10 test items, and none of the 12 novel instrumental foils, were recognised. The data are summarised in Figure 4.

KB's inability to perform the short-term learning task most likely resulted from his documented pitch and rhythmic processing deficits. Despite these deficits, KB was able to demonstrate a limited ability to learn new melodies, with repeated exposure over a lengthy period, provided that these melodies were presented in the context of songs with lyrics.

    GENERAL DISCUSSION

We have reported a dissociation between song and instrumental melody recognition for KB, an amateur musician who suffered a right-hemisphere stroke with fronto-parietal involvement. Results from a wide array of tests indicated preserved general intelligence and language skills, sparing of recognition of environmental sounds and musical instruments, and limited sparing of simple perceptual judgements of pitch height and rhythmic pattern. Overall, however, KB's musical deficits were sufficiently severe to warrant a diagnosis of amusia. KB's difficulties strongly implicate musical memory, both for discrimination and recognition of novel melodies and for recognition/identification of familiar instrumental melodies. Sparing of song melody recognition and identification in the presence of severe musical loss is therefore the most striking finding of this report.

Two potential accounts of differences between song and instrumental melodies are inadequate to explain the observed dissociation: one based on musical features, the other on relative familiarity. Post hoc tests of musical features revealed certain differences between song and instrumental melodies but, when the features were statistically controlled, scores remained superior for song melody as opposed to instrumental melody recognition.

Three findings contraindicate relative familiarity. First, although controls' familiarity ratings for the Famous Melodies test favoured song over instrumental melodies (as did recognition and identification rates in both the Famous Melodies and the Television Themes tests), the difference was not large compared to the difference shown by KB. Given the similarity between the musical backgrounds of KB and the controls, it seems likely that he, too, would have been very familiar with both types of melodies. Second, familiarity alone cannot explain KB's inability to recognise familiar songs presented with competing lyrics (Mismatched Songs test). Third, only melodies presented with lyrics were learned by KB, despite his equivalent exposure to, and thus familiarity with, melodies sung to "la" and instrumental melodies (Experiment 3).

Our arguments against accounts based on differences in musical characteristics and relative familiarity of song and instrumental melodies would be stronger if another patient was found demonstrating superior recognition and identification of familiar instrumental, as opposed to song, melodies (i.e., if a double dissociation was documented). In the meantime, however, it is instructive to consider


Figure 4. Percentage of Learning test melodies recognised by KB. An asterisk indicates KB obtained a score of 0%.


alternate explanations for the dissociation observed in KB. In the following discussion we propose that during song acquisition the temporal contiguity of lyrics and melody results in an elaborated representation with special properties. We draw upon the association-by-contiguity hypothesis (Crowder et al., 1990), a class of associative models put forward by Stevens, McAuley, and Humphreys (1998), and Peretz's (1993) model of melody recognition.

An associationist position (Crowder et al., 1990; Stevens et al., 1998) is one of several put forth to account for the integration effect (e.g., Samson & Zatorre, 1992; Serafine et al., 1984). As noted earlier, the paradigm for the integration effect engages novel songs and the results may not directly generalise to well-known songs. Nevertheless, the associationist position may contribute to an explanation of KB's dissociation. The association-by-contiguity hypothesis suggests that temporal contiguity suffices for association; thus, melody and text are "connected in memory, hence they act as recall cues for each other, yet each is stored with its independent integrity intact" (Crowder et al., 1990, p. 474). The class of associative model termed conjunctive representation by Stevens et al. (1998) is compatible; they suggest that melody and text are represented both by item information for the separate components and relational information for their pairing. The contiguous presentation of melody and lyrics in song may result in a particularly rich store of relational information. In the case of instrumental music the relational information is less salient because such contexts as the title of the piece are not temporally contiguous with the melody.

Next, in line with Peretz (1993), we propose how such notions might be implemented in the case of KB. When a normal listener hears a familiar song, two distinct but interconnected systems are engaged. One, the melody analysis system, leads to activation of a stored template of the melody in a dedicated tune input lexicon. The other, the speech analysis system, leads to activation of the stored templates of individual words in a dedicated speech input lexicon. Activation of one or both lexicons generates recognition in the listener, a sense of familiarity. Moreover, repeated coincidental activation of the two lexicons during song learning allows for the establishment of direct links between the two representations. Activation in one system influences the level of activity in the other system, producing, through a process of spreading activation, recognition and identification of song.
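This scheme can be caricatured with a toy calculation. The numbers and the single link weight below are purely illustrative assumptions, not parameters from Peretz (1993); the point is only that cross-lexicon spreading lets weak melody activation be rescued by strong lyric activation.

```python
def recognises(melody_act, lyric_act, link=0.5, threshold=0.6):
    # Each lexicon's total activity = its own input plus activation
    # spreading across the learned melody-lyric link; recognition occurs
    # if either lexicon's total reaches threshold.
    melody_total = melody_act + link * lyric_act
    lyric_total = lyric_act + link * melody_act
    return max(melody_total, lyric_total) >= threshold

# A weak, degraded melody template alone (instrumental melody): no recognition.
print(recognises(0.3, 0.0))   # -> False
# The same weak template plus strong lyric activation (familiar song):
print(recognises(0.3, 0.8))   # -> True
```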

We will argue that this proposal has considerable explanatory power for many of our present findings. First, it could account for the observation that controls found song melodies easier to recognise and identify than instrumental pieces, overall. Note that, according to this scheme, during the processing of a familiar instrumental melody there would be no activation of the speech analysis system nor of the speech input lexicon. Thus, the network of information relating to the instrumental piece would be less elaborate than that for a familiar song, given that it would not include lyrics or concepts associated with those lyrics. With less elaboration, it may be inferred that recognition is less likely.

Second, the present proposal could account for the severe disruption to KB's basic music abilities (Experiment 1), the remarkable sparing of his ability to recognise the melody lines of familiar songs (Experiment 2), and his limited residual capacity for learning new songs (Experiment 3). The loss of KB's basic music abilities reflects extensive damage to his melody analysis system. Exposure to a familiar song melody, however, may result in just enough activation to generate a simple, degraded, melody template. Recall that in Experiment 1 KB showed limited sparing in two domains: first, in his ability to distinguish two notes in the mid- to high-frequency ranges on the basis of pitch height; and, second, in his ability to discriminate simple rhythmic patterns. These residual capacities might provide the basis for the creation of this simple melody template. This template, while not containing a rich, detailed and highly accurate mapping of musical events, might nonetheless produce a level of activation sufficient to influence the level of activity in KB's declarative memory and in his speech input lexicon, producing recognition. KB's demonstrated, though severely limited, ability to learn new song melodies could be explained through repeated activation of a similar pathway of associative links.



Finally, the proposal can address the difficulty experienced by KB during the Mismatched Songs test (Experiment 2). The fact that he did not suffer from a language impairment suggests that his speech analysis system, his speech input lexicon, and the representations of word meanings in his declarative memory were intact. Given this, it is not surprising that he was able to specify accurately whether the lyrics he heard were novel or familiar across trials. As well, the activation of the representation of the lyrics to a familiar song would create, through activation of associative links between the two lexicons and memory for relational information, a set of melodic expectancies. If these expectancies were inconsistent with the simple melody template generated by KB in response to the incoming melody, the result would be the observed pattern of correct detection of a mismatch. KB's inability to recognise the melody correctly in this situation might reflect the effects of interference created in the speech input lexicon between two competing patterns of activation: a strong pattern produced through exposure to the lyrics, and a weaker pattern produced through spread of activation from the crude melody template in the melody input lexicon. Controls, being able to exploit the resources of their intact melody analysis systems and to activate a detailed representation of the melodies, would have been able to overcome any such interference and would achieve recognition of both lyrics and melodies.

Although presentation of novel lyrics would not lead to a set of melodic expectancies, it would lead to activation of the representations of individual words and their related meanings. Again, this pattern of activation would be inconsistent with that produced in response to the melody. Moreover, in the case of KB, it would far outweigh any activation produced by the simple melody template, thereby making melody recognition difficult or impossible.

For KB, song recognition has been spared even though the melody analysis system has been damaged. However, this damage is not complete: KB has residual ability to generate simple, crude, representations of familiar melodies. In the case of song melodies, there is sufficient activation in the melody analysis system to co-activate an intact representation of both relational information and of the lyrics in the speech lexicon, making recognition and identification possible. In the case of instrumental melodies, no such associative processes exist (unless, of course, as noted earlier, KB has previously associated words to the instrumental melody). In the absence of sufficient relational information and tightly connected associative links between melody and speech lexicons, recognition and identification of instrumental melodies does not occur.

Manuscript received 28 May 1999
Revised manuscript received 15 August 2000
Revised manuscript accepted 19 September 2000

    REFERENCES

Anderson, C. (1996). Finale music notation software [Computer software]. Eden Prairie, MN: Coda Music Technology.

Bartlett, J.C., & Snelus, P. (1980). Lifespan memory for popular songs. American Journal of Psychology, 93, 551–560.

Besson, M., Faïta, F., Peretz, I., Bonnel, A.-M., & Requin, J. (1998). Singing in the brain: Independence of lyrics and tunes. Psychological Science, 9, 494–498.

Bigand, E. (1993). The influence of implicit harmony, rhythm and musical training on the abstraction of tension-relaxation schemas in tonal musical phrases. Contemporary Music Review, 9, 123–137.

Blair, J.R., & Spreen, O. (1989). Predicting premorbid IQ: A revision of the National Adult Reading Test. The Clinical Neuropsychologist, 3, 129–136.

Boltz, M. (1989). Rhythm and good endings: Effects of temporal structure on tonality judgments. Perception and Psychophysics, 46, 9–17.

Boltz, M., & Jones, M.R. (1986). Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology, 18, 389–431.

Crowder, R.G., Serafine, M.L., & Repp, B.H. (1990). Physical interaction and association by contiguity in memory for the words and melodies of songs. Memory and Cognition, 18, 469–476.

Dowling, W.J., & Harwood, D.L. (1986). Music cognition. Orlando, FL: Academic Press.

Fuld, J.F. (1995). The book of world-famous music: Classical, popular, and folk. New York: Dover.

Gardiner, J.M., Kaminska, Z., Java, R.I., Clarke, E.F., & Mayer, P. (1990). The Tulving-Wiseman law and the



recognition of recallable music. Memory and Cognition, 18, 632–637.

Goodglass, H., & Kaplan, E. (1983). Boston Diagnostic Aphasia Examination (BDAE). Philadelphia: Lea & Febiger. Distributed by Psychological Assessment Resources, Odessa, FL.

Gross, R. (1981). DX-Score [Computer software]. Rochester, NY: Eastman School of Music.

Halpern, A.R. (1984). Organization in memory for familiar songs. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 496–512.

Heaton, R.K. (1981). A manual for the Wisconsin Card Sorting Test. Odessa, FL: Psychological Assessment Resources.

Hébert, S., & Peretz, I. (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory and Cognition, 25, 518–533.

Java, R.I., Kaminska, Z., & Gardiner, J.M. (1995). Recognition memory and awareness for famous and obscure musical themes. European Journal of Cognitive Psychology, 7, 41–53.

Johns, M. (Ed.). (1980). Jumbo: The children's book (3rd ed.). Miami Beach, FL: Hansen House.

Koh, C.K., Cuddy, L.L., & Jakobson, L.S. (in press). Associations and dissociations between music training, tonal and temporal processing, and cognitive skills. Proceedings of the New York Academy of Science: Biological Foundations of Music.

Kolinsky, M. (1969). Barbara Allen: Tonal versus melodic structure, Part II. Ethnomusicology, 13, 173.

Krumhansl, C.L. (1990). Cognitive foundations of musical pitch. New York: Oxford University Press.

Krumhansl, C.L. (1991). Music psychology: Tonal structures in perception and memory. Annual Review of Psychology, 42, 277–303.

Krumhansl, C.L. (1995). Music psychology and music theory: Problems and prospects. Music Theory Spectrum, 17, 53–80.

Krumhansl, C.L., & Kessler, E.J. (1982). Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychological Review, 89, 334–368.

Lengeling, G., Adam, C., & Schupp, R. (1990). C-Lab Notator SL/Creator SL (version 3.1) [Computer software]. Ha