
Examination of Emotion-Modulated Processing Using Eye Movement Monitoring and Magnetoencephalography

by

Lily Riggs

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy

Psychology

University of Toronto

© Copyright by Lily Riggs 2012


Examination of Emotion-Modulated Processing Using Eye

Movement Monitoring and Magnetoencephalography

Lily Riggs

Doctor of Philosophy

Psychology

University of Toronto

2012

Abstract

Research shows that emotional items are associated with enhanced processing and

memory. However, emotional memories are composed of not only memory for the specific

emotion-eliciting item, but also memory for other items associated with it, as well as memory for how these

items are related. The current thesis utilized verbal report, eye movement monitoring and

magnetoencephalography in order to examine how emotions may influence online processing

and memory for associated information. It was found that while emotions influenced attention to

both the emotion-eliciting item and associated information during the encoding stage, this was

not related to subsequent memory performance as indexed by verbal report. It was also found

that while emotions impaired detailed memory for associated information, they did not affect the

ease or speed with which those memories could be accessed. Using MEG, it was found that

emotions may modulate not only how participants view associated information, but also

the type of representation formed. Together, findings from the current work suggest

that: (1) emotions influence online processing and memory for associated information; (2)

emotions modulate memory for associated information via routes other than overt attention; (3)

encoding and retrieval may occur in stages; and (4) memory exerts early influences on

processing. The current work shows that emotions modulate online processing of associated


neutral information in a top-down manner, independent of differences in its physical properties.

Work from this thesis encourages a reconceptualization of emotion, memory and perception and

how they relate to one another. Rather than viewing them as independent modular

processes, they may, in fact, be more widely distributed in the brain and interact more closely

than previously described. This may be evolutionarily adaptive, allowing us to quickly and

efficiently form memories for emotional events/scenes that can later guide perception and

behaviour.


Acknowledgments

I owe my deepest gratitude to my advisor, Dr. Jennifer D. Ryan. She has exceeded what an ideal

supervisor should be. Dr. Ryan has taught me how to think, write and speak like a scientist. She

has provided me with the perfect balance of challenge and support. She has an infectious passion

for science and inexhaustible attention to detail. Although we are still arguing over appropriate

spelling conventions (note the British/Canadian spelling throughout), she has influenced me in

immeasurable ways. I would like to thank her for inspiring me and always pushing me to be

better. She is truly a Great, and I hope to do her proud.

Many thanks to my thesis committee, Dr. Adam K. Anderson and Dr. Bernhard Ross, for their

invaluable insight, guidance, and support.

I am also honoured to have been taught and mentored by Drs. Morris Moscovitch, Sandra N.

Moses, Anthony T. Herdman, Takako Fujioka, and Timothy Bardouille. They have lent me their

years of expertise and patiently taught me the fundamentals of science and neuroimaging.

I would like to thank my amazing lab mates and Rotman colleagues who were always there for

moral support and in-depth discussions of science, puppies, and food. I am especially grateful to

Christina Richardson for her unwavering loyalty and friendship, for always helping me step back

and see the bigger picture, and for teaching me how to use pivot tables. I would also like to

thank future Dr. Mark Chiew, a Python wizard who babysat my bike rides to and from work,

wrote countless scripts for my experiments (2.5), and kept me sane and entertained during

innumerable hours at the library writing. Our bond is like Valyrian steel.

I am grateful for all the wonderful collaborators, research assistants, and students who have helped

me with my research projects: Douglas A. McQuiggan, Norman Farb, Daniel Lee, Ella Pan, Amy

Oziel, Helen Dykstra, and Lisa Bolshin.

I would also like to thank my surrogate Toronto families: the Mrazs, Kings and Godris. They

welcomed me into their homes, fed me, and taught me the essential life skills that I did not even

realize that I would need. I would like to thank Vera Mraz especially for always being so

positive and enthusiastic about my research, and for being the best mother-in-law one could hope

for.


Special thanks to my bridesmaids/best friends for their years of cheerleading and “bad”

influence: Danielle King, Christina Richardson, Krystal Godri, Yu Gu and Christina Sinopoli.

I am extremely grateful to all the institutions and persons who funded my graduate career and

rockstar lifestyle: Natural Sciences and Engineering Research Council, Ontario Mental Health

Foundation, Jack & Rita Catherall Fund and Men’s Services Group at Baycrest Hospital, Jennifer

D. Ryan Fund, The Richard Mraz Foundation, Alpha Gamma Delta Foundation, The Riggs

Family, and University of Toronto.

Many, many thanks to my fabulous family who have always supported me and encouraged me in

every way. They instilled in me a great love of learning, reading and science, and they have only

themselves to blame that I did not grow up to be rich and famous.

I am deeply indebted (literally and figuratively) to my new husband, Richard Mraz, who has

been exceptionally loving and surprisingly patient throughout this entire process. I love him

more than I could ever adequately express and feel extremely fortunate to have him in my life.


Table of Contents

Acknowledgments .......................................................................................................................... iv

Table of Contents ........................................................................................................................... vi

List of Tables ................................................................................................................................. xi

List of Figures ............................................................................................................................... xii

Chapter 1 Background and Rationale ............................................................................................. 1

1 Background and rationale .......................................................................................................... 2

1.1 Introduction ......................................................................................................................... 2

1.2 Emotion-Enhanced Memory for Items ............................................................................... 4

1.3 Memory Systems ................................................................................................................ 5

1.4 Emotions and Relational Memory ...................................................................................... 9

1.4.1 Emotion Enhances Relational Memory .................................................................. 9

1.4.2 Emotion Impairs Relational Memory ................................................................... 10

1.4.3 Questions Remaining ............................................................................................ 12

1.5 Eye Movement Monitoring ............................................................................................... 12

1.5.1 Measures of Attention ........................................................................................... 12

1.5.2 Eye Movement Monitoring and Attention ............................................................ 13

1.5.3 Eye Movement Monitoring and Memory ............................................................. 15

1.6 Magnetoencephalography ................................................................................................. 17

1.7 Objectives ......................................................................................................................... 19

Chapter 2 The Role of Overt Attention in Emotion-Modulated Memory .................................... 23

2 The role of overt attention in emotion-modulated memory ..................................................... 24

2.1 Abstract ............................................................................................................................. 24

2.2 Introduction ....................................................................................................................... 24

2.3 Method .............................................................................................................................. 26


2.3.1 Participants ............................................................................................................ 26

2.3.2 Stimuli and Design ................................................................................................ 26

2.3.3 Procedure .............................................................................................................. 27

2.3.4 Analysis ................................................................................................................. 29

2.4 Results ............................................................................................................................... 31

2.4.1 Study Blocks ......................................................................................................... 31

2.4.2 Test Blocks ............................................................................................................ 33

2.4.3 Mediation Analysis ............................................................................................... 33

2.5 Discussion ......................................................................................................................... 35

2.5.1 Attention Narrowing ............................................................................................. 36

2.5.2 Central/Peripheral Tradeoff in Memory and Attention ........................................ 37

2.5.3 Mechanisms Underlying Emotion-Enhanced Memory ........................................ 39

2.5.4 Limitations and Future Directions ........................................................................ 41

2.6 Acknowledgments ............................................................................................................. 42

Chapter 3 Eye Movement Monitoring Revealed Differential Influences of Emotion on

Memory .................................................................................................................................... 43

3 Eye movement reveals differential influences of emotion on memory ................................... 44

3.1 Abstract ............................................................................................................................. 44

3.2 Introduction ....................................................................................................................... 44

3.3 Materials and Methods ...................................................................................................... 47

3.3.1 Participants ............................................................................................................ 47

3.3.2 Stimuli and Design ................................................................................................ 47

3.3.3 Procedure .............................................................................................................. 48

3.3.4 Analysis ................................................................................................................. 49

3.4 Results ............................................................................................................................... 51

3.4.1 Central Pictures ..................................................................................................... 51


3.4.2 Peripheral Objects ................................................................................................. 53

3.4.3 Relation between Verbal reports and Eye Movement Data .................................. 55

3.5 Discussion ......................................................................................................................... 56

3.6 Acknowledgments ............................................................................................................. 60

Chapter 4 A Complementary Analytic Approach to Examining Medial Temporal Lobe

Sources Using Magnetoencephalography ................................................................................ 62

4 A complementary analytic approach to examining medial temporal lobe sources using

magnetoencephalography ......................................................................................................... 63

4.1 Abstract ............................................................................................................................. 63

4.2 Introduction ....................................................................................................................... 63

4.3 Method .............................................................................................................................. 70

4.3.1 Participants ............................................................................................................ 70

4.3.2 Stimuli ................................................................................................................... 70

4.3.3 Procedure .............................................................................................................. 71

4.3.4 Data acquisition .................................................................................................... 72

4.3.5 Data analysis ......................................................................................................... 73

4.4 Results ............................................................................................................................... 76

4.4.1 Behavioral responses ............................................................................................ 76

4.4.2 Signal power changes in neural responses: SAM ................................................. 76

4.4.3 Coherence in neural responses: ITC ......................................................... 77

4.4.4 Averaged event related neural responses: ER-SAM ............................. 84

4.5 Discussion ......................................................................................................................... 88

4.5.1 Multiple MEG data analyses ................................................................................. 89

4.5.2 Consistency across the data analyses .................................................................... 91

4.5.3 Theoretical implications ........................................................................................ 93

4.5.4 Concluding remarks and future considerations ..................................................... 94

4.6 Acknowledgments ............................................................................................................. 95


Chapter 5 Emotional Associations Alter Processing of Neutral Faces ......................................... 96

5 Emotional associations alter processing of neutral faces ......................................................... 97

5.1 Abstract ............................................................................................................................. 97

5.2 Introduction ....................................................................................................................... 97

5.3 Methods ........................................................................................................................... 102

5.3.1 Participants .......................................................................................................... 102

5.3.2 Stimuli and Design .............................................................................................. 102

5.3.3 Procedure ............................................................................................................ 103

5.3.4 Data Acquisition ................................................................................................. 104

5.3.5 Analysis for Study Phase .................................................................................... 105

5.3.6 Analysis for Test Phase ....................................................................................... 109

5.4 Results ............................................................................................................................. 109

5.4.1 Study Phase ......................................................................................................... 109

5.4.2 Test Phase ........................................................................................................... 119

5.5 Discussion ....................................................................................................................... 120

5.5.1 Emotion-Modulated Viewing of Words ............................................................. 121

5.5.2 Emotion-Modulated Viewing of Neutral Faces .................................................. 121

5.5.3 Emotion-Modulated Processing of Neutral Faces .............................................. 122

5.5.4 Emotion-Modulated Memory for Neutral Faces ................................................. 127

5.5.5 Conclusions ......................................................................................................... 127

5.6 Acknowledgements ......................................................................................................... 128

Chapter 6 Theoretical and Methodological Contributions, and Concluding Remarks ............... 129

6 Theoretical and methodological contributions, and concluding remarks .............................. 130

6.1 Theoretical Contributions ............................................................................................... 131

6.1.1 Emotions Modulate Visual Processing of Associated Information .................... 131

6.1.2 Emotions-Modulated Relational Memory is not Mediated by Attention ........... 132


6.1.3 Emotion Has Differential Effects on Memory .................................................... 133

6.1.4 Encoding and Retrieval Occur in Stages ............................................................ 135

6.1.5 Memory Exerts Early Influences on Processing ................................................. 136

6.2 Methodological Contributions ........................................................................................ 137

6.2.1 Magnetoencephalography ................................................................................... 138

6.2.2 Eye Movement Monitoring and Magnetoencephalography ............................... 139

6.3 Summary and Concluding Remarks ............................................................................... 141

References ................................................................................................................................... 143


List of Tables

Table 2.1 Mean responses and standard errors for peripheral objects and central pictures. ........ 33

Table 3.1 Means and standard errors for eye movement measures for viewing of the critical

object in the periphery and central scenes during the test session. ................................................ 52

Table 3.2 Mean responses and standard errors for peripheral objects and central pictures. ......... 52

Table 4.1 Average accuracy for correctly identifying a scene as ‘old’ or ‘new’ .......................... 76

Table 4.2 Source ITC values and Talairach co-ordinates for individual ITC maps showing

hippocampal activity ..................................................................................................................... 80

Table 4.3 Source ER-SAM values for individual participants within the hippocampus ............. 86

Table 5.1 The mean and SEM of ratings for the negative and neutral sentences used in the

experiment ................................................................................................................................... 103

Table 5.2 The mean and SEM for different eye movement measures of viewing to the critical

word when it was negative and neutral. ...................................................................................... 110

Table 5.3 The mean and SEM for early (A) and overall (B) measures of viewing to different

features within Face 1 and Face 2. .............................................................................................. 111

Table 5.4 Mean accuracy and SEM for identifying different face types during the test phase .. 120


List of Figures

Figure 2.1 Experimental procedure. .............................................................................................. 29

Figure 2.2 Viewing to central and peripheral elements during the study phase. ......................... 32

Figure 2.3 A mediation model with emotion, attention (number of fixations) and memory. ....... 35

Figure 3.1 The proportion of change in viewing the critical object in a manipulated and repeated

object array relative to a novel object array. ................................................................................. 54

Figure 4.1 Example of an indoor and outdoor scene used in the experiment. .............................. 72

Figure 4.2 Group-averaged SAM activation maps ....................................................................... 77

Figure 4.3 Group-averaged time-frequency maps of ITC for the hippocampus ........................... 78

Figure 4.4 Group-averaged volumetric maps of ITC .................................................................... 79

Figure 4.5 Representative individual volumetric maps of inter-trial coherence ........................... 82

Figure 4.6 The averaged location of hippocampal activity based on individual ITC maps ......... 84

Figure 4.7 Group-averaged time-courses of neural activity emanating from the hippocampus as

revealed by the ER-SAM analysis ................................................................................................ 85

Figure 4.8 Representative individual time-courses of hippocampal activity ................................ 88

Figure 4.9 The averaged location of hippocampal activity based on individual ER-SAM maps . 88

Figure 5.1 Experimental procedure ............................................................................................. 104

Figure 5.2 LV1 from PLS analysis ............................................................................................. 113

Figure 5.3 Sources showing stronger activation for faces paired with negative as compared to

neutral sentences ......................................................................................................................... 114

Figure 5.4 Sources showing stronger activation for faces paired with neutral as compared to

negative sentences ....................................................................................................................... 117


Figure 5.5 Sources initially showing stronger activation for faces paired with neutral as

compared to negative sentences, then stronger activation for faces paired with negative as

compared to neutral sentences .................................................................................................... 118


Chapter 1 Background and Rationale


1 Background and rationale

1.1 Introduction

The durability of emotional memories is verified not only by personal experience, but also by

empirical evidence (e.g. Christianson & Loftus, 1987; Heuer & Reisberg, 1990; Cahill et al.,

1996; Phelps, LaBar, & Spencer, 1997). From an evolutionary perspective, it is very adaptive,

and often crucial, to remember emotional information that is appetitive (the best place for

foraging) or aversive (the dwelling of a predator). An extreme example of emotion-enhanced

memory is called ‘flashbulb memory’. According to Brown and Kulik (R. Brown & Kulik,

1977), flashbulb memories are formed when something emotionally intense happens and are

richly detailed, complete, accurate, and immune to

forgetting. This led some researchers to suggest that emotionally arousing memories are

indelible (LeDoux, 1992). However, it has since been shown that although greater emotional

intensity is associated with greater memory confidence, it is not necessarily associated with

higher memory accuracy (Talarico & Rubin, 2003). Indelible or not, and for better or worse,

emotional memories often endure longer and contain more vivid details than non-emotional

memories.

However, in order to remember an event, one needs to remember not only the individual details,

but also be able to bind them into a coherent whole. For example, if one were the eyewitness to

an armed robbery, it would be important to not only remember seeing a weapon (e.g. a gun), but

to also associate the weapon with other aspects of the event such as what the person holding the

gun looked like, what s/he was wearing, and who else may have been involved. While there is an

abundance of research focused on examining how emotion may influence memory for the

emotion-arousing item (i.e. the gun), there is a lack of research focused on how emotion may

modulate memory for associated information that may not be emotional in and of themselves

(e.g. what kind of clothes the perpetrator was wearing). In other words, it is unclear how

emotions may affect our ability to process the surrounding neutral information and bind various

bits of information together into a coherent and lasting episode. Such an examination may help

us better understand the interaction between emotions and memory. Specifically, do emotions

enhance memory for all aspects of an event, or only some aspects of an event? If emotions only

enhance some aspects of an event, how does this occur? Such an examination may also have


clinical relevance and lead to insights for understanding how neutral information may be altered

and become imbued with emotional significance in disorders such as post-traumatic stress

disorder (PTSD).

In studies examining how emotions may influence memory, most have assessed memory directly

via either recognition (i.e. has this stimulus been presented before?) or recall (i.e. tell me what

you remember) tasks (e.g. Adolphs, Cahill, Schul, & Babinsky, 1997; Adolphs, Denburg, &

Tranel, 2001; Anderson, Wais, & Gabrieli, 2006; Buchanan, Denburg, Tranel, & Adolphs, 2001;

Cahill et al., 1996). However, verbal reports represent the final output of a long chain of

processes that occur prior to it and emotions may influence memory by modulating any, or all of

those processes. Emotions may modulate how much attention we may direct to a stimulus,

thereby influencing what kind of information we get into memory in the first place (encoding).

Emotions may also influence how quickly we may be able to access memory representations

and/or the way in which we evaluate those stored representations, thereby influencing what kind

of information we get out of memory (retrieval). In order to assess how emotions may influence

these processes, one must go beyond verbal report and take advantage of technologies that are

able to reveal aspects of online processing, such as eye movement monitoring and neuroimaging

techniques such as magnetoencephalography (MEG).

The current thesis utilizes verbal report, eye movement monitoring and MEG in order to address

the following questions: (1) How do emotions influence attention to both the emotion-eliciting

item (e.g. a gun) and associated information (e.g. what the person holding the gun looked like)

during the encoding stage and what we get into memory; and (2) how do emotions influence the

retrieval of both the emotion-eliciting item and associated information and what we may be able

to get out of memory? In this chapter, background information relevant to addressing the above

questions is reviewed including: emotion-enhanced memory for items, differences between

memory for items and memory for associated information, and emotion-modulated memory for

associated information. The use of eye movement monitoring and MEG, and how they may be

utilized to outline aspects of memory, is also reviewed. Finally, an overview of methods and

specific project objectives are outlined.


1.2 Emotion-Enhanced Memory for Items

The bulk of the literature examining emotion-modulated memory has focused on how emotion

influences memory for single items such as words and faces. In this literature, studies

consistently showed that emotional items are remembered better than neutral items whether

memory is tested immediately after encoding or after a longer delay, i.e. more than 24 hours (for

reviews see: Dolan, 2002; LaBar & Cabeza, 2006). Research has shown that special neural and

hormonal processes exist to enhance emotional, but not nonemotional memories. The brain

region most implicated in emotion processing is the amygdala (for reviews see: Dolan, 2002;

LaBar & Cabeza, 2006; McGaugh, 2000; Phelps, 2004; Zald, 2003). For example, patients with

amygdala lesions did not show emotion-enhanced memory (e.g. Adolphs et al., 1997; Anderson

& Phelps, 2001) and amygdala lesions or infusions of beta-adrenergic receptor antagonists into

the amygdala blocked the memory modulating/enhancing effects of epinephrine and

glucocorticoids on consolidation (McGaugh, 2004).

Although most research on the amygdala’s role in emotion-enhanced memory has focused on

its effects during consolidation, a period in which a memory trace is stabilized after initial

encoding, which can take days (McGaugh, 2000, 2002), it has been shown that the

amygdala may also enhance memory during the encoding stage. For example, it has been found

that even when memory was tested immediately, the amount of amygdala activity during

encoding was positively correlated with subsequent memory for the emotional items (e.g.

Kensinger & Schacter, 2006; Richardson, Strange, & Dolan, 2004). It is suggested that in

addition to its role during consolidation, the amygdala may also enhance memory by modulating

attention, i.e. the active processing of specific information in the environment (LaBar & Cabeza,

2006), and sensory processing, i.e. the construction of a coherent representation regarding

sensory input (Armony & Dolan, 2002; Carretie, Hinojosa, Martin-Loeches, Mercado, & Tapia,

2004; Phelps, 2004; Williams, Mathews, & MacLeod, 1996), both of which have been shown to

be significant factors in enhancing memory (Craik, Govoni, Naveh-Benjamin, & Anderson,

1996; Phelps, 2004; Talmi, Anderson, Riggs, Caplan, & Moscovitch, 2008). In other words,

emotions may enhance memory by increasing the amount of attention one may direct to the

emotion-eliciting item and/or by enhancing the visual representation of that item in the brain.


Another way in which the amygdala may enhance memory is through direct modulation of

mnemonic structures in the medial temporal lobe. The amygdala has connections to many

regions involved in memory such as the caudate nucleus, the rhinal cortex and the hippocampus

(Pikkarainen, Ronkko, Savander, Insausti, & Pitkanen, 1999; Pitkanen, Pikkarainen, Nurminen,

& Ylinen, 2000). Via these projections, the amygdala is able to enhance different forms of

memory. For example, it has been shown that while infusions of amphetamine into the

hippocampus and caudate nucleus enhanced memory for spatial and cued training, respectively,

infusions of amphetamine into the amygdala enhanced both types of memory

(Packard & Cahill, 2001; Packard, Cahill, & McGaugh, 1994).

As mentioned previously, although there is ample research showing emotion-enhanced memory

for items such as faces and words, there is less research that examines the effects of emotion on

memory for neutral information associated with the emotional item. One may be tempted to

argue that since emotions enhance memory for items, they may also enhance memory for

information that is associated with the emotional items. After all, when people report on the

contents of their memory for emotional events (e.g. an armed robbery), they do not only report

on the emotional item (e.g. the gun), but also other neutral items associated with the event (e.g.

what the perpetrator was wearing). However, as mentioned at the beginning of this chapter,

research has shown that such reports were not always accurate (Talarico & Rubin, 2003).

Further, there is evidence to suggest that the same conditions that promote memory for items

may not promote memory for associated information, and that these two forms of memory may

rely on different neural regions (e.g. Craik, Luo, & Sakuta, 2010; Litman & Davachi, 2008).

Therefore, just because emotions enhance memory for the emotion-eliciting item, it does not

necessarily mean that emotions also enhance memory for information associated with the item.

In the next section, differences between item memory and associative memory are outlined

within a larger framework of memory systems.

1.3 Memory Systems

In 1953, a patient underwent a bilateral medial temporal-lobe resection for the relief of

incapacitating non-focal seizures. Subsequent to the operation, although the patient’s seizure

episodes decreased, he now suffered from profound anterograde amnesia – an inability to form

new memories. Importantly, this memory deficit was not accompanied by other intellectual or


motor skill deficits (Corkin, 1968, 2002). The patient is now famously known as H.M. (Scoville

& Milner, 1957, 2000); his case led to the discovery that memory depends on the integrity of medial

temporal brain regions, and specifically the hippocampus (e.g. Scoville & Milner, 2000; Squire

& Zola-Morgan, 1991; Zola-Morgan, Squire, & Amaral, 1986). This was corroborated by cross-

species studies (for review see Squire, 1992) and set the stage for current cognitive and

neuroscientific theories.

One of the most important ideas to emerge in the last few decades is that memory is not a unitary

system, but divided into subsystems supported by different regions in the brain. In 1980, Cohen

and Squire (N. J. Cohen & Squire, 1980) proposed that declarative memory, defined as the long-

term memory for events, depends on medial temporal regions of the brain, especially the

hippocampus, and is compromised in amnesia. On the other hand, procedural memory, defined

as memory for skills and measured by tasks of motor skills (Corkin, 1968), classical conditioning

(Warrington & Weiskrantz, 1982; Weiskrantz & Warrington, 1979), priming (Warrington &

Weiskrantz, 1968) and perceptual skills (Milner, Corkin, & Teuber, 1968), depends on cortical

regions of the brain and is spared in amnesic patients.

While almost all researchers agree that memory is not a unitary system, and that loss of

hippocampal function is associated with impaired declarative memory and preserved procedural

memory, there is intense debate regarding what ‘declarative memory’ encompasses (for reviews

see: N. J. Cohen, Poldrack, & Eichenbaum, 1997; Squire, 2004; Tulving, 1987). Some

researchers argue that declarative memory encompasses memory for information that is

consciously or explicitly accessible such as memory for facts and events, as opposed to memory

for skills such as riding a bike which would be considered a procedural memory. Under this

definition, declarative memory is synonymous with explicit or conscious memory, and the

critical role of the hippocampus is to form and retrieve memories that are available to conscious

introspection (Graf & Schacter, 1985; Schacter, 1987; Squire & Zola, 1997). This is referred to

as the “explicit account” and under this definition, conscious memory for both singular items

(e.g. a face) and the relations between multiple items (e.g. a face within a particular scene) will

rely critically on the integrity of the hippocampus. Supporting this, it has been found that

amnesic patients with damage to the hippocampus are impaired when they have to recall or

recognize items (e.g. words) and associative information (e.g. word pairs; for reviews see: N. J.

Cohen et al., 1999; Mayes, Montaldi, & Migo, 2007; Squire, 2009). In view of this, it could then

be reasoned that, since memory for items and memory for associations rely on the same neural

region, and research clearly shows that emotions enhance memory for items (Section 1.2),

emotions may also enhance memory for associations as well.

In contrast to the explicit account, some researchers use declarative memory to describe

relational memory. Relational memory is defined as the formation of relations/associations

among items within a scene or an event into a lasting representation, and relies critically on the

integrity of the hippocampus (N. J. Cohen et al., 1997; N. J. Cohen et al., 1999). This is referred

to as the “relational account” and under this definition, memory for the relations between items

depends critically on the hippocampus, irrespective of conscious awareness (Chun & Phelps,

1999; Ryan, Althoff, Whitlow, & Cohen, 2000; Ryan & Cohen, 2004), and memory for items does

not depend on the hippocampus. For example, it has been reported that the more complex a

stimulus is and the more associations that are required to memorize it, the greater the

hippocampal activity observed (Henke, Weber, Kneifel, Wieser, & Buck, 1999; Kirwan & Stark,

2004; Montaldi et al., 1998; Stern et al., 1996). Critically, it has been shown that while amnesics

can express memory for items (e.g. faces), they do not show memory for the relations between

items irrespective of whether memory was assessed directly via verbal report or indirectly via

eye movement monitoring without requiring participants to explicitly comment on the contents

of their memory (Ryan et al., 2000; Ryan & Cohen, 2004). This suggests that relational memory

can be decoupled from conscious awareness and that memory for items and memory for the

relations between items is supported by different regions in the brain. In view of this, it could be

argued that just because emotions enhance item memory (Section 1.1), this does not necessarily

mean that emotions would also enhance relational memory.

It should be clear from the above that not only do proponents of the explicit and relational

account of declarative memory disagree on what encompasses ‘declarative memory’ and what

the critical role of the hippocampus is, but these accounts also lead to potentially different

predictions as to the way in which emotions may influence relational memory. However, it is

important to note that although proponents of the explicit versus relational account may disagree

on the critical role of the hippocampus, most researchers would agree that the hippocampus plays

an important role in relational memory. In further support of this, there is a growing body of

literature showing that not only is the hippocampus specialized for relational memory, but other

regions within the medial temporal lobe may be specialized for other types of memory as well


(e.g. Henson, 2005; see also: Squire, Stark, & Clark, 2004). For example, results from rat studies

show a functional dissociation such that lesions to the hippocampus led to impaired responding

on tasks requiring relational memory, whereas lesions to the perirhinal cortex led to impaired

responding on tasks requiring item memory (e.g. Moses, Cole, Driscoll, & Ryan, 2005). In a

study by Wan and colleagues (Wan, Aggleton, & Brown, 1999), neuronal activity in rats was

measured using immunohistochemistry for the protein products of c-fos while the rats viewed

familiar and novel pictures simultaneously. It was found that when the rats had to distinguish

between familiar and novel items, activity in the perirhinal cortex was significantly higher for

novel as compared to familiar objects. However, when rats had to distinguish between objects in

familiar versus novel arrangements, activity in the hippocampus was significantly higher for

novel as compared to familiar arrangements.

Similar results to those reported in rats have also been observed in humans using

neuroimaging techniques such as functional magnetic resonance imaging (fMRI). Specifically,

memory with an associative component (i.e. item + contextual information) tends to elicit more

activity in the hippocampus, whereas memory for single items tends to elicit more activity in the

perirhinal cortex (M. W. Brown & Aggleton, 2001; Davachi, Mitchell, & Wagner, 2003; Dougal,

Phelps, & Davachi, 2007; Ranganath et al., 2004). For example, in an fMRI study of item and

relational memory (Davachi et al., 2003), participants were presented with a list of adjectives and

were instructed to either read the word backwards (“Read”) or to form a mental image of the

word (“Image”). After 20 hours, participants’ item (i.e. is the word old or new?) and

source/relational memory (i.e. was the word studied in the Read or Image condition?) were

examined. It was found that the level of activity in the perirhinal cortex predicted later item

recognition, but it did not predict later source memory. On the other hand, the level of activity in

the hippocampus predicted subsequent success in recalling the condition in which the word was

studied (source memory), but it did not predict subsequent item memory.
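The logic of such subsequent-memory analyses can be sketched in a few lines: does trial-by-trial activity in a region during encoding covary with whether an item is later remembered? The following toy simulation is illustrative only; the data, effect size, and region labels are fabricated, and this is not the analysis used in the study described above.

```python
# Toy simulation of the subsequent-memory logic (fabricated data; not the
# study's actual analysis): does trial-wise encoding activity in a region
# covary with whether an item is later remembered?
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100
remembered = rng.integers(0, 2, n_trials)          # 1 = later remembered

# Simulated encoding activity: the "predictive" region is shifted upward on
# later-remembered trials; the "non-predictive" region is unrelated to memory.
predictive_roi = rng.normal(0.0, 1.0, n_trials) + 0.8 * remembered
nonpredictive_roi = rng.normal(0.0, 1.0, n_trials)

def point_biserial(activity, outcome):
    """Correlation between continuous activity and a binary memory outcome."""
    return float(np.corrcoef(activity, outcome)[0, 1])

print("predictive region:     r =", round(point_biserial(predictive_roi, remembered), 2))
print("non-predictive region: r =", round(point_biserial(nonpredictive_roi, remembered), 2))
```

In this toy example, only the simulated “predictive” region yields a sizeable correlation with later memory, mirroring the dissociation logic of the fMRI findings described above.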

In light of the above, there are a couple of important points worth highlighting. First, if the

relational account is correct and relational memory can be decoupled from conscious awareness,

then an examination of how emotions may influence relational memory should include both

direct (verbal report) and indirect methods (e.g. eye movement monitoring) of measuring

relational memory. This would not only provide convergent evidence, but it may also reveal

aspects of online processing that cannot be accessed by probing participants’ conscious memory


alone. Second, if different subregions of the medial temporal lobe underlie item and relational

memory, it can be argued that just because emotions may enhance item memory via the

perirhinal cortex, this does not necessarily imply that they must also enhance relational memory via

the hippocampus. Thus, the aim of the current thesis is to examine how emotions may influence

relational memory by utilizing direct and indirect methods to assess memory formation and

retrieval. However, before describing the specific methods in detail, a review of the relevant

literature concerning emotion-modulated relational memory is described in the next section.

1.4 Emotions and Relational Memory

The literature is mixed and somewhat unclear as to whether emotions may enhance or impair

relational memory. There are two main theories that have emerged concerning the effects of

emotion on relational memory and I examine each in turn: (1) emotion enhances relational

memory; and (2) emotion enhances memory for the emotion-eliciting item, but impairs relational

memory.

1.4.1 Emotion Enhances Relational Memory

MacKay and colleagues (Hadley & Mackay, 2006; MacKay & Ahmetzanov, 2005; MacKay et

al., 2004) have proposed that emotional arousal may enhance relational binding by acting as the

‘glue’ that preferentially binds features within the emotional item as well as between it and its

experimental context (e.g. information regarding when and where the experiment occurred).

These studies examined differences in memory for taboo versus neutral words. For example,

participants were presented with taboo words and neutral words typed in different font colours

and were instructed to ignore the meaning of the word and name the colour of the font. In a

surprise memory test, it was found that not only were the participants more accurate in recalling

the taboo words as compared to the neutral words, they were also more accurate in remembering

the colour in which the taboo words were presented (MacKay et al., 2004). However,

remembering the colour of the font in which a word was typed represents enhanced memory for

specific details of the word, or in other words, enhanced memory for specific details of an item.

Thus, this type of memory is not considered relational memory, but rather item memory and

likely relies on the perirhinal cortex rather than the hippocampus (Section 1.3).


Similar results have also been reported by D’Argembeau and Van der Linden (D'Argembeau &

Van der Linden, 2004, 2005) examining the effects of emotion on associated information such as

spatial location and temporal order. In one of these studies (D'Argembeau & Van der Linden,

2004), the researchers presented participants with positive, negative and neutral words in a 4×4

grid and found that participants were better able to identify the spatial locations in which the

emotional words, as compared to the neutral words, had been presented. In a separate study, the

researchers also examined the influence of emotions on associated temporal information

(D'Argembeau & Van der Linden, 2005). Here, participants were presented with three separate

lists composed of negative, positive and neutral complex visual scenes. Memory for associated

temporal information was examined by asking participants to recall in which list a certain scene

had originally been presented during the encoding phase. As expected, memory for temporal

information was more accurate for negative versus neutral pictures.

The above studies show that emotions enhanced memory for not only the emotional item, but

also for contextual details associated with the item such as its spatial location and its temporal

position. However, it is unclear how emotions may enhance memory for such relations.

D’Argembeau and Van der Linden (D'Argembeau & Van der Linden, 2004; D'Argembeau &

Van der Linden, 2005) suggested that emotional items may capture and hold one’s attention to a

greater extent, thereby facilitating memory for both the item and information associated with it.

However, this has not been examined directly. And perhaps even more problematically,

emotion-modulated attention is also cited as the reason for the opposite pattern of results from those

reported above, namely, emotions lead to impaired relational memory. These studies are

reviewed in the next section.

1.4.2 Emotion Impairs Relational Memory

Returning briefly to the scenario of being a witness in an armed robbery (Section 1.1), some

studies have shown that contrary to the results reported above (Section 1.4.1), people actually

have worse memory for information associated with the emotional event such as what the

perpetrator looked like. It is suggested that while emotions enhance memory for the emotion-

eliciting item (e.g. gun), the cost of this memory enhancement is impaired memory for associated

information. In other words, emotions may enhance item memory at the cost of impaired


relational memory. This is commonly referred to as the central/peripheral tradeoff effect in

memory (for review see Christianson, 1992).

In a classic study by Loftus and colleagues (E. F. Loftus, Loftus, & Messo, 1987), participants

were shown a slide sequence depicting a man (target) holding a gun or a bill to the cashier at a

fast food restaurant line-up. Later, participants completed a 20-item recognition test and

attempted to identify the target man from a 12-person line-up as well as other aspects of the

scenes presented. The researchers found that when participants viewed the slide sequence with

the gun, they performed better in identifying the gun but more poorly in identifying the man

holding the gun as compared to when the man was holding a bill. This showed the purported

central/peripheral tradeoff effect in memory and similar results have been reported in numerous

studies (e.g. Jurica & Shimamura, 1999; Kensinger, Piguet, Krendl, & Corkin, 2005; Kramer,

Buckhout, & Eugenio, 1990; Levine & Pizarro, 2004; Pickel, 1998). Using a different paradigm,

Brown (J. M. Brown, 2003) also reported similar results. Specifically, Brown utilized the

contextual reinstatement (CR) procedure in which he used peripheral information to cue memory

and found that while CR enhanced memory in the neutral and unusual conditions, it did not

enhance memory in the emotionally arousing condition. This was likely because

participants’ memory representations did not contain information pertaining to the periphery

and/or the relation between the peripheral and central information.

It is suggested that the central/peripheral tradeoff effect in memory is the consequence of

differences in attention allocation during encoding. Specifically, it is reasoned that when an

arousing stimulus is present, participants spend most of their time focusing on it, which then

results in better encoding and memory for the emotion-eliciting item, but impaired encoding and

memory for the associated information (Armony & Dolan, 2002; J. M. Brown, 2003;

Easterbrook, 1959; Kensinger et al., 2005; E. F. Loftus et al., 1987; Wessel & Merckelbach,

1997). According to Easterbrook’s (Easterbrook, 1959) hypothesis, this attention narrowing

effect occurs because emotional arousal leads to a restricted focus on the emotion-eliciting item

and as a consequence of that, fewer resources are available for processing associated information

in the periphery. This bias in the attention systems seems to be evolutionarily adaptive. A major

function of attention is to ignore irrelevant and select relevant stimuli in the environment (Lavie,

Hirst, de Fockert, & Viding, 2004) and this ability is especially important in the selective

appraisal of appetitive and aversive stimuli in order to guide approach and avoidance behaviour.


However, although there is an abundance of studies showing that emotions preferentially capture

and sustain attention (e.g. Anderson, 2005; Anderson & Phelps, 2001; Armony & Dolan, 2002;

Calvo & Lang, 2005; E. F. Loftus et al., 1987; Nummenmaa, Hyona, & Calvo, 2006; Ohman,

Flykt, & Esteves, 2001; Ohman & Mineka, 2001), there has not been a successful attempt to

directly examine whether such differences in attention allocation during the encoding period are

related to differences in subsequent relational memory performance.

In summary, some lines of research show that emotions enhance relational memory while other

lines of research show that emotions impair relational memory. Further, while both camps

suggest that these emotion-modulated differences in memory are the result of emotion-

modulated differences in attention during the encoding stage, this has not been successfully

examined. In the next section, I highlight some of the issues within this body of literature and the

questions that still remain.

1.4.3 Questions Remaining

As mentioned in the previous section, although many researchers have suggested that emotion-

modulated differences in relational memory may be the result of differences in attention during

the encoding stage, this has not been successfully examined. As noted in the

introduction (Section 1.1), emotion may modulate relational memory at different stages: not only

during the encoding stage via differences in attention, but also during the retrieval stage. Thus,

in addition to questions regarding how emotions may modulate relational memory via attention

(above), it is also unclear how emotions may modulate relational memory during the retrieval

stage. Such questions cannot be addressed by using direct measures of memory such as recall

and recognition. Therefore, experiments within the present thesis also used eye movement

monitoring and MEG. Each methodology is reviewed in the next sections.

1.5 Eye Movement Monitoring

1.5.1 Measures of Attention

It is often reasoned that emotions may modulate relational memory via differences in attention

during the encoding period. Attentional processes have typically been explored with

the use of behavioural tasks such as the dot probe paradigm (e.g. K. Mogg & Bradley, 1999),

the visual search task (e.g. Fox et al., 2000; Ohman, Flykt, et al., 2001; Ohman, Lundqvist, &


Esteves, 2001; Tipples, Atkinson, & Young, 2002) and the exogenous cueing task (Fox, Russo,

& Dutton, 2002; Koster, Crombez, Van Damme, Verschuere, & De Houwer, 2004; Rowe, Hirsh,

& Anderson, 2007; Yiend & Mathews, 2001). However, these reaction-time-based studies are

not suitable for the research aims of the present thesis for the following reasons: First, they

cannot reveal how emotion may influence attention to information associated with emotional

versus neutral stimuli. Second, they cannot reveal qualitative differences in attention. Third,

they cannot reveal how emotion may influence processes during the retrieval period. In contrast,

an examination of eye movement behaviour can begin to reveal how emotions may influence the

above aspects of attention and retrieval.

The human visual system is structured in such a way that detailed information is primarily

discerned by directly fixating on the region of interest. This is because we have a high-

resolution central fovea and a lower-resolution visual surround (Henderson, Williams, Castelhano,

& Falk, 2003). In this way, the eyes are continually active, sampling the visual world all around

us in order to gather relevant information and to guide subsequent behaviour. The way in which

we view the world (i.e. where we look, how long we may look at it) is driven not only by

stimulus-bound characteristics such as colour and movement, but also by internal cognitive

processes such as goals, semantic knowledge and memory (for review see: Hannula et al., 2010).

Eye movement monitoring takes advantage of these revealing characteristics of eye movement

behaviour, thus making it an excellent tool for studying a variety of cognitive

processes, including attention and memory, and the effects of emotion on both processes. In the

sections below, I outline how researchers have used eye movement monitoring to reveal attention

and memory processes, and how emotions may modulate them.

1.5.2 Eye Movement Monitoring and Attention

One of the ways in which eye movement monitoring has been used to study emotion-modulated

attention is to assess whether emotions lead to orienting or engagement of attention and whether

this process is obligatory or not. Specifically, eye movement monitoring can differentiate

between attention orientation and attention maintenance via differences in early versus later

viewing, respectively (e.g. Ryan & Cohen, 2004). Further, eye movement monitoring can also

yield insights into whether a certain process is obligatory or not by assessing whether the eye

movement effects are affected by task instructions, i.e. if the process is obligatory, then it should


not be affected by task instructions (Ryan, Hannula, & Cohen, 2007). Taking advantage of these

characteristics, the use of eye movement monitoring has shown that the presence of an

emotionally arousing stimulus led to faster orienting and enhanced maintenance of attention, and

some of these effects occurred in an obligatory fashion. For example, when participants viewed

emotional and neutral stimuli simultaneously, they were more likely to direct their first fixation

and subsequent viewing to the emotional versus neutral stimuli (e.g. Calvo & Lang, 2004; Calvo

& Lang, 2005; Caseras, Garner, Bradley, & Mogg, 2007; K. Mogg, Millar, & Bradley, 2000;

Nummenmaa et al., 2006). It has been suggested that participants continued to view emotional

stimuli more than neutral stimuli because it was difficult to disengage attention from the

emotional qualities of the stimulus (Fox, Russo, Bowles, & Dutton, 2001). Further,

Nummenmaa and colleagues (Nummenmaa et al., 2006) found that regardless of whether

participants were instructed to direct their first gaze to the emotional or to the neutral picture,

they were more likely to direct their first gaze to the emotional picture. This suggests a bias in

attentional orienting that may be automatic and/or obligatory.

In addition to providing a measure for the amount of attention directed to a stimulus of interest,

eye movement monitoring has also been used to reveal differences in the manner of viewing

directed to neutral versus emotional stimuli. Differences in the manner of viewing may represent

qualitative differences in attention and/or differences in perception (i.e. the construction of a

coherent representation regarding sensory input). Specifically, researchers have used eye

movement monitoring to examine how eye movement patterns may differ for viewing faces in a

neutral expression versus those expressing emotions such as anger, fear or happiness (e.g. Bate,

Haslam, & Hodgson, 2009; Calder, Young, Keane, & Dean, 2000; M. L. Smith, Cottrell,

Gosselin, & Schyns, 2005; Wong, Cronin-Golomb, & Neargarder, 2005). It has been found that

viewing of threat-related versus non-threat-related (neutral) facial expressions is characterized by

an overall increase in the number of fixations directed to the face and number of regions sampled

within the face (Bate et al., 2009), an increase in sampling of internal features of the face, and an

extensive or “vigilant” style of scanning, i.e. long durations between fixations (Green, Williams,

& Davidson, 2003a). Eye movement patterns have also been found to distinguish between more

specific facial expressions. For example, expressions of anger and fear elicit focusing on the

eyes, and expressions of disgust and happiness elicit focusing on the mouth (Aviezer et al., 2008;

Calder et al., 2000; M. L. Smith et al., 2005; Wong et al., 2005).


In summary, previous studies have shown that emotions modulate not only the amount of

attention directed to emotionally arousing items, but also the manner in which such items are

viewed. In other words, emotion may lead to both quantitative and qualitative differences in

attention. Given these differences, it is possible that the presence of an emotional stimulus may

also lead to quantitative and/or qualitative differences in the processing of associated information

during the encoding stage. Further, eye movement monitoring can be utilized to outline such

emotion-modulated changes in attention. Eye movement monitoring can be used to characterize

several different aspects of attention such as what was attended, how quickly attention was

directed to a certain item of interest, how long attention was sustained and the manner in which

visual stimuli were viewed. Further, such eye movement behaviours can be quantified and

correlated with subsequent memory performance. In the next section, I review how eye

movement monitoring has been used as a measure of memory.

1.5.3 Eye Movement Monitoring and Memory

In addition to the use of eye movement monitoring as a measure of attention during encoding, it

can also be used as an indirect measure of memory during retrieval because it does not require

participants to explicitly comment on the contents of their memory (for review see: Hannula et

al., 2010). In contrast, more traditional means of measuring memory through recall and

recognition accuracy require participants to explicitly comment on the contents of their internal

memory representations which can be problematic or impossible for some populations such as

those without adequate language skills, e.g. babies and animals. Further, while verbal reports are

the end product of a long chain of processes that occur prior to it, eye movement monitoring can

reveal those aspects of online processing that may ultimately culminate in the explicit verbal

response, such as how quickly memories may be accessed and which aspects of a scene are

important in guiding that decision. Based on eye movement effects of memory, Parker (1978) proposed that there may be four different stages that contribute to recognition memory: (1) information regarding the gist of a scene is acquired; (2) the acquired information is compared

with expectations and stored memory representations; (3) there is an evaluation of whether or not

there is a mismatch between the external stimulus and one’s internal memory representation, and

if so, whether this is sufficient for a response; and (4) if the mismatch is sufficient, eyes are then

guided to the region of mismatch in order to gather more information. Given these different

stages of retrieval, it is possible that if emotion influences retrieval processes, it may even have


differential effects on the different stages of retrieval. Although the use of eye movement

monitoring has not been used to study the influence of emotion on the retrieval process, it has

been used by multiple labs as an indirect measure of memory for neutral information (for review

see Hannula et al., 2010).

In using eye movement monitoring to assess memory, there are two commonly reported effects:

the repetition effect and the manipulation effect. The repetition effect describes a phenomenon

in which participants direct significantly fewer fixations (i.e. a discrete pause in eye movements

– the absence of a saccade or blink) to a stimulus that is familiar and that one has a strong

memory representation for (e.g. a famous face) as compared to a stimulus that is novel (e.g.

Althoff & Cohen, 1999; Ryan et al., 2000; Ryan, Hannula, et al., 2007). This ‘repetition’ effect

in eye movement behaviour may be akin to the ‘repetition suppression’ effect observed in neural

activity for priming studies (Schacter & Buckner, 1998) and may represent a ‘sharpening’ of

one’s representation. In other words, as a stimulus becomes more familiar, one would direct

fewer fixations to it because most of its details are already committed to memory.

On the other hand, the manipulation effect describes the phenomenon in which eye movements

to a region of a scene that has undergone a change (e.g. adding an object, deleting an object, or

moving an object’s spatial location) as compared to a region of a scene that has not been

manipulated are characterized by faster orienting (e.g. Parker, 1978), longer fixation durations and

increased number of fixations (Ryan et al., 2000). It is reasoned that if participants had

successfully bound the different elements within the scene into a coherent and lasting memory

representation, then they would be able to detect the subsequent manipulation, whether directly

via verbal report and/or indirectly as measured by eye movement monitoring. Critically, it has

been reported that this manipulation effect is absent in amnesic patients suggesting that such

relational processes are supported by the hippocampus (e.g. Ryan et al., 2000; Ryan & Cohen,

2004).
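To make these two effects concrete, the sketch below tallies fixations falling within rectangular interest areas, the quantity typically compared across repeated, manipulated and novel displays. The data layout (a pandas table of fixations with x, y and duration columns) and all names are illustrative assumptions, not the analysis code behind the studies cited above.

```python
# Illustrative sketch (assumed data layout): quantifying the repetition and
# manipulation effects from fixation data. Each row of `fix` is one fixation
# with screen coordinates (x, y) and a duration in ms.
import pandas as pd

def fixations_in_region(fix: pd.DataFrame, region: tuple) -> pd.DataFrame:
    """Return the fixations landing inside a rectangular interest area (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    return fix[(fix.x >= x0) & (fix.x <= x1) & (fix.y >= y0) & (fix.y <= y1)]

def repetition_effect(fix_novel: pd.DataFrame, fix_repeated: pd.DataFrame) -> int:
    """Repetition effect: fewer fixations to a repeated stimulus than to a novel
    one, so a positive difference indicates the expected effect."""
    return len(fix_novel) - len(fix_repeated)

def manipulation_effect(fix: pd.DataFrame, changed: tuple, matched: tuple) -> dict:
    """Manipulation effect: more, and longer, fixations in the changed region
    than in a matched, unchanged control region of the same scene."""
    in_changed = fixations_in_region(fix, changed)
    in_matched = fixations_in_region(fix, matched)
    return {
        "extra_fixations": len(in_changed) - len(in_matched),
        "extra_duration_ms": in_changed.duration.sum() - in_matched.duration.sum(),
    }
```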

If emotions enhance relational memory, then this should lead to a stronger and/or more

stable memory representation for stimuli paired with emotional information as compared to those

paired with neutral information. From the above, it can be seen that eye movement monitoring

provides a powerful tool by which to examine such emotion-modulated differences in memory,

as well as attention. By using this technique, we can reveal not only aspects of how emotions


may influence relational memory (i.e. via differences in attention), but also describe how

emotions may modulate different stages of the retrieval process. In this way, eye movement

monitoring has the potential to shed light on the nature of memory formation and retrieval, how

malleable such processes may be, and also how extensive the effects of emotions are.

However, although eye movement monitoring is a powerful tool in revealing how relational

memory processes may occur behaviourally, a comprehensive account of emotion-modulated

relational memory should also include an examination of how such processes are supported in

the brain, i.e. which neural regions/networks drive these changes in viewing? As mentioned in

the previous sections, the amygdala and the hippocampus have been found to play an important

role in emotion processing and relational memory, respectively. Thus, the use of neuroimaging

may shed light on how these two regions and/or other neural networks interact to support

emotion-modulated relational memory. In the next section, I review the neuroimaging technique

of magnetoencephalography (MEG) and outline the reasons why this was the ideal tool to

address the question of how emotions may modulate relational memory.

1.6 Magnetoencephalography

In exploring the neural regions underlying emotions and relational memory, many researchers

have utilized fMRI. In convergence with neuropsychological data, this has revealed an

important role of the amygdala in emotion processing (Section 1.2), and of the hippocampus in

relational memory (Section 1.3). However, the temporal resolution of fMRI is on the order of

seconds whereas cognitive operations occur within hundreds of milliseconds. An understanding

of the precise temporal dynamics underlying neural activity is crucial not only for understanding how different neural regions may interact and modulate one another, but also for answering questions regarding the nature of memory (e.g. is memory retrieval an obligatory

process?) and emotion-modulated cognitive processes (e.g. how quickly do emotions influence

relational binding?).

Precise temporal dynamics underlying neural activity can be recorded directly during surgery by

inserting microelectrodes into a particular region. However, this has the obvious drawback that

one can only study the ‘damaged’ brain as opposed to the ‘healthy’ brain. Another method that

can be applied to the study of precise neural dynamics is electroencephalography (EEG), which

is a measurement of electric potential differences on the scalp. Although this is widely used in


the study of various cognitive operations, including memory (for review see: Rugg, 1995b), electric signals measured from the scalp are greatly influenced by various inhomogeneities in the

head, making accurate localization of neural activity within the brain very difficult (Hämäläinen,

Hari, Ilmoniemi, Knuutila, & Lounasmaa, 1993). In contrast, magnetoencephalography (MEG)

is a noninvasive neuroimaging technique that is similar to EEG in that both methods measure

signals generated by synchronized neural activity in the brain. The major difference between the

two methods is that whereas EEG measures electric potential differences that are affected by

various inhomogeneities in the head, MEG measures the magnetic fields produced by populations of neurons, and these fields are largely unaffected by such inhomogeneities (Hämäläinen et al., 1993; Hari, Levanen, & Raij, 2000). Thus, MEG provides recordings of neural activity with

temporal resolution on the order of milliseconds and spatial resolution comparable to that of

fMRI (Miller, Elbert, Sutton, & Heller, 2007).

Taking advantage of the precise temporal resolution of MEG, researchers have been able to

better elucidate the functional networks mediating cognitive processes, including the processing

of emotions (Cornwell et al., 2008; Garolera et al., 2007; Hung et al., 2010; Luo, Holroyd, Jones,

Hendler, & Blair, 2007; Moses et al., 2007; Salvadore et al., 2009; Streit et al., 1999). For

example, based on animal studies, it has been proposed that there are two routes by which threat-

related information gains access to the amygdala: via a fast subcortical route (thalamus-

amygdala) and a slower cortical route (thalamus-sensory cortex-amygdala) (LeDoux, 1996). This

implies that the presentation of a threat stimulus should elicit an early and a later peak of activity

in the amygdala. Due to limitations in its temporal resolution, fMRI cannot be used to

characterize neural activity related to the subcortical and cortical routes by which emotional

information may gain access to the amygdala. However, MEG can be successfully utilized to

characterize such timing differences. In a MEG study examining neural signal changes to faces

expressing fear as compared to faces with a neutral expression (Hung et al., 2010), researchers

found two significant peaks of activity within the amygdala: viewing of faces with a fearful

versus neutral expression elicited significantly higher activity within the right amygdala at 100

ms and then again at 165 ms after stimulus onset.
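To illustrate how such latency contrasts are commonly extracted, the sketch below uses the open-source MNE-Python package to form a fearful-minus-neutral difference wave and search for peaks in an early and a later window. The file name and condition labels are hypothetical, and this is a generic sensor-level illustration rather than the source-localization pipeline of Hung et al. (2010).

```python
# Sketch (hypothetical file and labels): contrasting evoked MEG responses to
# fearful versus neutral faces and locating peaks in two latency windows.
import mne

epochs = mne.read_epochs("faces-epo.fif")    # epoched MEG data, labelled by condition
ev_fear = epochs["fearful"].average()        # evoked response to fearful faces
ev_neut = epochs["neutral"].average()        # evoked response to neutral faces

# Difference wave: activity that is stronger for fearful than for neutral faces
diff = mne.combine_evoked([ev_fear, ev_neut], weights=[1, -1])

# Search an early (~100 ms) and a later (~165 ms) window for response peaks
for tmin, tmax in [(0.07, 0.13), (0.13, 0.20)]:
    ch, latency = diff.get_peak(ch_type="mag", tmin=tmin, tmax=tmax)
    print(f"peak between {tmin*1000:.0f} and {tmax*1000:.0f} ms: "
          f"{latency*1000:.0f} ms at sensor {ch}")
```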

From the above, it can be seen that MEG has been utilized to examine the temporal dynamics

underlying emotion processing and that such activity has been successfully localized to the

amygdala. Thus, it seems that MEG would be an ideal tool with which to examine the questions


of the current thesis, namely, do emotions modulate the processing of associated information and

if so, how is this process supported in the brain? However, there is considerable debate with

regards to whether MEG can be reliably used to localize deeper sources such as the

hippocampus, which is a critical structure in any examination of relational memory (Section 1.3).

For this reason, it was critical to first establish the reliability of MEG in characterizing neural

activity from deep sources such as the hippocampus before it can be used to assess how emotions

may modulate relational memory. With this in mind, the next section will outline all of the

specific objectives of the current thesis and provide an overview of the methods.

1.7 Objectives

The purpose of this thesis was to examine how the presence of emotion may affect relational

memory. Specifically: (1) how does the presence of emotional information influence the

amount of attention to different aspects of an event/scene and how is this related to subsequent

memory; (2) how does association with emotional versus neutral information change the manner

in which participants view certain stimuli and how is this related to subsequent memory; and (3)

how might association with emotional versus neutral information manifest during different

stages of retrieval?

A convergent methods approach was utilized such that attention was assessed via eye movement

monitoring and/or MEG, and memory was assessed both directly via verbal report and indirectly

via eye movement monitoring. As mentioned previously, eye movement monitoring can provide

a reliable measure of differences in eye movement scanning that occur during encoding of

information associated with negative versus neutral information; and it can also provide an

indirect measure of the different stages that may occur during retrieval (Section 1.5). The use of

MEG provided information regarding which neural regions were involved in emotion-modulated

processing of associated information and allowed us to address more specific questions regarding

how this process may occur. This is discussed in more detail below. However, in contrast to eye

movement monitoring, the use of MEG to study emotion-modulated relational memory may be

more controversial because there is some doubt as to whether MEG can localize neural activity

to deep sources such as the hippocampus, a critical structure implicated in relational memory

(Section 1.3). Thus, a corollary aim of the current thesis was to examine whether MEG can be


successfully used to characterize neural activity from the hippocampus before using MEG to

assess how emotions may influence relational processing.

Thesis Overview

The next sections provide a brief outline of the different chapters contained within this thesis and

the main question that each sought to address.

Do emotions modulate relational memory via differences in the amount of attention during

encoding (Chapter 2)?

It is commonly argued that emotions may modulate memory for associated information via

differences in the amount of attention directed to that item during the encoding stage (Section

1.4). However, this assumption has not been directly tested. The experiment in Chapter 2 used eye

movement monitoring and verbal report in order to address the following questions: (1) does the

presence of an emotional stimulus influence the amount of attention directed to associated

neutral information during the encoding phase; (2) do emotions modulate one’s ability to

remember the associated neutral information; and (3) if there are differences in the amount of

attention directed to information paired with emotional versus neutral stimuli, are these

differences related to subsequent memory performance?

Do emotions modulate relational memory via differences in the retrieval process (Chapter

3)?

As mentioned previously, emotions may modulate relational memory at the encoding and/or

retrieval stage. Thus, Chapter 3 utilized eye movement monitoring in order to assess whether

emotion influenced different stages of retrieval. Specifically, it is possible that emotion may

have differential effects on different stages of retrieval (Section 1.5.3). For example, it is

possible that the presence of emotionally arousing information may make associated information

easier to retrieve. Alternatively, it is possible that while early stages of memory retrieval occur

in an obligatory fashion and are uninfluenced by the effects of emotion, the subsequent and more

evaluative stages of retrieval may be modulated by emotion.


Can MEG be reliably used to characterize neural activity from the hippocampus (Chapter

4)?

Before moving on to address issues concerning which neural regions support emotion-modulated

relational memory, the feasibility of using MEG to study relational memory, or more

specifically, to localize hippocampal activity, was examined. To this end, the experiment in

Chapter 4 measured participants’ brain activity using MEG during a recognition memory

paradigm, which has been shown to elicit hippocampal activity from a variety of other

neuroimaging techniques such as fMRI and PET (for reviews see: Cabeza & Nyberg, 2000;

Henson, 2005; Lepage, Habib, & Tulving, 1998). Further, three different localization methods

were applied to the MEG data in order to provide convergent evidence and also a more

comprehensive description of hippocampal activity relating to spectral frequency and precise

onsets.

Do emotions modulate relational memory via the manner in which associated information

is perceived (Chapter 5)?

In order to examine whether emotions led to differences in processing associated neutral

information, we utilized both eye movement monitoring and MEG. The use of eye movement

monitoring and MEG allowed us to assess how emotions may modulate differences in the

manner of viewing and neural activity, respectively, to associated neutral information during the

encoding stage. Specifically, emotions may modulate the manner in which participants view

associated neutral information by changing the nature of the neutral stimulus itself such that it

takes on ‘emotional’ qualities and perception is changed. Alternatively, association with

emotional information may not change perception per se, but participants may process the neutral stimulus differently due to the associated emotional information. In other words, do emotions change

perceptual processing of associated information or is the emotional information activated

following perceptual processing? One way in which we sought to address this question was to

examine the precise temporal neural dynamics underlying emotion-modulated viewing of

associated neutral information by using MEG. If emotion changes the way in which associated

information is perceived, then one may expect to see differences when viewing information

associated with emotional versus neutral items within the first 200 ms after stimulus onset, a time

frame typically associated with perception of externally presented stimuli (e.g. Ryan et al., 2008;


Tsivilis, Otten, & Rugg, 2001). On the other hand, if emotion does not change perceptual

processing of associated information, then one would not expect to see differences in viewing

information associated with emotional versus neutral items within the first 200 ms after stimulus

onset.

Conclusion (Chapter 6)

The last chapter of the thesis provides a summary of the findings as a whole, discusses the results

in context of previous work and provides suggestions for future work. It is anticipated that work

from the current thesis will contribute to the growing literature on emotion-modulated memory.

By examining how emotion may modulate relational memory, it will lead to greater insight into

the different ways in which emotions may influence our ability to process and bind various bits

of information into a coherent whole and later retrieve them from memory. This may have

implications for understanding a variety of phenomena such as eyewitness testimony, emotional

autobiographical memory and how neutral experiences may be altered in clinical disorders such

as depression, anxiety and post-traumatic stress disorder.

Work from this thesis also makes a more general contribution to the field of cognitive

neuroscience by advancing the use of tools such as eye movement monitoring and MEG.

Specifically, the use of eye movement monitoring can reveal aspects of online processing that

cannot be gleaned from verbal report alone, thus shedding light on aspects of memory that

cannot be captured by using more traditional measurements such as recall and recognition.

Further, MEG can reveal precise temporal dynamics underlying brain activity that cannot be

answered by more traditional neuroimaging techniques such as fMRI and PET, allowing one to

address questions regarding the nature of cognitive processes such as memory (e.g. Is memory

retrieval obligatory?) and the interaction between ‘different’ processes such as emotion, attention,

perception and memory (e.g. Do emotions change perception?).


Chapter 2 The Role of Overt Attention in Emotion-Modulated Memory

Riggs, L., McQuiggan, DA., Farb, NA., Anderson, AK., & Ryan, JD. (2011). The role of overt

attention in emotion-modulated memory. Emotion, 11(4), 776-785. doi: 10.1037/a0022591

This article may not exactly replicate the final version published in the APA journal. It is not the

copy of record.


2 The role of overt attention in emotion-modulated memory

2.1 Abstract

The presence of emotional stimuli results in a central/peripheral tradeoff effect in memory:

memory for central details is enhanced at the cost of memory for peripheral items. It has been assumed that

emotion-modulated differences in memory are the result of differences in attention, but this has

not been tested directly. The present experiment used eye movement monitoring as an index of

overt attention allocation and mediation analysis to determine whether differences in attention

were related to subsequent memory. Participants viewed negative and neutral scenes surrounded

by three neutral objects and were then given a recognition memory test. The results revealed

evidence in support of a central/peripheral tradeoff in both attention and memory. However,

contrary to previous assumptions, whereas attention partially mediated emotion-enhanced

memory for central pictures, it did not explain the entire relationship. Further, although centrally

presented emotional stimuli led to a decreased number of eye fixations toward the periphery, these

differences in viewing did not contribute to emotion-impaired memory for specific details

pertaining to the periphery. These findings suggest that the differential influence of negative

emotion on central versus peripheral memory may result from other cognitive influences in

addition to overt visual attention, or from post-encoding processes.

2.2 Introduction

It is well-noted that the presence of an emotional element may result in a central/peripheral tradeoff

effect in memory: memory for central, emotional aspects of an event is enhanced, and memory

for peripheral, nonemotional aspects of an event is impaired (e.g., Adolphs, Tranel, & Buchanan,

2005; J. M. Brown, 2003; Christianson, 1992; E. F. Loftus, 1979; E. F. Loftus et al., 1987;

Reisberg & Heuer, 2004). It is argued that the process underlying this tradeoff effect in memory

is attentional narrowing (e.g., Kensinger, Gutchess, & Schacter, 2007; Kensinger et al., 2005;

Wessel & Merckelbach, 1997) such that when an emotionally arousing stimulus, specifically a

negative stimulus, is present (Derryberry & Tucker, 1994; see also Gable & Harmon-Jones,

2008; Harmon-Jones & Gable, 2009), attention will “narrow” like a spotlight and be focused

primarily on it (Easterbrook, 1959; Posner, 1980), resulting in better encoding and subsequent


memory (e.g., Craik et al., 1996) for the central emotional object and impaired encoding and

subsequent memory for the neutral objects in the periphery. In support of this, there is an

abundance of literature showing that when emotionally arousing and neutral stimuli are

simultaneously presented, arousing stimuli preferentially capture and sustain attention (e.g.,

Anderson, 2005; Anderson & Phelps, 2001; Armony & Dolan, 2002; Bradley, 1994; Calvo &

Lang, 2005; E. F. Loftus et al., 1987; Nummenmaa et al., 2006; Ohman, Flykt, et al., 2001;

Ohman & Mineka, 2001; Stormark, Nordby, & Hugdahl, 1995). However, whereas there is

evidence showing that highly arousing and negatively valenced emotions lead to attention

narrowing and that a central/peripheral tradeoff effect occurs in memory, the co-occurrence of

both effects does not necessarily imply that the former mediates the latter.

Only two studies have examined the relationship between emotion-modulated attention and the

central/peripheral tradeoff effect in memory within the same experiment (Christianson, Loftus,

Hoffman, & Loftus, 1991; Wessel, van der Kooy, & Merckelbach, 2000). In both studies,

researchers used eye movement behavior as a measure of overt attention during encoding and

found evidence in support of attention narrowing: participants spent longer looking

at the central details of the critical slide if it was negatively arousing than if it was neutral and

less time looking at the peripheral details of the slide when it appeared in a negative context than

when it appeared in a neutral context. In a subsequent test phase, both studies reported higher

recall and recognition accuracy for central negative versus neutral details, but contrary to the

notion that more attention results in better memory, Christianson and colleagues (1991) found

that those who directed more viewing to the central aspects of the scene did not have higher

recognition memory scores than those who directed less viewing. Wessel and colleagues (2000)

did not directly examine the relationship between eye movement measures recorded during the

encoding phase and recall memory at the test phase. However, since neither study found a

difference in memory for peripheral details, it is not known whether attention narrowing results

in a central/peripheral tradeoff in memory per se. As such, it is possible that the relationship

between attention narrowing and the central/peripheral tradeoff in memory is not a unitary

phenomenon; that is, differences in attention may mediate differences in memory for peripheral

details but not central details or vice versa.

To address the extent to which the central/peripheral tradeoff effect in memory is caused by

attention narrowing, we performed a mediation analysis (Baron & Kenny, 1986; MacKinnon,


Fairchild, & Fritz, 2007) to examine the relationship between overt attention, as measured by eye

movement monitoring (EMM), and subsequent memory performance in a paradigm that elicited

both emotion-enhanced memory for central negative pictures and emotion-impaired memory for

peripheral items. A mediation analysis allowed us to examine the observed relationship between

an independent (emotion) and dependent (measure of memory) variable via the inclusion of a

third or mediator variable (measure of attention). The use of EMM can reveal differences in

overt attention allocation and scanning patterns during encoding which reveals not only what

was attended, but also how extensively it was attended.

In the present experiment, participants’ eye movements were monitored while they studied a

central picture that was either neutral or negatively arousing, surrounded by three neutral

everyday objects in the periphery. It is important to note that the central picture and the

peripheral objects did not overlap in space or meaning (see Reisberg & Heuer, 2004). After a

brief delay, memory for central pictures and peripheral objects was assessed separately in the test

phase in which previously viewed and novel central pictures and previously viewed, manipulated

and novel peripheral objects were presented. To the extent that the emotion-modulated

central/peripheral tradeoff effect in memory is related to differences in overt attention allocation,

measures of attention should mediate the relationship between emotion and memory. On the

other hand, if differences in attention do not mediate the relationship between emotion and

memory, then this would suggest that emotion affects memory via mechanisms other than

attention. This may include direct modulation as well as indirect modulation via mechanisms

such as differences in postencoding influences on memory formation.

2.3 Method

2.3.1 Participants

Twenty-four undergraduate students (mean age = 19.17 years; 3 males; 1 left-handed) from the

University of Toronto participated for course credit. All participants had normal neurological

histories and had normal or corrected-to-normal vision.

2.3.2 Stimuli and Design

The materials used to create the experimental displays consisted of 48 pictures taken from the

International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 1999), of which 24 had a negative valence and 24 a neutral valence, and 192 neutral objects (Hemera Photo

Objects). Each display consisted of one picture in the center and three objects randomly placed

in the periphery. The everyday objects were judged by the authors (LR and DM) and two

independent raters to be neutral and nonarousing. All pictures chosen from the IAPS set included

people. The negative pictures had a more negative valence (t = -17.03, p < .001) and were more

arousing (t = 14.02, p < .0001) than the neutral pictures. The complexity of the pictures was

assessed in terms of the number of bytes of the image files in JPEG format, that is, more

complex images should have a larger file size (Buodo, Sarlo, & Palomba, 2002; Nummenmaa et

al., 2006). We found that there were no differences between the negative and neutral set of

pictures used (t(46) = .63, p > .1). Each display was divided equally into a 3 X 3 grid (not

presented to the participants), and the central picture was always placed in the center cell with

the three objects randomly placed in the periphery. The three objects in the periphery did not

overlap in physical space or semantic meaning with the central element but were always distinct

and not relevant to the meaning of the central scene (Burke, Heuer, & Reisberg, 1992; Reisberg

& Heuer, 2004). A manipulated version was constructed for each display in which one of the

three peripheral objects was replaced with a novel object. In the test blocks, the central pictures

and peripheral objects were presented separately. Central pictures were either previously

presented (repeated) or entirely new (novel). Peripheral objects contained the same three objects

presented during the study phase (repeated), two previously studied objects, and one novel object

(manipulated) or three novel objects that were not presented during the study phase (novel).

Peripheral objects in the repeated and manipulated displays were presented in the same spatial

location as seen during the study phase. For all displays of peripheral objects in the test blocks, a

black box was placed in the location previously occupied by the central picture so that judgments

of repetition/manipulation/novelty could only be based on the peripheral objects rather than the

central picture. Counterbalancing of the display occurred such that each version of the display

appeared equally often in each experimental condition (repeated/novel for central pictures,

repeated/manipulated/novel for peripheral objects) and paired with each emotion (negative,

neutral) across participants.
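As an aside, the file-size complexity check described above is straightforward to reproduce; a minimal sketch, assuming a hypothetical directory layout for the two picture sets, is given below.

```python
# Minimal sketch of the JPEG file-size complexity check; directory names are
# hypothetical. Larger byte counts are taken as a proxy for greater complexity.
import os
from glob import glob
from scipy.stats import ttest_ind

neg_sizes = [os.path.getsize(f) for f in glob("stimuli/negative/*.jpg")]
neu_sizes = [os.path.getsize(f) for f in glob("stimuli/neutral/*.jpg")]

# Independent-samples t-test on byte counts; with 24 + 24 images, df = 46
t, p = ttest_ind(neg_sizes, neu_sizes)
print(f"t({len(neg_sizes) + len(neu_sizes) - 2}) = {t:.2f}, p = {p:.3f}")
```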

2.3.3 Procedure

Eye movements were measured throughout the study and test phases with an SR Research Ltd. EyeLink 1000 desktop-mounted monocular eye-tracking system and sampled at a rate of 1000 Hz with a

spatial resolution of 0.1°. A chin rest was used to limit head movements. A 9-point calibration

was performed at the start of the experiment followed by a 9-point calibration accuracy test.

Calibration was repeated if the error at any point was more than 1°. Participants studied 32

randomly presented displays (16 negative, 16 neutral) once in each of two study blocks.1 The

displays were 1024 X 768 pixels in size and subtended approximately 33.4 degrees of visual

angle when seated 25” from the monitor. Consistent with previous procedures in which a

central/peripheral tradeoff was observed (e.g., Kensinger et al., 2007; Kensinger et al., 2005),

each display was presented for 2 s followed by a 3-s interstimulus interval (Figure 2.1).

Participants were instructed to freely view the entire display, and they were not told that there

would be a subsequent memory test. After a 10-min delay (approximately) in which participants

completed a background information form, participants’ memory for the peripheral objects and

central pictures was assessed separately across four test blocks. The first two test blocks

involved passively viewing 16 previously studied, 16 manipulated and 16 novel peripheral object

displays, and 32 previously studied and 16 novel pictures. Eye movement data from this test

phase is not presented in the present paper but is discussed elsewhere (Riggs et al., 2010). In the

final two test blocks, the same materials were presented again following procedures as in our

previous work (e.g., Ryan et al., 2000). Participants were informed that they would be seeing the

last two blocks of pictures again but now they had to indicate whether a set of peripheral objects

was exactly the same as during the study sessions (“repeated”), had changed in some way

(“manipulated”), or had not been viewed during the study session (“novel”). In the last test block, participants had to indicate whether a central picture was the same (“repeated”) or different (“novel”) from what they had seen during the study blocks.

Footnote 1: In designing the present experiment, we had two aims: to explore the relationship between emotion-modulated attention and memory (current paper) and to examine whether a tradeoff in memory performance can be observed and the retrieval process outlined using eye movement monitoring (Riggs, McQuiggan, Anderson, & Ryan, 2010). Previous eye movement studies of memory have reported significant differences in viewing novel versus repeated stimuli only after multiple exposures (e.g., Althoff et al., 1998; Ryan, Hannula, et al., 2007). Therefore, since we planned to measure memory using both eye movement monitoring and verbal reports, we presented all of the stimuli twice across two study blocks and assessed memory first indirectly by eye movement monitoring and then directly via verbal reports. Indirect assessment of memory via eye movement monitoring always occurred before the direct measure of memory via verbal reports because previous eye movement studies of memory have reported memory effects in eye movement behavior during free viewing of the stimuli when participants were not explicitly instructed to perform a memory task (e.g., Ryan et al., 2000).

Figure 2.1 Experimental procedure.

Participants viewed negative and neutral central pictures paired with 3 everyday objects; each display was randomly presented once in each of two study blocks (A; 2000 ms per display, 3000-ms ISI). During the test for peripheral objects, the central picture was blacked out so that only the peripheral objects were visible (B; 2500 ms per display). Participants freely viewed repeated (3 previously presented objects), manipulated (2 previously presented and 1 novel object) and novel (3 never previously presented objects) peripheral objects. In the test for memory of central pictures, only the central picture was visible (C; 2500 ms per display). This block consisted of repeated and novel pictures.

2.3.4 Analysis

To examine the role of attention on subsequent memory, measures derived from EMM were used

to quantify the amount of overt attention allocated to the central and the peripheral objects in

each display. Previous research shows that during the encoding phase, it is the number of eye

fixations, rather than duration of viewing, that predicts subsequent memory performance (e.g., G.

R. Loftus, 1972). In the present experiment, the number of fixations was used to characterize

eye movement behavior and provide an index of the amount of viewing/overt attention directed


within a particular region during the study phase. A fixation was defined as the absence of any saccade (i.e., the velocity of two successive eye movement samples exceeding 22°/s over a distance of 0.1°) or blink (i.e., the pupil missing for three or more samples) activity. Each

fixation is separated by a saccade. Analysis of eye movements was performed with respect to the

experimenter-drawn interest areas corresponding to the location of central picture and peripheral

objects. During the test phase, evidence of memory was obtained via verbal reports of

recognition. Recognition accuracy was measured as the proportion of correct responses to novel

and repeated central pictures and novel, repeated, and manipulated peripheral objects. Reported

hits for central pictures were corrected for false alarms. Reported hits to repeated and

manipulated peripheral objects are presented uncorrected for false alarm rates, because we were

interested in how processing and memory of peripheral objects are modulated by emotion, and

novel peripheral objects were never paired with either emotional or neutral pictures.
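For illustration, the velocity and blink criteria above can be expressed as a simple sample-by-sample parser, sketched below. This is a simplification written for clarity (the 50-sample minimum duration is an assumption), not the EyeLink system's actual parsing algorithm.

```python
# Simplified sketch of the fixation definition above: contiguous runs of samples
# with eye velocity below the saccade threshold and the pupil present are grouped
# into fixations. Gaze coordinates are assumed to be in degrees of visual angle.
import numpy as np

SACCADE_VELOCITY = 22.0   # deg/s, the saccade threshold from the definition above
SAMPLE_RATE = 1000        # Hz, the EyeLink 1000 sampling rate

def parse_fixations(x, y, pupil, min_samples=50):
    """Return (start, end) sample indices of fixations lasting >= min_samples."""
    dt = 1.0 / SAMPLE_RATE
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt          # deg/s between samples
    steady = (velocity < SACCADE_VELOCITY) & (pupil[1:] > 0)  # no saccade, no blink
    fixations, start = [], None
    for i, ok in enumerate(steady):
        if ok and start is None:
            start = i                                         # fixation begins
        elif not ok and start is not None:
            if i - start >= min_samples:
                fixations.append((start, i))                  # fixation ends
            start = None
    if start is not None and len(steady) - start >= min_samples:
        fixations.append((start, len(steady)))
    return fixations
```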

To address the question of whether the central/peripheral tradeoff effect is related to the amount

of overt attention at study, we performed a mediation analysis (Baron & Kenny, 1986) using a

bias-corrected bootstrap method (standard Monte-Carlo algorithm) for assessment of indirect

effects built into AMOS (MacKinnon, Lockwood, & Williams, 2004; Preacher & Hayes, 2004).

Specifically, viewing as indexed by the number of fixations during study blocks 1 and 2 was

included in the mediation analysis as a measure of overt attention. Viewing during both study

blocks was included in the analysis because subsequent memory performance cannot be

attributed to a single study block only. Emotion and memory for each trial were entered as

binary variables with 1 representing negative pictures and 0 representing neutral pictures and

with 1 representing a correct response and 0 representing an incorrect response, respectively.

The degrees of freedom for the regression analysis were 24. This allowed us to determine

whether the relationship between emotion and memory was (1) indirectly mediated by attention

either fully or partially or (2) direct and not mediated by attention. By “direct” we mean that the

path between emotion and memory remained statistically significant even after controlling for

attention. A significant direct effect may be the result of the influence of emotion on memory

via mechanisms other than overt attention, including direct modulation as well as other

unquantified third variable factors.
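As a sketch of this logic, the code below estimates path a (emotion to attention), path b (attention to memory, controlling for emotion) and the direct path c′, then bootstraps the indirect effect a*b. It uses ordinary least squares for simplicity and approximates the approach described above rather than reproducing the AMOS model; the variable names are placeholders for per-trial arrays.

```python
# Sketch of the mediation analysis (Baron & Kenny paths a, b, c') with a
# percentile bootstrap of the indirect effect a*b. OLS is used here for
# simplicity; the thesis's actual analysis was run in AMOS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def paths(emotion, attention, memory):
    """Return (indirect effect a*b, direct effect c') from two OLS regressions."""
    a = sm.OLS(attention, sm.add_constant(emotion)).fit().params[1]
    model = sm.OLS(memory, sm.add_constant(np.column_stack([attention, emotion]))).fit()
    b, c_prime = model.params[1], model.params[2]
    return a * b, c_prime

def bootstrap_indirect(emotion, attention, memory, n_boot=5000):
    """95% percentile bootstrap CI for the indirect effect a*b; the effect is
    deemed significant if the interval excludes zero."""
    n = len(emotion)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample trials with replacement
        estimates.append(paths(emotion[idx], attention[idx], memory[idx])[0])
    return np.percentile(estimates, [2.5, 97.5])
```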


2.4 Results

2.4.1 Study Blocks

The extent to which participants directed more viewing to the central picture (and, as a

consequence, less viewing to the peripheral objects) when it was negative versus when it was

neutral was considered to provide evidence for emotion-modulated attention narrowing.

Analyses of variance (ANOVA) were conducted on the number of fixations2 directed to

particular regions of interest using emotion (negative, neutral), region type (central, peripheral),

and block (block 1, block 2) as within-subject factors. All possible interactions were evaluated.

Differences in viewing were evident in significant main effects for region type such that

participants directed more fixations to the central pictures versus peripheral objects (F(1, 23) =

29.28, p < .0001, d = .56). The main effect of emotion was also significant; participants sampled

the entire display with more fixations when the central picture was negative compared with when

it was neutral (F(1, 23) = 6.64, p < .05, d = .22). A significant main effect of block was also observed (F(1, 23) = 10.10, p < .01, d = .31): the number of fixations decreased

across study blocks. A significant three-way interaction was found between emotion, region

type, and block (F(1, 23) = 31.99, p < .0001, d = .58) (Figure 2.2), and follow-up t tests were

used to explore this interaction.

Footnote 2: The same pattern of results was obtained when we examined the eye movement measures of duration of viewing and proportion of fixations, that is, the number of fixations directed to a particular region of interest relative to the total number of fixations directed to the entire visual display.
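A minimal sketch of this 2 (emotion) x 2 (region type) x 2 (block) within-subject ANOVA, assuming the fixation counts have already been aggregated into a long-format table (the file and column names are hypothetical):

```python
# Sketch of the repeated-measures ANOVA on fixation counts using statsmodels.
# Expected long format: one row per subject x emotion x region x block cell mean.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("fixation_counts_long.csv")   # hypothetical aggregated data

result = AnovaRM(
    df,
    depvar="n_fixations",                      # mean number of fixations per cell
    subject="subject",
    within=["emotion", "region", "block"],     # the three within-subject factors
).fit()
print(result)   # F, df and p for each main effect and interaction
```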


Figure 2.2. Viewing to central and peripheral elements during the study phase.

[Line graph: number of fixations (y-axis, 1.5 to 5.5) across study blocks 1 and 2 (x-axis), plotted separately for the Central - Negative, Central - Neutral, Peripheral - Negative and Peripheral - Neutral conditions.]

Participants initially directed more fixations to central scenes when they were negative compared to when

they were neutral and fewer fixations to peripheral objects when they were paired with negative versus

neutral central pictures. In the second study block, viewing to negative central pictures decreased, which

likely resulted in a corresponding increase in viewing the associated peripheral objects.

Consistent with the attention-narrowing hypothesis, in the first study block, participants directed

significantly more fixations to negative relative to neutral central pictures (t(23) = 5.54, p <

.0001), and significantly fewer fixations to peripheral objects that were paired with negative than

neutral pictures (t(23) = -7.61, p < .0001). During the second study block, there were no

significant differences in the number of fixations directed to negative versus neutral central

pictures (t(23) = .89, p > .1). This change of viewing across study blocks was the result of

decreased fixations to negative central pictures (t(23) = 4.03, p < .01). There were no significant

changes in the number of fixations to neutral central pictures across study blocks (t(23) = .32, p >

.1). For peripheral objects, participants continued to direct more fixations to peripheral objects

that were paired with neutral versus negative central pictures (t(23) = -2.28, p < .05).


In summary, the presence of an emotional central stimulus led to an initial tradeoff in attention,

such that more overt attention was allocated to a negative versus a neutral central picture and less

attention was allocated to peripheral objects when they were paired with a negative versus a

neutral central picture. Further, although this attention-narrowing effect was significantly

attenuated in the second study block, there was still evidence of emotion-modulated tradeoff in

attention allocation for peripheral objects. Below, we examine whether emotion also led to a

tradeoff in memory as measured by verbal report.

2.4.2 Test Blocks

Verbal recognition reports. Consistent with the notion that emotion enhances memory for

central details, participants were more accurate (hits minus false alarms) in identifying repeated

central pictures when they were negative compared to when they were neutral (t(23) = 2.86, p <

.01). Accuracy for repeated peripheral objects did not differ by emotionality, but participants

were less accurate in identifying manipulated peripheral objects if they were previously paired

with a negative central picture versus a neutral central picture (t(23) = -2.19, p < .05). All

relevant means and standard errors are presented in Table 2.1.

Table 2.1. Mean responses and standard errors for peripheral objects and central pictures.

Peripheral Objects

                  Neutral                                  Negative
Response Type     Novel      Manipulated  Repeated        Novel  Manipulated  Repeated
“Novel”           .43 (.05)  .17 (.03)    .15 (.03)       N/A    .23 (.04)    .19 (.04)
“Manipulated”     .26 (.03)  .28 (.03)    .22 (.03)       N/A    .20 (.03)    .20 (.03)
“Repeated”        .31 (.04)  .55 (.04)    .63 (.04)       N/A    .57 (.05)    .62 (.05)

Central Pictures

                  Neutral                                  Negative
Response Type     Novel      Repeated   Repeated (Corrected)   Novel      Repeated   Repeated (Corrected)
“Novel”           .64 (.07)  .26 (.04)  N/A                    .54 (.08)  .08 (.02)  N/A
“Repeated”        .36 (.07)  .74 (.04)  .56 (.03)              .46 (.08)  .92 (.02)  .69 (.04)

Note. Accuracy for central pictures (corrected) was calculated as hits minus false alarms.
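For concreteness, the sketch below applies the hits-minus-false-alarms correction per participant and compares the two emotion conditions with a paired t-test; the proportions are simulated placeholders used only to illustrate the computation, not the study's data.

```python
# Sketch of the corrected recognition-accuracy computation for central pictures:
# accuracy = hit rate minus false-alarm rate, compared across emotion conditions
# with a paired t-test. All proportions below are simulated placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 24                               # participants

hits_neg = rng.uniform(0.8, 1.0, n)  # "repeated" responses to repeated negative pictures
hits_neu = rng.uniform(0.6, 0.9, n)  # "repeated" responses to repeated neutral pictures
fa_neg = rng.uniform(0.3, 0.6, n)    # "repeated" responses to novel negative pictures
fa_neu = rng.uniform(0.2, 0.5, n)    # "repeated" responses to novel neutral pictures

corrected_neg = hits_neg - fa_neg    # hits minus false alarms, negative condition
corrected_neu = hits_neu - fa_neu
t, p = ttest_rel(corrected_neg, corrected_neu)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```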

2.4.3 Mediation Analysis

In the current study, emotion-modulated tradeoffs in overt attention to central and peripheral

elements as measured by EMM and tradeoffs in recognition memory for central pictures and

manipulated peripheral objects were observed. However, it is not known whether the tradeoffs


in memory performance were a result of the tradeoffs in the allocation of attention. The number

of fixations to central and peripheral elements was used as an index of overt attention in the

mediation analysis. This allowed us to examine whether the amount of fixations to central

pictures was predictive of subsequent memory for central pictures and whether the amount of

fixations to peripheral objects was predictive of subsequent memory for peripheral objects.

In examining the total relationship between emotion and accuracy, a regression analysis revealed

that negative emotion contributed significantly to higher accuracy for central pictures (β = .24, p

< .01) and lower accuracy for manipulated peripheral objects (β = -.09, p < .05). Consistent with

the behavioral results, emotion did not contribute significantly to accuracy for repeated

peripheral objects (β = .004, p > .1); therefore, we did not examine this relationship further. In

examining the relationship between emotion and attention, it was found that emotion was

associated with enhanced sampling, that is, more fixations on central pictures (β = .14, p < .05)

and decreased sampling of manipulated peripheral objects (β = -.09, p < .05).

Having established a significant relationship between emotion and memory performance, it was

critical to ascertain whether this relationship was fully, partially, or not at all mediated by

attention. To do this, we conducted mediation analyses separately for central pictures and

manipulated peripheral objects. For central pictures, the indirect path between emotion and

memory, with attention as a mediator, was significant (path a * path b: β = .02, p < .05) (Figure

2.3), suggesting that attention may mediate the relationship between emotion and memory.

However, it was also found that even when attention was fixed, the direct path (path c) between

emotion and accuracy remained significant (β = .23, p < .05). In other words, attention only

partially mediated emotion-enhanced recognition memory for central pictures. For manipulated

peripheral objects, the indirect path was not statistically significant (β = -.0004, p > .1), and the

direct path between emotion and accuracy for manipulated peripheral objects remained

significant even when the variable of attention was fixed (β = -.09, p < .05).3 Thus, the results suggest that although emotion led to decreased viewing of peripheral objects, these changes did not play a significant role in reducing one’s ability to identify changes in the periphery.

Footnote 3: A potential concern with using the raw number of fixations as an overt measure of attention is that there may be significant between-subjects variance in the total number of fixations directed. One way to control for these individual differences is to use the measure of proportion of fixations. When we performed the mediation analysis using the proportion of fixations as the measure of overt attention, the same pattern emerged as was found using the number of fixations.

Figure 2.3 A mediation model with emotion, attention (number of fixations) and memory.

The left-hand panel (A) shows the relationship between the three variables for central pictures (path A: β = .14, p < .05; path B: β = .11, p < .05; path C: β = .23, p < .05) and the right panel (B) shows the relationship for manipulated peripheral objects (path A: β = -.09, p < .05; path B: β = .004, p > .1; path C: β = -.09, p < .05). Solid lines represent significant relationships while dashed lines represent nonsignificant relationships. Attention is a significant mediating factor between emotion and memory for central pictures only. Emotion and/or other unquantified variables modulated memory for both central pictures and manipulated peripheral objects.

In summary, emotion led to tradeoffs in attention. Participants directed more overt attention to

negative versus neutral central pictures and less attention to peripheral objects paired with

negative versus neutral central pictures. Emotion also led to a central/peripheral tradeoff effect

in memory. Recognition was more accurate for negative versus neutral central pictures and less

accurate for manipulated peripheral objects previously paired with negative versus neutral

central pictures. However, the mediation analysis revealed that differences in emotion-

modulated memory, especially memories of the details in the periphery, cannot fully be

explained by differences in attention allocation during the encoding phase. Rather, the current

analysis suggests that factors other than overt attention may mediate the relationship between

emotion and the central/peripheral tradeoff effect.

2.5 Discussion

The presence of emotional stimuli has typically resulted in a central/peripheral tradeoff effect in

memory. It has been suggested that these memory differences are the result of attention


narrowing during encoding (e.g., Kensinger et al., 2007; Kensinger et al., 2005; Wessel &

Merckelbach, 1997). However, the relationship between attention narrowing and the

central/peripheral tradeoff effect in memory has not been directly examined in a study where

emotion was found to modulate memory for both central and peripheral items. In the current

study, it was found that consistent with previous research, emotion enhanced attention toward,

and memory for, centrally placed pictures. Specifically, participants directed more attention to,

and were more accurate in identifying, repeated negative versus neutral central pictures.

Emotion also led to decreased attention to objects in the periphery and less accurate memory for

identifying manipulations in the periphery that were both spatially and conceptually distinct from

the central picture. The present work addressed whether attention narrowing was related to the

central/peripheral tradeoff in memory through mediation analysis. The results here revealed that

differences in overt attention during the study phase cannot fully account for subsequent memory

performance. Specifically, although attention mediated some of emotion’s effects on memory, it

did not mediate the entire relationship. This suggests that cognitive mechanisms other than

attention are involved in modulating the relationship between emotion and the central/peripheral

tradeoff effect in memory. In the next sections, we discuss our results in light of previous

findings regarding the central/peripheral tradeoff in attention and memory and how the current

work may inform theories regarding the influence of emotion on attention and memory.

2.5.1 Attention Narrowing

The preferential allocation of attention toward emotional stimuli is typically regarded as an

adaptive function allowing one to prioritize the detection and processing of potentially

threatening and/or important information (Whalen et al., 1998). Consistent with this notion,

here, participants directed more viewing to the central picture and less viewing to the

surrounding peripheral objects when the central picture was negative compared to when it was

neutral. This attention-narrowing effect occurred despite the fact that participants were

instructed to freely view the presented displays. This supports the hypothesis that emotional

pictures engage more attention (e.g., Calvo & Lang, 2005; Nummenmaa et al., 2006) and lead

to attention narrowing (Easterbrook, 1959), in particular for negatively valenced events

(Schmitz, De Rosa, & Anderson, 2009). These findings are also consistent with previous studies

showing that when emotional and neutral stimuli are presented simultaneously, attention is

biased toward the emotional stimuli (e.g., Calvo & Lang, 2004; Christianson et al., 1991;


Nummenmaa et al., 2006; Wessel et al., 2000). In the present study, this attention-narrowing

effect was present during the first study block but was mitigated in the second study block. This

suggests that whereas negatively arousing pictures may attract increased amounts of overt

attention initially, this response may habituate upon subsequent presentations, leaving

participants more time and resources for the processing of peripheral objects (Harris & Pashler,

2004; Nummenmaa et al., 2006). Further, it has also been shown that participants direct less

viewing to repeated versus novel stimuli (e.g., Althoff & Cohen, 1999; Althoff et al., 1998; Ryan

et al., 2000; Ryan, Hannula, et al., 2007). Thus, the decrease in viewing to negative, but not

neutral, central scenes across the study blocks may reflect the influence of more detailed and/or

stable memory representations on viewing behavior. In other words, the results may suggest

more efficient memory encoding of negative pictures during initial presentation. Altogether, a

significant emotion-modulated attention-narrowing effect was observed that dissipated after the

first presentation of the displays, suggesting rapid formation of stable memory representations

and thus rapid habituation of emotional capture of overt attention.

2.5.2 Central/Peripheral Tradeoff in Memory and Attention

Consistent with the eye movement data from the study phase, recognition accuracy from the test

phase showed that memory for central negative pictures was more accurate than memory for

central neutral pictures, and memory for manipulated peripheral objects was less accurate for

those that were previously paired with negative versus neutral pictures (e.g., J. M. Brown, 2003;

Kensinger et al., 2005; Pickel, French, & Betts, 2003; Wessel & Merckelbach, 1997). However,

compared to previous studies (e.g., Kensinger et al., 2007; Kensinger et al., 2005), we observed

that accuracy for recognizing novel central pictures was lower than expected. This is

likely a result of the fact that when participants had to make an explicit judgment regarding

whether the central pictures were novel or previously presented, all of the test stimuli had already

been presented during the eye movement test phase. Therefore, participants had to make relative

novelty judgments. Further, contrary to previous studies (e.g., Kensinger et al., 2007; Kensinger

et al., 2005), no emotion-modulated effects were observed for recognition of repeated peripheral

objects. One important methodological difference is that whereas previous studies have

presented the stimuli once during the encoding phase (e.g., Christianson, 1992; Kensinger et al.,

2007; Kensinger et al., 2005; E. F. Loftus & Christianson, 1989), we presented the stimuli twice

over two study blocks. Thus, it is possible that by repeating the stimuli, the central/peripheral


tradeoff effect in memory was not as robust as it would have been had the stimuli only been

presented once. Therefore, an emotion-modulated effect in the repeated peripheral objects did

not manifest. There is some indication in the literature that the central/peripheral tradeoff effect

in memory is sensitive to methodological parameters such as the duration of exposure to the

stimuli, specificity of the information interrogated during the test phase, and the length of time

between encoding and retrieval (e.g., Burke et al., 1992; Christianson, 1992; Steblay, 1992).

However, despite having presented the stimuli twice during the study blocks, we still observed

an influence of emotion on the memory for the manipulated peripheral objects. The correct

identification of manipulated peripheral objects may require a more detailed memory

representation than the correct identification of repeated objects. This increase in difficulty was

reflected in the lower accuracy of the verbal report data. Thus, the current results suggest that

emotion may predominantly impact memory for the specific details in the periphery (Adolphs et

al., 2001; Adolphs, Tranel, et al., 2005; Denburg, Buchanan, Tranel, & Adolphs, 2003).

Contrary to the attention-narrowing hypothesis (e.g., Easterbrook, 1959; Kensinger et al., 2007;

Kensinger et al., 2005; Wessel & Merckelbach, 1997), the amount of overt attention (i.e., eye

movements) directed to central pictures versus peripheral objects did not fully account for the

subsequent central/peripheral tradeoff seen in memory. Specifically, through mediation analysis,

it was found that even when differences in overt attention to central pictures were fixed, the

relationship between emotion and recognition memory remained significant. Further, even

though the presence of negative central pictures led to decreased attention directed to peripheral

objects, this change in attention allocation was not significantly related to the memory

impairment observed. One possibility is that because very little attention was directed toward

the peripheral objects during the encoding phase, emotion-modulated differences in attention

were not large enough to affect subsequent memory performance. Another possibility is that

because participants viewed all stimuli twice across two study blocks, this may have attenuated

the effects of emotion-modulated attention on subsequent memory. However, despite viewing

the stimuli twice, participants continued to direct fewer fixations to objects paired with negative

central pictures than those paired with neutral pictures.
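For illustration, the mediation logic described above can be sketched as follows. This is a minimal, assumption-laden sketch (simulated stand-in data, simple regression-based Baron and Kenny style steps, illustrative variable names), not the analysis code used in this work:

```python
# Hedged sketch of a simple mediation test using simulated stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24 * 8                                     # illustrative trial count
emotion = rng.integers(0, 2, n).astype(float)  # 0 = neutral, 1 = negative
fixations = 5 + 1.0 * emotion + rng.normal(0, 1, n)             # path a
memory = 0.5 * emotion + 0.1 * fixations + rng.normal(0, 1, n)  # c' and b

c_total = stats.linregress(emotion, memory).slope    # total effect (path c)
a_path = stats.linregress(emotion, fixations).slope  # emotion -> attention

# Regress memory on emotion AND fixations to estimate the direct effect c'
X = np.column_stack([np.ones(n), emotion, fixations])
coef, *_ = np.linalg.lstsq(X, memory, rcond=None)
c_direct, b_path = coef[1], coef[2]

print(f"c = {c_total:.2f}, c' = {c_direct:.2f}, a*b = {a_path * b_path:.2f}")
# A c' that remains reliably non-zero with fixations held fixed indicates
# that overt attention does not fully mediate the emotion-memory link,
# which is the pattern of results reported above.
```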

Taken together, the current results suggest that the presence of emotion enhanced memory for

central emotional information via overt attention and additional mechanisms. Further, it was

also found that emotion impaired memory for neutral information in the periphery via


mechanisms other than overt attention. Given that the attention-narrowing account does not fully

explain the central/peripheral tradeoff observed, below, we consider some alternate mechanisms.

2.5.3 Mechanisms Underlying Emotion-Enhanced Memory

One factor that has often been invoked to explain emotion-modulated memory is the factor of

distinctiveness. It has been shown that when an item is relatively distinct from its surroundings

(e.g., unique features, location, color, etc.), memory for that item is enhanced, likely at the

expense of memory for other items (Schmidt, 1991; Talmi, Schimmack, Paterson, &

Moscovitch, 2007). However, previous studies that controlled for distinctiveness still found a

significant effect of emotion above and beyond distinctiveness (Anderson, 2005; Anderson et al.,

2006). In other studies, it has been shown that memory for central and peripheral details of

unusual pictures did not differ and resembled that of neutral pictures (Christianson et al., 1991;

Wessel et al., 2000). Here, distinctiveness is unlikely to account for the emotion-modulated

differences in recognition of peripheral objects because those were counterbalanced across

emotional conditions. However, because the central pictures were not counterbalanced across

emotional conditions, it is possible that differences in the distinctiveness of negative versus

neutral pictures contributed to not only memory for central pictures but also the peripheral

objects with which they were presented.

In addition to distinctiveness, another mechanism that may underlie the emotion-modulated

central/peripheral tradeoff effect is the amount of covert attention allocated, which can be

decoupled from overt attention as measured by EMM (e.g., Posner, 1980; Rowe et al., 2007).

However, although eye fixations and attention can be dissociated under explicit instructions, they

are closely related in real-world situations, because a covert shift of visual attention is reliably

and quickly followed by an overt gaze shift to the attended spatial location (Findlay & Gilchrist,

2003; Hoffman, 1998; Reichle, Pollatsek, Fisher, & Rayner, 1998). Despite this, it is unknown

whether emotion would impact the correlation between overt and covert attention, and any

contributions from the covert allocation of attention cannot be ruled out as a factor underlying

the emotion-modulated central/peripheral tradeoff in memory.

Another possible mechanism underlying the central/peripheral tradeoff may be the direct

modulation of memory processes through emotional arousal. Hadley and MacKay (2006) have proposed that emotional arousal may act as a “glue” that preferentially


binds features within an emotional item, as well as between the emotional item and its

experimental context (e.g., information regarding when and where the experiment occurred),

thereby facilitating the subsequent retrieval of the emotional item. At the same time, this binding

of the emotional item interrupts the encoding of surrounding nonemotional items, making the

nonemotional items more difficult to retrieve later (Miu, Heilman, Opre, & Miclea, 2005; Most,

Chun, Widders, & Zald, 2005). On this view, the central/peripheral tradeoff effect occurs

because participants preferentially encode elements within the central negative picture that

interfere with the encoding of the surrounding peripheral objects. Although the theory by

Hadley and MacKay refers specifically to rapidly presented stimuli (<200 ms), there is some

evidence to suggest that even at longer presentation times, there are qualitative differences

between the encoding of negative versus neutral stimuli (e.g., Kensinger, Garoff-Eaton, &

Schacter, 2006; Takahashi, Itsukushima, & Okabe, 2006). In the present experiment, central

negative pictures may have received prioritized processing, increased perceptual processing

(Anderson & Phelps, 2001; Lim, Padmala, & Pessoa, 2009), deeper semantic processing, and/or

poststimulus elaboration, leading to the disruption of the processing of the peripheral objects. In

other words, when it comes to memory, it is not necessarily how long one spends viewing an

item, but rather how one processes it (e.g., Craik, 2002). Thus, even when participants were

attending to the peripheral objects, they may have still been elaborating and/or rehearsing

information associated with the negative central picture. This may also explain why overt

attention during the study phase was not a significant mediating factor for memory of

manipulated peripheral objects.

Consistent with the idea that there are qualitative differences in encoding emotional and neutral

information, research shows that special neural and hormonal processes exist to enhance

emotional, but not neutral, memories. For example, results from human and nonhuman animal

studies (Cahill & McGaugh, 1998) reveal that amygdala activation is significantly correlated with

subsequent memory performance (e.g., Dolcos, LaBar, & Cabeza, 2004; Kensinger & Corkin,

2004b; Packard & Cahill, 2001; Talmi et al., 2008). Critically, the amygdala may mediate

enhanced processing of emotional information that is separate from any increases in attention

(Anderson & Phelps, 2001). In addition, given the amygdala’s critical role in post-encoding modulation of memory consolidation (Adolphs, Tranel, & Denburg, 2000; Cahill & McGaugh, 1998; McGaugh, 2000), and in mediating the central/peripheral tradeoff (Adolphs et al., 2001;


Adolphs, Tranel, et al., 2005), it is possible that amygdalar modulatory influences during

consolidation may also play a role. Further, consistent with the notion that the central/peripheral

tradeoff effect in memory cannot be fully explained by emotion-modulated differences in

attention during encoding, Payne and colleagues (Payne, Stickgold, Swanberg, & Kensinger,

2008) found that after a 12-hr sleep period, the central/peripheral tradeoff effect was even more

pronounced than it was when memory was tested immediately or after a 12-hr wake period,

because sleep led to a preservation of central emotional details within a scene and decay of

peripheral details within a negative scene and all aspects of neutral scenes. In view of this, it is

possible that results from the current study would have been even more robust after a 12-hr sleep

period than they were at immediate testing.

Taken together, the present results suggest that, contrary to some previous assumptions, the

central/peripheral tradeoff effect in memory is not entirely a result of differences in overt

attention allocation. Rather, this memory effect may be related to altered covert attention, and/or

may be the result of the direct influence of emotion on memory processes through cognitive

mechanisms such as depth of processing, and/or specialized neuromodulatory mechanisms such

as the direct modulation of the amygdala on medial temporal regions, leading to enhanced

processing of emotionally arousing items at the cost of impaired processing of surrounding

peripheral objects; these accounts remain to be tested in future research. Attention and memory

may be independent processes to the extent that the amount of overt attention directed to an item

may not predict subsequent memory performance. This may allow emotion to enhance memory

for significant stimuli even when there is limited time or attentional resources to devote to the

encoding of such stimuli.

2.5.4 Limitations and Future Directions

In addition to examining how emotion may modulate memory via cognitive and specialized

neuromodulatory mechanisms, it may also be important to examine the relationship between

emotion, attention, and memory under more “real-life” circumstances. The stimuli used in the

present experiment delineated between central and peripheral details both spatially and

conceptually. However, this may be less ecologically valid than previous studies that have

examined central and peripheral details within one cohesive scene. Although it is possible that

the central/peripheral tradeoff effect in attention and memory reported in the current study was


more exaggerated than in previous studies given that the peripheral details were clearly irrelevant

for the understanding of the central details, this is unlikely because we did not find evidence of

emotion-impaired memory for repeated peripheral objects as reported in previous work (e.g., Kensinger et al., 2007; Kensinger et al., 2005). Thus, although the findings here suggest that

differences in overt attention during the encoding phase do not fully explain emotion-impaired

memory for specific details in the periphery that are spatially distinct and conceptually unrelated

to the central details, future studies could explore whether this is also true for central and

peripheral details within a cohesive and more ecologically valid paradigm.

In the present study, IAPS pictures were used to elicit negative emotion. However, rather than

depicting the range of negative emotions (e.g., anger, fear, disgust), the negative pictures from

IAPS are mostly associated with fear and disgust. Studies show that not only may different

negative emotions result in different degrees of memory impairment for peripheral details

(Talarico, Berntsen, & Rubin, 2009), but also that there are different viewing patterns and perhaps

different cognitive processes engaged when viewing faces depicting anger, fear, and disgust

(e.g., Aviezer et al., 2008; Lerner, Gonzalez, Small, & Fischhoff, 2003; Susskind et al., 2008).

In a similar vein, there may be differences in viewing patterns to stimuli that elicit different

emotions (e.g., fear, disgust, anger) in the viewer. For example, although disgust is associated

with sensory rejection, fear is associated with enhanced sensory acquisition (Susskind et al.,

2008). Thus, fear may result in a stronger central/peripheral tradeoff effect in attention and/or

memory than disgust. In view of this, it would be important for future studies to explore how

these discrete negative emotions may differentially affect attention, memory, and the relationship

between attention and memory.

2.6 Acknowledgments

We thank Ella Pan for her assistance. This work was supported by funding to JDR from Natural

Sciences and Engineering Research Council of Canada (NSERC), Canada Research Chairs

Program, and Canadian Foundation for Innovation (CRC/CFI), to AKA from NSERC, and a

postgraduate scholarship to LR from NSERC.


Chapter 3 Eye Movement Monitoring Revealed Differential Influences of

Emotion on Memory

Riggs, L., McQuiggan, D.A., Anderson, A.K., Ryan, J.D. (2010). Eye movement monitoring

reveals differential influences of emotion on memory. Frontiers in Psychology, 1, 1-9. doi:

10.3389/fpsyg.2010.00205

This Document is Protected by copyright and was first published by Frontiers. All rights

reserved. It is reproduced with permission.


Eye movement monitoring reveals differential influences of emotion on memory

2.7 Abstract

Research shows that memory for emotional aspects of an event may be enhanced at the cost of

impaired memory for surrounding peripheral details. However, this has only been assessed

directly via verbal reports which reveal the outcome of a long stream of processing but cannot

shed light on how/when emotion may affect the retrieval process. In the present experiment, eye

movement monitoring (EMM) was used as an indirect measure of memory as it can reveal

aspects of online memory processing. For example, do emotions modulate the nature of memory

representations or the speed with which such memories can be accessed? Participants viewed

central negative and neutral scenes surrounded by three neutral objects and after a brief delay,

memory was assessed indirectly via EMM and then directly via verbal reports. Consistent with

the previous literature, emotion enhanced central and impaired peripheral memory as indexed by

eye movement scanning and verbal reports. This suggests that eye movement scanning may

contribute to, and/or be related to, conscious access of memory. However, the central/peripheral

tradeoff effect was not observed in an early measure of eye movement behavior, i.e., participants

were faster to orient to a critical region of change in the periphery irrespective of whether it was

previously studied in a negative or neutral context. These findings demonstrate emotion’s

differential influences on different aspects of retrieval. In particular, emotion appears to affect

the detail within, and/or the evaluation of, stored memory representations, but it may not affect

the initial access to those representations.

2.8 Introduction

For years, researchers have noted that emotionally arousing events are remembered better than

neutral events (Cahill & McGaugh, 1998). However, emotion-enhanced memory does not

always extend to all aspects of an event. Rather, emotion, or specifically negative emotion, may

result in a central/peripheral tradeoff effect in memory: memory for central, emotional aspects of

an event is enhanced, and memory for peripheral, non-emotional aspects of an event is impaired

(for review, see Levine & Edelstein, 2009; Steblay, 1992). In other words, emotion affects the

nature of one’s memory representations for how a particular scene/event is remembered. While


there are many studies showing the influence of emotion during encoding and consolidation

(e.g., Cahill & McGaugh, 1998), it is unclear which aspects of retrieval are modulated by

emotion. For example, in addition to the quality and/or the amount of details that are stored in

memory, emotion may also affect the ease or speed at which such memories can be accessed and,

further, whether such representations are subsequently available for conscious introspection.

Evidence in support of the central/peripheral tradeoff effect in memory has been derived

exclusively from verbal reports (e.g., Christianson, 1992; Kensinger et al., 2007; Kensinger et

al., 2005; E. F. Loftus, 1979; E. F. Loftus et al., 1987; Reisberg & Heuer, 2004), which provide a direct measure of the end product of a long stream of memory processing but cannot reveal

processing differences online. Previous studies show that online indices of memory, such as

those garnered by eye movement monitoring (EMM), do not necessarily correspond to verbal

reports of memory (e.g., Laloyaux, Devue, Doyen, David, & Cleeremans, 2008; Ryan et al., 2000;

Ryan & Cohen, 2004; Thornton & Fernandez-Duque, 2000; Thornton & Fernandez-Duque,

2002). Further, retrieval may occur in stages; therefore, it may be useful to have a measure of

memory that can evaluate retrieval throughout the process. The first stage of retrieval may

reflect initial access to stored representations in memory. We have previously argued that access

to memory representations occurs very early (within the first few fixations) and in an obligatory

fashion, such that it is not affected by changes in task demands (e.g., Althoff & Cohen, 1999;

Ryan, Hannula, et al., 2007). Subsequent stages of retrieval may reflect a more evaluative

process that depends critically on the quality of the stored memory representation; this evaluative

process allows for repetition and/or changes in the environment to be detected and may

ultimately result in conscious access of the information (e.g., Hannula, Ryan, Tranel, & Cohen,

2007; Ryan & Cohen, 2004; Ryan, Hannula, et al., 2007). On this view, emotion-impaired

memory for peripheral information may be the result of difficulties in the initial access of

memory, and/or differences in the amount of detail contained within those representations that

are retrieved (e.g., Adolphs et al., 2001; Adolphs, Tranel, et al., 2005; Denburg et al., 2003).

This would contribute to a more comprehensive understanding of how malleable the processes

related to memory retrieval are, and how extensively emotion may influence memory, i.e., does

it modulate seemingly “obligatory” processes during retrieval in the same manner as processes

that are considered more evaluative?


In order to gain a more comprehensive understanding of the effect of emotion on memory, we

employed measures derived from EMM as well as verbal reports to characterize retrieval

processing differences as a function of emotion. In contrast to verbal reports, EMM can reveal

aspects of mnemonic processing such as what aspects of a scene were subsequently remembered,

and when this information was retrieved. Specifically, previous studies show that even when

participants were not cued or instructed to make recognition memory judgments, the rate of

overall sampling decreased for repeated versus novel scenes (repetition effect); and further,

sampling increased for critical regions within a scene that had undergone a change in

manipulated scenes compared to unchanged regions of repeated scenes (manipulation effect; e.g.,

Ryan et al., 2000; Ryan & Cohen, 2004). This shows that eye movement scanning behavior can

be altered by prior experience, and by outlining where eye movements are attracted to within a

scene that has undergone a change, it can reveal how detailed the memory representation is.

Further, differences in eye movement behavior due to prior experience have been found to occur

very early during processing (Althoff & Cohen, 1999; Ryan, Hannula, et al., 2007; Ryan, Leung,

Turk-Browne, & Hasher, 2007) and in advance of explicit responding (Hannula et al., 2007),

suggesting that EMM can reveal the time at which memories are initially accessed. Additionally,

eye movement indices of memory may reveal that information has been retained in memory that

is unavailable for conscious introspection (e.g., Althoff & Cohen, 1999; Althoff et al., 1998;

Hollingworth & Henderson, 2002; Hollingworth, Williams, & Henderson, 2001; Laloyaux et al.,

2008; Ryan et al., 2000; Ryan & Cohen, 2004; Thornton & Fernandez-Duque, 2000; Thornton &

Fernandez-Duque, 2002).

To address how emotion may affect the nature of, and access to memory representations for the

central emotional and peripheral neutral information, we adapted an experimental paradigm

which has been shown to elicit the central/peripheral tradeoff effect in memory when measured

via verbal reports (Kensinger et al., 2007). During the study phase, participants studied a central

picture that was either neutral or negatively arousing surrounded by three neutral everyday

objects. After a brief delay, memory for central pictures and peripheral objects was assessed

separately in the test phase in which previously viewed and novel central pictures, and

previously viewed, manipulated, and novel peripheral objects were presented. Here, memory for

the central pictures and peripheral objects was indexed by verbal reports and through changes in

eye movement patterns as a function of prior exposure. Since the aim of the current study was to


examine how emotion may affect what is retrieved from memory and when, we focus only on

the results obtained during the retrieval phase of the experiment.

As shown in previous studies, evidence of a central/peripheral tradeoff in memory would be

indexed by: (1) more accurate recognition, as measured via verbal reports, in identifying

previously viewed negative versus neutral central pictures, and (2) conversely, more accurate

recognition of peripheral objects that had been previously paired with neutral versus negative

central pictures. Further, if emotion leads to retrieval advantages for the central negative versus

neutral pictures, this would lead to a larger repetition effect (overall sampling decreases for

previously viewed versus novel scenes) for central negative pictures. On the other hand, if

emotion leads to retrieval disadvantages for the surrounding neutral information due to

difficulties in access and/or less detailed memory representations, this would be manifested as:

(1) faster orienting to a region of change among the peripheral objects previously paired with a

neutral versus negative picture and/or (2) increased viewing of manipulated versus repeated

peripheral objects (manipulation effect) previously paired with neutral, but not negative pictures,

respectively.

2.9 Materials and Methods

2.9.1 Participants

Twenty-four undergraduate students (mean age = 19.17 years, three males; one left-handed) from

the University of Toronto participated for course credit. All participants had normal neurological

histories and normal or corrected-to-normal vision.

2.9.2 Stimuli and Design

The materials used to create the experimental displays consisted of 48 pictures taken from the

International Affective Picture System (IAPS), of which 24 had a negative valence and 24 were

of neutral valence (Lang et al., 1999), and 192 neutral objects (Hemera Photo Objects). The

everyday objects were judged by the authors (Lily Riggs and Douglas A. McQuiggan) and two

independent raters to be neutral and non-arousing. All pictures chosen from the IAPS set

included people. The negative pictures had a more negative valence (t = −17.03, p < 0.001) and

were more arousing (t = 14.02, p < 0.0001) than the neutral pictures. Each display consisted of

one picture in the center and three objects randomly placed in the periphery, which did not


overlap in physical space or semantic meaning with the central element, but were always distinct

and not relevant to the meaning of the central scene. A manipulated version was constructed for

each display in which one of the three peripheral objects was replaced with a novel object. Each

set of peripheral objects was counterbalanced across participants such that it was presented as

paired with negative and neutral pictures equally. In the test block, the central pictures and

peripheral objects were presented separately. Central pictures were either previously presented

(repeated) or entirely new (novel). Peripheral objects contained the same three objects presented

during the study phase (repeated), two previously studied objects and one novel object

(manipulated) or three novel objects that were not presented during the study phase (novel). For

all displays of peripheral objects in test block, a black box was placed in the location previously

occupied by the central picture so that judgments of novelty/repetition could only be based on

the peripheral objects rather than the central picture. Counterbalancing of the display occurred

such that each version of the display appeared equally often in each experimental condition

(repeated/novel for central pictures; repeated/manipulated/novel for peripheral objects) across

participants.
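A minimal sketch of how such counterbalancing can be implemented is shown below (a simple rotation scheme; the function and its assignment rule are illustrative assumptions, not the actual script used in this study):

```python
# Rotate display versions across participants so that each version appears
# equally often in each condition across the sample (a Latin-square style
# rotation, shown only as an illustration).
CONDITIONS = ["repeated", "manipulated", "novel"]

def version_for(participant: int, display: int) -> str:
    """Condition under which a given display appears for a given participant."""
    return CONDITIONS[(participant + display) % len(CONDITIONS)]

# e.g., display 0 is "repeated" for participant 0, "manipulated" for
# participant 1, and "novel" for participant 2.
```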

2.9.3 Procedure

Eye movements were measured throughout the study and test phases with a SR Research Ltd.

Eyelink 1000 eye-tracking desktop monocular system and sampled at a rate of 1000 Hz with a

spatial resolution 0.1°. A chin rest was used to limit head movements. A nine-point calibration

was performed at the start of the experiment followed by a nine-point calibration accuracy test.

Calibration was repeated if the average gaze error was greater than 1° and if the error at any

single point was more than 1.5°. Participants studied 32 randomly presented displays (16 negative, 16 neutral) once in each of two study blocks. The stimuli were repeated across two study

blocks because previous EMM studies have shown that significant differences in viewing novel

versus repeated stimuli manifested only after multiple exposures in which the trial duration was

longer than in the current work (Althoff & Cohen, 1999; Ryan et al., 2000; Ryan, Leung, et al.,

2007). The displays were 1024 × 768 pixels in size and subtended approximately 33.4° of visual

angle when seated 25 inches from the monitor. Each display was presented for 2 s (e.g.,

Kensinger et al., 2007; Kensinger et al., 2005) followed by a 3-s inter-stimulus interval.

Participants were instructed to freely view the scene. After a 10-min delay (approximately) in

which participants completed a background information form, participants’ memory for the


peripheral objects and central pictures was assessed separately across four test blocks. Test

blocks using indirect measures of memory were always assessed first, followed by test blocks

that elicited direct verbal reports of memory. This was done in an effort to reduce the effect of

verbal reports on eye movement responses (Ryan et al., 2000; Yarbus, 1967). To indirectly assess memory for the peripheral objects, participants were shown 16 previously studied, 16 manipulated, and 16 novel sets of peripheral objects and were asked to engage in free viewing while eye movements were monitored (Figure 2.1). Memory for the central pictures was assessed indirectly via EMM by presenting 32 previously studied and 16 novel pictures and asking participants to engage in free viewing.

test phase were repeated during the verbal response test phase. During the verbal response test

blocks, participants were informed that they would be seeing the last two blocks of pictures

again. In the first test block, participants had to indicate whether a set of peripheral objects was

exactly the same as during the study sessions (“old”), had changed in some way (“manipulated”)

or was completely novel (“new”). In the second test block, participants had to indicate whether a

central picture was the same (old) or different (new) from what they had seen during the study

blocks.
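As a worked check of the display geometry quoted earlier in this section (the physical screen width is not stated; a width of roughly 15 inches, typical of a 4:3 monitor, is an assumption that reproduces the reported value):

```python
# Visual angle subtended by the display: 2 * atan(width / (2 * distance)).
import math

distance_in = 25.0   # viewing distance stated in the text
width_in = 15.0      # assumed physical display width
angle = 2 * math.degrees(math.atan(width_in / (2 * distance_in)))
print(f"{angle:.1f} deg of visual angle")  # ~33.4, matching the text
```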

2.9.4 Analysis

Eye movements were measured during the study and test phase. From the test phase, our

analyses focused on the results from the repeated and manipulated peripheral objects as they

were a direct test of emotional influences on memory (Kensinger et al., 2007). Analysis of eye

movements was performed with respect to the experimenter-drawn interest areas corresponding

to the location of central picture and peripheral objects. Eye movement measures of interest

included the time of first fixation and the number of fixations into a region of interest. A fixation

is defined as the absence of any saccade (e.g., the velocity of two successive eye movement

samples exceeds 22°/s over a distance of 0.1°), or blink (e.g., pupil is missing for three or more

samples) activity. The time of first fixation indicates how quickly overt attention was directed to


a particular region of interest and provides an index of how quickly memory representations are

accessed. The number of fixations indicates the amount of viewing directed within a particular

region and provides a measure of the detail contained within the memory representation. Both

EMM measures of time of first fixation and number of fixations provide an indirect measure of

memory, as these measures can be collected without having participants simultaneously

comment on the contents of their memories.
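The sample-classification rule described above can be sketched as follows (array names, the sampling rate default, and treating a missing pupil signal as non-positive values are assumptions; vendor event parsers such as the EyeLink's are more elaborate):

```python
# Label gaze samples using the velocity rule described above: a sample is
# part of a saccade when velocity between successive samples exceeds
# 22 deg/s; runs of non-saccade, non-blink samples form fixations.
import numpy as np

def fixation_mask(x_deg, y_deg, pupil, fs=1000.0, vel_thresh=22.0):
    """Return a boolean mask that is True for samples belonging to fixations."""
    x = np.asarray(x_deg, dtype=float)
    y = np.asarray(y_deg, dtype=float)
    dt = 1.0 / fs
    vx = np.diff(x, prepend=x[0]) / dt
    vy = np.diff(y, prepend=y[0]) / dt
    speed = np.hypot(vx, vy)         # deg/s between successive samples
    saccade = speed > vel_thresh
    blink = np.asarray(pupil) <= 0   # pupil signal missing
    return ~(saccade | blink)
```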

Evidence of memory during the test phase for the central pictures would be revealed as a

decrease in the sampling of previously presented versus novel pictures (e.g., Althoff & Cohen,

1999; Althoff et al., 1998; Ryan et al., 2000). It is important to note that for central pictures, we

examined eye movement differences between novel and repeated negative pictures and eye

movement differences between novel and repeated neutral pictures. In other words, evidence of

memory is manifested as changes in viewing between novel and previously viewed pictures, and

not between negative and neutral pictures. Since participants always began each trial fixated in

the center of the screen and the central region was the only filled region present on the screen,

we examined only the number of fixations for central pictures. For the peripheral objects, a

comparison of repeated versus novel/manipulated peripheral objects provided evidence for the

time at which memory representations may be accessed and the quality of those stored

representations (e.g., Ryan & Cohen, 2004). This was examined as a proportion of difference in

viewing the critical object in repeated and manipulated arrays with reference to viewing of the

critical object in novel arrays as a baseline. The critical object in novel object arrays was never

associated with a neutral or negative central picture and served as a baseline to correct for

individual differences in viewing. Evidence of impaired access to peripheral objects as a result

of emotion would be manifested by slower orienting to the critical object (the novel object

among two repeated objects) in manipulated displays versus the exact same “critical” object in

repeated displays which had not undergone a change, for peripheral objects that had been paired

with negative versus neutral pictures. Evidence of a less detailed memory representation as a

result of emotion would be manifested by a lack of difference in the number of fixations directed

to the critical object in manipulated versus repeated displays for those peripheral objects that had

been paired with a negative, but not neutral, picture. In order to control for stimulus specific

effects, the critical object appeared as a novel object within a manipulated display, as a repeated

object within a repeated display, and as a novel object within a novel display across participants.


Additionally, the presentation of central pictures as novel or previously viewed was

counterbalanced across participants, thus any differences in viewing was the result of the

participants’ prior viewing history (Ryan et al., 2000; Ryan, Leung, et al., 2007).
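A hedged sketch of this baseline-referenced measure is given below. The exact formula is not spelled out in the text; the common form (condition minus novel, divided by novel) is an assumption, and the values are illustrative placeholders:

```python
# Baseline-corrected viewing of the critical object, per participant.
import numpy as np

fix_novel = np.array([2.1, 1.9, 2.3])   # illustrative per-participant means
fix_manip = np.array([2.4, 2.0, 2.6])
fix_repeat = np.array([1.8, 1.7, 2.0])

# Proportion of change relative to the novel-array baseline
prop_manip = (fix_manip - fix_novel) / fix_novel    # e.g., increased viewing
prop_repeat = (fix_repeat - fix_novel) / fix_novel  # e.g., decreased viewing
```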

Recognition accuracy was measured as the proportion of correct responses to novel and repeated

central pictures, and novel, repeated and manipulated peripheral objects. Reported hits for

central pictures were corrected for false alarms. Reported hits to repeated and manipulated

peripheral objects are presented uncorrected for false alarm rates as novel peripheral objects

were not presented with emotional/neutral images.
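The text states only that hits were corrected for false alarms; a standard correction, shown here purely as an assumed form, subtracts the false-alarm rate from the hit rate:

```python
# Generic hit-rate correction; the exact formula used here is not specified
# in the text, so this standard hits-minus-false-alarms form is an assumption.
def corrected_recognition(hit_rate: float, false_alarm_rate: float) -> float:
    """Corrected recognition = hit rate - false-alarm rate (assumed form)."""
    return hit_rate - false_alarm_rate

print(corrected_recognition(0.90, 0.20))  # 0.70
```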

2.10 Results

2.10.1 Central Pictures

2.10.1.1 Eye movement measures

Eye movements were analyzed with respect to the interest area corresponding to the location of

the central picture. The raw means and standard errors for the number of fixations made to the

central pictures are presented in Table 3.1. Differences in the number of fixations directed to

novel versus repeated pictures were calculated using novel pictures as the baseline. We then

used paired-sample t-tests to determine whether this difference in viewing was significantly

different from 0 and modulated by emotion (negative versus neutral). Consistent with the notion

that emotion enhances memory, the difference in viewing novel versus repeated pictures was

significantly different from 0 when the pictures were negative (t(23) = 3.01, p < 0.01), but not

when they were neutral (t(23) = 1.38, p = 0.18). Specifically, participants directed fewer

fixations to repeated versus novel pictures only when they were negative. A direct comparison

of viewing of negative and neutral central pictures was not significant (t(23) = 0.37, p = 0.72).
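In code, the logic of these comparisons reduces to one-sample t-tests on per-participant novel-minus-repeated difference scores (the arrays below are simulated placeholders, not the reported data):

```python
# One-sample t-tests of novel-minus-repeated fixation differences against
# zero, computed separately for negative and neutral central pictures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
diff_negative = rng.normal(0.45, 0.7, 24)  # simulated placeholder values
diff_neutral = rng.normal(0.23, 0.8, 24)

t_neg, p_neg = stats.ttest_1samp(diff_negative, 0.0)
t_neu, p_neu = stats.ttest_1samp(diff_neutral, 0.0)
print(f"negative: t = {t_neg:.2f}, p = {p_neg:.3f}")
print(f"neutral:  t = {t_neu:.2f}, p = {p_neu:.3f}")
```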


Table 3.1 Means and standard errors for eye movement measures for viewing of the critical object in the periphery and central scenes during the test session.

Critical Peripheral Object

                              Neutral                                        Negative
Measure                       Novel           Manipulated     Repeated       Novel   Manipulated     Repeated
Time of first fixation (ms)   895.25 (44.18)  800.83 (49.07)  942.63 (49.91) N/A     809.65 (48.48)  880.00 (31.96)
Number of fixations (#)       2.06 (0.09)     2.25 (0.14)     1.88 (0.10)    N/A     2.10 (0.13)     1.99 (0.11)

Central Pictures

                              Neutral                         Negative
Measure                       Novel           Repeated        Novel           Repeated
Number of fixations (#)       6.68 (0.21)     6.45 (0.23)     7.39 (0.28)     6.94 (0.25)

2.10.1.2 Verbal recognition reports

Verbal recognition for the central pictures was analyzed using a paired-sample t-test examining

accuracy for repeated negative and neutral pictures. When hit rates were corrected for false

alarms, participants were more accurate in identifying repeated pictures when they were negative

versus when they were neutral (t(23) = 2.86, p < 0.01). All relevant means and standard errors

are presented in Table 3.2.

Table 3.2 Mean responses and standard errors for peripheral objects and central pictures.

Peripheral Objects

                 Neutral                                  Negative
                 Novel       Manipulated   Repeated       Novel   Manipulated   Repeated
Accuracy (SEM)   .43 (.05)   .28 (.03)     .63 (.04)      N/A     .20 (.03)     .62 (.05)

Central Pictures

                 Neutral                                       Negative
                 Novel       Repeated   Repeated (Corrected)   Novel       Repeated   Repeated (Corrected)
Accuracy (SEM)   .64 (.07)   .74 (.04)  .56 (.03)              .54 (.08)   .92 (.02)  .69 (.04)

In summary, when memory was measured indirectly via EMM at the test phase, eye movement

patterns distinguished between repeated and novel pictures when they were negative, but not

when they were neutral pictures. Consistent with this, emotion was also found to enhance

recognition memory for repeated central pictures when measured directly via verbal reports.


2.10.2 Peripheral Objects

2.10.2.1 Eye movement measures

Eye movements were analyzed with respect to the interest area corresponding to the location of

the critical object, which was the novel object among two repeated objects in the manipulated

arrays and the corresponding object in the repeated and novel object arrays. Proportion of

difference in viewing of the critical object between manipulated and repeated displays relative to

novel displays reveals the extent to which information regarding the peripheral objects was

retained in memory (Ryan et al., 2000; Ryan & Cohen, 2004).

Eye movement measures to the critical object were analyzed using separate 2 × 2 repeated measures ANOVAs with emotion (negative, neutral) and object array type (manipulated,

repeated) as within-subject factors. All relevant raw means and standard errors are presented in

Table 3.1. Consistent with the notion that EMM measures are sensitive to prior experience, there

was a significant main effect of object array type (both measures: F(1,23) = 7.47, p = 0.01, d =

0.25). Participants were faster to fixate, and directed more viewing to the critical object when it

appeared in a manipulated versus a repeated display, regardless of whether that set of peripheral

objects had been previously paired with a negative or neutral central picture. The main effect of

emotion on eye movement behavior was not significant (time of first fixation: F(1,23) = 0.16, p =

0.69, d = 0.01; number of fixations: F(1,23) = 0.11, p = 0.75, d = 0.004). There was a significant

interaction for the number of fixations (F(1,23) = 4.11, p = 0.05, d = 0.15; Figure 3.2);

participants directed more fixations to the critical object when it appeared in a manipulated

compared to a repeated array if the objects had been previously paired with a neutral central

picture (t(23) = 2.94, p = 0.007), but not when it had been paired with a negative central picture

(t(23) = 1.23, p = 0.23). The interaction between emotion and type was not significant for the

time of first fixation (F(1,23) = 0.58, p = 0.46, d = 0.02), suggesting that while emotion affected

the overall quality (i.e., number of fixations) of memory for peripheral objects, it did not affect

access to, and early indicators of, memory for detecting which peripheral object had been altered,

which occurred approximately 800 ms following stimulus onset.4

4 It should be noted that the same pattern of results was found when we examined the raw values

resulting from the EMM measures.
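The 2 × 2 repeated-measures design maps directly onto, for example, statsmodels' AnovaRM, sketched below with simulated long-format stand-in data (not the original analysis code):

```python
# 2 (emotion) x 2 (array type) repeated-measures ANOVA sketch.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
subjects = np.tile(np.arange(24), 4)                        # 24 participants
emotion = np.repeat(["neutral", "negative"], 48)
array = np.tile(np.repeat(["manipulated", "repeated"], 24), 2)
df = pd.DataFrame({
    "subject": subjects,
    "emotion": emotion,
    "array": array,
    "n_fixations": rng.normal(2.0, 0.3, 96),                # placeholder data
})
print(AnovaRM(df, depvar="n_fixations", subject="subject",
              within=["emotion", "array"]).fit())
```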


Figure 3.2 The proportion of change in viewing the critical object in a manipulated and repeated object array relative to a novel object array.

The figure plots the proportion of change in the number of fixations as a function of emotion at encoding (neutral, negative), with separate bars for manipulated and repeated arrays. Participants directed more fixations to the critical object in manipulated versus repeated object arrays for objects previously encoded in a neutral (p = .01), but not in a negative, context.

2.10.2.2 Verbal recognition reports

Verbal recognition accuracy was analyzed with a repeated measures ANOVA using emotion

(negative, neutral) and peripheral object array type (manipulated, repeated) as within-subject

factors. For accuracy, the main effect of object type was significant (F(1,23) = 61.14, p <

0.0001, d = 0.73). In other words, participants were more accurate in identifying repeated versus

manipulated object arrays. The main effect of emotion was marginally significant (F(1,23) =

3.59, p = 0.07, d = 0.14); participants were more accurate in classifying peripheral objects as


either repeated or manipulated if they had been previously paired with a neutral central picture

rather than a negative central picture. Planned contrasts revealed that participants were

significantly more accurate in identifying manipulated peripheral objects if they were previously

paired with a neutral central picture versus a negative central picture (t(23) = 2.19, p < 0.05).

Emotion did not modulate accuracy for repeated peripheral objects (t(23) = 0.30, p = 0.77). All

relevant means and standard errors are presented in Table 3.2.

In summary, indirect measures of memory as indexed by EMM revealed that early eye

movement patterns distinguished between manipulated and repeated object arrays irrespective of

whether they were previously paired with a negative or neutral picture. In contrast, viewing of

the periphery was modulated by emotion and only distinguished between manipulated and

repeated object arrays of those previously paired with a neutral picture. Consistent with this,

emotion was also found to impair recognition memory, as indexed by verbal reports, for

detecting a change in the periphery.

2.10.3 Relation between Verbal Reports and Eye Movement Data

In further support of the finding that there may be a dissociation between memory measured by

verbal reports versus EMM, we also examined EMM for half of the participants who showed the

strongest emotion-modulated effect in verbal memory, i.e., those who showed the largest

difference in correctly identifying peripheral objects previously paired with negative (M = 0.11,

SEM = 0.03) versus neutral pictures (M = 0.34, SEM = 0.03; t(11) = 5.14, p < 0.0001). Despite

this strong tradeoff effect in memory as measured by verbal reports, the same tradeoff effect was

not observed in EMM measures. Specifically, a repeated measures ANOVA using emotion

(negative, neutral) and object type (manipulated, repeated) did not reveal significant main effects

of emotion for either of the EMM measures (time of first fixation: F(1,11) = 0.43, p > 0.1, d =

0.04; number of fixations: F(1,11) = 1.43, p > 0.1, d = 0.12) nor did it reveal a significant

interaction between emotion and object type (time of first fixation: F(1,11) = 0.66, p > 0.1, d =

0.06; number of fixations: F(1,11) = 1.67, p > 0.1, d = 0.13). However, consistent with previous

results, there was a significant main effect of object type for time of first fixation (F(1,11) = 6.35,

p < 0.05, d = 0.37) such that participants were quicker to fixate on manipulated versus repeated

objects regardless of whether the objects were previously paired with neutral or negative central

pictures. The main effect of type was not significant for the number of fixations (F(1,11) = 2.87,


p > 0.1, d = 0.21). This suggests that the strongest dissociation between verbal reports and EMM

may be observed in measures of early viewing such as the time of first fixation, which is

modulated by prior experience, but not emotion.5

5 Since accuracy for central pictures was at ceiling, there was not enough variability to conduct the same type of analysis to examine the relation between verbal reports and eye movement behavior for central pictures.

2.11 Discussion

The presence of emotional stimuli results in a central/peripheral tradeoff effect in memory (e.g.,

Kensinger et al., 2007; Kensinger et al., 2005; Wessel & Merckelbach, 1997). Prior work

suggests that emotions change the nature of memory representations for the emotion-eliciting

stimulus and surrounding neutral information. However, this has only been explored using

explicit verbal reports which reveal the end product of what is held in memory and cannot speak

to how quickly one may be able to access stored representations. Using measures derived from

EMM and verbal reports, the present work examined whether the presence of emotional stimuli

led to differences in the speed at which memory representations could subsequently be accessed

at retrieval, and whether there were differences in the details maintained within those

representations. To the best of our knowledge, the present study is the first to examine these

issues regarding the influence of emotion on distinct aspects of retrieval. In the next section, we

discuss our results in light of prior findings regarding the central peripheral tradeoff in memory,

and how the current work may inform theories regarding the influence of emotion on memory.

The use of EMM allows for the examination of how early viewing may be modulated by prior

experience and whether this was influenced by the emotional context in which the information

was originally encoded. Consistent with previous research, the current results showed that an

early indicator of memory (i.e., time of first fixation to an altered region) was modulated by prior

experience (e.g., Althoff & Cohen, 1999; Henderson et al., 2003; Ryan & Cohen, 2004).

Specifically, participants were approximately 105 ms faster to fixate on the peripheral critical

object when it was manipulated compared to when it was repeated, which suggests that

participants were able to encode, store, and at least to some degree, access information about

these peripheral objects during the test phase such that early eye movement behavior was altered


(Parker, 1978). The difference in the time of first fixation occurred as early as 800 ms, which is

rapid considering that participants did not know which object arrays would be manipulated and

where the critical object would appear. Critically, this early indicator of memory differentiated

between manipulated and repeated object arrays irrespective of the emotional context in which

the objects were originally encoded. Thus, contrary to the notion that emotion impairs memory

for information in the periphery, the current results show that emotion did not modulate early

access of memory when measured indirectly via EMM.

In addition to examining an early indicator of memory via EMM, the current study also

examined viewing patterns during the entire presentation period, i.e., number of fixations.

Consistent with the central/peripheral tradeoff effect in memory, viewing patterns showed that

emotion enhanced memory for central pictures and impaired memory for peripheral objects.

Specifically, it was found that viewing of central pictures was characterized by a repetition

effect, i.e., a decrease in the number of fixations in viewing repeated versus novel scenes (e.g.,

Althoff & Cohen, 1999; Althoff et al., 1998; Ryan et al., 2000; Ryan, Leung, et al., 2007) for

negative, but not neutral central pictures. Previous studies have shown that significant

differences in viewing novel versus repeated stimuli largely occur only after multiple exposures

in which the trial duration was longer than in the current work (e.g., Althoff & Cohen, 1999;

Ryan et al., 2000; Ryan, Leung, et al., 2007). Thus, it is likely that the EMM metric did not

distinguish between novel and repeated neutral central pictures because more repetitions were

required before such differences in eye movement behavior could manifest. Despite this, eye

movement behavior did distinguish between novel and repeated negative pictures, which

suggests that emotion does not only enhance the probability that the picture will later be

remembered, it also suggests that emotion may enhance the speed at which a lasting memory

representation is formed. It is important to note that the repetition effect found in the eye

movement behavior for viewing negative central pictures may represent the contributions of

perceptual fluency rather than (or in addition to) declarative/relational memory. For example,

repetition effects have been demonstrated in amnesic patients who have compromised medial

temporal lobe systems (e.g., Althoff et al., 1998; Ryan et al., 2000), and intact repetition effects

have been observed in healthy older adults in whom compromised medial temporal lobe function

has been implicated (Driscoll et al., 2003; Ryan, Leung, et al., 2007).


For peripheral objects, participants directed significantly more fixations to the critical object of

manipulated versus repeated displays (manipulation effect) if the objects were previously studied

with a neutral central picture, but not when the peripheral objects were studied with a negative

central picture. The finding that more fixations were directed to the manipulated versus repeated

critical object is consistent with previous eye movement studies that have reported an increase in

viewing for regions of change (e.g., Ryan et al., 2000; Ryan & Cohen, 2004; Ryan, Leung, et al.,

2007). Such effects have been reported irrespective of task demands and have been found to

precede behavioral responding, which suggests that such eye movement behaviors may ultimately culminate in the conscious access of previously learned information. It is possible that only through an increase in the amount of viewing to, and investigation of, a region of change can one not only notice a change, but also be able to explicitly identify what had been

changed and how. Further, on this view, it is likely that a manipulated versus repeated scene

may require a more extensive comparison process between the presented external stimulus and

the internal memory representation, leading to an increase in viewing (see Ryan & Cohen, 2004,

for further discussion). In addition, such an increase in viewing may also represent the re-binding and/or the updating of memory representations. Thus, although early access to memory

was not modulated by emotion, the quality and/or the amount of details contained within the

memory, as indexed by the amount of sampling, was modulated by the emotional history of the

retrieved information. This suggests that although emotion may lead to a more impoverished

memory representation for information in the periphery, it may not impair one’s ability to access

that information during the retrieval phase, however poor in quality those representations may

be. In contrast to the repetition effect found for viewing of novel and repeated central pictures,

the manipulation effect found for viewing of manipulated versus repeated peripheral objects

likely reflects the influence of emotion on the representations that are declarative/relational in

nature; as eye movement indices of detection of a manipulation are impaired in amnesic patients

(Ryan et al., 2000) and older adults who presumably have a compromised medial temporal lobe

system (Ryan, Leung, et al., 2007). Altogether, it would appear that emotion affects the

formation of (detail contained within) multiple memory representations, including those that

would contribute to perceptual fluency and those that are declarative/relational in nature and

which support identification of a change by the eyes. However, it does not appear that emotion

affects the speed with which such representations are accessed.


Consistent with the notion that measures of sampling of the critical region may contribute to, and/or be related to, the final output of memory processing, direct measures of memory obtained through verbal reports showed that emotion enhanced memory for central pictures and impaired

memory for peripheral objects. Specifically, participants were more accurate at identifying repeated negative versus neutral central pictures, and less accurate at detecting a change in the periphery if

the peripheral objects were previously studied with a negative compared to a neutral picture

(e.g., J. M. Brown, 2003; Kensinger et al., 2005; Wessel & Merckelbach, 1997). Interestingly,

while emotion impaired participants’ ability to detect a change in the peripheral objects, it did

not modulate their ability to identify repeated peripheral objects (see Kensinger et al., 2007;

Kensinger et al., 2005). A possible reason for this is that whereas previous studies have

presented the stimuli once during the encoding phase, the present study presented the stimuli

twice across two study blocks. It is possible that by repeating the stimuli, the central/peripheral

tradeoff effect in memory was not as robust as it would have been had the stimuli only been

presented once. There is some indication in the literature that the central/peripheral tradeoff

effect in memory is sensitive to methodological parameters such as the duration of exposure to

the stimuli, specificity of the information interrogated during the test phase and the length of

time between encoding and retrieval (e.g., Burke et al., 1992; Christianson, 1992; Steblay, 1992).

However, despite having presented the stimuli twice during the study blocks, we still observed

an influence of emotion on the memory for the peripheral objects. Detection of a manipulation

within the peripheral objects may require a more detailed declarative/relational memory

representation as participants need to be able to identify a critical novel object among two

previously viewed objects. Thus, the current results may suggest that memory for specific

details in the periphery is more sensitive to emotional modulation than memory for the gist of

information (Adolphs et al., 2001; Adolphs, Tranel, et al., 2005; Denburg et al., 2003).

The results of this study showed that while emotion (here, negative emotion) did not modulate

early indicators of, or access to memory, it led to a central/peripheral tradeoff in memory as

indexed by sampling of the stimulus and by verbal reports. These differences in the influence of

emotion may be due to differences in what such changes in eye movement measures and verbal

reports represent; specifically, the early online use of memory versus the quality of stored

memory representations, and subsequent conscious access to those representations. An

important question that remains is how emotion may influence the quality and/or amount of


details stored in memory. It is often argued that such differences in memory are the result of

emotion-modulated differences during the encoding phase. However, there has not been a

complete examination of whether such differences in attention during the encoding phase are

related to subsequent memory (see: Christianson et al., 1991; Wessel et al., 2000). In a recent

paper by Riggs and colleagues (Riggs, McQuiggan, Farb, Anderson, & Ryan, 2011), the

researchers used EMM as an index of overt attention allocation, and mediation analysis to

determine whether differences in attention were related to subsequent memory. It was found that

contrary to previous assumptions, differences in attention during the encoding phase did not fully

explain the central/peripheral tradeoff effect in memory as indexed by verbal reports. These findings suggest

that the differential influence of negative emotion on central versus peripheral memory may

result from other cognitive influences in addition to visual attention, or on post-encoding

processes. Alternatively, it could also be argued that while EMM provides a reliable measure of

overt attention, it cannot capture processes related to covert attention which can be decoupled

from overt attention (e.g., Posner, 1980; Rowe et al., 2007). Future studies can more

systematically differentiate between these two contributing factors.

In summary, the current findings suggest that emotion does not modulate all aspects of retrieval.

Access to previously formed memory representations occurred early, without regard to the nature

of the information that is contained therein. Together with our previous work showing that

initial access to memory occurs despite differences in task demands (e.g., Ryan, Hannula, et al.,

2007), and even when such information is not relevant for the task at hand (Ryan, Hannula, et al.,

2007), we propose that retrieval of previously stored memory representations occurs in an

obligatory fashion, despite the valence of the stored information. By contrast, emotion impacts

the detail and/or amount of information that is maintained in memory and the likelihood that

there will be conscious access to that information. Thus, the more evaluative components of

memory formation and retrieval are impacted by emotional valence.

2.12 Acknowledgments

This work was supported by funding from the Natural Sciences and Engineering Research

Council of Canada (Jennifer D. Ryan, Adam K. Anderson), the Canada Research Chairs Program

(Jennifer D. Ryan), the Canadian Foundation for Innovation (Jennifer D. Ryan), and a


Postgraduate Scholarship from the Natural Sciences and Engineering Research Council of

Canada (Lily Riggs).


Chapter 3 A Complementary Analytic Approach to Examining Medial Temporal Lobe Sources Using Magnetoencephalography

Riggs, L., Moses, S.N., Bardouille, T., Herdman, A.T., Ross, B., Ryan, J.D. (2009). A

Complementary Analytic Approach to Examining Medial Temporal Lobe Sources Using

Magnetoencephalography. NeuroImage, 45(2), 627-642. doi: 10.1016/j.neuroimage.2008.11.018.


3 A complementary analytic approach to examining medial temporal lobe sources using magnetoencephalography

3.1 Abstract

Neuropsychological and neuroimaging findings reveal that the hippocampus is important for

recognition memory. However, it is unclear when and whether the hippocampus contributes

differentially to recognition of previously studied items (old) versus novel items (new), or

contributes to a general processing requirement that is necessary for recognition of both types of

information. To address this issue, we examined the temporal dynamics and spectral frequency

underlying hippocampal activity during recognition of old/new complex scenes using

magnetoencephalography (MEG). In order to provide converging evidence with the existing literature

in support of the potential of MEG to localize the hippocampus, we reconstructed brain source

activity using the beamformer method and analyzed three types of processing-related signal

changes by applying three different analysis methods: (1) Synthetic aperture magnetometry

(SAM) revealed event related and non-event-related spectral power changes; (2) Inter-trial

coherence (ITC) revealed time-locked changes in neural synchrony; and (3) Event-related SAM

(ER-SAM) revealed averaged event-related responses over time. Hippocampal activity was

evident for both old and new information within the theta frequency band and during the first

250 ms following stimulus onset. The early onset of hippocampal responses suggests that

general comparison processes related to recognition of new/old information may occur

obligatorily.

3.2 Introduction

Since Scoville and Milner's (Scoville & Milner, 1957) discovery that excision of the

hippocampus and surrounding cortex leads to profound and pervasive memory deficits, memory

research has focused on the functional significance of this medial temporal lobe region (N. J.

Cohen & Eichenbaum, 1993; N. J. Cohen et al., 1999; Eichenbaum & Cohen, 2001; Squire,

1992). Early neuropsychological studies revealed that damage to the hippocampus leads to

severe deficits in memory for facts and events, as typically assessed using recall and recognition

tasks in which participants have to either retrieve previously studied items or distinguish


previously studied items from novel items, respectively (N. J. Cohen & Squire, 1980; Corkin,

1968, 1992; Manns, Hopkins, Reed, Kitchener, & Squire, 2003).

With the advent of functional neuroimaging techniques such as positron emission tomography

(PET) and functional magnetic resonance imaging (fMRI), researchers have studied the

contribution of the hippocampus to memory in the healthy brain, often by using recognition

memory tasks. Consistent with the neuropsychological data, neuroimaging findings revealed

that the hippocampal region is involved during recognition memory tasks compared to control

tasks that require simple visual processing/discrimination (e.g. Kapur, Friston, Young, Frith, &

Frackowiak, 1995; Schacter et al., 1995; Squire, 1992).

With further advances in technology and analysis, researchers used event-related fMRI to

interleave trial types that required different cognitive demands and/or to separate trials based on

participants’ response (Greenberg et al., 2005; Kensinger, Clarke, & Corkin, 2003; Yonelinas,

Otten, Shaw, & Rugg, 2005) to determine whether the hippocampus contributes specifically to

successful retrieval of stored information or whether the hippocampus has a more general role

during the retrieval stage. While some studies found that the hippocampus was preferentially

recruited during the successful recognition of previously studied (old) information, others found

that it was recruited to a similar extent (or even more) for novel (new) information (see Henson,

2005, for review). This suggests that the critical role of the hippocampus during memory

retrieval may not reflect successful access to a stored representation per se, but instead may

reflect a more general processing requirement that is common for both previously studied and

novel information (N. J. Cohen et al., 1999). However, it is also possible that the hippocampus

contributes differentially to the recognition of old/new information in ways that are not reflected

in the magnitude of metabolic change as measured using PET and fMRI. Specifically, to the

extent that different processes are invoked to support the recognition of old versus new

information, the hippocampus may be recruited at a different time and/or in a different manner,

as reflected in time course and spectral frequency of electromagnetic brain activity. PET and

fMRI techniques do not have the adequate temporal resolution to outline the time course by

which the hippocampus may come online during the recognition of different types of

information; therefore, we require a neuroimaging method that can localize the hippocampus and

outline its precise temporal dynamics.


Precise temporal dynamics underlying neural activity can be observed using

electroencephalography (EEG) or event-related potentials (ERPs). For years, ERP studies of

recognition memory have described what is thought of as hippocampally-mediated neural

activity associated with viewing of previously studied and novel stimuli (for review, see Rugg,

1995a). This is known as the late positive component (LPC) of the ERP and is typically

observed over medial and posterior sensor sites and begins around 500–600 ms after stimulus

onset (e.g. Duzel, Picton, et al., 2001; Duzel, Vargha-Khadem, Heinze, & Mishkin, 2001; Rugg,

Schloerscheidt, Doyle, Cox, & Patching, 1996; M. E. Smith & Halgren, 1989). This seems to

suggest that different types of information recruit hippocampal processing in the same temporal

manner. However, it is not known to what extent the late ERP components reflect the

contribution from the hippocampus versus other sources. The spatial localization of EEG is

compromised by volume conduction and therefore the signals likely reflect multiple underlying

neural regions, thereby making it difficult to outline the temporal dynamics of the hippocampus

specifically. Moreover, even if the temporal dynamics of hippocampal activity for previously

studied versus novel information are similar during later processing (>500 ms), it is not known whether they are similar during earlier stages of recognition memory (<500 ms).

Magnetoencephalography (MEG) is a noninvasive neuroimaging technique that estimates

neuronal activity based on recordings of the magnetic flux outside of the head (Hämäläinen et

al., 1993; Hari et al., 2000). MEG has the same temporal resolution as EEG, but magnetic fields

are less susceptible to attenuation by skull and tissue; therefore, its spatial localization is more precise than that of EEG. MEG provides recordings of neural activity with temporal resolution on the

order of milliseconds and spatial resolution comparable to that of fMRI (Miller et al., 2007).

These properties make MEG an ideal tool for studying the dynamics of brain function.

However, there is some debate about whether MEG can be reliably used to detect signals from deep

neural structures such as the hippocampus (Mikuni et al., 1997). First, it has been argued that the

specific shape of the hippocampus prevents any signal from being detected by MEG sensors

(Mikuni et al., 1997). Specifically, it has been speculated that the “spiral” shaped hippocampal

formation may lead to cancellation of all detectable signal from this region (Baumgartner,

Pataraia, Lindinger, & Deecke, 2000; Mikuni et al., 1997; Stephen, Ranken, Aine, Weisend, &

Shih, 2005). However, complete cancellation would require simultaneous activation of dentate

and cornu ammonis (CA) fields with equal signal intensity. Contrary to this, it has been argued


that the hippocampus is laminated, thus, signals tend to summate rather than cancel (Nishitani et

al., 1999). Moreover, anatomical and electrophysiological asymmetries in the hippocampus

(Duvernoy, 1988; Yeckel & Berger, 1990) suggest that cancellation will be incomplete and at

least some portion of the signal will be visible to MEG (for an in depth discussion, see Stephen

et al., 2005).

Second, it has been argued that signals from the hippocampus would be too weak to be

detectable by MEG sensors because the magnetic field decreases with the square of distance

between neural source and the MEG sensor (Baumgartner et al., 2000; Hämäläinen et al., 1993;

Hillebrand & Barnes, 2002). Since the hippocampus is situated deep within the brain, detecting

hippocampal activity at the scalp surface is challenging. Under the assumption that deep

structures do not contribute to the recorded signal, some source analysis programs constrain the

localization of neural activity to the cortex, excluding all subcortical structures, including the

hippocampus (Berg & Scherg, 1994; Gonsalves, Kahn, Curran, Norman, & Wagner, 2005; Jerbi

et al., 2004). However, the development of modern whole-scalp MEG sensor arrays has

increased the sensitivity for deep structures (Ahonen et al., 1993) by capturing magnetic flux

signals across the entire head. Advanced data analysis methods make use of information

obtained by all sensors and support volumetric source analysis, e.g. standardized low resolution

brain electromagnetic tomography (sLORETA) (Pascual-Marqui, 2002), L1 minimum-norm

current estimate (MCE) (Tesche, 1996; Uutela, Hamalainen, & Somersalo, 1999), synthetic

aperture magnetometry (SAM) (Fawcett, Barnes, Hillebrand, & Singh, 2004; Gaetz & Cheyne,

2003; Herdman et al., 2004; Herdman et al., 2003; Hirata et al., 2002; Luo et al., 2007; Robinson

& Vrba, 1999; Schulz et al., 2004), and event-related SAM (ER-SAM) (Cheyne, Bakhtazad, &

Gaetz, 2006; Cheyne, Bostan, Gaetz, & Pang, 2007; Hämäläinen et al., 1993; Herdman & Ryan,

2007; Itier, Herdman, George, Cheyne, & Taylor, 2006; Schulz et al., 2004). Our group

contributed to these tools with a new source analysis approach using inter-trial coherence (ITC)

(Bardouille & Ross, 2008).

Third, there is some question as to whether MEG is sensitive enough to differentiate activity

between the hippocampus and parahippocampal gyrus. It has been reported that at the depth of

these sources (5–6 cm), spatial resolution ranges from 25 mm to 40 mm, making it difficult to

distinguish activity originating in the hippocampus from that originating in the

parahippocampal region (D. Cohen et al., 1990; Mosher, Spencer, Leahy, & Lewis, 1993).


However, in a study that examined the precision of localization using simulated MEG activity

presented with real background brain activity, Stephen and colleagues (Stephen et al., 2005)

showed that MEG is able to correctly localize activity to either the hippocampus or the

parahippocampal gyrus when activity in these two regions did not overlap in time. When these

two regions did overlap in time and were simultaneously active, MEG was unable to differentiate

between them and modeled the activity as a single source. However, this is not a problem for

localizing the hippocampus per se, rather, this suggests that when both regions are active,

activity localized to one region cannot be said to be completely independent of the other, and

may reflect simultaneous activity from both regions.

Based on the previous literature, it is clear that while the localization of deep sources, such as the

hippocampus, using MEG remains a challenging task, it is by no means an impossible one. In

fact, numerous studies have lent support to the notion that hippocampal activity can be detected

by MEG using a variety of experimental paradigms such as sensory oddball tasks (Hamada,

Sugino, Kado, & Suzuki, 2004; Ioannides et al., 1995; Nishitani, Nagamine, Fujiwara, Yazawa,

& Shibasaki, 1998; Tesche, Karhu, & Tissari, 1996), conditioning (Kirsch et al., 2003), mental

calculation (Tesche, 1997), and motor reaction to an auditory cue (Tesche & Karhu, 1999). Of

the MEG studies that examined memory, several have reported observable responses from the

hippocampus for tasks of prospective memory (T. Martin et al., 2007), working memory

(Campo, Maestu, Ortiz, et al., 2005; Tesche & Karhu, 2000), and transverse patterning (Hanlon

et al., 2003; Hanlon et al., 2005). However, despite the theoretical and empirical link between

the hippocampus and long-term memory, and the prevalent use of recognition memory

paradigms in neuropsychological, PET, fMRI, and ERP studies, only very few MEG studies

have examined hippocampal activity within this framework (Breier, Simos, Zouridakis, &

Papanicolaou, 1998, 1999, 2000; Gonsalves et al., 2005; Papanicolaou et al., 2002; Tendolkar et

al., 2000).

The MEG studies that have looked at hippocampal activity during a recognition memory task

have been inconclusive. In a recent MEG study of visual memory by Osipova and colleagues

(Osipova et al., 2006), it was found that correctly recognized old items elicited stronger theta

oscillations than correctly rejected new items. The authors suggested that this theta oscillation

may derive from hippocampal activity, but were not able to localize the activity to any region in

the brain due to insufficient signal-to-noise ratio. In a MEG study of verbal recognition,


magnetic evoked activity localized to the right medial temporal region was reported (Tendolkar

et al., 2000). However, because the MEG data had not been co-registered with participants’

structural MRIs, it is not clear whether the activity originated from the hippocampus or

surrounding cortex. In a combined MEG and fMRI study that also examined neural activity

during a verbal recognition task, significant left medial temporal lobe (MTL) activity was

localized to the perirhinal and parahippocampal cortex predominantly during the 150–450 ms

time interval following stimulus onset (Gonsalves et al., 2005). However, since the MEG

sources had been constrained to the cortex only, it is unclear whether the hippocampus was also

involved. When researchers incorporated co-registration of MEG and structural MRI and did not

constrain the MEG localization to cortical sources, activity was localized to the medial temporal

lobe, including the hippocampus and parahippocampal gyrus using both visual and verbal

recognition tasks (Breier et al., 1998, 1999, 2000; Papanicolaou et al., 2002). However,

Papanicolaou and colleagues (2002) only examined the time course of medial temporal lobe

activation in general, and while Breier and colleagues (Breier et al., 1998, 1999, 2000) localized

the activity specifically in the hippocampus and parahippocampal gyrus and found them to be

active between 200–800 ms post-stimulus onset, the exact time course of activity in the

hippocampus was not outlined. Altogether, all of the MEG studies examining recognition

memory of which we are aware reported medial temporal activation when there was sufficient

signal-to-noise ratio for brain localization and when source analysis had not been constrained to

the cortical surface (Breier et al., 1998, 1999, 2000; Gonsalves et al., 2005; Papanicolaou et al.,

2002; Tendolkar et al., 2000). Furthermore, activity in the hippocampus can be detected when

using precise co-registration of MEG and structural MRI (Breier et al., 1998, 1999, 2000;

Papanicolaou et al., 2002).

While the above studies have examined hippocampal activity during recognition memory using

MEG, questions remain regarding precisely when peak hippocampal activity occurs and whether

the manner of activity changes depending on the nature of the stimulus (old/new). For example,

hippocampal activity associated with the recognition of previously studied versus novel items

may peak at the same/different times and/or oscillate in the same/different frequency range. The

purpose of the present study was to provide converging evidence for the earlier work described

above, which outlines the potential of using MEG for localizing hippocampal activity, and to

expand upon it both methodologically and theoretically. We adapted an experimental paradigm


in which participants first studied a series of scenes and scrambled versions of the scenes

(Kirchhoff, Wagner, Maril, & Stern, 2000). Immediately following the study phase, participants had to

distinguish previously studied from novel scenes.

To expand upon prior work methodologically, we provide a comprehensive examination of

electromagnetic activity from the hippocampus by analyzing multiple aspects of processing-

related signal changes in the observed signals. The three analysis methods used were variations

of the beamformer approach (Robinson & Vrba, 1999): Synthetic Aperture Magnetometry

(SAM), Inter-Trial Coherence (ITC) of brain source activity, and Event Related SAM (ER-

SAM). The beamformer approach to MEG data analysis is a two-step procedure: the first step

uses the beamformer as a spatial filter for reconstructing source activity, and in the second step, a

signal statistic is derived from the source activity and mapped volumetrically. While the three

analysis methods in our study use the same beamformer, each method uses different statistics

and varies in their degree of specificity for particular aspects of the data such as spectral and

temporal information. SAM examines the changes in signal power in a certain frequency band

between a specified control and active time window for each volume element. The signal power

statistics include both the phase-locked event-related activity and changes in signal power

induced by the stimulus but not strictly phase-locked. ITC is a normalized measure of neural

synchrony across multiple trials. ITC reveals the time and frequency range in which high

coherence between stimulus and brain activity occurs and provides complementary information

to the signal power statistics in SAM. ER-SAM averages waveforms of source activity across all

trials and examines event-related, time-locked neural responses. Unlike modeling the MEG data

with a small number of equivalent current dipoles (ECD), the beamformer analysis does not

require a priori assumptions about the number of active sources. Also, beamformer algorithms

take advantage of the high dimensionality of the signal space offered by multi-channel MEG in

order to reduce correlations in the data and suppress interfering sources (Cheyne et al., 2006).

Specifically, the entire brain volume is covered by a grid, and at each grid node, the beamformer

maximizes sensitivity for the signal from that node and suppresses the signal from other nodes

(Huang et al., 2004). It should be noted that while the proposed analyses vary in their degree of

specificity for particular aspects of the data such as spectral and temporal information, the

observed measures may not be completely independent because they are affected by properties

of the commonly applied beamformer. Further, the methods of examining the averaged evoked


response with ER-SAM and ITC are asymptotically equivalent for a large number of trials.

However, two important differences exist between ITC and ER-SAM. First, ITC uses the

normalized amplitude of neural activity, which makes the statistics more homogeneous across

the whole brain than ER-SAM. This is important for localizing deep sources, which likely have

lower signal amplitudes than more superficial sources. Second, ITC provides information about

synchrony at a specific frequency, while ER-SAM provides precise timing information. With

the three complementary analysis methods we will give an exhaustive description of relevant

electromagnetic brain activity as expressed in changes in spectral signal power, time course of

event-related activity and changes in signal coherence. To the best of our knowledge, the

application of multiple analysis methods to characterize the different aspects of neural activity

from the hippocampus with the same set of MEG data has not been previously attempted.
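For concreteness, the spatial-filtering step that all three methods share can be written in the standard LCMV form (a schematic sketch in our own notation; it is not intended as a description of the specific software implementation used here):

    w(r) = \frac{C^{-1} L(r)}{L(r)^{T} C^{-1} L(r)}, \qquad \hat{s}_{r}(t) = w(r)^{T} m(t),

where m(t) is the vector of sensor measurements, C is the sensor covariance matrix, L(r) is the forward solution (lead field) for a source at grid node r, and \hat{s}_{r}(t) is the reconstructed source waveform at that node. SAM, ITC, and ER-SAM then differ only in the second step, namely the statistic computed from \hat{s}_{r}(t).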

To expand upon prior work theoretically, through our multi-method approach, we are able to

outline the precise time courses and spectral frequencies of hippocampal activity during

recognition of old and new items. This will provide insights into the nature of recognition

memory, namely, when does the hippocampus begin to participate in recognition memory of, and

does it participate similarly for, old/new information? Such an analysis may speak to questions

regarding the functional role of the hippocampus in distinguishing the familiar from the novel.

3.3 Method

3.3.1 Participants

Thirteen adults (6 males; mean age 28.1 years; 1 left-handed) from the Toronto community with

normal neurological histories and normal or corrected-to-normal vision participated in the study.

The study was approved by the local ethics committee and the rights and privacy of the

participants were observed. All participants gave informed consent before the experiment and

received monetary compensation.

3.3.2 Stimuli

Visual stimuli consisted of 200 pictures of indoor scenes, 200 of outdoor scenes, and 400

scrambled scenes. The resolution of all pictures was 1024 by 768 pixels. The 400 indoor and

outdoor scenes were created from a set of 200 scenes (100 indoor, 100 outdoor) taken from a

repository of scenes in CorelDraw. Each scene was divided into two unique non-overlapping


images to create a set of target scenes and a set of foil images. In this manner, sets of targets and

foils were similar for color, luminance and complexity. Targets were presented during the

encoding phase and as ‘old’ images in the retrieval phase; foils were presented as ‘new’ images

during the retrieval phase. The sets of scenes were counterbalanced such that every scene was

presented equally often as a target and foil across participants. The scrambled scenes were

random patterns generated from permutations of the indoor and outdoor scenes, such that each

scene had a scrambled counterpart, and therefore had similar color and luminance as the original

scenes. Scrambled scenes were made using Adobe Photoshop.

3.3.3 Procedure

The experiment consisted of an encoding and retrieval phase, each lasting approximately 20 min.

MEG was recorded during both phases; however, only the data from the retrieval phase is

presented here. During the encoding phase, participants viewed 200 indoor and outdoor scenes

and 200 matched scrambled scenes. Scenes were presented for 1000 ms with an average inter-

stimulus interval (ISI) of 2000 ms (range 1750–2250 ms). During the ISI a fixation cross

appeared in the center of a black screen (Figure 3.1). Participants were instructed to distinguish

between indoor, outdoor, and scrambled scenes by pressing one of three different buttons with

their right hand. Participants were also informed that there would be a subsequent memory test.

The retrieval phase immediately followed the encoding phase. During retrieval, participants

viewed the 200 previously studied (target) scenes and 200 novel indoor and outdoor scenes (foil

images). Participants were instructed to respond whether they were highly confident that the

picture had been previously studied (‘old’), if they were only somewhat confident that the picture

was ‘old’, or if the picture was 'new'.


Figure 3.1 Example of an indoor and outdoor scene used in the experiment.

3.3.4 Data acquisition

MEG recordings were performed in a magnetically shielded room at the Rotman Research

Institute, Baycrest Hospital for Geriatric Care, using a 151-channel whole head first order

gradiometer system (VSM MedTech Inc.) with detection coils uniformly spaced 31 mm apart on

a helmet-shaped array. Participants sat in an upright position and viewed the stimuli on a back

projection screen that subtended approximately 31 degrees of visual angle when seated 30 in.

from the screen. The MEG collection was synchronized with the onset of the stimulus by

recording the luminance change of the screen. Each participant's head position within the MEG was determined at the start and end of each recording block using indicator coils placed on the nasion

and bilateral preauricular points. These three fiducial points established a head-based Cartesian

coordinate system for representation of the MEG data.

In order to specify/constrain the sources of activation as measured by MEG and to co-register the

brain activity with the individual anatomy, a structural MRI was also obtained for each

participant using standard clinical procedures with a 1.5 T MRI system (Signa EXCITE HD

11.0; GE Healthcare Inc., Waukesha, WI) located at Sunnybrook Health Sciences Centre. All

participants’ anatomical MRIs and MEG source data were spatially normalized to the Talairach

standard brain using AFNI (National Institute of Mental Health, Bethesda, MD, USA) for the

SAM and ITC method and using SPM99 (Wellcome Institute of Cognitive Neurology, London,

UK) for the ER-SAM method to allow for group analysis of functional data.


3.3.5 Data analysis

Analysis methods were applied to scenes that were later correctly identified as ‘new’ (correct-

new) and ‘high confidence old’ (correct-old). For all analyses, the beamformer spatial filter as

provided by the VSM software package was used to estimate source activity on a grid with

regular spacing of 5 mm. Analyses were performed individually for each participant. Resulting

individual volumetric maps of functional brain activity were then transformed into the standard

Talairach space, using the same transform applied to the anatomical MR image. The resultant

functional maps for each time/frequency interval were then averaged across participants. Group

statistics were performed to identify which regions of brain activation were significantly

different from a pre-specified control window on average across all participants. The type of

group statistics applied for each analysis method is consistent with previous work, for example,

permutation test for SAM (Chau, Herdman, & Picton, 2004) and pseudo-z for ER-SAM

(Herdman, Pang, Ressel, Gaetz, & Cheyne, 2007). In order to ensure that significance in the

group-averaged results was not driven by outliers, we also examined individual volumetric maps.

The purpose of the present paper is to explore hippocampal activity in a visual recognition

memory task. As such, we present and discuss only activity restricted to this region.

3.3.5.1 MEG analysis using SAM

The linearly constrained minimum variance (LCMV) beamformer algorithm (Robinson & Rose,

1992; Van Veen, van Drongelen, Yuchtman, & Suzuki, 1997) was used to estimate source

activity in a wide frequency band (0–30 Hz) and specifically in the theta (4–8 Hz) frequency

band (e.g. Tesche & Karhu, 2000). The control window was defined as the time interval from–

500 to –250 ms before stimulus onset, and four active-windows of 250 ms duration between 0

and 1000 ms post-stimulus onset. For the two frequency bands, the differences in signal power

between all active and the control window were normalized to an estimate of noise power. The

resulting expression of stimulus induced relative power changes for each node was termed

pseudo t-statistic, which is a normalized measure of the difference between signal power in the

active and control window (Robinson & Vrba, 1999). Pseudo-t values at all nodes were

compiled to generate a volumetric map of neuronal power changes for each post-stimulus

interval and each frequency band. This calculation was performed for both ‘correct-new’ and

‘correct-old’ scenes. SAM volumetric maps were viewed in AFNI and only spatially distinct

regions of activity overlying the hippocampus were considered. Permutation tests were applied


separately for the group-averaged volumetric maps corresponding to each time interval,

frequency band, and both types of scenes in order to identify the brain regions with significant

(α=0.05) signal power changes (Chau et al., 2004).
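In schematic form (our notation, following the pseudo-t construction of Robinson and Vrba, 1999), the statistic at each node contrasts band-limited source power P between the active and control windows, normalized by the corresponding noise-power estimates N:

    \text{pseudo-}t = \frac{P_{\text{active}} - P_{\text{control}}}{N_{\text{active}} + N_{\text{control}}}.

Positive values indicate stimulus-related increases in signal power relative to the pre-stimulus control window; negative values indicate decreases.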

3.3.5.2 MEG analysis using ITC

The beamformer algorithm was applied to the 0–100 Hz wide band filtered MEG data in the –

1000 to 1000 ms time interval relative to stimulus onset to define a spatial filter. Source

waveforms at all volume elements were obtained from spatially filtering the MEG data. Morlet

wavelet transform of the source waveforms provided phase information over each 250 ms time

interval between −1000 and 1000 ms, at seven frequencies centered approximately at 4, 6,

9, 13, 19, 28, and 41 Hz. ITC is a statistic describing the distribution of phase values across

repeated trials (Fisher, 1993). ITC is zero in the case of the phase being uniformly distributed

between 0 and 2π, which means that the signal does not show any stimulus-related contributions

in the specific time and frequency interval. In contrast, ITC is close to 1 if the phase values are

concentrated around a mean value indicating that the brain signal is strongly synchronized with

the stimulus. A more detailed description of the analysis can be found in Bardouille and Ross

(2008).
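Formally, for N trials with phase \phi_{n}(t, f) at time t and frequency f,

    \text{ITC}(t, f) = \left| \frac{1}{N} \sum_{n=1}^{N} e^{i \phi_{n}(t, f)} \right|.

As a purely illustrative sketch of how this statistic can be computed for one voxel's source waveforms (Python/NumPy; all names are ours, and this is not the analysis software used in the study):

    import numpy as np

    def morlet_itc(trials, fs, freq, n_cycles=5.0):
        # trials: (n_trials, n_samples) array of source waveforms from one
        # voxel, epoched around stimulus onset; fs: sampling rate in Hz.
        n_trials, n_samples = trials.shape
        # Build a complex Morlet wavelet centred at the target frequency.
        sigma_t = n_cycles / (2.0 * np.pi * freq)
        t = np.arange(-3.0 * sigma_t, 3.0 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
        # Instantaneous phase of each trial at this frequency.
        phases = np.empty((n_trials, n_samples))
        for k in range(n_trials):
            analytic = np.convolve(trials[k], wavelet, mode="same")
            phases[k] = np.angle(analytic)
        # ITC is the length of the mean unit phase vector across trials:
        # 0 for uniformly distributed phases, 1 for perfect phase locking.
        return np.abs(np.mean(np.exp(1j * phases), axis=0))

Averaging this time course over 250 ms windows corresponds to the time intervals examined here.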

Volumetric calculation of ITC in time-frequency domain results in a five dimensional data set

(three spatial dimensions, time, and frequency). In order to find relevant regions of interest, first

the locations of right and left hippocampus were identified on each participant's MRI. ITC values

for the closest grid node were compiled to generate time-frequency plots for both ‘correct-old’

and ‘correct-new’ scenes. These plots were used as a descriptive guideline to examine more

specific spectral and temporal information in whole-head volumetric ITC maps and no statistics

were applied. Whole-head volumetric ITC maps were generated for both types of scenes during

any specific time/frequency intervals that depicted high inter-trial coherence in the hippocampus.

Volumetric ITC maps were spatially normalized and group-averages were calculated as the mean

ITC value across corresponding voxels. Individual and group-averaged volumetric ITC maps

were visualized in AFNI using each participant's own MRI and the group-averaged MRI,

respectively. In order to estimate the distribution of ITC amplitudes under the null hypothesis,

we examined ITC values during the baseline period (−750 to −500 ms) for each participant


and the group average, as outlined in Bardouille and Ross (2008). Only values exceeding the

95% level of this distribution were considered.
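A hypothetical illustration of this thresholding step, reusing the morlet_itc sketch from above (the variable names and sampling rate are our assumptions, not values taken from the study):

    import numpy as np
    rng = np.random.default_rng(0)
    fs = 600.0  # assumed sampling rate (Hz); not reported in this chapter
    # 100 trials of baseline-window source data stand in for real recordings.
    baseline = rng.standard_normal((100, 150))
    null_itc = morlet_itc(baseline, fs, freq=6.0)
    threshold = np.percentile(null_itc, 95)  # 95% level of the null distribution
    # Post-stimulus ITC values exceeding this threshold would be retained.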

3.3.5.3 MEG analysis using ER-SAM

The beamformer algorithm was used to define a spatial filter based on the MEG data in the 0–30

Hz frequency and −1000 ms to 1000 ms time interval. The spatial filter was applied to the time

domain averaged MEG and normalized to a noise estimate, which resulted in time courses of a

pseudo-z statistic corresponding to the amount of event-related brain activity in each volume

element across the entire time interval (−1000 to 1000 ms). The pseudo-z is like the pseudo-t statistic

used in SAM except that it is applied to multiple time points (every 5 ms) rather than normalized

over time (i.e. 250 ms intervals), making it more appropriate for evoked and averaged data.
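Schematically (our notation), the pseudo-z time course at a node r is the spatially filtered, trial-averaged sensor signal normalized by a projected-noise estimate:

    \text{pseudo-}z(r, t) = \frac{w(r)^{T} \bar{m}(t)}{\hat{\sigma}_{\text{noise}}(r)},

where \bar{m}(t) is the sensor data averaged across trials and \hat{\sigma}_{\text{noise}}(r) estimates the noise amplitude passed by the spatial filter at that node.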

Individual volumetric maps of the magnitude of pseudo-z values were transformed onto a

normalized brain and averaged across all participants. Individual and group-averaged SAM

maps were calculated for ‘correct-new’ and ‘correct-old’ scenes and the post-stimulus time

interval (0–1000 ms) was examined. A distribution of the pseudo-z values under the null

hypothesis was estimated from randomly sampled data in the pre-stimulus interval and

thresholds for α≤0.05 were obtained for all volume elements (Herdman et al., 2007). Threshold

values for the group-averaged data were based on the pre-stimulus interval in the group-averaged

data and threshold values for individual volumetric maps were based on the pre-stimulus interval

for each participant. ER-SAM maps were thresholded accordingly and the locations of

activation peaks in the remaining data were identified using a customized MATLAB procedure.

This procedure, provided by the CTF software package, marks peaks in the volumetric data by

first finding the voxel containing the maximum value within a 3-voxel (15×15×15 mm) volume

after the ER-SAM image is thresholded, and then removes all voxels in the surrounding region

that are contiguous or lower in magnitude than the maximum. The next peak is found as the

maximum value in the remaining volume. This procedure is repeated until the entire volume has

been scanned. For individual ER-SAM maps, peaks found in or within less than 1 cm of the

hippocampus were considered. Time courses of the magnitudes of event-related neural activities

(pseudo-z values) were calculated for the identified locations of peak activity from the grand-

averaged ER-SAM maps. In order to examine any differences in hippocampal activity between

processing of ‘correct-new’ and ‘correct-old’ scenes, we also performed a contrast between the

two types of scenes.
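The iterative peak-marking logic can be illustrated with a simplified sketch (Python/NumPy; names are ours). Unlike the CTF procedure described above, this version simply zeroes a fixed 15×15×15 mm box (three voxels at the 5 mm grid spacing) around each peak rather than tracing contiguous voxels:

    import numpy as np

    def find_peaks(volume, threshold, half_width=1):
        # volume: 3-D array of pseudo-z magnitudes on the 5 mm source grid.
        # half_width=1 suppresses a 3x3x3-voxel (15x15x15 mm) box per peak.
        vol = np.where(volume >= threshold, volume, 0.0)
        peaks = []
        while vol.max() > 0.0:
            idx = np.unravel_index(np.argmax(vol), vol.shape)
            peaks.append((idx, float(vol[idx])))
            # Zero out the region surrounding the current peak, then repeat
            # with the maximum of the remaining volume.
            box = tuple(slice(max(i - half_width, 0), i + half_width + 1)
                        for i in idx)
            vol[box] = 0.0
        return peaks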


3.4 Results

3.4.1 Behavioral responses

Three participants were excluded from all analyses due to low numbers of total correct responses

(below 25%). Participants were significantly more likely to correctly identify old (hit) scenes

with high confidence and novel scenes (correct rejection) than would be expected by chance

(old: t(9) = 2.68, p<.05; novel: t(9) = 3.03, p<.05). The incidence of hits and correct rejections

did not differ significantly from each other (t(9) = 1.61, p>.1). A summary of the behavioral

recognition results can be found in Table 3.1. Behavioral results are similar to those obtained in fMRI studies using a similar number of stimuli (e.g. Kirchhoff et al., 2000).

Table 3.1 Average accuracy for correctly identifying a scene as ‘old’ or ‘new’

Response: Old scenes % (SEM) New scenes % (SEM)

High confident old 41.2 (3.06) 18.3 (3.13)

Low confident old 16.8 (3.58) 19.8 (4.29)

New 34.2 (5.57) 53.1 (6.63)
Standard errors of the mean (SEM) are noted in parentheses.

3.4.2 Signal power changes in neural responses: SAM

SAM maps for each time interval and frequency band for both ‘correct-new’ and ‘correct-old’

scenes were examined. Permutation tests performed on the group averaged activity did not

reveal significant differences between the pre-stimulus control and active intervals in the

hippocampus for either scene type (α=0.05). The only activity revealed to be significantly

different from the control interval was within the visual cortex. However, the permutation test

estimates a threshold common for all voxels in the brain and this may be too conservative for

deep sources such as the hippocampus. We further explored the data by lowering the threshold

limit and found spatially distinct activity in the right hippocampus for ‘correct-new’ scenes and

in the parahippocampal region for ‘correct-old’ scenes during the same time interval and frequency band (Figure 3.2).


Figure 3.2 Group-averaged SAM activation maps

SAM activation maps are shown for the theta frequency band during 0–250 ms post-stimulus onset.

Activity is below the significance level of p=.05 for the group statistics (pseudo-t value=.49). However,

when the raw data were viewed in AFNI, spatially distinct activity in the hippocampus (pseudo-t

value=.15) and parahippocampal region (pseudo-t value=.15) was observed for correct-new and correct-

old scenes, respectively. Black cross-hairs indicate the location of the regional peak, also reported in Talairach co-ordinates (correct-new: 23 -21 -15, source SAM value .1544; correct-old: 34 -40 -6, source SAM value .1473).

3.4.3 Coherence in neural responses: ITC

Averaged ITC values in time-frequency domain revealed stimulus-locked activation of the

hippocampus during the 0–250 ms and 250–500 ms time interval following stimulus onset for

the frequency bands up to 12 Hz for both ‘correct-new’ and ‘correct-old’ scenes. ITC measures

in the hippocampus were specifically expressed in the first two frequency bins, which correspond

with the delta (1–4 Hz) and theta frequency range (4–8 Hz), respectively (Figure 3.3).



Figure 3.3 Group-averaged time-frequency maps of ITC for the hippocampus

ITC maps represent the left and right hippocampus for scenes correctly identified as ‘new’ (left) or ‘old’

(right) with high confidence.

For the group-averaged data, volumetric maps for the theta band and the two time intervals of 0–

250 ms and 250–500 ms revealed spatially distinct activity exceeding group baseline threshold

values in the right hippocampus for ‘correct-new’ and ‘correct-old’ scenes across both time

intervals (Figure 3.4). Theta band activity in the left hippocampus was observed as a spatially distinct activation for ‘correct-old’ scenes during 250–500 ms.



Figure 3.4 Group-averaged volumetric maps of ITC

Volumetric maps of ITC are shown in approximately the theta frequency band (4–8 Hz, center frequency

6 Hz) during the 0–250 ms (top) and 250–500 ms time intervals (bottom) for scenes correctly identified as

‘new’ (left) and ‘old’ (right). Whereas theta band ITC in the right hippocampus was expressed for both

time intervals for both ‘correct-new’ and ‘correct-old’ scenes, ITC in the left hippocampus was found

during the 250–500ms interval only for ‘correct-old’ scenes. Black cross-hairs indicate the position of the

hippocampal peak, also reported in Talairach co-ordinates (0–250 ms: correct-new 28 -28 -8, ITC .33, correct-old 19 -18 -13, ITC .23; 250–500 ms: correct-new 32 -34 -9, ITC .32, correct-old -22 -32 -6 (L) and 29 -38 -4 (R), ITC .21 and .20). All values exceeding .14 and .17 were considered significant for ‘correct-new’ and ‘correct-old’ scenes, respectively, based on threshold levels derived from the baseline period. All activity shown exceeded the 95% level of the baseline distribution.

Individual volumetric maps for ITC in the theta band were examined for each participant.

Consistent with group-averaged results, theta band synchrony in or within less than 1 cm of the

right hippocampus exceeding the significance threshold occurred in 8 participants during 0–250

ms and 5 participants during 250–500 ms for ‘correct-new’ scenes, and 6 participants during 0–

250 ms and 6 participants during 250–500 ms for ‘correct-old’ scenes. Group-averaged results

also revealed theta band synchrony for ‘correct-old’ scenes during 250–500 ms in the left


hippocampus. Significant activity was found in individual volumetric maps for 2 participants

(Table 3.2). Examples of hippocampal activity found for individual participants are shown in Figure 3.5, and the averaged locations of peak hippocampal activity based on individual participants’ ITC maps are shown in Figure 3.6.

Table 3.2 Source ITC values and Talairach co-ordinates for individual ITC maps showing

hippocampal activity

A. Correct-New

Subject | Max. ITC value for Hpc during baseline | Max. ITC value during baseline | Local maxima of Hpc during 0–250 ms (Tal.) | Max. ITC value | Local maxima of Hpc during 250–500 ms (Tal.) | Max. ITC value
S1 | .12 | .22 | R: 25 -14 -13 | .58 | R: 25 -30 -7 | .33
S2 | .12 | .18 | R: 23 -23 -5 | .21 | n/a | n/a
S3 | .20 | .27 | L: -21 -39 0 | .42 | n/a | n/a
S4 | .20 | .34 | n/a | n/a | n/a | n/a
S5 | .17 | .26 | R: 31 -41 2 | .73 | R: 30 -28 -7 | .54
S6 | .10 | .17 | L: -23 -46 5; R: 25 -25 -14 | .51; .24 | R: 27 -14 -11 | .32
S7 | .23 | .29 | R: 29 -29 -10 | .64 | R: 22 -38 4 | .42
S8 | .24 | .31 | R: 28 -43 2 | .49 | L: -27 -32 -5; R: 26 -44 3 | .35; .42
S9 | .15 | .29 | R: 24 -10 -18 | .34 | L: -24 -37 -5 | .33
S10 | .19 | .23 | L: -20 -38 0; R: 15 -39 5 | .40; .32 | L: -18 -11 -12 | .25
Average (stdev) | .17 (.05) | .26 (.05) | L: -21 -41 2 (2 4 3); R: 25 -28 -6 (5 12 9) | .44 (.06); .44 (.20) | L: -23 -27 -7 (5 14 4); R: 26 -31 -4 (3 16 8) | .31 (.05); .41 (.09)
Total | 0–250 ms: L: 3 participants, R: 8 participants, Either: 9 participants | 250–500 ms: L: 3 participants, R: 5 participants, Either: 7 participants

B. Correct-Old

Subject | Max. ITC value for Hpc during baseline | Max. ITC value during baseline | Local maxima during 0–250 ms (Tal.) | Max. ITC value | Local maxima during 250–500 ms (Tal.) | Max. ITC value
S1 | .13 | .31 | R: 25 -19 -12 | .41 | R: 16 -38 3 | .34
S2 | .26 | .36 | n/a | n/a | n/a | n/a
S3 | .13 | .28 | L: -25 -41 -3; R: 30 -11 -20 | .42; .38 | n/a | n/a
S4 | .22 | .29 | L: -30 -24 -15 | .32 | n/a | n/a
S5 | .16 | .26 | L: -18 -18 -14; R: 34 -39 0 | .34; .55 | L: -34 -35 -6; R: 31 -31 -6 | .30; .41
S6 | .17 | .32 | L: -24 -41 0 | .38 | n/a | n/a
S7 | .12 | .24 | L: -23 -36 1; R: 35 -31 -11 | .47; .69 | L: -13 -15 -14; R: 30 -31 -8 | .33; .43
S8 | .18 | .33 | R: 30 -32 -5 | .55 | R: 21 -29 -6 | .34
S9 | .22 | .40 | n/a | n/a | n/a | n/a
S10 | .18 | .24 | L: -21 -37 0; R: 24 -40 2 | .38; .31 | R: 26 -24 -12 | .25
Average (stdev) | .18 | .30 | L: -24 -33 -5 (4 11 8); R: 30 -29 -8 (5 12 8) | .37 (.06); .48 (.14) | L: -24 -25 -10 (15 14 6); R: 25 -31 -6 (6 5 6) | .31 (.02); .35 (.07)
Total | 0–250 ms: L: 6 participants, R: 6 participants, Either: 8 participants | 250–500 ms: L: 2 participants, R: 5 participants, Either: 5 participants

All reported hippocampal activity was above threshold and in or within 10 mm of the hippocampus (Hpc) for (A) ‘correct-new’ and (B) ‘correct-old’ scenes (L = left hippocampus; R = right hippocampus). Where a cell contains two values separated by a semicolon, they correspond to the left (L) and right (R) peaks listed in the same row, in that order.


Figure 3.5 Representative individual volumetric maps of inter-trial coherence

Inter-trial coherence was in approximately the theta frequency band (center frequency 6 Hz) during the (A) 0–250 ms and (B) 250–500 ms time intervals for scenes correctly identified as ‘new’ (left) and ‘old’ (right). All activity shown significantly exceeded baseline levels. [Panel A shows representative maps from participants s1, s2, s4, and s6; Panel B shows representative maps from participants s5, s7, s8, and s10.]


Figure 3.6 The averaged location of hippocampal activity based on individual ITC maps

3.4.4 Averaged event-related neural responses: ER-SAM

Time courses of grand-averaged ER-SAM data revealed peaks of activity in the right and left

hippocampus for ‘correct-new’ scenes, which were significantly different from baseline for the

group (α=0.05) (Figure 3.7). Activity in the right hippocampus peaked at 225 ms post-stimulus

onset. Two smaller peaks were also observed between 300–450 ms. Activity in the left

hippocampus peaked initially at 130 ms post-stimulus onset and three smaller peaks were

observed between 200–350 ms. For ‘correct-old’ scenes, significant activity was found for the

left parahippocampal gyrus. Peak activity occurred 120 ms post-stimulus onset. Two smaller

peaks were also found between 500–600 ms. All reported peaks were above the threshold level

of α=0.05.



Figure 3.7 Group-averaged time-courses of neural activity emanating from the

hippocampus as revealed by the ER-SAM analysis

Neural sources are marked with red dots in the MRIs and the location of the hippocampal peak is reported

in Talairach co-ordinates. The maximum source strength (pseudo-z) and the time of the maximum peak (ms) are also reported (correct-new: left hippocampus -25 -9 -15, .33 at 130 ms; right hippocampus 31 -21 -11, .33 at 225 ms; correct-old: left parahippocampal gyrus -37 -36 -7, .42 at 120 ms).

Consistent with group-averaged results, individual ER-SAM data revealed bilateral activity in

the hippocampal region for ‘correct-new’ scenes with 7 participants showing activity in the left

hippocampal region and 6 participants showing activity in the right hippocampal region. For

‘correct-old’ scenes, we found 4 participants showing activity in the left and 4 in the right

hippocampal region. All peaks identified in the individual ER-SAM maps were significantly

different from baseline (α=0.01) and occurred predominantly within the first 500 ms post-

stimulus onset (Table 3.3). Examples of peaks found in or within less than 1 cm of the hippocampus for individual participants are shown in Figure 3.8, and the averaged location of peak hippocampal activity based on individual participants’ ER-SAM maps is shown in Figure 3.9. The contrast between ‘correct-new’ and ‘correct-old’ scenes revealed no significant

differences in the hippocampus.



Table 3.3 Source ER-SAM values for individual participants within the hippocampus

Correct-New

Subject | Local maxima (Tal.), p<.01 | Baseline value (pseudo-z) | Time activity reached p<.01 (ms) | Value of 1st peak (pseudo-z) | Time of 1st peak (ms) | Value of max. peak (pseudo-z) | Time of max. peak (ms)
S1 | L: -29 -36 -4; R: 35 -25 -21 | .37 | 120; 95 | .46; .40 | 125; 95 | 1.0; 1.6 | 635; 150
S2 | R: 27 -32 -7 | .27 | 105 | .35 | 110 | .72 | 275
S3 | L: -21 -32 -7 | .42 | 110 | .67 | 130 | .89 | 330
S4 | L: -33 -40 -4 | .57 | 150 | .57 | 150 | .74 | 460
S5 | L: -29 -28 -8; R: 31 -13 -19 | .25 | 90; 85 | .45; .25 | 95; 85 | 1.01; 1.12 | 205; 220
S6 | L: -25 -33 -14; R: 27 -17 -14 | .26 | 110; 210 | .33; .58 | 115; 230 | .48; .59 | 215; 445
S7 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
S8 | L: -29 -13 -19; R: 23 -2 -16 | .32 | 90; 60 | .32; .38 | 90; 60 | .63; .91 | 335; 165
S9 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
S10 | L: -41 -25 -15; R: 39 -21 -11 | .35 | 85; 100 | .58; .65 | 90; 105 | .76; .65 | 695; 105
Average (stdev) | L: -30 -30 -10 (6 9 6); R: 30 -18 -15 (6 10 5) | .35 (.11) | 139.29 (50.85); 109.17 (51.91) | .48 (.13); .44 (.15) | 113.57 (23.04); 119.17 (58.77) | .79 (.19); .83 (.21) | 410.71 (194.26); 307.5 (198.08)
Total | L: 7 participants; R: 6 participants; Either: 8 participants

Correct-Old

Subject | Local maxima (Tal.), p<.01 | Baseline value (pseudo-z) | Time activity reached p<.01 (ms) | Value of 1st peak (pseudo-z) | Time of 1st peak (ms) | Value of max. peak (pseudo-z) | Time of max. peak (ms)
S1 | L: -29 -28 -8 | .36 | 110 | .64 | 120 | .95 | 535
S2 | L: -21 -13 -15; R: 27 -24 -8 | .36 | 125; 925 | .44; .52 | 160; 930 | .57; .52 | 230; 930
S3 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
S4 | L: -33 -25 -11 | .50 | 230 | .51 | 230 | .82 | 885
S5 | L: -29 -17 -18; R: 19 -6 -19 | .3002 | 130; 180 | .71; .98 | 135; 225 | 1.63; .98 | 230; 225
S6 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
S7 | R: 23 -29 -11 | .3392 | 60 | .40 | 65 | .54 | 130
S8 | R: 27 -2 -16 | .4387 | 135 | .90 | 165 | .90 | 165
S9 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
S10 | n/a | n/a | n/a | n/a | n/a | n/a | n/a
Average (stdev) | L: -28 -21 -13 (5 7 4); R: 24 -15 -14 (4 13 5) | .38 (.07) | 148.75 (54.83); 325.00 (403.05) | .58 (.12); .70 (.28) | 161.25 (48.71); 346.25 (394.72) | .99 (.45); .74 (.24) | 470 (311.80); 362.5 (380.36)
Total | L: 4 participants; R: 4 participants; Either: 6 participants

The table includes Talairach co-ordinates and times of the first and maximum peaks for participants showing activity above threshold in or within 1 cm of the hippocampus (Hpc) for ‘correct-new’ and ‘correct-old’ scenes in individual ER-SAM maps (L = left hippocampus; R = right hippocampus). Where a cell contains two values separated by a semicolon, they correspond to the left (L) and right (R) peaks listed in the same row, in that order.


Figure 3.8 Representative individual time-courses of hippocampal activity

Individual maps and time courses from the left and right hippocampus for ‘correct-new’ (left) and ‘correct-

old’ (right) scenes as revealed by the ER-SAM analysis. Neural sources are marked with red dots in the

MRIs and the Talairach coordinates are provided.

Figure 3.9 The averaged location of hippocampal activity based on individual ER-SAM

maps

Black cross-hairs indicate the location of the hippocampal peak, also reported in Talairach coordinates.

3.5 Discussion

We studied the potential of advanced MEG approaches to localize and outline different aspects

of hippocampal activity in a memory recognition task by using three different analyses: SAM for

event related changes in spectral power, ITC as a measure of stimulus related coherence in neural


responses and ER-SAM as volumetric representation of the averaged event related neural

response. We observed hippocampal activity predominantly in the theta frequency band and

within the first 200 ms post-stimulus onset for both the successful recognition of novel (‘correct-

new’) and previously studied (‘correct-old’) scenes. Analyzing multiple features of

electromagnetic brain activity, in conjunction with previous work, provided converging evidence

for the feasibility of localizing activity from the hippocampus with MEG (Breier et al., 1998;

Campo, Maestu, Capilla, et al., 2005; Hamada et al., 2004; Hanlon et al., 2003; Hanlon et al.,

2005; Ioannides et al., 1995; Kirsch et al., 2003; T. Martin et al., 2007; Nishitani et al., 1998;

Tesche, 1997; Tesche & Karhu, 1999, 2000; Tesche et al., 1996). Below, we summarize our

findings from the different data analyses and discuss their advantages and limitations. In relating the current findings to previous work, we suggest that the functional role of the

hippocampus in recognition memory may be related to general processing requirements common

to recognizing both novel and previously studied information, such as comparing the externally

presented stimuli with internal memory traces. Further, we argue that hippocampally-mediated

processes supporting recognition memory occur rapidly following stimulus onset. The

observation of early hippocampal activity has implications for theories regarding memory;

namely, recognition may be an obligatory process and/or may influence perceptual processing.

3.5.1 Multiple MEG data analyses

In applying three different analysis techniques, we were able to extract unique complementary

information pertaining to hippocampal activity during a recognition memory task. Specifically,

information regarding the underlying spectral frequencies and temporal dynamics of

hippocampal responses was outlined. When we viewed the group-averaged SAM results, no

activity above threshold levels was observed in the hippocampus for either ‘correct-new’ or

‘correct-old’ scenes.

SAM analysis is based on the analysis of signal power changes between the pre- and post-

stimulus time window and the signal power measure includes both time-locked evoked responses

and non-phase-locked or induced activity. It is possible that the signal power changes in

hippocampal activity did not reach significance threshold because it occurs predominantly as an

evoked response and the inclusion of induced activity reduced its overall statistical power (but

see, Guderian & Duzel, 2005). It is also possible that the permutation test used was too


conservative. The only activity revealed to be significant after the permutation test was from

superficial sources within the visual cortex, even though multiple regions beyond the visual

cortex, such as the parietal and frontal cortex, are thought to be involved in visual recognition

(e.g. Buckner, 2003; Buckner, Wheeler, & Sheridan, 2001; Tulving, Markowitsch, Craik, Habib,

& Houle, 1996; Weis, Klaver, Reul, Elger, & Fernandez, 2004). However, when we further

examined the data by lowering the threshold, we found spatially distinct activity in the right

hippocampus for ‘correct-new’ scenes and near the right hippocampus or parahippocampal gyrus

for ‘correct-old’ scenes, in the theta frequency band during the 0–250 ms time interval. This

suggests that hippocampal activity may include some signal power changes in the theta

frequency band that is not strictly phase-locked, but this was not strong enough to reach

statistical significance. The permutation test estimates a threshold value common for all voxels

in the brain, but the distribution is likely not homogeneous across the brain volume. Further, the

level at which neural activity is determined to be significantly different from baseline depends on

the number of neurons active, the amplitude of activity, and the amount of synchrony among

neural assemblies. While the amount of neural synchrony does not change with distance from

the sensors, amplitude of activity becomes weaker farther away from the sensors, making it very

difficult for deep source activity to reach threshold levels. Altogether, this suggests that the

permutation test may be too conservative for an examination of deep source activity and/or the

dominant feature of the hippocampal response in a recognition memory task is not induced.

With ITC analysis we found high levels of bilateral hippocampal coherence in the theta

frequency range during the 0–500 ms post-stimulus onset time interval for both ‘correct-new’

and ‘correct-old’ scenes. In examining the group-averaged volumetric maps, spatially distinct

sources of activity could be seen in the right hippocampus across the entire time interval and for

both types of scenes. This was confirmed in the individual participant analysis. However, from

the ITC maps, it can be seen that hippocampal theta band activity was most synchronous during

the first 250 ms after stimulus onset (Figures 3.3 and 3.4), and became less phase-locked to the

stimulus over time. Group-averaged results also revealed theta band synchrony in the left

hippocampus during 250–500 ms for ‘correct-old’ scenes. This was confirmed in individual

analyses for two participants. Likely, hippocampal synchrony in most participants was below

the threshold for individual analyses, but averaging data from all of the participants increased

statistical power. The ITC statistic is bound between 0 and 1, thus it is unlikely that single


individual data had skewed the group results toward significance. Despite this, it is important to

note that theta band synchrony in the right hippocampus was consistently found in both the

group-averaged and the individual data.

Using ER-SAM, we found significant bilateral hippocampal activity for ‘correct-new’ scenes in

the group-averaged data. This was confirmed for the majority of participants in the individual

analysis. Group-averaged data also revealed significant left parahippocampal activity for

‘correct-old’ scenes, which was found for 4 participants in the individual analysis. It is

important to note that the individual analysis was completed at α=.01 whereas the group-

averaged results were viewed at α=.05. This more conservative criterion for the individual

analysis may have resulted in a smaller than expected number of participants showing activity in

the hippocampal region. In the group-averaged data, hippocampal activity was found during the

100–150 ms post-stimulus period for both ‘correct-new’ and ‘correct-old’ scenes (Breier et al.,

1998; Gonsalves et al., 2005; Guderian & Duzel, 2005).

While SAM is an amplitude-based analysis method, ITC, in contrast, measures the degree of

neural synchrony, and thus provides a more homogeneous statistic across the brain volume.

Thus, ITC may be a more appropriate analysis method for the examination of deep sources in the

presence of activity from other more superficial sources. However, ITC improves the signal-to-

noise ratio for the synchrony measure by integrating over relatively long (e.g. 250 ms) time

windows (Bardouille & Ross, 2008). ER-SAM, in contrast, can determine the latency of the

maximal evoked response with millisecond precision. These two methods can be used in a

complementary fashion to understand the temporal and spectral dynamics of evoked responses.
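The ITC statistic itself is the length of the mean resultant vector of single-trial phases, which is why it is bounded between 0 and 1. The sketch below (plain numpy/scipy with synthetic data, not the pipeline used here) illustrates the computation for band-limited signals:

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(trials):
    """ITC for band-limited single-trial data.

    trials: (n_trials, n_times) array, already filtered to the band of
    interest (e.g. theta, 4-8 Hz). Returns ITC per time point: 0 means
    random phase across trials, 1 means perfect phase-locking.
    """
    phase = np.angle(hilbert(trials, axis=1))            # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))   # mean resultant length

# Synthetic example: 100 trials of a 6 Hz oscillation with small phase jitter
rng = np.random.default_rng(0)
t = np.arange(0, 0.25, 1 / 500.0)                        # 250 ms at 500 Hz
trials = np.sin(2 * np.pi * 6 * t + 0.2 * rng.standard_normal((100, 1)))
print(inter_trial_coherence(trials).mean())              # near 1: phase-locked
```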

In applying three complementary analysis methods to the same set of data, we were able to

consistently localize hippocampal activity in two of the three methods. Below, we explore the

similarities and differences in findings from ITC and ER-SAM.

3.5.2 Consistency across the data analyses

Both ITC and ER-SAM revealed time-locked hippocampal activity within the first 250 ms of

viewing ‘correct-new’ and ‘correct-old’ scenes. Frequency analysis (ITC) also revealed that this

hippocampal activity consistently oscillated within the theta frequency band. While the obtained

results were consistent in terms of temporal dynamics and frequency of hippocampal activity,

there were some differences in the findings that should be discussed.


For ‘correct-new’ scenes, group-averaged results revealed significant neural synchrony (ITC) in

the right hippocampus and significant increases in evoked activity (ER-SAM) in bilateral

hippocampi. This is consistent with previous studies showing that whereas verbal information

tends to elicit activity in the left hippocampus, visual information, such as that used in the

present experiment, tends to elicit activity in either the right or bilateral hippocampi (Breier et

al., 1998; Golby et al., 2001; Gonsalves et al., 2005; Kelley et al., 1998; A. Martin, Wiggs, &

Weisberg, 1997; Stern et al., 1996). It is possible that activity in the left hippocampus failed to

reach significance in the ITC analysis.

For ‘correct-old’ scenes, ITC localized activity to the hippocampus and ER-SAM showed that

the peak of activity was within the parahippocampal gyrus. It is possible that both the

hippocampus and parahippocampal gyrus were activated (Gonsalves et al., 2005; Kapur et al.,

1995; Nyberg, McIntosh, Houle, Nilsson, & Tulving, 1996; Rombouts, Barkhof, Witter,

Machielsen, & Scheltens, 2001; Stark & Okado, 2003), but in the ER-SAM analysis, the peak of

activity was placed within the parahippocampal gyrus. As mentioned earlier, if the hippocampus

and parahippocampus are active simultaneously, MEG tends to place the peak of activity within

a single source (Stephen et al., 2005). It is also possible that the signal-to-noise ratio for

‘correct-new’ scenes was higher than that for ‘correct-old’ scenes since the average number of

trials was greater. A higher signal-to-noise ratio allows for greater power and sensitivity to the

localization of functional MEG data, and thus greater ability to localize deeper sources

(Hämäläinen et al., 1993).

While both ITC and ER-SAM analyses identify brain activity that is time locked to the stimulus

event, ITC is more specific in frequency information, and ER-SAM is more specific in temporal

information. However, an evoked response will generate high coherence values at low

frequencies (i.e. delta and theta) over sub-second time intervals. Thus, it is difficult to

differentiate between an evoked response and synchronous oscillatory activity in this case.

Given that ITC and ER-SAM examine different aspects of hippocampal activity, it may not be

surprising that differences in laterality and precise localization are observed. This makes clear

that claims about hippocampal activity must take into account which specific feature of brain activity was observed.


The present study, in conjunction with previous MEG studies, shows that hippocampal activity

can be successfully localized using MEG, and that it is characterized by different aspects

pertaining to evoked- versus induced-response, frequency, and time. Further, depending on

which aspect of the hippocampal activity one is interested in, it is important to select the

appropriate analysis method. In the present experiment, we found that hippocampal responses

occurred predominantly as a time-locked or evoked response, in the theta frequency band and

within 200 ms following stimulus onset during recognition of previously studied and novel

stimuli. Critically, we found no significant differences in the hippocampus between ‘correct-

new’ and ‘correct-old’ scenes using ER-SAM. This suggests that the functional role of the

hippocampus may be related to general memory processing requirements common for both the

viewing of new and old information (N. J. Cohen et al., 1999). Below, we focus on the

theoretical implications for the functional role of the hippocampus in light of the present results.

3.5.3 Theoretical implications

A general processing requirement for viewing new and old information in a recognition task is

the comparison of the external stimulus that is represented in the sensory cortices with internal

memory traces that may be stored within multiple neural assemblies (Ryan et al., 2008). This

‘comparison’ process (James, 1983; Ryan & Cohen, 2004) is thought to rely not only on the

hippocampus (Hannula, Federmeier, & Cohen, 2006; Rugg et al., 1996; Ryan & Cohen, 2004),

but also sensory cortices where external and internal information is processed and held online

(Ryan et al., 2008; Vaidya, Zhao, Desmond, & Gabrieli, 2002; Wheeler, Petersen, & Buckner,

2000), and prefrontal regions where search strategies are executed and monitored (Buckner,

2003; Koriat, 2000). During comparison, the functional role of the hippocampus may be to

coordinate activity between different neural regions and allow for the exchange of information in

a phase-locked manner via theta oscillations (Buzsaki, 2002; Duzel, Picton, et al., 2001; Duzel,

Vargha-Khadem, et al., 2001; Rugg et al., 1996; M. E. Smith & Halgren, 1989). The current

findings revealed that hippocampal oscillations occurred within the theta frequency band,

consistent with other work that has observed hippocampal theta oscillations in animal (Huxter,

Burgess, & O'Keefe, 2003; O’Keefe & Nadel, 1978; Wiebe & Staubli, 2001), human intracranial

(Raghavachari et al., 2001; Rizzuto et al., 2003; Sederberg, Kahana, Howard, Donner, & Madsen,

2003) and imaging studies (Guderian & Duzel, 2005; Osipova et al., 2006; Tesche & Karhu,

2000).


An examination of the temporal dynamics revealed that hippocampal activity was evident as

early as 120–130 ms following stimulus onset (Breier et al., 1998; Gonsalves et al., 2005). This time frame is typically associated with perception of externally presented stimuli, independent of the hippocampus (Tsivilis et al., 2001); however, the current findings suggest that the

hippocampus may be involved during early perceptual processing of old/new information. The

functional role of the hippocampus during this stage may be to aid non-mnemonic visual

discrimination of the externally presented stimuli (Barense, Gaffan, & Graham, 2007; Lee et al.,

2005), or it may reflect part of a feed-forward sweep from visual cortices in order to prime other

cortical regions for subsequent processing, such as recognition memory in the present study

(Foxe & Simpson, 2002; Herdman et al., 2007). Alternatively, early onset of hippocampal

activity may also suggest that processes related to memory recognition occur rapidly and perhaps

in an obligatory fashion (Ryan, Hannula, et al., 2007; Ryan et al., 2008). However, since

participants were instructed to perform a recognition task and were in a ‘retrieval’ mental set, the

current results cannot address the issue of whether recognition memory is obligatory or not. At

the very least, evidence of such an early onset of hippocampal activity suggests that processes

related to recognition memory begin rapidly and operate in conjunction with, or parallel to,

visual processing. Indeed, conscious identification of a visual stimulus may be aided by rapid

access to stored memory representations (Bar, 2003, 2004; Bar, Kassam, et al., 2006; Ryan et al.,

2008). Regardless of whether the early onset of hippocampal activity represents a contribution

of mnemonic information to the building of perceptual representations (Ryan et al., 2008),

perceptual processing in the absence of any memory component (Lee et al., 2005), or a

preparatory response for subsequent processing (Herdman et al., 2007), the present findings

demonstrate that hippocampal responses are evident at a time when perception is thought to occur.

3.5.4 Concluding remarks and future considerations

The results of this study, together with previous literature, offer converging evidence in support

of the feasibility of using MEG to record activity from the hippocampus. Unlike other

neuroimaging techniques, MEG can characterize the frequency range and temporal dynamics of neural activity with good spatial resolution. This study highlights the importance of choosing an appropriate analysis

method for the localization of deep sources. Specifically, it is critical to use localization

algorithms that are not biased toward superficial sources, allow for the imaging of simultaneous

sources, and use co-registration of MEG and structural MRI data. We observed that processing


of studied versus novel stimuli recruited the hippocampus at similar times and in a similar

spectral frequency, suggesting that the hippocampus may be involved in general recognition

memory processes. Specifically, the hippocampus may contribute to comparison achieved via

theta oscillations (Buzsaki, 2002). Also, onset of hippocampal activity occurred rapidly after

stimulus onset, during a time typically associated with visual perception. Future studies are

needed in order to distinguish between mnemonic vs. non-mnemonic accounts of early

hippocampal responses.

In addition to examining incidences of normal memory functioning, MEG can be applied to the

study of memory impairments. It has long been noted that memory impairments are associated

with aging and a number of disorders such as Alzheimer's disease, temporal lobe epilepsy, post-

traumatic stress disorder, and schizophrenia, among others (Eichenbaum & Cohen, 2001). While

other neuroimaging techniques such as PET and fMRI show a relationship between decreases in

memory performance and reduced hippocampal activity, MEG may reveal patterns of underlying

spatiotemporal dynamics that are associated with distinct performance profiles, and are

subsequently altered as a function of neurological impairment. Therefore, MEG has the potential

to illuminate the nature of hippocampally-mediated memory disorders as well as the nature of

normal memory function.

3.6 Acknowledgments

The authors thank Guy Earle for programming and other technical assistance, and Christina

Villate for her assistance with the stimuli. This work was supported by funding from the Natural

Sciences and Engineering Research Council of Canada (JDR), the Canada Research Chairs

Program (JDR), Michael Smith Foundation for Health Research (ATH), and a Canadian

Graduate Scholarship from the Natural Sciences and Engineering Research Council of Canada

(LR).


Chapter 4 Emotional Associations Alter Processing of Neutral Faces

Riggs, L., Fujioka, T., Chan, J., Anderson, A.K., Ryan, J.D. Emotional associations alter

processing of neutral faces.


4 Emotional associations alter processing of neutral faces

4.1 Abstract

A number of studies have shown that the processing of emotional as compared to neutral

information is associated with different patterns in eye movement and neural activity. However,

the ‘emotionality’ of a stimulus can be conveyed not only by the physical properties of the

stimulus itself, but also by the context in which it appears and/or the information with which it is

associated. We examined how association with emotional information may influence processing

of otherwise neutral faces by using eye movement monitoring (EMM) and

magnetoencephalography (MEG). Participants studied a series of faces, each with a neutral

expression, paired subsequently with either a negative or a neutral sentence, and then the same

face was presented again in isolation. The face and the sentence never appeared simultaneously

on the screen. EMM revealed that viewing of isolated faces paired with negative versus neutral

sentences was associated with increased viewing of the eye region. Source localization of the MEG results was performed using the event-related synthetic aperture magnetometry minimum-variance

beamformer algorithm (ER-SAM) coupled with the partial least squares (PLS) multivariate

statistical approach. This revealed that viewing of isolated faces paired with negative versus

neutral sentences was associated with increased neural activity between 600-1500 ms after

stimulus onset in emotion processing regions such as the cingulate, medial prefrontal cortex, and

amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of

isolated faces paired with neutral versus negative sentences was associated with increased

activity in the parahippocampal gyrus during the same time window. The above results suggest

that emotion may modulate associated, but otherwise neutral, information by altering visual

processing and the type of representation that is formed.

4.2 Introduction

We are constantly involved in the interpretation of social and emotional cues from those around

us. Such cues can be conveyed via facial expressions (e.g. Adolphs, 2003) and facial appearance

(e.g. Bar, Neta, & Linz, 2006; Olson & Marshuetz, 2005; Willis & Todorov, 2006), as well as by

biographical information (e.g. Carlston & Skowronski, 1994; Todorov & Uleman, 2002, 2003,


2004). For example, imagine being at a party and being introduced to two different people who

appear very neutral and non-threatening. However, right before you meet them, you are told that

one person has just gotten out of jail for murder and the other person is working on a PhD.

Research from social psychology suggests that from this minimal information, you will form a

rapid, perhaps automatic, and very different impressions of the two people (e.g. Todorov & Uleman, 2002, 2003, 2004). Forming such rapid impressions may then lead to differences in the way in which we perceive and remember otherwise neutral information

(i.e. the person’s face).

A number of studies have shown that the processing of emotional versus neutral stimuli is

characterized by different patterns of eye movement behaviour and neural activity. For example,

compared with faces in a neutral expression, viewing to faces expressing threat is characterized

by an overall increase in the number of fixations directed to the face, an increase in the number

of regions sampled within the face (Bate et al., 2009), an increase in sampling of internal features

of the face (Calder et al., 2000; Green, Williams, & Davidson, 2003b; M. L. Smith et al., 2005),

and an increase in viewing the eye region of the face (e.g. Adolphs, Gosselin, et al., 2005; Gamer

& Buchel, 2009; Itier & Batty, 2009). In addition to differences in eye movement behaviour,

viewing of faces expressing emotion versus those in a neutral expression is accompanied by

increased neural activity in regions of the brain, such as the amygdala, anterior insula and basal

ganglia, which are associated with emotional processing (e.g. Adolphs, 2002; Adolphs, Tranel, &

Damasio, 2003; Haxby, Hoffman, & Gobbini, 2002; Phelps, 2006). It is suggested that such

differences in eye movement behaviour and neural activity may serve to aid in the recognition of

different facial expressions and of the person’s identity (Bate et al., 2009; Haxby et al., 2002)

and aid in the assessment of the person’s level of threat and their intentions (e.g. Haxby et al.,

2002; Spezio, Huang, Castelli, & Adolphs, 2007).

However, the manner in which external information is processed is determined not only by the

physical properties of the stimulus (i.e. facial expression), but also by the context in which the

stimulus appears, as well as by prior knowledge (e.g. Althoff & Cohen, 1999; G. R. Loftus &

Mackworth, 1978; Ryan, Hannula, et al., 2007). Aviezer and colleagues (Aviezer et al., 2008),

demonstrated that identical facial expressions may be interpreted as different emotions

depending on the context in which the facial expressions were presented. For example, viewing

of a face expressing anger within a neutral context was associated with an increased number of


fixations to the upper region of the face (i.e. eyes and eye brows) and viewing of a face

expressing disgust within a neutral context was associated with an equal number of fixations

directed to the upper and lower region of the face (i.e. lower nose and mouth region). However,

when the faces were placed into an emotional context, viewing patterns changed systematically.

Specifically, when a face expressing anger was placed within a disgust context (i.e. on a body

expressing disgust), participants directed an equal number of fixations to the upper and lower

region of the face. Similarly, when a disgust face appeared within an angry context, participants

directed significantly more fixations to the upper versus lower region of the face.

If perception of emotional expressions is malleable and depends in part on the context in which

the expressions appear, then it is possible that the perception of neutral faces is also malleable

and depends on the context in which they appear. One goal of the present study was to examine

the extent to which association with emotional information may influence viewing of, and

memory for, faces with neutral expression. After all, it is likely important to not only assess the

level of threat and/or intentions of the ‘murderer’ as opposed to the ‘student’, but also to

subsequently recognize him/her such that appropriate action may be taken (e.g. run away versus

initiate a conversation). In support of this, prior research shows that emotions may enhance

memory for both the emotional item (e.g. Christianson & Loftus, 1987; Cahill et al., 1996; Heuer

& Reisberg, 1990; Phelps et al., 1997), as well as information associated with it (e.g.

D'Argembeau & Van der Linden, 2004, 2005; MacKay et al., 2004).

In addition to examining whether association with emotional information may influence viewing

of, and memory for, neutral faces, another goal of the present study was to examine precisely

when emotion/context may exert its influence on associated information and how such processes

may be supported in the brain. Aviezer and colleagues (Aviezer et al., 2008) found that the

effects of context exerted their influence on eye movement behaviour early on during processing

of different facial expressions, i.e. within the first 1000 ms after stimulus onset. This prompted

the researchers to suggest that the context in which an item appears may actually alter perceptual

processing. However, it is unclear exactly when during the first 1000 ms that context may be

exerting its influence. Previous neuroimaging studies suggest that perceptual processing occurs

largely within the first 250 ms after stimulus onset, as opposed to a later time window (e.g. 250-

1500 ms) during which conceptual/semantic processes and/or the retrieval of associated

information are largely purported to occur (e.g., Donaldson & Rugg, 1998, 1999; Itier et al.,


2006; Schweinberger, Pickering, Burton, & Kaufmann, 2002). In light of this, the results from

Aviezer’s study can be explained in one of two ways. First, as the authors suggest, the

emotional context may have been invoked during the perceptual processing of the face, leading

to differences in the actual visual percept that was constructed. Alternatively, the emotional

context may not have influenced perceptual processing of the face per se; rather, it may have

been invoked after perceptual processing of the face had occurred. In order to distinguish

between these two possibilities, it is necessary to examine precisely when context may exert its

influence on processing.

The present work had three goals, to examine the extent to which emotion modulates (1) viewing

of associated neutral stimuli; (2) underlying neural activity invoked during viewing of associated

neutral stimuli, and precisely when; and (3) subsequent memory for associated neutral stimuli.

To address these goals, eye movement monitoring and magnetoencephalography (MEG) were

used to characterize eye movement behaviour and neural activity, respectively. Eye movements

have been shown to be sensitive to the effects of emotion and context (e.g. Aviezer et al., 2008;

Bate et al., 2009; Green et al., 2003b; Riggs et al., 2010; Riggs et al., 2011). MEG is a non-

invasive neuroimaging technique that measures the magnetic field differences produced by

populations of neurons (Hari et al., 2000; Hämäläinen et al., 1993), providing recordings of neural

activity with temporal resolution on the order of milliseconds and with spatial resolution

comparable to that of functional magnetic resonance imaging (fMRI; Miller et al., 2007).

Critically, through its precise timing information, MEG has been successfully utilized to study

emotion processing (Cornwell et al., 2008; Furl et al., 2010; Garolera et al., 2007; Hung et al.,

2010), as well as how knowledge and/or prior experience can influence the manner by which

perceptual processing occurs (Riggs et al., 2009; Ryan et al., 2008).

In the present experiment, participants were presented with a neutral face (Face 1) followed by

either a negative or a neutral sentence and, finally the neutral face was presented again in

isolation (Face 2). During this study phase, eye movements and neural activity were recorded

simultaneously. If association with emotional information affects processing of neutral stimuli,

then differences in eye movement patterns and neural activity should occur during re-

presentation of the face depending on whether the face had been previously paired with a

negative or a neutral sentence. It was predicted that viewing of a face paired with a negative

versus neutral sentence would elicit differences in viewing and underlying neural activity,


specifically, in enhanced activation of regions implicated in emotion processing such as the

amygdala, cingulate and anterior insula (Adolphs, 2002; Adolphs et al., 2003; Chen et al., 2009;

Garolera et al., 2007; Haxby et al., 2002; Hirata et al., 2007; Phelps, 2006), attention and facial

processing such as the precuneus and fusiform gyrus (Cavanna & Trimble, 2006; Deffke et al.,

2007; Fenker, Schott, Richardson-Klavehn, Heinze, & Duzel, 2005), memory and binding such

as the prefrontal cortices and medial temporal lobe (Badgaiyan, Schacter, & Alpert, 2002; N. J.

Cohen et al., 1999; Daselaar et al., 2001; Squire & Zola-Morgan, 1991; Yonelinas, Hopfinger,

Buonocore, Kroll, & Baynes, 2001), and the processing of person identity and biographical

information such as the superior temporal sulcus (Haxby et al., 2002; Todorov, Gobbini, Evans,

& Haxby, 2007). Further, if associated information is invoked during the time of perceptual

processing, then such eye movement and neural differences should manifest early during

viewing (<200 ms after stimulus onset).

In order to examine whether association with emotional information has an effect on memory,

during the test phase, participants were presented with a series of faces and were required to

decide whether the presented face was novel, previously paired with a neutral sentence or

previously paired with a negative sentence. If association with emotional information enhanced

memory, then participants should be more accurate in identifying faces previously presented

with a negative sentence than those previously presented with a neutral sentence.

To the best of our knowledge, no study has examined how association with emotion may

influence the immediate processing of otherwise neutral stimuli. If differences in eye movement

patterns and neural activity are found between processing neutral stimuli associated with

emotional versus neutral information, this would suggest that emotion can influence the visual

processing, and the mental representation, of associated neutral stimuli in a top-down manner

that is independent of the neutral stimuli’s physical properties. Thus, we may not only form

different impressions of the ‘murderer’ and ‘student’ (Todorov & Uleman, 2002, 2003, 2004),

but we may also look at them differently, think about them differently and perhaps even

remember them differently.


4.3 Methods

4.3.1 Participants

Twelve young adults (mean age = 21.6 years, 5 males) from the Rotman Research Volunteer

Pool participated for $10 per hour. None of the participants had a history of neurological or clinical disorders or head trauma, and all had normal or corrected-to-normal vision. All

participants were either native English speakers or had at least twelve years of experience with

English.

4.3.2 Stimuli and Design

The stimuli used consisted of 300 black and white, non-famous male faces with a neutral

expression selected from a database of face images as outlined by Schmitz and colleagues

(Schmitz, Cheng, & De Rosa, 2010). Briefly, the photographs showed front-view non-

expressive faces. Faces were cropped and did not contain hair or other nonfacial features. To prevent discrepancies in the spatial orientation and location of the face stimuli over trials, the eyes and philtrum of each image were aligned to a standard 3-point Cartesian space (for more

details see: Schmitz et al., 2010). The faces were placed against a uniform black background –

the resulting image was 300x300 pixels. Each face was randomly paired with a negative sentence (e.g.

“This person is a rapist”) and a corresponding neutral sentence (e.g. “This person is a linguist”).

Each pair of sentences differed only by one critical word which was either negative or neutral,

and was matched for the number of syllables (e.g. rapist versus linguist). All of the sentences

were previously rated by 10 participants (mean age = 21.3, 4 males) on a scale of 1-5 for each of

the following factors: familiarity (1 = not familiar at all, 5 = very familiar), tabooness (1 = not

taboo at all, 5 = very taboo), arousal (1 = not arousing at all, 5 = very arousing), valence (1 =

very negative, 3 = neutral, 5 = very positive) and coherence (1 = not coherent at all, 5 = very

coherent). Negative and neutral sentences were matched for familiarity (p = .20) and coherence

(p = .97). Critically, negative sentences were judged to be more taboo (p < .0001), negative (p <

.0001), and arousing (p < .0001). The mean values can be found in Table 4.1.


Table 4.1 The mean and SEM of ratings for the negative and neutral sentences used in the experiment

               Negative Sentences,   Neutral Sentences,
               Mean (SEM)            Mean (SEM)
Familiarity    4.74 (.03)            4.79 (.03)
Coherence      4.84 (.02)            4.84 (.02)
Tabooness      3.09 (.08)            1.06 (.02)
Valence        1.75 (.04)            3.20 (.04)
Arousal        2.68 (.06)            1.21 (.02)

During the study phase, participants were shown 200 unique faces across 5 study blocks (40 per

block). Each face was paired with either a negative (100) or neutral (100) sentence. Faces were

displayed in a pseudo random order such that no more than 3 negative or 3 neutral face-sentence-

face pairings appeared in succession. Each study block contained 20 faces paired with a negative

sentence, and 20 faces paired with a neutral sentence. In the test phase, participants were shown

300 faces in isolation (i.e., without sentences): 200 faces were previously viewed (100 old-

negative, 100 old-neutral) and 100 faces were novel (new). Faces were presented in a pseudo

random order such that no more than 3 faces of each type appeared in succession.

Counterbalancing was complete such that each face appeared as studied with a neutral sentence,

studied with a negative sentence, and as a novel face equally often across participants.
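As a minimal illustration (hypothetical indices and group labels, not the actual assignment code), complete counterbalancing of this kind can be expressed as a rotation of each face's role across three participant groups:

```python
# Hypothetical sketch of the three-way counterbalancing: each face's role
# (paired with a negative sentence, paired with a neutral sentence, or novel)
# rotates across three participant groups, so every face serves in every role
# equally often across participants.
roles = ['old-negative', 'old-neutral', 'new']

def role_for(face_index, participant_group):
    """participant_group: 0, 1, or 2. Rotating the assignment keeps the
    role frequencies equal for every face across the three groups."""
    return roles[(face_index + participant_group) % 3]
```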

4.3.3 Procedure

Eye movements and neural activity were recorded simultaneously throughout the study phase.

The test phase of the experiment took place outside of the imaging suite and only eye movements

and explicit reports were recorded. During each trial of the study phase, a face was presented for

3000 ms (Face 1) followed by a sentence which was presented for 4000 ms. Subsequently, a

blank screen was presented for 400 ms and then a fixation cross was presented for 100 ms in

order to direct the participant’s eyes back to the centre of the screen. Following this, Face 1 was

presented again, in isolation, for 3000 ms (Face 2; Figure 4.1). Participants were then asked

to indicate via button press whether they would want to approach, avoid or stay neutral to the

face. The purpose of the task was to encourage participants to process the meaning of the face-

sentence pairings. Throughout the study phase, participants were instructed to freely view the

faces and sentences presented. There was a 1500 ms inter-trial interval which consisted of a

fixation cross in the center of a blank screen. Participants were instructed to fixate on the central


cross whenever it appeared. During a 30-minute delay (approximately) between the study and

test phases, participants moved from the imaging suite to the eye tracking room and completed a

background information form. During the test phase, 300 faces were presented (100 new, 100

old-negative, 100 old-neutral) in isolation, i.e., without any sentence pairings. Each face was

presented for 5000 ms and participants were instructed to freely view each face, and indicate via

a button press whether the face was previously viewed and paired with a negative sentence (old-

negative), previously viewed and paired with a neutral sentence (old-neutral), or novel (new).

Figure 4.1 Experimental procedure

Participants freely viewed a face (Face 1) followed by a sentence that was either negative or neutral. The

same face was presented again (Face 2) and participants were asked to judge whether they would want to

approach, avoid, or neither approach nor avoid (remain “neutral” to) that person.

4.3.4 Data Acquisition

Eye movements were measured with either an SR Research Ltd. Eyelink 1000 remote eyetracker (during the study phase in the MEG suite) or an SR Research Ltd. Eyelink II eye tracker (during

the test phase outside of the MEG suite). Each eyetracker recorded eye movements at a rate of


500 Hz and with a spatial resolution of 0.1 degrees. A 9-point calibration was performed at the

start of each block followed by a 9-point calibration accuracy test. Calibration was repeated if

the error at any point was more than 1 degree. Drift corrections were performed at the beginning

of each trial if necessary.

MEG recordings were performed in a magnetically shielded room, using a 151-channel whole

head first order gradiometer system (VSM-Med Tech Inc.) with detection coils uniformly spaced

31 mm apart on a helmet-shaped array. Participants sat in an upright position, and viewed the

stimuli on a back projection screen that subtended approximately 31 degrees of visual angle

when seated 30 inches from the screen. The MEG collection was synchronized with the onset of

the stimulus by recording the luminance change of the screen. Participants’ head position within

the MEG was determined at the start and end of each recording block using indicator coils placed

on nasion and bilateral preauricular points. These three fiducial points established a head-based

Cartesian coordinate system for representation of the MEG data.

In order to specify/constrain the sources of activation as measured by MEG and to co-register the

brain activity with the individual anatomy, a structural MRI was also obtained for each

participant using standard clinical procedures with a 3T MRI system (Siemens Magnetom Trio

whole-body scanner) located at Baycrest.

4.3.5 Analysis for Study Phase

Eye Movement Analysis for the Critical Word. In order to provide evidence that the emotion

manipulation had an effect on eye movement behaviour, we first examined differences in

viewing the critical word when it was negative versus when it was neutral. Analysis of eye

movements was performed with respect to the experimenter-drawn region of interest

corresponding with the critical word within the sentence.

Eye Movement Analysis for Faces. Differences in the eye movement patterns made to faces that

had been paired with negative versus neutral sentences were taken as evidence that the

processing of faces may be changed via association with emotional information. Therefore, we

compared eye movement behaviour during viewing of Face 2 following a negative sentence

(Face2–Negative) versus that of Face 2 following a neutral sentence (Face2–Neutral). As a

control condition, we also compared eye movement during viewing of Face 1 that preceded a


negative (Face1–Negative) versus neutral sentence (Face1–Neutral). Since these faces had not

yet been paired with either a negative or a neutral sentence, there should be no differences in

measures of eye movement behaviour. Viewing to regions corresponding to the location of

features within the face, i.e. eyes, nose, and mouth, were examined.

Eye Movement Measures: Eye movement analysis for both the critical word and face included

measures of early viewing and measures of overall viewing (for more details see: Hannula et al.,

2010). Measures of early viewing included: duration of first fixation, duration of first gaze, and

number of fixations within first gaze. A fixation is defined as the absence of any saccade (i.e., the velocity of two successive eye movement samples exceeding 22 degrees per second over a distance of 0.1º) or blink (i.e., the pupil missing for 3 or more samples) activity. The first gaze

designates the first time that the eyes enter a region of interest. Duration of first gaze is the total

time spent within a particular region of interest on the first gaze before moving away from it.

Number of fixations within first gaze is the total number of fixations directed to a particular

region of interest during the first gaze before moving out of that region. Measures of overall

viewing included: average duration of fixations, number of fixations and number of transitions

into the region of interest, i.e. the number of times a particular region of interest is viewed.
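To make the first-gaze measures concrete, the sketch below derives them from a parsed fixation sequence. The Fixation records and region labels are hypothetical; the actual analysis used the EyeLink parser output and experimenter-drawn regions of interest.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: float
    duration_ms: float
    roi: str  # e.g. 'eyes', 'nose', 'mouth', or 'other' - hypothetical labels

def first_gaze(fixations, roi):
    """Duration of, and number of fixations within, the first gaze to `roi`:
    the first unbroken run of fixations inside the region before the eyes
    move out of it."""
    run = []
    for f in fixations:
        if f.roi == roi:
            run.append(f)
        elif run:           # the eyes entered the region and have now left it
            break
    return sum(f.duration_ms for f in run), len(run)

# Example trial: the first gaze to the eyes spans two fixations (520 ms total)
trial = [Fixation(0, 180, 'nose'), Fixation(200, 300, 'eyes'),
         Fixation(520, 220, 'eyes'), Fixation(760, 250, 'mouth'),
         Fixation(1030, 210, 'eyes')]
print(first_gaze(trial, 'eyes'))  # -> (520, 2)
```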

MEG Analysis for Faces. Similar to the eye movement analysis for faces, spatiotemporal

differences in neural activity underlying viewing of faces that had been paired with negative

versus neutral sentences were taken as evidence that the processing of faces may be changed via

association with emotional information. Further, differences in neural activity occurring within

the first 200 ms were taken as evidence that association with emotion changes perceptual

processing of neutral faces. We compared neural activity during viewing of Face 2 following a

negative sentence (Face2–Negative) versus that of Face 2 following a neutral sentence (Face2–

Neutral). A comparison of neural activity underlying viewing of Face 1 that preceded a negative

(Face1–Negative) versus neutral sentence (Face1–Neutral) was also included as a control

condition.

MEG Analysis. Source activity was estimated using the synthetic aperture magnetometry (SAM)

minimum-variance beamformer (Robinson & Vrba, 1999; Van Veen et al., 1997) across the

whole brain on a grid with regular spacing of 8 mm. The beamformer analysis, using the

algorithm as implemented in the VSM software package, was based on individual multisphere


models, for which single spheres were locally approximated for each of the 151 MEG sensors to

the shape of the cortical surface as extracted from the MRI. The MEG beamformer minimizes

the sensitivity for interfering sources as identified by analysis of covariance in the multichannel

magnetic field signal while maintaining constant sensitivity for the source location of interest.

The covariances were calculated based on the entire trial duration (-1000 ms to 10,500 ms) with

a low-pass filter at 30 Hz. Thereafter, the resultant SAM weights were applied to the MEG sensor data separately for each epoch of interest based on an event-related spatial-filtering approach (ER-

SAM; Cheyne et al., 2006; Robinson, 2004) as used in our previous studies (Fujioka, Zendel, &

Ross, 2010; Moses, Brown, Ryan, & McIntosh, 2010). The MEG data were epoched from 100

ms prior to stimulus onset to 2800 ms after onset, separately for each condition (i.e. Face2-Negative,

Face2-Neutral). Computing the SAM weights from the entire trial segment, rather than from the epoch of interest alone, ensures that differences between the resultant ER-SAM maps for each condition were due to differences in the MEG data and not to differences in the spatial

filter. Before applying the beamformer to each single epoch of magnetic field data, artifact

rejection using a principal component analysis was performed such that field components larger

than 1.5 pT at any time were subtracted from the data at each epoch. This procedure is effective

in removing large artifacts caused by eye blinks (Kobayashi & Kuriki, 1999; Lagerlund,

Sharbrough, & Busacker, 1997). Using the spatial filter, single-epoch source activity was first

estimated as a pseudo-Z statistic for each participant and each condition. The time series of the

source power within 1–30 Hz was then calculated for each single source waveform. Finally, the

representation of the evoked response was obtained as a time series of the average power across

trials normalized to the pooled variance across trials for each voxel and time point. These

individual functional maps were then spatially transformed to the standard Talairach space using

AFNI (National Institute of Mental Health, Bethesda, MD, USA), using the same transform

applied to the anatomical MR image, and averaged across all participants. For each participant,

functional data from the MEG were co-registered with their structural MRIs using indicator coils placed on the nasion and bilateral preauricular points.
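For orientation, the core of a minimum-variance beamformer can be written in a few lines. The sketch below (plain numpy, hypothetical array shapes) shows the standard LCMV weight formula and one plausible reading of the trial-power normalization described above; it omits the multisphere head model, the 30 Hz filtering, and the PCA-based artifact rejection used in the actual analysis.

```python
import numpy as np

def lcmv_weights(C, L):
    """Minimum-variance beamformer weights for one source location.

    C: (n_sensors, n_sensors) data covariance, computed (as in the text)
    over the whole trial; L: (n_sensors,) lead field for the location and
    orientation of interest. Regularization is omitted for brevity.
    """
    Cinv = np.linalg.pinv(C)
    return Cinv @ L / (L @ Cinv @ L)

def evoked_source_power(epochs, w):
    """Project epoched sensor data through the weights and summarize power.

    epochs: (n_trials, n_sensors, n_times). Returns, per time point, the
    mean power across trials divided by its variance across trials - one
    plausible reading of "normalized to the pooled variance across trials".
    """
    src = np.einsum('s,tsn->tn', w, epochs)  # single-trial source waveforms
    power = src ** 2
    return power.mean(axis=0) / power.var(axis=0)
```

Repeating the weight computation and projection for every point on the 8 mm grid yields a volumetric map at each time point.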

We hypothesized that differences in neural activity for faces following negative and neutral

sentences (Face 2) may occur during early or later stages of face processing in a distinct manner. Thus, we examined the MEG data with sliding 600 ms time windows stepped by 300 ms, i.e. 0-600 ms, 300-900 ms, 600-1200 ms, etc. (Staresina, Bauer, Deecke, & Walla, 2005; Walla et al., 2001), which


resulted in a total of 8 time windows. As a control condition, the same analyses were also

applied to faces preceding negative and neutral sentences (Face 1). Spatiotemporal differences

in the brain responses to viewing faces associated with negative versus neutral sentences were

characterized using the partial least squares (PLS) multivariate approach (McIntosh, Bookstein,

Haxby, & Grady, 1996; McIntosh & Lobaugh, 2004). The PLS approach has been successfully

used for time-series neuroimaging data in multi-electrode event-related potential (Lobaugh,

West, & McIntosh, 2001) and MEG (Fujioka et al., 2010; Moses et al., 2009). In order to

accommodate computational demands, the Talairach-transformed individual functional maps for

each participant were down-sampled to 78 Hz, which resulted in volumetric maps every 12.8

milliseconds, and used as input for a mean-centred PLS analysis. Mean-centring allowed values

for the different conditions to be expressed relative to the overall mean. Using this type of

analysis, activation patterns that are unique to a specific condition will be emphasized; whereas

activations that are consistent across all conditions, such as primary visual activation, will be

diminished.

The input of PLS is a cross-block covariance matrix, which is obtained by multiplying the design

matrix (an orthonormal set of vectors defining the degrees of freedom in the experimental

conditions), and the data matrix (time series of brain activity at each location as columns and

subjects within each experimental condition as rows). The output of PLS is a set of latent

variables (LVs), obtained by singular value decomposition applied to the input matrix. Similar

to eigenvectors in PCA, LVs account for the covariance of the matrix in decreasing order of

magnitude determined by singular values. Each LV explains a certain pattern of experimental

conditions (design score) as expressed by a cohesive spatial–temporal pattern of brain activity

(Fujioka et al., 2010). The significance of each LV was determined by a permutation test using

500 permuted data with conditions randomly reassigned for recomputation of PLS. This yielded

the empirical probability for the permuted singular values exceeding the originally observed

singular values. An LV was considered to be significant at p ≤ 0.05. For each significant LV,

the reliability of the corresponding eigen-image of brain activity was assessed by bootstrap

estimation using 250 resampled data with subjects randomly replaced for recomputation of PLS,

at each time point and each location. Sources with bootstrap ratios exceeding ±3.5 were examined.
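The mean-centred decomposition and its permutation test can likewise be sketched compactly. The following is a minimal illustration (plain numpy, hypothetical shapes; the bootstrap estimation is omitted) of a mean-centred task PLS in the spirit of McIntosh & Lobaugh (2004); it is not the PLS toolbox used here.

```python
import numpy as np

def mean_centred_pls(X, n_cond, n_subj, n_perm=500, seed=0):
    """Minimal mean-centred task PLS with a permutation test on the LVs.

    X: (n_cond * n_subj, n_features) brain data, rows grouped by condition
    (features = voxels x time points). Condition means are centred on the
    grand mean and decomposed by SVD into latent variables (LVs).
    """
    rng = np.random.default_rng(seed)

    def centred_condition_means(data):
        M = data.reshape(n_cond, n_subj, -1).mean(axis=1)
        return M - M.mean(axis=0)                  # mean-centring

    U, s, Vt = np.linalg.svd(centred_condition_means(X), full_matrices=False)

    # Permutation test: randomly reassign rows (subjects) to conditions and
    # count how often permuted singular values meet or exceed the observed.
    exceed = np.zeros_like(s)
    for _ in range(n_perm):
        Xp = X[rng.permutation(len(X))]
        sp = np.linalg.svd(centred_condition_means(Xp), compute_uv=False)
        exceed += sp >= s
    p_values = exceed / n_perm                     # empirical p per LV
    return U, s, Vt, p_values   # U: design scores; Vt: brain saliences
```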


4.3.6 Analysis for Test Phase

In order to assess the extent to which association with emotion during the study phase influenced

subsequent memory for the neutral faces, we examined recognition memory at two levels during

the test phase. First, we examined the extent to which association with emotion influenced

memory for the associated item (i.e. the face). If emotion enhanced memory for recognizing the

neutral face, then participants should be more accurate in identifying previously presented faces

paired with negative sentences as “old” (i.e. both “old-negative” and “old-neutral” responses

were scored as correct) as compared to previously presented faces paired with neutral sentences.

Second, we examined the extent to which emotion influenced memory for the relation between

the neutral face and its associated sentence. If emotion enhanced memory for the relationship

between the neutral face and the associated sentence, then participants should be more accurate

in identifying previously presented faces paired with negative sentences as “old-negative”, as

compared to previously presented faces paired with neutral sentences as “old-neutral”.
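The two scoring levels can be stated precisely in a few lines. The sketch below uses hypothetical labels and simply encodes the scoring rules described above: any 'old' response to a studied face counts as an item hit, whereas a relational hit additionally requires the correct sentence type.

```python
def score_trial(face_type, response):
    """Score one test trial at the item and relational levels.

    face_type, response: 'new', 'old-negative', or 'old-neutral'.
    Item memory: any old response to a studied face is a hit.
    Relational memory: the response must also match the studied sentence type.
    """
    item_hit = face_type != 'new' and response != 'new'
    relational_hit = face_type != 'new' and response == face_type
    return item_hit, relational_hit

print(score_trial('old-negative', 'old-neutral'))   # (True, False)
print(score_trial('old-negative', 'old-negative'))  # (True, True)
```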

4.4 Results

4.4.1 Study Phase

4.4.1.1 Viewing of the Critical Word

The extent to which viewing of the critical word differed depending on whether it was negative

or neutral was considered evidence for an emotion-modulated effect. Paired samples t-tests were

conducted for the different eye movement measures.

Early differences in viewing. Eye movements distinguished between negative and neutral words

with the very first fixation directed to the word. The duration of the first fixation was marginally

longer for negative versus neutral words (t(11) = 2.15, p = .06). Other early measures of viewing

did not reveal any significant differences (duration of first gaze: t(11) = 1.72, p > .1; number of

fixations within first gaze: t(11) = -1.26, p > .1).

Overall differences in viewing. The average duration of all fixations made to the critical word

was significantly longer when it was negative as compared to when it was neutral (t(11) = 4.06, p

< .01). As a result of the longer average fixation durations, participants made fewer fixations

(t(11) = -2.42, p < .05) and transitions (t(11) = -2.19, p = .05) to the negative versus neutral word


during the fixed viewing period. In other words, participants spent longer looking at the negative

words, but explored the neutral words more. Mean values can be found in Table 4.2.

Table 4.2 The mean and SEM for different eye movement measures of viewing to the

critical word when it was negative and neutral.

Critical Word                              Negative (SEM)    Neutral (SEM)
Early Measures of Viewing:
  Duration of First Fixation (ms)          224.61 (11.66)    213.99 (8.58)
  Duration of First Gaze (ms)              355.79 (29.46)    339.19 (24.92)
  Number of Fixations within First Gaze    1.53 (.06)        1.57 (.07)
Overall Measures of Viewing:
  Average Duration of Fixations (ms)       267.58 (16.47)    247.63 (14.64)
  Number of Fixations                      4.16 (.14)        4.37 (.12)
  Number of Transitions                    2.67 (.08)        2.77 (.09)

4.4.1.2 Viewing of Faces

The extent to which viewing of faces paired with negative sentences differed from viewing of

faces paired with neutral sentences was considered to provide evidence that, through association,

emotional information can immediately modulate viewing of neutral information. Here, we

focus on the significant results pertaining to emotion. Analyses of variance (ANOVA) were

conducted on measures of viewing to Face 2 using emotion (negative, neutral) and face feature

(eyes, mouth, nose) as within-subject factors. As a control, the same analyses were also

conducted for viewing to Face 1 and no significant effects were found (all p’s > .1); therefore, for brevity, we only describe viewing to Face 2. Mean values can be found in Table 4.3.


Table 4.3 The mean and SEM for early (A) and overall (B) measures of viewing to different features within Face 1 and Face 2.

A. Early Measures of Viewing
                                  Face 1                             Face 2
                          Negative (SEM)   Neutral (SEM)     Negative (SEM)   Neutral (SEM)
Duration of First Fixation (ms)
  Eyes                    308.3 (37.5)     317.7 (46.8)      291.1 (24.0)     271.0 (26.0)
  Nose                    286.5 (24.1)     304.0 (32.2)      284.8 (22.5)     284.7 (24.4)
  Mouth                   264.4 (15.3)     308.0 (39.3)      294.0 (25.2)     293.6 (22.2)
Duration of First Gaze (ms)
  Eyes                    1104.0 (133.5)   1088.6 (122.4)    1035.4 (115.3)   927.0 (86.9)
  Nose                    404.3 (55.5)     422.6 (66.3)      471.4 (80.0)     453.9 (74.9)
  Mouth                   275.6 (16.6)     329.4 (37.5)      324.7 (29.5)     323.7 (29.9)
Number of Fixations Within First Gaze
  Eyes                    .41 (.04)        .41 (.04)         .39 (.04)        .35 (.03)
  Nose                    .17 (.02)        .17 (.02)         .19 (.03)        .18 (.02)
  Mouth                   .11 (.01)        .12 (.01)         .12 (.00)        .11 (.01)

B. Overall Measures of Viewing
                                  Face 1                             Face 2
                          Negative (SEM)   Neutral (SEM)     Negative (SEM)   Neutral (SEM)
Average Duration of Fixations (ms)
  Eyes                    332.07 (38.33)   333.59 (40.07)    325.42 (24.12)   316.45 (26.68)
  Nose                    300.76 (25.28)   309.47 (32.28)    319.95 (31.18)   316.06 (29.84)
  Mouth                   265.91 (14.90)   304.07 (39.40)    292.63 (24.09)   287.55 (21.39)
Number of Fixations
  Eyes                    5.60 (.48)       5.63 (.45)        5.41 (.50)       5.53 (.49)
  Nose                    2.38 (.27)       2.35 (.27)        2.26 (.29)       2.33 (.30)
  Mouth                   .27 (.05)        .28 (.05)         .27 (.06)        .30 (.01)
Number of Transitions
  Eyes                    1.82 (1.00)      1.81 (.09)        1.77 (.10)       1.93 (.09)
  Nose                    1.75 (.13)       1.72 (.12)        1.56 (.14)       1.63 (.14)
  Mouth                   .25 (.05)        .25 (.05)         .24 (.05)        .26 (.06)


Early differences in viewing. Participants spent significantly more time (F(1,11) = 4.76, p = .05,

d = .30) and directed marginally more fixations (F(1,11) = 3.62, p = .08, d = .25) during the first

gaze to the different features of faces paired with negative versus neutral sentences. There was

also a marginal interaction between emotion and face feature for the duration of the first gaze

(F(2,22) = 3.10, p = .07, d = .22) and the number of fixations within first gaze (F(1,11) = 2.92, p

= .08, d = .21). Follow-up t-tests revealed that during the first gaze, participants spent

significantly more time (t(11) = 2.31, p < .05) and directed marginally more fixations (t(11) = 1.93, p = .08) to the eye region of the face, but not to other regions of the face (i.e. the nose and the

mouth, p’s > .1). Participants first entered the eye region around 400 ms after face onset. There

were no significant differences in the time at which participants entered the eye region of faces

paired with negative versus neutral sentences (t(11) = 1.43, p > .1). Thus, emotion-modulated

viewing differences found during the first gaze occurred approximately between 400-1400 ms

(the duration of the first gaze was approximately 1000 ms, see Table 4.3).

Overall differences in viewing. In contrast to early measures of viewing which showed increased

eye movement sampling of negative versus neutral faces, viewing across the entire trial revealed

fewer fixations (F(1,11) = 4.34, p = .06, d = .28) and fewer transitions between face features

(F(1,11) = 10.51, p < .01, d = .49) for faces paired with negative versus neutral sentences. A

significant interaction for the number of transitions (F(2,22) = 5.40, p < .05, d = .33) revealed

that participants made fewer transitions into the eye region of faces paired with negative versus

neutral sentences (t(11) = -3.45, p < .01), whereas there was no difference in viewing of the other

features (p’s > .1). No significant effects were found for the measure of average fixation

duration (p’s > .1).

In summary, emotion led to early changes in viewing for both emotional stimuli (i.e. words) and

neutral stimuli that were associated with emotion (i.e. faces). Specifically, association with

negative versus neutral sentences initially led to an increase in viewing of the eye region of

neutral faces, and perhaps as a consequence, decreased overall sampling (i.e. fewer fixations and

transitions) across the remainder of the trial.


4.4.1.3 Neural Activity to Faces

As for the above analyses regarding eye movement behaviour, the extent to which neural activity

observed during viewing of faces paired with negative sentences differed from neural activity

observed during viewing of faces paired with neutral sentences, was considered to provide

evidence that emotion modulates immediate processing of neutral information via association.

PLS analysis did not reveal any differences between Face1-Negative and Face1-Neutral. For

Face 2, PLS analysis yielded one significant design LV for each of the time windows 600-1200 ms (p < .05; Figure 4.2) and 900-1500 ms (p = .05).

Figure 4.2 LV1 from PLS analysis

[Plot: “Differences in Processing Face 2”; contrast coefficients (-1 to 1) for the Negative and Neutral conditions.]

LV1 revealed that association with negative or neutral sentences yielded unique patterns of brain

activation during 600-1200 ms after stimulus onset. The same pattern was observed for the time window

900-1500 ms.

LV1 revealed greater activation for faces paired with negative versus neutral sentences in

emotion processing regions such as the left amygdala (950-976 ms), right cingulate (1027-1142

ms), and left medial frontal gyrus (1078-1104 ms), and in posterior regions such as the right

precuneus (989-1053 ms), right inferior parietal lobule (1014-1040 ms) and dorsolateral prefrontal cortex (1155-1245 ms; Figure 4.3).


Figure 4.3 Sources showing stronger activation for faces paired with negative as compared

to neutral sentences


A. PLS bootstrap ratio plots from LV1. B. Corresponding ER-SAM waveforms from LV1. Blue dots

denote bootstrap ratios <-3, and red dots denote bootstrap ratios >3.

Greater activation for faces paired with neutral versus negative sentences was found in the left

parahippocampal gyrus (835-925 ms) and right superior frontal gyrus (733-810 ms; Figure 4.4).

Interestingly, activation in the fusiform gyrus and bilateral lingual gyrus initially showed a larger response for faces paired with neutral versus negative sentences, but ultimately showed a larger response for faces paired with negative versus neutral sentences (Figure 4.5).


Figure 4.4 Sources showing stronger activation for faces paired with neutral as compared

to negative sentences

A. PLS bootstrap ratio plots from LV1. B. Corresponding ER-SAM waveforms from LV1. Blue dots

denote bootstrap ratios <-3, and red dots denote bootstrap ratios >3.


Figure 4.5 Sources initially showing stronger activation for faces paired with neutral as

compared to negative sentences, then stronger activation for faces paired with negative as

compared to neutral sentences


A. PLS bootstrap ratio plots from LV1. B. Corresponding ER-SAM waveforms from LV1. Blue dots

denote bootstrap ratios <-3, and red dots denote bootstrap ratios >3.

4.4.2 Test Phase

Analyses of variance (ANOVA) were conducted on uncorrected hits using face type (novel,

repeated-negative, repeated-neutral) as a within-subject factor. We first examined the extent to

which emotion influenced memory for the associated item. No significant differences in

accuracy emerged between the three face types (F(2,22) = .15, p > .1). Corrected hit rates were

around chance levels (chance = .44; old-negative: M = .41, SEM = .03; old-neutral: M = .46, SEM =

.03). We also examined the extent to which emotion influenced memory for the relationship

between the neutral face and the associated sentence. There was a significant effect of face type


(F(2,22) = 34.00, p < .001, d = .76) such that accuracy for identifying novel faces was

significantly higher than accuracy for identifying repeated faces that had been presented with a

negative (t(11) = 5.93, p < .001) or a neutral sentence (t(11) = 6.56, p < .001). There were no

differences in uncorrected (t(11) = .63, p > .1) or corrected hits (t(11) = -1.44, p > .1) between

identifying repeated faces that had been presented with a negative and neutral sentence. Mean

values can be found in Table 4.4. Of note, when hits were corrected by false alarms (i.e. for old-

negative faces, responses of “old-negative” to new and old-neutral faces were scored as false

alarms), accuracy fell below chance levels suggesting that participants were unable to

differentiate between the face types. Taken together, the results suggest that the task was too

difficult to observe any potential effects of emotion on memory.

Table 4.4 Mean accuracy and SEM for identifying different face types during the test phase

                                 Face Type
Response Type (Uncorrected)      New           Old-Negative    Old-Neutral
“Novel”                          .64 (.04)     .39 (.02)       .38 (.03)
“Old-Negative”                   .20 (.03)     .34 (.03)       .30 (.02)
“Old-Neutral”                    .16 (.02)     .27 (.02)       .32 (.02)

                                 Chance        Old-Negative    Old-Neutral
Corrected Hits                   .11           -.17 (.04)      -.11 (.04)

4.5 Discussion

Previous research showed that viewing of faces expressing emotion was associated with distinct

viewing patterns, but that such viewing patterns can be altered by contextual information

(Aviezer et al., 2008). However, previous research has not examined the influence of emotion

on the processing of neutral information when that neutral information is presented in isolation

during the study phase. In other words, it is not clear whether emotion may exert influence on

the processing of neutral information via association, even when it is no longer present. In the

current study, it was found that: (1) emotion led to increased early viewing of both the stimulus

itself (i.e. the negative word) and the associated neutral stimulus (i.e. the neutral face); (2)

emotion exerted its influence on neutral faces during later stages in processing, between 600-

1500 ms after stimulus onset, as revealed by magnetoencephalography (MEG); and (3) observed

differences in viewing and neural activity during the study phase were not related to subsequent

memory effects. In the next sections, we discuss our results in light of prior findings regarding


how association with emotional information may modulate processing of neutral information,

and how the current work may inform theories regarding the influence of emotion on perception

and memory.

4.5.1 Emotion-Modulated Viewing of Words

In the present experiment, it was found that eye movements differentiated between the critical

negative and neutral word within the sentence within the first fixation. Further, it was also found

that over the entire trial, the average duration of a fixation was longer for negative than neutral

words. These results suggest that emotions modulate online processing and that this occurs very

early. This is consistent with previous neuroimaging studies showing that compared to neutral

stimuli, emotional stimuli elicit increased processing within the first 500 ms, with some reports

as early as 120 ms, after stimulus onset (e.g. Kissler, Herbert, Winkler, & Junghofer, 2009;

Vuilleumier & Pourtois, 2007; Peyk, Schupp, Elbert, & Junghofer, 2008). This may be an

adaptive function allowing one to prioritize the detection and processing of potentially

threatening and/or important information.

4.5.2 Emotion-Modulated Viewing of Neutral Faces

In addition to emotion-modulated differences in viewing words, association with negative versus

neutral information also influenced viewing to neutral faces. Specifically, it was found that

when faces were paired with negative versus neutral sentences, there was increased viewing to

the eye region of faces during the first gaze, i.e. between 400-1400 ms. This finding could be

interpreted in two possible ways. First, association with negative information may

fundamentally change the way in which a neutral face is processed, i.e. it may change perceptual

processing of neutral faces (Aviezer et al., 2008). Previous studies have shown that the viewing

of threat-related versus non-threat-related facial expressions is also characterized by increased

viewing to the eye region (e.g. Adolphs, Gosselin, et al., 2005; Gamer & Buchel, 2009; Itier &

Batty, 2009). Within the current experiment, the neutral faces may have taken on emotional

qualities and elicited viewing patterns similar to those reported for the viewing of faces that are

actually expressing emotion. Further work is needed to directly compare viewing of faces

expressing emotion and viewing of faces associated with emotion. Alternatively, emotion-

modulated differences in viewing neutral faces associated with negative versus neutral

information may not reflect differences in perceptual processing per se, but may rather indicate


processing differences after perceptual processing has occurred. Specifically, after building a

visual percept of the neutral face, participants may have then bound the associated information to

the neutral face. Faces paired with negative versus neutral sentences may have invoked a greater

need to reappraise the face, leading to increased viewing to regions of the face that are the most

informative (i.e. the eyes). Consistent with the axiom that the eyes may be a “window to the

soul”, previous research shows that viewing to the eye region is associated with the assessment

of interest, threat, and intentions of other people (e.g. Haxby, Hoffman, and Gobbini, 2002;

Spezio et al., 2007).
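As a concrete illustration of this first-gaze measure, the sketch below computes the proportion of viewing time that falls on the eye region within an early window. The fixation records, the 400-1400 ms window boundaries, and the rectangular region-of-interest coordinates are hypothetical, not the thesis's actual analysis code.

```python
WINDOW = (400, 1400)            # ms after face onset (assumed)
EYE_ROI = (180, 120, 460, 220)  # x1, y1, x2, y2 in pixels (hypothetical)

def in_roi(x, y, roi):
    x1, y1, x2, y2 = roi
    return x1 <= x <= x2 and y1 <= y <= y2

def eye_region_proportion(fixations):
    """fixations: dicts with onset/offset (ms after face onset) and x/y."""
    total = roi_time = 0.0
    for f in fixations:
        # Clip each fixation to the analysis window.
        start = max(f["onset"], WINDOW[0])
        end = min(f["offset"], WINDOW[1])
        if end <= start:
            continue
        dur = end - start
        total += dur
        if in_roi(f["x"], f["y"], EYE_ROI):
            roi_time += dur
    return roi_time / total if total else 0.0

fixations = [
    {"onset": 350, "offset": 620, "x": 300, "y": 180},   # on the eyes
    {"onset": 650, "offset": 1020, "x": 310, "y": 400},  # on the mouth
]
print(eye_region_proportion(fixations))  # ~0.37 of windowed viewing
```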

4.5.3 Emotion-Modulated Processing of Neutral Faces

In addition to examining emotion-modulated differences in viewing, we also examined the extent

to which emotion may influence underlying neural activity associated with processing the

associated neutral face, and precisely when emotion may exert its influence. This may shed light

on how the processing of faces associated with emotion may differ from that of faces associated

with neutral information. For example, if emotion modulated processing of neutral faces early,

i.e. within the first 200 ms, then this may suggest that emotion changes perceptual processing of

associated neutral faces. However, PLS analysis did not reveal any significant spatiotemporal

differences in processing neutral faces paired with negative versus neutral sentences within the

first 200 ms after stimulus onset. In fact, no such differences were observed until a later period,

between 600-1500 ms. This time frame is consistent with eye movement monitoring results

which revealed viewing differences between neutral faces paired with negative versus neutral

sentences between 400-1400 ms (discussed above). Taken together, this suggests that emotion

may not exert its influence on the perceptual processing of associated neutral faces; rather, it may

modulate later stages of processing such as binding the sentence to the face and/or the process of

reappraising the neutral face in light of the new information obtained.

Further, given that the differences in viewing to, and processing of (i.e. underlying neural

activity), neutral faces paired with negative versus neutral information occurred in and around the

same time window, it could be argued that there may be a reciprocal relationship between eye

movements and neural activity. Specifically, eye movements entered the eye region at around

400 ms after face onset. However, since we did not observe differences in the duration of the

first fixation between viewing of faces associated with negative versus neutral sentences, this


suggests that emotion-modulated differences in viewing culminated only after participants had

some time to explore the region. In this way, the emotional information associated with the face

may drive participants to direct more viewing to informative regions of the face such as the eyes.

In doing so, this may then increase neural activity in regions implicated in emotion processing

(Hannula et al., 2010), which may then result in the construction of a different type of internal

representation as compared to faces paired with a neutral sentence.

In support of the notion that association with emotion may lead to the construction of different

types of representations, processing neutral faces paired with negative versus neutral sentences

elicited stronger activity in neural regions implicated in emotion processing such as the

amygdala, cingulate and medial prefrontal cortex (e.g. Adolphs, Gosselin, et al., 2005; Anderson,

Christoff, Panitz, De Rosa, & Gabrieli, 2003; Craig, 2002; Culham & Kanwisher, 2001;

Kensinger & Corkin, 2003; Lane, Reiman, Ahern, Schwartz, & Davidson, 1997). Interestingly,

the brain regions to first differentiate between neutral faces paired with negative versus neutral

sentences were the amygdala and the dorsolateral prefrontal cortex, the latter of which has been associated with

memory encoding and retrieval (e.g. Blumenfeld, Parks, Yonelinas, & Ranganath, 2011; Nyberg

et al., 2003; Ranganath, Johnson, & D'Esposito, 2003). It is possible that increased activity in

these regions then drove subsequent changes in the brain, leading to increased reappraisal of

neutral faces paired with negative as compared to neutral sentences. Consistent with this notion,

activation differences in the amygdala and dorsolateral prefrontal cortex were followed by

differences in neural activity in the precuneus and inferior parietal lobule, which have been linked

to the maintenance of attention and the processing of salient information (e.g. Culham &

Kanwisher, 2001; Singh-Curry & Husain, 2009). Activation peaks in these two parietal regions

were followed by peaks in other emotion-processing regions such as the

cingulate and medial prefrontal cortex (e.g. Craig, 2002; Culham & Kanwisher, 2001; Kensinger

& Corkin, 2003). This may reflect the retrieval and attachment of emotional information to a

neutral face (e.g. Fenker et al., 2005; E. J. Maratos, Dolan, Morris, Henson, & Rugg, 2001;

Medford et al., 2005; A. P. Smith, Henson, Dolan, & Rugg, 2004) and/or the reappraisal and

assessment of the neutral face in light of the negative sentence.

Further, the analysis also revealed a number of regions, namely the fusiform gyrus, superior

temporal sulcus (STS) and bilateral lingual gyrus, which were first more active for faces

associated with neutral versus negative sentences, but became more active for faces associated


with negative versus neutral sentences. The STS has been shown to be involved in inferring the

intentions and attributes of other people (e.g. Winston, Strange, O'Doherty, & Dolan, 2002) and

the representation of biographical information (Haxby et al., 2002; Todorov et al., 2007). In this

way, associated information may then modulate and change encoding processes, possibly via

enhanced activation of visual processing regions in the occipital cortex and specific face

processing modules in the fusiform gyrus (Halgren, Raij, Marinkovic, Jousmaki, & Hari, 2000;

Prince, Dennis, & Cabeza, 2009). Critically, these processes may be more delayed for neutral

faces paired with negative versus neutral sentences because the emotional significance of the face

and/or congruence between the physical properties of the face and associated biographical

information must first be evaluated/considered. Taken together, results from MEG suggest that

emotional information may influence the processing of neutral faces via a parietal-limbic-frontal

network that may both drive and be modulated by eye movement behaviour.

Contrary to our original predictions, results from MEG did not reveal differences in neural

activity in the anterior insula. This is likely because the anterior insula is predominantly

involved in the processing of disgust (e.g. Anderson et al., 2003; Lane et al., 1997). In contrast,

the negative information presented within the paradigm may have elicited different types of

negative emotions including fear, anger and disgust, which may have decreased the power with

which emotion-modulated differences could be observed. Another possibility for why

spatiotemporal differences in the anterior insula were not observed between processing of faces

associated with negative versus neutral sentences may be related to individual differences.

Specifically, some recent research has shown that the level of activation in the anterior insula varies

across participants, and may depend on factors such as personality (Mataix-Cols et al., 2008;

Schafer, Leutgeb, Reishofer, Ebner, & Schienle, 2009) and sex (Aleman & Swart, 2008; Caseras,

Mataix-Cols, et al., 2007). Thus, it is possible that we failed to observe any emotion-modulated

differences in this region because not all of the participants showed increased activation during

processing of faces paired with negative as compared to those paired with neutral sentences.

This is especially relevant because the bootstrap method used in the present experiment is a

measure of stability across participants for a particular brain voxel, i.e. a high bootstrap ratio

value indicates that most, or all of the participants showed similar spatiotemporal differences

between the experimental conditions, and a low bootstrap ratio value indicates that few or none

of the participants showed similar spatiotemporal differences.
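For readers unfamiliar with this measure, the following is a minimal sketch of the bootstrap-ratio logic described above: the observed condition difference divided by the standard error of that difference across bootstrap resamples of participants. It illustrates the general idea rather than the actual PLS implementation, and the per-participant difference values are invented.

```python
import random
import statistics

def bootstrap_ratio(per_subject_diffs, n_boot=1000, seed=0):
    """Observed mean condition difference divided by its bootstrap SE."""
    rng = random.Random(seed)
    n = len(per_subject_diffs)
    observed = statistics.mean(per_subject_diffs)
    boot_means = [
        statistics.mean(rng.choices(per_subject_diffs, k=n))
        for _ in range(n_boot)
    ]
    return observed / statistics.stdev(boot_means)

# Stable effect: every participant shows a similar difference -> high ratio.
print(bootstrap_ratio([0.9, 1.1, 1.0, 0.8, 1.2]))
# Unstable effect: the sign flips across participants -> ratio near zero.
print(bootstrap_ratio([0.9, -1.1, 1.0, -0.8, 0.2]))
```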


In addition to the above, another unexpected finding was that the processing of faces paired with

neutral versus negative sentences elicited stronger activation within the medial temporal lobe, i.e.

in the parahippocampal gyrus. Given that the current task required participants to process two

stimuli arbitrarily paired together (i.e. face and sentence), we had initially expected activation of

the hippocampus proper as this has been shown to be critical for the formation of

relations/associations among items into a lasting representation (e.g. Chun & Phelps, 1999; N. J.

Cohen et al., 1997; Ryan et al., 2000). The absence of activation differences in the hippocampus

proper, and the presence of activation differences in the parahippocampal gyrus may suggest that

the processing of faces paired with neutral versus negative sentences may rely more on a blended

versus relational representation. For instance, Moses and Ryan (2006) argued

that whereas the hippocampus mediates relational representations that are flexible, structures

outside of the hippocampus such as the parahippocampal gyrus mediate blended representations

that are not flexible, i.e. a change in any aspect would significantly disrupt subsequent

processing and memory.

Another possible reason why we did not observe activation differences in the hippocampus proper

relates to source localization concerns. Specifically, in a study using simulated MEG

activity presented with real background brain activity, it was found that MEG was not able to

differentiate between the hippocampus and parahippocampal gyrus when activity in these two

regions overlapped in time (Stephen et al., 2005). Instead, the source was placed in either the

hippocampus or the parahippocampal gyrus. In other words, it is possible that in the current

task, both the hippocampus and the parahippocampal gyrus were active at the same time, but

only activity in the parahippocampal gyrus was observed. In support of this, a number of fMRI

studies examining associative learning/memory have reported increased activity in both the

hippocampus proper and the parahippocampal gyrus (e.g. Bar, Aminoff, & Schacter, 2008;

Duzel et al., 2003; Kirwan & Stark, 2004; Yonelinas et al., 2001). This suggests that the

hippocampus does not operate in a vacuum but interacts with other brain regions in order to form

and support different representations for the same information input (i.e. relational and blended).

Future research is needed in order to determine whether association with neutral versus negative

information leads to a greater reliance on blended representations mediated by the

parahippocampal gyrus, or blended and relational representations mediated by the

parahippocampal gyrus and hippocampus, respectively.


Irrespective of whether association with neutral versus negative information led to a greater

reliance on blended and/or relational representations, the finding that processing of faces paired

with neutral versus negative sentences elicited stronger activation within the parahippocampal

gyrus may represent enhanced processing of associated neutral versus negative information and

the blending/binding of that information to the neutral face. This was somewhat counterintuitive

as it seemed that it would be more important and relevant to bind emotionally salient information

to the neutral face rather than affectively neutral information. However, there has been some

suggestion in the literature that while the processing and memory of associated neutral

information is mediated by the medial temporal lobes, processing and memory of associated

emotional information may be more dependent on emotion processing regions such as the

amygdala and temporal poles (Phelps & Sharot, 2008). For example, Todorov and Olson

(Todorov & Olson, 2008) presented participants with neutral faces paired with positive or

negative sentences and later asked them to rate each face on scales of likeability, trustworthiness

and competence, and to make a forced-choice judgment of preference between a face that had

been previously paired with positive versus negative behaviours. They found that while healthy

controls and patients with hippocampal damage showed learning effects (i.e. preferring faces

previously paired with positive versus negative behaviours), patients with damage to the

amygdala and temporal poles did not. In light of this, it is possible that in the present

experiment, participants built different types of representations for the face-sentence pairings

depending on whether the sentence was negative or neutral. Specifically, participants may have

built a parahippocampal-based representation of faces with neutral sentences, and an emotion

system-based representation of faces with negative sentences. However, it is important to note

that these differences in neural activity were relative rather than absolute. In this way, the

evidence lends support to the notion that binding/blending of emotional versus neutral

information may depend more on one system versus another, not that they rely only on one

system versus another. Future work can examine what this may mean for the type of

representation formed, for example, if associations between neutral stimuli are mediated

predominantly by the parahippocampal gyrus, then are these representations more flexible,

detailed and/or more stable than those between emotional stimuli (e.g. N. J. Cohen et al., 1997)?


4.5.4 Emotion-Modulated Memory for Neutral Faces

Given the above emotion-modulated differences in viewing and underlying neural activity during

the study phase, we were surprised to find that emotion did not seem to modulate subsequent

recognition memory. This is likely due to the fact that there were too many face-sentence

pairings, leading to at-floor memory performance, thereby masking any potential mnemonic

effects that emotion may have had. In the absence of any emotion-modulated effects in memory

performance, this suggests that the differences observed in eye movement patterns and neural

activity between faces paired with negative versus neutral sentences during the study phase were

the result of emotion’s effects on processing associated information, independent (at least in

part) of its effects on subsequent memory. It is possible that such effects would be more robust,

or different altogether, if we only examined neural activity underlying faces that were

subsequently remembered. Unfortunately, due to signal-to-noise constraints and the current

study’s low accuracy rates, such a comparison was not possible. Further, in the present

experiment, memory was examined via conscious explicit report. However, prior studies have

shown that even in the absence of conscious awareness, memory can be gleaned from other

aspects of behaviour such as eye movement behaviour (Hannula et al., 2010; Ryan et al., 2000).

Thus, it would be interesting for future research to examine how association with emotion may

modulate memory as indexed by different measures.

4.5.5 Conclusions

In the current work, it was found that emotions can alter processing of otherwise neutral

information by changing overt viewing patterns. This adds to the growing literature showing

that visual processing is determined not only by bottom-up physical characteristics, but also by

top-down influences such as prior knowledge, memory and context (Aviezer et al., 2008; Ryan et

al., 2008). Further, it was also found that processing of faces paired with negative sentences

relied more on a neural network mediated by regions involved in emotion, whereas processing of

faces paired with neutral sentences relied more on regions involved in memory (i.e. the

parahippocampal gyrus). This suggests that not only do emotions influence online processing of

associated information, but they may also alter the type of representation that is formed.


4.6 Acknowledgements

The authors wish to thank Douglas A. McQuiggan, Helen Dykstra, Nathaniel So and Amy

Oziel for their assistance in data collection. The authors also wish to thank Sandra Moses and

Bernhard Ross for their advice and technical assistance. This work was funded by Natural

Sciences and Engineering Research Council and Canada Research Chair grants awarded to JDR;

and a research studentship from Ontario Mental Health Foundation to LR.


Chapter 6 Theoretical and Methodological Contributions, and Concluding

Remarks


5 Theoretical and methodological contributions, and concluding remarks

In this thesis, I sought to examine how emotions may influence relational memory, or more

precisely, how emotions may influence the viewing, perception and retrieval of associated

neutral information. This is in contrast to previous literature that has focused predominantly on

how emotions may influence memory for the emotion-eliciting item. This research provides a

more comprehensive understanding of ‘emotional memories’ as our memories for emotional

events or scenes are rarely composed of just a single item, but are rather composed of multiple

items and the relations between them.

In order to examine how emotions may modulate relational memory, I set out to address the

following questions: (1) To what extent do emotions modulate relational memory via differences

in the amount of attention allocated during encoding (Chapter 2; Riggs et al., 2011); (2) to what

extent do emotions modulate relational memory via differences in the retrieval process (Chapter

3; Riggs et al., 2010); and (3) to what extent do emotions modulate relational memory via the

manner in which associated information is perceived and how are such processes supported in

the brain (Chapter 5)? In order to address these questions, I used a convergent methods approach

that included eye movement monitoring and magnetoencephalography (MEG). The advantage

of using these methods to examine cognitive processes comes from the fact that, unlike verbal

reports, they have the power to illuminate aspects of processing online, such as precisely when a

certain operation occurs and how it is supported within the brain. However, in order to use MEG

to address some of the theoretical questions above, I had to first overcome the methodological

issue regarding whether MEG could be successfully utilized to localize activity in deep sources

within the brain, critically the hippocampus (Chapter 4; Riggs et al., 2009).

In this final chapter, I provide a summary of the theoretical and methodological contributions of

this work, outline some of the limitations and provide future directions.


5.1 Theoretical Contributions

In integrating all of the results from the chapters together, there are several overarching themes

that emerge with regard to how emotions may influence memory, and also the nature of

memory itself. Each of these is discussed in turn below.

5.1.1 Emotions Modulate Visual Processing of Associated Information

Prior studies have shown that the viewing of emotional and neutral stimuli is associated with

different viewing patterns. For example, compared with faces in a neutral expression, viewing to

faces expressing threat is characterized by increased viewing of internal facial features,

especially of the eyes (Adolphs, Gosselin, et al., 2005; Calder et al., 2000; Gamer & Buchel,

2009; Green et al., 2003b; Itier & Batty, 2009; M. L. Smith et al., 2005). However, such

differences in viewing are driven not only by the affective significance of the faces, but also by

differences in the physical features. Specifically, compared to faces in neutral expressions, faces

expressing fear show enlarged eyes and faces expressing disgust show narrowed eyes. Studies

within the current thesis were not vulnerable to these concerns because they examined how

emotions influenced viewing of associated information that was otherwise neutral, e.g. pairing

either a negative or a neutral sentence with a face in a neutral expression (Chapter 5). Critically,

the physical properties of the neutral face were held constant and counterbalanced across

conditions. In this way, differences in viewing could only be the result of the type of

information that was associated with the face.

Results from this thesis show that not only do emotions modulate the amount of viewing that is

directed to associated neutral stimuli (Chapter 2), but also the pattern of viewing, i.e. participants

increased viewing to the eye region of neutral faces associated with negative versus neutral

sentences (Chapter 5). This suggests that visual processing is influenced not only by bottom-up

factors such as the physical properties of a stimulus, but also by top-down factors such as prior

experience and affective meaning (e.g. Althoff & Cohen, 1999; Bar, Neta, et al., 2006; Hannula

et al., 2010). In this way, emotions may modulate visual processing of associated information

which may in turn influence the type of representation that is formed, i.e. a representation

mediated by emotion-processing regions such as the amygdala and cingulate versus a

representation mediated by memory-processing regions such as the parahippocampal gyrus

(Chapter 5). It would be important for future research to examine the cognitive consequences of


different representations (e.g. is one type of representation more flexible, stable and/or long

lasting?) and how they may be affected in clinical disorders. For example, it is possible that

compared to healthy controls, patients with post-traumatic stress disorder or anxiety may rely

more on emotion-processing regions even for the processing of neutral information. In this way,

they may form internal representations that are more emotional, less flexible and/or more

difficult to extinguish.

5.1.2 Emotion-Modulated Relational Memory Is Not Mediated by Attention

Although it has often been suggested that emotions modulate memory for associated information

via differences in the allocation of attention (e.g. Armony & Dolan, 2002; J. M. Brown, 2003;

Easterbrook, 1959; Kensinger et al., 2005; E. F. Loftus et al., 1987; Wessel & Merckelbach,

1997), this has not been examined directly. I examined this issue within this thesis and

found that emotion-modulated memory was not mediated by differences in attention allocation.

Specifically, Chapter 2 showed that although the presence of an emotional scene led to decreased

amounts of attention to the associated neutral items, these emotion-modulated changes in

attention were not related to subsequent memory performance. Further, in Chapter 5, I found

that although association with emotions led to differences in the manner in which participants

viewed otherwise neutral faces, this did not result in differences in memory performance. Taken

together, this suggests that attention and memory are not perfectly

correlated and that perhaps it is not about how much attention is directed to a certain item or

feature within an item, but rather how deeply the item is processed (Craik, 2002). Further, this

also suggests that emotions may influence memory via other mechanisms including differences

in the retrieval process and possibly the post-stimulus elaboration process.

Post-stimulus elaboration refers to the process in which participants may continue to process and

elaborate on a stimulus even after the stimulus is no longer externally present (Hulse, Allan,

Memon, & Read, 2007; Kern, Libkuman, & Otani, 2002). For example, although Chapter 2

revealed that emotion-impaired memory for associated neutral information was not the result of

differences in the amount of attention allocated during the encoding phase (the time during

which the stimulus was presented on the screen), it is possible that in between trials, participants

continued to elaborate on the emotional item at the cost of encoding associated items, thereby

leading to the memory effects observed. In light of the above, it would be important for future


research to examine whether emotions may influence relational memory via post-encoding

processes.

5.1.3 Emotion Has Differential Effects on Memory

Another theme to emerge from my work is that emotion has differential effects on relational

memory, i.e. emotion does not always impair or enhance memory for associated information

(Riggs et al., 2010; Riggs et al., 2011). This is in contrast to the binding theory described by

MacKay and colleagues in which they proposed that emotions act as the ‘glue’ that binds the

emotional item to associated information, thereby enhancing memory for the emotional item and

associated information (Hadley & MacKay, 2006; MacKay & Ahmetzanov, 2005; MacKay et al.,

2004). The notion that emotion may have differential effects on explicit memory is supported by

prior literature. As described in Chapter 1, previous studies examining emotion-modulated

relational memory have shown emotion-enhanced relational memory (e.g. D'Argembeau & Van

der Linden, 2004, 2005; Hadley & MacKay, 2006; MacKay & Ahmetzanov, 2005; MacKay et

al., 2004) while others have shown emotion-impaired memory (e.g. Jurica & Shimamura, 1999;

Kensinger et al., 2005; Kramer et al., 1990; Levine & Pizarro, 2004; Pickel, 1998). This then

leads to the question of why emotion may have such differing effects on memory, or more

precisely, under what circumstances do emotions enhance versus impair relational memory?

It has been suggested that such selective effects of emotions may depend on relevancy (Burke et

al., 1992; Heuer & Reisberg, 1990; Reisberg & Heuer, 2004). Specifically, emotions may

enhance memory for the item and for associated information perceived to be relevant to the item,

and this may occur at a cost of impaired memory for associated information that is perceived to

be irrelevant. Consistent with this perspective, the current work also found that when the

associated neutral information was arbitrary and not relevant for the understanding of the

emotion-eliciting item, memory was impaired (Riggs et al., 2010; Riggs et al., 2011). However,

when the emotion-eliciting information was meaningfully associated with a neutral stimulus,

memory was not impaired (Chapter 5).

In addition to relevance, another factor that may influence whether emotion enhances or impairs

memory for associated information may be the amount of detail contained therein. In Chapter 3,

I found that while association with emotion impaired the more evaluative aspects of memory


retrieval, as well as recognition memory accuracy for specific visual details in the periphery (i.e.

when participants had to distinguish displays of objects that had been manipulated from those

that were repeated or novel), it did not impair memory when such detailed memory

representations were not required for the task (i.e. when participants had to identify displays that

were repeated). This is consistent with the notion that while emotions may enhance memory for

gist information (i.e. a general representation of the central elements of a scene), they may impair

memory for specific details (e.g. Adolphs et al., 2001). Further, results from Chapter 3 also

revealed that emotion did not impair how quickly/easily memory for associated neutral

information could be retrieved. This suggests that emotions do not impair all aspects of memory

for associated details and/or that early access to stored memory representations occurs in an obligatory

fashion. In this way, stored memory representations with affective associations/meaning may

direct attention to potentially relevant information even if the representation is not sufficiently

detailed to influence subsequent stages of retrieval and/or conscious awareness.

From the studies conducted in this thesis, it is possible that both relevance and the amount of

detail contained therein may play a role in determining whether emotions may impair explicit

memory for associated information or not. Future research could clarify how such factors may

interact and contribute to memory as measured by explicit report and memory as indexed by eye

movement monitoring (e.g. how do we determine what is relevant and what is not?), especially

when they may lead to conflicting predictions. For example, the ‘relevance’ hypothesis predicts

that emotions would enhance all aspects of memory for associated information, regardless of

how specific the details may be, as long as it was relevant. In contrast, the ‘gist/detail’

hypothesis predicts that emotions would impair memory for specific details regardless of

whether they were judged to be relevant or not. Further, if early indices of memory are truly

obligatory, then they should not be influenced by factors such as relevance.

Another factor that may influence whether emotions enhance or impair relational memory may

be the valence of the emotion. In the present work, all of the experiments conducted used stimuli

that depicted negative emotions such as fear and disgust rather than positive emotions such as

happiness. There may be important psychological distinctions between different emotions (e.g.

Aviezer et al., 2008). For example, prior research suggests that there are different information-

processing strategies associated with positive versus negative emotions. Several

studies have shown that positive emotions (happiness) can lead to a greater reliance on general


knowledge or stereotypes and be more vulnerable to intrusion errors in memory (Bless et al.,

1996; Park & Banaji, 2000). Further, it has also been shown that positive affect is associated

with a broadening of attention whereby one seeks a wider range of information from general

knowledge and the environment (Rowe et al., 2007). On the other hand, fear is associated with a

narrowing of attention whereby one focuses on the source of threat and the means available to

escape that threat (e.g. Easterbrook, 1959). In light of the above, it is possible that while

negative emotions may lead to certain tradeoffs in attention and memory, positive emotions may

actually be associated with a generalized enhancement of both attention and memory. In other

words, emotional valence may have different consequences on cognition.

In contrast to the above, prior literature also supports the possibility that both negative and

positive emotions may have the same effect on attention and memory. Specifically, emotion

may exert its effects via arousal rather than via valence. Valence refers to how positive or

negative a certain stimulus is, while arousal refers to a physiological and psychological state of

intensity. Prior research has shown that while arousal modulates memory via the amygdalar-

hippocampal network, valence modulates memory via the prefrontal-cortex-hippocampal

network (e.g. Anders, Lotze, Erb, Grodd, & Birbaumer, 2004; Kensinger & Corkin, 2004a;

Lewis, Critchley, Rotshtein, & Dolan, 2007). Given that all emotional stimuli fall somewhere

along the two dimensions of valence and arousal, it is likely that both characteristics play a role

in determining how emotions may modulate cognitive processes such as attention and memory.

It would be interesting for future research to specify the effects of valence versus arousal and

how they may interact with other factors such as relevance of associated information.

5.1.4 Encoding and Retrieval Occur in Stages

Another theme to emerge from the current work is that emotions may have differential effects on

memory depending on the stage of memory being examined. In Chapter 3, I examined

how association with emotion during the encoding stage may influence the subsequent retrieval

process. I found that while emotions influenced the more evaluative stages of retrieval, they did not

modulate how quickly participants were able to access memory regarding information associated

with an emotional stimulus. Further, in Chapter 5, I examined how association with emotion

may influence the manner by which participants view otherwise neutral faces and found that

emotion exerted its influence predominantly in early measures of viewing. Taken together, this


suggests that emotions may have differential effects on different stages of encoding and retrieval.

This also suggests that encoding and retrieval may not occur in an all-or-none fashion; rather,

they may proceed in stages (e.g. Parker, 1978).

The notion that encoding and retrieval may proceed in stages has important implications for our

understanding of memory and memory-related disorders. Specifically, certain disorders may

affect a particular stage of encoding or retrieval first. For example, it is often quite difficult to

distinguish between healthy older adults and those with mild cognitive impairment (MCI) as

both groups report difficulties with memory (Irish, Lawlor, O'Mara, & Coen, 2010; Price et al.,

2010; Schacter, Koutstaal, & Norman, 1997; Grady & Craik, 2000). However, it is possible that

while healthy older adults’ difficulties are in part the result of impairments in the evaluative

stages of retrieval, those with MCI may have greater impairments in the evaluative, as well as the

early stages of memory, i.e. they may have difficulties accessing stored memory representations

as well. By distinguishing between different stages and characterizing how different disorders

may influence these stages, we may increase the sensitivity with which we can diagnose them

and be better able to tailor specific rehabilitative programs to target those stages.

5.1.5 Memory Exerts Early Influences on Processing

In addition to the above insights regarding the nature of encoding and retrieval, my work also

sheds light on the relationship between memory and perception as well. In Chapter 4, I

examined the latency of hippocampal responses during a recognition memory task and found that

activity within the hippocampus peaked within the first 150 ms after stimulus onset, a time that is

typically associated with perceptual processing (e.g. Ryan et al., 2008; Tsivilis et al., 2001).

Further, in Chapter 3, I found that eye movements distinguished between changed and

unchanged visual displays within the first fixation and this was not modulated by emotion. This

suggests that memory exerts early influences on processing, perhaps even changing one’s

perception, and that such processes may occur in an obligatory fashion (Ryan et al., 2008).

The idea that our prior experience may influence the way in which we subsequently process the

same item/event/scene is not new. In 1890, William James wrote that “whilst part of what we

perceive comes through our senses from the object before us; another part (and it may be the

larger part) always comes… out of our own head.” (James, 1983). In support of this, a number


of fMRI studies have found that when a visual stimulus is paired with an auditory stimulus,

subsequent presentation of either stimulus alone elicited neural activity in both the visual cortex

and the auditory cortex (e.g. Nyberg, Habib, McIntosh, & Tulving, 2000; Wheeler et al., 2000).

Going further, Ryan and colleagues found that such differences in sensory activity occurred

within the first 200 ms after stimulus onset (Ryan et al., 2008). The authors concluded that our

prior experience may actually change subsequent perception such that we never perceive the

same item in exactly the same way.

The notion that prior experience may influence perceptual processing has important theoretical

implications for our understanding of memory and perception. First, it suggests that memory

retrieval may occur in a rapid and obligatory fashion (Riggs et al., 2010; Riggs et al., 2009;

Ryan, Hannula, et al., 2007; Ryan et al., 2008). In this way, prior experience may shape current

perception and behaviour in a way that is most efficient with minimal or no need for top-down

control. Second, if retrieval occurs in an obligatory manner and influences perception, this

suggests that perception and memory may not be as modular and independent as previously

described. Rather, these two processes may be plastic, intimately tied together and continually

shaping each other such that we never perceive or remember the same item twice.

The ability to empirically examine the relationship between perception and memory has only

been made possible in recent years with the advancement of technological tools such as eye

movement monitoring and MEG. My work sought to take full advantage of these tools in order

to begin addressing questions regarding the nature of emotion-modulated memory, and indeed

the nature of memory itself, that could not be answered by using verbal reports alone. In doing

so, the current thesis makes not only the theoretical contributions outlined above, but also

methodological contributions to the field of cognitive neuroscience via elucidation and

promotion of convergent methods.

5.2 Methodological Contributions

The majority of research in the field of emotions and memory has assessed memory through

recall or recognition paradigms, which, as mentioned in Chapter 1 (Section 1.1), cannot shed light

on how such differences in memory may occur. Further, verbal reports cannot tell us exactly

when memory may exert its influence during online processing of stimuli and how such

processes may be supported in the brain. In order to address some of these issues, I utilized eye


movement monitoring and MEG. Both technologies are relatively under-used in the fields of

emotion and memory research, but my research shows that the application of these tools can

yield unique insights that cannot be gleaned from other methods such as verbal reports and/or

other neuroimaging techniques such as fMRI and EEG.

5.2.1 Magnetoencephalography

The current work used MEG to study how emotions may influence perception. As mentioned in

Chapter 1, MEG has traditionally been used as a method to study neural activity from superficial

sources such as primary sensory or motor responses. Although cognitive neuroscientists are now

beginning to appreciate the potential and utility of MEG imaging to illuminate the underlying

mechanisms of complex cognitive processes such as memory, there has been considerable

debate regarding whether it is feasible to apply MEG to the study of memory due to difficulties

with imaging the hippocampus, a critical mnemonic structure.

In the current thesis, I showed that MEG can be successfully used to localize and characterize

hippocampal activity (Chapter 4; Riggs et al., 2009). In other words, MEG can be used to study

complex cognitive processes such as memory. Further, unlike more traditional

neuroimaging methods such as fMRI or PET, MEG offers excellent temporal resolution alongside good spatial resolution,

allowing it to characterize the precise dynamics underlying different cognitive operations (e.g.

memory encoding, memory retrieval); this has the potential to lead to new reconceptualizations

of cognition and brain functioning in general. For example, as discussed above (Section 5.1.5),

evidence from my MEG work suggests that memory and perception may not be modular

processes that are distinct from each other; rather, our prior experience may change subsequent

perception in an obligatory fashion. Further, my work has also shown that MEG can

characterize how the hippocampus functions, i.e. it oscillates in a theta rhythm. Thus, MEG has

the potential to reveal not only where and how fast a mnemonic process may occur, but also

how this process may occur.

In addition to examining how the hippocampus may function, it would be interesting for future

research to outline how the hippocampus may communicate with other regions in the brain, and

specifically, with the amygdala, in order to support emotion-modulated memory. Unfortunately, I did

not find significant differences in the hippocampus for viewing faces paired with negative versus

neutral sentences in Chapter 5. Further, the low accuracy results in Chapter 5 did not permit me


to conduct a subsequent memory analysis (i.e. examine the brain regions associated with stimuli

that are subsequently recognised versus the brain regions associated with stimuli that are

subsequently forgotten) in which responses from the hippocampus and amygdala may have been

more robust. Nevertheless, studies using fMRI have shown that a predictor of emotion-enhanced memory

is the level of functional connectivity between the amygdala and the hippocampus during the

encoding phase (e.g. Murty, Ritchey, Adcock, & LaBar, 2010; Ritchey, Dolcos, & Cabeza, 2008;

St Jacques, Dolcos, & Cabeza, 2010). However, it is not clear how such ‘functional

connectivity’ is mediated.

In light of my work on the function of the hippocampus (Chapter 4; Riggs et al., 2009) and

previous literature (e.g. Buzsaki, 2002; Duzel, Picton, et al., 2001; Rugg et al., 1996; M. E.

Smith & Halgren, 1989), it is possible that the amygdala and hippocampus may oscillate in a

phase-locked manner in the theta frequency range. There has been some evidence in support of

this in the animal literature (e.g. Seidenbecher, Laxmi, Stork, & Pape, 2003). In the human

literature, amygdala activity has predominantly been associated with gamma oscillations (e.g.

Luo et al., 2007; Luo et al., 2010; Oya, Kawasaki, Howard, & Adolphs, 2002), but some studies

show that the amygdala also oscillates in the theta range during viewing of coarse fearful versus

neutral faces (F. A. Maratos, Mogg, Bradley, Rippon, & Senior, 2009) and during affective

priming (Garolera et al., 2007). However, further studies are needed in order to clarify whether

the amygdala and other emotion-processing regions such as the caudate and cingulate (Chapter

5) may modulate the hippocampus in a phase-locked manner in the theta frequency, and how this

may be related to subsequent memory performance.
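As an illustration of how such phase-locked theta coupling could be quantified, the sketch below computes a phase-locking value (PLV) between two band-passed source time series (e.g. putative amygdala and hippocampus signals). The signals, sampling rate, and filter settings are assumptions for demonstration, not the analysis pipeline used in this thesis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 600  # sampling rate in Hz (assumed)

def theta_plv(sig_a, sig_b, low=4.0, high=8.0):
    # Band-pass both signals in the theta range, then extract
    # instantaneous phase via the analytic (Hilbert) signal.
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    phase_a = np.angle(hilbert(filtfilt(b, a, sig_a)))
    phase_b = np.angle(hilbert(filtfilt(b, a, sig_b)))
    # PLV: length of the mean phase-difference vector (1 = fully locked).
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Two noisy signals sharing a 6 Hz rhythm at a fixed lag remain phase-locked.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
sig_a = theta + 0.5 * rng.standard_normal(t.size)
sig_b = np.roll(theta, 20) + 0.5 * rng.standard_normal(t.size)
print(theta_plv(sig_a, sig_b))  # close to 1 despite noise and the lag
```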

5.2.2 Eye Movement Monitoring and Magnetoencephalography

In addition to showing that MEG can be successfully used to characterize hippocampal activity, I

also showed that MEG can be combined with eye movement monitoring in order to characterize

eye movement behaviour and underlying neural activity simultaneously (Chapter 5). This was

one of the first few attempts to combine both technologies (Herdman & Ryan, 2007; Hirvenkari

et al., 2010) and the first to use this combination to examine how emotions may influence online

processing of associated information.

The combination of eye movement monitoring with MEG represents a powerful converging

methods approach with which to study cognitive processes. For example, my current work using


eye movement monitoring showed that eye movements distinguished between changed and

unchanged visual displays within the first fixation and that such early eye movement based

effects of memory were uninfluenced by emotion (Chapter 3). This is consistent with my MEG

work, which suggests that memory may exert its influence very early on during online processing

and that this may occur in an obligatory fashion (Chapters 4 and 5). Future work could take this

further and link specific eye movement indices of memory retrieval to the underlying neural

activity as measured by MEG. In this way, one can outline exactly which neural regions/systems

are driving certain eye movement behaviours. Specifically, which neural regions/systems drive

early versus later and more evaluative eye movement based effects of memory?
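As a sketch of what linking eye movement indices to MEG activity could look like in practice, the code below extracts baseline-corrected MEG epochs time-locked to fixation onsets, so that neural activity can be examined per fixation. The channel count, sampling rate, and fixation timestamps are hypothetical assumptions for illustration.

```python
import numpy as np

FS = 600  # MEG sampling rate in Hz (assumed)

def fixation_locked_epochs(meg, fixation_onsets_ms, pre_ms=100, post_ms=400):
    """meg: (n_channels, n_samples) array; returns (n_fix, n_channels, n_times)."""
    pre = int(pre_ms * FS / 1000)
    post = int(post_ms * FS / 1000)
    epochs = []
    for onset_ms in fixation_onsets_ms:
        s = int(onset_ms * FS / 1000)  # fixation onset in samples
        if s - pre < 0 or s + post > meg.shape[1]:
            continue  # skip fixations without a full window around them
        epoch = meg[:, s - pre : s + post]
        # Baseline-correct each channel using the pre-fixation interval.
        epochs.append(epoch - epoch[:, :pre].mean(axis=1, keepdims=True))
    return np.stack(epochs)

# Hypothetical 151-channel recording (5 s) and three fixation onsets.
meg = np.random.default_rng(1).standard_normal((151, 5 * FS))
print(fixation_locked_epochs(meg, [420, 980, 1650]).shape)  # (3, 151, 300)
```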

Further, results from Chapter 5 showed that differences in viewing to (as measured by eye

movement monitoring), and processing (as measured by MEG) of neutral faces associated with

negative versus neutral sentences emerged in and around the same window, i.e. 400-1400 ms

after stimulus onset. This suggests that emotion-modulated differences in viewing and

processing may interact in a positive feedback loop such that associated emotional information

about the face may drive participants to direct more viewing to informative regions of the face

such as the eyes, leading to increases in neural activity in emotion-processing regions, which

may then result in further differences in eye movement behaviour. It would be interesting for

future research to examine precisely how indices of eye movement behaviour are related to

underlying changes in brain activity.

By demonstrating that such a convergent approach to studying cognitive processes is feasible, my

work paves the way for future research to expand and build upon it and begin to address new

kinds of questions that could not be empirically examined with other methods such as fMRI. For

example: which neural networks underlie early obligatory versus later evaluative aspects of

memory retrieval; and how do specific indices of eye movement behaviour and underlying

neural activity relate to each other? With an ever-increasing push for interdisciplinary research,

it seems that there also needs to be an accompanying push whereby we begin to move beyond a

modular view of cognition, which, albeit useful, may not be an accurate reflection of the way in

which the brain operates. As my work has shown, eye movement monitoring and MEG are both

excellent tools for the study of how ‘different’ cognitive operations may interact and influence

each other, and both have the potential to reveal aspects of cognition that cannot be gleaned by


other methods such as verbal report or fMRI, insights that can shape our theoretical understanding of the

brain.

5.3 Summary and Concluding Remarks

The current body of work went beyond simple verbal report measures of memory and used

methodologies that enabled us to address questions that could not be gleaned by verbal reports

alone, such as: how do emotions influence online processing of associated information, and do

emotions influence all stages of the retrieval process? In doing so, I found that emotions

influenced not only the amount of attention allocated to the associated information, but also the

manner in which participants viewed it and the type of representation formed. Further, contrary

to previous assumptions in the literature (e.g. Armony & Dolan, 2002; J. M. Brown, 2003;

Easterbrook, 1959; Kensinger et al., 2005; E. F. Loftus et al., 1987; Wessel & Merckelbach,

1997), I also found that emotion-modulated differences in the amount of attention allocated to the

associated information were not related to subsequent memory performance. Rather, emotion was

found to modulate memory via the evaluative stages of the retrieval process and perhaps other

unspecified factors not examined in the current work such as post-stimulus elaboration.

Interestingly, I also found that emotion did not modulate earlier stages of retrieval (likely related

to one’s ability to access stored memory representations), which may suggest that the early stages

of memory retrieval may occur in an obligatory fashion.

In exploring how emotions may modulate memory for associated information, I also showed that

MEG can be successfully used to outline hippocampal activity and that it can be combined with

eye movement monitoring as a convergent method for the study of cognitive processes. In the

future, the use of these methods can be applied to the study of how different cognitive processes

may interact and the study of healthy versus ‘impaired’ populations. For example, MEG can be

used to outline neural differences underlying emotion-modulated processing between healthy

controls and patients with clinical disorders such as post-traumatic stress disorder. Further,

combining MEG with eye movement monitoring could lead to a more comprehensive

understanding of how certain overt behaviours (i.e. eye movements) are directly linked with

underlying neural systems, and how such interactions may be disrupted in clinical disorders.

This may lead to insights on the nature of certain disorders and possibly better diagnostic


criteria and rehabilitative directions (e.g. Adolphs, Gosselin, et al., 2005; DeGutis, Bentin,

Robertson, & D'Esposito, 2007; Schmalzl, Palermo, Green, Brunsdon, & Coltheart, 2008).

In summary, the current work encourages a reconceptualization of emotion, memory and

perception and how they relate to one another. Specifically, rather than viewing them as

independent modular processes, they may, in fact, be more widely distributed in the brain and

interact more closely than previously described. This may be evolutionarily adaptive, allowing us

to quickly and efficiently form memories for emotional events/scenes that can later guide

perception and behaviour.


References

Adolphs, R. (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology,

12(2), 169-177.

Adolphs, R. (2003). Cognitive neuroscience of human social behaviour. Nature Reviews

Neuroscience, 4(3), 165-178.

Adolphs, R., Cahill, L., Schul, R., & Babinsky, R. (1997). Impaired declarative memory for

emotional material following bilateral amygdala damage in humans. Learning &

Memory, 4(3), 291-300.

Adolphs, R., Denburg, N. L., & Tranel, D. (2001). The amygdala's role in long-term declarative

memory for gist and detail. Behavioral Neuroscience, 115(5), 983-992.

Adolphs, R., Gosselin, F., Buchanan, T. W., Tranel, D., Schyns, P., & Damasio, A. R. (2005). A

mechanism for impaired fear recognition after amygdala damage. Nature, 433(7021), 68-

72.

Adolphs, R., Tranel, D., & Buchanan, T. W. (2005). Amygdala damage impairs emotional

memory for gist but not details of complex stimuli. Nature Neuroscience, 8(4), 512-518.

Adolphs, R., Tranel, D., & Damasio, A. R. (2003). Dissociable neural systems for recognizing

emotions. Brain and Cognition, 52(1), 61-69.

Adolphs, R., Tranel, D., & Denburg, N. (2000). Impaired emotional declarative memory

following unilateral amygdala damage. Learning & Memory, 7(3), 180-186.

Ahonen, A. I., Hamalainen, M. S., Ilmoniemi, R. J., Kajola, M. J., Knuutila, J. E., Simola, J. T.,

et al. (1993). Sampling theory for neuromagnetic detector arrays. IEEE Transactions on

Biomedical Engineering, 40(9), 859-869.

Aleman, A., & Swart, M. (2008). Sex differences in neural activation to facial expressions

denoting contempt and disgust. PloS One, 3(11), e3622.

Althoff, R. R., & Cohen, N. J. (1999). Eye-movement-based memory effect: a reprocessing

effect in face perception. Journal of Experimental Psychology: Learning, Memory, and

Cognition, 25(4), 997-1010.

Althoff, R. R., Cohen, N. J., McConkie, G., Wasserman, S., Maciukenas, M., Azen, R., et al.

(1998). Eye movement-based memory assessment. In W. Becker, H. Deubel & T.

Mergner (Eds.), Current Oculomotor Research: Physiological and Psychological

Aspects (pp. 293–302). New York: Plenum Press Publishers.

Anders, S., Lotze, M., Erb, M., Grodd, W., & Birbaumer, N. (2004). Brain activity underlying

emotional valence and arousal: a response-related fMRI study. Human Brain Mapping,

23(4), 200-209.


Anderson, A. K. (2005). Affective influences on the attentional dynamics supporting awareness.

Journal of Experimental Psychology: General, 134(2), 258-281.

Anderson, A. K., Christoff, K., Panitz, D., De Rosa, E., & Gabrieli, J. D. (2003). Neural

correlates of the automatic processing of threat facial signals. The Journal of

Neuroscience, 23(13), 5627-5633.

Anderson, A. K., & Phelps, E. A. (2001). Lesions of the human amygdala impair enhanced

perception of emotionally salient events. Nature, 411(6835), 305-309.

Anderson, A. K., Wais, P. E., & Gabrieli, J. D. (2006). Emotion enhances remembrance of

neutral events past. Proceedings of the National Academy of Sciences of the United States

of America, 103(5), 1599-1604.

Armony, J. L., & Dolan, R. J. (2002). Modulation of spatial attention by fear-conditioned

stimuli: an event-related fMRI study. Neuropsychologia, 40(7), 817-826.

Aviezer, H., Hassin, R. R., Ryan, J., Grady, C., Susskind, J., Anderson, A., et al. (2008). Angry,

disgusted, or afraid? Studies on the malleability of emotion perception. Psychological

Science, 19(7), 724-732.

Badgaiyan, R. D., Schacter, D. L., & Alpert, N. M. (2002). Retrieval of relational information: a

role for the left inferior prefrontal cortex. NeuroImage, 17(1), 393-400.

Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object

recognition. Journal of Cognitive Neuroscience, 15(4), 600-609.

Bar, M. (2004). Visual objects in context. Nature Reviews Neuroscience, 5(8), 617-629.

Bar, M., Aminoff, E., & Schacter, D. L. (2008). Scenes unseen: the parahippocampal cortex

intrinsically subserves contextual associations, not scenes or places per se. The Journal of

Neuroscience, 28(34), 8539-8544.

Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006).

Top-down facilitation of visual recognition. Proceedings of the National Academy of

Sciences of the United States of America, 103(2), 449-454.

Bar, M., Neta, M., & Linz, H. (2006). Very first impressions. Emotion, 6(2), 269-278.

Bardouille, T., & Ross, B. (2008). MEG imaging of sensorimotor areas using inter-trial

coherence in vibrotactile steady-state responses. NeuroImage, 42(1), 323-331.

Barense, M. D., Gaffan, D., & Graham, K. S. (2007). The human medial temporal lobe processes

online representations of complex objects. Neuropsychologia, 45(13), 2963-2974.

Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social

psychological research: conceptual, strategic, and statistical considerations. Journal of

Personality and Social Psychology, 51(6), 1173-1182.


Bate, S., Haslam, C., & Hodgson, T. L. (2009). Angry faces are special too: evidence from the

visual scanpath. Neuropsychology, 23(5), 658-667.

Baumgartner, C., Pataraia, E., Lindinger, G., & Deecke, L. (2000). Neuromagnetic recordings in

temporal lobe epilepsy. Journal of Clinical Neurophysiology, 17(2), 177-189.

Berg, P., & Scherg, M. (1994). A fast method for forward computation of multiple-shell

spherical head models. Electroencephalography and Clinical Neurophysiology, 90(1),

58-64.

Bless, H., Schwarz, N., Clore, G. L., Golisano, V., Rabe, C., & Wolk, M. (1996). Mood and the

use of scripts: does a happy mood really lead to mindlessness? Journal of Personality and

Social Psychology, 71(4), 665-679.

Blumenfeld, R. S., Parks, C. M., Yonelinas, A. P., & Ranganath, C. (2011). Putting the pieces

together: the role of dorsolateral prefrontal cortex in relational memory encoding. Journal

of Cognitive Neuroscience, 23(1), 257-265.

Boudo, G., Sarlo, M., & Palomba, D. (2002). Attentional resources measured by reaction times

highlight differences within pleasant and unpleasant, high arousing stimuli. Motivation

and Emotion, 26, 123–138.

Bradley, M. M. (Ed.). (1994). Emotional memory: A dimensional analysis. Hillsdale: Erlbaum.

Breier, J. I., Simos, P. G., Zouridakis, G., & Papanicolaou, A. C. (1998). Relative timing of

neuronal activity in distinct temporal lobe areas during a recognition memory task for

words. Journal of clinical and experimental neuropsychology, 20(6), 782-790.

Breier, J. I., Simos, P. G., Zouridakis, G., & Papanicolaou, A. C. (1999). Lateralization of

cerebral activation in auditory verbal and non-verbal memory tasks using

magnetoencephalography. Brain topography, 12(2), 89-97.

Breier, J. I., Simos, P. G., Zouridakis, G., & Papanicolaou, A. C. (2000). Lateralization of

activity associated with language function using magnetoencephalography: a reliability

study. Journal of Clinical Neurophysiology, 17(5), 503-510.

Brown, J. M. (2003). Eyewitness memory for arousing events: Putting things into context.

Applied Cognitive Psychology, 17(1), 93-106.

Brown, M. W., & Aggleton, J. P. (2001). Recognition memory: what are the roles of the

perirhinal cortex and hippocampus? Nature reviews. Neuroscience, 2(1), 51-61.

Brown, R., & Kulik, J. (1977). Flashbulb memories. Cognition, 5(1), 73-99.

Buchanan, T. W., Denburg, N. L., Tranel, D., & Adolphs, R. (2001). Verbal and nonverbal

emotional memory following unilateral amygdala damage. Learning & Memory, 8(6),

326-335.

Buckner, R. L. (2003). Functional-anatomic correlates of control processes in memory. The

Journal of Neuroscience, 23(10), 3999-4004.

Buckner, R. L., Wheeler, M. E., & Sheridan, M. A. (2001). Encoding processes during retrieval

tasks. Journal of Cognitive Neuroscience, 13(3), 406-415.

Burke, A., Heuer, F., & Reisberg, D. (1992). Remembering emotional events. Memory &

cognition, 20(3), 277-290.

Buzsaki, G. (2002). Theta oscillations in the hippocampus. Neuron, 33(3), 325-340.

Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and

fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1-47.

Cahill, L., Haier, R. J., Fallon, J., Alkire, M. T., Tang, C., Keator, D., et al. (1996). Amygdala

activity at encoding correlated with long-term, free recall of emotional information.

Proceedings of the National Academy of Sciences of the United States of America,

93(15), 8016-8021.

Cahill, L., & McGaugh, J. L. (1998). Mechanisms of emotional arousal and lasting declarative

memory. Trends in neurosciences, 21(7), 294-299.

Calder, A. J., Young, A. W., Keane, J., & Dean, M. (2000). Configural information in facial

expression perception. Journal of Experimental Psychology. Human perception and

performance, 26(2), 527-551.

Calvo, M. G., & Lang, P. J. (2004). Gaze patterns when looking at emotional pictures:

Motivationally biased attention. Motivation and Emotion, 28(3), 221-243.

Calvo, M. G., & Lang, P. J. (2005). Parafoveal semantic processing of emotional visual scenes.

Journal of Experimental Psychology. Human perception and performance, 31(3), 502-

519.

Campo, P., Maestu, F., Capilla, A., Fernandez, S., Fernandez, A., & Ortiz, T. (2005). Activity in

human medial temporal lobe associated with encoding process in spatial working

memory revealed by magnetoencephalography. The European journal of neuroscience,

21(6), 1741-1748.

Campo, P., Maestu, F., Ortiz, T., Capilla, A., Fernandez, S., & Fernandez, A. (2005). Is medial

temporal lobe activation specific for encoding long-term memories? NeuroImage, 25(1),

34-42.

Carlston, D. E., & Skowronski, J. J. (1994). Savings in the relearning of trait information as

evidence for spontaneous inference generation. Journal of personality and Social

Psychology, 66(5), 840-856.

Carretie, L., Hinojosa, J. A., Martin-Loeches, M., Mercado, F., & Tapia, M. (2004). Automatic

attention to emotional stimuli: neural correlates. Human Brain Mapping, 22(4), 290-299.

Caseras, X., Garner, M., Bradley, B. P., & Mogg, K. (2007). Biases in visual orienting to

negative and positive scenes in dysphoria: An eye movement study. Journal of abnormal

psychology, 116(3), 491-497.

Caseras, X., Mataix-Cols, D., An, S. K., Lawrence, N. S., Speckens, A., Giampietro, V., et al.

(2007). Sex differences in neural responses to disgusting visual stimuli: implications for

disgust-related psychiatric disorders. Biological psychiatry, 62(5), 464-471.

Cavanna, A. E., & Trimble, M. R. (2006). The precuneus: a review of its functional anatomy and

behavioural correlates. Brain, 129(3), 564-583.

Chau, W., Herdman, A. T., & Picton, T. W. (2004). Detection of power changes between

conditions using split-half resampling of synthetic aperture magnetometry data.

Neurology & clinical neurophysiology: NCN, 2004, 24.

Chen, Y. H., Dammers, J., Boers, F., Leiberg, S., Edgar, J. C., Roberts, T. P., et al. (2009). The

temporal dynamics of insula activity to disgust and happy facial expressions: a

magnetoencephalography study. NeuroImage, 47(4), 1921-1928.

Cheyne, D., Bakhtazad, L., & Gaetz, W. (2006). Spatiotemporal Mapping of Cortical Activity

Accompanying Voluntary Movements Using an Event-Related Beamforming Approach.

Human Brain Mapping, 27(3), 213-229.

Cheyne, D., Bostan, A. C., Gaetz, W., & Pang, E. W. (2007). Event-related beamforming: a

robust method for presurgical functional mapping using MEG. Clinical Neurophysiology,

118(8), 1691-1704.

Christianson, S. Å. (1992). Emotional stress and eyewitness memory: A critical review.

Psychological bulletin, 112(2), 284-309.

Christianson, S. Å., & Loftus, E. F. (1987). Memory for traumatic events. Applied Cognitive

Psychology, 1(4), 225-239.

Christianson, S. Å., Loftus, E. F., Hoffman, H., & Loftus, G. R. (1991). Eye fixations and

memory for emotional events. Journal of Experimental Psychology. Learning, memory,

and cognition, 17(4), 693-701.

Chun, M. M., & Phelps, E. A. (1999). Memory deficits for implicit contextual information in

amnesic subjects with hippocampal damage. Nature Neuroscience, 2(9), 844-847.

Cohen, D., Cuffin, B. N., Yunokuchi, K., Maniewski, R., Purcell, C., Cosgrove, G. R., et al.

(1990). MEG versus EEG localization test using implanted sources in the human brain.

Annals of neurology, 28(6), 811-817.

Cohen, N. J., & Eichenbaum, H. (1993). Memory, amnesia, and the hippocampal system.

Cambridge: MIT Press.

Cohen, N. J., Poldrack, R. A., & Eichenbaum, H. (1997). Memory for items and memory for

relations in the procedural/declarative memory framework. Memory, 5(1-2), 131-178.

Cohen, N. J., Ryan, J., Hunt, C., Romine, L., Wszalek, T., & Nash, C. (1999). Hippocampal

system and declarative (relational) memory: summarizing the data from functional

neuroimaging studies. Hippocampus, 9(1), 83-98.

Cohen, N. J., & Squire, L. R. (1980). Preserved learning and retention of pattern-analyzing skill

in amnesia: dissociation of knowing how and knowing that. Science, 210(4466), 207-210.

Corkin, S. (1968). Acquisition of motor skill after bilateral medial temporal-lobe excision.

Neuropsychologia, 6(3), 255-265.

Corkin, S. (1992). Lasting consequences of bilateral medial temporal lobectomy: Clinical course

and experimental findings in H. M. Cambridge, MA, US: The MIT Press.

Corkin, S. (2002). What's new with the amnesic patient H.M.? Nature reviews. Neuroscience,

3(2), 153-160.

Cornwell, B. R., Carver, F. W., Coppola, R., Johnson, L., Alvarez, R., & Grillon, C. (2008).

Evoked amygdala responses to negative faces revealed by adaptive MEG beamformers.

Brain research, 1244, 103-112.

Craig, A. D. (2002). How do you feel? Interoception: the sense of the physiological condition of

the body. Nature reviews. Neuroscience, 3(8), 655-666.

Craik, F. I. (2002). Levels of processing: past, present... and future? Memory, 10(5-6), 305-318.

Craik, F. I., Govoni, R., Naveh-Benjamin, M., & Anderson, N. D. (1996). The effects of divided

attention on encoding and retrieval processes in human memory. Journal of Experimental

Psychology. General, 125(2), 159-180.

Craik, F. I., Luo, L., & Sakuta, Y. (2010). Effects of aging and divided attention on memory for

items and their contexts. Psychology and aging, 25(4), 968-979.

Culham, J. C., & Kanwisher, N. G. (2001). Neuroimaging of cognitive functions in human

parietal cortex. Current Opinion in Neurobiology, 11(2), 157-163.

D'Argembeau, A., & Van der Linden, M. (2004). Influence of affective meaning on memory for

contextual information. Emotion, 4(2), 173-188.

D'Argembeau, A., & Van der Linden, M. (2005). Influence of emotion on memory for temporal

information. Emotion, 5(4), 503-507.

Daselaar, S. M., Rombouts, S. A., Veltman, D. J., Raaijmakers, J. G., Lazeron, R. H., & Jonker,

C. (2001). Parahippocampal activation during successful recognition of words: a self-

paced event-related fMRI study. NeuroImage, 13(6 Pt 1), 1113-1120.

Davachi, L., Mitchell, J. P., & Wagner, A. D. (2003). Multiple routes to memory: distinct medial

temporal lobe processes build item and source memories. Proceedings of the National

Academy of Sciences of the United States of America, 100(4), 2157-2162.

Deffke, I., Sander, T., Heidenreich, J., Sommer, W., Curio, G., Trahms, L., et al. (2007).

MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform

gyrus. NeuroImage, 35(4), 1495-1501.

DeGutis, J. M., Bentin, S., Robertson, L. C., & D'Esposito, M. (2007). Functional Plasticity in

Ventral Temporal Cortex following Cognitive Rehabilitation of a Congenital

Prosopagnosic. Journal of Cognitive Neuroscience, 19(11), 1790-1802.

Denburg, N. L., Buchanan, T. W., Tranel, D., & Adolphs, R. (2003). Evidence for preserved

emotional memory in normal older persons. Emotion, 3(3), 239-253.

Derryberry, D., & Tucker, D. M. (Eds.). (1994). Motivating the focus of attention. San Diego:

Academic Press.

Dolan, R. J. (2002). Emotion, cognition, and behavior. Science, 298(5596), 1191-1194.

Dolcos, F., LaBar, K. S., & Cabeza, R. (2004). Interaction between the amygdala and the medial

temporal lobe memory system predicts better memory for emotional events. Neuron,

42(5), 855-863.

Donaldson, D. I., & Rugg, M. D. (1998). Recognition memory for new associations:

electrophysiological evidence for the role of recollection. Neuropsychologia, 36(5), 377-

395.

Donaldson, D. I., & Rugg, M. D. (1999). Event-related potential studies of associative

recognition and recall: electrophysiological evidence for context dependent retrieval

processes. Brain research. Cognitive brain research, 8(1), 1-16.

Dougal, S., Phelps, E. A., & Davachi, L. (2007). The role of medial temporal lobe in item

recognition and source recollection of emotional stimuli. Cognitive, affective &

behavioral neuroscience, 7(3), 233-242.

Driscoll, I., Hamilton, D. A., Petropoulos, H., Yeo, R. A., Brooks, W. M., Baumgartner, R. N., et

al. (2003). The aging hippocampus: cognitive, biochemical and structural findings.

Cerebral cortex, 13(12), 1344-1351.

Duvernoy, H. M. (1988). The human hippocampus: An atlas of applied anatomy. Munich: J. F.

Bergmann.

Duzel, E., Habib, R., Rotte, M., Guderian, S., Tulving, E., & Heinze, H. J. (2003). Human

hippocampal and parahippocampal activity during visual associative recognition memory

for spatial and nonspatial stimulus configurations. The Journal of Neuroscience, 23(28),

9439-9444.

Duzel, E., Picton, T. W., Cabeza, R., Yonelinas, A. P., Scheich, H., Heinze, H. J., et al. (2001).

Comparative electrophysiological and hemodynamic measures of neural activation during

memory-retrieval. Human Brain Mapping, 13(2), 104-123.

Duzel, E., Vargha-Khadem, F., Heinze, H. J., & Mishkin, M. (2001). Brain activity evidence for

recognition without recollection after early hippocampal damage. Proceedings of the

National Academy of Sciences of the United States of America, 98(14), 8101-8106.

Easterbrook, J. A. (1959). The effect of emotion on cue utilization and the organization of

behavior. Psychological Review, 66(3), 183-201.

Eichenbaum, H., & Cohen, N. J. (2001). From conditioning to conscious recollection: Memory

Systems of the Brain. New York: Oxford University Press.

Fawcett, I. P., Barnes, G. R., Hillebrand, A., & Singh, K. D. (2004). The temporal frequency

tuning of human visual cortex investigated using synthetic aperture magnetometry.

NeuroImage, 21(4), 1542-1553.

Fenker, D. B., Schott, B. H., Richardson-Klavehn, A., Heinze, H. J., & Duzel, E. (2005).

Recapitulating emotional context: activity of amygdala, hippocampus and fusiform cortex

during recollection and familiarity. The European journal of neuroscience, 21(7), 1993-

1999.

Findlay, J. M., & Gilchrist, I. D. (2003). Active vision: The psychology of looking and seeing.

New York: Oxford University Press.

Fisher, N. I. (1993). Statistical analysis of circular data. Cambridge: Cambridge University Press.

Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial

Expressions of Emotion: Are Angry Faces Detected More Efficiently? Cognition &

emotion, 14(1), 61-92.

Fox, E., Russo, R., Bowles, R., & Dutton, K. (2001). Do threatening stimuli draw or hold visual

attention in subclinical anxiety? Journal of Experimental Psychology. General, 130(4),

681-700.

Fox, E., Russo, R., & Dutton, K. (2002). Attentional Bias for Threat: Evidence for Delayed

Disengagement from Emotional Faces. Cognition & emotion, 16(3), 355-379.

Foxe, J. J., & Simpson, G. V. (2002). Flow of activation from V1 to frontal cortex in humans. A

framework for defining "early" visual processing. Experimental brain research, 142(1),

139-150.

Fujioka, T., Zendel, B. R., & Ross, B. (2010). Endogenous neuromagnetic activity for mental

hierarchy of timing. The Journal of Neuroscience, 30(9), 3458-3466.

Furl, N., van Rijsbergen, N. J., Kiebel, S. J., Friston, K. J., Treves, A., & Dolan, R. J. (2010).

Modulation of perception and brain activity by predictable trajectories of facial

expressions. Cerebral cortex, 20(3), 694-703.

Gable, P. A., & Harmon-Jones, E. (2008). Approach-motivated positive affect reduces breadth of

attention. Psychological science, 19(5), 476-482.

Gaetz, W. C., & Cheyne, D. O. (2003). Localization of human somatosensory cortex using

spatially filtered magnetoencephalography. Neuroscience letters, 340(3), 161-164.

Gamer, M., & Buchel, C. (2009). Amygdala activation predicts gaze toward fearful eyes. The

Journal of Neuroscience, 29(28), 9123-9126.

Garolera, M., Coppola, R., Munoz, K. E., Elvevag, B., Carver, F. W., Weinberger, D. R., et al.

(2007). Amygdala activation in affective priming: a magnetoencephalogram study.

Neuroreport, 18(14), 1449-1453.

Golby, A. J., Poldrack, R. A., Brewer, J. B., Spencer, D., Desmond, J. E., Aron, A. P., et al.

(2001). Material-specific lateralization in the medial temporal lobe and prefrontal cortex

during memory encoding. Brain, 124(Pt 9), 1841-1854.

Gonsalves, B. D., Kahn, I., Curran, T., Norman, K. A., & Wagner, A. D. (2005). Memory

strength and repetition suppression: multimodal imaging of medial temporal cortical

contributions to recognition. Neuron, 47(5), 751-761.

Grady, C. L., & Craik, F. I. (2000). Changes in memory processing with age. Current Opinion in

Neurobiology, 10(2), 224-231.

Graf, P., & Schacter, D. L. (1985). Implicit and explicit memory for new associations in normal

and amnesic subjects. Journal of Experimental Psychology. Learning, memory, and

cognition, 11(3), 501-518.

Green, M. J., Williams, L. M., & Davidson, D. (2003a). Visual scanpaths and facial affect

recognition in delusion-prone individuals: Increased sensitivity to threat? Cognitive

neuropsychiatry, 8(1), 19-41.

Green, M. J., Williams, L. M., & Davidson, D. (2003b). Visual scanpaths to threat-related faces

in deluded schizophrenia. Psychiatry research, 119(3), 271-285.

Greenberg, D. L., Rice, H. J., Cooper, J. J., Cabeza, R., Rubin, D. C., & Labar, K. S. (2005). Co-

activation of the amygdala, hippocampus and inferior frontal gyrus during

autobiographical memory retrieval. Neuropsychologia, 43(5), 659-674.

Guderian, S., & Duzel, E. (2005). Induced theta oscillations mediate large-scale synchrony with

mediotemporal areas during recollection in humans. Hippocampus, 15(7), 901-912.

Hadley, C. B., & MacKay, D. G. (2006). Does emotion help or hinder immediate memory?

Arousal versus priority-binding mechanisms. Journal of Experimental Psychology.

Learning, memory, and cognition, 32(1), 79-88.

Halgren, E., Raij, T., Marinkovic, K., Jousmaki, V., & Hari, R. (2000). Cognitive response

profile of the human fusiform face area as determined by MEG. Cerebral cortex, 10(1),

69-81.

Hamada, Y., Sugino, K., Kado, H., & Suzuki, R. (2004). Magnetic fields in the human

hippocampal area evoked by a somatosensory oddball task. Hippocampus, 14(4), 426-

433.

Hämäläinen, M., Hari, R., Ilmoniemi, R. J., Knuutila, J., & Lounasmaa, O. V. (1993).

Magnetoencephalography — theory, instrumentation, and applications to noninvasive

studies of the working human brain. Reviews of Modern Physics, 65(2), 413–496.

Hanlon, F. M., Weisend, M. P., Huang, M., Lee, R. R., Moses, S. N., Paulson, K. M., et al.

(2003). A non-invasive method for observing hippocampal function. Neuroreport,

14(15), 1957-1960.

Hanlon, F. M., Weisend, M. P., Yeo, R. A., Huang, M., Lee, R. R., Thoma, R. J., et al. (2005). A

specific test of hippocampal deficit in schizophrenia. Behavioral neuroscience, 119(4),

863-875.

Hannula, D. E., Althoff, R. R., Warren, D. E., Riggs, L., Cohen, N. J., & Ryan, J. D. (2010).

Worth a glance: using eye movements to investigate the cognitive neuroscience of

memory. Frontiers in human neuroscience, 4, 166.

Hannula, D. E., Federmeier, K. D., & Cohen, N. J. (2006). Event-related potential signatures of

relational memory. Journal of Cognitive Neuroscience, 18(11), 1863-1876.

Hannula, D. E., Ryan, J. D., Tranel, D., & Cohen, N. J. (2007). Rapid onset relational memory

effects are evident in eye movement behavior, but not in hippocampal amnesia. Journal

of Cognitive Neuroscience, 19(10), 1690-1705.

Hari, R., Levanen, S., & Raij, T. (2000). Timing of human cortical functions during cognition:

role of MEG. Trends in cognitive sciences, 4(12), 455-462.

Harmon-Jones, E., & Gable, P. A. (2009). Neural activity underlying the effect of approach-

motivated positive affect on narrowed attention. Psychological science, 20(4), 406-409.

Harris, C. R., & Pashler, H. (2004). Attention and the processing of emotional words and names:

not so special after all. Psychological science, 15(3), 171-178.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face

recognition and social communication. Biological psychiatry, 51(1), 59-67.

Henderson, J. M., Williams, C. C., Castelhano, M. S., & Falk, R. J. (2003). Eye movements and

picture processing during recognition. Perception & psychophysics, 65(5), 725-734.

Henke, K., Weber, B., Kneifel, S., Wieser, H. G., & Buck, A. (1999). Human hippocampus

associates information in memory. Proceedings of the National Academy of Sciences of

the United States of America, 96(10), 5884-5889.

Henson, R. (2005). A mini-review of fMRI studies of human medial temporal lobe activity

associated with recognition memory. The Quarterly Journal of Experimental Psychology B,

58(3-4), 340-360.

Herdman, A. T., Fujioka, T., Chau, W., Ross, B., Pantev, C., & Picton, T. W. (2004). Cortical

oscillations modulated by congruent and incongruent audiovisual stimuli. Neurology &

clinical neurophysiology: NCN, 2004, 15.

Herdman, A. T., Pang, E. W., Ressel, V., Gaetz, W., & Cheyne, D. (2007). Task-related

modulation of early cortical responses during language production: an event-related

synthetic aperture magnetometry study. Cerebral cortex, 17(11), 2536-2543.

Herdman, A. T., & Ryan, J. D. (2007). Spatio-temporal brain dynamics underlying saccade

execution, suppression, and error-related feedback. Journal of Cognitive Neuroscience,

19(3), 420-432.

Herdman, A. T., Wollbrink, A., Chau, W., Ishii, R., Ross, B., & Pantev, C. (2003).

Determination of activation areas in the human auditory cortex by means of synthetic

aperture magnetometry. NeuroImage, 20(2), 995-1005.

Heuer, F., & Reisberg, D. (1990). Vivid memories of emotional events: the accuracy of

remembered minutiae. Memory & cognition, 18(5), 496-506.

Hillebrand, A., & Barnes, G. R. (2002). A quantitative assessment of the sensitivity of whole-

head MEG to activity in the adult human cortex. NeuroImage, 16(3 Pt 1), 638-650.

Hirata, M., Kato, A., Taniguchi, M., Ninomiya, H., Cheyne, D., Robinson, S. E., et al. (2002).

Frequency-dependent spatial distribution of human somatosensory evoked neuromagnetic

fields. Neuroscience letters, 318(2), 73-76.

Hirata, M., Koreeda, S., Sakihara, K., Kato, A., Yoshimine, T., & Yorifuji, S. (2007). Effects of

the emotional connotations in words on the frontal areas--a spatially filtered MEG study.

NeuroImage, 35(1), 420-429.

Hirvenkari, L., Jousmaki, V., Lamminmaki, S., Saarinen, V. M., Sams, M. E., & Hari, R. (2010).

Gaze-Direction-Based MEG Averaging During Audiovisual Speech Perception.

Frontiers in human neuroscience, 4, 17.

Hoffman, J. E. (1998). Visual attention and eye movements. In H. Pashler (Ed.), Attention (pp.

119–153). Hove: Psychology Press.

Hollingworth, A., & Henderson, J. M. (2002). Accurate visual memory for previously attended

objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 28(1), 113-136.

Hollingworth, A., Williams, C. C., & Henderson, J. M. (2001). To see and remember: Visually

specific information is retained in memory from previously attended objects in natural

scenes. Psychonomic Bulletin & Review, 8(4), 761-768.

Huang, M. X., Shih, J. J., Lee, R. R., Harrington, D. L., Thoma, R. J., Weisend, M. P., et al.

(2004). Commonalities and differences among vectorized beamformers in

electromagnetic source imaging. Brain topography, 16(3), 139-158.

Hulse, L. M., Allan, K., Memon, A., & Read, J. D. (2007). Emotional arousal and memory: a test

of the poststimulus processing hypothesis. The American journal of psychology, 120(1),

73-90.

Hung, Y., Smith, M. L., Bayle, D. J., Mills, T., Cheyne, D., & Taylor, M. J. (2010). Unattended

emotional faces elicit early lateralized amygdala-frontal and fusiform activations.

NeuroImage, 50(2), 727-733.

Huxter, J., Burgess, N., & O'Keefe, J. (2003). Independent rate and temporal coding in

hippocampal pyramidal cells. Nature, 425(6960), 828-832.

Ioannides, A. A., Liu, M. J., Liu, L. C., Bamidis, P. D., Hellstrand, E., & Stephan, K. M. (1995).

Magnetic field tomography of cortical and deep processes: examples of "real-time

mapping" of averaged and single trial MEG signals. International journal of

psychophysiology : official journal of the International Organization of

Psychophysiology, 20(3), 161-175.

Irish, M., Lawlor, B. A., O'Mara, S. M., & Coen, R. F. (2010). Exploring the recollective

experience during autobiographical memory retrieval in amnestic mild cognitive

impairment. Journal of the International Neuropsychological Society, 16(3), 546-555.

Itier, R. J., & Batty, M. (2009). Neural bases of eye and gaze processing: the core of social

cognition. Neuroscience and biobehavioral reviews, 33(6), 843-863.

Itier, R. J., Herdman, A. T., George, N., Cheyne, D., & Taylor, M. J. (2006). Inversion and

contrast-reversal effects on face processing assessed by MEG. Brain research, 1115(1),

108-120.

James, W. (1983). The principles of psychology. Cambridge: Harvard University Press.

Jerbi, K., Baillet, S., Mosher, J. C., Nolte, G., Garnero, L., & Leahy, R. M. (2004). Localization

of realistic cortical activity in MEG using current multipoles. NeuroImage, 22(2), 779-

793.

Jurica, P. J., & Shimamura, A. P. (1999). Monitoring item and source information: evidence for a

negative generation effect in source memory. Memory & cognition, 27(4), 648-656.

Kapur, N., Friston, K. J., Young, A., Frith, C. D., & Frackowiak, R. S. (1995). Activation of

human hippocampal formation during memory for faces: a PET study. Cortex, 31(1), 99-108.

Kelley, W. M., Miezin, F. M., McDermott, K. B., Buckner, R. L., Raichle, M. E., Cohen, N. J., et

al. (1998). Hemispheric specialization in human dorsal frontal cortex and medial

temporal lobe for verbal and nonverbal memory encoding. Neuron, 20(5), 927-936.

Kensinger, E. A., Clarke, R. J., & Corkin, S. (2003). What neural correlates underlie successful

encoding and retrieval? A functional magnetic resonance imaging study using a divided

attention paradigm. The Journal of Neuroscience, 23(6), 2407-2415.

Kensinger, E. A., & Corkin, S. (2003). Memory enhancement for emotional words: are

emotional words more vividly remembered than neutral words? Memory & cognition,

31(8), 1169-1180.

Kensinger, E. A., & Corkin, S. (2004a). The effects of emotional content and aging on false

memories. Cognitive, affective & behavioral neuroscience, 4(1), 1-9.

Kensinger, E. A., & Corkin, S. (2004b). Two routes to emotional memory: distinct neural

processes for valence and arousal. Proceedings of the National Academy of Sciences of

the United States of America, 101(9), 3310-3315.

Kensinger, E. A., Garoff-Eaton, R. J., & Schacter, D. L. (2006). Memory for specific visual

details can be enhanced by negative arousing content. Journal of memory and language,

54(1), 99-112.

Kensinger, E. A., Gutchess, A. H., & Schacter, D. L. (2007). Effects of aging and encoding

instructions on emotion-induced memory trade-offs. Psychology and aging, 22(4), 781-

795.

Kensinger, E. A., Piguet, O., Krendl, A. C., & Corkin, S. (2005). Memory for contextual details:

effects of emotion and aging. Psychology and aging, 20(2), 241-250.

Kensinger, E. A., & Schacter, D. L. (2006). Amygdala activity is associated with the successful

encoding of item, but not source, information for positive and negative stimuli. The

Journal of Neuroscience, 26(9), 2564-2570.

Kern, R. P., Libkuman, T. M., & Otani, H. (2002). Memory for negatively arousing and neutral

pictorial stimuli using a repeated testing paradigm. Cognition and Emotion, 16(6), 749-767.

Kirchhoff, B. A., Wagner, A. D., Maril, A., & Stern, C. E. (2000). Prefrontal-temporal circuitry

for episodic encoding and subsequent memory. The Journal of Neuroscience, 20(16),

6173-6180.

Kirsch, P., Achenbach, C., Kirsch, M., Heinzmann, M., Schienle, A., & Vaitl, D. (2003).

Cerebellar and hippocampal activation during eyeblink conditioning depends on the

experimental paradigm: a MEG study. Neural Plasticity, 10(4), 291-301.

Kirwan, C. B., & Stark, C. E. (2004). Medial temporal lobe activation during encoding and

retrieval of novel face-name pairs. Hippocampus, 14(7), 919-930.

Kissler, J., Herbert, C., Winkler, I., & Junghofer, M. (2009). Emotion and attention in visual

word processing: an ERP study. Biological Psychology, 80(1), 75-83.

Kobayashi, T., & Kuriki, S. (1999). Principal component elimination method for the

improvement of S/N in evoked neuromagnetic field measurements. IEEE Transactions

on Bio-medical Engineering, 46(8), 951-958.

Koriat, A. (2000). Control processes in remembering. In E. Tulving & F. I. M. Craik (Eds.), The

Oxford Handbook of Memory (pp. 333–346). New York: Oxford University Press.

Koster, E. H., Crombez, G., Van Damme, S., Verschuere, B., & De Houwer, J. (2004). Does

imminent threat capture and hold attention? Emotion, 4(3), 312-317.

Kramer, T. H., Buckhout, R., & Eugenio, P. (1990). Weapon focus, arousal, and eyewitness

memory: Attention must be paid. Law and human behavior, 14(2), 167-184.

LaBar, K. S., & Cabeza, R. (2006). Cognitive neuroscience of emotional memory. Nature

reviews. Neuroscience, 7(1), 54-64.

Lagerlund, T. D., Sharbrough, F. W., & Busacker, N. E. (1997). Spatial filtering of multichannel

electroencephalographic recordings through principal component analysis by singular

value decomposition. Journal of Clinical Neurophysiology, 14(1), 73-82.

Laloyaux, C., Devue, C., Doyen, S., David, E., & Cleeremans, A. (2008). Undetected changes in

visible stimuli influence subsequent decisions. Consciousness and cognition, 17(3), 646-656.

Lane, R. D., Reiman, E. M., Ahern, G. L., Schwartz, G. E., & Davidson, R. J. (1997).

Neuroanatomical correlates of happiness, sadness, and disgust. The American journal of

psychiatry, 154(7), 926-933.

Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1999). International Affective Picture System

(IAPS): Instruction Manual and Affective Ratings, Tech. Rep. No. A-4. Gainesville:

University of Florida.

Lavie, N., Hirst, A., de Fockert, J. W., & Viding, E. (2004). Load theory of selective attention

and cognitive control. Journal of Experimental Psychology. General, 133(3), 339-354.

LeDoux, J. E. (1992). Brain mechanisms of emotion and emotional learning. Current Opinion in

Neurobiology, 2(2), 191-197.

LeDoux, J. E. (1996). Emotional networks and motor control: a fearful view. Progress in brain

research, 107, 437-446.

Lee, A. C., Buckley, M. J., Pegman, S. J., Spiers, H., Scahill, V. L., Gaffan, D., et al. (2005).

Specialization in the medial temporal lobe for processing of objects and scenes.

Hippocampus, 15(6), 782-797.

Lepage, M., Habib, R., & Tulving, E. (1998). Hippocampal PET activations of memory encoding

and retrieval: the HIPER model. Hippocampus, 8(4), 313-322.

Lerner, J. S., Gonzalez, R. M., Small, D. A., & Fischhoff, B. (2003). Effects

of fear and anger on perceived risks of terrorism: a national field experiment.

Psychological science, 14(2), 144-150.

Levine, L. J., & Edelstein, R. S. (2009). Emotion and memory narrowing: A review and goal-

relevance approach. Cognition and Emotion, 23(5), 833-875.

Levine, L. J., & Pizarro, D. A. (2004). Emotion and memory research: A grumpy overview.

Social Cognition, 22(5), 530-554.

Lewis, P. A., Critchley, H. D., Rotshtein, P., & Dolan, R. J. (2007). Neural correlates of

processing valence and arousal in affective words. Cerebral cortex, 17(3), 742-748.

Lim, S.-L., Padmala, S., & Pessoa, L. (2009). Segregating the significant from the mundane on a

moment-to-moment basis via direct and indirect amygdala contributions. Proceedings of the

National Academy of Sciences of the United States of America, 106(39), 16841-16846.

Litman, L., & Davachi, L. (2008). Distributed learning enhances relational memory

consolidation. Learning & Memory, 15(9), 711-716.

Lobaugh, N. J., West, R., & McIntosh, A. R. (2001). Spatiotemporal analysis of experimental

differences in event-related potential data with partial least squares. Psychophysiology,

38(3), 517-530.

Loftus, E. F. (1979). Eyewitness reliability. Science, 205(4404), 386-387.

Loftus, E. F., & Christianson, S.-Å. (1989). Malleability of memory for emotional events. In T.

Archer & L.-G. Nilsson (Eds.), Aversion, avoidance, and anxiety: Perspectives on

aversively motivated behavior (pp. 311-322). Hillsdale, NJ: Lawrence Erlbaum Associates.

Loftus, E. F., Loftus, G. R., & Messo, J. (1987). Some facts about "weapon focus." Law and

human behavior, 11(1), 55-62.

Loftus, G. R. (1972). Eye fixations and recognition memory for pictures. Cognitive psychology,

3(4), 525-551.

Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during

picture viewing. Journal of Experimental Psychology: Human Perception and

Performance, 4(4), 565-572.

Luo, Q., Holroyd, T., Jones, M., Hendler, T., & Blair, J. (2007). Neural dynamics for facial

threat processing as revealed by gamma band synchronization using MEG. NeuroImage,

34(2), 839-847.

Luo, Q., Holroyd, T., Majestic, C., Cheng, X., Schechter, J., & Blair, R. J. (2010). Emotional

automaticity is a matter of timing. The Journal of Neuroscience, 30(17), 5825-5829.

MacKay, D. G., & Ahmetzanov, M. V. (2005). Emotion, memory, and attention in the taboo

Stroop paradigm. Psychological Science, 16(1), 25-32.

MacKay, D. G., Shafto, M., Taylor, J. K., Marian, D. E., Abrams, L., & Dyer, J. R. (2004).

Relations between emotion, memory, and attention: evidence from taboo Stroop, lexical

decision, and immediate memory tasks. Memory & cognition, 32(3), 474-488.

MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual review of

psychology, 58, 593-614.

MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence Limits for the Indirect

Effect: Distribution of the Product and Resampling Methods. Multivariate Behavioral

Research, 39(1), 99-128.

Manns, J. R., Hopkins, R. O., Reed, J. M., Kitchener, E. G., & Squire, L. R. (2003). Recognition

memory and the human hippocampus. Neuron, 37(1), 171-180.

Maratos, E. J., Dolan, R. J., Morris, J. S., Henson, R. N., & Rugg, M. D. (2001). Neural activity

associated with episodic memory for emotional context. Neuropsychologia, 39(9), 910-

920.

Maratos, F. A., Mogg, K., Bradley, B. P., Rippon, G., & Senior, C. (2009). Coarse threat images

reveal theta oscillations in the amygdala: a magnetoencephalography study. Cognitive,

affective & behavioral neuroscience, 9(2), 133-143.

Martin, A., Wiggs, C. L., & Weisberg, J. (1997). Modulation of human medial temporal lobe

activity by form, meaning, and experience. Hippocampus, 7(6), 587-593.

Martin, T., McDaniel, M. A., Guynn, M. J., Houck, J. M., Woodruff, C. C., Bish, J. P., et al.

(2007). Brain regions and their dynamics in prospective memory retrieval: a MEG study.

International Journal of Psychophysiology, 64(3), 247-258.

Mataix-Cols, D., An, S. K., Lawrence, N. S., Caseras, X., Speckens, A., Giampietro, V., et al.

(2008). Individual differences in disgust sensitivity modulate neural responses to

aversive/disgusting stimuli. The European journal of neuroscience, 27(11), 3050-3058.

Mayes, A., Montaldi, D., & Migo, E. (2007). Associative memory and the medial temporal

lobes. Trends in cognitive sciences, 11(3), 126-135.

McGaugh, J. L. (2000). Memory--a century of consolidation. Science, 287(5451), 248-251.

McGaugh, J. L. (2002). Memory consolidation and the amygdala: a systems perspective. Trends

in neurosciences, 25(9), 456.

McGaugh, J. L. (2004). The amygdala modulates the consolidation of memories of emotionally

arousing experiences. Annual review of neuroscience, 27, 1-28.

McIntosh, A. R., Bookstein, F. L., Haxby, J. V., & Grady, C. L. (1996). Spatial pattern analysis

of functional brain images using partial least squares. NeuroImage, 3(3 Pt 1), 143-157.

McIntosh, A. R., & Lobaugh, N. J. (2004). Partial least squares analysis of neuroimaging data:

applications and advances. NeuroImage, 23 Suppl 1, S250-263.

Medford, N., Phillips, M. L., Brierley, B., Brammer, M., Bullmore, E. T., & David, A. S. (2005).

Emotional memory: separating content and context. Psychiatry research, 138(3), 247-

258.

Mikuni, N., Nagamine, T., Ikeda, A., Terada, K., Taki, W., Kimura, J., et al. (1997).

Simultaneous recording of epileptiform discharges by MEG and subdural electrodes in

temporal lobe epilepsy. NeuroImage, 5(4 Pt 1), 298-306.

Miller, G. A., Elbert, T., Sutton, B. P., & Heller, W. (2007). Innovative clinical assessment

technologies: challenges and opportunities in neuroimaging. Psychological assessment,

19(1), 58-73.

Milner, B., Corkin, S., & Teuber, H. L. (1968). Further analysis of the hippocampal amnesic

syndrome: 14-year follow-up study of H. M. Neuropsychologia, 6(3), 215-234.

Miu, A. C., Heilman, R. M., Opre, A., & Miclea, M. (2005). Emotion-induced retrograde

amnesia and trait anxiety. Journal of Experimental Psychology. Learning, memory, and

cognition, 31(6), 1250-1257.

Mogg, K., & Bradley, B. P. (1999). Orienting of attention to threatening facial expressions

presented under conditions of restricted awareness. Cognition and Emotion, 13(6), 713-

740.

Mogg, K., Millar, N., & Bradley, B. P. (2000). Biases in eye movements to threatening facial

expressions in generalized anxiety disorder and depressive disorder. Journal of abnormal

psychology, 109(4), 695-704.

Montaldi, D., Mayes, A. R., Barnes, A., Pirie, H., Hadley, D. M., Patterson, J., et al. (1998).

Associative encoding of pictures activates the medial temporal lobes. Human Brain

Mapping, 6(2), 85-104.

Moses, S. N., Brown, T. M., Ryan, J. D., & McIntosh, A. R. (2010). Neural system interactions

underlying human transitive inference. Hippocampus, 20(8), 894-901.

Moses, S. N., Cole, C., Driscoll, I., & Ryan, J. D. (2005). Differential contributions of

hippocampus, amygdala and perirhinal cortex to recognition of novel objects, contextual

stimuli and stimulus relationships. Brain Research Bulletin, 67(1-2), 62-76.

Moses, S. N., Houck, J. M., Martin, T., Hanlon, F. M., Ryan, J. D., Thoma, R. J., et al. (2007).

Dynamic neural activity recorded from human amygdala during fear conditioning using

magnetoencephalography. Brain Research Bulletin, 71(5), 452-460.

Moses, S. N., & Ryan, J. D. (2006). A comparison and evaluation of the predictions of relational

and conjunctive accounts of hippocampal function. Hippocampus, 16(1), 43-65.

Moses, S. N., Ryan, J. D., Bardouille, T., Kovacevic, N., Hanlon, F. M., & McIntosh, A. R.

(2009). Semantic information alters neural activation during transverse patterning

performance. NeuroImage, 46(3), 863-873.

Mosher, J. C., Spencer, M. E., Leahy, R. M., & Lewis, P. S. (1993). Error bounds for EEG and

MEG dipole source localization. Electroencephalography and Clinical Neurophysiology,

86(5), 303-321.

Most, S. B., Chun, M. M., Widders, D. M., & Zald, D. H. (2005). Attentional rubbernecking:

Cognitive control and personality in emotion-induced blindness. Psychonomic Bulletin &

Review, 12(4), 654-661.

Murty, V. P., Ritchey, M., Adcock, R. A., & LaBar, K. S. (2010). fMRI studies of successful

emotional memory encoding: A quantitative meta-analysis. Neuropsychologia, 48(12),

3459-3469.

Nishitani, N., Ikeda, A., Nagamine, T., Honda, M., Mikuni, N., Taki, W., et al. (1999). The role

of the hippocampus in auditory processing studied by event-related electric potentials and

magnetic fields in epilepsy patients before and after temporal lobectomy. Brain, 122(Pt 4), 687-707.

Nishitani, N., Nagamine, T., Fujiwara, N., Yazawa, S., & Shibasaki, H. (1998). Cortical-

hippocampal auditory processing identified by magnetoencephalography. Journal of

Cognitive Neuroscience, 10(2), 231-247.

Nummenmaa, L., Hyona, J., & Calvo, M. G. (2006). Eye movement assessment of selective

attentional capture by emotional pictures. Emotion, 6(2), 257-268.

Nyberg, L., Habib, R., McIntosh, A. R., & Tulving, E. (2000). Reactivation of encoding-related

brain activity during memory retrieval. Proceedings of the National Academy of Sciences

of the United States of America, 97(20), 11120-11124.

Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., et al. (2003).

Common prefrontal activations during working memory, episodic memory, and semantic

memory. Neuropsychologia, 41(3), 371-377.

Nyberg, L., McIntosh, A. R., Houle, S., Nilsson, L. G., & Tulving, E. (1996). Activation of

medial temporal structures during episodic memory retrieval. Nature, 380(6576), 715-

717.

O’Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. London: Oxford

University Press.

Ohman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: detecting the snake in the

grass. Journal of Experimental Psychology. General, 130(3), 466-478.

Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: a threat

advantage with schematic stimuli. Journal of Personality and Social Psychology, 80(3),

381-396.

Ohman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: toward an evolved module

of fear and fear learning. Psychological Review, 108(3), 483-522.

Olson, I. R., & Marshuetz, C. (2005). Facial attractiveness is appraised in a glance. Emotion,

5(4), 498-502.

Osipova, D., Takashima, A., Oostenveld, R., Fernandez, G., Maris, E., & Jensen, O. (2006).

Theta and gamma oscillations predict encoding and retrieval of declarative memory. The

Journal of Neuroscience, 26(28), 7523-7531.

Oya, H., Kawasaki, H., Howard, M. A., 3rd, & Adolphs, R. (2002). Electrophysiological

responses in the human amygdala discriminate emotion categories of complex visual

stimuli. The Journal of Neuroscience, 22(21), 9502-9512.

Packard, M. G., & Cahill, L. (2001). Affective modulation of multiple memory systems. Current

Opinion in Neurobiology, 11(6), 752-756.

Packard, M. G., Cahill, L., & McGaugh, J. L. (1994). Amygdala modulation of hippocampal-

dependent and caudate nucleus-dependent memory processes. Proceedings of the

National Academy of Sciences of the United States of America, 91(18), 8477-8481.

Papanicolaou, A. C., Simos, P. G., Castillo, E. M., Breier, J. I., Katz, J. S., & Wright, A. A.

(2002). The hippocampus and memory of verbal and pictorial material. Learning &

Memory, 9(3), 99-104.

Park, J., & Banaji, M. R. (2000). Mood and heuristics: the influence of happy and sad states on

sensitivity and bias in stereotyping. Journal of Personality and Social Psychology, 78(6),

1005-1023.

Parker, R. E. (1978). Picture processing during recognition. Journal of Experimental

Psychology: Human Perception and Performance, 4(2), 284-293.

Payne, J. D., Stickgold, R., Swanberg, K., & Kensinger, E. A. (2008). Sleep preferentially

enhances memory for emotional components of scenes. Psychological Science, 19(8),

781-788.

Peyk, P., Schupp, H. T., Elbert, T., & Junghofer, M. (2008). Emotion processing in the visual

brain: a MEG analysis. Brain Topography, 20(4), 205-215.

Phelps, E. A. (2004). Human emotion and memory: interactions of the amygdala and

hippocampal complex. Current Opinion in Neurobiology, 14(2), 198-202.

Phelps, E. A. (2006). Emotion and cognition: insights from studies of the human amygdala.

Annual Review of Psychology, 57, 27-53.

Phelps, E. A., LaBar, K. S., & Spencer, D. D. (1997). Memory for emotional words following

unilateral temporal lobectomy. Brain and Cognition, 35(1), 85-109.

Phelps, E. A., & Sharot, T. (2008). How (and Why) Emotion Enhances the Subjective Sense of

Recollection. Current Directions in Psychological Science, 17(2), 147-152.

Pickel, K. L. (1998). Unusualness and threat as possible causes of "weapon focus." Memory,

6(3), 277-295.

Pickel, K. L., French, T. A., & Betts, J. M. (2003). A cross-modal weapon focus effect: The

influence of a weapon's presence on memory for auditory information. Memory, 11(3),

277-292.

Pikkarainen, M., Ronkko, S., Savander, V., Insausti, R., & Pitkanen, A. (1999). Projections from

the lateral, basal, and accessory basal nuclei of the amygdala to the hippocampal

formation in rat. The Journal of Comparative Neurology, 403(2), 229-260.

Pitkanen, A., Pikkarainen, M., Nurminen, N., & Ylinen, A. (2000). Reciprocal connections

between the amygdala and the hippocampal formation, perirhinal cortex, and postrhinal

cortex in rat. A review. Annals of the New York Academy of Sciences, 911, 369-391.

Posner, M. I. (1980). Orienting of attention. The Quarterly Journal of Experimental Psychology,

32(1), 3-25.

Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects

in simple mediation models. Behavior Research Methods, Instruments & Computers,

36(4), 717-731.

Price, S. E., Kinsella, G. J., Ong, B., Mullaly, E., Phillips, M., Pangnadasa-Fox, L., et al. (2010).

Learning and memory in amnestic mild cognitive impairment: contribution of working

memory. Journal of the International Neuropsychological Society, 16(2), 342-351.

Prince, S. E., Dennis, N. A., & Cabeza, R. (2009). Encoding and retrieving faces and places:

distinguishing process- and stimulus-specific differences in brain activity.

Neuropsychologia, 47(11), 2282-2289.

Raghavachari, S., Kahana, M. J., Rizzuto, D. S., Caplan, J. B., Kirschen, M. P., Bourgeois, B., et

al. (2001). Gating of human theta oscillations by a working memory task. The Journal of

Neuroscience, 21(9), 3175-3183.

Ranganath, C., Johnson, M. K., & D'Esposito, M. (2003). Prefrontal activity associated with

working memory and episodic long-term memory. Neuropsychologia, 41(3), 378-389.

Ranganath, C., Yonelinas, A. P., Cohen, M. X., Dy, C. J., Tom, S. M., & D'Esposito, M. (2004).

Dissociable correlates of recollection and familiarity within the medial temporal lobes.

Neuropsychologia, 42(1), 2-13.

Reichle, E. D., Pollatsek, A., Fisher, D. L., & Rayner, K. (1998). Toward a model of eye

movement control in reading. Psychological Review, 105(1), 125-157.

Reisberg, D., & Heuer, F. (2004). Remembering emotional events. In D. Reisberg & P. Hertel

(Eds.), Memory and Emotion (pp. 3-41). New York: Oxford University Press.

Richardson, M. P., Strange, B. A., & Dolan, R. J. (2004). Encoding of emotional memories

depends on amygdala and hippocampus and their interactions. Nature Neuroscience,

7(3), 278-285.

Riggs, L., McQuiggan, D. A., Anderson, A. K., & Ryan, J. D. (2010). Eye movement monitoring

reveals differential influences of emotion on memory. Frontiers in Psychology, 1(205), 1-

9.

Riggs, L., McQuiggan, D. A., Farb, N., Anderson, A. K., & Ryan, J. D. (2011). The role of overt

attention in emotion-modulated memory. Emotion, 11(4), 776-785.

Riggs, L., Moses, S. N., Bardouille, T., Herdman, A. T., Ross, B., & Ryan, J. D. (2009). A

complementary analytic approach to examining medial temporal lobe sources using

magnetoencephalography. NeuroImage, 45(2), 627-642.

Ritchey, M., Dolcos, F., & Cabeza, R. (2008). Role of amygdala connectivity in the persistence

of emotional memories over time: an event-related FMRI investigation. Cerebral Cortex,

18(11), 2494-2504.

Rizzuto, D. S., Madsen, J. R., Bromfield, E. B., Schulze-Bonhage, A., Seelig, D.,

Aschenbrenner-Scheibe, R., et al. (2003). Reset of human neocortical oscillations during

a working memory task. Proceedings of the National Academy of Sciences of the United

States of America, 100(13), 7931-7936.

Robinson, S. E. (2004). Localization of event-related activity by SAM(erf). Neurology & clinical

neurophysiology: NCN, 2004, 109.

Robinson, S. E., & Rose, D. F. (1992). Current source estimation by spatially filtered MEG. In

G. L. Romani (Ed.), Biomagnetism: Clinical Aspects (pp. 761–765). Amsterdam:

Excerpta Medica.

Robinson, S. E., & Vrba, J. (1999). Functional neuroimaging by synthetic aperture

magnetometry. In T. Yoshimoto, M. Kotani, S. Kuriki, H. Karibe & N. Nakasato (Eds.),

Recent Advances in Biomagnetism (pp. 302–305). Sendai: Tohoku University Press.

Rombouts, S. A., Barkhof, F., Witter, M. P., Machielsen, W. C., & Scheltens, P. (2001). Anterior

medial temporal lobe activation during attempted retrieval of encoded visuospatial

scenes: an event-related fMRI study. NeuroImage, 14(1 Pt 1), 67-76.

Rowe, G., Hirsh, J. B., & Anderson, A. K. (2007). Positive affect increases the breadth of

attentional selection. Proceedings of the National Academy of Sciences of the United

States of America, 104(1), 383-388.

Rugg, M. D. (1995a). ERP studies of memory. In M. D. Rugg (Ed.), Electrophysiology of Mind:

Event-Related Brain Potential and Cognition (pp. 133–170). Oxford: Oxford University

Press.

Rugg, M. D. (1995b). Memory and consciousness: a selective review of issues and data.

Neuropsychologia, 33(9), 1131-1141.

Rugg, M. D., Schloerscheidt, A. M., Doyle, M. C., Cox, C. J., & Patching, G. R. (1996). Event-

related potentials and the recollection of associative information. Brain research. Cognitive

brain research, 4(4), 297-304.

Ryan, J. D., Althoff, R. R., Whitlow, S., & Cohen, N. J. (2000). Amnesia is a deficit in relational

memory. Psychological Science, 11(6), 454-461.

Ryan, J. D., & Cohen, N. J. (2004). The nature of change detection and online representations of

scenes. Journal of Experimental Psychology. Human Perception and Performance, 30(5),

988-1015.

Ryan, J. D., Hannula, D. E., & Cohen, N. J. (2007). The obligatory effects of memory on eye

movements. Memory, 15(5), 508-525.

Ryan, J. D., Leung, G., Turk-Browne, N. B., & Hasher, L. (2007). Assessment of age-related

changes in inhibition and binding using eye movement monitoring. Psychology and

Aging, 22(2), 239-250.

Ryan, J. D., Moses, S. N., Ostreicher, M. L., Bardouille, T., Herdman, A. T., Riggs, L., et al.

(2008). Seeing sounds and hearing sights: the influence of prior learning on current

perception. Journal of Cognitive Neuroscience, 20(6), 1030-1042.

Salvadore, G., Cornwell, B. R., Colon-Rosario, V., Coppola, R., Grillon, C., Zarate, C. A., Jr., et

al. (2009). Increased anterior cingulate cortical activity in response to fearful faces: a

neurophysiological biomarker that predicts rapid antidepressant response to ketamine.

Biological Psychiatry, 65(4), 289-295.

Schacter, D. L. (1987). Implicit expressions of memory in organic amnesia: learning of new facts

and associations. Human Neurobiology, 6(2), 107-118.

Schacter, D. L., & Buckner, R. L. (1998). On the relations among priming, conscious

recollection, and intentional retrieval: evidence from neuroimaging research.

Neurobiology of Learning and Memory, 70(1-2), 284-303.

Schacter, D. L., Koutstaal, W., & Norman, K. A. (1997). False memories and aging. Trends in

Cognitive Sciences, 1(6), 229-236.

Schacter, D. L., Reiman, E., Uecker, A., Polster, M. R., Yun, L. S., & Cooper, L. A. (1995).

Brain regions associated with retrieval of structurally coherent visual information.

Nature, 376(6541), 587-590.

Schafer, A., Leutgeb, V., Reishofer, G., Ebner, F., & Schienle, A. (2009). Propensity and

sensitivity measures of fear and disgust are differentially related to emotion-specific brain

activation. Neuroscience Letters, 465(3), 262-266.

Schmalzl, L., Palermo, R., Green, M., Brunsdon, R., & Coltheart, M. (2008). Training of familiar

face recognition and visual scan paths for faces in a child with congenital prosopagnosia.

Cognitive Neuropsychology, 25(5), 704-729.

Schmidt, S. R. (1991). Can We Have a Distinctive Theory of Memory? Memory & Cognition,

19(6), 523-542.

Schmitz, T. W., Cheng, F. H., & De Rosa, E. (2010). Failing to ignore: paradoxical neural effects

of perceptual load on early attentional selection in normal aging. The Journal of

Neuroscience, 30(44), 14750-14758.

Schmitz, T. W., De Rosa, E., & Anderson, A. K. (2009). Opposing influences of affective state

valence on visual cortical encoding. The Journal of Neuroscience, 29(22), 7199-7207.

Schulz, M., Chau, W., Graham, S. J., McIntosh, A. R., Ross, B., Ishii, R., et al. (2004). An

integrative MEG-fMRI study of the primary somatosensory cortex using cross-modal

correspondence analysis. NeuroImage, 22(1), 120-133.

Schweinberger, S. R., Pickering, E. C., Burton, A. M., & Kaufmann, J. M. (2002). Human brain

potential correlates of repetition priming in face and name recognition.

Neuropsychologia, 40(12), 2057-2073.

Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions.

Journal of Neurology, Neurosurgery & Psychiatry, 20, 11-21.

Scoville, W. B., & Milner, B. (2000). Loss of recent memory after bilateral hippocampal lesions.

1957. The Journal of Neuropsychiatry and Clinical Neurosciences, 12(1), 103-113.

Sederberg, P. B., Kahana, M. J., Howard, M. W., Donner, E. J., & Madsen, J. R. (2003). Theta

and gamma oscillations during encoding predict subsequent recall. The Journal of

Neuroscience, 23(34), 10809-10814.

Seidenbecher, T., Laxmi, T. R., Stork, O., & Pape, H. C. (2003). Amygdalar and hippocampal

theta rhythm synchronization during fear memory retrieval. Science, 301(5634), 846-850.

Singh-Curry, V., & Husain, M. (2009). The functional role of the inferior parietal lobe in the

dorsal and ventral stream dichotomy. Neuropsychologia, 47(6), 1434-1448.

Smith, A. P., Henson, R. N., Dolan, R. J., & Rugg, M. D. (2004). fMRI correlates of the episodic

retrieval of emotional contexts. NeuroImage, 22(2), 868-878.

Smith, M. E., & Halgren, E. (1989). Dissociation of recognition memory components following

temporal lobe lesions. Journal of Experimental Psychology. Learning, Memory, and

Cognition, 15(1), 50-60.

Smith, M. L., Cottrell, G. W., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding

facial expressions. Psychological Science, 16(3), 184-189.

Spezio, M. L., Huang, P. Y., Castelli, F., & Adolphs, R. (2007). Amygdala damage impairs eye

contact during conversations with real people. The Journal of Neuroscience, 27(15),

3994-3997.

Squire, L. R. (1992). Memory and the hippocampus: a synthesis from findings with rats,

monkeys, and humans. Psychological Review, 99(2), 195-231.

Squire, L. R. (2004). Memory systems of the brain: a brief history and current perspective.

Neurobiology of Learning and Memory, 82(3), 171-177.

Squire, L. R. (2009). The legacy of patient H.M. for neuroscience. Neuron, 61(1), 6-9.

Squire, L. R., Stark, C. E., & Clark, R. E. (2004). The medial temporal lobe. Annual Review of

Neuroscience, 27, 279-306.

Squire, L. R., & Zola-Morgan, S. (1991). The medial temporal lobe memory system. Science,

253(5026), 1380-1386.

Squire, L. R., & Zola, S. M. (1997). Amnesia, memory and brain systems. Philosophical

Transactions of the Royal Society of London. Series B, Biological Sciences, 352(1362),

1663-1673.

St Jacques, P., Dolcos, F., & Cabeza, R. (2010). Effects of aging on functional connectivity of

the amygdala during negative evaluation: a network analysis of fMRI data. Neurobiology

of Aging, 31(2), 315-327.

Staresina, B. P., Bauer, H., Deecke, L., & Walla, P. (2005). Neurocognitive correlates of

incidental verbal memory encoding: a magnetoencephalographic (MEG) study.

NeuroImage, 25(2), 430-443.

Stark, C. E., & Okado, Y. (2003). Making memories without trying: medial temporal lobe

activity associated with incidental memory formation during recognition. The Journal of

Neuroscience, 23(17), 6748-6753.

Steblay, N. M. (1992). A meta-analytic review of the weapon focus effect. Law and Human

Behavior, 16(4), 413-424.

Stephen, J. M., Ranken, D. M., Aine, C. J., Weisend, M. P., & Shih, J. J. (2005). Differentiability

of simulated MEG hippocampal, medial temporal and neocortical temporal epileptic

spike activity. Journal of Clinical Neurophysiology, 22(6), 388-401.

Stern, C. E., Corkin, S., Gonzalez, R. G., Guimaraes, A. R., Baker, J. R., Jennings, P. J., et al.

(1996). The hippocampal formation participates in novel picture encoding: evidence from

functional magnetic resonance imaging. Proceedings of the National Academy of

Sciences of the United States of America, 93(16), 8660-8665.

Stormark, K. M., Nordby, H., & Hugdahl, K. (1995). Attentional shifts to emotionally charged

cues: Behavioural and ERP data. Cognition and Emotion, 9(5), 507-523.

Streit, M., Ioannides, A. A., Liu, L., Wolwer, W., Dammers, J., Gross, J., et al. (1999).

Neurophysiological correlates of the recognition of facial expressions of emotion as

revealed by magnetoencephalography. Brain research. Cognitive brain research, 7(4), 481-491.

Susskind, J. M., Lee, D. H., Cusi, A., Feiman, R., Grabski, W., & Anderson, A. K. (2008).

Expressing fear enhances sensory acquisition. Nature Neuroscience, 11(7), 843-850.

Takahashi, M., Itsukushima, Y., & Okabe, Y. (2006). Effects of test sequence on anterograde

and retrograde impairment of negative emotional scenes. Japanese Psychological

Research, 48(2), 102-108.

Talarico, J. M., Berntsen, D., & Rubin, D. C. (2009). Positive Emotions Enhance Recall of

Peripheral Details. Cognition & Emotion, 23(2), 380-398.

Talarico, J. M., & Rubin, D. C. (2003). Confidence, not consistency, characterizes flashbulb memories. Psychological Science, 14(5), 455-461.

Talmi, D., Anderson, A. K., Riggs, L., Caplan, J. B., & Moscovitch, M. (2008). Immediate memory consequences of the effect of emotion on attention to pictures. Learning & Memory, 15(3), 172-182.

Talmi, D., Schimmack, U., Paterson, T., & Moscovitch, M. (2007). The role of attention and relatedness in emotionally enhanced memory. Emotion, 7(1), 89-102.

Tendolkar, I., Rugg, M., Fell, J., Vogt, H., Scholz, M., Hinrichs, H., et al. (2000). A magnetoencephalographic study of brain activity related to recognition memory in healthy young human subjects. Neuroscience Letters, 280(1), 69-72.

Tesche, C. D. (1996). Non-invasive imaging of neuronal population dynamics in human thalamus. Brain Research, 729(2), 253-258.

Tesche, C. D. (1997). Non-invasive detection of ongoing neuronal population activity in normal human hippocampus. Brain Research, 749(1), 53-60.

Tesche, C. D., & Karhu, J. (1999). Interactive processing of sensory input and motor output in the human hippocampus. Journal of Cognitive Neuroscience, 11(4), 424-436.

Tesche, C. D., & Karhu, J. (2000). Theta oscillations index human hippocampal activation during a working memory task. Proceedings of the National Academy of Sciences of the United States of America, 97(2), 919-924.

Tesche, C. D., Karhu, J., & Tissari, S. O. (1996). Non-invasive detection of neuronal population activity in human hippocampus. Cognitive Brain Research, 4(1), 39-47.

Thornton, I. M., & Fernandez-Duque, D. (2000). An implicit measure of undetected change. Spatial Vision, 14(1), 21-44.

Thornton, I. M., & Fernandez-Duque, D. (2002). Converging evidence for the detection of change without awareness. In J. Hyona, D. P. Munoz, W. Heide, & R. Radach (Eds.), Progress in Brain Research (Vol. 140, pp. 99-118). Amsterdam: Elsevier Science.

Tipples, J., Atkinson, A. P., & Young, A. W. (2002). The eyebrow frown: a salient social signal. Emotion, 2(3), 288-296.

Todorov, A., Gobbini, M. I., Evans, K. K., & Haxby, J. V. (2007). Spontaneous retrieval of affective person knowledge in face perception. Neuropsychologia, 45(1), 163-173.

Todorov, A., & Olson, I. R. (2008). Robust learning of affective trait associations with faces when the hippocampus is damaged, but not when the amygdala and temporal pole are damaged. Social Cognitive and Affective Neuroscience, 3(3), 195-203.

Todorov, A., & Uleman, J. S. (2002). Spontaneous trait inferences are bound to actors' faces: evidence from a false recognition paradigm. Journal of Personality and Social Psychology, 83(5), 1051-1065.

Todorov, A., & Uleman, J. S. (2003). The efficiency of binding spontaneous trait inferences to actors' faces. Journal of Experimental Social Psychology, 39(6), 549-562.

Todorov, A., & Uleman, J. S. (2004). The person reference process in spontaneous trait inferences. Journal of Personality and Social Psychology, 87(4), 482-493.

Tsivilis, D., Otten, L. J., & Rugg, M. D. (2001). Context effects on the neural correlates of recognition memory: an electrophysiological study. Neuron, 31(3), 497-505.

Tulving, E. (1987). Multiple memory systems and consciousness. Human Neurobiology, 6(2), 67-80.

Tulving, E., Markowitsch, H. J., Craik, F. E., Habib, R., & Houle, S. (1996). Novelty and familiarity activations in PET studies of memory encoding and retrieval. Cerebral Cortex, 6(1), 71-79.

Uutela, K., Hamalainen, M., & Somersalo, E. (1999). Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage, 10(2), 173-180.

Vaidya, C. J., Zhao, M., Desmond, J. E., & Gabrieli, J. D. (2002). Evidence for cortical encoding specificity in episodic memory: memory-induced re-activation of picture processing areas. Neuropsychologia, 40(12), 2136-2143.

Van Veen, B. D., van Drongelen, W., Yuchtman, M., & Suzuki, A. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Bio-medical Engineering, 44(9), 867-880.

Vuilleumier, P., & Pourtois, G. (2007). Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia, 45(1), 174-194.

Walla, P., Hufnagl, B., Lindinger, G., Imhof, H., Deecke, L., & Lang, W. (2001). Left temporal and temporoparietal brain activity depends on depth of word encoding: a magnetoencephalographic study in healthy young subjects. NeuroImage, 13(3), 402-409.

Wan, H., Aggleton, J. P., & Brown, M. W. (1999). Different contributions of the hippocampus and perirhinal cortex to recognition memory. The Journal of Neuroscience, 19(3), 1142-1148.

Warrington, E. K., & Weiskrantz, L. (1968). New method of testing long-term retention with special reference to amnesic patients. Nature, 217(5132), 972-974.

Warrington, E. K., & Weiskrantz, L. (1982). Amnesia: a disconnection syndrome? Neuropsychologia, 20(3), 233-248.

Weis, S., Klaver, P., Reul, J., Elger, C. E., & Fernandez, G. (2004). Temporal and cerebellar brain regions that support both declarative memory formation and retrieval. Cerebral Cortex, 14(3), 256-267.

Weiskrantz, L., & Warrington, E. K. (1979). Conditioning in amnesic patients. Neuropsychologia, 17(2), 187-194.

Wessel, I., & Merckelbach, H. (1997). The impact of anxiety on memory for details in spider phobics. Applied Cognitive Psychology, 11(3), 223-231.

Wessel, I., van der Kooy, P., & Merckelbach, H. (2000). Differential recall of central and peripheral details of emotional slides is not a stable phenomenon. Memory, 8(2), 95-109.

Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. The Journal of Neuroscience, 18(1), 411-418.

Wheeler, M. E., Petersen, S. E., & Buckner, R. L. (2000). Memory's echo: vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(20), 11125-11129.

Wiebe, S. P., & Staubli, U. V. (2001). Recognition memory correlates of hippocampal theta cells. The Journal of Neuroscience, 21(11), 3955-3967.

Williams, J. M., Mathews, A., & MacLeod, C. (1996). The emotional Stroop task and psychopathology. Psychological Bulletin, 120(1), 3-24.

Willis, J., & Todorov, A. (2006). First impressions: making up your mind after a 100-ms exposure to a face. Psychological Science, 17(7), 592-598.

Winston, J. S., Strange, B. A., O'Doherty, J., & Dolan, R. J. (2002). Automatic and intentional brain responses during evaluation of trustworthiness of faces. Nature Neuroscience, 5(3), 277-283.

Wong, B., Cronin-Golomb, A., & Neargarder, S. (2005). Patterns of visual scanning as predictors of emotion identification in normal aging. Neuropsychology, 19(6), 739-749.

Yarbus, A. L. (1967). Eye Movements and Vision. New York: Plenum Press.

Yeckel, M. F., & Berger, T. W. (1990). Feedforward excitation of the hippocampus by afferents from the entorhinal cortex: redefinition of the role of the trisynaptic pathway. Proceedings of the National Academy of Sciences of the United States of America, 87(15), 5832-5836.

Yiend, J., & Mathews, A. (2001). Anxiety and attention to threatening pictures. The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology, 54(3), 665-681.

Yonelinas, A. P., Hopfinger, J. B., Buonocore, M. H., Kroll, N. E., & Baynes, K. (2001). Hippocampal, parahippocampal and occipital-temporal contributions to associative and item recognition memory: an fMRI study. Neuroreport, 12(2), 359-363.

Yonelinas, A. P., Otten, L. J., Shaw, K. N., & Rugg, M. D. (2005). Separating the brain regions involved in recollection and familiarity in recognition memory. The Journal of Neuroscience, 25(11), 3002-3008.

Zald, D. H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Research Reviews, 41(1), 88-123.

Zola-Morgan, S., Squire, L. R., & Amaral, D. G. (1986). Human amnesia and the medial temporal region: enduring memory impairment following a bilateral lesion limited to field CA1 of the hippocampus. The Journal of Neuroscience, 6(10), 2950-2967.