
ORIGINAL ARTICLE

Repeated short presentations of morphed facial expressions change recognition and evaluation of facial expressions

Jun Moriya · Yoshihiko Tanno · Yoshinori Sugiura

Received: 4 July 2012 / Accepted: 8 November 2012

© Springer-Verlag Berlin Heidelberg 2012

Abstract  This study investigated whether sensitivity to and evaluation of facial expressions varied with repeated exposure to non-prototypical facial expressions for a short presentation time. A morphed facial expression was presented for 500 ms repeatedly, and participants were required to indicate whether each facial expression was happy or angry. We manipulated the distribution of presentations of the morphed facial expressions for each facial stimulus. Some of the individuals depicted in the facial stimuli expressed anger frequently (i.e., anger-prone individuals), while the others expressed happiness frequently (i.e., happiness-prone individuals). After being exposed to the faces of anger-prone individuals, the participants became less sensitive to those individuals' angry faces. Further, after being exposed to the faces of happiness-prone individuals, the participants became less sensitive to those individuals' happy faces. We also found a relative increase in the social desirability of happiness-prone individuals after exposure to the facial stimuli.

Introduction

In the process of identifying people from their faces, we not only identify their gender or race, but we also form impressions of them, such as whether or not they are trustworthy or aggressive. Our impressions of people are partially derived from their facial expressions (Adolphs, 2002; Frith & Frith, 2007; Gallese, Keysers, & Rizzolatti, 2004). For example, perceived anger in faces has been associated with evaluations of untrustworthiness, while judgments of happiness have been associated with trustworthiness (Oosterhof & Todorov, 2009; Todorov & Duchaine, 2008; Winston, Strange, O'Doherty, & Dolan, 2002). Therefore, it is necessary to accurately identify who presents a happy or angry facial expression. Accurate processing of facial identity and expression is fundamental to normal socialization and social interaction.

Bruce and Young (1986) proposed that facial identity and expression are processed independently. The physical aspects of facial structure are first encoded during a structural encoding stage. Following the structural encoding stage, the facial identity route bifurcates from the facial expression route. Some neuropsychological studies have supported this model, which explains why some individuals (e.g., patients suffering from prosopagnosia) are able to interpret facial expressions correctly but cannot recognize facial identity (Bruyer et al., 1983; Tranel, Damasio, & Damasio, 1988).

However, recent image-based analysis and behavioral and functional imaging experiments have shown that facial identity and expression are not necessarily processed by independent perceptual systems; in fact, these systems may be interdependent (Calder & Young, 2005; Haxby, Hoffman, & Gobbini, 2000). Results from the adaptation paradigm show interactive processing between facial identity and expression. The adaptation paradigm has revealed that prolonged exposure to a face can modify one's subsequent perception of related facial identity, expression, gender, and race (Ellamil, Susskind, & Anderson, 2008; Hsu & Young, 2004; Leopold, O'Toole, Vetter, & Blanz, 2001; Rhodes & Jeffery, 2006; Rutherford, Chattha, & Krysko, 2008; Webster, Kaping, Mizokami, & Duhamel, 2004). For example, when an adapting stimulus of an angry face was presented for 45 s, followed by a neutral face of the same individual for 1 s, the neutral face was more frequently judged as happy (Rutherford et al., 2008). In other words, people become less sensitive to the angry expressions of an individual after attending to an extremely angry expression made by that individual. These adaptation effects diminished significantly when the adapting and following faces differed in identity (Bestelmeyer, Jones, DeBruine, Little, & Welling, 2010; Campbell & Burke, 2009; Ellamil et al., 2008; Fox & Barton, 2007). That is, facial-expression adaptation depends on the perceptual features of each identity and influences sensitivity to facial expression within each identity; facial expression and identity are processed interdependently.

Author affiliations: J. Moriya, Japan Society for the Promotion of Science, Tokyo, Japan; J. Moriya and Y. Sugiura, Hiroshima University, Hiroshima, Japan; J. Moriya (corresponding author), Ghent University, Henri Dunantlaan 2, 9000 Gent, Belgium, e-mail: [email protected]; Y. Tanno, The University of Tokyo, Tokyo, Japan.

Psychological Research, DOI 10.1007/s00426-012-0463-7

Although the adaptation paradigm is useful for revealing the interaction between facial identity and expression, it is still unclear how such interaction works in real-life situations. Adaptation effects to facial expressions and identities might occur in everyday life, because the average face is constantly calibrated and fine-tuned by every experience (Rhodes & Jeffery, 2006); this shifted criterion for the average face toward the average of perceived expressions constitutes the adaptation effect. Previous studies proposed that adaptation effects to faces play an important role in daily life (Saxton, Little, DeBruine, Jones, & Roberts, 2009; Webster et al., 2004; Webster & MacLeod, 2011). For example, people might adjust their sensitivity to faces with different ethnic features depending on where they live and on their length of residence in that location (Webster & MacLeod, 2011). Moreover, adjustments in sensitivity derived from adaptation effects persist for familiar faces even after 24 h (Carbon et al., 2007). Although faces may vary depending upon factors such as age, the adaptation effect might adjust face perception flexibly following exposure to new visual information. However, it is still unclear whether adaptation adjusts perception of the facial expressions associated with each identity in everyday life.

A few issues concerning the ecological validity of the interactive effect between facial expression and identity have emerged from previous studies using the adaptation paradigm. First, it seems that the presentation times of facial expressions were too long. It takes several tens or hundreds of seconds to induce adaptation effects (Campbell & Burke, 2009; Ellamil et al., 2008; Fox & Barton, 2007; Hsu & Young, 2004; Rutherford et al., 2008; Webster et al., 2004), but facial expressions do not remain constant in everyday life; they vary from moment to moment. Therefore, we need to investigate the effects of short-duration presentation of facial expressions on adaptation. Some previous studies have shown that a short presentation duration (i.e., less than 1,000 ms) of a facial stimulus is not enough to induce adaptation effects (Hsu & Young, 2004). Considering that repeated presentations of a stimulus contribute to increased familiarity and enhanced adaptation effects (Jiang, Blanz, & O'Toole, 2007), repeated presentation of faces might lead people to adjust their sensitivity to individual faces, even if each face is shown only briefly.

Second, previous studies of facial-expression adaptation used prototypical facial configurations associated with basic emotions, whereas in everyday life, we do not necessarily express prototypical expressions, instead often displaying subtle facial expressions. Few studies have investigated adaptation effects associated with non-prototypical facial expressions. Although even non-prototypical facial expressions cause adjustment if presented constantly for long durations (Cook, Matei, & Johnston, 2011), it is still unknown whether sensitivity toward facial expressions varies dynamically for each identity when non-prototypical, subtle facial expressions are presented for short periods of time.

The purposes of the present study are to investigate whether facial expression and identity are processed interdependently in the adaptation paradigm and whether people adjust their sensitivity to the expressions associated with each identity under ecologically valid conditions, in which a facial stimulus is presented briefly and shows a subtle expression. Previous studies did not show adaptation effects to facial expressions and identities with short-duration presentation of non-prototypical facial expressions. However, considering that the average face is constantly calibrated and fine-tuned by every experience (Rhodes & Jeffery, 2006), repeated presentations of even non-prototypical facial expressions could change viewers' sensitivity to the emotional expression associated with each identity under short exposure times. The present study might reveal whether long-duration presentation of prototypical facial expressions is necessary for within-identity facial-expression adaptation. The present study could also reveal the adaptation-related effects of repeated exposure to facial expressions on interactive processing between identity and expression in everyday life.

In addition, the current study aimed at investigating whether the impressions of each identity vary with adaptation. Although previous studies have shown that adaptation to certain faces (e.g., masculine faces, distorted faces) influences viewers' impressions (Buckingham et al., 2006; Little, DeBruine, & Jones, 2005; Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003), few studies have investigated the effects of facial-expression adaptation on impressions.


Engell, Todorov, and Haxby (2010) investigated the modulation of facial impressions in an expression-adaptation paradigm. They showed that adaptation to angry faces increased viewers' evaluations of the trustworthiness of subsequently rated neutral faces, whereas adaptation to happy faces decreased the trustworthiness evaluations of neutral faces. Those results show that adaptation to facial expressions influences viewers' impressions of them. However, in those experiments, prototypical facial expressions were used as the adapter faces. In addition, those studies did not investigate identity effects in the adaptation task or the effects of simultaneous adaptation to facial expressions and impressions. The current study aimed at investigating whether expression adaptation to non-prototypical facial expressions influences the impressions formed upon exposure to facial stimuli.

In the present experiment, we presented several non-prototypical facial expressions continuously for a relatively short presentation time (i.e., 500 ms) instead of presenting a prototypical angry or happy face for several tens or hundreds of seconds. Some individuals whose facial expressions were presented in the study were anger-prone individuals who frequently displayed angry expressions but did not make prototypically angry faces; others were happiness-prone individuals who frequently displayed happy expressions. We investigated whether a biased distribution of presented facial expressions adaptively influenced participants' sensitivity to facial expressions for each identity. For example, just as sensitivity toward angry faces weakens when one adapts to an angry face, the same weakening may occur when the viewer is frequently exposed to subtly angry expressions. If facial identity and expression are processed independently, this perceptual shift might not occur, as participants were exposed to the same number of angry and happy faces overall. However, since facial-expression adaptation occurs within each identity (Bestelmeyer et al., 2010; Campbell & Burke, 2009; Ellamil et al., 2008; Fox & Barton, 2007), the adaptation effect might be observed within each individual (i.e., anger- and happiness-prone individuals). It was hypothesized that the participants would become less sensitive to the angry faces of anger-prone individuals if repeated, short presentations of angry faces decreased the viewers' sensitivity to anger. Under these circumstances, participants might frequently judge the faces made by anger-prone individuals as happy. In contrast, if the participants become less sensitive to the happy faces of happiness-prone individuals, then they might frequently judge faces made by happiness-prone individuals as angry. We also investigated whether such changes in sensitivity are maintained over time.

With respect to facial evaluation, if sensitivity to facial expression changes on the basis of the identity associated with a given facial image, then individual facial evaluations might vary because of the correlation between judgments of facial expressions and trustworthiness (Oosterhof & Todorov, 2009; Todorov & Duchaine, 2008; Winston et al., 2002). According to Engell et al. (2010), if a neutral face made by an anger-prone individual elicits a judgment of happiness because of adaptation effects, then that anger-prone individual might give observers the impression of being trustworthy when portraying a neutral expression. In contrast, happiness-prone individuals might create impressions of being untrustworthy when expressing neutral faces.

Method

Participants

Seventeen undergraduate students (5 female and 12 male; mean age 20.9 years; range 19–27 years) participated in the experiment. All participants completed informed consent forms before participating, and all had normal or corrected-to-normal vision.

Materials and apparatus

Angry and happy facial expressions of four individuals (two male [PE and WF] and two female [MO and PF]) were obtained from a standard set of pictures of facial expressions (Ekman & Friesen, 1976). The images were morphed using facial image processing software (Information-Technology Promotion Agency, Japan, 1998). Landmarks were placed manually at the critical positions of each prototypical facial expression: head (4 points), outline (28 points), eyes (5 × 2 points), eyebrows (4 × 2 points), nose (4 points), mouth (6 points), neck (13 points), and hairline (15 points). An intermediate expression was then created by linearly interpolating the point-to-point pixel intensity values. The faces were morphed to parametrically vary their emotional expressions; this process generated a sequence of 11 facial expressions for each identity, which ranged from happy to angry in increments of 10 % (e.g., 30 % indicated relative happiness, and 70 % indicated relative anger).
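The interpolation step can be sketched as follows. This is a minimal illustration of linear pixel-intensity blending only, not the actual morphing software used in the study (which also warps the manually placed landmark geometry before blending); the function and array names are hypothetical.

```python
import numpy as np

def morph(happy_img, angry_img, anger_pct):
    """Blend two expression images by linearly interpolating
    pixel intensities; anger_pct is the percentage of anger."""
    w = anger_pct / 100.0
    return (1.0 - w) * happy_img + w * angry_img

# Toy stand-ins for the happy and angry photographs of one identity.
happy = np.zeros((2, 2))
angry = np.ones((2, 2))

# The 11-step continuum from 0 % to 100 % angry in 10 % increments.
continuum = [morph(happy, angry, p) for p in range(0, 101, 10)]
```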

All stimuli were presented using an Epson Endeavor MT7500 computer connected to a 17-inch Sony CPD-E230 monitor. We developed our experiments in the MATLAB environment using the Psychophysics Toolbox extensions (Brainard, 1997; Pelli, 1997).

Procedure

The experiment comprised three tasks: a categorical-decision task (happy vs. angry) with no biased distribution, a categorical-decision task (happy vs. angry) with a biased distribution, and a facial-evaluation task.


For the categorical-decision task with no biased distribution, the participants were asked to discriminate between the facial expressions of morphed faces. Trials began with the presentation of a fixation cross for 500 ms followed by the presentation of the morphed face for 500 ms. The participants were asked to indicate whether the face was happy or angry by pressing keys on the keyboard. The experiment began with 24 randomly selected practice trials. After the practice session, the 11 morphed faces along the aforementioned continuum for each of the 4 individuals were each presented 10 times in a completely randomized order, yielding 440 trials for each participant.

The procedure of the categorical-decision task with a biased distribution was identical to that of the categorical-decision task with no biased distribution, with the following exception: one male and one female individual in the facial stimuli group (e.g., PE and MO) were prone to anger (i.e., anger-prone faces). Their faces were morphed using angry-to-happy ratios of 90:10, 80:20, 70:30, 60:40, 50:50, 40:60, or 30:70. In other words, their morphed faces necessarily included 30 % angry expression or more, and their average morphed expression was 60 % angry. In contrast, the other male and female individuals (e.g., WF and PF) were prone to happiness (i.e., happiness-prone faces). Their morphed faces were created in a manner similar to the anger-prone faces and necessarily included 30 % happy expression or more; the average morphed expression for these individuals was 60 % happy. The assignment of anger-prone and happiness-prone faces to individuals was randomized and counterbalanced across participants. The experiment began with 24 randomly selected practice trials; after the practice session, the 7 images from the continuum of morphed faces of each of the 4 individuals were each presented 10 times in a completely randomized order, yielding 280 trials for each participant.
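The biased trial list described above can be sketched as follows. This is a hypothetical reconstruction under the stated design (7 morph levels × 10 repetitions × 4 identities = 280 fully randomized trials); the assignment of identities to face types follows the example in the text.

```python
import random

# Morph levels expressed as % angry, per the biased design.
anger_prone_levels = [90, 80, 70, 60, 50, 40, 30]   # always >= 30 % angry
happy_prone_levels = [10, 20, 30, 40, 50, 60, 70]   # always >= 30 % happy

# Example assignment of face types to the four identities.
faces = {"PE": anger_prone_levels, "MO": anger_prone_levels,
         "WF": happy_prone_levels, "PF": happy_prone_levels}

# 7 levels x 10 repetitions x 4 identities = 280 trials, fully randomized.
trials = [(face, level)
          for face, levels in faces.items()
          for level in levels
          for _ in range(10)]
random.shuffle(trials)

# Across the session, anger-prone faces average 60 % angry.
anger_trials = [level for face, level in trials if face in ("PE", "MO")]
mean_anger = sum(anger_trials) / len(anger_trials)
```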

In the facial-evaluation task, the neutral face of each individual (not the morphed face with a 50:50 angry-to-happy ratio) was presented on each page of a questionnaire. The participants were instructed to evaluate each individual using the impression-evaluation questionnaire (Kawanishi, 1993), which comprises 15 adjective pairs associated with facial impressions. The participants rated their impressions of the displayed facial stimuli on 7-point Likert adjective scales. The adjective pairs were divided into three categories: six items concerning social desirability (e.g., trustworthy–untrustworthy, honest–dishonest), five items concerning individual desirability (e.g., likable–dislikable, humorous–unhumorous), and four items concerning aggressiveness (e.g., active–inactive).

Figure 1 shows the task order in the present experiment. First, the participants performed the facial-evaluation task; then, they performed the categorical-decision task with no biased distribution once to investigate each participant's base sensitivity to each face (i.e., baseline). Next, they performed the categorical-decision task with a biased distribution twice to investigate the changeability of their sensitivity to facial expressions. Finally, they repeated the categorical-decision task with no biased distribution; this was done to investigate whether their changed sensitivity persisted through time. After the experimental task, they repeated the facial-evaluation task to investigate the evaluation shift for each identity. The participants were not informed of the differences between the categorical-decision task with no biased distribution and that with a biased distribution.

Fig. 1 The order of the tasks in the experiment: 1st facial-evaluation task; 1st categorical-decision task with no biased distribution; 1st and 2nd categorical-decision tasks with a biased distribution; 2nd categorical-decision task with no biased distribution; 2nd facial-evaluation task

Analysis

In the categorical-decision task, the percentages of "angry responses" were computed at each level of morphed faces for each individual, and a psychometric function (cumulative normal distribution) was fitted to the angry responses for each of the individual morphed faces. Figure 2 shows an example of the psychometric function corresponding to anger-prone faces for a representative participant; the x-axis represents the morph continuum (the percentage of anger vs. happiness in the face), and the y-axis represents the percentage of responses in which the participants judged the morphed face as angry. We interpolated the psychometric function to determine the angry-to-happy ratio of the morphed face most likely to be perceived as angry 50 % of the time (and happy the other 50 % of the time). This point in the psychometric function is called the point of subjective equality (PSE); it represents the categorical boundary of facial judgment between angry and happy. We compared the average PSEs of anger- and happiness-prone faces in the different tasks. Because the ability to recognize facial expressions differs among people (Blair & Coles, 2000; Marsh & Blair, 2008; Marsh, Kozak, & Ambady, 2007), we defined the baseline as the sensitivity to facial expressions for each face in the first categorical-decision task with no biased distribution. That is, to quantify the shift in the categorical boundaries for anger- and happiness-prone faces, we determined the PSE shift as the difference between the PSEs at baseline and in all other tasks for each individual. Then, we measured the average PSE shifts elicited by anger- and happiness-prone faces in each participant. A positive value implied a rightward shift in the categorical boundary (a bias in judgment of facial expressions toward happy relative to baseline) and signified that the participants were sensitive to happy faces but less sensitive to angry faces. In contrast, a negative shift value implied that the participants were more sensitive to angry faces than to happy faces.
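The fitting step can be sketched as follows. This is a minimal illustration assuming SciPy least-squares fitting of a cumulative normal to hypothetical response proportions; the study does not specify the fitting routine beyond naming the function family, and the numbers here are toy data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Cumulative normal: P("angry" response) at morph level x (% angry).
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical proportions of "angry" responses for one identity.
morph_levels = np.arange(0, 101, 10)
p_angry = psychometric(morph_levels, 52.0, 12.0)  # toy, noiseless data

(mu, sigma), _ = curve_fit(psychometric, morph_levels, p_angry,
                           p0=[50.0, 10.0])
pse = mu  # the 50 % point of the cumulative normal is the PSE

# The PSE shift compares this value against the baseline task's PSE;
# a rightward (positive) shift is a bias toward judging faces as happy.
```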

In the facial-evaluation task, to investigate the changes in the evaluation of anger- and happiness-prone faces, we determined the evaluation shift as the difference between the social desirability, individual desirability, and aggressiveness scores in the second impression-evaluation questionnaire relative to the first for each individual. Then, we measured the average evaluation shifts with respect to anger- and happiness-prone faces in each participant. A positive value implied stronger impressions of social desirability, individual desirability, and aggressiveness in the second evaluation as compared with the first.
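The evaluation-shift computation can be sketched as follows, using hypothetical 7-point Likert ratings for one identity; the category groupings and the subtraction order (second task minus first) follow the description above.

```python
# Hypothetical first- and second-task ratings for one identity,
# grouped by adjective category (7-point Likert items).
first = {"social desirability":     [4, 5, 4, 4, 5, 4],
         "individual desirability": [4, 4, 3, 4, 4],
         "aggressiveness":          [4, 4, 4, 4]}
second = {"social desirability":     [5, 5, 5, 4, 6, 5],
          "individual desirability": [4, 4, 4, 4, 4],
          "aggressiveness":          [4, 3, 4, 4]}

def mean(xs):
    return sum(xs) / len(xs)

# Evaluation shift: second-task category mean minus first-task mean;
# positive values indicate a stronger impression at the second rating.
shifts = {cat: mean(second[cat]) - mean(first[cat]) for cat in first}
```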

Results

The mean PSEs for the anger- and happiness-prone faces in the first categorical-decision task with no biased distribution were 47.1 % (SD = 5.2) and 49.5 % (SD = 7.6), respectively, and they did not differ significantly, t(16) = 1.11, ns. That is, the sensitivities to the different types of facial expressions did not differ at baseline. The mean PSE shifts for anger- and happiness-prone faces are shown in Fig. 3. We used a 2 (frequent expression: anger- and happiness-prone faces) × 3 (categorical-decision task: first categorical-decision task with a biased distribution, second categorical-decision task with a biased distribution, and second categorical-decision task with no biased distribution) ANOVA to analyze the data. This two-way ANOVA revealed significant main effects of frequent expression, F(1, 16) = 20.52, p < 0.001, ηp² = 0.56 (anger-prone faces: 2.7 %, happiness-prone faces: −1.4 %), and categorical-decision task, F(2, 32) = 8.49, p < 0.01, ηp² = 0.35 (first categorical-decision task with a biased distribution: −1.0 %, second categorical-decision task with a biased distribution: 0.3 %, second categorical-decision task with no biased distribution: 2.6 %). The two-way interaction between these factors was not significant, F(2, 32) = 2.47, p = 0.10, ηp² = 0.13. The main effect of frequent expression revealed that the PSE shifts for anger-prone faces were more positive than those for happiness-prone faces: participants were less sensitive to the angry faces of anger-prone individuals than to those of happiness-prone individuals. The main effect of categorical-decision task revealed that the PSE shifts in the second categorical-decision task with no biased distribution were more positive than those in the first and second categorical-decision tasks with biased distributions. To estimate whether the PSE shifts were significantly different from baseline (i.e., 0 %), we used the population mean test for each facial identity and categorical-decision task. With respect to the anger-prone faces, PSE shifts were distinguishable from baseline in the second categorical-decision task with a biased distribution, t(16) = 3.92, p < 0.01, d = 1.34, and in the second categorical-decision task with no biased distribution, t(16) = 4.43, p < 0.001, d = 1.52. With respect to happiness-prone faces, the PSE shifts were distinguishable from baseline in the first categorical-decision task with a biased distribution, t(16) = 2.63, p < 0.05, d = 0.90, and in the second categorical-decision task with a biased distribution, t(16) = 2.83, p < 0.05, d = 0.97. The PSE shifts in the other conditions were not distinguishable from baseline. Because strong, positive PSE shifts for happiness-prone faces persisted until the second categorical-decision task with no biased distribution, the main effect of categorical-decision task might reflect a significantly different PSE shift in that condition compared with the other conditions.

Fig. 2 Example psychometric functions of a representative participant for anger-prone faces in the four tasks (1st and 2nd categorical-decision tasks with and without a biased distribution; x-axis: percentage of angry face; y-axis: percentage of angry responses). The arrow indicates the PSE shift in the 2nd categorical-decision task with no biased distribution

In the facial-evaluation task, the mean scores for each category of anger- and happiness-prone faces did not significantly differ in the first impression-evaluation questionnaire: social desirability, t(16) = 0.55; individual desirability, t(16) = 0.47; and aggressiveness, t(16) = 0.01, all ns. The first impression elicited by each individual did not differ across participants. We compared the evaluation shifts of anger- and happiness-prone faces using two-tailed paired t tests (Fig. 4). The social desirability shift of the happiness-prone faces was higher than that of the anger-prone faces, t(16) = 3.08, p < 0.01, d = 1.05, but the other evaluation shift scores did not differ between the anger- and happiness-prone faces. To estimate whether the subsequent evaluations were significantly different from the first evaluations, we used the population mean test for each facial identity and category of facial evaluation. For the happiness-prone faces, the evaluation shift of social desirability was statistically distinguishable from baseline, t(16) = 2.76, p < 0.05, d = 1.01. The other subsequent evaluations were not distinguishable from baseline.

Discussion

This study investigated facial-expression adaptation by

repeatedly exposing participants to biased facial expres-

sions for a short presentation time; this study also inves-

tigated whether each individual’s evaluations varied with

adaptation. The participants became less sensitive to the

angry faces of anger-prone individuals (who frequently

expressed non-prototypical anger) and to the happy faces

of happiness-prone individuals (who frequently expressed

non-prototypical happiness). While participants frequently

judged the faces made by anger-prone individuals as

happy, they also frequently judged the faces made by

happiness-prone individuals as angry. This change in sen-

sitivity to facial expressions occurred in the same direction

as the observed adaptation. Each individual’s sensitivity to

facial expressions shifted with multiple exposures to non-

prototypical but biased facial expressions. Although pre-

vious studies have shown that short durations and subtle

facial expressions are not sufficient to induce adaptation

effects (Cook et al., 2011; Hsu & Young, 2004), the present

study showed that repeated exposure to even non-proto-

typical facial expressions for a short-duration could

generate adaptation effects. The facial evaluations of hap-

piness-prone individuals also varied from those of anger-

prone individuals: neutral expressions on happiness-prone

faces conveyed a trustworthy impression to observers and

increased the social desirability of the individual being

judged.
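The point of subjective equality (PSE) underlying these sensitivity shifts is conventionally estimated by fitting a psychometric function to the happy/angry judgments. A minimal sketch with SciPy follows; the morph levels and response proportions are hypothetical, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """P("angry") as the morph level x moves from happy (0 %) to angry (100 %)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Hypothetical proportions of "angry" responses at each morph level (% anger);
# illustrative values only.
morph_levels = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], float)
p_angry = np.array([0.02, 0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.96, 0.99])

(pse, slope), _ = curve_fit(logistic, morph_levels, p_angry, p0=[50.0, 0.1])
# pse is the morph level judged "angry" on half the trials; an adaptation
# effect appears as a shift in pse between task blocks.
```

Refitting this function separately per identity and per task block yields the per-identity PSE shifts plotted in Fig. 3.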

Sensitivity to facial expressions varied with adaptation

in each individual. If facial identity and expression were

processed independently, then this perceptual shift might

not occur, because participants were exposed to angry and

Fig. 3 Shifts in point of subjective equality (PSE, in %) for each categorical face (anger-prone vs. happiness-prone), plotted for the first task with bias, the second task with bias, and the second task with no bias. Error bars represent standard errors

Fig. 4 Mean evaluation shifts (social desirability, aggressiveness, and individual desirability) for each categorical face (anger-prone vs. happiness-prone). Error bars represent standard errors


happy faces for the same amounts of time. The present

results, which imply perceptual shifts on the part of each

participant, are consistent with the notion that facial iden-

tity and expression are processed by interdependent per-

ceptual systems (Bestelmeyer et al., 2010; Campbell &

Burke, 2009; Ellamil et al., 2008; Fox & Barton, 2007).

Previous studies have demonstrated adaptation effects immediately after presentation of the adapting stimulus (Ellamil et al., 2008; Fox & Barton, 2007), while detection of a facial

expression during every trial created adaptation effects

within each identity in the present study. That is, the effects

of interdependent processing of facial identity and

expression might persist until the next presentation of the

same individual’s facial expression. A previous study has shown that, with a long presentation of the adapting face, adaptation to familiar faces produces long-term changes whose effects remain even after a delay of 24 h

(Carbon et al., 2007). The present study suggests that, even with short presentations of the facial stimuli, the identity-specific adjustments in sensitivity to facial expressions persisted for a period of time.

The present study has shown adaptation effects, even

when the stimuli were presented only for short durations.

Previous studies did not show adaptation effects to facial

expressions presented only for short durations (Hsu &

Young, 2004). However, repeated presentations of unfamiliar faces increase the viewer’s familiarity with them and enhance adaptation effects (Jiang et al., 2007). By presenting the same identities many times, the present study was able to induce adaptation effects even at a short presentation duration. These effects might be observed

because of the unfamiliarity of the faces. Adaptation

effects are not necessarily the same for familiar and unfamiliar

faces; since representations of familiar faces are more

stable than those of unfamiliar ones, the effects of sensi-

tivity adjustment to familiar faces are weak (Laurence &

Hole, 2011). Therefore, if we had used familiar faces to

create the facial stimuli used for this experiment, repeated

short-duration presentation would be unlikely to induce

adaptation effects.

We used non-prototypical facial expressions in the

present study. Moreover, the differences between the anger-

and happiness-prone facial stimuli were subtle: the average

morphed expression was either 60 % angry or 60 % happy.

Adaptation effects might be weak in the present study due to

the small bias inherent in the stimuli. Even with such a

subtly different distribution of presentations of the morphed

facial expressions, processing of facial expressions was

finely tuned for each individual set of facial stimuli. The

average face is constantly calibrated and fine-tuned by each

experience with that individual (Rhodes & Jeffery, 2006);

the average face might thereby become a neutral facial

expression for each individual. Facial expressions might be

judged as deviations from a neutral facial expression, which

is the new criterion for an average face.
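As a toy illustration of the weighting behind such morphed stimuli (real morphing, like the software used in the study, also warps facial landmarks; this sketch only blends pixel intensities, and the `morph` helper and toy images are hypothetical):

```python
import numpy as np

def morph(happy_img, angry_img, anger_pct):
    # Weighted pixel-wise blend: a 60 % angry morph weights the angry
    # image by 0.6. (Real morphing software also warps facial geometry.)
    w = anger_pct / 100.0
    return (1.0 - w) * happy_img + w * angry_img

happy = np.zeros((2, 2))  # toy stand-in for a happy-face image
angry = np.ones((2, 2))   # toy stand-in for an angry-face image
blend = morph(happy, angry, 60)  # every pixel equals 0.6
```

Averaging an identity's presented morphs under such a weighting is what yields the 60 % angry (anger-prone) or 60 % happy (happiness-prone) mean expression described above.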

The duration of changed sensitivity differed between

anger-prone and happiness-prone faces. Whereas rapid

sensitivity changes were observed for happiness-prone

faces, the changes in sensitivity seen for anger-prone faces

did not vary rapidly: they remained constant after the

completion of the biased presentation. Positive facial

expressions (e.g., happy faces) are recognized more

quickly than are negative facial expressions (Esteves &

Ohman, 1993; Leppanen & Hietanen, 2004; Milders,

Sahraie, & Logan, 2008). Identity recognition is also more

effective for happy faces than angry faces (D’Argembeau,

Van der Linden, Etienne, & Comblain, 2003). While negative expressions (e.g., angry faces) capture attention more

effectively than do positive ones (Ohman, Lundqvist, &

Esteves, 2001), negative expressions are processed

ambiguously, and they make it difficult to process the local

features of faces (Eastwood, Smilek, & Merikle, 2003). In

the present experiment, happiness-prone individuals might

be identified instantly, because they frequently show happy

faces, which are efficiently recognized. As a result, changes

in sensitivity to happiness-prone individuals might be

rapid. In contrast, such changes might emerge later and persist

longer in anger-prone individuals, even in a task with no

biased distribution. One limitation of the present study is

that we used only four identities for presentation of facial

stimuli. Further studies need to employ facial stimuli with

many identities to evaluate how adjustment of sensitivity to

facial expressions affects recognition of different identities

represented by happy and angry faces.

The present research showed that individuals’ adapta-

tion effects influenced their facial evaluations. Although

adaptation effects might cause the neutral faces of happi-

ness-prone individuals to be frequently perceived as angry,

the neutral faces of happiness-prone individuals conveyed

favorable impressions and increased their social desirabil-

ity. These results are not consistent with those of previous

studies (Engell et al., 2010), which revealed that adaptation

to happy faces decreased the perceived trustworthiness of

neutral faces. In the study by Engell et al. (2010), partici-

pants were asked to rate their impressions of the depicted

individuals in every trial after their facial expressions were

presented, while in the present study, participants evaluated

each individual only at the beginning and the end of the

experiment. As Engell et al. (2010) showed, participants

become less sensitive to an expression used as an adapting

facial expression and temporarily evaluate the face

depending on their adjusted perceptions. Therefore, their

evaluation of the face tends to reflect the opposite

impression of that conveyed by the adapting facial

expression. However, repeated presentations of a particular

individual might, over time, lead participants to form


impressions of the presented facial expressions. Partici-

pants might gradually form impressions while performing

expression-categorization tasks. In the evaluation task,

their impressions might reflect not their adjusted sensitiv-

ities but the ideas they have already formed during the

categorization tasks. The temporal and longitudinal effects

of adaptation on facial evaluation might be different.

One important longitudinal effect of repeated stimuli is

the mere exposure effect (Seamon et al., 1995; Zajonc,

1968; Zajonc, Markus, & Wilson, 1974; Zebrowitz, White,

& Wieneke, 2008). Mere repeated exposure to stimuli

increases positive impressions of them. However, if an

individual’s impressions of stimuli are initially negative or

threatening, repeated exposure might strengthen those

negative evaluations (Brickman, Redfield, Harrison, &

Crandall, 1972; Crisp, Hutter, & Young, 2009; Perlman &

Oskamp, 1971). Although all identities were presented

equally often in the present experiment, the distribution of

facial expressions differed for each identity. Because par-

ticipants were repeatedly exposed to relatively happy faces

made by happiness-prone individuals, the effects of mere

exposure to positive stimuli might increase the trustwor-

thiness or social desirability of the happiness-prone faces.

However, the effects of mere exposure to negative stimuli

in this study might decrease the social desirability of the

anger-prone individuals. The results indicated that those

individuals’ social desirability decreased, but not signifi-

cantly. The anger-prone individuals might not be recog-

nized quickly, because angry faces are not encoded as well

as happy faces (D’Argembeau & Van der Linden, 2007).

Therefore, the mere exposure effect might be weak for

anger-prone individuals. Further research should examine how the duration and number of exposures to facial expressions interact to shape evaluations of those expressions.

The present results support the current model of facial

recognition, which proposes that processing of facial iden-

tity and expression is interdependent (Calder & Young,

2005; Haxby et al., 2000); the results do not conform to the

model of Bruce and Young (1986), which proposes that

facial identity and expression are processed in parallel.

However, processing and representation are not identical.

The present results show that the representation of facial

identity and that of expression are interdependent; however,

it is still possible that facial identity and expression are

processed independently after a structural encoding stage

and then integrated rapidly. The integrated representation of

a face might influence the next instance of facial-expression

processing targeting the same individual. The present study

could not evaluate the process of facial recognition; how-

ever, we can say that integrated information of facial

identity and expression, but not distinct parallel informa-

tion, affects adjustments in sensitivity to facial expressions.

Whereas the present study presented morphed facial expressions for short durations to increase ecological

validity, everyday phenomena cannot be fully explained by

static, non-prototypical faces. In daily life, our facial

expressions change dynamically. Dynamic facial expres-

sions presented as video clips are quite ecologically valid

(Jellema, Pecchinenda, Palumbo, & Tan, 2011; Sato,

Kochiyama, Yoshikawa, Naito, & Matsumura, 2004). To

reveal adaptation effects to facial expressions and identities

in everyday life, further studies should investigate the

effects of dynamic facial expressions on adaptation effects.

In the present study, we showed expression adaptation to

each individual set of facial stimuli, even though the visual

image of each individual was the same. However, a per-

son’s image is not necessarily the same as his/her identity.

Within a given identity, image can differ substantially on

the basis of viewpoint, lighting, size, hairstyle, and age

(Bruce, 1994; Jenkins, White, Van Montfort, & Burton,

2011). In particular, processing of unfamiliar faces is influenced by these factors (Johnston & Edmonds, 2009).

Although expression adaptation is observed within individual

persons even with variation in visual image (Fox & Barton,

2007), the present study could not exclude the possibility that image similarity, rather than identity, affected facial-expression processing. Further study using different visual

images of the same person might reveal whether the

modified sensitivity to facial expressions observed in the

present study applies not only to visually similar images

but also to the generalized image of a person.

In conclusion, the present study showed that people

become less sensitive to angry faces and more sensitive to

happy faces after exposure to individuals who frequently

express anger. In contrast, people become less sensitive to

happy faces and more sensitive to angry faces after expo-

sure to individuals who frequently express happiness.

Further, people tend to form more positive impressions of

the social desirability of individuals who express happiness

frequently.

References

Adolphs, R. (2002). The neurobiology of social cognition. Current Opinion in Neurobiology, 11, 231–239.

Bestelmeyer, P. E. G., Jones, B. C., DeBruine, L. M., Little, A. C., &

Welling, L. L. M. (2010). Face aftereffects suggest interdepen-

dent processing of expression and sex and of expression and

race. Visual Cognition, 18, 255–274.

Blair, R. J. R., & Coles, M. (2000). Expression recognition and

behavioural problems in early adolescence. Cognitive Development, 15, 421–434.

Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.

Brickman, P., Redfield, J., Harrison, A. A., & Crandall, R. (1972).

Drive and predisposition as factors in the attitudinal effects of


mere exposure. Journal of Experimental Social Psychology, 8,

31–44.

Bruce, V. (1994). Stability from variation: the case of face recognition.

Quarterly Journal of Experimental Psychology, 47A, 5–28.

Bruce, V., & Young, A. (1986). Understanding face recognition.

British Journal of Psychology, 77, 305–327.

Bruyer, R., Laterre, C., Seron, X., Feyereisen, P., Strypstein, E.,

Pierrard, E., et al. (1983). A case of prosopagnosia with some

preserved covert remembrance of familiar faces. Brain and Cognition, 2, 257–287.

Buckingham, G., DeBruine, L. M., Little, A. C., Welling, L. L. M., Conway, C. A., Tiddeman, B. P., & Jones, B. C. (2006). Visual adaptation to masculine and feminine faces influences generalized preferences and perceptions of trustworthiness. Evolution and Human Behavior, 27, 381–389.

Calder, A. J., & Young, A. W. (2005). Understanding the recognition

of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.

Campbell, J., & Burke, D. (2009). Evidence that identity-dependent

and identity-independent neural populations are recruited in the

perception of five basic emotional facial expressions. Vision Research, 49, 1532–1540.

Carbon, C. C., Strobach, T., Langton, S. R. H., Harsanyi, G., Leder,

H., & Kovacs, G. (2007). Adaptation effects of highly familiar

faces: immediate and long lasting. Memory and Cognition, 35,

1966–1976.

Cook, R., Matei, M., & Johnston, A. (2011). Exploring expression

space: adaptation to orthogonal and anti-expressions. Journal of Vision, 11, 1–9.

Crisp, R. J., Hutter, R. R. C., & Young, B. (2009). When mere exposure

leads to less liking: the incremental threat effect in intergroup

contexts. British Journal of Psychology, 100, 133–149.

D’Argembeau, A., Van der Linden, M., Etienne, A. M., & Comblain,

C. (2003). Identity and expression memory for happy and angry

faces in social anxiety. Acta Psychologica, 115, 1–15.

D’Argembeau, A., & Van der Linden, M. (2007). Facial expressions

of emotion influence memory for facial identity in an automatic

way. Emotion, 7, 507–515.

Eastwood, J. D., Smilek, D., & Merikle, P. M. (2003). Negative facial

expression captures attention and disrupts performance. Perception and Psychophysics, 65, 352–358.

Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo

Alto: Consulting Psychologists Press.

Ellamil, M., Susskind, J. M., & Anderson, A. K. (2008). Examinations

of identity invariance in facial expression adaptation. Cognitive, Affective, and Behavioral Neuroscience, 8, 273–281.

Engell, A. D., Todorov, A., & Haxby, J. V. (2010). Common neural

mechanisms for the evaluation of facial trustworthiness and

emotional expressions as revealed by behavioral adaptation.

Perception, 39, 931–941.

Esteves, F., & Ohman, A. (1993). Masking the face: recognition of

emotional facial expressions as a function of the parameters

of backward masking. Scandinavian Journal of Psychology, 34,

1–18.

Fox, C. J., & Barton, J. J. S. (2007). What is adapted in face

adaptation? The neural representations of expression in the

human visual system. Brain Research, 1127, 80–89.

Frith, C. D., & Frith, U. (2007). Social cognition in humans. Current Biology, 17, R724–R732.

Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of

the basis of social cognition. Trends in Cognitive Sciences, 8,

396–403.

Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The

distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233.

Hsu, S. M., & Young, A. W. (2004). Adaptation effects in facial

expression recognition. Visual Cognition, 11, 871–899.

Information-technology Promotion Agency, Japan (1998). Software for facial image processing system for human-like Kansei Agent [Computer software].

Jellema, T., Pecchinenda, A., Palumbo, L., & Tan, E. G. (2011).

Biases in the perception and affective valence of neutral facial

expressions induced by the immediate perceptual history. Visual Cognition, 19, 616–634.

Jenkins, R., White, D., Van Montfort, X., & Burton, A. M. (2011).

Variability in photos of the same face. Cognition, 121, 313–323.

Jiang, F., Blanz, V., & O’Toole, A. J. (2007). The role of familiarity

in three-dimensional view-transferability of face identity adap-

tation. Vision Research, 47, 525–531.

Johnston, R. A., & Edmonds, A. J. (2009). Familiar and unfamiliar

face recognition: a review. Memory, 17, 577–596.

Kawanishi, C. (1993). The influence of face on person perception.

Japanese Journal of Psychology, 64, 263–270.

Laurence, S., & Hole, G. (2011). The effect of familiarity on face

adaptation. Perception, 40, 450–463.

Leopold, D. A., O’Toole, A. J., Vetter, T., & Blanz, V. (2001).

Prototype-referenced shape encoding revealed by high-level

aftereffects. Nature Neuroscience, 4, 89–94.

Leppanen, J. M., & Hietanen, J. K. (2004). Positive facial expressions

are recognized faster than negative facial expressions, but why?

Psychological Research, 69, 22–29.

Little, A. C., DeBruine, L. M., & Jones, B. C. (2005). Sex-contingent

face after-effects suggest distinct neural populations code male

and female faces. Proceedings of the Royal Society B: Biological Sciences, 272, 2283–2287.

Marsh, A. A., & Blair, R. J. R. (2008). Deficits in facial affect

recognition among antisocial populations: a meta-analysis.

Neuroscience and Biobehavioral Reviews, 32, 454–465.

Marsh, A. A., Kozak, M. N., & Ambady, N. (2007). Accurate

identification of fear facial expressions predicts prosocial

behavior. Emotion, 7, 239–251.

Milders, M., Sahraie, A., & Logan, S. (2008). Minimum presentation

time for masked facial expression discrimination. Cognition and Emotion, 22, 63–82.

Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the

crowd revisited: a threat advantage with schematic stimuli.

Journal of Personality and Social Psychology, 80, 381–396.

Oosterhof, N. N., & Todorov, A. (2009). Shared perceptual basis of

emotional expressions and trustworthiness impressions from

faces. Emotion, 9, 128–133.

Pelli, D. G. (1997). The VideoToolbox software for visual psycho-

physics: transforming numbers into movies. Spatial Vision, 10,

437–442.

Perlman, D., & Oskamp, S. (1971). The effects of picture content and

exposure frequency on evaluations of negroes and whites.

Journal of Experimental Social Psychology, 7, 503–514.

Rhodes, G., & Jeffery, L. (2006). Adaptive norm-based coding of

facial identity. Vision Research, 46, 2977–2987.

Rhodes, G., Jeffery, L., Watson, T. L., Clifford, C. W. G., &

Nakayama, K. (2003). Fitting the mind to the world: face

adaptation and attractiveness aftereffects. Psychological Science, 14, 558–566.

Rutherford, M. D., Chattha, H. M., & Krysko, K. M. (2008). The use

of aftereffects in the study of relationships among emotion

categories. Journal of Experimental Psychology: Human Perception and Performance, 34, 27–40.

Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M.

(2004). Enhanced neural activity in response to dynamic facial

expressions of emotion: an fMRI study. Cognitive Brain Research, 20, 81–91.


Saxton, T. K., Little, A. C., DeBruine, L. M., Jones, B. C., & Roberts,

S. C. (2009). Adolescents’ preferences for sexual dimorphism

are influenced by relative exposure to male and female faces.

Personality and Individual Differences, 47, 864–868.

Seamon, J. G., Williams, P. C., Crowley, M. J. U., Kim, I. J., Langer,

S. A., Orne, P. J., et al. (1995). The mere exposure effect is based

on implicit memory: effects of stimulus type, encoding condi-

tions, and number of exposures on recognition and affect

judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 711–721.

Todorov, A., & Duchaine, B. (2008). Reading trustworthiness in faces

without recognizing faces. Cognitive Neuropsychology, 25, 395–

410.

Tranel, D., Damasio, A. R., & Damasio, H. (1988). Intact recognition

of facial expression, gender, and age in patients with impaired

recognition of face identity. Neurology, 38, 690–696.

Webster, M. A., Kaping, D., Mizokami, Y., & Duhamel, P. (2004).

Adaptation to natural facial categories. Nature, 428, 557–561.

Webster, M. A., & MacLeod, D. I. A. (2011). Visual adaptation and

face perception. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 366, 1702–1725.

Winston, J. S., Strange, B. A., O’Doherty, J., & Dolan, R. J. (2002).

Automatic and intentional brain responses during evaluation of

trustworthiness of faces. Nature Neuroscience, 5, 277–283.

Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9, 1–27.

Zajonc, R. B., Markus, H., & Wilson, W. R. (1974). Exposure effects

and associative learning. Journal of Experimental Social Psychology, 10, 248–263.

Zebrowitz, L. A., White, B., & Wieneke, K. (2008). Mere exposure

and racial prejudice: exposure to other-race faces increases

liking for strangers of that race. Social Cognition, 26, 259–275.
