

    Running head: DYNAMIC EMOTIONAL FACIAL EXPRESSIONS

    Dynamic Clips of Emotional and Pain Facial Expressions - Validation of a Stimulus Set -

Daniela Simon a,e, Kenneth D. Craig b, Frederic Gosselin c, Pascal Belin d, and Pierre Rainville e*

    a) Department of Clinical Psychology, Humboldt University of Berlin, Germany

    b) Department of Psychology, University of British Columbia, Vancouver / Canada

    c) Departement de Psychologie, Université de Montréal, Montréal / Canada

    d) Department of Psychology, University of Glasgow, Glasgow / UK

    e) Departement de Stomatologie, Université de Montréal, Montréal / Canada

* Corresponding author: Pierre Rainville, Ph.D., Professeur agrégé

    Département de stomatologie

    Faculté de médecine dentaire

    Pav. Paul-G-Desmarais (local 2123)

    Université de Montréal

    CP 6128, Succursale Centre-ville

    Montréal, QC H3C3J7

    Canada

    Fax: +001 / 514 / 343-2111

    Phone: +001 / 514 / 343-6111 ext. 3935

    E-mail: [email protected]


    Acknowledgements

    This research was supported by grants and salary support from the Natural Sciences and

    Engineering Research Council (NSERC) of Canada, the Fonds Québécois de la recherche sur la

nature et les technologies (FQRNT), the Canada Foundation for Innovation (CFI), the Fonds de la

    recherche en santé du Québec (FRSQ), and a scholarship to DS from the German Academic

    Research Exchange Service (DAAD). We thank Julie Senécal, David Wong, and Annie L. for their

    help in the realisation of this study.


    Abstract

    Emotional facial expressions have been intensively investigated using photographs. However,

    because facial expressions are highly dynamic, there is a need for stimuli that include this important

    temporal dimension. Moreover, the facial expression of pain is not included in any available

stimulus set. Here, we describe 1-sec film clips recorded from 8 actors who displayed facial
expressions of six emotions (anger, disgust, fear, happiness, sadness and surprise) or pain,
in addition to a neutral expression. Prototypicality and distinctness of all stimuli were confirmed by

    two trained judges using the Facial Action Coding System. Discrimination by naïve volunteers and

subjective evaluations of the clips on expression intensity, arousal and valence confirmed the

    validity and specificity of the 7 expressions. This set of dynamic facial expressions provides unique

    material to further explore the perception of facial expressions of emotion and pain and their role in

    the regulation of social behaviour and cognition.


    Dynamic Clips of Emotional and Pain Facial Expressions - Validation of a Stimulus Set -

    The importance of the face in interpersonal communication is widely recognized in

    anthropology, psychology and the social sciences. Facial expressions are powerful social signals,

    which impart information about a person’s emotional state and might provide survival value by

signalling potential threats (Darwin, 1872). Nowadays, scholars agree on at least a

    ‘minimal universality’ of facial expressions, which predicts a certain amount of cross-cultural

    similarity in expressing and recognizing emotional faces, without necessarily postulating innateness

    (e.g. Russell & Fernandez-Dols, 1997). Although facial expressions in naturalistic contexts might

    contain a blend of different emotions, considerable evidence has been provided for the existence of

    at least six prototypical basic emotions (anger, disgust, fear, sadness, happiness and surprise; see

Table 1) (Ekman, Friesen, & Ellsworth, 1982). According to Ekman & Friesen (1975), each basic

    emotion corresponds to a specific activation pattern of facial muscles (action units = AU),

    measurable by the Facial Action Coding System (FACS) (Ekman & Friesen, 1978). The facial

    expressions of the six basic emotions have been subject to research in a variety of disciplines

ranging from cognitive, social, and clinical science to computer science. Studies

    encompassing topics from automatic facial feature tracking (e.g. Lievin & Luthon, 2004) to neural

    correlates of empathic responses to emotional facial expressions (e.g. Carr, Iacoboni, Dubeau,

    Mazziotta, & Lenzi, 2003) reflect the breadth of scientific interest in emotional facial expressions

    (EFEs).

    In parallel to those developments, pain research has provided evidence that the facial

    expression of pain is unique and distinct. According to the International Association for the Study

    of Pain, pain is “an unpleasant sensory and emotional experience associated with actual or potential

    tissue damage, or described in terms of such damage” (see http://www.iasp-pain.org/terms-p.html).


    Modern conceptions of pain typically separate secondary emotional responses associated with the

    appraisal of the cause and future consequences of pain from the immediate affective response

    (including facial expression) intrinsic to acute pain experiences (e.g. Price, 2000). Moreover,

    evolutionary theories propose that the facial expression of pain following acute injury might play a

    role in survival by alarming onlookers in situations of direct threat and/or by eliciting empathy and

    solicitous behavior towards the individual experiencing pain (Williams, 2002). While fearful and

angry faces may signal impending danger, pain faces indicate an actual noxious event

    immediately affecting the body’s integrity. Therefore, the facial expression of pain might arguably

    be more important for the survival of the species than other basic facial expressions (Williams,

    2002). However, comparatively little work has been done on pain expression and this research has

    had relatively little impact on emotion research.

    The facial expression of pain has been investigated in the past decades especially in the

    context of pain assessment (Craig, Prkachin, & Grunau, 2001) and has recently become an area of

    interest in neuroscientific research (e.g. Botvinick et al., 2005; Simon, Craig, Miltner, & Rainville,

    in press). Several studies using FACS reliably identified the co-occurrence of certain facial actions

    in a pain face across various clinical pain conditions (e.g. LaChapelle, Hadjistavropoulos, & Craig,

1999; Prkachin, 1992) (see Table 1). Although research in this field is still scant, this facial pattern

    has been described to be unique and distinct from the facial expressions of basic emotions

    (Kappesser & Williams, 2002; Williams, 2002). Part of the reason pain has not been investigated as

much as other basic expressions may be related to the lack of an appropriate stimulus set to study the

    facial expression of pain. Furthermore, the integration of pain into emotion research requires that

    specific stimuli be developed including both pain and basic emotion expressions. The stimulus set

    described in the present report may provide a tool to establish bridges between those currently

    independent domains.


    Most studies on facial expressions have been conducted using static facial displays; for

example, the set of photographs provided by Ekman & Friesen (1976). However, facial expressions

    occurring in real-life interpersonal communication are highly dynamic. Moreover, the identification

    of EFEs seems to be partly influenced by motion (Harwood, Hall, & Shinkfield, 1999; O'Toole,

    Roark, & Abdi, 2002; Roark, Barrett, Spence, Abdi, & O'Toole, 2003). A recent study showed that

    the dynamic properties of facial expressions significantly improve discrimination (Ambadar,

    Schooler, & Cohn, 2005). Moreover, brain regions implicated in processing facial affect, including

    the amygdala and fusiform gyrus, show greater responses to dynamic versus static emotional

    expressions; and activity in the superior temporal sulcus discriminate dynamic faces expressing

    emotions from dynamic face not expressing emotion (LaBar, Crupain, Voyvodic, & McCarthy,

    2003; see also Haxby, Hoffman & Gobbini et al., 2002). Hence, to improve the ecological validity

    of the investigation of the social communicative functions of emotional facial expressions, there is a

    need for dynamic stimuli. To date no standardized and validated set of such stimuli containing all

    basic emotions has been published. Studies investigating responses to dynamic stimuli of EFEs

    either used computer-based morphs (e.g. LaBar et al., 2003; Sato, Kochiyama, Yoshikawa, Naito, &

    Matsumura, 2004) or sets of movie clips comprising some but not all basic emotions (Kilts, Egan,

    Gideon, Ely, & Hoffman, 2003; Wicker et al., 2003). Furthermore, the facial expression of pain is

    not included in any validated set of static or dynamic facial expression stimuli. Recent studies

    examining brain responses to the observation of pain expression used movie clips showing patients

in pain (e.g. Botvinick et al., 2005). It is not clear whether the stimuli developed in that study may

    be contaminated by contextual information and motion resulting in non-uniform visual complexity.

    More importantly, this stimulus set does not allow for the study of the similarity and differences

    between responses to pain expression and responses to the expression of basic emotions. This


    constitutes a serious limitation, especially for neuroscientific research on the facial expression of

    pain and emotions.

    The aim of the present study was to produce and validate a comprehensive, standardized set

    of dynamic clips of facial expressions of the six basic emotions for future research on EFEs. In

    addition, expressions of acute pain were included to assess further the specificity of this expression

    compared to basic emotions and to provide material to establish additional bridges between emotion

    and pain research.

    Method

    Participants

    Fifteen healthy volunteers (11 males and 4 females, mean age: 24.1 ± 3.4) were recruited on

    the campus of the University of Montreal to participate in a study on the perception of facial

    expression. Each participant provided informed consent and received monetary compensation for

    their participation (25 CA$). All procedures were approved by the local Ethics Committee.

    Stimuli and Stimuli Production

    Drama students of different theatrical schools in Montréal were hired for the production of

    the stimuli. Initially 11 actors took part in the recording but only the best 8 (4 males and 4 females;

    mean age: 24.4 ± 7.5 y.o.) were used to create this set of stimuli. They were selected based on the

    results of the observers’ judgements as those who produced the most unambiguous facial

    expressions. All actors provided written informed consent, transferring the copyright of the

    produced material to the research group. Their participation was compensated with 100 CA$. The

    recording sessions took place with a seamless chroma-key blue paper screen background and two

diffuse tungsten light sources. In order to minimize contamination of the stimuli by extraneous facial


cues, selected actors did not have facial piercings, a moustache or a beard, and did not wear earrings or

    make-up during the recording. If necessary they were asked to put on a hairnet. The actors sat

    comfortably about 1.5 m away from the camera lens.

    Actors were required to express the six basic emotions (happiness, disgust, fear, anger,

    sadness and surprise) in three intensities (moderate, strong, extreme). Actors also generated facial

    displays expressing acute pain experiences in four intensities (mild, moderate, strong, extreme), and

    produced a neutral face as a control. Actors were asked to produce each expression in about 1 sec

    starting with a neutral face and ending at the peak of the expression. The actor’s performance was

    monitored online by the filming team positioned outside of the cabin.

    Prior to the recordings, the actors were given an instruction guide encouraging them to

    imagine personal situations to evoke each emotion or pain. In addition, descriptions of short scenes

were provided as examples to help the actors vividly imagine the different emotional states they were

    asked to display (see appendix). The large majority of the clips (about 90%) were produced using

    this method based on mental imagery of an emotional situation. If necessary, the actors were also

    shown photographs of prototypical emotional facial expressions. If the filming team still saw

    discrepancies between depicted and expected facial expressions, discrete muscles were trained as

    described by Ekman & Friesen (1975) and Ekman, Friesen & Hager (2002) (Fig. 1). The

    performance was repeated until the filming team was convinced that the criteria described by

    Ekman & Friesen (1975) were met for each facial expression of emotions and by Williams (2002)

for pain expression. At least 2-3 good streams were recorded for each intensity level in each condition.

    Given the considerable volume of film clips produced with each actor, a thorough FACS analysis

    could not be performed online and the filming team primarily relied on their immediate recognition

    of the target emotions. However, their decision to include a clip in the set was also informed by a

    list of easily detectable AUs for the online identification of ‘expected emotion expressions’ (Happy:


    AU 6/12; Anger: AU 4/7/23; Disgust: AU 10; Fear: AU 1/4/5/25; Surprise: AU 1/2/26; Sadness:

    AU 1/4/15 and Pain: AU 4/6/7/10). The movie streams were captured in colour by a Canon XL1S

    video camera and directly transferred and saved to a Pentium computer for off-line editing using

    Adobe Premiere 6.5. Each recording session lasted about 1.5 hours. (Figure 1 about here)

    In this report we selected stimuli of the “strong” intensity level, as the emotions displayed

    appeared to be unambiguous and rarely judged to be exaggerated at that level. In a pre-rating

    session three naïve judges (1 male, mean age: 25.3 ± 0.6) independently selected the three best

    sequences per actor for each face category. Using Adobe Premiere 6.5 and QuickTime Pro, the clips

were cut backward from the peak of the expression in order to ensure that the peak was captured
within 1 sec (image size: 720 x 480 pixels; frame rate = 29.97 frames per second; mean inter-ocular

    distance = 100.6 ± 4.5 pixels). Those features were imposed to facilitate their use in experimental

    studies in which the duration of the stimuli may be critical (e.g. event-related activation in brain

    imaging experiments). Admittedly, the disadvantage of this choice is that the 1-sec duration does

    not capture the full extent of the dynamical expression. More specifically, in some clips, the onset

    of the expression may not match precisely with the onset of the clip (i.e. in some clips the onset to

    apex slightly exceeded 1 sec), and the offset of the expression was excluded from the clips.
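For illustration only, the following Python sketch shows how such a 1-sec segment ending at the expression apex could be extracted at 29.97 frames per second and 720 x 480 pixels. This is not the procedure used here (the clips were edited manually in Adobe Premiere 6.5 and QuickTime Pro); the OpenCV-based function and the apex_frame argument are hypothetical.

import cv2

def cut_backward_from_apex(src_path, dst_path, apex_frame,
                           fps=29.97, duration_s=1.0, size=(720, 480)):
    # Extract a ~1-sec clip that ends at the expression apex; apex_frame is
    # assumed to have been identified beforehand (e.g. by visual inspection).
    n_frames = int(round(fps * duration_s))      # ~30 frames at 29.97 fps
    start = max(apex_frame - n_frames + 1, 0)    # cut backward from the peak
    cap = cv2.VideoCapture(src_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, start)
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break                                # source ended early
        writer.write(cv2.resize(frame, size))
    cap.release()
    writer.release()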

    Validation

    Subjective ratings. The rating session took place in a meeting room at the University of

    Montreal. Participants viewed each film clip twice and were asked to judge it on three different

scales. After the first presentation, participants evaluated valence and arousal on a 9-point Likert
scale. Participants were instructed to “rate how the person in the clip might feel: with respect to
valence: -4 = clearly unpleasant to +4 = clearly pleasant; and arousal: -4 = highly relaxed to +4 =

    high level of arousal”. Information about valence and arousal was included to allow for the


    stimuli’s use in experimental studies inspired by the dimensional model of emotion (e.g. Lang,

    Greenwald, Bradley & Hamm, 1993). Neuroimaging studies have shown that responses of some

    brain areas are crucially influenced by valence and/or arousal of stimuli (e.g. the amygdala; see

    Zald, 2003), underlining the importance of controlling for those dimensions in studies investigating

    emotions. After the second presentation, participants rated each facial expression with respect to the

intensity of happiness, disgust, fear, anger, sadness, surprise and pain on a 6-point Likert

    scale. Participants were instructed to “rate the intensity of each emotion in the clip from 0 = not at

    all to 5 = the most intense possible”. Each clip was therefore rated on all emotional categories.

    Based on the emotion intensity ratings, the clip with the lowest mean intensity on all non-target

    emotions was selected for each actor and emotion. The final set comprised 64 one-second clips with

    each of the 8 actors contributing one clip to each of the 8 conditions (Fig. 2). (Figure 2 about here)
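As an illustration of this selection rule (not part of the original analysis; the clip names and ratings below are invented), the following sketch keeps, for each actor and target emotion, the candidate clip with the lowest mean rating on the six non-target scales:

import pandas as pd

SCALES = ["happiness", "disgust", "fear", "anger", "sadness", "surprise", "pain"]
# Toy example: three candidate "fear" clips from one actor, each with a mean
# observer rating (0-5) on the seven expression scales (values are made up).
clips = pd.DataFrame([
    {"actor": "A1", "target": "fear", "clip": "fear_01", "happiness": 0.1,
     "disgust": 0.6, "fear": 3.2, "anger": 0.2, "sadness": 0.3, "surprise": 1.0, "pain": 0.5},
    {"actor": "A1", "target": "fear", "clip": "fear_02", "happiness": 0.0,
     "disgust": 0.3, "fear": 3.4, "anger": 0.1, "sadness": 0.2, "surprise": 0.6, "pain": 0.3},
    {"actor": "A1", "target": "fear", "clip": "fear_03", "happiness": 0.2,
     "disgust": 0.9, "fear": 3.0, "anger": 0.4, "sadness": 0.4, "surprise": 1.3, "pain": 0.8},
])
# Mean rating over the six non-target scales for each candidate clip.
clips["non_target_mean"] = clips.apply(
    lambda row: row[[s for s in SCALES if s != row["target"]]].mean(), axis=1)
# For each actor x target emotion, keep the clip with the lowest non-target mean.
selected = clips.loc[clips.groupby(["actor", "target"])["non_target_mean"].idxmin()]
print(selected[["actor", "target", "clip", "non_target_mean"]])   # fear_02 is kept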

    Facial Action Coding System. FACS coding was performed on the 64 selected clips (Fig. 1).

    This procedure offered the opportunity to compare the results of these facial expressions with the

prototypes reported in the literature (chapter 12, p. 174, Table 1; Ekman et al., 2002; Williams,

    2002). All 64 clips were evaluated by two independent accredited coders (1 male and 1 female),

    who were blind to the target expression in each clip. The FACS ratings comprise information about

    the occurrence and intensity (a = trace of the action to e = maximum evidence) of each AU. In the

first stage of coding, the two coders independently identified the AUs present in each clip and rated

    their intensity. In the second stage, the differences in coding were reconciled across the two coders

    to provide a single set of AU’s with their intensity for each clip. (Table 1 about here)

    Data Analysis

    Subjective ratings. Average ratings were first calculated across subjects for each clip and

    each rating scale, and the mean and SEM ratings were then computed across clips within each


    expression condition. The frequency of correct classification was calculated for each expression

category based on the highest emotion intensity rating to provide an index of sensitivity [Hit rate =
(# of correct classifications) / (# of correct classifications + # of “misses”) × 100%]. The frequency of correct
rejection of stimuli not in the target expression was also determined to provide an index of the
specificity of judgments for each expression category [Correct rejection rate = (# of correct rejections) /
(# of correct rejections + # of “false alarms”) × 100%]. Additionally, the presence of non-target expression was assessed

    by calculating a “mixed-emotion index” (mean of the ratings for the non-target expressions) for

    each condition.
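A minimal sketch of how these indices can be computed is given below; the ratings are randomly generated stand-ins for the real data, and, as in the text, each clip is classified according to its highest-rated scale:

import numpy as np

EXPRESSIONS = ["happiness", "disgust", "fear", "anger", "sadness", "surprise", "pain"]
rng = np.random.default_rng(0)
# Toy data: 14 clips (2 per expression) rated 0-5 on the 7 scales by one observer;
# the target scale of each clip is boosted so that it tends to be rated highest.
targets = np.repeat(np.arange(7), 2)
ratings = rng.uniform(0.0, 1.5, size=(14, 7))
ratings[np.arange(14), targets] += 3.0
choices = ratings.argmax(axis=1)                 # classification = highest-rated scale
for k, name in enumerate(EXPRESSIONS):
    is_target = targets == k
    hits = np.sum(choices[is_target] == k)
    misses = np.sum(choices[is_target] != k)
    correct_rejections = np.sum(choices[~is_target] != k)
    false_alarms = np.sum(choices[~is_target] == k)
    sensitivity = 100.0 * hits / (hits + misses)
    specificity = 100.0 * correct_rejections / (correct_rejections + false_alarms)
    # Mixed-emotion index: mean rating on the six non-target scales for target clips.
    non_target_cols = [j for j in range(7) if j != k]
    mixed = ratings[np.ix_(is_target, non_target_cols)].mean()
    print(f"{name:9s} hit={sensitivity:5.1f}%  correct rejection={specificity:5.1f}%  mixed={mixed:.2f}")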

A cluster analysis was computed to ensure that each clip was adequately recognized and that
clear boundaries existed between the different categories of facial expression (squared Euclidean
distances; Ward's method). The analysis was performed on the 64 clips using the 7 intensity ratings

    (one for each possible expression) averaged across participants. Inter-judge reliability was

    determined by computation of Cronbach’s Alpha. In order to determine whether a portrayed

    emotion was rated higher on the corresponding scale than on the non-target scales, Fisher’s

protected least significant difference test was calculated for each face category. Subsequent

analyses included repeated-measures analysis of variance (ANOVA) on the participants' ratings

    (intensity, mixed-emotion index, valence and arousal) with factors emotion (pain and six basic

    emotions) and sex of actor (male / female). A Greenhouse-Geisser correction was used for

    computation of the statistical results (p ≤ .05). Main effects of emotion underwent post hoc analysis

    (multiple comparisons on eight emotional categories) using a Bonferroni correction (p ≤ .05/36 =

    .0013).
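The clustering step itself can be reproduced along the following lines (a sketch only: the 64 x 7 matrix of mean intensity ratings is replaced by invented data, and SciPy's Ward linkage is used as a stand-in for the squared-Euclidean/Ward analysis reported here):

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# Toy stand-in for the 64 x 7 matrix of intensity ratings (8 conditions x 8 actors,
# 7 expression scales, averaged across the 15 observers).
targets = np.repeat(np.arange(8), 8)              # condition 7 stands for "neutral"
profiles = rng.uniform(0.0, 1.0, size=(64, 7))
expressive = targets < 7
profiles[expressive, targets[expressive]] += 3.0  # target scale dominates for expressive clips
Z = linkage(profiles, method="ward")              # Ward's method (Euclidean-based)
labels = fcluster(Z, t=8, criterion="maxclust")   # cut the dendrogram into 8 clusters
# Clips sharing a target condition should typically fall into the same cluster.
for cond in range(8):
    print(f"condition {cond}: clusters {sorted(set(labels[targets == cond]))}")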

Facial Action Coding System. The evaluation of the FACS coding was performed in two

    successive stages. During the first stage both coders provided independent lists and intensity ratings

    of the observed AUs. The frequency reliability was determined by 2x the number of agreements


    divided by the total number of AUs coded by both raters. The intensity reliability was determined

by 2x the number of intensity agreements within ±1 level, divided by the total number of AUs on which the

    coders agreed. During the second stage an agreed set was determined by reconciling the

    disagreements. The criteria for the reconciliation are provided by the FACS manual (Ekman et al.,

    2002, p. 88). The reported results refer to this agreed set.
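Written out, the two agreement indices take the following form (a sketch; the AU codings in the example are invented, intensity levels 'a'-'e' are mapped to 1-5, and the denominator of the intensity index is taken as the two coders' intensity scores for the jointly coded AUs):

def facs_reliability(coder_a, coder_b):
    # Each coding maps AU number -> intensity level (1 = 'a' ... 5 = 'e').
    agreed_aus = set(coder_a) & set(coder_b)        # AUs coded by both
    total_aus_coded = len(coder_a) + len(coder_b)   # all AUs coded by either coder
    # Frequency reliability: 2 x number of AUs coded by both, over all AUs coded.
    frequency = 2 * len(agreed_aus) / total_aus_coded
    # Intensity reliability: 2 x agreements within +/-1 level, over the intensity
    # scores given by both coders to the jointly coded AUs (2 per agreed AU).
    within_one = sum(abs(coder_a[au] - coder_b[au]) <= 1 for au in agreed_aus)
    intensity = 2 * within_one / (2 * len(agreed_aus)) if agreed_aus else 0.0
    return frequency, intensity

a = {4: 3, 6: 2, 7: 1, 10: 4}    # coder A: AUs 4, 6, 7, 10 with intensities
b = {4: 4, 6: 2, 10: 2, 25: 1}   # coder B: AUs 4, 6, 10, 25 with intensities
print(facs_reliability(a, b))    # -> (0.75, 0.666...)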

    The number of actors showing some involvement of an AU was determined for each

    expression condition. Additionally, the mean number and % of target AUs observed in each

    expression category was determined. All AUs that were activated at an intensity level of ‘b’

    (indicating slight evidence of facial action) or more in at least 50% of the actors were considered to

    be representative of the expression condition (note that level ‘a’ indicates only a trace). The lists of

    observed AUs in each category were compared with those previously reported for prototypical

    emotional facial expression or their major variants (Ekman et al., 1982). While a prototype refers to

    a configuration of AUs commonly associated with a certain emotion, major variants constitute

    partial expressions of a prototype (Smith & Scott, 1997). Moreover, in order to evaluate sex

differences in facial expression, repeated-measures ANOVAs with the within-subject factor ‘AU’
and the between-group factor ‘sex of the actor’ were computed for each emotion (p ≤ .05/7 = .007).

    Results

    Subjective Ratings

Analysis of inter-rater reliability revealed very high agreement among the 15 observers for

    intensity, valence and arousal ratings (Cronbach’s Alpha = .97).

    Intensity. The average ratings across participants for each portrayed expression on the

    corresponding target scales are reported in Table 2. Pairwise comparisons revealed that each

    emotional facial expression was perceived as more intense than neutral faces (all p’s < .001). The


    analysis of the mean intensity ratings of the target emotional category (see asterisks in Table 2) also

    revealed a main effect of emotional expression (F (7, 98) = 58.20; p < .001; ε = .57). Facial

expressions of happiness and surprise were judged as more intense than sad faces (happiness vs.
sadness: p < .001; surprise vs. sadness: p < .001), while happy faces were perceived as more intense

    than fear faces (p < .001). The remaining emotions did not significantly differ from each other.

    Intensity ratings were not influenced by the sex of the observer (F (1, 13) = 1.24; p = .285) or of the

    actor (F (1, 14) = 0.03; p = .864; ε = 1.00) (see Table 3). Moreover, each category of facial

expressions yielded significantly higher average ratings on the corresponding scale than on the 6

    non-target scales (Fisher’s protected LSD; all p’s < .001; see Table 3). (Table 2 and Table 3 about

    here)

    Discrimination. The cluster analysis identified 8 different subgroups of clips corresponding

    to each of the 8 expression conditions. According to the observed clustering, all clips were

    adequately assigned to the target category (i.e. the distance between clips of the same target

    emotion was always smaller than between clips of different target emotions). The analysis also

    revealed second-order combinations of clusters for neutral and sadness, as well as pain and fear.

    Pain and fear also showed some proximity to disgust as reflected by a third-order combination of

    these clusters.

    The sensitivity (Hit rate) and specificity (Correct rejection rate) indices confirmed that

    participants clearly recognized the target emotion shown in the clips, and that the presence of non-

    target expression was negligible (Table 2). Sensitivity was higher than 70% for all conditions (mean

sensitivity: 86%), well above the 14% chance rate. Specificity was above 94% for all expressions

    (mean specificity: 97%). However, the presence of non-target expression differed between

    emotions as revealed by a main effect of emotional expression on the mixed-emotion index (F (7,

    98) = 13.89; p < .001; ε = .44). Pairwise comparisons revealed that fear, pain and disgust faces


    showed slightly more traces of non-target emotions than happy, angry and neutral faces (all p’s ≤

    .001). Fear clips additionally showed more traces of non-target emotions than sadness and surprise

    clips (p’s ≤ .001). While the mixed-emotion index was not influenced by the sex of the observer, a

    main effect of the sex of the actor was detected reflecting slightly higher reports of non-target

    expression in female actors (F (1, 14) = 4.87; p = .044; ε = 1.00). Taken together, these findings

    indicate that the expressions were easily discriminated and included only traces of non-target

    expressions.

Valence. Analysis of the average valence scores clearly indicates that participants perceived

    happy faces as “pleasant”, neutral and surprise faces as “neutral”, and the remaining emotional

    faces (pain, disgust, sadness, fear, anger) as unpleasant (Table 3). Thus, the analysis of these ratings

    resulted in a main effect of emotional expression (F (7, 98) = 216.40; p < .001; ε = .47). As

    demonstrated by pairwise comparisons, participants judged happy faces as significantly more

    pleasant than all other face categories (all p’s < .001). Furthermore, all facial expressions that had

    been rated as “unpleasant” were significantly more unpleasant than both neutral and surprised faces

    (all p’s < .001). Additionally, within the group of unpleasant facial displays, pain faces were rated

significantly higher on unpleasantness than fear, sad and anger faces (pain vs. fear: p < .001; pain
vs. sadness: p = .001; pain vs. anger: p < .001). Furthermore, disgust clips also yielded higher
unpleasantness ratings than anger clips (p < .001). While valence ratings were not influenced by the

    observer’s sex, an interaction between emotional facial expression and sex of the actor was found (F

    (7, 98) = 5.55; p < .001; ε = .64). This was due to higher levels of pleasantness to female than male

    faces expressing surprise (p = .001).

    Arousal. The statistical analysis of the average arousal ratings for all emotional facial

    expressions revealed a main effect of emotional expression (F (7, 98) = 24.91; p < .001; ε = .39).

    Pairwise comparisons showed that participants perceived neutral and sad faces as significantly less


    arousing than pain, surprise, fear, anger, as well as disgust faces (all p’s < .001), while sadness did

    not differ significantly from neutral. Furthermore, pain clips were rated as the most arousing and

    differed significantly from fear, disgust and happy faces (p’s ≤ .001). While arousal ratings were not

    influenced by the observer’s sex, a main effect of the actor’s sex was observed (F (1, 14) = 6.32; p =

    .025; ε = 1.00). Moreover, a trend of an interaction between the actor’s sex and emotion was

detected (F (7, 98) = 2.31; p = .065; ε = .60). Post-hoc comparisons revealed higher levels of arousal

    to female than male faces expressing happiness (t (14) = 3.21; p = .006) (see Table 3).

    FACS

    The reliability of the FACS coding of the 64 clips across the two coders was excellent

(frequency reliability = 82.3%; intensity reliability = 90.6%). Table 1 lists all AUs observed in the
different conditions and shows the overlap with prototypical AUs, as reported in the FACS literature.

Considering the FACS investigator's guide (chapter 12, p. 174, Table 1; Ekman et al., 2002), the

    detected activation patterns of AUs for each emotion generally corresponded well with the

    prototypes or major variants of the intended emotional facial expressions. The selected stimuli

    always included several of the target AUs in various combinations, and only few non-target AUs

    (Table 1). More specifically, all happiness expressions included the target AUs 6 and 12; anger

    expressions included 8 out of 10 target AUs; disgust expression included all 7 target AUs; fear

    expressions included 7 out of 8 target AUs; surprise expressions included all 5 target AUs; sadness

    expressions included 7 out of 8 target AUs; and pain expressions included 9 out of 10 target AUs.

    Moreover, few non-target AUs were observed. For example, anger stimuli included an overall

average of 3.38 target AUs and only 0.38 non-target AUs (see # of target AU in Table 1). This

    meant that among all the AUs observed in the anger clips, 89.6% were target AUs. Happiness and


    disgust showed the lowest relative rates of target AUs (see Table 1) indicating the presence of

additional non-target AUs mainly in those conditions.

    There was no main effect of the sex of the actor, and no interaction between the emotion

    condition and the sex of the actor on FACS data, suggesting that both males and females

    contributed to the observed patterns of facial expression across conditions.

    Discussion

    The aim of the present study was to validate a set of dynamic stimuli of prototypical facial

    expressions of basic emotions and pain. The film clips were comprehensively validated by FACS

    coding and subjective ratings. The facial expressions of the different basic emotions and pain

    corresponded well with the prototypes and major variants reported in the literature (chapter 12,

    p.174, Table 1, Ekman et al., 2002; Williams, 2002). Additionally, the results of the cluster analysis

    performed on the observer’s ratings of emotions and pain demonstrated that the 64 film clips were

    correctly classified (100% accuracy) into 8 different groups of EFEs, in accordance with the

    intended target emotions. Hence, in line with the literature, specificity and distinctness of the

investigated facial expressions were confirmed by both FACS coding and subjective ratings of naïve

    observers (Kappesser & Williams, 2002; Smith & Scott, 1997).

    Taking the magnitude of the subjective evaluations into account, the EFEs selected here

    were perceived as expressing a comparable level of intensity (range: 2.40 to 3.34/5). This is

    consistent with the instructions given to actors and the selection of clips to obtain a final set of

    stimuli displaying each target expression at a consistent intensity level across conditions. However,

    differences were observed with surprise rated as slightly more intense than sad faces, and happy

    faces rated as slightly more intense than sad and fear faces.


    FACS prototypes and the dynamical expression of emotions

    Results clearly demonstrated an excellent correspondence with FACS prototypes as shown

    in Table 1. A detailed comparison of the FACS responses observed here was further performed with

    those previously reported in a reference study using a similar methodology with actors performing

    short scenes with the instruction to feel or not feel the emotions expressed (Gosselin et al., 1995). In

    general, our stimuli were found to be highly consistent with the felt condition reported in that study.

    The concordance with FACS prototypes was also significantly better in the present set of stimuli.

More specifically, Gosselin et al. (1995) reported that felt happiness is characterized by target
AU 12 (lip corner puller; mean ± SEM rate: 0.92 ±0.16) and AU 6 (cheek raiser; 0.94 ±0.08), with

    the latter reduced to a rate of 0.77 ± 0.30 in the unfelt condition. Both AUs occurred in 100% of the

    happiness clips selected in our stimulus set.

    Similarly, the anger stimuli proposed here showed an overall higher rate of target AUs and a

    lower rate of non-target AUs suggesting better sensitivity and specificity than those of Gosselin et

    al. (1995). Only three target AUs reached a rate of 0.25 in that study, compared to six in the present

    stimulus set (rate >2/8). In their unfelt condition, Gosselin et al. further reported a decreased

    occurrence of target AU26 (jaw drop; unfelt: 0.24 ±0.20; felt: 0.49 ±0.30), an AU that we did not

    observe here. However, unfelt anger was also associated with an increase occurrence of target

    AU17 (chin raiser; 0.29 ±0.34; felt: 0.06 ±0.04), an AU that we did observe in 3/8 clips.

    Disgust stimuli also included more target AUs than those reported in the reference study.

    There was one exception with target AU26 (jaw drop) occurring less frequently in our stimuli (1/8)

    than in those of Gosselin et al. (1995) (rate = 0.40 ±0.35). Target AU17 (chin raiser) and non-target

    AU7 (lid tightener) were also observed in most clips described in the present study. In Gosselin et

    al. (1995), the occurrence of target AU17 increased in the unfelt condition while the rate of non-


    target AU7 increased in the felt condition. This mismatch between the target/non-target AUs and

    the unfelt/felt conditions makes those observations difficult to interpret.

    In Gosselin et al., felt fear was characterized mainly by 3 target AUs (4-brow lowerer: 0.33

±0.40; 5-upper lid raiser: 0.55 ±0.41; and 26-jaw drop: 0.75 ±0.17). Several additional target AUs

    occurred in less than 30% of their stimuli. In the proposed stimuli, 6 target AUs occurred in more

    than 50% of the fear clips as reported in Table 1. The only slight difference between the felt and

    unfelt fear in Gosselin et al. was observed with the non-target AU7 (lid tightener) occurring more in

    the felt (0.29 ±0.45) than unfelt condition (0.08 ±0.17). Again, the higher occurrence of a non-target

AU in the felt condition is difficult to interpret. Nevertheless, this non-target AU (AU7) also occurred

    in 2/8 cases in our study, a rate consistent with the felt condition of the reference study.

    Felt surprise was characterized by two target AUs (5-upper lid raiser: 0.76 ±0.33; and 26-

    jaw drop: 0.72 ±0.41), while other main target AUs occurred rarely (AU1-inner brow raiser and

    AU2-outer brow raiser: rate < 0.20) in the stimuli of Gosselin et al. (1995). Again, our stimuli

    provided a better match with the prototypes with four target AUs (1, 2, 5, and 26) occurring in all or

    almost all surprise clips (Table 1). Consistent with the felt emotions reported by Gosselin et al.,

    target AUs 5 and 26 and the non-target AU25 (lip part) were highly frequent.

    Finally, our sadness stimuli also demonstrated a better concordance with the prototypes than

    those of Gosselin et al. (1995). The expression of felt sadness reported by Gosselin et al. (1995) was

    characterized mainly by 3 target AUs (1-inner brow raiser; 4-brow lowerer; and 26-jaw drop),

    although each occurred in less than 25% of cases. Several additional target AUs also occurred less

    frequently (AU15-lip corner depressor; AU17-chin raiser; and AU25-lips part; mean rate ≤ 0.17). In

    our stimulus set, four target AUs (1, 4, 15, and 17) were observed in 4/8 or more clips and three

    more (25, 26 and 6) occurred at a lower rate (1-3/8). The target AUs observed here included those

    that were only, or mainly, present in the felt condition (AUs 1, 4, 25, and 26) in Gosselin et al.


    (1995). The only minor difference with felt sadness is our observation of AU15 (lip corner

    depressor), which was found mainly in the unfelt sadness condition (0.34 ±0.23) in the reference

    study.

    It should be noted that the increased occurrence of some AUs in the unfelt condition

    reported by Gosselin et al. (1995) (e.g. AU17 in anger and disgust) may be confounded with

    differences in the intensity of the expression (i.e. possibly stronger in the unfelt conditions) rather

    than the genuineness of the expression (e.g. Hill & Craig, 2002). Therefore, it is possible that those

    cases in the present stimuli simply reflected stronger genuine expressions. With a few minor

    exceptions, the detailed comparison of the proposed stimulus set with the FACS prototypes and

with the observations of Gosselin et al. (1995) in the felt emotion condition supports the validity of

    the present stimulus set as representative of the genuine and prototypical expression of felt

    emotions.

    Perceiving dynamical expressions of emotions and pain

    The discrimination of the different emotions and pain based on the overall subjective ratings

    of the different emotion/pain categories led to an overall perfect classification rate (cluster analysis).

    This means that each stimulus received its highest mean rating (across observers) for the target

    emotion/pain category. Global correct recognition rates were well above chance, and in most cases

    higher than 85%, while correct rejection rates were always above 95%, consistent with the excellent

sensitivity and specificity of the stimuli. Traces of non-target emotions occasionally led to
misattribution by some observers, mainly in pain and fear clips (both recognition rates of 74.2%).

    Furthermore, the range of correct recognition rates (74% to 100%; see Hit rate in Table 2) was

    comparable to the recognition rates reported for the original set of stimuli used by Ekman & Friesen

    (1976) to develop the FACS coding system and to determine the prototypical facial expressions of


    basic emotions. Similar recognition rates are also reported by Kappesser & Williams (2002) in

health care professionals identifying pain (hit rate = 70%) and negative emotions (70-97%) from

    photographs of facial expressions based on FACS prototypes.

    The data also suggests that participants perceived some ambiguity or a mixture of several

    emotions in some cases. Errors of attribution and mixed emotions were observed mainly in fear and

pain. Traces of surprise were reported in fear expressions, which sometimes led to misattribution

    (Table 2). Previous studies have reported some confusion between fear and surprise when expressed

    on human faces in some cultures (Ekman, Friesen, & Ellsworth, 1972) as well as by a model

    observer classifying affective faces (Smith, Cottrell, Gosselin, & Schyns, 2005). This is consistent

    with the similarity of the facial display associated with fear and surprise as demonstrated by the

    overlap of target AUs (1, 2, 5, and 26) involved in those emotions (Table 1). Based on the

prototypical AUs and the observed FACS results, AU4 (brow lowerer) may be the most distinctive

    feature between fear and surprise. In addition, AU25 (lip part) was observed in both fear and

    surprise although it constitutes a prototypical AU only in fear. The high rate of AU25 in surprise

    clips might explain the detection of mixed emotions and/or misattributions observed in those

    categories.

    Similarly, pain faces contained traces of disgust, fear and to a lesser extent surprise, and

    were occasionally misattributed to these non-target categories (Table 2). In the present stimulus set,

    most of those expressions contained AU25 (lip part), a target AU for pain, disgust, and fear

    according to the prototypes. Additional AUs commonly observed in both pain and disgust included

    AUs 4, 6, 7, 9, 10, and 20. Those AUs are part of the prototypes for pain expression but only AUs 9

    and 10 are part of the prototypes for disgust expression. However, it is interesting to note that in

    spite of the presence of several non-target AUs, disgust faces were well discriminated. This is

    possibly due to the presence of target AU15 (lip corner depressor), a feature that was only observed


    in disgust (and sadness) and may be essential to discriminate disgust from pain and fear. The

    circumstances for more ambiguous (or mixed) expression and/or perception of certain emotional

    facial expressions should be further investigated in future research. Recognition studies using

    random masking of parts of the facial displays (Gosselin & Schyns, 2001) might contribute to the

    identification of features that are necessary and sufficient for the adequate discrimination of fear

    and surprise.

    Valence and arousal

    In line with the literature, the investigated expressions can be divided into pleasant

    (happiness), neutral (surprise, neutral) and unpleasant (pain, anger, fear, disgust, sadness) facial

expressions according to their valence (e.g. Feldman Barrett & Russell, 1998). Within the cluster of
unpleasantly rated facial displays, pain was perceived as the most unpleasant. These findings are in line

    with the bio-social significance of pain (Williams, 2002). Moreover, subjects also judged that

    disgust faces were as unpleasant as pain faces and more unpleasant than fear faces. This may reflect

    the functional role of pain and disgust expression in signalling immediate noxious events (tissue

    damage or food poisoning) resulting in an actual threat to the integrity of the body (e.g. Wicker et

    al., 2003). This clearly contrasts with fearful and angry faces that may signal impending threats

    rather than actual noxious events.

    With respect to arousal, sad and neutral faces were perceived as least arousing in contrast to

all other EFEs, in accordance with factor-analytic findings (e.g. Feldman Barrett & Russell, 1998).
Again, pain clips yielded the highest arousal ratings, although these did not differ significantly from facial

    expressions of anger and surprise. This may again reflect a fundamental functional role of pain

    expression in communicating actual tissue damage.


    Sex differences

    While the observer’s sex did not show any influence on the ratings, the actor’s sex seemed

    to modulate the participant’s judgements in some cases. This may reflect a perceptual-cognitive

    effect in the observer rather than differences in expression as the FACS results did not show any

    significant effect of the sex of the actor. More traces of non-target emotions and higher ratings of

arousal were given to female expressions in general. Moreover, females expressing surprise were

    perceived as being slightly more pleasant than male actors. However, taking the intensity ratings

    into account, judgements did not seem to be strongly influenced by gender stereotypes as suggested

    by previous research (Algoe, Buswell, & DeLamater, 2000; Hess, Adams & Kleck, 2004; Plant,

    Kling, & Smith, 2004).

    Some limitations and future directions

    As a final note, some limitations have to be underscored. Although the film clips of EFEs

    corresponded generally well with the FACS prototypes, fear and disgust faces may not be optimal.

    The imperfect match with FACS prototypes might have affected the recognition of fear faces.

    However, the facial expression of disgust was well recognized in spite of the more complex facial

    configuration observed in the present stimulus set compared to the FACS prototypes. Variability

    between actors in the FACS results and between the present results and the previously reported

    prototypes may reflect differences in the specific situations evoked to elicit the emotion or

    individual differences in expressive style. Those differences also occur in naturalistic contexts

    (Smith & Scott, 1997) and are consistent with the ecological validity of the present stimulus set.

Future studies may examine in more detail the symmetry and the smoothness of the full temporal

    unfolding of the expression (including the offset) as those factors may be affected by the

    genuineness of the expression (e.g. Craig et al., 1991; Gosselin, Kirouac, & Dore, 1995). Those


    factors were not assessed here as they are typically undetected by the observers (Gosselin et al.,

    1995; Hadjistavropoulos, Craig, Hadjistavropoulos, & Poole, 1996). Therefore, although the stimuli

    may not be “pure” emotion/pain expressions and might include some spatio-temporal irregularities,

    the possible presence of those features did not compromise the ability of normal volunteers to

    recognize the expressions. The development of pattern-recognition algorithms for the analysis of

    facial expression (Hammal, Eveno, Caplier & Coulon, 2006) will likely contribute to future

developments in this field by allowing for an unbiased and automated analysis of dynamical facial

    expression including indices of asymmetries and precise spatio-temporal characterization. Random

    spatio-temporal masking procedures (Gosselin & Schyns, 2001) may also contribute to the

identification of facial features necessary and sufficient for the correct discrimination of felt vs.

    unfelt emotions and associated with the concealment of felt emotions or pain.

    Finally, one has to emphasize that this set of stimuli is consistent with prototypical facial

    expressions but not exhaustive, and that perceptual data were obtained in a small homogeneous

    group of observers. Spontaneous real-life expressions are of course not restricted to such small sets

    and both the expressions and their perception by external observers are also determined by

    individual factors, the social surrounding and the cultural background. Furthermore, since the

present stimulus set is based on Western prototypes, the generalization to other cultural contexts might

    be limited. To achieve a more comprehensive and valid investigation of facial expressions of

    different emotional states, considerable work remains to be done. In particular, validated sets of

more subtle and mixed emotional expressions, including the social emotions, must be developed.

    Studies must also be performed in larger populations to investigate more specifically the effects of

    individual characteristics (e.g. sex) of the observers.


    Conclusion

    We present a set of dynamic, prototypical facial expressions of the six basic emotions, pain

    and a neutral display, which can be reliably recognized and distinguished by normal individuals.

    Consistent with previous studies, results confirmed the discrimination of facial expressions of basic

    emotions. Importantly, the results further demonstrate the distinctiveness of pain facial displays and

    the reliable discrimination of pain faces from those of basic emotions. For a comparable intensity of

    expression, pain was also perceived as the most unpleasant and arousing, possibly reflecting its

    higher bio-psychosocial significance. This standardized set of stimuli, including the comprehensive

    rating data and FACS coding, can be widely used in cognitive, social, clinical, and computer

    science/neuroscience studies on facial expressions and their social communicative functions.


    References

    Algoe, S. B., Buswell, B. N., & DeLamater, J. D. (2000). Gender and Job Status as Contextual Cues

    for the Interpretation of Facial Expression of Emotion - Statistical Data Included. Sex roles,

    42(3/4), 183-208.

    Ambadar, Z., Schooler, J., & Cohn, J. (2005). Deciphering the enigmatic face: the importance of

    facial dynamics in interpreting subtle facial expressions. Psychological Science, 16(5), 403-

    410.

    Botvinick, M., Jha, A. P., Bylsma, L. M., Fabian, S. A., Solomon, P. E., & Prkachin, K. M. (2005).

    Viewing facial expressions of pain engages cortical areas involved in the direct experience

    of pain. Neuroimage, 25(1), 312-319.

    Carr, L., Iacoboni, M., Dubeau, M. C., Mazziotta, J. C., & Lenzi, G. L. (2003). Neural mechanisms

    of empathy in humans: a relay from neural systems for imitation to limbic areas.

    Proceedings of the National Academy of Sciences of the U S A, 100(9), 5497-5502.

    Craig, K. D., Hyde, S. A., & Patrick, C. J. (1991). Genuine, suppressed and faked facial behavior

    during exacerbation of chronic low back pain. Pain, 46(2), 161-171.

    Craig, K. D., Prkachin, K. M., & Grunau, R. V. E. (2001). The facial expression of pain. In D. C.

    Turk & R. Melzack (Eds.), Handbook of Pain Assessment (Vol. 2nd ed, pp. 153-169). New

    York: Guilford Press.

    Darwin, C. (1872). The expression of the emotions in man and animals. London: Albemarle.

    Ekman, P., Friesen, W. V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for

    research and an integration of findings. Oxford: Pergamon Press.

Ekman, P., & Friesen, W. V. (1975). Unmasking the face. Englewood Cliffs, NJ, USA: Prentice

    Hall.


    Ekman, P., & Friesen, W. V. (1976). Pictures of Facial Affect. Palo Alto (CA): Consulting

    Psychologist Press.

    Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto (CA): Consulting

    Psychologist Press.

Ekman, P., Friesen, W. V., & Ellsworth, P. (1982). Research foundations. In P. Ekman (Ed.),
Emotions in the Human Face. London: Cambridge University Press.

    Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial Action Coding System (FACS). Salt Lake

    City: A Human Face.

    Feldman Barrett, L. F., & Russell, J. A. (1998). Independence and bipolarity in the structure of

    current affect. Journal of Personality and Social Psychology, 74, 967-984.

    Gosselin, P., Kirouac, G., & Dore, F. Y. (1995). Components and recognition of facial expression in

    the communication of emotion by actors. Journal of Personality and Social Psychology, 68,

    83-96.

    Gosselin, F. & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in

    recognition. Vision Research, 41, 2261-2271.

    Hadjistavropoulos, H. D., Craig, K. D., Hadjistavropoulos, T., & Poole, G. D. (1996). Subjective

    judgments of deception in pain expression: accuracy and errors. Pain, 65(2-3), 251-258.

Hammal, Z., Eveno, N., Caplier, A., & Coulon, P.-Y. (2006). Parametric models for facial features

segmentation. Signal Processing, 86, 399-413.

    Harwood, N., Hall, L., & Shinkfield, A. (1999). Recognition of facial emotional expressions from

    moving and static displays by individuals with mental retardation. American Journal on

    Mental Retardation, 104(3), 270-278.

    Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2002). Human neural systems for face recognition

    and social communication. Biological Psychiatry, 51(1), 59-67.


Hess, U., Adams, R. B., Jr., & Kleck, R. E. (2004). Facial appearance, gender, and emotion
expression. Emotion, 4, 378-388.

    Hill, M.L. & Craig, K.D. (2002). Detecting deception in pain expressions: The structure of genuine

    and deceptive facial displays. Pain, 98, 135-144.

    Kappesser, J., & Williams, A. C. (2002). Pain and negative emotions in the face: judgements by

    health care professionals. Pain, 99(1-2), 197-206.

    Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural

    pathways are involved in the recognition of emotion in static and dynamic facial

    expressions. Neuroimage, 18(1), 156-168.

    LaBar, K., Crupain, M., Voyvodic, J., & McCarthy, G. (2003). Dynamic perception of facial affect

    and identity in the human brain. Cerebral Cortex, 13(10), 1023-1033.

    LaChapelle, D. L., Hadjistavropoulos, T., & Craig, K. D. (1999). Pain measurement in persons with

    intellectual disabilities. Clinical Journal of Pain, 15(1), 13-23.

    Lang, P. J., Greenwald, M. K., Bradley, M. M., & Hamm, A. O. (1993). Looking at pictures:

    affective, facial, visceral, and behavioral reactions. Psychophysiology, 30(3), 261-273.

    Lievin, M., & Luthon, F. (2004). Nonlinear color space and spatiotemporal MRF for hierarchical

    segmentation of face features in video. IEEE Transactions on Image Processing, 13(1), 63-

    71.

    O'Toole, A. J., Roark, D. A., & Abdi, H. (2002). Recognizing moving faces: a psychological and

    neural synthesis. Trends in Cognitive Science, 6(6), 261-266.

    Plant, E. A., Kling, K. C., & Smith, G. L. (2004). The influence of gender and social role on the

    interpretation of facial expressions. Sex roles, 51 (3/4), 187-196.

    Price, D. D. (2000). Psychological and neural mechanisms of the affective dimension of pain.

    Science, 288(5472), 1769-1772.


    Prkachin, K. M. (1992). The consistency of facial expressions of pain: a comparison across

    modalities. Pain, 51(3), 297-306.

    Roark, D., Barrett, S. E., Spence, M. D., Abdi, H., & O'Toole, A. J. (2003). Psychological and

    neural perspectives on the role of facial motion in face recognition. Behavioral and

    Cognitive Neuroscience Reviews, 2(1), 15-46.

Russell, J. A., & Fernandez-Dols, J.-M. (1997). The Psychology of Facial Expression. New York:
Cambridge University Press.

    Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural

    activity in response to dynamic facial expressions of emotion: an fMRI study. Cognitive

    Brain Research, 20(1), 81-91.

    Simon, D., Craig, K. D., Miltner, W. H. R., & Rainville, P. (in press). Brain responses to dynamic

    facial expressions of pain. Pain.

Smith, C. A., & Scott, H. S. (1997). A componential approach to the meaning of facial expressions.
In J. A. Russell & J.-M. Fernandez-Dols (Eds.), The Psychology of Facial Expression (pp.

    229-254). New York: Cambridge University Press.

    Smith, M. L., Cottrell, G. W., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding

    facial expressions. Psychological Science, 16(3), 184-189.

    Wicker, B., Keysers, C., Plailly, J., Royet, J. P., Gallese, V., & Rizzolatti, G. (2003). Both of us

    disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron,

    40(3), 655-664.

    Williams, A. C. (2002). Facial expression of pain: an evolutionary account. Behavioral and Brain

    Sciences, 25(4), 439-488.

    Zald, D. H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain

    Research Reviews, 41(1), 88-123.


    Appendix

Instructions to the actors: "We will record movie clips while you express different basic emotions and pain at different levels of intensity. If you want, you may also include the interjection "Ah!". This guide will help you generate vivid mental images of these different events. Of course, the instructions are only suggestions; please feel free to use appropriate personal situations to facilitate the production of the target expression."

    Emotion Proposed vignettes

Surprise I. Slightly surprised: "It's early October. You step out of the door and discover that it snowed overnight and everything outside is completely white."

II. Surprised: "A friend is back from his trip to Australia and tells you that he met your former classmate on a hike through the outback."

III. Extremely surprised: "You meet a woman on the street who you had been told was killed in a car accident. Later you find out that it was her twin sister."

Fear I. Slightly frightened: "After watching a horror movie you hear strange noises everywhere in your flat."

II. Frightened: "Imagine you walk home alone at night. You have to walk through a very dark alley. You hear footsteps following you."

III. Terrified: "You are climbing a mountain above a very high cliff with a river below. You are almost at the top when your foot slips and you slide down towards the cliff. Just as you go over the edge, you grab onto a small tree with one hand. Your whole body is hanging over the cliff and you are holding on with only one hand. Your friend is coming to save you, but your arm is giving out and you are about to let go and fall. Your life is hanging by a thread."

Happiness I. Slightly happy: "You are walking through the park. It's a nice sunny day. You sit down on a park bench. Right beside you, two puppies are playing in the meadow. They are very cute and move clumsily. You enjoy watching them play."

II. Happy: "You walk along the street downtown. You have to stop at a red traffic light. While waiting there, you spot a good friend on the other side of the road. Imagine the moment, after your initial surprise has passed, when you and your friend look each other in the eyes."

III. Very happy: "You are having a fun party with close friends."

Disgust I. Slightly disgusted: "You are having a mug of coffee and notice, after taking a sip, that the cream in your coffee is curdled."

II. Disgusted: "You take a walk in the park and step in fresh dog droppings."

III. Extremely disgusted: "You are on your way to the restroom. When you enter the room, you find that the toilet has completely overflowed."

Anger I. Annoyed: "You have an appointment with a friend who is already 30 minutes late and still not in sight."

II. Angry: "After a long day of work you walk home. You are really stressed because you have to carry two heavy bags of vegetables that you bought at the market. Suddenly, a group of teenagers comes up from behind, racing their bicycles on the sidewalk, and one of them hits one of your bags, which almost falls."


III. Furious: "You are on your way to an important meeting wearing your best clothes, when a car speeds through a puddle of dirty water that splashes all over your clothes." or "Imagine your cat or dog is killed by the neighbour's dog."

Sadness I. Saddened: "A very close friend whom you haven't seen for a very long time is going to visit you tonight. You are really looking forward to it. But then your friend calls and tells you that his car broke down and he has to cancel his visit."

II. Sad: "You are in love with a wonderful person. But then this person tells you that he/she will never feel the same for you."

III. Very sad / on the verge of tears / crying: "A close friend or relative dies."

Pain I. Barely painful: "Imagine a mild pinprick!"

II. Slightly painful: "You scratch your finger slightly on ingrain wallpaper."

III. Painful: "You catch your finger in a drawer."

IV. Very painful: "Imagine you cut your finger."


Table 1. Occurrence rate of AUs relative to the prototypical expression of basic emotions and pain

Occurrence rate (number of actors out of 8) by target expression:

AU  Descriptor                   Happiness  Anger   Disgust  Fear    Surprise  Sadness  Pain
1   Inner brow raiser            1/8        0/8     0/8      4/8 a   7/8 a     6/8 a    2/8
2   Outer brow raiser            1/8        0/8     0/8      3/8     8/8 a     2/8      0/8
4   Brow lowerer                 0/8        6/8 a   7/8 a    5/8 a   0/8       7/8 a    8/8 a
5   Upper lid raiser             0/8        3/8     0/8      8/8 a   5/8 a     0/8      0/8
6   Cheek raiser                 8/8 a      1/8     4/8 a    1/8     0/8       1/8      8/8 a
7   Lid tightener                3/8        5/8 a   7/8 a    2/8     0/8       2/8      8/8 a
9   Nose wrinkler                0/8        1/8     3/8      0/8     0/8       0/8      3/8
10  Upper lip raiser             0/8        1/8     7/8 a    1/8     0/8       3/8      8/8 a
11  Nasolabial furrow deepener   0/8        0/8     0/8      0/8     0/8       0/8      0/8
12  Lip corner puller            8/8 a      0/8     0/8      0/8     0/8       0/8      3/8
15  Lip corner depressor         0/8        0/8     8/8 a    0/8     0/8       8/8 a    0/8
16  Lower lip depressor          0/8        0/8     2/8      4/8 a   0/8       0/8      3/8
17  Chin raiser                  0/8        3/8     7/8 a    0/8     0/8       7/8 a    1/8
20  Lip stretcher                3/8        0/8     5/8 a    2/8     0/8       2/8      8/8 a
22  Lip funneler                 0/8        0/8     0/8      0/8     1/8       0/8      0/8
23  Lip tightener                0/8        6/8 a   1/8      0/8     0/8       0/8      0/8
24  Lip pressor                  0/8        3/8     0/8      0/8     0/8       0/8      0/8
25  Lips part                    4/8 a      1/8     7/8 a    8/8 a   7/8 a     3/8      8/8 a
26  Jaw drop                     1/8        0/8     1/8      5/8 a   6/8 a     2/8      0/8
27  Mouth stretch                0/8        0/8     0/8      0/8     1/8       0/8      1/8

Target AUs per clip b            2.00       3.38    4.38     4.63    3.50      4.25     6.88
  (SEM)                          (0.00)     (0.53)  (0.32)   (0.42)  (0.27)    (0.49)   (0.30)
% of observed AUs that
  were target AUs c              59.2%      89.6%   59.7%    85.4%   81.3%     83.8%    92.3%

Note. Target AUs are the AUs that occur with the prototype of each expression or with one of its major variants [according to the FACS investigators' guide and Kappesser & Williams (2002)]; n/8: number of actors who activated a given AU with an intensity of at least B (B = slight evidence of facial action); a occurrence rate ≥ 4/8; b mean number (±SEM) of target AUs activated per clip in each condition; c percentage of the observed AUs that were part of the prototype or its major variants in each condition.
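The summary measures in Table 1 are simple tallies over the FACS codes of each clip: the occurrence rate of an AU across actors, the mean number of target AUs activated per clip, and the percentage of observed AUs that belong to the prototype or one of its major variants. The sketch below illustrates one way such tallies could be computed; it is not the authors' coding pipeline, and the example clip codes, the target-AU set, and the intensity threshold are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): summarizing FACS codes into the
# measures reported in Table 1. The clip codes, the target-AU set, and the
# intensity threshold below are invented placeholders.
from collections import defaultdict

# Hypothetical FACS codes: one dict per actor clip, mapping AU number to
# intensity letter (A = trace ... E = maximum).
clips = [
    {6: "C", 12: "D", 25: "B"},
    {6: "B", 12: "C"},
]
TARGET_AUS = {6, 12, 25, 26}   # assumed target AUs for one expression (placeholder)
MIN_INTENSITY = "B"            # count an AU only at intensity >= B (slight evidence)

def occurrence_rates(clips, min_intensity=MIN_INTENSITY):
    """Number of actors (out of len(clips)) showing each AU at >= min_intensity."""
    counts = defaultdict(int)
    for codes in clips:
        for au, intensity in codes.items():
            if intensity >= min_intensity:   # intensity letters compare alphabetically
                counts[au] += 1
    return {au: f"{n}/{len(clips)}" for au, n in sorted(counts.items())}

def target_au_stats(clips, targets=TARGET_AUS, min_intensity=MIN_INTENSITY):
    """Mean number of target AUs per clip and % of observed AUs that are targets."""
    per_clip = []
    observed, observed_targets = 0, 0
    for codes in clips:
        active = {au for au, i in codes.items() if i >= min_intensity}
        per_clip.append(len(active & targets))
        observed += len(active)
        observed_targets += len(active & targets)
    mean_targets = sum(per_clip) / len(per_clip)
    pct_targets = 100.0 * observed_targets / observed if observed else 0.0
    return mean_targets, pct_targets

print(occurrence_rates(clips))   # e.g. {6: '2/2', 12: '2/2', 25: '1/2'}
print(target_au_stats(clips))    # e.g. (2.5, 100.0)
```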


Table 2. Matrix of Subjects' Intensity Ratings, Sensitivity (Hit Rate) and Specificity (Correct Rejection Rate) by Emotion Category

Target        Expression perceived: mean intensity rating (SEM)
expression    Happiness     Anger         Disgust       Fear          Surprise      Sadness       Pain          Hit rate (%)  Correct rejection rate (%)
Neutral       0.06 (0.02)   0.06 (0.03)   0.06 (0.03)   0.03 (0.02)   0.02 (0.01)   0.14 (0.05)   0.01 (0.01)   71.67 (5.31)  100 (0.00)
Happiness     3.34* (0.14)  0.00 (0.00)   0.00 (0.00)   0.02 (0.02)   0.18 (0.07)   0.01 (0.01)   0.00 (0.00)   100 (0.00)    96.98 (1.62)
Anger         0.00 (0.00)   2.97* (0.18)  0.13 (0.07)   0.04 (0.03)   0.08 (0.04)   0.09 (0.06)   0.03 (0.02)   97.50 (1.22)  98.84 (0.42)
Disgust       0.04 (0.02)   0.02 (0.02)   2.84* (0.18)  0.32 (0.09)   0.20 (0.09)   0.13 (0.05)   0.43 (0.12)   85.83 (3.66)  95.58 (0.54)
Fear          0.00 (0.00)   0.08 (0.05)   0.43 (0.10)   2.48* (0.16)  1.08 (0.17)   0.08 (0.04)   0.38 (0.15)   74.17 (5.55)  96.69 (0.59)
Surprise      0.17 (0.08)   0.00 (0.00)   0.00 (0.00)   0.34 (0.19)   3.27* (0.14)  0.01 (0.01)   0.00 (0.00)   95.00 (1.67)  94.45 (0.96)
Sadness       0.00 (0.00)   0.08 (0.04)   0.24 (0.07)   0.16 (0.07)   0.11 (0.05)   2.40* (0.14)  0.34 (0.22)   87.50 (6.60)  96.59 (0.71)
Pain          0.11 (0.06)   0.03 (0.01)   0.62 (0.15)   0.53 (0.21)   0.43 (0.13)   0.27 (0.11)   2.68* (0.25)  74.17 (4.95)  96.18 (1.00)

Note. Values correspond to the mean (±SEM) intensity rating (ranging from 0 to 5), as well as the % (±SEM) correct recognition (hit rate) and % (±SEM) correct rejection of each expression category (Expression perceived) for each target expression condition, averaged across actors and observers. SEM reflects the variability between actors. * Mean rating of the target emotion is significantly higher than the mean rating on the other scales (Fisher's protected least significant difference test; all p's < .001).
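The hit rate (sensitivity) and correct rejection rate (specificity) reported in Table 2 can be obtained by tabulating, for each expression category, how often it was chosen when it was the target and how often it was not chosen when another expression was shown. The following sketch shows a generic computation of this kind; the judgment data and the assumption that each trial yields a single perceived category are illustrative only and do not reproduce the authors' rating procedure.

```python
# Minimal sketch (not the authors' analysis): hit rate (sensitivity) and
# correct-rejection rate (specificity) per expression category, computed from
# (target, perceived) pairs. The judgments below are invented for illustration.
judgments = [
    ("pain", "pain"), ("pain", "disgust"), ("fear", "fear"),
    ("fear", "surprise"), ("happiness", "happiness"), ("surprise", "surprise"),
]

def hit_and_rejection_rates(judgments):
    categories = {t for t, _ in judgments} | {p for _, p in judgments}
    rates = {}
    for cat in categories:
        targets = [(t, p) for t, p in judgments if t == cat]
        non_targets = [(t, p) for t, p in judgments if t != cat]
        hits = sum(1 for t, p in targets if p == cat)             # correctly recognized
        rejections = sum(1 for t, p in non_targets if p != cat)   # correctly not chosen
        rates[cat] = {
            "hit_rate_%": 100.0 * hits / len(targets) if targets else None,
            "correct_rejection_%": 100.0 * rejections / len(non_targets) if non_targets else None,
        }
    return rates

for category, r in hit_and_rejection_rates(judgments).items():
    print(category, r)
```

In the study itself, observers rated intensity on several scales per clip, so a real analysis would first derive a single perceived category per trial (or work directly with the ratings); the sketch only illustrates the tabulation step.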


Table 3. Mean (SEM) Ratings by Emotional Category for Male and Female Actors

                     Intensity of the           Valence of the             Arousal of the
                     target expression          target expression          target expression
Target expression    Male actors   Female       Male actors   Female       Male actors   Female
Neutral              0.06 (0.02)   0.05 (0.02)  0.02 (0.03)   -0.18 (0.09) -1.23 (0.38)  -1.18 (0.36)
Happiness            3.22 (0.15)   3.47 (0.13)  2.80 (0.16)   3.18 (0.12)  0.18 (0.44)   0.73 (0.43)
Anger                3.00 (0.18)   2.93 (0.21)  -1.57 (0.22)  -1.80 (0.21) 1.20 (0.37)   1.28 (0.30)
Disgust              2.78 (0.20)   2.90 (0.20)  -2.57 (0.17)  -2.65 (0.15) 0.73 (0.31)   1.23 (0.28)
Fear                 2.45 (0.20)   2.50 (0.18)  -2.23 (0.12)  -2.20 (0.14) 1.37 (0.29)   1.58 (0.30)
Surprise             3.37 (0.17)   3.17 (0.13)  -0.27 (0.12)  0.25 (0.16)  1.55 (0.35)   1.48 (0.34)
Sadness              2.42 (0.14)   2.38 (0.19)  -2.08 (0.18)  -2.30 (0.18) -0.83 (0.32)  -0.38 (0.41)
Pain                 2.68 (0.23)   2.67 (0.30)  -2.77 (0.19)  -3.00 (0.15) 2.25 (0.28)   2.03 (0.18)

Note: Intensity ratings ranged from 0 to 5. Valence and arousal ratings ranged from +4 to -4.
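Table 3 reports means with their standard errors (SEM = SD / sqrt(n)) separately for male and female actors. As a minimal sketch, using invented ratings rather than the study data, the grouping and SEM computation could look as follows.

```python
# Minimal sketch (not the authors' analysis): mean and SEM of ratings grouped
# by actor gender, as summarized in Table 3. The example ratings are invented.
import math
from collections import defaultdict

# (actor_gender, rating) pairs for one target expression and one rating scale
ratings = [("male", 3.1), ("male", 3.4), ("female", 3.6), ("female", 3.3)]

def mean_sem_by_group(pairs):
    groups = defaultdict(list)
    for gender, value in pairs:
        groups[gender].append(value)
    out = {}
    for gender, values in groups.items():
        n = len(values)
        mean = sum(values) / n
        # sample SD (n - 1 in the denominator); SEM = SD / sqrt(n)
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1)) if n > 1 else 0.0
        out[gender] = (round(mean, 2), round(sd / math.sqrt(n), 2))
    return out

print(mean_sem_by_group(ratings))   # {'male': (3.25, 0.15), 'female': (3.45, 0.15)}
```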


    Figure captions

Figure 1. Flow chart illustrating the method of stimulus production and validation.

Figure 2. Examples of four female and four male actors expressing the six basic emotions, pain, and a neutral expression. Five time points in the clip (1 ms, 250 ms, 500 ms, 750 ms, 1000 ms) are displayed.


    Figure 1


    Figure 2