Rigid motion studies in Whole-Part Task


    Abstract

In the real world it is more natural for us to view faces in rigid or non-rigid motion than as static images all the time. The aim of this research was to study whether rigid facial motion influences featural processing or whether faces are processed in a holistic manner, using the whole-part effect (WPE). 24 males and 24 females aged 19-35 years participated in this study. The results showed a non-significant difference between the rigid-facial motion group and the multi-static group, and a non-significant interaction between the stimulus groups and the type of trial (whole vs. part). However, scores on whole-based trials were significantly better than on part-based trials, and there was a significant difference in accuracy for recognizing the different internal facial features.


In the real world, we meet thousands of people over a lifetime. Researchers have long been intrigued by how humans recognize and remember the faces they have encountered and how they are able to differentiate one face from another. It is commonly agreed that faces are processed differently from objects (Boutet and Faubert, 2006; Piepers and Robbins, 2012; Tanaka and Farah, 1993), and it is known that, in neural encoding, faces are a special default class in the primate recognition system (Gauthier and Logothetis, 2000; Farah, Wilson, Drain and Tanaka, 1998). Objects can be identified in a part-based manner more easily than faces (Tanaka and Farah, 1993); this holds even for objects of expertise such as dogs, where dog experts showed no face-like processing for dogs (Robbins and McKone, 2007).

Lee et al. (2011, as cited in Xiao, Quinn, Ge and Lee, 2012) stated that three types of information are important for face recognition: featural information (the isolated parts of the face, i.e. nose, eyes and mouth), configural information (the spatial relations between the isolated facial features) and holistic information (both featural and configural processing combined); in Piepers and Robbins' (2012) review, faces tend to be viewed as a whole or gestalt. Holistic processing has been demonstrated in past research using different approaches, including the face-inversion effect (FIE), the composite face effect (CFE) and the whole-part effect (WPE) (Boutet and Faubert, 2006; Konar, 2012; Maurer, Le Grand and Mondloch, 2002; Riesenhuber, Jarudi, Gilad and Sinha, 2004; Wang, Li, Fang, Tian and Liu, 2012). However, most of this research has used only static images as stimuli.

More importantly, the faces we see in the real world are dynamic rather than static: they move in rigid motion (e.g. nodding and turning of the head) or in elastic motion, which involves alteration of the face shape, such as smiling or talking (Knappmeyer, Thornton and Bülthoff, 2003). Lander, Christie and Bruce (1999), Lander and Bruce (2000), Lander and Chuang (2005) and Pike, Kemp, Towell and Phillips (1997) found that moving faces are easier to recognize than static ones. Otsuka et al. (2009) also found the same effect in young infants, and Guellai, Coulon and Streri (2011) showed that the concurrent occurrence of speech sounds with rigid and non-rigid facial movements increases recognition of interactive faces at birth. Furthermore, O'Toole, Roark and Abdi (2002) stated that rigid facial motion aids in building up a 3D representation of a face.

Hill and Johnston (2001) and Lander et al. (1999) showed that participants were able to recognize faces in motion even when the faces were inverted. Motion was found to reduce the FIE and to act as a cue to face recognition in individuals with prosopagnosia, according to a study by Longmore and Tree (2013). These studies indirectly suggest that facial motion might influence featural processing. McKone (2010) stated that the FIE cannot tell us whether there are qualitative differences between face and object processing; hence, previous research using the FIE could not directly quantify holistic processing.

Young, Hellawell and Hay (1987) introduced the CFE technique for measuring face perception: participants scored significantly lower when the top half of one familiar face and the bottom half of another were aligned than when they were misaligned, showing that holistic processing is engaged when the two halves fuse into a new face. Interestingly, Xiao et al.'s (2012) study using the CFE method found that rigid facial motion influences featural but not holistic processing, a phenomenon not found with multi-static image stimuli; a similar result was found in Xiao et al.'s (2013) research on elastic motion and face recognition. McKone (2008) suggested that moving faces produce weaker effects on holistic processing than static faces, so recognition of moving faces tends to depend on featural processing, because motion alters the second-order configuration (the spacing between features).


Several researchers have taken the WPE as an alternative means of measuring holistic processing. Konar's (2012) experiment was adopted from Boutet and Faubert's (2006) research, which is analogous to Tanaka and Farah's (1993) whole-part task: participants were instructed to learn a full face and then respond to a full-face trial or a part-face trial, with face recognition accuracy as the measure. In the full-face trial, participants were shown one learned face and one foil face; the foil face had one internal feature (eyes, nose or mouth) that differed from the learned face. In the part-face trial, only the internal features of both the learned and foil faces were presented, i.e. two pairs of eyes, two noses or two mouths. In these studies, participants scored significantly better in the full-face trial than in the part-face trial, showing that faces are processed holistically: features are recognized better in the whole-face context than in isolation. It has been suggested that a new face is formed when a feature is added or altered within the learned face, making it easier to tell the two faces apart (Goffaux and Rossion, 2005; Piepers and Robbins, 2012). Moreover, performance was thought to be significantly better in the full-face trial than in the part-face trial because full faces were shown in the learning phase (Tulving and Thomson, 1973).

Furthermore, using whole-part trials, Joseph and Tanaka (2002) and Liu et al. (2013) identified the importance of the eyes and mouth in face recognition compared to the nose, with the eyes in the lead. It has been suggested that the eyes and mouth are essential for emotion and speech processing in a social context (Baron-Cohen, Wheelwright and Jolliffe, 1997).

This research examines how rigid facial motion is processed. Since Xiao et al.'s (2012) research found, using the CFE, that featural processing takes the lead, here we use the WPE to measure face processing with rigid facial motion, combining the methodologies of Konar (2012) and Xiao et al. (2012).

There were four hypotheses in this experiment: (1) there will be significant differences between the rigid-facial motion and multi-static groups in the whole-part task; (2) participants will score significantly higher in the whole-based (full-face) trial than in the part-based (part-face) trial; (3) there will be a significant main effect of the type of internal facial feature being measured: participants will score significantly better in eyes and mouth trials than in nose trials, with higher accuracy for eyes than for mouth trials; (4) there will be an interaction effect between group (rigid-facial motion vs. multi-static) and type of trial (whole vs. part), with a significant difference between the rigid-facial motion and multi-static groups in the part-based trials.

    Method

    Design

The methodological design is a mixed design with one between-subjects factor (stimulus group) and two within-subjects factors (type of facial feature and type of trial).

This experiment has three independent variables and one dependent variable. The first independent variable is the stimulus group, which has two levels (rigid-facial motion group and multi-static group): participants in the rigid-facial motion group viewed rigid facial motion as stimuli, while participants in the multi-static group viewed multi-static face images. The second independent variable is the type of trial, comprising whole-based and part-based trials; participants in both groups completed the whole-part task with full-face and part-face trials. The third independent variable is the facial feature (eyes, nose or mouth) that is altered from the learned face to create the foil face in whole and part trials. The dependent variable is operationally defined as face recognition accuracy in the whole-part task, measured as the percentage of correct responses.

    Participants


Participants (N = 48; 24 males and 24 females; M = 20.85 years, SD = 2.52; age range: 19-35) were recruited from universities in Malaysia through convenience sampling. All participants gave informed consent for this study.

    Materials

Images of 30 models (15 males and 15 females) were obtained from the FEI Face Database (a Brazilian face database). Each model has 10 images taken from multiple angles (0° to 180°) plus a full-frontal view (90°); all images were greyscale with a size of 640 × 480 pixels, and the models' expressions were neutral in all images. The 10 images from each model were used to create the familiarization stimulus for the rigid facial motion condition. The method for creating the familiarization stimulus was similar to that of Xiao et al. (2012): one profile-view image was shown only once and the other nine images twice, summing to 19 images per familiarization. The picture sequence was 1-2-3-4-5-6-7-8-9-10-9-8-7-6-5-4-3-2-1, forming a facial turning motion from 0° to 180° and back. Each image was shown for 80 ms with no interval between successive images, giving an overall presentation time of 19 images × 80 ms = 1520 ms.
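As an illustration (the paper does not include its stimulus scripts), such a familiarization sequence could be presented in PsychoPy roughly as follows. This is a minimal sketch: the image file names (model01_00.png ... model01_09.png) are hypothetical, and core.wait is used for brevity where a real script would lock timing to screen refreshes.

```python
# Minimal sketch of one rigid-motion familiarization sequence in PsychoPy.
# Assumption: hypothetical files model01_00.png ... model01_09.png hold the
# ten viewpoints (0 deg profile through 180 deg profile).
from psychopy import visual, core

win = visual.Window(size=(640, 480), color='grey', units='pix')

# Pre-load the ten viewpoint images.
images = [visual.ImageStim(win, image=f'model01_{i:02d}.png') for i in range(10)]

# Sequence 1-2-...-10-9-...-1: the endpoint image is shown once, the other
# nine twice, giving 19 frames in total.
order = list(range(10)) + list(range(8, -1, -1))

for idx in order:
    images[idx].draw()
    win.flip()
    core.wait(0.080)  # 80 ms per image, no gap -> 19 x 80 ms = 1520 ms

win.close()
```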

For the multi-static stimuli, the methodology of Xiao et al.'s (2012) Experiment 3 was adapted. The composition and sequence of each model's images were the same as for the rigid facial motion stimuli, but in the multi-static stimuli a 400 ms interval was inserted between successive images. The total duration of each multi-static stimulus was therefore 19 pictures × 80 ms + 18 intervals × 400 ms = 8720 ms. It has been suggested that apparent motion is removed by 400 ms intervals, so even though the images were displayed in the same order as in the rigid facial motion stimuli, they would be perceived as static images (Xiao et al., 2012). All rigid facial motion and multi-static stimuli were compiled into video format (.mpg) using Windows Movie Maker. Please refer to Appendix 1.
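The timing arithmetic for the two stimulus types (19 frames, 18 gaps) can be checked directly:

```python
# Presentation-time check for both stimulus types.
frames, frame_ms, isi_ms = 19, 80, 400

rigid_total = frames * frame_ms                     # 19 x 80 ms
static_total = rigid_total + (frames - 1) * isi_ms  # plus 18 x 400 ms ISIs

print(rigid_total)   # 1520 ms (rigid facial motion)
print(static_total)  # 8720 ms (multi-static)
```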


Konar's (2012) whole-part task was adapted as the testing stimulus. For the full-face trial, the 30 models' full-frontal images were processed so that either the eyes, nose or mouth was altered, using GIMP (GNU Image Manipulation Program). Each foil face had only one internal feature altered, replaced with the corresponding feature of another model. Among the 15 male models, 5 had their eyes replaced with another model's eyes, 5 had their noses replaced, and 5 had their mouths replaced; the same alterations were made for the 15 female models. For the part-face trial, the internal features of the 30 models' original full-frontal images and of the foil images were cropped and labeled as part-face stimuli. Please refer to Appendix 2.

The final compilation of all stimuli was done in the PsychoPy program, which recorded the accuracy of the matching (whole-part) task and saved it to a spreadsheet; only two options (forced choice) were given for responding to each whole-part task shown.
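For illustration, a single forced-choice trial could be implemented in PsychoPy along the following lines. This is a sketch under stated assumptions, not the authors' script: the stimulus files ('target.png', 'foil.png'), the output file 'results.csv' and the trial metadata are all hypothetical.

```python
# Sketch of one two-alternative forced-choice whole-part trial in PsychoPy.
import csv
import random
from psychopy import visual, event

win = visual.Window(size=(1024, 768), color='grey', units='pix')

# Randomize which side the learned (target) face appears on.
target_side = random.choice(['left', 'right'])
foil_side = 'right' if target_side == 'left' else 'left'
positions = {'left': (-200, 0), 'right': (200, 0)}

# Hypothetical stimulus files: the learned face/feature and the foil.
target = visual.ImageStim(win, image='target.png', pos=positions[target_side])
foil = visual.ImageStim(win, image='foil.png', pos=positions[foil_side])

target.draw()
foil.draw()
win.flip()

# Forced choice: only the left and right arrow keys are accepted.
keys = event.waitKeys(keyList=['left', 'right'])
correct = int(keys[0] == target_side)

# Append one row per trial; the columns are hypothetical.
with open('results.csv', 'a', newline='') as f:
    csv.writer(f).writerow(['participant01', 'whole', 'eyes', correct])

win.close()
```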

    Procedure

The experiment was run at the Nottingham Malaysia Campus as well as other universities, and participants were recruited through convenience sampling. Participants were assigned to either the rigid-facial motion group or the multi-static group. They were instructed to complete the whole-part task as quickly as possible, to prevent overthinking from affecting the results. The PsychoPy program was set up with a total of 60 whole-part matching tasks in each group: 30 full-face trials and 30 part-face trials, so that participants completed both the full-face and part-face trials for the same models. Before running the actual test, participants were given a practice run to ensure they understood the task. Participants responded to the whole-part tasks through forced choice (only two options given) by pressing the left or right arrow key on the keyboard. Accuracy on the whole-part tasks was documented and tabulated.

Results

Following the design above, the first independent variable was the stimulus group (rigid-facial motion vs. multi-static), the second was the type of trial (whole-based trial and part-based trial), and the third independent variable, type of internal facial feature, encompassed three levels: eyes, nose and mouth.

Levene's test indicated that equality of error variances could be assumed for the part-based trials with all features and for the whole-based trial with the mouth feature (p > .05), but not for the whole-based trial with the eyes and nose features (p < .05) (refer to Appendix 4).

A 2 (stimulus group: rigid-facial motion vs. multi-static) × 2 (type of trial: whole-based vs. part-based) × 3 (type of feature: eyes vs. nose vs. mouth) mixed ANOVA showed a non-significant main effect of stimulus group, F(1, 46) = 2.802, p > .05. Participants in the rigid-facial motion group did not score significantly better than participants in the multi-static group on the whole-part task. This result fails to support our first hypothesis, which stated that there would be a significant difference between the rigid-facial motion group and the multi-static group on the whole-part task.
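The omnibus test is reported from SPSS-style output. For illustration, the simpler 2 (group) × 2 (trial type) portion of the design could be reproduced in Python with the pingouin package (whose mixed ANOVA handles one within-subjects factor); the data file and column names here are assumptions, not from the paper.

```python
# Illustrative mixed ANOVA on the reduced 2 (group) x 2 (trial type) design.
# Assumption: accuracy_long.csv has columns subject, group, trial_type, accuracy.
import pandas as pd
import pingouin as pg

df = pd.read_csv('accuracy_long.csv')

aov = pg.mixed_anova(data=df, dv='accuracy', within='trial_type',
                     subject='subject', between='group')
print(aov)

# Bonferroni-adjusted pairwise follow-ups, analogous to the post hoc tests
# reported below (pg.pairwise_ttests in older pingouin versions).
posthoc = pg.pairwise_tests(data=df, dv='accuracy', within='trial_type',
                            subject='subject', between='group', padjust='bonf')
print(posthoc)
```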

There was a significant main effect of type of trial, F(1, 46) = 25.675, p < .001 (α = .025). Participants from both stimulus groups scored significantly better in the whole-based trial (M = 71.04%, SD = 9.73) than in the part-based trial (M = 60.49%, SD = 9.94). The second hypothesis was supported.

There was a significant main effect of feature type on accuracy, F(2, 92) = 18.752, p < .001. Participants from both conditions scored significantly differently when recognizing the eyes (M = 72.50%, SD = 11.30), nose (M = 59.69%, SD = 9.86) and mouth (M = 65.21%, SD = 11.53). A post hoc test showed that participants from both conditions scored significantly better at recognizing the eyes than the nose (p = .010) and the mouth (p = .002); there was no significant difference between recognizing the mouth and the nose (p > .05) (refer to Appendix 6). The third hypothesis was thus partially supported: the eyes had the highest accuracy scores of the three features, but accuracy for the mouth was expected to be significantly better than for the nose, and the results showed otherwise.


There was no significant interaction between type of trial and group, F(1, 46) = .023, p > .05. As Fig. 1 shows, participants from the two conditions did not score significantly differently in either trial type, even though the rigid-facial motion group's participants appeared to score better in both trials than the multi-static group. The fourth hypothesis, that participants in the rigid-facial motion group would score significantly better than the multi-static group in the part-based trial, was therefore not supported.

Fig. 1. The rigid-facial motion and multi-static groups did not score significantly differently in either whole-part trial type.

There was a significant interaction between type of trial and accuracy in recognizing each type of feature, F(1.678, 77.199) = 6.929, p < .001. Scores in the two trial types were significantly different, with the whole-based trial again producing better scores than the part-based trial. Refer to Appendix 7.

There was a significant interaction between stimulus group, type of trial and accuracy for the different features in the recognition task, F(1.678, 77.199) = 3.679, p = .037.


Fig. 2. Accuracy scores of participants from both conditions (rigid-facial motion and multi-static) in both whole-part trial types, broken down by internal feature (eyes, nose and mouth).

The three-way interaction indicates whether the trial type × feature interaction is the same or different across the rigid-facial motion and multi-static groups. A further analysis using two one-way ANOVAs examined the group differences within each trial type.

The first one-way ANOVA showed a non-significant difference between the two groups in accuracy at recognizing internal features in the whole-based trial, F(2, 92) = 1.479, p > .05.

The second one-way ANOVA showed a significant difference between the two groups in accuracy at recognizing internal features in the part-based trial, F(2, 92) = 3.951, p = .023. The rigid-facial motion group scored significantly higher on mouth-feature accuracy (M = 62.5%, SE = 2.7%) than the multi-static group (M = 50.0%, SE = 2.7%). Refer to Appendix 9.


Fig. 3. Participants' accuracy scores in the part-based trial for recognizing the internal features.

    Discussion

The aim of this research was to examine how rigid facial motion is processed in terms of featural and holistic processing. The first hypothesis was not supported: there were no significant differences between participants in the rigid-facial motion group and the multi-static group. We had expected the rigid-facial motion group to score significantly higher, because the advantage of viewing motion should raise scores in the part-based trial and thereby the overall accuracy on the whole-part task; Hill and Johnston (2001), Lander et al. (1999) and Xiao et al. (2012) suggested that recognizing faces in motion relies on featural processing. Another possible reason is the amount of exposure to the familiarization stimuli: participants in the rigid-facial motion condition had a shorter exposure than the multi-static group, which received an extra 400 ms interval between images, although it has been suggested that when the stimulus onset asynchrony (SOA) exceeds 400 ms the visual attention cue is inhibited (Klein, 2000, as cited in Xiao et al., 2012).


The second hypothesis was supported: there was a significant difference between the whole-based and part-based trials. As expected, mean accuracy in the whole-based trial was significantly higher than in the part-based trial, in line with other researchers' results (Boutet and Faubert, 2006; Konar, 2012; Tanaka and Farah, 1993). This is consistent with Tulving and Thomson's (1973) principle that, because full faces were shown at learning, participants scored significantly better in the full-face trial than in the part-face trial. This indicates the presence of holistic processing.

The third hypothesis was partially supported: there was a significant difference in accuracy for recognizing the eyes, nose and mouth. Past research (Joseph and Tanaka, 2002; Liu et al., 2013) has shown the eyes to be of primary importance in a social context; according to Baron-Cohen et al. (1997), basic and complex emotions are easier to recognize from the eyes than from the mouth, although basic emotions can be recognized from the mouth as well. Even though accuracy for the nose and the mouth did not differ significantly in our research, the graph shows that accuracy for the mouth was slightly higher. Accuracy for the nose was lowest, arguably because the nose is the least salient and informative inner feature and acts as a reference point from which the other inner features, the eyes and the mouth, can be optimally processed (Liu et al., 2013).

The fourth hypothesis, that participants in the rigid-facial motion group would score significantly better in the part-based trial than the multi-static group (which would indicate featural processing), was not supported. Although the graph shows the rigid-facial motion group scoring slightly better, the differences were too small to be significant. This may be due to errors in stimulus manipulation, or to the research being conducted in an unconducive environment.


Participants in the two groups did not score significantly differently in the whole-based trials for recognizing the internal features. Similarly, Xiao et al.'s (2012) results suggested that multi-static images shown without the frontal view are sufficient to form a full-frontal-view representation, and there were no significant differences between the two groups in their CFE. This again supports holistic processing.

Participants in the rigid-facial motion group scored significantly better than the multi-static group in overall accuracy on the part-based trial; a one-way ANOVA conducted for further analysis showed that the rigid-facial motion group scored significantly better only on the mouth-part task. Perhaps this is due to errors in the stimulus manipulation, or participants identified the mouth part accurately by chance; otherwise, featural processing might lead to higher accuracy in recognizing the mouth part with rigid-motion stimuli.

Even though the question of whether rigid facial motion influences featural processing remained inconclusive using the WPE, unlike the CFE, this study showed, like other whole-part task research, that faces tend to be recognized holistically: face parts are easier to recognize within a face gestalt than in isolation. Piepers and Robbins (2012) stated that rigid motion may assist holistic processing, allowing the face to be processed more effectively as a whole, owing to supplementary Gestalt grouping principles specific to moving stimuli; for instance, facial features seen in motion share a common fate when they move in the same direction at the same pace.

Further work is needed to confirm how rigid motion reveals information in holistic and featural processing. In future research, the external features in the whole-face trial could be removed to see whether they substantially affected the results collected here: Boutet and Faubert (2006) noted that external features do not affect the WPE, but in the review of several studies in Konar's (2012) research, external features were a factor affecting the WPE.


    References

Baron-Cohen, S., Wheelwright, S. & Jolliffe, T. (1997). Is there a "language of the eyes"? Evidence from normal adults, and adults with autism or Asperger syndrome. Visual Cognition, 4(3), 311-331.

Boutet, I. & Faubert, J. (2006). Recognition of faces and complex objects in younger and older adults. Memory & Cognition, 34(4), 854-864.

Farah, M., Wilson, K., Drain, M. & Tanaka, J. (1998). What is "special" about face perception? Psychological Review, 105(3), 482.

Gauthier, I. & Logothetis, N. (2000). Is face recognition not so unique after all? Cognitive Neuropsychology, 17(1-3), 125-142.

Goffaux, V. & Rossion, B. (2005). Faces are "spatial": Holistic perception of faces is subtended by low spatial frequencies. Journal of Vision, 5(8), 540.

Guellai, B., Coulon, M. & Streri, A. (2011). The role of motion and speech in face recognition at birth. Visual Cognition, 19(9), 1212-1233.

Hill, H. & Johnston, A. (2001). Categorizing sex and identity from the biological motion of faces. Current Biology, 11(11), 880-885.

Knappmeyer, B., Thornton, I. & Bülthoff, H. (2003). The use of facial motion and facial form during the processing of identity. Vision Research, 43(18), 1921-1936.

Konar, Y. (2012). Evaluation of holistic face processing. McMaster University Library.

Lander, K. & Chuang, L. (2005). Why are moving faces easier to recognize? Visual Cognition, 12(3), 429-442.

Lander, K., Christie, F. & Bruce, V. (1999). The role of movement in the recognition of famous faces. Memory & Cognition, 27(6), 974-985.

Liu, S., Anzures, G., Ge, L., Quinn, P., Pascalis, O., Slater, A., Tanaka, J. & Lee, K. (2012). Development of recognition of face parts from unfamiliar faces. Infant and Child Development.

Longmore, C. & Tree, J. (2013). Motion as a cue to face recognition: Evidence from congenital prosopagnosia. Neuropsychologia.

Maurer, D., Le Grand, R. & Mondloch, C. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6(6), 255-260.

McKone, E. (2008). Configural processing and face viewpoint. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 310.

McKone, E. (2010). Face and object recognition: How do they differ? Tutorials in Visual Cognition, 261-303.

O'Toole, A., Roark, D. & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6(6), 261-266.

Otsuka, Y., Konishi, Y., Kanazawa, S., Yamaguchi, M., Abdi, H. & O'Toole, A. (2009). Recognition of moving and static faces by young infants. Child Development, 80(4), 1259-1271.

Piepers, D. & Robbins, R. (2012). A review and clarification of the terms "holistic," "configural," and "relational" in the face perception literature. Frontiers in Psychology, 3.

Pike, G., Kemp, R., Towell, N. & Phillips, K. (1997). Recognizing moving faces: The relative contribution of motion and perspective view information. Visual Cognition, 4(4), 409-438.

Riesenhuber, M., Jarudi, I., Gilad, S. & Sinha, P. (2004). Face processing in humans is compatible with a simple shape-based model of vision. Proceedings of the Royal Society of London. Series B: Biological Sciences, 271(Suppl 6), 448-450.

Robbins, R. & McKone, E. (2007). No face-like processing for objects-of-expertise in three behavioural tasks. Cognition, 103(1), 34-79.

Tanaka, J. & Farah, M. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46(2), 225-245.

Tulving, E. & Thomson, D. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80(5), 352-373. Retrieved from http://alicekim.ca/9.ESP73.pdf [Accessed: 25 Nov 2013].

Wang, R., Li, J., Fang, H., Tian, M. & Liu, J. (2012). Individual differences in holistic processing predict face recognition ability. Psychological Science, 23(2), 169-177.

Xiao, N., Quinn, P., Ge, L. & Lee, K. (2012). Rigid facial motion influences featural, but not holistic, face processing. Vision Research, 57, 26-34.

Xiao, N., Quinn, P., Ge, L. & Lee, K. (2013). Elastic facial movement influences part-based but not holistic processing. American Psychological Association.

Young, A., Hellawell, D. & Hay, D. (1987). Configurational information in face perception. Perception, 16(6), 747-759.


    Appendix 1

Fig. 1. The upper image shows the sequence of the complete rigid-facial motion stimulus; the lower shows the multi-static images with 400 ms intervals. The first and last profile views of the model are identical, as a standardization of the motion format.


    Appendix 2

Fig. 2. The difference between the whole-based trial and the part-based trial that participants had to complete in the whole-part task.


Appendix 4

Levene's Test of Equality of Error Variances(a)

Measure                     F      df1   df2   Sig.
Whole-based trial: Eyes     5.855  1     46    .020
Whole-based trial: Nose     4.060  1     46    .050
Whole-based trial: Mouth     .223  1     46    .639
Part-based trial: Eyes       .575  1     46    .452
Part-based trial: Nose       .431  1     46    .515
Part-based trial: Mouth      .358  1     46    .553

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + Group
   Within Subjects Design: Types + Features + Types * Features


Appendix 5

Pairwise Comparisons
Measure: Correctness

(I) Facial Feature   (J) Facial Feature   Mean Diff. (I-J)   Std. Error   Sig.(b)   95% CI Lower(b)   95% CI Upper(b)
Eyes                 Nose                  .106               .034         .010       .021              .192
Eyes                 Mouth                 .119               .033         .002       .038              .200
Nose                 Eyes                 -.106               .034         .010      -.192             -.021
Nose                 Mouth                 .013               .026        1.000      -.053              .078
Mouth                Eyes                 -.119               .033         .002      -.200             -.038
Mouth                Nose                 -.013               .026        1.000      -.078              .053

Based on estimated marginal means.
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Bonferroni.


    Appendix 6


Appendix 7

Accuracy in recognizing internal features in both trials
Measure: Accuracy for both trials

Type               Feature   Mean   Std. Error   95% CI Lower   95% CI Upper
Whole-Based Trial  Eyes      .765   .019         .726           .803
Whole-Based Trial  Nose      .606   .025         .556           .657
Whole-Based Trial  Mouth     .756   .023         .709           .803
Part-Based Trial   Eyes      .681   .026         .629           .733
Part-Based Trial   Nose      .575   .024         .527           .623
Part-Based Trial   Mouth     .562   .019         .525           .600


Appendix 8

Accuracy for recognizing internal features in both trials for both groups
Measure: Accuracy

Group                Type               Feature   Mean   Std. Error   95% CI Lower   95% CI Upper
Rigid-Facial Motion  Whole-Based Trial  Eyes      .817   .027         .762           .872
Rigid-Facial Motion  Whole-Based Trial  Nose      .612   .035         .541           .684
Rigid-Facial Motion  Whole-Based Trial  Mouth     .762   .033         .696           .829
Rigid-Facial Motion  Part-Based Trial   Eyes      .671   .036         .597           .744
Rigid-Facial Motion  Part-Based Trial   Nose      .558   .034         .490           .626
Rigid-Facial Motion  Part-Based Trial   Mouth     .625   .027         .571           .679
Multi-Static         Whole-Based Trial  Eyes      .713   .027         .658           .767
Multi-Static         Whole-Based Trial  Nose      .600   .035         .529           .671
Multi-Static         Whole-Based Trial  Mouth     .750   .033         .683           .817
Multi-Static         Part-Based Trial   Eyes      .692   .036         .618           .765
Multi-Static         Part-Based Trial   Nose      .592   .034         .524           .660
Multi-Static         Part-Based Trial   Mouth     .500   .027         .446           .554


Appendix 9

Interaction between stimulus group and facial features (part-based trial)
Measure: Accuracy

Group                Facial Feature   Mean   Std. Error   95% CI Lower   95% CI Upper
Rigid-Facial Motion  Eyes             .671   .036         .597           .744
Rigid-Facial Motion  Nose             .558   .034         .490           .626
Rigid-Facial Motion  Mouth            .625   .027         .571           .679
Multi-Static         Eyes             .692   .036         .618           .765
Multi-Static         Nose             .592   .034         .524           .660
Multi-Static         Mouth            .500   .027         .446           .554