Face Recognition From Video Part (II). Advisor: Wei-Yang Lin. Presenter: C.J. Yang & S.C. Liang


Page 1: Face Recognition From Video Part (II)

Face Recognition From Video Part (II)

Advisor: Wei-Yang LinPresenter: C.J. Yang & S.C. Liang

Page 2: Face Recognition From Video Part (II)

Outline

Method (I): A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method [1]

Method (II): An Automatic Face Detection and Recognition System for Video Streams [4]

Conclusion

Page 3: Face Recognition From Video Part (II)

A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method

Islam, M.W.; Monwar, M.M.; Paul, P.P.; Rezaei, S.

IEEE Canadian Conference on Electrical and Computer Engineering, May 2006, pp. 2181-2185

Page 4: Face Recognition From Video Part (II)

Introduction

Real-time face recognition: face detection, then face recognition

Other approaches:

Most use intensity values

Most ignore the question of which features are important for classification and which are not

Proposed approach:

Use skin color: the majority of acquired images are colored, and skin color features are an important source of information for discriminating faces from the background

Use the eigenface approach: principal component analysis (PCA) of the facial images keeps only those features that are critical for face recognition. Advantages: speed, simplicity, learning capability, robustness to small changes in the face image

Page 5: Face Recognition From Video Part (II)

Method (I)

Pipeline: video sequences → face detection → face recognition → results

Real-time image acquisition using the MATLAB Image Acquisition Toolbox 1.1

Page 6: Face Recognition From Video Part (II)

Face Detection - Skin Color Model

Adaptable to people of different skin colors and to different lighting conditions

Skin colors of different people are very close, but they differ mainly in intensities

Page 7: Face Recognition From Video Part (II)

Face Detection - Skin Color Model (cont.)

[2]

[2] R.S. Feris, T. E. de Campos, and R. M. C. Junior, "Detection and tracking of facial features in video sequences," proceedings of the Mexican International Conference on Artificial Intelligence. Advances in Artificial Intelligence, pp. 127 - 135, 2000.

Selected skin-color region

Cluster in color space

Page 8: Face Recognition From Video Part (II)

Face Detection - Skin Color Model (cont.)

Chromatic colors are defined by a normalization process:

r = R / (R + G + B), g = G / (R + G + B)

The skin cluster in chromatic (r, g) space is modeled with a Gaussian N(m, C):

m = E{x}, where x = (r, g)^T

C = E{(x - m)(x - m)^T} = [ σ_rr  σ_rg ; σ_gr  σ_gg ]
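In NumPy, this normalization and Gaussian fit can be sketched as follows (the function names are illustrative, not from the paper):

```python
import numpy as np

def chromatic(rgb):
    """Normalize RGB pixels to chromatic (r, g) coordinates:
    r = R/(R+G+B), g = G/(R+G+B); b = 1 - r - g is redundant."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero on black pixels
    return rgb[..., :2] / s

def fit_skin_model(skin_pixels):
    """Fit the Gaussian N(m, C) to skin samples in (r, g) space:
    m = E{x}, C = E{(x - m)(x - m)^T}."""
    x = chromatic(skin_pixels)
    m = x.mean(axis=0)
    C = np.cov(x, rowvar=False)          # 2x2 matrix [σ_rr σ_rg; σ_gr σ_gg]
    return m, C
```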

Page 9: Face Recognition From Video Part (II)

Face Detection - Skin Color Model (cont.)

Obtain the likelihood of skin for any pixel of an image with the Gaussian fitted skin color model

Transform a color image into a grayscale image

Use a threshold value to show skin regions
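A minimal sketch of this likelihood map and threshold step, continuing the Gaussian skin model above (the unnormalized Gaussian density and the 0.5 threshold are assumptions):

```python
import numpy as np

def skin_likelihood(image_rgb, m, C):
    """Map an H x W x 3 image to a grayscale skin-likelihood image in (0, 1]
    using the Gaussian skin model N(m, C) in chromatic (r, g) space."""
    img = np.asarray(image_rgb, dtype=float)
    s = img.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0
    x = (img[..., :2] / s) - m                        # centered (r, g) coordinates
    d2 = np.einsum('...i,ij,...j->...', x, np.linalg.inv(C), x)  # squared Mahalanobis distance
    return np.exp(-0.5 * d2)                          # unnormalized Gaussian density

def skin_mask(image_rgb, m, C, threshold=0.5):
    """Binarize the likelihood map to show skin regions."""
    return skin_likelihood(image_rgb, m, C) > threshold
```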

Page 10: Face Recognition From Video Part (II)

Face Detection - Skin Region Segmentation

Segmentation and approximate face location detection process

r=0.41~0.50

g=0.21~0.30

Grayscale image

Page 11: Face Recognition From Video Part (II)

Face Detection - Skin Region Segmentation (cont.)

Median filter
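The segmentation thresholds quoted above (r = 0.41~0.50, g = 0.21~0.30) and the median filter might be combined as follows (the 3x3 window size is an assumption; the paper does not state one):

```python
import numpy as np

def segment_skin(image_rgb, r_range=(0.41, 0.50), g_range=(0.21, 0.30)):
    """Binary skin map from the chromatic thresholds on the slide."""
    img = np.asarray(image_rgb, dtype=float)
    s = img.sum(axis=-1)
    s[s == 0] = 1.0
    r, g = img[..., 0] / s, img[..., 1] / s
    return ((r_range[0] <= r) & (r <= r_range[1]) &
            (g_range[0] <= g) & (g <= g_range[1]))

def median_filter(mask, k=3):
    """k x k median (majority) filter to remove salt-and-pepper noise."""
    h, w = mask.shape
    padded = np.pad(mask.astype(int), k // 2)
    out = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k]) > 0.5
    return out
```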

Page 12: Face Recognition From Video Part (II)

Face Detection - Face Detection

Approximate face locations are detected using a proper height-width proportion of general face

Rough face locations are verified by an eye template-matching scheme

Page 13: Face Recognition From Video Part (II)

Face Recognition - Defining Eigenfaces

Main idea of the PCA method: find the vectors that best account for the distribution of face images within the entire image space.

These vectors are the eigenvectors of the covariance matrix corresponding to the original face images; being face-like, they are called eigenfaces.

The vectors define the subspace of face images: the face space.

Page 14: Face Recognition From Video Part (II)

Face Recognition - Defining Eigenfaces

Page 15: Face Recognition From Video Part (II)

Face Recognition - Defining Eigenfaces (cont.)

Keeping only the M Eigenfaces which correspond to the highest Eigenvalues, and M Eigenfaces denote the face space

Calculate the corresponding location in M-dimensional weight space for each known individual

Calculate the Eigenfaces from the training set

Calculate a set of weights based on a new face image and the M Eigenfaces

Page 16: Face Recognition From Video Part (II)

Face Recognition - Defining Eigenfaces (cont.)

Determine if the image is a face

If it is a face, classify the weight pattern as either a known person or an unknown person

[3]

[3] M. A. Turk, and A. P. Pentland, "Face recognition using Eigenfaces," proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.

Page 17: Face Recognition From Video Part (II)

Face Recognition - Calculating Eigenfaces

Steps:

Obtain a set S with M face images (N by N): S = {Γ_1, Γ_2, Γ_3, ..., Γ_M}

Obtain the mean image: Ψ = (1/M) Σ_{n=1..M} Γ_n

Find the difference: Φ_i = Γ_i − Ψ

Calculate the covariance matrix C: C = (1/M) Σ_{n=1..M} Φ_n Φ_n^T = (1/M) A A^T, where A = [Φ_1, Φ_2, ..., Φ_M]

Page 18: Face Recognition From Video Part (II)

Face Recognition - Calculating Eigenfaces (cont.)

To find the eigenvectors of C directly is a huge computational task (C is N² × N²). Solution: find the eigenvectors v_k of A^T A first:

A^T A v_k = μ_k v_k

Multiply both sides by A:

A A^T (A v_k) = μ_k (A v_k)

Gain the eigenvectors of C:

u_l = Σ_{k=1..M} v_{lk} Φ_k,  l = 1, 2, ..., M

Find the eigenvalues of C:

λ_k = (1/M) Σ_{n=1..M} (u_k^T Φ_n)²

The M eigenvectors are sorted in order of descending eigenvalue and chosen to represent the eigenspace.
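Under the formulation above, the full eigenface computation can be sketched in NumPy (the unit normalization of u_l is a standard implementation choice, not stated on the slide):

```python
import numpy as np

def eigenfaces(images, M_keep=None):
    """Compute eigenfaces via the A^T A trick.

    images: array of shape (M, N, N) with M face images.
    Returns (mean_face, eigenface_matrix, eigenvalues), sorted by
    descending eigenvalue; eigenfaces are the columns of the matrix.
    """
    M = len(images)
    G = np.asarray(images, dtype=float).reshape(M, -1)  # Gamma_n as rows
    psi = G.mean(axis=0)                                # mean image Psi
    A = (G - psi).T                                     # columns Phi_n = Gamma_n - Psi
    # Eigenvectors of the small M x M matrix A^T A instead of the huge C = (1/M) A A^T
    L = A.T @ A / M
    mu, v = np.linalg.eigh(L)
    order = np.argsort(mu)[::-1]                        # descending eigenvalues
    mu, v = mu[order], v[:, order]
    U = A @ v                                           # u_l = sum_k v_lk Phi_k
    U /= np.linalg.norm(U, axis=0)                      # normalize each eigenface
    if M_keep is not None:
        mu, U = mu[:M_keep], U[:, :M_keep]
    return psi, U, mu
```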

Page 19: Face Recognition From Video Part (II)

Face Recognition - Recognition Using Eigenfaces

Project each of the training images into eigenspace: w_k = u_k^T (Γ − Ψ)

This gives a vector of weights, Ω = [w_1, w_2, ..., w_M]^T, representing the contribution of each eigenface.

When a new face image is encountered, project it into eigenspace as well.

Measure the squared Euclidean distance between weight vectors: ε² = ||Ω_a − Ω_b||²

An acceptance or rejection is determined by applying a threshold.
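A hedged sketch of this recognition step (the dictionary of known weight vectors and the rejection rule are illustrative):

```python
import numpy as np

def project(face, psi, U):
    """Weights w_k = u_k^T (Gamma - Psi): the face's coordinates in eigenspace."""
    return U.T @ (np.asarray(face, dtype=float).ravel() - psi)

def classify(face, psi, U, known_weights, threshold):
    """Nearest known person by squared Euclidean distance in weight space,
    or None (rejected as unknown) if every distance exceeds the threshold."""
    w = project(face, psi, U)
    d2 = {name: np.sum((w - wk) ** 2) for name, wk in known_weights.items()}
    best = min(d2, key=d2.get)
    return best if d2[best] <= threshold else None
```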

Page 20: Face Recognition From Video Part (II)

Method (I) - Result

Page 21: Face Recognition From Video Part (II)

Method (I) - Conclusion

In this face recognition approach:

A skin color modeling approach is used for face detection

The eigenface algorithm is used for face recognition

Page 22: Face Recognition From Video Part (II)

An Automatic Face Detection and Recognition System for Video Streams

A. Pnevmatikakis and L. Polymenakos, 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI), 2005

[4]

Page 23: Face Recognition From Video Part (II)

Introduction

The authors present the AIT-FACER algorithm. The system is intended for meeting rooms, where background and illumination are fairly constant.

As participants enter the meeting room, the system is expected to identify and recognize all of them in a natural and unobtrusive way, i.e., participants do not need to enter one by one and then pose still in front of a camera for the system to work.

Page 24: Face Recognition From Video Part (II)

AIT-FACER System

Four modules: Face Detector, Eye Locator, Frontal Face Verifier, and Face Recognizer, along with performance metrics.

The goal of the first three modules:

Detect possible face segments in video frames

Normalize them (in terms of shift, scale and rotation)

Assign to them a confidence level describing how frontal they are

Finally, feed them to the face recognizer

Page 25: Face Recognition From Video Part (II)

AIT-FACER System (cont.)

Detect possible face segments

Normalize face segments

To alleviate the effect of lighting variations and shadows

Decide if the face is frontal or not

• DFFS: Distance-From-Face-Space

To tell frontal faces and profile faces apart

Page 26: Face Recognition From Video Part (II)

Foreground Estimation

Algorithm:

Subtract the empty room image: the empty room image is utilized as the background

Sum the RGB channels and binarize the result; to produce solid foreground segments, a median filtering operation on 8x8 pixel blocks is performed

Color normalization, used to minimize the effects of shadows at the frame level: the brightness of the foreground segment is set at 95%. The preferred and visibly better way is gamma correction, but a faster solution is needed for the real-time system.
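The foreground steps might look like this in NumPy (the difference threshold and the block-majority stand-in for the 8x8 median filter are assumptions, not values from the paper):

```python
import numpy as np

def foreground_mask(frame_rgb, empty_room_rgb, threshold=30, block=8):
    """Foreground by empty-room subtraction: sum the per-channel absolute
    differences over RGB, binarize, then clean up on 8x8 pixel blocks
    (approximated here by a per-block majority vote) to get solid segments."""
    diff = np.abs(frame_rgb.astype(int) - empty_room_rgb.astype(int)).sum(axis=-1)
    mask = diff > threshold
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(0, h, block):
        for j in range(0, w, block):
            blk = mask[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = blk.mean() > 0.5  # block majority
    return out
```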

Page 27: Face Recognition From Video Part (II)

Foreground Estimation (cont.)

Page 28: Face Recognition From Video Part (II)

Skin Likelihood Segmentation

Color model based on the skin color and non-skin color histograms [7]

Log-likelihood: L(r,g,b) = log( (s[rgb] / Ts) / (n[rgb] / Tn) )

where s[rgb] is the pixel count contained in bin rgb of the skin histogram, n[rgb] is the equivalent count from the non-skin histogram, and Ts and Tn are the total counts contained in the skin and non-skin histograms, respectively.
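The log-likelihood follows directly from the two histograms (the epsilon guard for empty bins is an addition, not from the paper):

```python
import numpy as np

def log_likelihood(s, n):
    """L = log( (s[rgb]/Ts) / (n[rgb]/Tn) ) from the skin histogram counts s
    and non-skin counts n; a small epsilon guards empty bins."""
    s = np.asarray(s, dtype=float)
    n = np.asarray(n, dtype=float)
    Ts, Tn = s.sum(), n.sum()
    eps = 1e-9
    return np.log((s / Ts + eps) / (n / Tn + eps))
```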

Page 29: Face Recognition From Video Part (II)

Skin Likelihood Segmentation (cont.)

Algorithm:

Obtain the likelihood map

Binarize the likelihood map L(r,g,b): pixels take the value 1 (skin color) if L(r,g,b) > -0.75; the rest take the value 0

Connect the different segments in the skin map using 8-way connectivity

Identify the bounding boxes of the segments and discard boxes with small area (< 0.2% of the frame area), because their resolution is too low for recognition

Choose segments with face-like elliptical aspect ratios: the eigenvalues resulting from PCA are used to estimate the elliptical aspect ratio of the region
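The area and aspect-ratio filters might be sketched as follows (the max_ratio cut-off is an assumption; the paper only says "face-like elliptical aspect ratios"):

```python
import numpy as np

def aspect_ratio(mask):
    """Elliptical aspect ratio of a binary region: ratio of the square roots
    of the two PCA eigenvalues of its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs], axis=1).astype(float)
    cov = np.cov(coords, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]     # descending
    return np.sqrt(evals[0] / max(evals[1], 1e-9))

def keep_face_like(masks, frame_area, min_area=0.002, max_ratio=2.0):
    """Discard segments smaller than 0.2% of the frame, and segments whose
    elongation is too far from a face-like ellipse (max_ratio is assumed)."""
    return [m for m in masks
            if m.sum() >= min_area * frame_area and aspect_ratio(m) <= max_ratio]
```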

Page 30: Face Recognition From Video Part (II)

Skin Likelihood Segmentation (cont.)

Page 31: Face Recognition From Video Part (II)

Eye Detector

Idea: if the eyes and their locations can be identified reliably, the necessary normalizations in terms of shift, scale and rotation can be performed.

Two stages:

First, the eye zone (eyes and bridge-of-the-nose area) is detected in the face candidate segments

Second, the eyes are detected in the identified eye zone

Page 32: Face Recognition From Video Part (II)

Eye Detector (cont.)

Page 33: Face Recognition From Video Part (II)

Frontal Face Verification

Problem:

Skin segmentation heuristics define many areas that are not frontal faces

Further, the eye detector always defines two dark spots as eyes, even when the segment is not a frontal face

Solution:

The first stage uses DFFS to compute the distance from a frontal face prototype; segments with smaller DFFS values are considered frontal faces with larger confidence

A two-class LDA classifier is trained to discriminate frontal from non-frontal head views
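DFFS reduces to a reconstruction error in eigenspace; a minimal sketch (here psi and U are assumed to be the mean and eigenvectors of a frontal-face training set):

```python
import numpy as np

def dffs(segment, psi, U):
    """Distance From Face Space: the reconstruction error left after
    projecting a normalized segment onto the frontal-face eigenspace U."""
    phi = np.asarray(segment, dtype=float).ravel() - psi
    proj = U @ (U.T @ phi)               # component lying inside the face space
    return np.linalg.norm(phi - proj)    # what the face space cannot explain
```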

Page 34: Face Recognition From Video Part (II)

Frontal Face Verification (cont.)

The 100 normalized segments in ascending DFFS order

Page 35: Face Recognition From Video Part (II)

Face Recognition

All normalized segments are finally processed by an LDA classifier and an identity tag is attached to each one

Page 36: Face Recognition From Video Part (II)

Result

Page 37: Face Recognition From Video Part (II)

Video-Based Face Recognition Evaluation in the CHIL Project – Run 1

Ekenel, H.K.; Pnevmatikakis, A.
Proceedings of the 7th IEEE International Conference on Automatic Face and Gesture Recognition (FGR'06), 2006

[5]

Page 38: Face Recognition From Video Part (II)

Smart-Room

Page 39: Face Recognition From Video Part (II)

Face Image

Page 40: Face Recognition From Video Part (II)

[6]

Page 41: Face Recognition From Video Part (II)

Reference

[1] Islam, M.W.; Monwar, M.M.; Paul, P.P.; Rezaei, S., "A Real-Time Face Recognition Approach from Video Sequence using Skin Color Model and Eigenface Method," IEEE Canadian Conference on Electrical and Computer Engineering, May 2006, pp. 2181-2185.

[2] R.S. Feris, T.E. de Campos, and R.M.C. Junior, "Detection and tracking of facial features in video sequences," Proceedings of the Mexican International Conference on Artificial Intelligence: Advances in Artificial Intelligence, pp. 127-135, 2000.

[3] M.A. Turk and A.P. Pentland, "Face recognition using eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, June 1991.

[4] A. Pnevmatikakis and L. Polymenakos, "An Automatic Face Detection and Recognition System for Video Streams," 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI), 2005.

[5] Ekenel, H.K.; Pnevmatikakis, A., "Video-Based Face Recognition Evaluation in the CHIL Project – Run 1," Proceedings of the 7th IEEE International Conference on Automatic Face and Gesture Recognition (FGR'06), 2006.

[6] CHIL, http://chil.server.de/servlet/is/2764/

[7] M. Jones and J. Rehg, "Statistical color models with application to skin detection," IEEE Conference on Computer Vision and Pattern Recognition, pp. 274-280, 1999.