PCA_FACE_RECOGNITION_REPORT


  • 8/7/2019 PCA_FACE_RECOGNITION_REPORT


Face Recognition

A Report on Face Recognition Using Principal Component Analysis

Final Report

Prepared for:

Submission of Assignment work (3) on Applied Mathematics

Prepared by:

Darshan Venkatrayappa
[email protected]

Sharib Ali
[email protected]

Submitted to:

Desire Sidebe ([email protected])

17th November 20


Contents

Acronyms

Chapter 1 Introduction to Face Recognition Using PCA
1.1 Background
1.2 Objective
1.3 Problem Statement
1.4 Stages in Face Recognition

Chapter 2 Normalization
2.1 Why Normalization of Images?
2.1.1 Flow Chart and Algorithm of Normalization
2.1.2 Algorithm for Mapping of the Image to 64x64 Window
2.2 Limitations

Chapter 3 Eigen Faces and Eigen Space
3.1 What Do They Mean?
3.2 Algorithm

Chapter 4 Recognition of the Face
4.1 Analysis
4.2 Algorithm
4.3 Result
4.3.1 Accuracy
4.3.2 Limitations
4.3.3 Scope of Improvement

Chapter 5 Conclusions

References


    Acronyms

    SVD = Singular Value Decomposition

    PCA = Principal Component Analysis

    EV = Eigen Vector

Chapter 1. Introduction to PCA in Face Recognition

1.1 Background

A face recognition system has various potential applications, such as person identification, human-computer interaction, and security systems.

Its history goes back to the start of computer vision. Several other biometric approaches, such as iris and fingerprint recognition, have been used in such applications, but face recognition has been studied more widely. It has always remained a major focus of research because of its non-invasive nature and because it is people's primary method of person identification. The most famous early example of a face recognition system is due to Kohonen. Kohonen's system was not a practical success, however, because of the need for precise alignment and normalization.

In Principal Component Analysis for face recognition, we train on the faces and create a database of the trained sample images of each person. We then find the covariance matrix of this trained set. Its eigenvectors give the principal components and hence the eigenfaces, which are ghost-like in appearance. Each face in the training set is then a linear combination of these eigenvectors. When we take a test image for recognition, we follow the same normalization steps and project the test image into the same eigenspace containing the eigenfaces. Finally, we compute the minimum Euclidean distance between the eigenface weights of the trained images and those of the test image. This is explained in the discussions that follow.

The main purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables) needed to describe the data economically. This is the case here, since there is a strong correlation between the observed variables.
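This reduction can be illustrated with a small example. The following is a NumPy sketch (the report's own implementation is in MATLAB); the data and the function name pca_reduce are ours, for illustration only:

```python
import numpy as np

# A minimal sketch of PCA as dimensionality reduction (NumPy here; the
# report's own experiments are in MATLAB). Rows of X are observations,
# columns are the observed variables; all names are illustrative.
def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                       # center the data
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                 # k directions of largest variance
    return Xc @ components.T, components, mean

# Two strongly correlated variables are described well by one component.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2.0 * t + 0.01 * rng.normal(size=(100, 1))])
Z, comps, mu = pca_reduce(X, 1)
print(Z.shape)   # (100, 1): two correlated variables reduced to one
```

Because the two variables are almost linearly dependent, projecting onto one component loses almost nothing: Z @ comps + mu reconstructs X up to the small noise term.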


1.2 Objective

i. To study and implement the concepts of SVD and PCA.

ii. To decrease the dimension of the feature space.

iii. To make the computation fast.

iv. To exploit the strong correlation between the observed variables.

v. To see how the eigenvectors give the principal components of the image to be recognized from a trained set of images.

vi. To make an effort towards improving the result obtained.

1.3 Problem Statement

Given an image, to identify it as a face and/or extract face images from it.

To retrieve similar images (based on PCA) from the given database of trained face images.

1.4 Stages in Face Recognition

Training of the Images (to create database) -> Eigen Face using PCA (eigen space) -> Test Face Location -> Identification


Chapter 2. Normalization

2.1 Why is Normalization Required?

The normalization steps adjust for scaling, orientation, and location variations by aligning every image to a set of predefined features. Basically, all the images are mapped to a 64x64 window (in our case) using some important facial features. In our case we have taken: 1. left eye center, 2. right eye center, 3. tip of nose, 4. left mouth corner, and 5. right mouth corner.

    2.1.1 Flow chart and Algorithm for Normalization



Fig. Flow chart for the convergence of FBAR*

*which gives the affine transformation A and b used to map the images to the 64x64 window

Algorithm:

1. We take the predefined feature coordinates p_f, one (x, y) pair for each of the five facial features listed above:

   (14, 20), (50, 20), (34, 34), (16, 50), (48, 50)

2. We take all the feature files, starting from the first, and solve the equation p_f = A * f_i + b, where A and b hold the 6 unknowns that give the affine transformation.

3. We update FBAR each time by using SVD:

   FBar = Singular_Value_Decomposition(FBar, fp);



4. We take the average of all the FBARs calculated and update FBAR again with this average value.

5. Now we compare the previous result with the current one; if the difference is smaller than the threshold error 10^-6 we come out of the loop, and the final converged FBAR gives the affine transformation matrix A and vector b.

2.1.2 Algorithm for Mapping the 384x384 Image into the 64x64 Window

Since we have obtained the matrix A and the vector b, we can easily map the pixels of the 384x384 image into the 64x64 window, using the following algorithm.

1. We use FBAR to get the values of A and b. The first four values give the matrix A and the last two give the vector b.

   x = (V*pinv(S)*U') * F_BAR;   % least-squares solution via the SVD pseudo-inverse
   b = x(5:6, 1);
   a = x(1:4, 1);
   A = zeros(2, 2);
   A(:, 1) = a(1:2, 1);
   A(:, 2) = a(3:4, 1);

2. Since we know the transformation matrix, we plot each pixel of the 64x64 window into the 384x384 image using

   F_384x384 = A^-1 * (F_64x64 - b),

   applied to each of the 4096 pixel coordinates of the 64x64 window.

3. We extract this window image, i.e. the transformed image of size 64x64.
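These steps can be sketched as follows, assuming A and b have already been recovered. This is a NumPy illustration with nearest-neighbour sampling; extract_window and the half-scale example are illustrative, not the report's code:

```python
import numpy as np

# A sketch of step 2: map every pixel of the 64x64 window back into the
# 384x384 image with the inverse affine transform and sample it there
# (nearest-neighbour sampling; the array names are illustrative).
def extract_window(image, A, b, size=64):
    Ainv = np.linalg.inv(A)
    window = np.zeros((size, size), dtype=image.dtype)
    for r in range(size):
        for c in range(size):
            # window coordinate -> source coordinate in the big image
            src = Ainv @ (np.array([c, r], float) - b)
            x, y = int(round(src[0])), int(round(src[1]))
            if 0 <= x < image.shape[1] and 0 <= y < image.shape[0]:
                window[r, c] = image[y, x]
    return window

image = np.arange(384 * 384, dtype=float).reshape(384, 384)
A = np.array([[0.5, 0.0], [0.0, 0.5]])   # here the window is half-scale
b = np.array([0.0, 0.0])
win = extract_window(image, A, b)
print(win.shape)   # (64, 64)
```

Iterating over the destination window (rather than the source image) guarantees that every one of the 4096 window pixels gets a value, which is why the mapping is done with the inverse transform.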



Fig. Normalized and mapped images (1-4) in the 64x64 window

2.2 Limitations

Non-uniform illumination.

Images that do not show the correct positions of the features taken for the affine transform may fail to converge or give bad results.

Chapter 3 Eigen Faces and Eigen Space

3.1 What Do They Mean?

PCA computes the basis of the space spanned by its training vectors. These basis vectors, the eigenvectors computed by PCA, point in the directions of the largest variance of the training vectors, and each can be viewed as a feature. Because these eigenvectors have a face-like appearance, they are called eigenfaces. Sometimes they are also called ghost images because of their weird appearance. When a particular face is projected onto the face space, its coordinate vector in the face space describes the importance of each of those features in the face. The face is thus expressed in the face space by its eigenface coefficients (or weights).

3.2 Algorithm for Creating the Eigenface Database

1. We take the mean of all 57 training faces and subtract it from each face. We concatenate all the subtracted faces into a single matrix, which we call D: each of the 57 rows holds one mean-subtracted face, flattened from pixel I_i(1,1) to I_i(64,64), so D is of size 57x4096.

       [ I_1(1,1)   ...  I_1(64,64)  ]
   D = [    .        .       .       ]
       [ I_57(1,1)  ...  I_57(64,64) ]


2. We can use the covariance formula to compute PCA,

   C = (1 / (N - 1)) * D' * D,

   and compute the eigenvalues and eigenvectors whose principal components give the eigenfaces and hence the eigenspace. But if the database has many images, the computation becomes very expensive; even in our case C would be a 4096x4096 matrix. Since the number of non-zero eigenvalues of the covariance matrix is limited to N (57), we calculate it the other way around, which reduces the dimension but still gives eigenvectors that correspond to the eigenvectors of this covariance matrix.

3. So we compute

   C' = (1 / (N - 1)) * D * D',

   which reduces the dimension to 57x57 in our case. The eigenvector matrix computed from C' is of size 57x57.

4. Now we find the eigenfaces, which are given by multiplying D' * Eigenvectors and correspond to the eigenvectors of C.

5. Each face in the training set can then be represented as a linear combination of these eigenvectors:

   Omega_i = Phi' * X_i,

   where the columns of Phi are the principal components (eigenfaces) and X_i is the i-th mean-subtracted training image.

Fig. Subtracted Images 1-4: the mean-subtracted training faces
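The steps above can be sketched in NumPy as follows (the report's code is MATLAB). The random matrix D stands in for the real 57x4096 matrix of mean-subtracted faces; everything else follows steps 2-4:

```python
import numpy as np

# A NumPy sketch of the trick in steps 2-4: instead of diagonalizing the
# 4096x4096 matrix D'*D, diagonalize the small 57x57 matrix D*D' and map
# its eigenvectors back through D'. The random "faces" are stand-ins for
# the real mean-subtracted training images.
rng = np.random.default_rng(1)
N = 57
D = rng.normal(size=(N, 64 * 64))        # one mean-subtracted face per row
D -= D.mean(axis=0)

C_small = (D @ D.T) / (N - 1)            # 57x57 instead of 4096x4096
vals, vecs = np.linalg.eigh(C_small)
order = np.argsort(vals)[::-1]           # sort by decreasing eigenvalue
vals, vecs = vals[order], vecs[:, order]

eigenfaces = D.T @ vecs                  # each column is a 4096-d eigenface
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
print(eigenfaces.shape)                  # (4096, 57)
```

The identity behind the trick: if (D*D') v = lambda v, then (D'*D)(D' v) = lambda (D' v), so each mapped vector D' v is an eigenvector of the big covariance matrix with the same eigenvalue.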


Graph: eigenvalues of the trained eigenfaces (values up to about 3 x 10^7, indices 0-60)

Fig. Eigenfaces: the 1st, 8th, 10th, 12th, 20th, 30th, 35th, 40th, and 57th eigenfaces


Chapter 4 Recognition of the Face

4.1 Analysis

The face is expressed in the face space by its eigenface coefficients (or weights). We can therefore handle a large input vector, a facial image, by working only with its small weight vector in the face space.

As seen in the previous chapter, we have already found the 57 eigenfaces of the trained images. Now, when the user enters a face to be recognized, which is unknown, we treat it as a test face. We follow the algorithm below to find the eigenface related to it.

4.2 Algorithm

1. We normalize the incoming image and map it as done in the normalization chapter.

2. We project this normalized image onto the eigenspace to get the corresponding feature vector

   Omega_j = Phi' * X_j.

3. We then find the Euclidean distance between Omega_j and each Omega_i.

   Euclidean distance: the Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as

   d(x, y) = sqrt( sum_i (x_i - y_i)^2 ).

4. The minimum-distance position gives the nearly identical face in the eigenspace.

5. We read the corresponding image, which will be identical to the test face.
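The algorithm above can be sketched as follows. This is a NumPy illustration; recognize and the random stand-in data are our own, and in the real system the test face would first be normalized and mean-subtracted as in step 1:

```python
import numpy as np

# A sketch of the recognition step: project the normalized test face onto
# the eigenface basis and pick the training face whose weight vector is
# closest in Euclidean distance. The random arrays are placeholders for
# the real eigenfaces and mean-subtracted faces.
def recognize(test_face, eigenfaces, train_weights):
    """eigenfaces: (pixels, k); train_weights: (N, k). Returns best index."""
    omega = eigenfaces.T @ test_face     # weight vector Omega of the test face
    dists = np.linalg.norm(train_weights - omega, axis=1)
    return int(np.argmin(dists)), dists

rng = np.random.default_rng(2)
eigenfaces = np.linalg.qr(rng.normal(size=(4096, 10)))[0]   # orthonormal basis
train = rng.normal(size=(57, 4096))      # 57 mean-subtracted training faces
train_weights = train @ eigenfaces       # (57, 10) weight vectors
idx, dists = recognize(train[12], eigenfaces, train_weights)
print(idx)   # 12: a training face matches itself
```

Note that the distance is computed between the small k-dimensional weight vectors, not between the 4096-pixel images, which is what makes the search cheap.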

4.3 Result

The result with the 30 test images was not 100% accurate, but it gave good matches for almost 28 of the images.

4.3.1 Accuracy

Implementing PCA for face recognition in MATLAB, we got an accuracy of nearly 87.09%:

   %Accuracy = (no. of matched images / total no. of test images) * 100

4.3.2 Limitations

The images of a face, and in particular the faces in the training set, should lie near the face space.

Each image should be highly correlated with itself.


4.3.3 Scope of Improvement

Further changes in the algorithm may lead to better accuracy. In addition, we can add some more distinct features to the training set of faces, such as the length of the forehead or the chin position. Facial recognition is still an ongoing research topic for computer vision scientists.

Fig. One sample of matched output: the test image and the matched image (64x64, labelled "Image-Matched")

Fig. One sample of unmatched output: the test image and the wrongly matched image (64x64, labelled "Miss-Matched")


Chapter 5 Conclusions

1. We must choose some features of the sample faces and create a database of the images. In our case, we have taken 57 face images.

2. The affine transform is used to find the variables that bring all the images to the same orientation, scaling, and other feature positions.

3. The chosen features should be mapped into the window in which we are taking the face; it should include most of the face rather than the body.

4. Trained images should be mapped to a smaller window.

5. Principal Component Analysis can be used both to decrease the computational complexity and to measure the covariance between the images.

6. PCA greatly reduces the complexity of the computation when a large number of images is taken.

7. The principal components, i.e. the eigenvectors of this covariance matrix, when concatenated and converted back to images, give the eigenfaces.

8. These eigenfaces are the ghostly faces of the trained set of faces and form a face space.

9. For each new (test) face, 30 in our case, we need to calculate its pattern vector.

10. Its distance to the eigenfaces in the eigenspace must be minimum.

11. This distance gives the location of the image in the eigenspace, which is taken as the output matched image.

REFERENCES

[1] Matthew Turk and Alex Pentland, "Eigenfaces for Recognition," MIT Media Laboratory.

[2] Desire Sidebe, "Face Recognition Using PCA," Assignment-3 sheet, UB.

[3] Wikipedia.