
Robust face recognition using wavelets and neural networks

Ph.D. Rubén Machucho Cadena

Istanbul, Turkey, September 2013


Contents

1 Introduction: Motivation; Objectives

2 Methodology: Stage 1: State of the art; Stage 2: Proposed Solution; Stage 3: Implementation and Results

3 Conclusions


Introduction - Automatic face recognition system

In recent years, face recognition has become a popular area of research.

More accurate identification/verification technique than traditional systems.
Increased computing capabilities.
A large number of application areas.

Government: Law Enforcement, Security, Immigration

Commercial: Missing Children/Runaways, Internet, E-commerce, Gaming Industry


Introduction - Biometric Systems

Biometric systems are automated, mostly computerized systems using distinctive physio-biological or behavioural measurements of the human body that serve as a (supposedly) unique indicator of the presence of a particular individual.

Face images are easy to get.
Contactless authentication.
Low hardware cost.


Motivation

Despite the progress made in recent years, the face recognition problem has not been completely solved.

The need for systems with a higher level of accuracy and robustness remains an open research topic.


Objectives

1. Propose a feature extraction technique based on the discrete wavelet transform.

2. Determine the most suitable wavelet base and decomposition levels for use in face recognition systems.

3. Design a neural network to classify faces.

4. Determine the best parameter configuration for the proposed neural network.

5. Compare the proposed network with a backpropagation network.


Methodology

Stage 1: State of the art. Review of face recognition algorithms that use neural networks and wavelets.

Stage 2: Proposed Solution. Design of the face recognition system.

Stage 3: Implementation and Results. Implementation of the proposed system; system experimentation and validation; conclusions.


Stage 1: State of the art - Wavelet theory

The wavelet transform can be successfully applied to the analysis and processing of non-stationary signals, e.g. speech and image processing, data compression, communications, etc.

The wavelet transform is able to construct a high-resolution time-frequency representation of the signal.

A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor.


Stage 1: State of the art - Discrete Wavelet Transform (DWT)

The filter outputs double the amount of original data, which makes it necessary to downsample.
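As an illustration (not the thesis implementation), the following Python sketch, assuming the NumPy and PyWavelets packages, shows the effect: after a single-level 1-D DWT, each filter output is downsampled so the approximation and detail vectors are each roughly half the length of the input.

```python
# Illustrative sketch only, assuming NumPy and PyWavelets are available.
import numpy as np
import pywt

signal = np.random.rand(128)          # toy 1-D signal
cA, cD = pywt.dwt(signal, "db4")      # low-pass (approximation) and high-pass (detail) outputs

# Without downsampling the two filter outputs would double the data;
# with it, cA and cD are each roughly half the length of the input.
print(len(signal), len(cA), len(cD))
```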


Stage 1: State of the art - Bidimensional DWT

Apply a low-pass filter (L) and a high-pass filter (H) to the rows and columns of the image:

LL: Approximations.
LH: Horizontal details.
HL: Vertical details.
HH: Diagonal details.
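A minimal sketch of this decomposition, assuming the PyWavelets package (an illustration, not the implementation used in the thesis):

```python
# Illustrative sketch only, assuming NumPy and PyWavelets are available.
import numpy as np
import pywt

image = np.random.rand(80, 80)                  # stand-in for a face image
LL, (LH, HL, HH) = pywt.dwt2(image, "bior1.3")  # one level of the 2-D DWT

# Each subband is roughly half the image size in each direction.
print(LL.shape, LH.shape, HL.shape, HH.shape)
```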


Stage 1: State of the art - Neural Networks

Artificial neural networks are models inspired by animal central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition.

Name | Input/output relation
Hard limit | a = 0 if n < 0; a = 1 if n >= 0
Linear | a = n
Log-sigmoid | a = 1 / (1 + e^(-n))
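The three transfer functions from the table, written out as a small NumPy sketch for reference (illustrative only):

```python
import numpy as np

def hard_limit(n):
    return np.where(n >= 0, 1.0, 0.0)    # a = 0 for n < 0, a = 1 for n >= 0

def linear(n):
    return n                             # a = n

def log_sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))      # a = 1 / (1 + e^(-n))

n = np.array([-2.0, 0.0, 2.0])
print(hard_limit(n), linear(n), log_sigmoid(n))
```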


Stage 1: State of the art - Neural Network Architecture

Neural network architecture refers to the organization and disposition of neurons into layers or groups of neurons.


Stage 1: State of the art - Training an Artificial Neural Network

Once a network has been structured for a particular application, it is ready to be trained. To start this process the initial weights are chosen randomly; then the training, or learning, begins.

Supervised Training
In supervised training, both the inputs and the outputs are provided.
The network then processes the inputs and compares its resulting outputs against the desired outputs.
Errors are then propagated back through the system, causing the system to adjust the weights which control the network.
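A minimal NumPy sketch of this supervised training loop for a toy one-hidden-layer network; the data, layer sizes and learning rate are illustrative assumptions, not the network proposed in the thesis:

```python
# Illustrative sketch only: supervised training with error backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((60, 8))                                    # 60 training patterns, 8 features each
y = (X.sum(axis=1) > 4.0).astype(float).reshape(-1, 1)     # toy desired outputs

W1 = rng.standard_normal((8, 3)) * 0.1                     # initial weights are chosen randomly
b1 = np.zeros(3)
W2 = rng.standard_normal((3, 1)) * 0.1
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass: the network processes the inputs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # compare the resulting outputs against the desired outputs
    err = out - y
    # backward pass: errors are propagated back and the weights are adjusted
    d_out = err * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print("mean squared error:", float((err ** 2).mean()))
```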


Related work

E. Gumus, N. Kilic, A. Sertbas and O. N. Ucan, “Evaluation of face recognition techniques using PCA, wavelets and SVM”, 2010
This work uses the wavelet transform and PCA for the feature extraction stage. A distance classifier and Support Vector Machines (SVMs) are used for the classification step. The authors reported a recognition rate above 95%.

S. Kakarwal and R. Deshmukh, “Wavelet Transform based Feature Extraction for Face Recognition”, 2010
The authors propose the use of the wavelet transform to obtain a set of principal characteristics of each face and the correlation method for the classification stage. They reported good performance with frontal and side-view images.

M. Mazloom and S. Kasaei, “Face Recognition using Wavelet, PCA, and Neural Networks”, 2005
The authors propose a face recognition method that combines wavelets, PCA and a backpropagation neural network. They reported a recognition rate of 90.35%.


Stage 2: Proposed solution - Proposed System Architecture


Stage 2: Proposed solution - Image Preprocessing: Histogram equalization

Histogram equalization is a method in image processing of contrast adjustment using the image’s histogram.
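A minimal sketch of this step with OpenCV (an assumed tooling choice; the file names are hypothetical):

```python
import cv2

img = cv2.imread("face.jpg")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # equalizeHist expects a single-channel 8-bit image
equalized = cv2.equalizeHist(gray)
cv2.imwrite("face_equalized.jpg", equalized)
```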


Stage 2: Proposed solution - Image Preprocessing: Face detection and segmentation

The Viola-Jones object detection framework, proposed in 2001 by Paul Viola and Michael Jones, was the first object detection framework to provide competitive object detection rates in real time.
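A minimal sketch of Viola-Jones face detection using the Haar cascade bundled with OpenCV (an assumed tooling choice; file names are hypothetical):

```python
import cv2

gray = cv2.imread("face_equalized.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical preprocessed image
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face = gray[y:y + h, x:x + w]       # segmented face region
    cv2.imwrite("face_crop.jpg", face)
```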


Stage 2: Proposed solution - Image Preprocessing: Face size normalization

Image interpolation works in two directions, and tries to achieve the best approximation of a pixel’s color and intensity based on the values at surrounding pixels.

Nearest neighbor
Bilinear interpolation
Bicubic interpolation
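A minimal OpenCV sketch of resizing a cropped face with each of the three interpolation methods listed above (the 80 x 80 target size is an assumption taken from the sampling example later in the deck):

```python
import cv2

face = cv2.imread("face_crop.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical cropped face
size = (80, 80)                                            # assumed normalization size

nearest  = cv2.resize(face, size, interpolation=cv2.INTER_NEAREST)
bilinear = cv2.resize(face, size, interpolation=cv2.INTER_LINEAR)
bicubic  = cv2.resize(face, size, interpolation=cv2.INTER_CUBIC)
```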


Stage 2: Proposed solution - Feature extraction: (Optional) Log-Polar conversion

Useful for dealing with rotation and scale issues.

Log-polar images are based on a polar plane represented by rings and sectors.

ξ = √(x² + y²),  η = arctan(y/x)
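A minimal NumPy sketch of the idea: sample the image along logarithmically spaced rings and uniformly spaced sectors around the center. This is only an illustration of the mapping above, not the exact conversion used in the thesis.

```python
import numpy as np

def log_polar(img, n_rings=64, n_sectors=64):
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    max_r = np.hypot(cy, cx)
    r = np.exp(np.linspace(0.0, np.log(max_r), n_rings))              # log-spaced rings
    theta = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)  # uniform sectors
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                  # n_rings x n_sectors log-polar image

face = np.random.rand(80, 80)           # stand-in for a normalized face
lp = log_polar(face)
```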


Stage 2: Proposed solution - Feature extraction: DWT

1. Let J be the number of decomposition levels.

2. Let F be the wavelet filter used for the decomposition.

3. Apply the discrete wavelet transform to the detected face, using the low-pass and high-pass filters obtained from F, as many times as directed by J.

4. Take the approximation coefficients, discarding the detail coefficients.
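A minimal PyWavelets sketch of these four steps (illustrative only; the choices J = 2 and F = Daubechies 4 are taken from the experiments reported later in the deck):

```python
import numpy as np
import pywt

face = np.random.rand(80, 80)           # stand-in for the detected face
J = 2                                   # number of decomposition levels
F = "db4"                               # wavelet filter (Daubechies 4)

coeffs = pywt.wavedec2(face, F, level=J)
approx = coeffs[0]                      # keep the approximation coefficients
feature_vector = approx.ravel()         # detail coefficients are discarded
```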


Stage 2: Proposed solution - Feature extraction: (Optional) Apply entropy

H(X) = −k ∑_{i=1}^{n} p(x_i) log p(x_i)
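A minimal NumPy sketch of this entropy measure applied to a coefficient subband, estimating p(x_i) from a histogram and taking k = 1 (both are assumptions, since the slide does not fix them):

```python
import numpy as np

def entropy(values, bins=256, k=1.0):
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins so log(0) never occurs
    return -k * np.sum(p * np.log(p))

coeffs = np.random.rand(20, 20)         # stand-in for an approximation subband
print(entropy(coeffs.ravel()))
```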


Stage 2: Proposed solution - Feature extraction: (Optional) Apply autocorrelation

It is the correlation of a signal with itself.

Provides information about the structure of an image.

G(a, b) = ∑_{x=1}^{M} ∑_{y=1}^{N} i(x, y) · i(x − a, y − b)
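A minimal NumPy sketch of the sum above for a small range of shifts (a, b); circular shifts are used for the image borders, which is an assumption since the slide does not specify boundary handling:

```python
import numpy as np

def autocorrelation(img, max_shift=4):
    G = np.zeros((max_shift, max_shift))
    for a in range(max_shift):
        for b in range(max_shift):
            shifted = np.roll(np.roll(img, a, axis=0), b, axis=1)   # i(x - a, y - b)
            G[a, b] = np.sum(img * shifted)
    return G

coeffs = np.random.rand(20, 20)         # stand-in for an approximation subband
print(autocorrelation(coeffs))
```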


Stage 2: Proposed solution - Feature extraction: (Optional) Apply sampling

Reduce the dimensionality of the feature vector that will be sent to the neural network.

Supposing that the size of the detected face is 80 x 80 pixels, and we are using a second decomposition level...
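Since each DWT level roughly halves each dimension, an 80 x 80 face gives an approximation subband of about 20 x 20 coefficients after two levels. The slide does not spell out the sampling scheme; the sketch below simply keeps every other coefficient as one plausible choice.

```python
import numpy as np

approx = np.random.rand(20, 20)         # stand-in for the level-2 approximation subband
sampled = approx[::2, ::2]              # keep every other row and column
feature_vector = sampled.ravel()        # 100 values instead of 400
print(feature_vector.shape)
```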


Stage 2: Proposed solution - Classification: Proposed neural network


Stage 3: Implementation and Results - Faces database

Public face database Faces94 (http://cswww.essex.ac.uk/mv/allfaces/faces94.html)

Images of 153 persons with 20 snapshots of each person.
Image resolution: 180 by 200 pixels (portrait format).
Minor variation in image lighting, head pose and head scale.


Stage 3: Implementation and Results - Experimental design

Validation and results of the feature extraction phase: experiments at this stage will allow us to determine the best method combination (log-polar, autocorrelation, entropy), wavelet base and decomposition level for use in a face recognition system.

Validation and results of the classification phase: experiments directed at finding the best configuration parameters for the proposed neural network.

Validation and results of the preprocessing phase: this test will show the benefit of implementing a preprocessing stage in the proposed system.


Stage 3: Implementation and Results - Validation and results of the feature extraction phase

Method combination:

Log-polar (optional).
DWT.
Entropy or autocorrelation (optional).

Wavelet bases: Bior 1.3, Daubechies 4 and Coif 5.
For classification we use the proposed neural network, with the following configuration parameters:

Number of neurons: layers 1, 2, 3 and 4: 3; layer 5: 1
Minimum error: 0.01
Activation function: layers 1, 2, 3 and 4: sigmoid; layer 5: linear



Recognition rate using the available method combinations.

Decomposition level | Wavelet base | Train patterns | W | W_A | LP_W | LP_W_A
2 | Daub 4 | 100% | 85% | 86.6% | 65% | 55%
2 | Bior 1.3 | 100% | 77% | 79% | 66.7% | 71.7%
2 | Coif 5 | 100% | 72% | 72% | 58.3% | 50%
3 | Daub 4 | 100% | 80% | 85% | 68.3% | 18.3%
3 | Bior 1.3 | 100% | 84% | 83% | 45% | 56.6%
3 | Coif 5 | 100% | 78% | 82% | 36.6% | 26.6%

Recognition rates for W, W_A, LP_W and LP_W_A are on test patterns. W: Wavelet, A: Autocorrelation, LP: Log-polar.


Stage 3: Implementation and Results - Validation and results of the classification phase

Recognition rate obtained by varying the number of neurons and the network minimum error.

Neurons: 2
Minimum error | Training time | Recognition rate (training) | Recognition rate (test)
0.3 | >1 s | 100% | 78.3%
0.2 | >1 s | 100% | 76.6%
0.1 | 1 s | 100% | 88.3%
0.01 | 2 s | 100% | 91.6%
0.001 | 3 s | 100% | 70%
0.0001 | 5 s | 100% | 66.6%
0.00001 | 5 s | 100% | 75%
0.000001 | 7 s | 100% | 65%


Neurons: 4
Minimum error | Training time | Recognition rate (training) | Recognition rate (test)
0.3 | >1 s | 100% | 78.3%
0.2 | >1 s | 100% | 78.3%
0.1 | 1 s | 100% | 88%
0.01 | 2 s | 100% | 95.33%
0.001 | 2 s | 100% | 81.6%
0.0001 | 4 s | 100% | 85%
0.00001 | 6 s | 100% | 81%
0.000001 | 7 s | 100% | 81.6%


Neurons: 6
Minimum error | Training time | Recognition rate (training) | Recognition rate (test)
0.3 | 1 s | 100% | 76.6%
0.2 | 2 s | 100% | 76.6%
0.1 | 2 s | 100% | 83.3%
0.01 | 3 s | 100% | 85%
0.001 | 6 s | 100% | 83.3%
0.0001 | 7 s | 100% | 76.6%
0.00001 | 7 s | 100% | 78.33%
0.000001 | 8 s | 100% | 80%


Neurons: 8
Minimum error | Training time | Recognition rate (training) | Recognition rate (test)
0.3 | 2 s | 100% | 73.3%
0.2 | 2 s | 100% | 83.3%
0.1 | 1 s | 100% | 81.66%
0.01 | 6 s | 100% | 90%
0.001 | 7 s | 100% | 88.33%
0.0001 | 11 s | 100% | 85%
0.00001 | 14 s | 100% | 85%
0.000001 | 16 s | 100% | 76.6%


Stage 3: Implementation and Results - Validation and results of the preprocessing phase

Comparison of the recognition rates obtained by applying a preprocessing stage versus omitting it.


Comparison with a backpropagation neural net.


Conclusions

We presented a new framework for face recognition using the discrete wavelet transform and neural networks.

The following relevant results were obtained:

Preprocessing: we detected an increase of approximately 5% in the recognition rates, which shows that applying techniques that improve the visual quality of the image has a positive influence on overall system performance.


Feature extraction: using the Daubechies 4 wavelet, the second decomposition level and the autocorrelation method gives a recognition rate of 95.33%; this allows us to ascertain that the wavelet transform is an excellent image decomposition and texture description tool.

Classification: the proposed neural network proved to be a feasible and efficient option for face recognition tasks, since it achieved higher recognition rates and shorter training times than a backpropagation network.


Thank you

Questions

References

[1] R.C. Gonzalez and R.E. Woods. Digital Image Processing. Springer US, 2008.

[2] Ergun Gumus, Niyazi Kilic, Ahmet Sertbas, and Osman N. Ucan. Evaluation of face recognition techniques using PCA, wavelets and SVM. Expert Systems with Applications, 37(9):6404–6408, 2010.

[3] R. Jafri and H.R. Arabnia. A survey of face recognition techniques. Journal of Information Processing Systems, 5(2):41–68, June 2009.

[4] S.N. Kakarwal and R.R. Deshmukh. Wavelet transform based feature extraction for face recognition. IJCSA, Issue I, pages 0974–0767, June 2010.

[5] F. Khalid and L. N. A. 3D face recognition using multiple features for local depth information. IJCSNS International Journal of Computer Science and Network Security, 9(1):27–32, 2009.

[6] Masoud Mazloom and Shohreh Kasaei. Face recognition using wavelet, PCA, and neural networks. 2005.

[7] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips. Face Recognition: A Literature Survey. ACM Computing Surveys, pages 399–458, 2003.
