A Project Report
On
FACE DETECTION USING ARTIFICIAL
NEURAL NETWORK
SUBMITTED
In partial fulfillment of the requirement for the award of the degree of
Master of Science
in
COMPUTER SCIENCE (Paper Code: MCS-1005)
SUBMITTED BY
ARIFUL ISLAM
Department of Computer Science, AUS
UNDER THE ABLE GUIDANCE OF
Dr. Prodipto Das
Assistant Professor, Department of Computer Science
Assam University, Silchar
CERTIFICATE
This is to certify that Ariful Islam, bearing Roll: 101610 No: 02220025, has
carried out his work for the project entitled “FACE DETECTION USING
ARTIFICIAL NEURAL NETWORK” under my supervision in partial
fulfillment of the requirement for the award of the degree of Master of Science in
Computer Science of Assam University, Silchar. He has carried out his project
work sincerely and has fulfilled all the requirements laid down in the
regulations of the MSc (Integrated) 10th Semester Examination (Paper MCS-1005)
of the Department of Computer Science, Assam University, Silchar, for
the session 2015-2016.
Date: …………………. Dr. Prodipto Das
Assistant Professor,
Department of Computer Science,
Assam University, Silchar
DEPARTMENT OF COMPUTER SCIENCE
SCHOOL OF PHYSICAL SCIENCES
ASSAM UNIVERSITY SILCHAR
A CENTRAL UNIVERSITY CONSTITUTED UNDER ACT XIII OF 1989
ASSAM, INDIA, PIN - 788011
CERTIFICATE
This is to certify that Ariful Islam, student of 10th Semester, Department of
Computer Science, Assam University, Silchar has developed his project entitled
“FACE DETECTION USING ARTIFICIAL NEURAL NETWORK” under
the able guidance of Dr. Prodipto Das, Lecturer, Department of Computer
Science, Assam University, Silchar, in partial fulfillment of the requirement for
the award of the degree of Master of Science during the period January 2016 to
June 2016.
Date: (PROF. BIPUL SYAM PURKAYASTHA)
Place: Head, Department of Computer Science
Assam University, Silchar
DECLARATION
I, ARIFUL ISLAM, do hereby declare that the project entitled “FACE
DETECTION USING ARTIFICIAL NEURAL NETWORK” has been carried
out by me under the guidance of Dr. Prodipto Das, Assistant Professor,
Department of Computer Science, Assam University, Silchar. Wherever I have
used materials (data, theoretical analysis, figures, and text) from other sources,
I have given due credit to them by citing them in the text of this report and giving
their details in the references.
Date…………………. Ariful Islam
Roll: 101610 No.: 02220025
Regn. No: 22-100001659 of 2010-11
Department Of Computer Science
Assam University, Silchar
ACKNOWLEDGEMENT
My sincere gratitude and thanks go to my project guide Dr.
Prodipto Das, Assistant Professor, Department of Computer Science, Assam
University, Silchar, Assam.
It was only with his backing and support that I could complete the project.
He provided me with all sorts of help and corrected me whenever I seemed to make
mistakes. I have no words to express my gratitude.
I acknowledge my sincere gratitude to the HOD of the Computer Science
Department, Assam University, Silchar. He gave me permission to do the
project work; without his support I could not even have started, and I am grateful
to him.
I acknowledge my sincere gratitude to the lecturers, research scholars and
the lab technicians for their valuable guidance and helping attitude even in their
very busy schedule.
I also acknowledge my sincere gratitude to Mrs. Rosy Swami and Mr. Arif
Iqubal Maunder, research scholars, Department of Computer Science, who always
put their kind effort into this project work.
And last but not least, I thank my dearest parents for being
such a fine source of encouragement and moral support, which helped me
tremendously in this respect.
Place: Assam University, Silchar Ariful Islam
Date: Roll: 101610 No.: 02220025
Department of Computer Science
ABSTRACT
In this project a face detection technique is implemented using an artificial neural
network. PCA is used for dimensionality reduction, so that each image can be
represented in the eigenface coordinate space: a face image is divided into a
number of pixels and plotted according to the eigenvector coordinates. The extracted
low-dimensional feature vectors are then used as input patterns to a BPNN, which
classifies them into one of two possible classes: face or non-face.
The human brain can easily analyse a vast array of faces from the images formed on the
eyes, yet it is very difficult for a computer to locate faces in a digital
image. In the present era of computation, face detection is very useful for performing
face recognition; in fact, face detection is often described as one of the fundamental
pillars of a face recognition system, a fast-growing and challenging area in real-world
applications such as security systems. The experimental results show that PCA
followed by a BPNN classifier is very efficient.
Keywords— Face Detection, Face Recognition, PCA, ANN and BPNN, Eigenfaces.
TABLE OF CONTENTS
CERTIFICATE
CERTIFICATE
DECLARATION
ACKNOWLEDGEMENT
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
1. INTRODUCTION
   1.1. Face Detection
   1.2. Artificial Neural Network for Face Detection
   1.3. Motivation
   1.4. Aim of the Project
2. LITERATURE SURVEY
3. OBJECTIVES AND EXISTING TECHNIQUES
   3.1. Objectives
   3.2. Existing Techniques
4. SYSTEM OVERVIEW
5. METHODOLOGIES
   I. PRINCIPAL COMPONENT ANALYSIS
   II. BACK PROPAGATION NEURAL NETWORK
6. EXPERIMENTAL RESULTS
7. CONCLUSION AND FUTURE WORK
REFERENCES
APPENDIX A: USER INTERFACE
APPENDIX B: DETECTED IMAGE GALLERY
APPENDIX C: SOFTWARE PACKAGE
LIST OF FIGURES
Figure 1: Basic Face Detection System Using Neural Network
Figure 2: Face Detection System as a Part of Face Recognition System
Figure 3: Overview of Developed Face Detection System with Various Functions
Figure 4: Transformation of Face Images into Eigenfaces
Figure 5: Selection of Eigenfaces among a Number of Eigenfaces
Figure 6: BPNN with Single Hidden Layer
Figure 7: Signal Flow Graph (considering ‘j’ as hidden layer and ‘k’ as output neuron)
Figure 8: Backward Pass of Training for Obtaining Input
Figure 9: Combined Algorithm Described by the PCA & BPNN Algorithms
Figure 10: Graphical Representation of Experimental Results Using 20, 50, 100, 150 and 190 Image Datasets
Figure 11: GUI (Graphical User Interface) for Initial Screen
Figure 12: The Main Menu
Figure 13: Loading the Image
Figure 14: Performing PCA and Initializing the Network by Loading Face and Non-face Images from the Dataset
Figure 15: Training Phase of the System Network
Figure 16: Output of the Neural Network around the Candidate Points (real values between -1 and 1)
Figure 17: Face Detection with the Calculated Number of Detected Faces
Chapter 1
1. INTRODUCTION
Face detection is the determination of the locations of faces in an image, using a certain
search strategy, against a complex background. It has many applications such as pattern
recognition, video surveillance, interface applications and identification [1]. The
fundamental approach to face detection is to extract low-dimensional feature vectors from
the input patterns with a feature extractor (here, PCA) and then to classify
the extracted features into one of two possible classes, face or non-face, using a neural
network approach (here, BPNN) [2].
Figure 1: Basic Face Detection System Using Neural Network
1.1 Face Detection:
Face detection and recognition comprise several complementary parts; depending on the
system, each part can also work individually. Face detection is a
computer technology, based on learning algorithms, that locates human faces in digital
images [6]. Face detection takes images or video sequences as input and locates the face
areas within them by separating face areas from non-face background regions.
Face recognition involves comparing an image with a database of stored faces to identify
the individuals in the input image. The related task of face detection has direct relevance to
face recognition because images must be analysed and faces located before they can be
recognized. Detecting faces in an image helps focus the computational resources of the
face recognition system, optimizing the system's speed and performance [6, 3].
Figure 2: Face Detection System as a Part of Face Recognition System
1.2 Artificial Neural Network for Face Detection:
In recent years, different architectures and models of ANN have been used for face detection
and recognition. ANNs can be used for these tasks because such models
simulate the way neurons work in the human brain, which is the main reason for their role in face
recognition [6]. Artificial neural networks have also been widely used in the fields of image
processing (compression, recognition and encryption) and pattern recognition [3]. Many
studies in the literature have used different ANN architectures and models for face detection and
recognition, evaluating performance in terms of compression ratio (CR),
reconstructed image quality such as Peak Signal to Noise Ratio (PSNR), and mean square
error (MSE).
Advantages and Disadvantages of Neural Network
Neural networks offer a number of advantages, including requiring less formal statistical
training, the ability to implicitly detect complex nonlinear relationships between dependent and
independent variables, the ability to detect all possible interactions between predictor variables,
and the availability of multiple training algorithms [7]. Generally, neural network approaches
achieve higher detection accuracy compared to other approaches because they are trained
with a large number of samples [4].
Disadvantages include their “black box” nature, greater computational burden,
proneness to overfitting, and the empirical nature of model development.
1.3 Motivation:
Why face detection?
A face detection system is used in face recognition systems, which enhance security,
provide secure access control, and protect personal privacy.
Face detection improves the performance and reliability of face recognition.
There is no need to remember any password or carry any ID if a face recognition
system is installed.
Why Neural Network?
Adaptive learning: an ability to learn how to do tasks.
Self-organization: an ANN can create its own organization.
A remarkable ability to derive meaning from complicated or imprecise data.
A large number of training samples and error calculations are used each time.
1.4 Aim of the Project:
The aim of the project is to search for faces in a given image. This is done by using
the PCA (Principal Component Analysis) algorithm to extract low-dimensional feature
vectors from the input patterns, which are then fed into a BPNN (Back
Propagation Neural Network) for classification into face and non-face during the training and
testing of the network.
Chapter 2
2. LITERATURE SURVEY
In recent years, many methods have been proposed, which can be roughly grouped into four
categories: template matching, geometrical models, the statistical approach [3] and the neural
network approach [4]. Multi-stage knowledge-based systems use a combination of template
matching and geometrical matching, whereas the statistical and neural network
approaches can achieve higher accuracy between face and non-face patterns when trained with
a large number of samples [5].
Face detection research has attracted focused attention since the 1990s [6].
During the 1970s, simple techniques assuming frontal faces and a uniform background were
applied, and between the 1970s and the 1990s little substantial work was done [7]. Hjelmas and
Low (2001)[8] conducted a survey on face detection techniques and identified two broad
categories that separate the various approaches, appropriately named feature-based and
image-based approaches. They divide the group of feature-based systems into three sub-
categories: Low-level analysis, feature analysis, and active shape models. And the group of
image-based approaches into three sections: Linear subspace methods, neural networks, and
statistical approaches [8].
Neural Network Approach: Rowley et al. (1998) [9] presented a
Neural Network-Based Face Detection system. Over the past few decades, many face
detection approaches based on neural networks, with improved complexity, have been proposed.
Aamer Mohamed et al. (2008) [10] proposed a face detection system based on a BPNN via a
Gaussian mixture model to segment the image based on skin color. Omaima N. A. AL-Allaf in
2014 published “Review of Face Detection Systems Based Artificial Neural Network” [11],
in which a brief description of different ANN approaches and algorithms were described on
the basis of strengths and limitations. It has been seen from the records that out of all the
approaches BPNN has the best performance in detection.
We use neural network based algorithms, which lie under the category of image-based
approaches [12]. In this project, we use a BPNN with PCA to obtain an efficient face detection
method that reduces the computational cost while retaining a high detection rate.
Chapter 3
3. OBJECTIVES AND EXISTING TECHNIQUES
3.1 Objectives:
To identify a (human) face with the help of Artificial Neural Networks
(ANN) in an image using MATLAB.
To study the existing algorithms of face detection on the basis of ANN and to
propose a new algorithm which would be more efficient and beneficial.
To develop an application (or software) for face detection system.
3.2. Existing Techniques:
Following are the major Artificial Neural Network techniques used for Face Detection
System:
a) Retinal Connected Neural Network (RCNN)
b) Rotation Invariant Neural Network (RINN)
c) Principal Component Analysis with ANN (PCA)
d) Fast Neural Networks (FNN)
e) Polynomial Neural Network (PNN)
f) Convolutional Neural Network (CNN)
g) Evolutionary Optimization of Neural Networks
h) Multilayer Perceptron (MLP)
i) Back Propagation Neural Networks (BPNN)
j) Gabor Wavelet Faces with ANN
k) Skin Color and BPNN
l) Cascaded Neural Network
Chapter 4
4. SYSTEM OVERVIEW
Face detection [13] is done by separating face areas from non-face background regions,
while facial feature extraction locates the important features (eyes, cheeks, lips, mouth, nose,
forehead, eyebrows, etc.) within a detected face. The fundamental steps of the face detection
system are as follows:
Normalization [13][14]: The image is scaled and rotated so that it can be mapped to an
appropriate size and pose; at the same time, the system's robustness against scaling, posture,
facial expression and illumination is increased.
The standard face size, as mentioned above, is 20×20 pixels [5]. The proposed face
detection task is shown in the diagram below:
Figure 3: Overview of Developed Face Detection System with Various Functions
Alignment Finder: This determines basic features such as the head's position and size and the
pose of the face, eyes and nose. After extracting the features, we select a few combined features,
which are analysed and stored at different scales to construct a face.
Pre-Processing [15]: To speed up the detection process and reduce false positives (i.e., to
reduce variability) in faces, the images are pre-processed before they are fed into the
network. The pre-processing step can reject an acceptable number of non-face windows.
Classification: A neural network is implemented to classify and localize faces and non-faces,
i.e., the BPNN is trained so that it outputs +1 for a face pattern and -1 for a non-face
pattern [11]. The classification procedure is used to optimize the results of different network
configurations.
Localization [11]: The faces are then localized in bounding boxes after being found by the
trained neural network.
Representation: The system translates the facial data into a unique code for implementation
purpose.
To detect faces of variable sizes and locations, the detector needs to analyse the given input
image at different scales [16]. Each sliding window is examined as a standard-size face
or non-face image. Before PCA is initialized, each test window is treated as a vector,
which is obtained through feature selection.
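The multi-scale sliding-window scan described above can be sketched as follows. This is an illustrative Python/NumPy sketch (the project itself is implemented in MATLAB); the 20×20 window size follows the text, while the step size and scale factor are assumed values, and the function names are hypothetical.

```python
import numpy as np

def sliding_windows(image, win=20, step=4):
    """Yield (row, col, vector) for every win x win patch of a 2-D grey image,
    flattened to a feature vector for PCA and classification."""
    h, w = image.shape
    for r in range(0, h - win + 1, step):
        for c in range(0, w - win + 1, step):
            yield r, c, image[r:r + win, c:c + win].reshape(-1)

def image_pyramid(image, scale=0.8, min_size=20):
    """Yield progressively smaller copies of the image, so that larger faces
    eventually fit the standard 20 x 20 window (nearest-neighbour resizing)."""
    while min(image.shape) >= min_size:
        yield image
        h, w = image.shape
        new_h, new_w = int(h * scale), int(w * scale)
        if min(new_h, new_w) < min_size:
            break
        rows = (np.arange(new_h) / scale).astype(int)
        cols = (np.arange(new_w) / scale).astype(int)
        image = image[rows][:, cols]
```

Scanning every pyramid level with `sliding_windows` gives the set of candidate vectors that PCA then reduces before classification.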
Chapter 5
5. METHODOLOGIES:
I. Principal Component Analysis (PCA):
In this project, the Principal Component Analysis (PCA) [17] technique plays an important role
in feature extraction and feature selection to improve the efficiency of classification. PCA is
mainly used for dimensionality reduction of the pre-processed image. From a statistical
standpoint, PCA is a method for
(i) transforming correlated variables into uncorrelated variables,
(ii) finding linear combinations of the original variables with relatively large or small
variability, and
(iii) reducing data.
PCA [17] is a mathematical procedure that uses orthogonal transformation to convert a set of
values of possibly correlated M variables (face images) into a set of values of K uncorrelated
variables called Principal Components (eigenfaces). The number of principal components
(eigenfaces- K) is always less than or equal to the number of original variables (face images-
M) i.e., K≤ M.
Any face image can be
(i) represented in the eigenface coordinate space, i.e., the face image can be divided into
a number of pixels and plotted according to the eigenvector coordinates, and
(ii) conversely, approximately reconstructed using a small collection
of eigenfaces.
A training set consists of a total of M images;
K eigenfaces are selected to represent the training set, K < M.
Figure 4: Transformation of Face Images into Eigenfaces
To perform the above operation, a face image is transformed into eigenfaces, obtained
after pre-processing the original image, that characterize the variation
between face images. Once a set of eigenfaces is computed, a face image can be
reconstructed using a weighted combination of the eigenfaces. When a new image is
given, the weights are computed by projecting the image onto the eigenface vectors.
The weight vector produced from the test image then goes through the classification process.
Since each principal component shows a direction of the data, and each succeeding component
captures less of the variation and more noise, only the first few principal components (say K)
are selected, and the remaining components are discarded.
K useful Eigenfaces (Selected)
Remaining (M - K) noisy Eigenfaces (Discarded)
Figure 5: Selection of Eigenfaces among a Number of Eigenfaces
PCA [18] Algorithm: Let the training set of images be Γ1, Γ2, ..., ΓM.
Pre-processing (feature scaling/mean normalization/average face of the set):

Ψ = (1/M) Σ_{n=1}^{M} Γn .................. (1)

If different features are on different scales (e.g., X1 = size of a house, X2 = number of
bedrooms), scale the features to have comparable ranges of values. Each face differs from
the average by the vector

Φi = Γi − Ψ .................. (2)

Reduce the data from m dimensions to k dimensions. Compute the covariance matrix:

C = (1/M) Σ_{n=1}^{M} Φn ΦnT = A AT .................. (3)

where the matrix A = [Φ1, Φ2, ..., ΦM].
The set of large vectors is then subjected to principal component analysis, which seeks a set of
M orthonormal vectors u1, ..., uM. To obtain a weight vector Ω of contributions of individual
eigenfaces to a facial image Γ, the face image is transformed into its eigenface components
by the simple operation

ωk = ukT (Γ − Ψ) .................. (4)
Eigenfaces have advantages over the other available techniques, such as speed and efficiency.
To obtain better output from PCA, the faces must be captured from a frontal view under
similar lighting.
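Equations (1) to (4), together with the standard eigenfaces trick of diagonalizing the small M×M matrix ATA instead of the full D×D covariance AAT, can be sketched as follows. This is an illustrative Python/NumPy sketch (the project is implemented in MATLAB) with hypothetical names.

```python
import numpy as np

def eigenfaces(Gamma, K):
    """Gamma: M x D array, one flattened face image per row.
    Returns the average face Psi, the top-K eigenfaces U (K x D),
    and the training weight vectors Omega (M x K)."""
    M, D = Gamma.shape
    Psi = Gamma.mean(axis=0)            # eq. (1): average face
    Phi = Gamma - Psi                   # eq. (2): deviation from the average
    # eq. (3): rather than the D x D covariance A A^T, eigendecompose the
    # small M x M matrix A^T A (here Phi Phi^T); mapping each eigenvector
    # back through Phi gives an eigenvector of A A^T.
    L = Phi @ Phi.T / M
    vals, vecs = np.linalg.eigh(L)
    order = np.argsort(vals)[::-1][:K]  # keep the K largest components
    U = vecs[:, order].T @ Phi          # K x D eigenfaces
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    Omega = Phi @ U.T                   # eq. (4): omega_k = u_k^T (Gamma - Psi)
    return Psi, U, Omega
```

A test window `x` is then projected as `(x - Psi) @ U.T` to obtain the low-dimensional feature vector fed to the classifier.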
II. Back Propagation Neural Network (BPNN):
The standard way to train a multilayer perceptron is a method called back propagation.
It solves the credit assignment problem, which arises when we
try to figure out how to adjust the weights of edges coming from the input layer. Recall that
in the single-layer perceptron we could easily tell which weights were producing the error,
because we could directly observe the weights and the output of those weighted edges.
With a hidden layer, however, the signal passes through a further layer of weights, so the
contribution of the input-layer weights to the error is obscured by the second set of weights
the data passes through [19].
To give a better idea of this problem and its solution, consider a toy problem: a
scientist wants to make billions of dollars by controlling the stock market. He will do this by
controlling the stock purchases of several wealthy people. The scientist controls information
that can be given by Wall Street insiders and has a device to control how much different
people trust each other. Using his ability to plant insider information and control trust
between people, he controls the purchases made by the wealthy individuals. If purchases
can be steered to his advantage, he can gain capital by controlling the market [20].
A common Back Propagation Neural Network [18], a multilayer feed-forward network with
supervised learning, is shown in figure 6 below. For training a multilayer
perceptron (MLP), back-propagation is the best-known and most widely used learning
algorithm. An MLP consists of a set of source nodes forming
the input layer, one or more hidden layers of computation nodes, and an output layer
[18]. The patterns of face and non-face images are presented at the input layer and then
propagated to the hidden layer and the output layer. The inputs, associated with
each neuron or processing element (PE) in the structure, are multiplied by the corresponding
interconnecting weights and serve as inputs to the PEs of the hidden layer. Similarly, the
outputs of the hidden-layer PEs are multiplied by the corresponding interconnecting weights
and serve as inputs to the output layer.
In a Back Propagation Neural Network (BPNN), depending on the position of the PE in
the network and the nature of the problem, PEs can use linear, sigmoid or
hyperbolic tangent transfer functions. With sigmoid PEs in the hidden layer and a single PE
in the output layer, the network's output becomes highly non-linear, and the network can be
thought of as representing a non-linear function of the inputs [13].
Input Nodes Hidden Nodes Output Nodes
Figure 6: BPNN with Single Hidden Layer [18]
TABLE I
Description of Forward Pass and Backward Pass of BPNN

Forward Pass:
1. Take the input.
2. Compute the induced fields and the output of the layer just after the input (the 1st hidden layer).
3. Proceed from the 1st hidden layer to the 2nd layer.
4. Continue until the output layer is reached.
5. Calculate the error term.
(End of forward pass)

Backward Pass:
1. Start with the error term.
2. Calculate the local gradient (LG).
3. Propagate the LGs into the layer before the output.
4. Propagate them one layer further back, and keep going.
5. Ultimately reach the layer just after the input.
6. All the synaptic weights are adjusted.
The only underlying assumption is that while the forward pass and backward pass
are going on, the input is held steady. In the forward pass we have
only signal flow; in the backward pass we compute the errors.
A. BPNN Algorithm:
For neuron j at iteration n, the error can be calculated as
Error = Desired output − Actual output

ej(n) = dj(n) − yj(n) .................. (5)

The instantaneous value of the error energy at the nth iteration is

E(n) = (1/2) Σ_{j∈C} ej2(n) .................. (6)

where C contains all the neurons of the output layer.
The average squared error energy is

Eav(n) = (1/N) Σ_{n=1}^{N} E(n) .................. (7)

where N is the number of patterns presented, i.e., the total number of iterations.
The induced local field at the input of neuron j (the input of the activation function of neuron j) is

vj(n) = Σ_{i=0}^{m} wji(n) yi(n) .................. (8)

The actual output (the output of the previous layer) is

yj(n) = φ(vj(n)) .................. (9)

The local gradient is

δj(n) = ej(n) φ′(vj(n)) .................. (10)

Change of weight = learning rate × local gradient × input of neuron j:

ΔWji(n) = η δj(n) yi(n) .................. (11)

where η is the constant of proportionality (the learning rate).
Case I: Neuron j belongs to the output layer, in which case δj(n) can be calculated directly from (10).
Case II: Neuron j belongs to a hidden layer.
Figure 7: Signal Flow Graph (considering ‘j’ as hidden layer and ‘K’ as output neuron)
The local gradient for a hidden-layer neuron is given by

δj(n) = φ′j(vj(n)) Σ_k δk(n) Wkj(n) .................. (12)
Figure 8: Backward pass of training for obtaining input
Start with the errors, multiply the errors by the derivatives of the activations to get the gradients,
multiply the individual gradients by the synaptic weights, and obtain the gradient of the
preceding layer. BPNN is an efficient method for learning a training set of input-output data,
since it handles a large number of weights in a feed-forward network (FFN) with differentiable
activation units and minimizes the total squared error.
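Equations (5) to (12), for a network with one hidden layer of sigmoid units, can be sketched as a single training step. This is an illustrative Python/NumPy sketch with assumed shapes and a hypothetical train_step helper; bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_step(x, d, W1, W2, eta=0.5):
    """One forward and backward pass. x: input vector, d: desired output vector,
    W1 (hidden x input) and W2 (output x hidden) are updated in place.
    Returns the instantaneous error energy E(n) of eq. (6)."""
    # forward pass: induced local fields and outputs, eqs. (8)-(9)
    y1 = sigmoid(W1 @ x)
    y2 = sigmoid(W2 @ y1)
    e = d - y2                                  # eq. (5): error term
    # backward pass: local gradients; for the sigmoid, phi'(v) = y (1 - y)
    delta2 = e * y2 * (1.0 - y2)                # eq. (10): output-layer gradient
    delta1 = y1 * (1.0 - y1) * (W2.T @ delta2)  # eq. (12): hidden-layer gradient
    # eq. (11): Delta w_ji = eta * delta_j * y_i
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)
    return 0.5 * float(np.sum(e ** 2))          # eq. (6): error energy
```

Repeating the step drives the error energy down, mirroring the forward/backward procedure of Table I.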
B. Training Parameters [18] for Efficient Method:-
Initial Weights: To obtain efficient output, the initial weights are set to
fractions between -1 and +1.
Number of Hidden Units: With n inputs and m outputs, the required number of hidden units is
2n+1, under the condition that the activation function can vary with the function. As hidden
layers are added, the δ values are computed repeatedly, by summing the δ's of the
previous layer that feed into the current layer.
Training a Net: In back propagation the weight adjustment is based on the training patterns,
and the purpose of applying this method is to achieve a balance between memorization and
generalization. It is not always important to drive the error to a minimum by
continuing over the training patterns. While the validation error decreases, training
continues; when the validation error starts increasing, the net has begun to memorize the
training patterns and training is terminated.
Learning Rate: In a back propagation neural network, a small learning rate is used to avoid
major disruption of the direction of learning, and the weight change depends on a
combination of the current gradient and the previous gradient.
Figure 9: Combined algorithm described by the PCA & BPNN algorithms (input image, sub-image, PCA, then BPNN input, hidden and output nodes).
Face Detection Using the BPNN Algorithm:
The proposed algorithm for this project can be described as follows:
Step 1. Read the image and divide it into equal parts of pixels.
Step 2. Convert it into a grey-scale image and find the small-dimension (transposed)
covariance matrix.
Step 3. Find the eigenvectors, and calculate the weights from the eigenfaces.
Step 4. Each part is converted to binary numbers, and the output for each vector is fixed.
Step 5. Weights and biases are initialized randomly for training each part separately using the
neural network tool in MATLAB.
Step 6. Calculate the output of each part from the previous step and apply it as input to the
next step.
Step 7. The above steps are applied to the initial image without dividing it.
Step 8. The results obtained from steps 6 and 7 are compared to check the accuracy of the
approach.
Step 9. The weights in the network are adjusted by the procedure below:
a) Using the sigmoid activation function, compute the hidden-layer and output-layer neuron
outputs.
b) Calculate the errors of the output layer and the hidden layer, and then calculate the total
error of the network.
c) Repeat the weight-adjustment steps until the mean squared error is minimized.
The weight adjustment is given by the equation
ΔWji(n) = η δj(n) yi(n), described earlier in the back propagation algorithm, where η is the
learning rate.
Chapter 6
6. EXPERIMENTAL RESULTS:
For the implementation of the face detection technique, MATLAB R2013a on Windows 8 with
an Intel(R) Core i3 and 4 GB RAM is used.
In this project, we use 45% of the images from the FACES [21] database, each of size
869x592, which is freely available on the website for research purposes; the rest are
personal group images. A back propagation neural network with two hidden layers, initialized
randomly, has been used. We used a single-image dataset in the experiment, which may
contain multiple faces within a background. The dataset is used to train the BPNN classifier
and test the detection performance. Several training sessions were conducted during our
experiments; training performance is measured on the basis of the Mean Square Error (MSE)
[22]. MSE is defined as the average squared difference between output and target. A mean
square error of zero means that there is no error in the network, and the lower the MSE,
the better the network is considered to be. In the experiments we use datasets of 20, 50,
100, 150 and 190 images. The graphical representation is shown in figure 10 below.
The stopping criterion for the MSE is set to 10^-11. Table II reports the detection results,
and Table III summarizes the MSE values obtained during training with the 20, 50, 100, 150
and 190 image datasets.
Calculations:
The False Acceptance Rate (FAR) measures non-face data accepted as face data, while the
False Rejection Rate (FRR) measures face data classified as non-face data [15].

False Acceptance Rate (FAR) = Number of Incorrectly Detected Faces / Total Number of Actual Faces

False Rejection Rate (FRR) = Number of Incorrectly Rejected Faces / Total Number of Actual Faces

Accurate Detection Rate (ADR) = (Number of Correct Detections / Total Number of Inputs) × 100%
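The three rates above can be computed from per-window labels as follows. This is an illustrative Python sketch with a hypothetical detection_rates helper, using 1 for face and 0 for non-face; note that, following the definitions in the text, both FAR and FRR are normalized by the number of actual faces.

```python
def detection_rates(predicted, actual):
    """predicted, actual: sequences of 1 (face) / 0 (non-face), one per test input."""
    n_faces = sum(actual)
    false_accept = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    false_reject = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    correct = sum(p == a for p, a in zip(predicted, actual))
    far = false_accept / n_faces         # incorrectly detected faces / actual faces
    frr = false_reject / n_faces         # incorrectly rejected faces / actual faces
    adr = 100.0 * correct / len(actual)  # correct detections / total inputs, in %
    return far, frr, adr
```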
TABLE II
RESULTS ON THE BASIS OF 100 IMAGES

No. of Images | False Acceptance Rate (FAR) | False Rejection Rate (FRR) | Accurate Detection Rate (ADR)
20            | 42.75%                      | 15.02%                     | 43.62%
50            | 33.90%                      |  7.34%                     | 69.89%
100           | 19.10%                      |  5.38%                     | 76.03%
150           |  6.89%                      |  3.12%                     | 78.82%
190           |  5.12%                      |  1.01%                     | 85.01%
Figure 10: Graphical Representation of Experimental Results using 20, 50, 100, 150 and 190
image datasets.
TABLE III
MEAN SQUARE ERROR

No. of Images | Initial MSE | Final MSE
20            | 0.821534    | 2.99e-12
50            | 0.63452     | 2.98e-12
100           | 0.534205    | 2.95e-12
150           | 0.945621    | 3.63e-12
190           | 1.35259     | 6.98e-12
Chapter 7
7. CONCLUSION AND FUTURE WORK
As pre-processing, Skin Segmentation, Morphological Operation and Component Labelling are performed before the PCA technique is applied. It can be concluded from the experimental results that the detection rate increases with the number of images, whereas FAR and FRR decrease. For the 20-image dataset, the performance of the system is 43.62%, with FAR and FRR of 42.75% and 15.02% respectively. For 190 images, the FAR and FRR decrease gradually to 5.12% and 1.01% respectively, and the detection rate reaches 85.01%, which is the best result obtained. From these results it can also be concluded that the performance of the system grows with the number of dataset images.
The technique improves when the pre-processing steps and the PCA technique are carried out properly, and different classifiers can be used to obtain better training and testing performance. Further development includes pattern recognition, face recognition and other recognition systems. The technique can also be enhanced by further refining the image during pre-processing.
References

[1] R.-Qiong Xu, Bi-Cheng Li and Bo Wang, "Face detection and recognition using neural network and hidden Markov models", IEEE Int. Conf. on Neural Networks & Signal Processing, Nanjing, China, December 14-17, 2003.
[2] F. H. C. Tivive and A. Bouzerdoum, "A Face Detection System Using Shunting Inhibitory Convolutional Neural Networks", IEEE, 2004.
[3] B. Moghaddam and A. Pentland, "Probabilistic visual learning for object representation", IEEE Trans. PAMI, Vol. 19(7), 696-720, 1997.
[4] E. Osuna, R. Freund and F. Girosi, "Training support vector machines: an application to face detection", Proc. CVPR '97, pp. 130-136.
[5] Lin-Lin Huang et al., "Face detection from cluttered images using a polynomial neural network", IEEE, 2001.
[6] R. Chellappa, C. L. Wilson and S. Sirohey, "Human and machine recognition of faces: a survey", Proc. IEEE, Vol. 83, 1995.
[7] T. Sakai, M. Nagao and T. Kanade, "Computer analysis and classification of photographs of human faces", Proc. First USA-Japan Computer Conference, 1972.
[8] E. Hjelmås and B. Low, "Face Detection: A Survey", Computer Vision and Image Understanding, 83, 236-274, 2001.
[9] H. A. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection", IEEE Trans. Pattern Anal. Mach. Intelligence, 20, 23-38, 1998.
[10] Aamer Mohamed et al., "Face Detection based Neural Networks using Robust Skin Color Segmentation", 5th International Multi-Conference on Systems, Signals and Devices, IEEE, 2008.
[11] Omaima N. A. AL-Allaf, "Review of Face Detection Systems Based Artificial Neural Network", IJMA, Vol. 6, No. 1, February 2014.
[12] Khairul Azha A. Aziz et al., "Face Detection Using Radial Basis Function Neural Networks With Variance Spread Value", International Conference of Soft Computing and Pattern Recognition, IEEE, 2009.
[13] Moeen Tayyab and M. F. Zafar, "Face Detection using 2D-Discrete Cosine Transform and Back Propagation Neural Network", International Conference on Emerging Technologies, IEEE, 2009.
[14] Krishna Dharavath, Fazal Ahmed Talukdar and Rabul Hussain Laskar, "Improving Face Recognition Rate with Image Preprocessing", Indian Journal of Science and Technology, Vol. 7(8), 1170-1175, August 2014.
[15] Hossein Ziaei Nafchi and Seyed Morteza Ayatollahi, "A set of criteria for face detection preprocessing", Proceedings of the International Neural Network Society Winter Conference (INNS-WC 2012).
[16] Linlin Huang, Akinobu Shimizu and Hidefumi Kobatake, "Face Detection using a Modified Radial Basis Function Neural Network", IEEE, 2002.
[17] Hyeonjoon Moon et al., "Computational and performance aspects of PCA-based face-recognition algorithm".
[18] P. Latha et al., "Face Recognition using Neural Networks", Signal Processing: An International Journal (SPIJ), Volume 3.
[19] A. Weitzenfeld, M. A. Arbib and A. Alexander (2002), The Neural Simulation Language: A System for Brain Modeling, The MIT Press.
[20] C. M. Bishop (1995), Neural Networks for Pattern Recognition, Oxford University Press.
[21] Face detection, URL: http://www.vision.caltech.edu/htmlfiles/archive.html
[22] Tej Pal Singh, "Face Recognition by using Feed Forward Back Propagation Neural Network", International Journal of Innovative Research in Technology & Science (IJIRTS).
Appendix A
APPENDIX A: USER INTERFACE
The main aim is to detect faces inside an image, which becomes difficult under variations in illumination. The task is to build a dataset of face images of the persons present in the image and to train the system with it, using non-face regions or parts of faces as negative examples. After running the program, the first step is to create a neural network. The program loads the face and non-face images and then presents a menu in a graphical user interface (GUI) dialog box. The figures below show the systematic steps of the face detection technique:
Figure 11: GUI (Graphical User Interface) for Initial Screen
Figure 12: The Main Menu
Figure 13: Loading the Image
Figure 14: Performing PCA and Initializing the Network by Loading Face and Non-face Images from the Dataset
Figure 15: Training phase of the System Network
Figure 16: Output of the neural network at the candidate points; the outputs are real values between -1 and 1.
The candidate points are the points at which the correlation between the face templates and the initially loaded image is estimated. Owing to illumination variations in the images, the detection becomes more difficult.
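The decision step described above can be sketched as follows; the threshold value and the candidate scores are assumptions for illustration, not taken from the report's implementation:

```python
# Keep the candidate points whose network output (a real value in [-1, 1],
# as in Figure 16) exceeds a decision threshold.

def detect_faces(candidate_scores, threshold=0.0):
    """Return the candidate points classified as faces.

    `candidate_scores` maps each candidate point (x, y) to the network
    output at that point; values near +1 suggest a face, near -1 a non-face.
    """
    return [pt for pt, score in candidate_scores.items() if score > threshold]

# Hypothetical network outputs at three candidate points:
scores = {(10, 20): 0.92, (40, 55): -0.75, (80, 15): 0.31}
faces = detect_faces(scores)   # [(10, 20), (80, 15)]
count = len(faces)             # 2 detected faces
```

The number of kept points is what the system reports as the number of detected faces in Figure 17.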
Figure 17: Face Detection, with the Calculated Number of Detected Faces Shown
Appendix B
The pictures above show face detection results; the images were captured from MATLAB after running the program.
Appendix C