[IEEE 2012 10th International Conference on ICT and Knowledge Engineering (ICT & Knowledge Engineering 2012) - Bangkok, Thailand (2012.11.21-2012.11.23)] 2012 Tenth International Conference




Multi-Feature Face Recognition based on PSO-SVM

Sompong Valuvanathorn
Department of Information Technology
Faculty of Information Technology, KMUTNB
Bangkok, Thailand
[email protected]

Supot Nitsuwat
Department of Mathematics
Faculty of Applied Science, KMUTNB
Bangkok, Thailand
[email protected]

Mao Lin Huang
Faculty of Engineering and Information Technology
University of Technology, Sydney
New South Wales, Australia
[email protected]

Abstract—Face recognition is a form of identification and authentication that mainly uses the global-face feature. Nevertheless, its recognition accuracy rate is still not high enough. This research aims to develop a method that increases recognition efficiency by using the global-face feature together with local-face features from 4 parts: the left eye, right eye, nose and mouth. We used 115 face images from the BioID face dataset for training and testing; each person's images are divided into 3 images for training and 2 images for testing. The processed histogram based (PHB), principal component analysis (PCA) and two-dimensional principal component analysis (2D-PCA) techniques are used for feature extraction. In the recognition process, we used the support vector machine (SVM) for classification, combined with particle swarm optimization (PSO) to select the parameters G and C automatically (PSO-SVM). The results show that the proposed method can increase the recognition accuracy rate.

Keywords— global-face feature, local-face feature, PHB, PCA, 2D-PCA, SVM, PSO

I. INTRODUCTION

Presently, it is necessary to have a security system at every important location. These places require systems that can identify and authenticate people who want to gain entrance. A face recognition system can identify and authenticate persons from both still images and image sequences. It is a popular application in secure places such as immigration departments, banks, airports, etc. Closed-circuit television (CCTV) systems are usually installed in these locations to record events. When an abnormal situation occurs, the recorded images are examined to find the people involved. The important tasks performed on these images are the detection and identification of individual faces. Searching for a face in an image sequence by hand normally takes a lot of time and manpower, which reduces performance and authentication accuracy.

There are several methods used for the classification step of face recognition. The SVM was invented by Vladimir N. Vapnik and proposed by Cortes and Vapnik in 1995 [1]. SVM has become a popular classification technique in many applications, particularly face recognition. For example, Guodong et al. [2] compared nearest center classification (NCC) and SVM; the SVM learning algorithm performed better than the NCC approach. Zhang et al. [3] presented a Gabor-face SVM and compared it with Fisher discriminant analysis (FDA) and Eigenface methods; the Gabor-face SVM gave better effectiveness and performance than the other methods. Okabe et al. [4] compared Fisher's linear discriminant and SVM techniques and confirmed that the SVM method was effective for object recognition under varying illumination conditions. Sani et al. [5] focused on the SVM classification method for face recognition; adaptive multiscale retinex (AMSR) was introduced to reduce the effect of varying lighting conditions before classification, and the approach was compared with the PCA method. The SVM method's processing times and performance were better than those of PCA. Kisku et al. [6] compared K-nearest neighbor (K-NN) and SVM; the performance of SVM was much better than that of the K-NN classifiers. Pei et al. [7] proposed a hybrid of the wavelet transform (WT), non-negative matrix factorization with sparseness constraints (NMFs) and SVM (WT+NMFs+SVM), and compared the results with the WT+PCA+SVM and WT+NMFs+K-NN methods; WT+NMFs+SVM increased the accuracy rate and was more robust to expression. Mazanec et al. [8] presented a combination of PCA and linear discriminant analysis (LDA) with SVM; LDA+SVM achieved the maximum recognition rate of 100%. Ying et al. [9] used NMF for feature dimension reduction and SVM for classification, comparing with PCA and kernel PCA (KPCA); NMF combined with SVM performed very well in recognition. Gumus et al. [10] compared PCA combined with three different SVM kernel types: linear polynomial (POLY-LINEAR), quadratic polynomial (POLY-QUAD) and radial basis function (RBF); they obtained the highest recognition rate with the RBF kernel. Timotius et al. [11] presented a combination of generalized


978-1-4673-2317-8/12/$31.00©2012 IEEE



discriminant analysis (GDA) and SVM for face recognition. The results showed that the performance of SVM with GDA was better than that of SVM without GDA. Recently, Le et al. [12] proposed two-dimensional principal component analysis (2D-PCA) combined with SVM. They also compared it with PCA combined with a multilayer perceptron (MLP), PCA+K-NN, PCA+SVM and 2D-PCA+K-NN, and reported that the 2D-PCA+SVM method gave the best performance.

However, the selection of training parameters has a heavy impact on the performance of the SVM, and the optimal values of the parameters G and C are hard to estimate. Many researchers have solved this problem by using the PSO method [13, 14] to find the optimal parameters G and C for the SVM automatically, reporting very positive experimental results. For example, Hsu et al. [15] compared classification using a normal SVM and PSO-SVM; the results indicated that the PSO-SVM method had higher accuracy. Marami et al. [16] focused on a face detection algorithm that used PSO to search for frontal faces in the image plane and SVM to classify candidate faces in the two-dimensional solution space; the experimental results demonstrated the algorithm's good performance and efficiency. Recently, Wei et al. [17] suggested PSO-SVM as the technique of choice for face recognition, comparing it with SVM and a back-propagation neural network (BPNN); the experiments indicated that the face recognition accuracy of PSO-SVM was higher than that of the other methods.

Generally, face recognition can be performed on both still images and image sequences. Research in this domain mainly uses the global-face feature; nevertheless, the recognition accuracy rate is not high enough. Hence, this paper develops a technique to enhance face recognition accuracy using both the global-face feature and local-face features from 4 parts: the right eye, left eye, nose, and mouth. The major technique used to increase the accuracy of face recognition is SVM combined with PSO (PSO-SVM). In PSO-SVM, an SVM with the RBF kernel is used for the classification of each part, while PSO is used to simultaneously optimize the SVM parameters G and C. We also compared face recognition performance using only the global-face feature with that using both the global-face and local-face features.

This paper is organized as follows. Section 2 presents a summary of the literature: the support vector machine, particle swarm optimization and the PSO-SVM method. The experimental results are shown in Section 3. Finally, the conclusion is given in Section 4.

II. LITERATURE REVIEWS

A. Support Vector Machine (SVM)

The SVM is a statistical classification method proposed by Cortes and Vapnik [1]. It is a popular machine learning algorithm because it can solve non-linear classification problems in a higher-dimensional space with good results. The SVM concept can be described as a search for the best separating hyperplane between two classes in the input space. The best separating hyperplane is the one that maximizes the margin, i.e. the distance from the hyperplane to the closest patterns of each class. These closest patterns are called support vectors. The basic SVM idea is shown in Figure 1.

Figure 1. Basic Support Vector Machine classifier.

In the binary support vector machine (BSVM), the SVM performs classification by constructing an optimal separating hyperplane (OSH). An OSH maximizes the margin between the nearest data points belonging to the two classes. Suppose D is a dataset of n labeled training samples, where the $x_i$ are training samples in p-dimensional real space and the $y_i$ are their labels:

$$D = \{(x_i, y_i) \mid x_i \in \mathbb{R}^p,\ y_i \in \{-1, +1\}\},\quad i = 1, \dots, n, \qquad (1)$$

and the separating hyperplane satisfies

$$w \cdot x + b = 0. \qquad (2)$$

In the linearly separable case, the corresponding decision function can be expressed as $f(x) = (w \cdot x) + b$. The distance from the hyperplane to a point x is $|f(x)| / \|w\|$, so the margin between the two classes is

$$\text{margin} = \frac{2}{\|w\|}. \qquad (3)$$

Therefore, the hyperplane that optimally separates the data is the one that minimizes

$$\min_{w,\,b}\ \frac{1}{2}\|w\|^2 \qquad (4)$$

$$\text{subject to}\quad y_i (w \cdot x_i + b) \ge 1. \qquad (5)$$

Lagrange multipliers $\alpha_i \ge 0$ are used to solve this optimization problem, which can be expressed as

$$\min_{w,\,b}\ L(w, b, \alpha) = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{n} \alpha_i \left[ y_i (w \cdot x_i + b) - 1 \right]. \qquad (6)$$

The Lagrange function is minimized with respect to w and b. Classical Lagrange duality enables the primal problem (6) to be transformed into its dual problem, which is easier to solve. The dual problem is defined as follows:



$$\max_{\alpha}\ Q(\alpha) = \max_{\alpha}\ \min_{w,\,b}\ L(w, b, \alpha), \qquad \alpha \ge 0. \qquad (7)$$

The solution to the dual problem is given by

$$Q(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) \qquad (8)$$

$$\text{subject to}\quad \alpha_i \ge 0,\ i = 1, \dots, n;\quad \sum_{i=1}^{n} \alpha_i y_i = 0. \qquad (9)$$

Solving Equation (8) under the constraints (9) determines the Lagrange multipliers. The optimal separating hyperplane is then given by

$$\tilde{w} = \sum_{i=1}^{n} \tilde{\alpha}_i y_i x_i \qquad (10)$$

$$\tilde{b} = -\frac{1}{2}\, \tilde{w} \cdot (x_u + x_v), \qquad (11)$$

where $x_u$ and $x_v$ are support vectors. Therefore, we can obtain the optimal discrimination function

$$f(x) = \mathrm{sgn}(\tilde{w} \cdot x + \tilde{b}) = \mathrm{sgn}\!\left( \sum_{i=1}^{n} \tilde{\alpha}_i y_i (x_i \cdot x) + \tilde{b} \right). \qquad (12)$$

For a new data point x, the classifier is defined as

$$f(x) = \mathrm{sign}(\tilde{w} \cdot x + \tilde{b}). \qquad (13)$$

In 1992, Boser et al. [18] suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes [19]. The SVM maps the training data into a high-dimensional feature space through kernel functions K(xi, xj). These kernel functions satisfy the condition of Mercer's theorem and can substitute for the inner product (xi · xj) of the original feature space. The decision boundary is linear in this space, so the linear technique above can be used directly. There are several types of kernel functions, such as the homogeneous polynomial, inhomogeneous polynomial, Gaussian radial basis function (RBF) and hyperbolic tangent kernels. In our experiments, we use the SVM with the Gaussian RBF kernel because it gives a better recognition rate.
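As an illustration of the decision function in equation (12) with the RBF kernel substituted for the inner product, the sketch below evaluates a kernelized classifier. The two support vectors, multipliers and bias are made-up values for demonstration only, not parameters from the paper's experiments.

```python
import math

def rbf_kernel(x, z, gamma):
    """Gaussian RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def svm_decision(x, support_vectors, alphas, labels, bias, gamma):
    """Kernelized decision function: sign(sum_i alpha_i * y_i * K(x_i, x) + b)."""
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return 1 if s + bias >= 0 else -1

# Illustrative (hypothetical) trained parameters: two support vectors.
svs = [(0.0, 0.0), (2.0, 2.0)]
alphas = [1.0, 1.0]
labels = [-1, +1]
bias = 0.0

print(svm_decision((1.9, 2.1), svs, alphas, labels, bias, gamma=0.5))  # near (2, 2): class +1
```

Only the kernel evaluation changes when switching kernel types; the voting sum over support vectors stays the same.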

B. Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is a global optimization technique and a type of evolutionary computation, originally proposed by Kennedy et al. [13]. This method optimizes a problem by having a population (called a swarm) of candidate solutions (called particles). These particles are moved around in the search space according to simple formulae. The movements of the particles are guided by the best positions found in the search space, which are continually updated as better positions are found by the particles. Let $f : \mathbb{R}^n \to \mathbb{R}$ be the cost function to be minimized. Let S be the number of particles in the swarm, each having a position $x_i \in \mathbb{R}^n$ in the search space and a velocity $v_i \in \mathbb{R}^n$. Let $p_i$ be the best known position of particle i and let g be the best known position of the entire swarm. A basic PSO algorithm is given as follows:

Step 1: (Initialization) for each particle i = 1, …, S, do:

Initialize the particle's position with a uniformly distributed random vector: $x_i \sim U(b_{low}, b_{upp})$, where $b_{low}$ and $b_{upp}$ are the lower and upper boundaries of the search space.

Initialize the particle's best known position to its initial position: $p_i \leftarrow x_i$.

If $f(p_i) < f(g)$, update the swarm's best known position: $g \leftarrow p_i$.

Initialize the particle's velocity:

$$v_i \sim U(-(b_{upp} - b_{low}),\ b_{upp} - b_{low}) \qquad (14)$$

Step 2: (Fitness) until the termination criterion is reached (e.g. a maximum iteration number, or a solution with an adequate objective function value is found), repeat the following steps:

Create random vectors $r_p$ and $r_g$, where $r_p, r_g \sim U(0, 1)$.

Step 3: (Update) update the particle's velocity:

$$v_i \leftarrow \omega v_i + \phi_p r_p (p_i - x_i) + \phi_g r_g (g - x_i), \qquad (15)$$

where the parameters $\omega$, $\phi_p$ and $\phi_g$ are selected by the practitioner and control the behavior and efficacy of the PSO method.

Step 4: (Construction) update the particle's position by adding the velocity: $x_i \leftarrow x_i + v_i$; note that this is done regardless of any improvement to the fitness.

If $f(x_i) < f(p_i)$, update the particle's best known position: $p_i \leftarrow x_i$.

If $f(p_i) < f(g)$, update the swarm's best known position: $g \leftarrow p_i$.

Now g is the best solution found so far.

Step 5: (Termination) stop the algorithm if the termination criterion is satisfied.
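The steps above can be sketched as a minimal, self-contained implementation. The sphere cost function and the coefficient values $\omega = 0.7$, $\phi_p = \phi_g = 1.5$ are illustrative choices, not taken from the paper.

```python
import random

def pso(f, dim, n_particles=20, iters=100,
        b_low=-5.0, b_upp=5.0, w=0.7, phi_p=1.5, phi_g=1.5, seed=0):
    """Basic PSO as in Steps 1-5: velocity/position updates guided by
    each particle's best position (p_i) and the swarm's best (g)."""
    rnd = random.Random(seed)
    # Step 1: initialize positions and velocities uniformly within bounds.
    xs = [[rnd.uniform(b_low, b_upp) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[rnd.uniform(-(b_upp - b_low), b_upp - b_low) for _ in range(dim)]
          for _ in range(n_particles)]
    ps = [x[:] for x in xs]                       # best known positions p_i
    g = min(ps, key=f)[:]                         # swarm's best known position g
    for _ in range(iters):                        # Step 2: until max iterations
        for i in range(n_particles):
            for d in range(dim):
                rp, rg = rnd.random(), rnd.random()
                # Step 3: v_i <- w*v_i + phi_p*rp*(p_i - x_i) + phi_g*rg*(g - x_i)
                vs[i][d] = (w * vs[i][d]
                            + phi_p * rp * (ps[i][d] - xs[i][d])
                            + phi_g * rg * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]              # Step 4: move the particle
            if f(xs[i]) < f(ps[i]):
                ps[i] = xs[i][:]
                if f(ps[i]) < f(g):
                    g = ps[i][:]
    return g                                      # Step 5: best found solution

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=2)
print(best, sphere(best))
```

On the 2-D sphere function the swarm collapses toward the origin within the 100 iterations used here.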

C. PSO-SVM Method

In this paper, the recognition accuracy rate from SVM classification is used as the fitness function of the PSO method. We used the PSO method to find the optimal parameters G and C for the SVM automatically. This model is called the PSO-SVM method. The PSO-SVM working flowchart is shown in Figure 2.
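As a sketch of this loop, the fragment below runs the same PSO over the (C, G) plane, but with a hypothetical quadratic surrogate standing in for the SVM's cross-validation accuracy, since reproducing the real fitness would require the trained SVM and the dataset. The bounds, particle count and the surrogate's optimum are all illustrative assumptions.

```python
import random

def pso_svm_search(fitness, iters=60, n_particles=15, seed=1,
                   c_range=(0.1, 100.0), g_range=(0.01, 10.0)):
    """Search for (C, G) maximizing a fitness (here, a stand-in for SVM
    cross-validation accuracy) with a basic PSO, mirroring Figure 2."""
    rnd = random.Random(seed)
    lo = (c_range[0], g_range[0])
    hi = (c_range[1], g_range[1])
    xs = [[rnd.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    vs = [[0.0, 0.0] for _ in range(n_particles)]
    ps = [x[:] for x in xs]
    g_best = max(ps, key=fitness)[:]
    w, phi_p, phi_g = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                rp, rg = rnd.random(), rnd.random()
                vs[i][d] = (w * vs[i][d] + phi_p * rp * (ps[i][d] - xs[i][d])
                            + phi_g * rg * (g_best[d] - xs[i][d]))
                # Clamp each parameter to its valid range after the move.
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo[d]), hi[d])
            if fitness(xs[i]) > fitness(ps[i]):
                ps[i] = xs[i][:]
                if fitness(ps[i]) > fitness(g_best):
                    g_best = ps[i][:]
    return g_best  # the (C, G) pair with the best fitness found

# Hypothetical surrogate fitness peaking at C = 8.0, G = 0.5; a real system
# would return the SVM's cross-validation accuracy for these parameters.
mock_accuracy = lambda p: 1.0 - 0.001 * (p[0] - 8.0) ** 2 - 0.5 * (p[1] - 0.5) ** 2
best_c, best_g = pso_svm_search(mock_accuracy)
```

Note that fitness is maximized here (accuracy), whereas the generic PSO above minimizes a cost; only the comparison direction changes.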



Figure 2. The flowchart of PSO-SVM algorithm.

III. EXPERIMENTATION AND RESULTS

This section presents the details of our experiment and its results as follows:

Figure 3. The process of multi-feature face recognition based on PSO-SVM.

In this study, we used the BioID face dataset, which contains 115 gray-scale images of 23 people, male and female, with 5 different images per person. The images were taken with varying light intensity, facial expression (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses, mustache/no mustache).

A. Preparation for the experiment

From each image, we cropped only the face region; this is called the global-face image. We then separated the global-face image into 4 component parts: the left eye, right eye, nose and mouth, called the local-face images. Examples of both global-face and local-face images are shown in Figure 4. The resolutions of the global-face and local-face images are as follows:

The whole face image is 87x115 pixels.

Both left-eye and right-eye image is 57x47 pixels.

The nose image is 48x38 pixels.

The mouth image is 79x44 pixels.

All images in this experiment are processed with histogram equalization before the next step is conducted.
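This preprocessing step can be sketched as follows; it is a generic histogram equalization over 8-bit gray levels on a toy nested-list "image", not the exact implementation used in the paper.

```python
def equalize_histogram(img, levels=256):
    """Standard histogram equalization for an 8-bit gray-scale image given
    as a list of rows: remap each gray level through the scaled CDF."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the pixel levels.
    cdf = []
    total = 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each level so the output is spread over the full gray range.
    lut = [round(max(c - cdf_min, 0) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A tiny low-contrast "image": values clustered in 100..103.
img = [[100, 100, 101, 101],
       [101, 102, 102, 102],
       [102, 103, 103, 103]]
eq = equalize_histogram(img)
```

After equalization the clustered levels are stretched across 0..255, which reduces the influence of the varying light intensity mentioned above.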

Figure 4. Example of global-face and local-face images.

B. Feature extraction

After the five parts are separated, the images are processed by the feature extraction step. In this research, three methods are used for feature extraction: PHB [20], PCA [21] and 2D-PCA [22]. The extracted features are then stored for use in the training and testing steps of PSO-SVM.
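Of the three extraction methods, a histogram-based feature is the simplest to illustrate. The sketch below computes a normalized gray-level histogram as a fixed-length feature vector; it is a generic stand-in, and the actual PHB processing of [20] may differ.

```python
def histogram_feature(img, bins=16, levels=256):
    """Normalized gray-level histogram of an image region, usable as a
    fixed-length feature vector regardless of the region's resolution."""
    flat = [p for row in img for p in row]
    width = levels // bins              # gray levels covered by each bin
    hist = [0] * bins
    for p in flat:
        hist[min(p // width, bins - 1)] += 1
    n = len(flat)
    return [h / n for h in hist]        # normalize so the features sum to 1

# Example: a 2x4 region; with bins=16, each bin covers 16 gray levels.
feat = histogram_feature([[0, 15, 16, 255],
                          [128, 129, 200, 40]], bins=16)
```

Because the vector length depends only on the bin count, the same extractor applies to the 87x115 face and the smaller eye, nose and mouth regions alike.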

C. Recognition by PSO-SVM

In this experiment, the global-face feature and the 4 local-face features are used for training and testing the SVM; the number of classes is 23. The data of each individual is divided 3:2, with 3 different images used for training and 2 different images used for testing. Classification is conducted using the Gaussian radial basis function kernel.

While the SVM is being trained, the PSO is used to find the optimal parameters G and C. The PSO parameters are: a population size of 30; a maximum number of iterations of 200; and c1 = c2 = 1.6. The PSO-SVM process is repeated until the convergence/stopping criterion (the number of iterations) is reached, giving the optimal parameters. The SVM is then trained again using these parameters. Finally, the results of all parts are integrated using the majority vote technique to determine the final recognition result.
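The final integration step can be sketched as a simple majority vote over the five part classifiers; the per-part class labels below are hypothetical, and the tie-breaking rule (earliest-listed part wins) is an assumption not specified in the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the per-part predictions (face, right-eye, left-eye, nose,
    mouth) by majority vote; ties resolve to the earliest-listed part."""
    counts = Counter(predictions)
    top = max(counts.values())
    # Return the first prediction (in part order) that reached the top count.
    return next(p for p in predictions if counts[p] == top)

# Hypothetical per-part class labels for one probe image (classes 1..23):
parts = {"face": 7, "right_eye": 7, "left_eye": 7, "nose": 12, "mouth": 3}
final = majority_vote(list(parts.values()))
print(final)  # class 7 wins with 3 of 5 votes
```

The vote lets a correct eye or nose classifier outvote a misled whole-face classifier, which is the mechanism behind the accuracy gains reported in Table IV.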




D. Experiment results

In this paper, three techniques are compared in terms of performance and processing time: PHB, PCA and 2D-PCA. These methods are implemented using the same face dataset. The results are shown in Tables I-III, respectively.

TABLE I. PERFORMANCE AND PROCESSING TIME OF FACE RECOGNITION USING PHB+PSO-SVM

| Features  | Acc (%) | Best-C  | Best-g   | Best-CVacc | Time (sec.) |
| Face      | 50.00   | 16.0557 | 1.0722   | 75.3623    | 64.9713     |
| Right-eye | 45.65   | 0.1000  | 362.1334 | 69.5652    | 64.5943     |
| Left-eye  | 47.83   | 7.0678  | 1.0442   | 69.5652    | 64.7846     |
| Nose      | 50.00   | 60.7797 | 1.7390   | 71.0145    | 64.4418     |
| Mouth     | 41.30   | 0.1000  | 499.9465 | 50.7246    | 64.8166     |

TABLE II. PERFORMANCE AND PROCESSING TIME OF FACE RECOGNITION USING PCA+PSO-SVM

| Features  | Acc (%) | Best-C  | Best-g | Best-CVacc | Time (sec.) |
| Face      | 67.39   | 15.9998 | 0.0100 | 73.9130    | 2,850.4000  |
| Right-eye | 86.96   | 38.6613 | 0.0100 | 75.3623    | 782.5296    |
| Left-eye  | 80.43   | 72.4533 | 0.0100 | 76.8116    | 777.8463    |
| Nose      | 71.74   | 2.5261  | 0.0100 | 68.1159    | 543.3903    |
| Mouth     | 52.17   | 0.1000  | 0.0100 | 59.4203    | 978.5643    |

TABLE III. PERFORMANCE AND PROCESSING TIME OF FACE RECOGNITION USING 2D-PCA+PSO-SVM

| Features  | Acc (%) | Best-C  | Best-g | Best-CVacc | Time (sec.) |
| Face      | 84.78   | 4.2044  | 0.0100 | 85.5072    | 472.6437    |
| Right-eye | 89.13   | 8.1086  | 0.0100 | 84.0580    | 145.4333    |
| Left-eye  | 80.43   | 24.6035 | 0.0100 | 84.0580    | 157.9439    |
| Nose      | 69.57   | 20.8646 | 0.0100 | 66.6667    | 127.5686    |
| Mouth     | 56.52   | 17.6501 | 0.0100 | 63.7681    | 150.5923    |

Table IV shows the comparison of recognition results between using only the global-face feature and using the combination of the global-face and local-face features.

TABLE IV. COMPARISON OF PERFORMANCE AND PROCESSING TIME OF FACE RECOGNITION

|                   | Global-Face Feature   | Global-Face + Local-Face Feature |
| Methods           | Acc (%) | Time (min.) | Acc (%) | Time (min.)            |
| PHB + PSO-SVM     | 50.00   | 1.08        | 58.70   | 5.39                   |
| PCA + PSO-SVM     | 67.39   | 47.51       | 93.48   | 98.88                  |
| 2D-PCA + PSO-SVM  | 84.78   | 7.88        | 95.65   | 17.57                  |

From the results in Table IV, the recognition accuracy rates using only the face feature with the PHB + PSO-SVM, PCA + PSO-SVM and 2D-PCA + PSO-SVM methods are 50.00%, 67.39% and 84.78%, respectively. When the face, right-eye, left-eye, nose and mouth features are integrated, the recognition accuracy rates of the PHB + PSO-SVM, PCA + PSO-SVM and 2D-PCA + PSO-SVM methods increase to 58.70%, 93.48% and 95.65%, respectively, as shown in Figure 5.

Figure 5. The graph shows the comparison of various methods of face recognition.

IV. CONCLUSIONS

This research aimed to develop a method that increases the efficiency of face recognition using the global-face feature together with local-face features. The BioID face dataset is used in our experiments. The proposed method is based on SVM classification with the RBF kernel; LIBSVM is used to implement this classification. Moreover, we integrated PSO with the SVM to select its parameters C and G automatically. Three methods are used for feature extraction: PHB, PCA and 2D-PCA. When using the global-face feature only, the PHB method gave the lowest accuracy rate and the 2D-PCA method the highest. When we integrated the global-face feature with the local-face features, the recognition accuracy rates of the three methods increased to 58.70%, 93.48% and 95.65%, respectively; the 2D-PCA method achieved the best recognition accuracy.

Considering processing time, the PHB method was fast, but its recognition accuracy rate was low; the 2D-PCA method, meanwhile, used less processing time than the PCA method. Therefore, our approach increases the efficiency of face recognition, and 2D-PCA combined with PSO-SVM is a good choice for face recognition applications. In future work, we plan to compare this method with different optimization techniques, such as genetic algorithms and grid search, on the same dataset.

REFERENCES

[1] C. Cortes and V. Vapnik, "Support vector Network," Machine Learning, vol. 20, pp. 273-297, 1995.

[2] G. Guodong, S. Z. Li and C. Kapluk, "Face recognition by support vector machines," in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on, 2000, pp. 196-201.

[3] B. Zhang, W. Gao, S. Shan and Y. Peng, "Discriminant Gaborfaces and Support Vector Machines Classifier for Face Recognition," in Asian Conference on Computer Vision (ACCV2004). Jeju Island, Korea, 2004, pp. 37-42.

[4] T. Okabe and Y. Sato, "Support vector machines for object recognition under varying illumination conditions," in the 6th Asian Conference on Computer Vision (ACCV2004), Jeju Island, Korea, 2004, pp. 724-729.

[5] M. M. Sani, K.A.Ishak and S. A. Samad, "Classification using Adaptive Multiscale Retinex and Support Vector Machine for Face Recognition System," Journal of Applied Sciences, vol. 10, pp. 506-511, 2010.

[6] D. R. Kisku, H. Mehrotra, J. K. Sing and P. Gupta, "SVM-based Multiview Face Recognition by Generalization of Discriminant



Analysis," International Journal of Electrical and Computer Engineering, vol. 3, no. 7, pp. 474-479, 2008.

[7] J. Pei and L. Yongjie, "A hybrid face recognition algorithm based on WT, NMFs and SVM," in Cybernetics and Intelligent Systems, 2008 IEEE Conference on, 2008, pp. 734-737.

[8] J. Mazanec, M. Melisek, M. Oravec and J. Pavlovicova, "Support Vector machines, PCA and LDA in Face Recognition," Journal of Electrical Engineering, vol. 59, No. 4, pp. 203-209, 2008.

[9] Z. Ying and G. Zhang, "Facial Expression Recognition Based on NMF and SVM," in Information Technology and Applications, 2009. IFITA '09. International Forum on, 2009, pp. 612-615.

[10] E. Gumus, N. Kilic, A. Sertbas and O. N. Ucan, "Eigenfaces and Support Vector Machine Approaches for Hybrid Face Recognition," The Online Journal on Electronics and Electrical Engineering (OJEEE), vol. (2), No. (4), 2010.

[11] I. K. Timotius, T. C. Linasari, I. Setyawan and A. A. Febrianto, "Face recognition using support vector machines and generalized discriminant analysis," in Telecommunication Systems, Services, and Applications (TSSA), 2011 6th International Conference on, 2011, pp. 8-10.

[12] T. H. Le and L. Bui, "Face Recognition Based on SVM and 2DPCA," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 4, no. 3, pp. 85-94, 2011.

[13] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Neural Networks, 1995. Proceedings., IEEE International Conference on, 1995, pp. 1942-1948 vol.4.

[14] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, 1998, pp. 69-73.

[15] C.-Y. Hsu, C.-H. Yang, Y.-C. Chen and M.-c. Tsai, "A PSO- SVM Lips Recognition Method Based on Active Basis Model," in Genetic and Evolutionary Computing (ICGEC), 2010 Fourth International Conference on, 2010, pp. 743-747.

[16] E. Marami and A. Tefas, "Face detection using particle swarm optimization and support vector machines," presented at the Proceedings of the 6th Hellenic conference on Artificial Intelligence: theories, models and applications, Athens, Greece, 2010.

[17] J. Wei, Z. Jian-qi and Z. Xiang, "Face recognition method based on support vector machine and particle swarm optimization," Expert Systems with Applications, vol. 38, pp. 4390-4393, 2011.

[18] B. E. Boser, I. M. Guyon and V. N. Vapnik, "A training algorithm for optimal margin classifiers," presented at the Proceedings of the fifth annual workshop on Computational learning theory, Pittsburgh, Pennsylvania, United States, 1992.

[19] A. Aizerman, E. M. Braverman and L. I. Rozoner, "Theoretical foundations of the potential function method in pattern recognition learning," Automation and Remote Control, vol. 25, pp. 821-837, 1964.

[20] M. J. Swain and D. H. Ballard, "Indexing Via Color Histograms," in Third International Conference on Computer Vision (ICCV), 1990, pp. 390-393.

[21] K. Pearson, On Lines and Planes of Closest Fit to Systems of Points in Space: University College, 1901.

[22] J. Yang, D. Zhang, A. F. Frangi and J. Yang, "Two-dimensional PCA: A new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.