
[IEEE 2005 International Conference on Neural Networks and Brain - Beijing, China (13-15 Oct. 2005)]




Face Recognition based on PCA and Multi-degree of Freedom Neurons

WANG Shoujue, LIU Xingxing
Artificial Neural Networks Laboratory, Institute of Semiconductors,
Chinese Academy of Sciences
P. O. Box 912, Beijing 100083, P.R. China
E-mail: [email protected]

Abstract-An algorithm for PCA-based face recognition using Multi-degree of Freedom Neurons theory is proposed. It is based on the topological character of the sample sets in the feature space, which differs from "classification". Compared with the traditional PCA+NN algorithm, experiments demonstrate its efficiency.

I. INTRODUCTION

Automatic human face recognition is attractive for personal identification and is widely used in security, banking and commerce. Compared with other biometric identification techniques such as fingerprint recognition and retina recognition, it is direct, friendly and convenient, so research on face recognition has become very active [1]. Face recognition based on geometrical features was proposed by D.L.Domoho [2]; other algorithms are based on deformable templates, the character of the eyes, neural networks [5][6][7] and the generalized symmetry transform [3]. In this paper we introduce a novel face recognition algorithm based on Multi-degree of Freedom Neurons theory [4]; experiments prove its efficiency.

II. PRINCIPAL COMPONENT ANALYSIS (PCA)

Commonly, a face image is described as a high-dimensional vector in image space; each facial image can be regarded as a point in that space. Fig 1 shows this idea graphically. Because a face has a strong symmetric structure (eyes, nose, mouth and so on), the vectors are correlated, and the images of one person congregate in a certain area of the space. We can therefore project face images onto an array of eigenvectors, which we obtain from the covariance matrix of the training face images.

Suppose the face image is of size L by L; this image can be described as an L²-dimensional vector, i.e., a point in a space of L² dimensions. A set of images corresponds to a set of points. Because the distribution is not random, we can project it onto a low-dimensional subspace. PCA is a common dimension-reduction method which gives the basis vectors for this subspace; each basis vector is an eigenvector of the covariance matrix of the original face images.

CAO Wenming
Institute of Intelligent Information System, Information College,
Zhejiang University of Technology
Hangzhou 310032, P.R. China
E-mail: luf@zjut.edu.cn

Fig 1 Image space (axes: image space coordinates 1, 2, 3)

Suppose $I_1, I_2, \ldots, I_S$ is a training set; its average face can be defined by:

$A = \frac{1}{S}\sum_{i=1}^{S} I_i$  (1)

The difference of a face image and the average face is $Y_i = I_i - A$. The covariance matrix $C$ is defined by:

$C = \frac{1}{S}\sum_{i=1}^{S} Y_i Y_i^{T}$  (2)

Select the $M$ eigenvectors with the largest eigenvalues, and then obtain the corresponding values for every image:

$W_{iK} = E_K^{T}(I_i - A), \quad \forall i, K$  (3)

where $E_K$ is one of the $M$ largest eigenvectors and $K$ ranges from 1 to $M$. Fig 2 illustrates face reconstruction corresponding to the 10 largest eigenvectors in the experiments:

Fig 2

The test image $I_{test}$ is projected onto the face space by:

0-7803-9422-4/05/$20.00 ©2005 IEEE


$W_{test,K} = E_K^{T}(I_{test} - A), \quad \forall K$  (4)
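The PCA steps of Eqs. (1)-(4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the array sizes and the random "images" are placeholder values.

```python
import numpy as np

# Toy "face images": S images, each flattened to an L*L-dimensional vector.
rng = np.random.default_rng(0)
S, L = 6, 8
I = rng.random((S, L * L))            # rows are the flattened images I_1..I_S

A = I.mean(axis=0)                    # average face, Eq. (1)
Y = I - A                             # differences Y_i = I_i - A
C = (Y.T @ Y) / S                     # covariance matrix, Eq. (2)

# Keep the M eigenvectors with the largest eigenvalues.
M = 4
eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
E = eigvecs[:, ::-1][:, :M]           # columns are E_1..E_M, largest first

W = Y @ E                             # projections W_{iK} = E_K^T (I_i - A), Eq. (3)

# Projecting a previously unseen test image, Eq. (4):
I_test = rng.random(L * L)
W_test = E.T @ (I_test - A)
print(W.shape, W_test.shape)          # (6, 4) (4,)
```

In practice the eigenvectors are often computed from the smaller $S \times S$ matrix $Y Y^T$ when $S \ll L^2$; the direct form above is kept for clarity.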

III. THE CONSTRUCTION AND ALGORITHM ABOUT MULTI-DEGREE OF FREEDOM NEURONS

In high-dimensional space, an arbitrary subspace of the r-dimensional Euclidean space $\Re^r$ can be described by equations as the image of some real vector space, or as the set of all affine combinations of a finite set of independent points:

$F = \{x \in \Re^r : x = \lambda_0 x_0 + \cdots + \lambda_k x_k, \ \lambda_i \in \Re, \ \textstyle\sum_i \lambda_i = 1\}$  (5)

That is, every subspace can be described both as an intersection of hyperplanes and as the hull of a finite point set.

Also, a convex polyhedron $K$ can be seen as a convex point set $F \subset \Re^r$. Since every intersection of convex sets is convex, the convex hull of $F$ can be constructed as the intersection of all convex sets that contain $F$:

$K = \mathrm{conv}(F) = \bigcap \{F' \subseteq \Re^r : F \subseteq F', \ F' \ \mathrm{convex}\}$  (6)

Suppose $\vartheta$ is a constant scalar, and let

$U = \{x \mid \rho(x, K) \le \vartheta\}$  (7)

$U$ is a cover hull: a convex polyhedron covered with a $\vartheta$-thick layer in $\Re^r$.

Now, we may describe a multi-degree of freedom neuron in mathematical terms by the following threshold function:

$f(x, \vartheta) = \begin{cases} 1, & x \in U \\ 0, & x \notin U \end{cases}$  (8)

where $x$ is the input signal and $f$ is the output due to the input signal. Here we do not consider the synaptic weights of the neuron. From the point of view of high-dimensional space, the input signal $x$ can be an arbitrary vector in $\Re^r$. If $x$ is inside the hyper-surface $U$, the neuron is activated and outputs 1; otherwise it outputs 0. The neuron's degree of freedom is measured by the dimension of $K$: if $K$'s dimension is $r$, then the degree of freedom of the neuron is $r$, too.

Several straightforward conclusions immediately follow.

Corollary 1: A 0-degree-of-freedom neuron is an RBF neuron.

Corollary 2: A 1-degree-of-freedom neuron is a hyper sausage neuron.

Corollary 3: A 2-degree-of-freedom neuron is a 3-thresholded neuron.
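As a concrete instance of Eqs. (7)-(8), the sketch below implements the 1-degree-of-freedom case of Corollary 2, where $K$ is a line segment and $U$ is the $\vartheta$-thick cover around it. The points and the threshold are made-up illustration values, not from the paper.

```python
import numpy as np

def dist_to_segment(x, b1, b2):
    """Distance rho(x, K) where K is the segment [b1, b2] (the 1-DOF case)."""
    d = b2 - b1
    # Clamp the projection parameter to [0, 1] so we stay on the segment.
    t = np.clip(np.dot(x - b1, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(x - (b1 + t * d))

def neuron_output(x, b1, b2, theta):
    """Eq. (8): output 1 if x lies inside the theta-thick cover U, else 0."""
    return 1 if dist_to_segment(x, b1, b2) <= theta else 0

b1, b2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(neuron_output(np.array([0.5, 0.05]), b1, b2, 0.1))  # 1: inside the cover
print(neuron_output(np.array([0.5, 0.50]), b1, b2, 0.1))  # 0: outside
```

Setting the segment length to zero recovers Corollary 1 (an RBF-like ball around a single point).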

IV. THE LEARNING ALGORITHM OF CONSTRUCTING THE MULTI-DEGREE OF FREEDOM NEURONS

Step 1: Suppose the set of all sample points of one class is $\alpha = \{A_1, A_2, \ldots, A_N\}$, where $N$ is the number of sample points. Compute the distances among these points and find the two points with the shortest distance between them; record them as $B_{11}$ and $B_{12}$. Then find the two points, not collinear with $B_{11}, B_{12}$, whose sum of distances to these two points is shortest; record them as $B_{13}, B_{14}$. They compose the first tetrahedron $B_{11}B_{12}B_{13}B_{14}$; record it as $\theta_1$, and cover it with a Multi-degree of Freedom Neuron. The covering range is:

$P_1 = \{X \mid \rho_{X\theta_1} \le Th, \ X \in \Re^r\}$

$\theta_1 = \{Y \mid Y = \alpha_3\{\alpha_2[\alpha_1 B_{11} + (1-\alpha_1)B_{12}] + (1-\alpha_2)B_{13}\} + (1-\alpha_3)B_{14}, \ \alpha_1, \alpha_2, \alpha_3 \in [0,1]\}$

$\rho_{X\theta_1}$ is the distance between the point $X$ and the region $\theta_1$.

Step 2: For the previously constructed geometrical structure $P_1$, judge whether the remaining points are included in it; if they are within its covering range, remove them. For the sample points outside the structure, follow the method of Step 1: find the point $B_{21}$ whose sum of distances to the vertexes $B_{11}B_{12}B_{13}B_{14}$ is shortest, then find the three points whose distances to $B_{21}$ are shortest and record them as $B_{22}, B_{23}, B_{24}$. Together with $B_{21}$ they compose the second tetrahedron $B_{21}B_{22}B_{23}B_{24}$; record it as $\theta_2$, and cover it with a Multi-degree of Freedom Neuron. Its covering range is:

$P_2 = \{X \mid \rho_{X\theta_2} \le Th, \ X \in \Re^r\}$

$\theta_2 = \{Y \mid Y = \alpha_3\{\alpha_2[\alpha_1 B_{21} + (1-\alpha_1)B_{22}] + (1-\alpha_2)B_{23}\} + (1-\alpha_3)B_{24}, \ \alpha_1, \alpha_2, \alpha_3 \in [0,1]\}$

$\rho_{X\theta_2}$ is the distance between the point $X$ and the region $\theta_2$.

Step 3: Exclude from the remaining sample points those already included in the covering volume of the former $(i-1)$ Multi-degree of Freedom Neurons. Among the points outside the covering volume, find the point whose sum of distances to the former tetrahedron's vertexes is shortest; record it as $B_{i1}$. Record the three points whose distances to $B_{i1}$ are shortest as $B_{i2}, B_{i3}, B_{i4}$, composing the $i$-th tetrahedron $B_{i1}B_{i2}B_{i3}B_{i4}$; record it as $\theta_i$, and cover it with a Multi-degree of Freedom Neuron. Its covering range is:

$P_i = \{X \mid \rho_{X\theta_i} \le Th, \ X \in \Re^r\}$



$\theta_i = \{Y \mid Y = \alpha_3\{\alpha_2[\alpha_1 B_{i1} + (1-\alpha_1)B_{i2}] + (1-\alpha_2)B_{i3}\} + (1-\alpha_3)B_{i4}, \ \alpha_1, \alpha_2, \alpha_3 \in [0,1]\}$

Step 4: Repeat Step 3 until all sample points have been dealt with.
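The covering loop of Steps 1-4 can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the vertex-selection rule is reduced to "take the four points nearest a seed", and the distance to each covered region is approximated by a grid search over the parameters $\alpha_1, \alpha_2, \alpha_3$ (the paper's own distance computation is also an approximation algorithm). All point data are toy values.

```python
import numpy as np

def dist_to_simplex(x, B, steps=11):
    """Grid-search approximation of the distance from x to the region
    spanned by the nested convex combination of the four points in B."""
    g = np.linspace(0.0, 1.0, steps)
    best = np.inf
    for a1 in g:
        for a2 in g:
            for a3 in g:
                p = a1 * B[0] + (1 - a1) * B[1]
                p = a2 * p + (1 - a2) * B[2]
                p = a3 * p + (1 - a3) * B[3]
                best = min(best, np.linalg.norm(x - p))
    return best

def build_covers(points, Th):
    """Greedy loop of Steps 1-4: repeatedly pick four nearby uncovered
    points as a new neuron's vertices and drop every point within Th."""
    remaining = [np.asarray(p, float) for p in points]
    neurons = []
    while remaining:
        # Simplified seed choice: sort by distance to the first point
        # and take the nearest four as the tetrahedron's vertices.
        seed = remaining[0]
        remaining.sort(key=lambda p: np.linalg.norm(p - seed))
        B = (remaining[:4] + [remaining[-1]] * 4)[:4]  # pad if fewer than 4
        neurons.append(B)
        remaining = [p for p in remaining if dist_to_simplex(p, B) > Th]
    return neurons

pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5), (5.1, 5)]
covers = build_covers(pts, Th=0.5)
print(len(covers))  # 2: one neuron per cluster
```

The loop terminates because every tetrahedron covers its own vertices, so each pass removes at least one point.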

Finally, the quantity of created Multi-degree of Freedom Neurons will be $m$. Each class's covering range is the union of these neurons' covering areas:

$P = \bigcup_{i=1}^{m} P_i$

For recognition, let $Th = 0$; the neuron's expression is:

$\rho = \| X - \theta_{(W_1, W_2, W_3, W_4)} \|$

The neuron's output $\rho$ is the distance from the point $X$ to the finite region $\theta_{(W_1, W_2, W_3, W_4)}$. The distance is calculated by an approximation algorithm.

The distance between a sample $X$ waiting to be recognized and the covering range of the Multi-degree of Freedom Neurons built from the $i$-th class's high-dimensional space points is

$\rho_i = \min_{j=1,\ldots,M_i} \rho_{ij}, \quad i = 1, \ldots, k$

where $M_i$ is the number of the neural network's neurons built from the $i$-th class's points, and $\rho_{ij}$ is the distance between the sample $X$ and the covering range of the $j$-th neuron of the $i$-th class. The class whose Multi-degree of Freedom Neurons have the shortest distance to the sample $X$ is taken as the class of $X$. The discriminant is:

$J = \arg\min_{i} \rho_i, \quad i \in \{1, \ldots, k\}$
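The two-level decision rule (first $\rho_i = \min_j \rho_{ij}$ per class, then the arg-min over classes) can be sketched directly. The distance values below are hypothetical, standing in for the outputs of trained neurons:

```python
# Hypothetical per-class neuron distances rho[i][j]: the distance of one
# test sample to the j-th neuron's covering range of class i.
rho = {
    "class_0": [0.8, 0.3, 0.9],
    "class_1": [0.7, 0.6],
    "class_2": [0.2, 0.5, 0.4],
}

# rho_i = min_j rho_ij, then pick the class with the smallest rho_i.
per_class = {c: min(ds) for c, ds in rho.items()}
decision = min(per_class, key=per_class.get)
print(decision)  # class_2
```

A rejection threshold can be added on top: if even the smallest $\rho_i$ exceeds a limit, the sample is reported as unacquainted rather than assigned to a class.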

V. EXPERIMENTAL RESULTS

In the experiments we used both the UMIST and Yale databases. The images in the Yale database have different illumination and expression, so we use it to analyze the effect of illumination and expression; meanwhile we use the UMIST database to analyze the effect of face position. We compare the traditional PCA+NN algorithm with PCA + Multi-degree of Freedom Neurons.

Experiment 1: From the Yale database we select 8 people, and select 6 images of each of the 8 people to train the neural network. We then test the remaining 5 images and the other 7 people's images.

TABLE 1 Comparison of the efficiency using the Yale database

Method                                  Correct rate   Rejection rate   Error rate
PCA+NN                                  70%            0                30%
PCA+Multi-degree of Freedom Neurons     95%            100%             5%

From the table above we can see that, under the variation of illumination and expression, the correct rate of PCA + Multi-degree of Freedom Neurons is higher than that of PCA+NN; it also correctly rejects all of the unacquainted images. So we can say that, under the variation of illumination and expression, PCA + Multi-degree of Freedom Neurons is superior to PCA+NN.

Fig 3 the training image (a) and testing image (b) in the Yale database

Experiment 2: From the UMIST database we select 15 people, and select 8 images of each of the 15 people to train the neural network. We then test the remaining 2 images and the other 5 people's images.

TABLE 2 Comparison of the efficiency using the UMIST database

Method                                  Correct rate   Rejection rate   Error rate
PCA+NN                                  72.5%          0                27.5%
PCA+Multi-degree of Freedom Neurons

From the table above we can see that, under the variation of face position, the correct rate of PCA+Topological manifolds is higher than that of PCA+NN; it also correctly rejects all of the unacquainted images. So we can say that, under the variation of face position, PCA+Topological manifolds is superior to PCA+NN.

Fig 4 the training image (a) and testing image (b) in the UMIST database

Experiment 3: We compare the efficiency of the traditional PCA+NN algorithm and PCA + Topological manifolds theory when the dimension of the eigenvector space changes. If the number of eigenvectors is increased, the recognition rate



increases; however, the complexity increases as well. When M>30, the recognition rate does not increase obviously. Because the PCA face recognition algorithm based on topological manifolds theory relies on the sample sets' topological character in the feature space, which differs from "classification", its recognition rate is much better.

Fig 5 Comparison of the efficiency when M is changed (recognition rate of PCA+MDFN and PCA+NN as M varies from 0 to 65)

VI. CONCLUSION AND PROBLEM

From the experiments we can draw the conclusions below:

1) Face recognition based on Multi-degree of Freedom Neurons does not recognize people who were not trained, which is closer to the function of a human being.

2) When new training samples are added, the structure of the original samples does not change.

3) In our experiments, we must make sure the preprocessing is a continuous projection; otherwise it will affect the efficiency.

Meanwhile, some problems exist. When forming a Multi-degree of Freedom Neuron (cover hull), its size is important: if the size is bigger, the chance of an image falling into the Multi-degree of Freedom Neuron increases, which raises the correct recognition rate as well as the error recognition rate.


The number of one person's samples affects the structure of that person's topological manifolds. When the number of samples increases, the chance of an image falling into the manifold also increases, so the correct recognition rate is higher.

REFERENCES

[1] Deslauriers, G. and Dubuc, S., Symmetric iterative interpolation processes, Constr. Approx. (1989) 5(1):49-68.

[2] Monterey, California, Wavelet mathematics and application, CRC Press, Inc., 1994.

[3] Ami Harten, Multi-resolution representation of data: a general framework, SIAM J. Numer. Anal. (1996) 33(3):1205-1255.

[4] Wang Shoujue, A new development on ANN in China: Biomimetic pattern recognition and multi weight vector neurons, Lecture Notes in Artificial Intelligence 2639: 35-43, 2003.

[5] Wang Shoujue, Xu Jian, Wang Xianbao, Qin Hong, Multi camera human face personal identification system based on the biomimetic pattern recognition, Acta Electronica Sinica, Vol. 31, No. 1, Jan. 2003, 1-3.

[6] Wenming Cao, Feng Hao, Shoujue Wang, The application of DBF neural networks for object recognition, Inf. Sci. (2004) 160(1-4):153-160.

[7] Wenming Cao, Jianhui Hu, Gang Xiao, Shoujue Wang, Iris recognition algorithm based on point covering of high-dimensional space and neural network, Lecture Notes in Artificial Intelligence 3587, pp. 305-313, 2005.


Fig 6 the structure of different sizes
