[IEEE 2005 International Conference on Neural Networks and Brain - Beijing, China (13-15 Oct. 2005)] 2005 International Conference on Neural Networks and Brain - Application of Chaotic


Application of Chaotic Neural Network on Face Recognition

Jin Zhang1, Guang Li1*, Le Wang1, and Walter J. Freeman2
1Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, P.R. China

2Division of Neurobiology, University of California at Berkeley, LSA 142, Berkeley, CA 94720-3200, USA
[email protected]

Abstract—Based on the study of the biological olfactory system, a chaotic neural network model called the K set model has been set up. This chaotic neural network has potential for pattern recognition while presenting a novel chaotic concept for signal processing. This paper studies the characteristics of the K set models and applies them to face recognition. Experimental results show that the chaotic model based on the biological olfactory system performs efficiently for image pattern classification.

I. INTRODUCTION

Face recognition (FR) plays an increasingly important role in a wide range of applications, such as criminal identification, credit card verification, security systems, and scene surveillance. However, a straightforward implementation is difficult since faces exhibit significant variations in appearance due to acquisition conditions, variable illumination, pose and aging [1]. In order to get better performance, many FR algorithms have been proposed [2]. In general, the whole recognition procedure includes two steps: feature extraction and classification. Because feature extraction bears directly on performance, many feature extraction algorithms have been proposed, such as principal component analysis (PCA) [3, 4], independent component analysis (ICA) [5], linear discriminant analysis (LDA) [6, 7] and so on. Upon extraction of the proper feature sets, classifiers

such as the nearest neighbor classifier, Bayesian classifiers [8, 9] and support vector machines [10] are applied to recognize face images. Meanwhile, neural networks [9, 11] are also widely used for pattern classification. Although most artificial neural networks simulate some important features such as the threshold behavior and plasticity of synapses, they imitate biological neural networks only coarsely and are much simpler than real biological neural systems.

Based on experimental study of the olfactory systems of rabbits and salamanders, a chaotic neural network mimicking the olfactory system, called the KIII model, has been established. Built according to the architecture of the olfactory neural system, this chaotic neural network can closely simulate the EEG waveforms observed in biological experiments. In contrast to conventional linear methods, the KIII model has strong nonlinear characteristics.

In this paper, we employ this novel chaotic neural network mimicking the olfactory system as a pattern classifier for face recognition. Before classification, features are extracted using the wavelet packet transform based on the idea of image partition. The experimental results show that the KIII model performs well in face recognition and has potential for other pattern recognition tasks.

II. OLFACTORY NEURAL SYSTEM AND K SET MODELS

A. Biological olfactory neural system

The olfactory system consists of four main parts: the

receptor array (R), the olfactory bulb (OB), the anterior olfactory nucleus (AON) and the prepyriform cortex (PC); see the left part of Fig. 1. Different olfactory receptors (R), which are sensitive to different odorants, can detect a large variety of odor molecules and then send information through their axons to the olfactory bulb (OB). The olfactory bulb is mainly composed of two kinds of cells: excitatory mitral cells (M) and inhibitory granule cells (G). It is the local negative feedback between the M and G cells that creates the oscillations in the gamma range observed in the bulbar EEG. The bulbar neurons send their axons by way of the lateral olfactory tract (LOT) to the AON and the PC. In the AON and PC there are also excitatory (E) and inhibitory (I) cells, and the negative feedback between these cells supports oscillations in both the AON and PC, as in the OB, but at different characteristic frequencies. The PC then sends axons into the external capsule (EC) in the brain. In the other direction, the AON and PC send axons back through the medial olfactory tract (MOT) to the OB. There are two subsidiary control elements in the olfactory system. One is the layer of periglomerular cells (P) in the outer layer of the OB, which serves to preprocess receptor input to the OB. The periglomerular cells are excitatory to each other and also project forward to the mitral cells in the OB. The P cells provide a positive bias for the M and G cells to maintain the oscillation. The other control element is the anterior olfactory nucleus (AON) [12].

0-7803-9422-4/05/$20.00 ©2005 IEEE


Fig. 1. Topological diagram of the olfactory neural system (left) and the KIII network (right) (adapted from Chang and Freeman [16])

B. K Set Model

According to the olfactory neural architecture, the K set model was built. A schematic diagram of the KIII network, in which KO, KI and KII sets are included, is shown in the right part of Fig. 1. In the K set model, each node represents a neural population or cell ensemble. The dynamical behavior of each cell ensemble of the olfactory system can be governed by equation (1):

$$\frac{1}{ab}\left[\ddot{x}_i(t) + (a+b)\,\dot{x}_i(t) + ab\,x_i(t)\right] = \sum_{j \neq i} W_{ij}\, Q\!\left(x_j(t), q_j\right) + I_i(t) \qquad (1)$$

$$Q(x_i(t), q) = \begin{cases} q\left(1 - e^{-(e^{x_i(t)} - 1)/q}\right), & x_i(t) > x_0 \\ -1, & x_i(t) < x_0 \end{cases} \qquad (2)$$

$$x_0 = \ln\!\left(1 - q\,\ln(1 + 1/q)\right)$$

Here i = 1, ..., N, where N is the number of channels. In this equation x_i(t) represents the state variable of the ith neural population and x_j(t) represents the state variable of the jth neural population, which is connected to the ith, while W_ij indicates the connection strength between them. I_i(t) is an input function, which stands for the external stimulus. Physiological experiments confirm that a linear second-order derivative is an appropriate choice. The parameters a = 0.220 msec^-1 and b = 0.720 msec^-1 reflect two rate constants derived from electro-physiological experiments. Q(x_j(t), q_j) is a nonlinear sigmoid function derived from the Hodgkin-Huxley equation and is expressed as (2), converting from wave amplitude to pulse density. In equation (2), q represents the maximum asymptote of the sigmoid function, which is also obtained from biological experiments.

K set models include KO, KI, KII and KIII models [13, 21]. Each light circle or node represents a cell ensemble, either excitatory (P, M, E, A, C) or inhibitory (G, I, B). When the neurons in an ensemble have no interactions with other ensembles, we represent it with a KO set. Both the sets of R and C are examples of KO sets, since the cell ensemble does not contain any interaction among its neurons. When the neurons in an ensemble interact reciprocally, we use a KI set, which we model with two excitatory (inhibitory) KO sets mutually connected to form a KI(e) (KI(i)) set. The sets of P and M are examples of KI(e) sets, and the set of G is a KI(i) set. The KII set, which is a coupled nonlinear oscillator used to simulate channels in the OB, AON and PC layers with both positive and negative connections, consists of two reciprocally coupled KI(i) and KI(e) sets. From the schematic diagram of principal types of neurons, pathways, and synaptic connections in the olfactory mucosa, bulb, and cortex, the coupling of these KO, KI, and KII sets by feedforward and time-delayed feedback loops, which are either excitatory or inhibitory, forms a five-layer KIII set to model the entire olfactory system. In accordance with the anatomic picture of the olfactory system, the R, P and OB layers show an n-channel parallel-distributed architecture.

The KIII network describes the whole olfactory neural system. It includes populations of neurons, local synaptic connections, and long forward and distributed time-delayed feedback loops. In the topology of the KIII network, R represents the olfactory receptor, which is sensitive to the odor molecule and offers the input to the KIII network. The periglomerular (PG) layer and the olfactory bulb (OB) layer are distributed and in this paper contain n channels. The AON and PC layers are each composed of a single KII network. The parameters in the KIII network, including the connection strength values between different nodes and the gain values of the lateral connections, feedforward and feedback loops, were optimized by measuring the olfactory evoked potentials and EEG, simulating their waveforms and statistical properties, and fitting the simulated functions to the data by means of nonlinear regression. After the parameter optimization, the KIII network generates EEG-like waveforms with 1/f power spectra [14, 15]. Numerical analysis of the KIII network showed that the system presents an aperiodic oscillation when there is no stimulus, and passes from the chaotic stage into a more ordered stage when the stimulus begins [16, 26].
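Equations (1) and (2) can be explored numerically. The sketch below is a minimal Euler integration of a single pair of coupled second-order nodes (an excitatory-inhibitory pair, loosely a KII-like oscillator). The rate constants a and b are the values quoted above; the connection weights, the asymptote q, the stimulus amplitude and the time step are illustrative choices, not the optimized KIII parameters.

```python
import numpy as np

A, B = 0.220, 0.720  # rate constants from the text, in 1/msec

def Q(x, q=5.0):
    """Asymmetric sigmoid of equation (2), converting wave amplitude to pulse density."""
    x0 = np.log(1.0 - q * np.log(1.0 + 1.0 / q))
    return np.where(x > x0, q * (1.0 - np.exp(-(np.exp(x) - 1.0) / q)), -1.0)

def simulate(W, stim, T=400.0, dt=0.01, q=5.0):
    """Euler integration of (1/ab)[x'' + (a+b)x' + ab*x] = sum_j W_ij Q(x_j) + I_i(t).

    W    : (N, N) matrix, W[i, j] = connection strength from node j to node i
    stim : callable, t (msec) -> length-N input vector I(t)
    Returns the trajectory of x, shape (steps, N).
    """
    n = W.shape[0]
    x = np.zeros(n)                      # state variables (wave amplitudes)
    v = np.zeros(n)                      # first derivatives
    steps = int(T / dt)
    traj = np.empty((steps, n))
    for k in range(steps):
        drive = W @ Q(x, q) + stim(k * dt)
        acc = A * B * drive - (A + B) * v - A * B * x   # solve (1) for x''
        v += dt * acc
        x += dt * v
        traj[k] = x
    return traj

# Mitral-granule-like pair: node 0 excites node 1, node 1 inhibits node 0.
W = np.array([[0.0, -1.5],
              [1.5,  0.0]])
stim = lambda t: np.array([0.5, 0.0]) if 100.0 <= t <= 300.0 else np.zeros(2)
traj = simulate(W, stim)
```

With zero initial conditions and zero input, Q(0) = 0 and the pair stays at rest; activity appears only during the stimulus window, mirroring the input-driven state transition the text describes.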

In the KIII network, the formation and consolidation of these local basins are implemented by changing the weights between corresponding nodes, in accordance with the biological increase and decrease of the synaptic connection strengths, which have been evaluated by curve-fitting solutions of the equations to impulse responses of the olfactory system to electrical stimuli. Compared with well-known low-dimensional deterministic chaotic systems such as the Lorenz, Rossler and logistic attractors, nervous systems are high-dimensional, non-autonomous and noisy systems [17]. In the KIII network, independent rectified Gaussian noise was introduced to every olfactory receptor to model the peripheral excitatory noise, and a single channel of Gaussian noise with excitatory bias was introduced to model the central biological noise sources. The additive noise eliminated numerical instability of the KIII model and made the system trajectory stable and robust under statistical measures, which means that under perturbation of the initial conditions or parameters the system trajectories remain robustly stable [18]. Because of this stochastic chaos, the KIII network not only simulated the chaotic EEG waveforms, but also acquired the capability for pattern recognition, which simulates an aspect of biological intelligence, as demonstrated by previous applications of the KIII network to recognition of one-dimensional sequences, industrial data and spatiotemporal EEG patterns [19, 20].

C. Learning Rule

We study the n-channel (n = 16, 32, 64) KIII model, which

means that the R, P and OB layers all have n units, either KO (R, P) or KII (OB). When a stimulus is presented to some of the n channels while the other channels receive the background input (zero), the OB units in these channels experience a state transition from the basal activity to a limit cycle attractor and the oscillation in these units increases dramatically. Therefore, we choose the activity of the M1 node in the OB unit as the scale that indicates the extent to which the channel is excited. In other words, the output of the KIII set is taken as a 1×n column vector at the mitral level to express the AM patterns of the local basins, specifically the input-induced oscillations in the gamma band (20 Hz to 80 Hz) across the n channels during the input-maintained state transition. To put it clearly, the activity of the M1 node is calculated as the root mean square (RMS) value of the M1 signal over the period of stimulation.
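As a concrete sketch of this readout, the 1×n AM pattern is just the per-channel RMS over the stimulus window. The array layout and the toy test signal below are hypothetical, chosen only to illustrate the calculation:

```python
import numpy as np

def am_pattern(m1_signals, t_on, t_off, dt=0.01):
    """RMS activity of each M1 node over the stimulation period [t_on, t_off] (msec).

    m1_signals : (steps, n) array, one column per channel's M1 output
    Returns the 1 x n AM pattern vector described in the text.
    """
    i0, i1 = int(t_on / dt), int(t_off / dt)
    seg = m1_signals[i0:i1]
    return np.sqrt(np.mean(seg ** 2, axis=0))

# Toy check: one excited channel oscillating at 40 Hz (0.04 cycles/msec),
# three background channels that stay silent.
t = np.arange(0.0, 400.0, 0.01)
sig = np.zeros((t.size, 4))
sig[:, 0] = np.sin(2 * np.pi * 0.04 * t)
pattern = am_pattern(sig, 100.0, 300.0)   # RMS of a unit sine is ~0.707
```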

There are two kinds of learning rules in our simulation: Hebbian learning and habituation. Hebbian learning reinforces the desired stimulus patterns, while habituation decreases the impact of the background noise and of stimuli that are not relevant or significant.

According to the modified Hebbian learning rule, when two nodes become excited at the same time, their lateral connection weight should be strengthened. Let P_M(i) denote the activity of the ith M1 node in the OB layer, P̄_M the average value of all the P_M(i) (i = 1, ..., n), and W_M(i)→M(j) the connection weight from the ith M1 node to the jth M1 node; h_Heb and h_hab are the constants indicating the connection weight after strengthening and weakening, respectively. We can describe the algorithm as follows:

IF P_M(i) > (1+K)*P̄_M && P_M(j) > (1+K)*P̄_M && i != j THEN
    W_M(i)→M(j) = h_Heb    (i, j = 1, ..., n)
ELSE IF i == j
    (no weight update)
ELSE
    W_M(i)→M(j) = h_hab    (i, j = 1, ..., n)
ENDIF

The bias coefficient K > 0 is introduced in order to avoid saturation of the weight space.

Habituation constitutes an adaptive filter to reduce the impact of environmental noise that is continuous and uninformative. In our simulation, habituation acts at the synapses of the M1 nodes on other nodes in the OB and on the lateral connections within the M1 layer. It is implemented by incremental weight decay (multiplication by a coefficient h_hab < 1) of all these parameters at each time step, continuously through the entire learning period. For example, given that the simulation time is 600 ms and h_hab = 0.9995, each relevant parameter that is not strengthened by Hebbian learning will decay to 0.9995^600 ≈ 0.74 of its original value.
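The two rules above can be sketched as a single weight-update step. The numeric values of K, h_Heb and the activity vector below are illustrative stand-ins, not the paper's optimized parameters:

```python
import numpy as np

def update_weights(W, p, K=0.2, h_heb=2.0, h_hab=0.9995):
    """One pass of the modified Hebbian rule plus one habituation decay step.

    W : (n, n) lateral M1-to-M1 weights, W[i, j] = weight from node i to node j
    p : (n,) RMS activities P_M(i) of the M1 nodes
    """
    W = W.copy()
    n = len(p)
    excited = p > (1.0 + K) * p.mean()    # channels above (1+K) times the mean
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                  # i == j: no weight update
            if excited[i] and excited[j]:
                W[i, j] = h_heb           # co-excited pair: Hebbian reinforcement
            else:
                W[i, j] *= h_hab          # otherwise: habituation decay
    return W

W = np.ones((4, 4))
p = np.array([2.0, 2.0, 0.1, 0.1])        # channels 0 and 1 excited together
W = update_weights(W, p)
# Applied every step of a 600 msec run, an un-reinforced weight shrinks by
# a factor of 0.9995**600, i.e. to about 0.74 of its original value.
```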

Fig. 2. Wavelet packet decomposition (levels 1-3). The sub-bands W_{j,n} partition the horizontal axis, which runs over normalized frequency from 0 to the Nyquist frequency (0.5), with equal resolution.

III. APPLICATION OF KIII MODEL ON FACE RECOGNITION

A. Feature extraction using wavelet packet transform



The wavelet packet transform (WPT) is a signal analysis tool that has the frequency resolution power of the Fourier transform and the time resolution power of the wavelet transform. It can be applied to time-varying signals, where the Fourier transform does not produce useful results and the wavelet transform does not produce sufficient results. The coefficients of the WPT can be expressed as formula (3). As a low-pass filter, the scaling coefficients g_{2k-l}/√2 extract the low-frequency (LF) part (c^{j+1}) from the original signal (c^j). As a high-pass filter, the wavelet coefficients h_{2k-l}/√2 extract the high-frequency (HF) part (d^{j+1}) from the original signal (c^j). For the DWT, only each level of (c^{j+1}) (the approximate signal) is used for the next level of analysis. For the WPT, both scaling and wavelet coefficients are used at each level for (c^{j+1}) and (d^{j+1}) (the detail signal) (j = 0, ..., n-1), and the combination of these coefficients allows analysis of broad frequency bands (Fig. 2). The diagram of Fig. 2 illustrates the analysis of an original signal with WPT of levels 1, 2 and 3. The original signal is transformed into frequency components W_{j,n} by scaling coefficients (solid lines) and wavelet coefficients (dotted lines). At the jth level of analysis, the frequency index n ranges from 0 to 2^j - 1. The horizontal axis denotes the frequency range, i.e. zero to the Nyquist frequency. The WPT can extract all frequency bands with equal resolution.

$$c_k^{(j+1)} = \frac{1}{\sqrt{2}} \sum_{l \in \mathbb{Z}} g_{2k-l}\, c_l^{(j)}, \qquad d_k^{(j+1)} = \frac{1}{\sqrt{2}} \sum_{l \in \mathbb{Z}} h_{2k-l}\, c_l^{(j)} \qquad (3)$$

The WPT decomposes images with an overall scale factor of four,

providing at each level one low-resolution sub-image and three wavelet coefficient sub-images. The 2D wavelet transform performs a spatial/spatial-frequency analysis on an image in the lower frequency sub-bands. The wavelet decomposition of an image results in an array of wavelet coefficients and provides a representation that is easy to interpret. Every sub-image contains the information of a specific scale and orientation, so the decomposition conveniently separates the information of different scales. The wavelet coefficients can be extracted from each sub-image separately in order to obtain features that reflect scale-dependent properties. Since the coefficients of a sub-image sum to zero, it is necessary to compute a nonlinear function of the coefficients.

In our work, wavelet packet decomposition is used to extract the coefficients of the lowest frequency range, which contains most of the energy, in sub-bands. The norm of all the coefficients, which denotes the energy contained in a frequency range, is calculated according to formula (4):

$$\text{norm} = \sqrt{\sum_i (\text{coefficient}_i)^2} \qquad (4)$$
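A minimal self-contained sketch of this feature follows. Repeated 2×2 orthonormal Haar averaging is used here as a simplified stand-in for a full wavelet packet library; the function names are ours:

```python
import numpy as np

def haar_lowband(img, levels=2):
    """Lowest-frequency sub-band via repeated orthonormal 2x2 Haar averaging.

    Each level halves both dimensions; a simplified stand-in for the W_{2,0}
    coefficients obtained by 2-level WPT in the text.
    """
    a = np.asarray(img, dtype=float)
    for _ in range(levels):
        a = 0.5 * (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2])
    return a

def norm_feature(coeffs):
    """Formula (4): the energy norm of a block of coefficients."""
    return np.sqrt(np.sum(np.asarray(coeffs) ** 2))

img = np.ones((8, 8))
low = haar_lowband(img)       # 8x8 -> 2x2 after two levels
feat = norm_feature(low)      # a constant image keeps all its energy in the low band
```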

It is shown in Fig. 3 and Table I that most of the energy of the image is contained in the lowest frequency range. The axis

Fig. 3. Average energy ratio in different frequency ranges

Table I. Energy ratio in one image

Face No. |           Frequency range (first 5)
         |    1       2       3       4       5
    1    |  0.835   0.021   0.029   0.011   0.011
    2    |  0.789   0.032   0.035   0.015   0.014
    3    |  0.838   0.035   0.031   0.010   0.017
    4    |  0.783   0.045   0.026   0.016   0.021
    5    |  0.872   0.028   0.020   0.008   0.013
    6    |  0.837   0.030   0.025   0.011   0.013
    7    |  0.795   0.032   0.042   0.012   0.014
    8    |  0.847   0.026   0.035   0.009   0.011
    9    |  0.813   0.039   0.033   0.013   0.019

entitled "ratio" in Fig. 3 denotes the ratio of energy in different frequency ranges. The horizontal axis denotes the 16 frequency ranges of one face image. The data are averaged over 400 face images. The first image of each of 10 persons is analyzed and the results are listed in Table I. The W_{2,0} sub-band obtained via 2-level WPT is used as the feature.

B. Image partition

The norm can be used to extract one feature after the WPT. It reflects the overall character of a given frequency range in one image, and each frequency range yields one feature. But different frequency ranges have different status in one image, whereas the elements of the input vector for the KIII model should have equal status, so the result of the wavelet packet decomposition cannot be input to the model directly.

Considering that the low frequencies reflect the overall information in the image, the idea of image partition is used to extract features from parts of the whole image. Based on the idea of partitioning the image into equally sized sub-images, we partition the image into 16, 32 and 64 parts and get different input vectors, as illustrated in Fig. 4. In each sub-image, we first use the WPT to extract the lowest frequency coefficients. Then the norm is used to extract the feature that is input to the KIII model.
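The partition-then-extract pipeline can be sketched as follows. The grid shapes (4×4, 4×8, 8×8 giving 16, 32 and 64 features) are our reading of the partition counts, and the single Haar averaging step is a simplified stand-in for the WPT:

```python
import numpy as np

def grid_features(img, rows, cols):
    """Partition img into a rows x cols grid of equal sub-images and return one
    energy-norm feature per block, ordered row by row."""
    h, w = img.shape
    bh, bw = h // rows, w // cols
    feats = np.empty(rows * cols)
    for r in range(rows):
        for c in range(cols):
            block = img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(float)
            # Lowest-frequency content via one 2x2 Haar averaging step,
            # then the energy norm of formula (4).
            low = 0.5 * (block[0::2, 0::2] + block[0::2, 1::2]
                         + block[1::2, 0::2] + block[1::2, 1::2])
            feats[r * cols + c] = np.sqrt(np.sum(low ** 2))
    return feats

img = np.arange(64.0).reshape(8, 8)
v16 = grid_features(img, 4, 4)        # a 16-element input vector for the KIII model
```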



Fig. 4. Feature vectors (images are from ORL [22])

C. Image pattern classification using KIII model

The KIII model is utilized to classify face images using the extracted features. The parameters used in this study follow [13], where they were optimized to fulfill the classification task. The stimulus-maintained period runs from 100 ms to 300 ms, while 0-100 ms is the initiation period for the KIII network to set up the basal chaotic state. The simulation time is 400 ms for each learning or classification process.

To perform the classification, the KIII model needs to

learn the desired patterns. During the training stage, each pattern is learnt 10 times. After training, each stimulus forms a stable limit cycle attractor, which denotes one pattern. Compared with other pattern recognition algorithms, the KIII model needs fewer learning trials.

After the training stage, we store the output of the OB layer as the recognition standard. The face images of unknown class are input and the output at the OB layer is obtained. The nearest neighbor principle is employed to classify face images after the Euclidean distances from unknown images to each class are calculated. In our experiments, the features of face images are extracted from the original images without any preprocessing, such as image segmentation, image zooming, edge detection, etc. This shows that the KIII model has a good tolerance as a classifier.

D. Experiment results

We used the ORL face dataset to present a performance

comparison of our proposed method against existing conventional methods. The ORL dataset is one of the popular public datasets used in face recognition. The image set consists of 400 images, 10 images for each individual. The images of each person differ from each other in lighting, facial expression and pose. We have selected five images of each person for training.
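The classification step described above, nearest neighbor on stored OB-layer outputs, reduces to the following sketch; the stored patterns here are hypothetical stand-ins for the trained KIII outputs:

```python
import numpy as np

def classify(stored, query):
    """Nearest-neighbor rule: return the index of the class whose stored
    OB-layer standard pattern has the smallest Euclidean distance to query."""
    dists = np.linalg.norm(stored - query, axis=1)
    return int(np.argmin(dists))

# One stored OB output vector per class (hypothetical values).
stored = np.array([[1.0, 0.1, 0.1],
                   [0.1, 1.0, 0.1],
                   [0.1, 0.1, 1.0]])
label = classify(stored, np.array([0.2, 0.9, 0.0]))
```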

In Table II, 16, 32 and 64 represent the dimension of the features, and 10, 20, 30 and 40 denote the number of patterns in one recognition process. It can be seen from the experimental results that the more dimensions the feature has, the better the performance, because more feature dimensions carry more information about the image. If the number of persons in one recognition process is smaller, the accuracy is better. This is also natural, because the different classes then have enough space to discriminate from each other. It also means that the accuracy could be raised if the whole class set were divided into several suitable sub-classes for recognition.

A comparison of the performance of the KIII model with

some other algorithms for face recognition is also made. The results are tabulated in Table III and show that the KIII model gives better performance than the original LDA algorithm, an LDA-based algorithm and universal PCA.

Table II. Average recognition accuracy for the ORL database

Dimension  | Number of patterns in one recognition process
of feature |    10       20       30       40
    16     |  0.9325   0.8650   0.8335   0.8180
    32     |  0.9525   0.9150   0.8916   0.8725
    64     |  0.9625   0.9317   0.9165   0.9075

Table III. Comparison between the KIII method and other algorithms on the Olivetti database

Method               | LDA [25] | universal PCA | Yu et al. [23] | KIII
Recognition accuracy |  67.25%  |     88.0%     |     90.80%     | 90.80%-96.3%



IV. DISCUSSION

The KIII network is a kind of chaotic neural network which is derived directly from the biological neural system. Its mechanism for pattern recognition is totally different from that of other artificial neural networks (ANNs). As a result, it not only can simulate the experimental EEG waveform, but also has the ability to recognize complex image patterns, such as human face images.

Like the biological olfactory neural system, which is good at learning new odors, the KIII model requires only a few learning trials to set up the attraction basins of the memory patterns, and it has a good tolerance. Mimicking biological neural systems may be an efficient way to handle complicated pattern recognition problems.

ACKNOWLEDGEMENTS

This research was supported by the National Basic Research Program of China (973 Program, No. 2002CCA01800 and 2004CB720302) and the National Natural Science Foundation of China (No. 30470470).

REFERENCES

[1] Y. Adini, Y. Moses, S. Ullman, "Face recognition: The problem of compensating for changes in illumination direction," IEEE Trans. Pattern Anal. Machine Intell., vol. 19 (7), pp. 721-732, 1997.

[2] W.Y. Zhao, R. Chellappa, and A. Rosenfeld, "Face recognition: A literature survey," CVL Technical Report CAR-TR-948, Center for Automation Research, University of Maryland at College Park, 2000.

[3] V. Perlibakas, "Distance measures for PCA-based face recognition," Pattern Recognition Lett., vol. 25 (6), pp. 711-724, 2004.

[4] J. Yang, D. Zhang, A. Frangi, J. Yang, "Two-dimensional PCA: A new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Anal. Machine Intell., vol. 26 (1), pp. 131-137, 2004.

[5] M.S. Bartlett, H.M. Lades, T. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Networks, vol. 13 (6), pp. 1450-1464, 2002.

[6] H.C. Kim, D. Kim, S.Y. Bang, "Face recognition using LDA mixture model," Pattern Recognition Lett., vol. 24 (15), pp. 2815-2821, 2003.

[7] J. Lu, K.N. Plataniotis, A.N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Trans. Neural Networks, vol. 14 (1), pp. 195-200, 2003.

[8] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification, 2nd ed., New York, Wiley-Interscience, 2001.

[9] A.K. Jain, R.P.W. Duin, J. Mao, "Statistical pattern recognition: A review," IEEE Trans. Pattern Anal. Machine Intell., vol. 22 (1), pp. 4-37, 2000.

[10] C.J.C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2 (2), pp. 121-167, 1998.

[11] M.J. Er, J. Wu, J. Lu, H.L. Toh, "Face recognition with radial basis function (RBF) neural networks," IEEE Trans. Neural Networks, vol. 13 (3), pp. 697-710, 2002.

[12] W.J. Freeman, "Olfactory system: odorant detection and classification," in Building Blocks for Intelligent Systems: Brain Components as Elements of Intelligent Function, Vol. III, Part 2, Academic Press, New York, to be published.

[13] H.J. Chang and W.J. Freeman, "Biologically modeled noise stabilizing neurodynamics for pattern recognition," Int. J. Bifurcation and Chaos, vol. 8 (2), pp. 321-345, 1998.

[14] H.J. Chang and W.J. Freeman, "Parameter optimization in models of the olfactory neural system," Neural Networks, vol. 9, pp. 1-14, 1996.

[15] H.J. Chang and W.J. Freeman, "Optimization of olfactory model in software to give power spectra reveals numerical instabilities in solutions governed by aperiodic (chaotic) attractors," Neural Networks, vol. 11, pp. 449-466, 1998.

[16] H.J. Chang and W.J. Freeman, "Biologically modeled noise stabilizing neurodynamics for pattern recognition," Int. J. Bifurcation and Chaos, vol. 8, pp. 321-345, 1998.

[17] W.J. Freeman and R. Kozma, "Biocomplexity: adaptive behavior in complex stochastic dynamic systems," Biosystems, vol. 59, pp. 109-123, 2001.

[18] H.J. Chang and W.J. Freeman, "Local homeostasis stabilizes a model of the olfactory system globally in respect to perturbations by input during pattern classification," Int. J. Bifurcation and Chaos, vol. 8, pp. 2107-2123, 1998.

[19] Y. Yao and W.J. Freeman, "Pattern recognition in olfactory systems: modeling and simulation," Proc. of the 1989 International Joint Conference on Neural Networks, vol. 1, pp. 699-704, 1989.

[20] R. Kozma and W.J. Freeman, "Chaotic resonance - methods and applications for robust classification of noisy and variable patterns," Int. J. Bifurcation and Chaos, vol. 11, pp. 1607-1629, 2001.

[21] J.C. Principe, V.G. Tavares, J.G. Harris, and W.J. Freeman, "Design and implementation of a biologically realistic olfactory cortex in analog VLSI," Proceedings of the IEEE, vol. 89, pp. 1030-1051, 2001.

[22] AT&T Laboratories Cambridge, The ORL Database of Faces, http://www.cam-orl.co.uk/facedatabase.html.

[23] H. Yu and J. Yang, "A direct LDA algorithm for high-dimensional data - with application to face recognition," Pattern Recognition, vol. 34 (10), pp. 2067-2070, 2001.

[24] X.M. Liu, T. Chen, B.V.K.V. Kumar, "Face authentication for multiple subjects using eigenflow," Pattern Recognition, vol. 36 (2), pp. 313-328, 2003.

[25] W. Zhao, R. Chellappa and P.J. Phillips, "Subspace linear discriminant analysis for face recognition," Technical Report CAR-TR-914, CS-TR-4009, University of Maryland at College Park, USA, 1999.

[26] X. Li, G. Li, L. Wang, and W.J. Freeman, "A study on a bionic pattern classifier based on olfactory neural system," Int. J. Bifurcation and Chaos, in press.
