


Research Article
Ear Recognition Based on Gabor Features and KFDA

Li Yuan 1,2 and Zhichun Mu 1

1 School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 Visualization and Intelligent Systems Laboratory, University of California Riverside, Riverside, CA 92507, USA

Correspondence should be addressed to Li Yuan; lyuan@ustb.edu.cn

Received 7 November 2013; Accepted 21 January 2014; Published 17 March 2014

Academic Editors: H. T. Chang and M. Nappi

Copyright © 2014 L. Yuan and Z. Mu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on an improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then the Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally a distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND datasets) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.

1. Introduction

The research on ear recognition has been drawing more and more attention in recent five years [1-4]. Based on the research of the "Iannarelli system" [5], the structure of the ear is fairly stable and robust to changes in facial expressions or aging. Ear biometrics is noncontacting and so it can be applied for human identification at a distance, making it a helpful supplement to facial recognition. An ear recognition system based on 2D images is composed of the following stages: ear enrollment, feature extraction, and ear recognition/authentication. The stage of ear enrollment includes automatic ear detection and ear normalization. Ear detection focuses on detecting the human ear from the input images and then locating and segmenting each ear in the image. Then all the ear images are normalized to the same size based on some predefined standard. The next step is to represent the ear by appropriate features, such as structural features, local features, and holistic features. Finally an effective classifier will be designed for ear recognition or authentication, such as a nearest neighbor classifier, sparse representation classifier, RBF classifier, or SVM classifier.

1.1. Related Work. Ear recognition using 2D images can be categorized into three kinds based on the features extracted from the ear images: (1) structural features, (2) local features, and (3) holistic features. The application scenarios can be summarized as the following three: ear recognition under constrained environment, ear recognition with pose variation, and ear recognition under partial occlusion. Table 1 outlines some representative references for different feature extraction methods and their corresponding performance on different ear datasets.

1.2. Our Work. The present works listed in Table 1 mainly focused on feature extraction and classification. The ear images are mostly manually extracted for later processing. The alignment process was not clearly illustrated in most of the ear recognition papers, so the ear images used in different methods were not normalized based on the same standard. Therefore the comparisons among different methods are less meaningful. So [6, 7] proposed automatic ear detection based on the Adaboost algorithm. The ear parts presented in source images are segmented. But these segmented ear images may contain not only the "pure ear," but also some background image (such as face profile and hair). This means that even for the same subject, the size of each "pure ear" on the registered images in the dataset may be different, and it is also possible that the size of ears on the registered images is not the same as

Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 702076, 12 pages. http://dx.doi.org/10.1155/2014/702076


Table 1: Representative feature extraction methods and their performance evaluation.

Structural feature extraction methods:

[13] Burge and Burger (1997) use the main curve segments to form a Voronoi diagram and use an adjacency graph matching based algorithm for authentication. But the curve segments will be affected by changes in camera-to-ear orientation or lighting variation. Dataset: —. Performance: —.

[14] Moreno et al. (1999) used feature points of the outer ear contour and information obtained from ear shape and wrinkles for ear recognition. The compression network is applied for classification. Dataset: 28 subjects, 168 images, 6 images for each subject. Performance: Rank-1 93%.

[15] Mu et al. (2004) proposed a long axis based shape and structural feature extraction method: the shape feature consisted of the curve fitting parameters of the outer ear contour; the structural feature was composed of ratios of the length of key sections to the length of the long axis; and a nearest neighborhood classifier was used for recognition. Dataset: USTB dataset2, 77 subjects, 4 images for each subject. Performance: Rank-1 85%.

[16] Choras (2005) proposed a geometrical feature extraction method based on the number of pixels that have the same radius in a circle with the centre in the centroid and on the main curves. Dataset: 240 images. Performance: —.

Local feature extraction methods:

[17] Hurley et al. (2005) proposed the force field transformation method. The ear images are treated as an array of mutually attracting particles that act as the source of a Gaussian force field. The force field transforms of the ear images were taken and the force fields were then converted to convergence fields. Then Fourier based cross-correlation techniques were used to perform multiplicative template matching on ternary thresholded convergence maps. Dataset: XM2VTS face profile subset (252 subjects). Performance: Rank-1 99.2%.

[18] Nanni and Lumini (2007) proposed a local approach. A multimatcher system was proposed where each matcher was trained using features extracted from the convolution of each subwindow with a bank of Gabor filters. The best matchers, corresponding to the most discriminative subwindows, were selected by Sequential Forward Floating Selection, where the fitness function was related to the optimization of the ear recognition performance. Ear recognition was made using sum rule based decision level fusion. Dataset: UND collection E (114 subjects). Performance: Rank-1 80%, EER 4.3%.

[19] Bustard and Nixon (2010) proposed an ear registration and recognition method by treating the ear as a planar surface and creating a homography transform using SIFT feature matches. Ear recognition under partial occlusion was discussed in this paper. The relationship between occlusion percentage and recognition rate was presented. Dataset: XM2VTS face profile dataset (63 subjects). Performance: Rank-1 92% (30% occlusion from above), 92% (30% occlusion from left side).

[20] Arbab-Zavar and Nixon (2011) proposed a model-based approach for ear recognition. The model was a partwise description of the ear derived by stochastic clustering on a set of scale invariant features of the training set. The outer ear curves were further analyzed with a log-Gabor filter. Ear recognition was made by fusing the model-based and outer ear metrics. Dataset: XM2VTS face profile dataset (63 subjects). Performance: Rank-1 89.4% (30% occlusion from above).

Holistic feature extraction methods:

[21] Chang et al. (2003) used standard PCA to compare face and ear and concluded that ear and face did not have much difference on recognition performance. Dataset: Human ID Database (197 subjects). Performance: Rank-1 70.5% for face, 71.6% for ear.

[22] Yuan et al. (2006) proposed an improved Nonnegative Matrix Factorization with Sparseness Constraint (INMFSC) for ear recognition with occlusion. The ear image was divided into three parts with no overlapping, INMFSC was applied for feature extraction, and the final classification was based on a Gaussian model based classifier. Dataset: USTB dataset3 (79 subjects). Performance: Rank-1 ~91% (for 10% occlusion from above).

[23] Dun and Mu (2009) proposed an ICA based ear recognition method through nonlinear adaptive feature fusion. Firstly, two types of complementary features are extracted using ICA. These features are then concatenated with different weights to form a high-dimensional fused feature. Then the feature dimension was reduced by Kernel PCA. The final decision was made by a nearest neighbor classifier. Dataset: USTB dataset3 (79 subjects) and USTB dataset4 (150 subjects). Performance: Rank-1 ≥90% (for pose variation within 15°).

[24] Wang et al. (2008) proposed ear recognition based on Local Binary Patterns. Ear images were decomposed by the Haar wavelet transform. Then Uniform LBP combined with block-based and multiresolution methods was applied to describe the texture features. Finally the texture features are classified by the nearest neighbor method. Dataset: USTB dataset3 (79 subjects). Performance: Rank-1 ≥92% (for pose variation within 20°).

[25] Zhou et al. (2010) proposed ear recognition via sparse representation. Gabor features are used to develop a dictionary. Classification is performed by extracting features from the test data and using the dictionary for representing the test data. The class of the test data is then determined based upon the involvement of the dictionary entries in its representation. Dataset: UND G subset (39 subjects). Performance: Rank-1 98.46% (4 images for training and 2 images for testing).

that of the ear to be authenticated. So many appearance based methods will not work in this situation. This means that there exists a gap between ear detection and feature extraction. This gap is automatic ear normalization, very much like face normalization, which means that we have to set up a standard to normalize the ear into the same size. In this paper we combine ear detection and ear normalization into one stage named ear enrollment. To our best knowledge, the research on ear enrollment is still an open area.

The rest of this paper is organized as follows. Section 2 describes our automated ear enrollment approach. Section 3 details the feature extraction approach using Gabor filters and Kernel Fisher Discriminant Analysis. Section 4 conducts ear recognition and ear authentication experiments to evaluate the proposed methods. Finally, concluding remarks are drawn in Section 5.

2. Ear Enrollment

This section will detail the ear enrollment process, which includes ear detection and ear normalization. The ear detection approach based on our modified Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. We have made some modifications compared with our previous work on ear detection [6]. The Active Shape Model is then applied to segment the ear part and normalize all the ear images to the same size.

2.1. Automatic Ear Detector Based on Modified Adaboost Algorithm. In our previous work [6], we proposed an ear detection approach under complex background which has two stages: offline cascaded classifier training and online ear detection. The cascaded classifier is composed of multiple strong classifiers, and each strong classifier is composed of multiple weak classifiers. The training process of a strong classifier is as follows.

(1) Given n example images (x_1, y_1), …, (x_n, y_n), where x_i ∈ R^k and y_i ∈ {−1, 1} for negative and positive examples, respectively.

(2) Initialize weights D_1(x_i) = 1/n, i = 1, 2, …, n.

(3) Repeat for t = 1, 2, …, T (T is the total number of weak classifiers) and do the following:

(i) train the weak classifiers with weights D_t and get the regression function h_t: X → {−1, +1}. The nearest neighbor classifier is used as the weak classifier;

(ii) get the error rate of the regression function: ε_t = Σ_{i=1}^{n} D_t(x_i)[h_t(x_i) ≠ y_i];

(iii) set the weight of the weak classifier: α_t = (1/2) ln((1 − ε_t)/ε_t);

(iv) update the weights of the training samples:

D_{t+1}(x_i) = (D_t(x_i)/Z_t) · e^{−α_t} if h_t(x_i) = y_i, or (D_t(x_i)/Z_t) · e^{α_t} if h_t(x_i) ≠ y_i,   (1)

where Z_t is the normalization factor ensuring Σ_{i=1}^{n} D_{t+1}(x_i) = 1.

(4) After T rounds of training, the final strong classifier is

H(x) = sign(Σ_{t=1}^{T} α_t h_t(x) − Th),   (2)

where Th is the decision threshold decided by the false classification rate.
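The training steps above can be sketched in Python. This is an illustrative reimplementation, not the authors' code: it substitutes simple decision stumps for the paper's nearest-neighbor weak classifiers on Haar-like features, and `fit_stump` and the data layout are our assumptions.

```python
import numpy as np

def fit_stump(X, y, D):
    """Pick the (feature, threshold, polarity) with minimum weighted error."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                h = np.where(X[:, f] >= thr, pol, -pol)
                err = np.sum(D * (h != y))
                if err < best_err:
                    best_err, best = err, (f, thr, pol)
    return best

def stump_predict(stump, X):
    f, thr, pol = stump
    return np.where(X[:, f] >= thr, pol, -pol)

def train_strong_classifier(X, y, T, threshold=0.0):
    """Strong classifier training, steps (1)-(4).
    X: (n, k) feature vectors; y: labels in {-1, +1}; threshold plays Th."""
    n = len(y)
    D = np.full(n, 1.0 / n)              # step (2): uniform initial weights
    stumps, alphas = [], []
    for t in range(T):                   # step (3)
        stump = fit_stump(X, y, D)       # (i) train weak classifier with D_t
        h = stump_predict(stump, X)
        eps = np.clip(np.sum(D * (h != y)), 1e-10, 1 - 1e-10)  # (ii) error rate
        alpha = 0.5 * np.log((1 - eps) / eps)                  # (iii) alpha_t
        D = D * np.exp(-alpha * y * h)   # (iv) e^{-a} if correct, e^{+a} if not
        D /= D.sum()                     # Z_t normalization
        stumps.append(stump)
        alphas.append(alpha)
    def H(Xq):                           # step (4): final strong classifier
        score = sum(a * stump_predict(s, Xq) for a, s in zip(alphas, stumps))
        return np.sign(score - threshold)
    return H
```

The weight update `np.exp(-alpha * y * h)` is the compact form of eq. (1): it equals e^{−α_t} when h_t(x_i) = y_i and e^{α_t} otherwise.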

The training process of the algorithm mentioned above is time consuming. Also, the false rejection rate and false acceptance rate need to be lowered for real application scenarios. Based on the structural features of the ear itself, we propose the following four improvements on the traditional Adaboost algorithm in view of its deficiencies.

Improvement 1. In the Adaboost algorithm, the selection of the optimum classification threshold of the Haar-like features is very important for the weak classifier learning algorithm. This procedure is time consuming, so we propose a "segment-selecting method" to choose the optimum threshold of weak classifiers.

Step 1. For each Haar-like feature, we divide the feature value space composed of the feature values of all the training samples into n parts, with r feature values per part. Suppose that the original feature value space is g(i) (i = 0, …, N), where N is the total number of positive and negative samples. The new feature value space is g(k) (k = 0, r, 2r, …, nr).

Step 2. In the new space, search for the optimum threshold g(j). Then, back in the original feature value space and centered at g(j), we search the region from g(j − r) to g(j + r) to find the optimum threshold.
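The two steps above amount to a coarse-to-fine search. A minimal sketch, assuming a generic `cost` function (e.g. the weighted classification error of a weak classifier at that threshold):

```python
import numpy as np

def segment_select_threshold(values, cost, r):
    """Coarse-to-fine threshold search ("segment-selecting method").
    values: candidate feature values g(i); cost: callable scoring a threshold;
    r: number of feature values per segment."""
    values = np.sort(values)
    # Step 1: evaluate only the reduced space g(0), g(r), g(2r), ...
    coarse_idx = np.arange(0, len(values), r)
    j = min(coarse_idx, key=lambda i: cost(values[i]))
    # Step 2: refine in the original space over g(j - r) .. g(j + r)
    lo, hi = max(0, j - r), min(len(values) - 1, j + r)
    best = min(np.arange(lo, hi + 1), key=lambda i: cost(values[i]))
    return values[best]
```

This reduces the number of cost evaluations from N to roughly N/r + 2r per Haar-like feature, which is where the claimed training speedup comes from.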

Improvement 2. We apply a strategy which can reduce the false acceptance rate by means of changing the weight distribution of weak classifiers.

The strong classifier is composed of weighted weak classifiers. The smaller the error rate of a weak classifier, the bigger the weight assigned to it. The error rate is decided by the training samples, where the positive and negative samples are equally important. Ear detection experimental results show that with the traditional Adaboost algorithm the false acceptance rate is not acceptable. So we improve the training procedure of the weak classifiers by proposing that the weight distribution among the weak classifiers not only is decided by the total error rate but also is concerned with the negative samples.

So we improve the Adaboost algorithm by including a term k·e^{q_t} to give higher weights to those weak classifiers that have a lower false acceptance rate on negative samples. Here q_t is the weight sum of those negative samples that have been classified correctly, representing the classification ability of the weak classifier on negative samples, and k is used to constrain the impact of q_t on the weight of the weak classifier. The modified Adaboost algorithm is as follows.

Step 1. Given a weak classifier learning algorithm and a training sample set (x_1, y_1), …, (x_n, y_n), where x_i is the training sample feature vector and y_i ∈ {−1, +1} for negative and positive examples, respectively.

Step 2. Initialize the weights D_1(x_i) = 1/n, i = 1, 2, …, n.

Step 3. Repeat for t = 1, 2, …, T and do the following:

(1) train the weak classifier learning algorithm with the weights D_t, and return the weak classifier h_t: X → {−1, +1};

(2) compute the error rate

ε_t = Σ_{i=1}^{n} D_t(x_i)[h_t(x_i) ≠ y_i],   (3)

and get the weight sum of those negative samples that are classified correctly:

q_t = Σ_{i=1}^{n} D_t(x_i)[y_i = −1, h_t(x_i) = −1];   (4)

(3) get the updating parameter α_t and the weight parameter of the weak classifier α_{t-new}:

α_t = (1/2) ln((1 − ε_t)/ε_t),

α_{t-new} = (1/2) ln((1 − ε_t)/ε_t) + k·e^{q_t},   (5)

where k is a constant value. The value of k should guarantee that the newly added weak classifier will reduce the upper boundary of the minimum error rate. In this paper k is set to 0.018;

(4) update the weights:

D_{t+1}(x_i) = (D_t(x_i)/Z_t) · e^{−α_t} if h_t(x_i) = y_i, or (D_t(x_i)/Z_t) · e^{α_t} if h_t(x_i) ≠ y_i,   (6)

where Z_t is a normalization factor chosen so that Σ_{i=1}^{n} D_{t+1}(x_i) = 1.

Step 4. After T loops, output the strong classifier

H(x) = sign(Σ_{t=1}^{T} α_{t-new} h_t(x) − Th),   (7)

where Th is the threshold corresponding with the error rate.
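The per-round weight computation of eqs. (3)-(5) can be sketched as follows; an illustrative fragment only, with the constant k = 0.018 taken from the text:

```python
import numpy as np

def weak_classifier_weight(D, y, h, k=0.018):
    """Modified AdaBoost weight (Improvement 2).
    D: sample weights D_t; y: true labels in {-1, +1}; h: weak classifier
    outputs on the training set. Returns (alpha_t, alpha_t_new)."""
    eps = np.sum(D * (h != y))                   # eq. (3): weighted error rate
    q = np.sum(D * ((y == -1) & (h == -1)))      # eq. (4): correctly rejected negatives
    alpha = 0.5 * np.log((1 - eps) / eps)        # standard AdaBoost weight
    alpha_new = alpha + k * np.exp(q)            # eq. (5): bonus for low FAR
    return alpha, alpha_new
```

Two weak classifiers with the same total error ε_t thus receive different final weights α_{t-new}: the one that rejects more negative-sample weight correctly (larger q_t) is weighted higher in the strong classifier of eq. (7).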

Improvement 3. We apply a new parameter called the elimination threshold to improve the robustness of the detector and prevent overfitting.

By analyzing human ear samples, we find that the global structure of human ears is similar: the shape of the outer contour is almost oval, and all human ears have similar shapes of helix, antihelix, and concha. These similar global features are helpful for the training of an ear/non-ear classifier. But if we look into more details, we find that each ear has its unique features or measures on different ear components. The differences in details make the Adaboost based two-class classifier more difficult to construct. Here we regard those ear samples that present special detail components as "difficult samples." These samples will get more weights during the weak classifier learning process, because the weak classifier will always try to get them classified correctly. This will ultimately incur overfitting. In order to prevent overfitting, we apply a new parameter called the "elimination threshold Hw" to improve the robustness of the ear detector. During the training process, the ear samples with weight greater than Hw will be eliminated.

Improvement 4. We propose the "single-ear-detection strategy" in view of the asymmetry of the left ear and right ear. This strategy trains the detector with samples of single ears. After the training process, we get a left ear detector and a right ear detector, which are called "single-ear detectors." On an input image, both detectors are applied to locate the ear part. In our previous work, both left ear and right ear images were used for training; we call this the "dual-ear detector."


Table 2: Detection performance comparison: FRN/FRR and FAN/FAR.

Test dataset (number of ear images) | Previous work [6], dual-ear detector: FRN/FRR, FAN/FAR | Proposed, dual-ear detector: FRN/FRR, FAN/FAR | Proposed, left ear detector + right ear detector: FRN/FRR, FAN/FAR
CAS-PEAL (166) | 5/3.0%, 6/3.6% | 5/3.0%, 2/1.2% | 3/1.8%, 2/1.2%
UMIST (48) | 1/2.1%, 0/0% | 0/0%, 0/0% | 1/2.1%, 0/0%
USTB220 (220) | 1/0.5%, 5/2.3% | 0/0%, 5/2.3% | 0/0%, 1/0.5%

Figure 1: Performance comparison between the single-ear detector and the dual-ear detector (false acceptance rate versus layer number).

Compared with the dual-ear detector, the single-ear detector can improve the detection performance.

Table 2 shows the experimental results on three ear image datasets. These datasets are the same as used in our previous work [6]. In Table 2, FRN means false rejection number, FRR means false rejection rate, FAN means false acceptance number, and FAR means false acceptance rate. In Figure 11 we compare the number of weak classifiers on each layer among the dual-ear detector in [6], the dual-ear detector in this paper, and the single-ear detector in this paper. As we can see from Figure 11, compared with the dual-ear detector, the single-ear detector contains a smaller number of weak classifiers on each layer. With fewer weak classifiers, the training time can be reduced to 50% (CPU: 3.0 GHz; RAM: 2 GB; the number of positive training samples is 11000; the number of negative training samples is 5000; the sample size is 60 × 120). The training time for the left ear detector, right ear detector, and dual-ear detector is about 11.1 hours, 10.0 hours, and 21.7 hours, respectively.

For each layer, we compare the detection performance between the single-ear detector and the dual-ear detector in this paper. For each layer, the difference in detection rate between these two detectors is very limited (not more than 1%), but they differ in the false acceptance rate. As shown in Figure 1, the single-ear detector has a lower false acceptance rate, which means that it performs better on the negative samples. Figure 2(a) shows an example of detection results on the registered images of one subject; Figure 2(b) shows the extracted ear part.

2.2. Ear Normalization Using Active Shape Model. As we can see from the second figure in Figure 2(b), the segmented image contains some background information other than the pure ear, so we need to further extract the pure ear for the following ear feature extraction.

We apply an automatic ear normalization method based on an improved Active Shape Model (ASM) [8]. In the offline training step, ASM was used to create the Point Distribution Model using the landmark points on the ear images of the training set. This model was then applied on the test ear images to search the outer ear contour. The final step was to normalize the ear image to standard size and direction according to the long axis of the outer ear contour. The long axis was defined as the line crossing through the two points


Figure 2: Ear detection experimental examples: (a) detection results; (b) extracted ear parts.

Figure 3: Example images of ear normalization.

which have the longest distance on the ear contour. After normalization, the long axes of different ear images will be normalized to the same length and same direction. This process is shown in Figure 3(a). The size of the normalized image is 60 × 117 pixels; this ratio of 1.95 is set based on statistical research.

After converting the color images to gray images, we use histogram equalization to eliminate lighting variations among different subjects. Figure 3(b) shows some example images after ear normalization.
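The long-axis normalization and histogram equalization described above might be sketched as follows. This is an assumption-laden illustration using SciPy, not the authors' implementation: `contour_pts` stands for the outer-ear contour returned by the ASM search, the rotation sign convention is one plausible choice, and the final crop to exactly 60 × 117 pixels is omitted.

```python
import numpy as np
from scipy import ndimage

def normalize_ear(gray, contour_pts, out_h=117):
    """Rotate so the long axis (farthest contour point pair) is vertical,
    then scale so the long axis has length out_h (target ratio 117/60 = 1.95).
    contour_pts: (N, 2) array of (row, col) outer-ear contour points."""
    # long axis = the pair of contour points with the largest distance
    d = np.linalg.norm(contour_pts[:, None] - contour_pts[None, :], axis=2)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    p, q = contour_pts[i].astype(float), contour_pts[j].astype(float)
    # tilt of the long axis away from the (vertical) row axis
    angle = np.degrees(np.arctan2(q[1] - p[1], q[0] - p[0]))
    rotated = ndimage.rotate(gray, angle, reshape=True, order=1)
    # scale so every normalized long axis has the same length
    zoom = out_h / np.linalg.norm(q - p)
    return ndimage.zoom(rotated, zoom, order=1)

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)
```

After this step every registered and probe ear is compared at the same scale and orientation, which is the gap between detection and feature extraction that Section 1.2 points out.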

3. Ear Recognition Based on Gabor Features

For feature extraction, the Gabor filter is applied on the ear images to extract spatial local features of different directions and scales. The Gabor features are of high dimension, so Kernel Fisher Discriminant Analysis is further applied for dimension reduction. Then a distance based classifier is applied for ear recognition.

3.1. Gabor Feature Extraction. The ear has its distinct features compared with the face, such as the texture features on different orientations. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. A two-dimensional Gabor function is defined as [9]

g(x, y) = exp(−(x′² + y′²)/(2σ²)) cos(2πx′/λ + φ),

[x′; y′] = [cos θ, sin θ; −sin θ, cos θ] [x; y],

σ/λ = (1/π) √(ln 2 / 2) · (2^b + 1)/(2^b − 1).   (8)

λ is the wavelength of the cosine factor of the Gabor filter kernel; its value is specified in pixels. θ specifies the orientation of the normal to the parallel stripes of a Gabor function; its value is specified in degrees. σ is the standard deviation of the Gaussian factor. b is the half-response spatial frequency bandwidth.

Figure 4: Gabor ear image: (a) the real part of the Gabor kernel; (b) the magnitude spectrum of the Gabor feature of the ear image on 4 orientations; (c) the magnitude spectrum of the Gabor feature of the ear image on 8 orientations.
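Eq. (8) translates directly into code. In the sketch below, the 31 × 31 kernel size and the default bandwidth b = 1 octave are our assumptions; the paper does not state them.

```python
import numpy as np

def gabor_kernel(lam, theta, b=1.0, phi=0.0, size=31):
    """2D Gabor kernel following eq. (8).
    lam: wavelength of the cosine factor (pixels); theta: orientation (radians);
    b: half-response spatial frequency bandwidth (octaves); phi: phase offset."""
    # sigma derived from the bandwidth relation in eq. (8)
    sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * (2**b + 1) / (2**b - 1)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(x_r**2 + y_r**2) / (2 * sigma**2))
    return gauss * np.cos(2 * np.pi * x_r / lam + phi)
```

Tying σ to λ through b, rather than setting it independently, keeps the number of visible stripes in the kernel roughly constant across the three scales λ = 20, 25, 30 used below.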

The Gabor feature of an ear image is the convolution of the ear image and the Gabor kernel function, as shown in

J(x, y; m, n) = ∬ I(x′, y′) g_{m,n}(x − x′, y − y′) dx′ dy′,   (9)

where m and n stand for the number of scales and orientations of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients. Here we keep the magnitude of the Gabor filter response as the ear feature. Figure 4 shows the Gabor features of the ear image. Figure 4(a) shows the real part of the Gabor kernel on 3 scales (λ = 20, 25, and 30, resp., from the first to the third row; θ = 0, π/4, π/2, and 3π/4, resp., from the first to the fourth column). Figure 4(b) shows the magnitude spectrum of the Gabor features corresponding to Figure 4(a), and Figure 4(c) shows the magnitude spectrum of the Gabor features corresponding to 5 scales and 8 orientations. In this way we can get the texture features on different orientations of the ear image. As we can see from Figure 4, the texture features of the human ear are salient enough on the four directions in Figure 4(b), so we choose these 4 orientations for further application in Section 3.2.
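A sketch of the filter-bank convolution of eq. (9), using the 3 scales and 4 orientations chosen above. The complex carrier (cosine plus j·sine) is one common way to obtain a well-defined per-pixel magnitude; the paper only states that the magnitude is kept, so this construction is our assumption, as are the kernel size and bandwidth.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_bank_features(img, wavelengths=(20, 25, 30),
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                        size=31):
    """Convolve img with an m x n bank of Gabor kernels (eq. (9)) and keep
    the magnitude at each pixel. Returns an array of shape (m*n, H, W)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    maps = []
    for lam in wavelengths:
        # sigma from the bandwidth relation of eq. (8) with b = 1 octave
        sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * 3.0
        for theta in thetas:
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            kernel = env * np.exp(1j * 2 * np.pi * xr / lam)  # complex carrier
            resp = fftconvolve(img, kernel, mode='same')      # J(x, y; m, n)
            maps.append(np.abs(resp))                         # keep magnitude
    return np.stack(maps)
```

With 3 scales and 4 orientations this yields 12 magnitude maps per ear image, i.e. the m × n coefficients per pixel described in the text.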

3.2. Dimension Reduction Using KFDA. For an H × W pixels image, the dimension of the Gabor feature is H × W × m × n. So a downsampling scheme is applied for dimension reduction of the high-dimensional Gabor features. Suppose the downsampling factor is r, and the Gabor coefficients are normalized to zero mean and unit variance. Then we concatenate all the coefficients G^r_{u,v} to form the Gabor feature set G^r:

G^r = ((G^r_{0,0})^t, (G^r_{0,1})^t, …, (G^r_{2,3})^t)^t.   (10)
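Eq. (10) might be realized as below. The paper does not say how the factor r = 64 is applied spatially, so keeping every 8th pixel in each direction (8 × 8 = 64) is an assumption of this sketch.

```python
import numpy as np

def gabor_feature_vector(gabor_maps, r=64):
    """Downsample each Gabor magnitude map by factor r, normalize each to
    zero mean and unit variance, and concatenate into one vector (eq. (10)).
    gabor_maps: array of shape (m*n, H, W)."""
    step = int(np.sqrt(r))        # r = 64 -> keep every 8th row and column
    parts = []
    for g in gabor_maps:
        sub = g[::step, ::step].ravel()
        sub = (sub - sub.mean()) / (sub.std() + 1e-12)  # zero mean, unit variance
        parts.append(sub)
    return np.concatenate(parts)  # G^r: stacked (G^r_{u,v})^t blocks
```

For a 60 × 117 image with 12 maps, this particular sampling grid gives a vector of length 1440, close to (but not exactly) the 1317 dimensions quoted in the text, which is why the spatial layout of the factor 64 should be treated as an open assumption here.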

Suppose that r = 64; the feature dimension of an ear image with the size of 60 × 117 will be 1317. The downsampling factor is empirically selected and set to 64 for all the experiments in this paper. So the feature dimension needs to be further reduced. In this paper, the Full Space Kernel Fisher Discriminant Analysis (FSKFDA) is used for feature reduction [10]. FSKFDA makes full use of the discriminant information in the full space of the within-class scatter, namely, the discriminant information in the null space and nonnull space of the within-class scatter. Figure 5 shows the detailed flowchart of this method.

4. Experimental Results

4.1. Ear Recognition Experiment. In this experiment we select two datasets: the USTB dataset3 [11] and a subset of UND collection J2 [12]. The USTB dataset3 contains 79 subjects, and the images of each subject are taken under orientation variation. Five images for each subject are selected. These images are 10° and 5° of left rotation, and 0°, 5°, and 10° of right rotation, respectively. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 2/395 and a false alarm rate of zero on this USTB dataset3. Figure 6 shows five source images and normalized ear images of one subject.

In UND collection J2, we selected 150 subjects which possess more than 6 images per subject. Six images are selected for the experiment. There exist illumination and pose variation in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows six source images and normalized ear images of one subject.

In the training stage, we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, the test image in the probe set is projected to the feature space to get its feature vector. Then the cosine distance measure based distance classifier is applied for classification. In (11), x and y represent two Gabor feature vectors:

δ_cos(x, y) = −(x^T y) / (‖x‖ ‖y‖).  (11)
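The cosine distance of (11) and the nearest-neighbor decision over the gallery can be sketched as:

```python
import numpy as np

def cosine_distance(x, y):
    # delta_cos of (11): negative cosine similarity, so that a smaller
    # distance means the two feature vectors are more alike.
    return -float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def classify(probe, gallery, labels):
    # Nearest-neighbor classification under the cosine distance.
    d = [cosine_distance(probe, g) for g in gallery]
    return labels[int(np.argmin(d))]
```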

For the kernel function in KFDA, we apply linear kernel, polynomial kernel, RBF kernel, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out

8 The Scientific World Journal

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis. The flowchart comprises the following steps:

(1) Map the Gabor feature set X into the feature space F through the nonlinear projection φ(·); the kernel matrix K (here the RBF kernel k(x_i, x_j) = exp(−‖x_i − x_j‖² / σ²)) is computed with the kernel trick.

(2) In the feature space F, compute the total scatter matrix of the projected training samples, S_t^φ = Σ_{i=1}^{C} Σ_{j=1}^{N_i} (φ(x)_{ij} − u^φ)(φ(x)_{ij} − u^φ)^T.

(3) Compute the nonnull space Φ of S_t^φ: first compute the eigenvectors V_t corresponding to the nonzero eigenvalues of S_t^φ; then project the training samples φ(X) onto Φ, Y = V_t^T φ(X), which can be represented through the kernel matrix by means of the kernel trick, Y = V_t^T (K − (1/N) I_{N×N} K).

(4) Compute the between-class scatter matrix S_b^φ and the within-class scatter matrix S_w^φ of the projection Y.

(5) Compute the null space Ψ^⊥ of S_w^φ, composed of the eigenvectors V_sw^⊥ corresponding to the zero eigenvalues of S_w^φ, and the nonnull space Ψ, composed of the eigenvectors V_sw corresponding to the nonzero eigenvalues of S_w^φ.

(6) Compute the optimal discriminant vectors in the null space Ψ^⊥ of S_w^φ: first project S_b^φ into Ψ^⊥, S̃_b = (V_sw^⊥)^T S_b^φ V_sw^⊥, and then compute the eigenvectors V_sb corresponding to the d largest eigenvalues; the set of optimal discriminant vectors in the null space is V_1 = V_sw^⊥ V_sb.

(7) Compute the optimal discriminant vectors in the nonnull space Ψ of S_w^φ according to the Fisher criterion: first project S_b^φ and S_w^φ into Ψ, S̃_b = V_sw^T S_b^φ V_sw and S̃_w = V_sw^T S_w^φ V_sw, and then compute the eigenvectors V_wb corresponding to the t largest eigenvalues of S̃_w^{−1} S̃_b; the set of optimal discriminant vectors in the nonnull space is V_2 = V_sw V_wb.

(8) Get the final optimal discriminant projection matrix by combining the two sets, V = [V_1, V_2]^T, and extract the feature vector of the Gabor data, Z = V Y.
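The nonnull-space branch of this pipeline (kernel PCA onto the nonnull space of the total scatter, then Fisher LDA on the projections) can be sketched as below. This is a simplified sketch, not the authors' full FSKFDA: the null-space branch of Figure 5 is omitted, and the RBF width and regularization constants are assumptions:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise RBF kernel matrix k(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma^2).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def kfda_features(X, y, n_components=1, sigma=1.0):
    # Step 1-3 of Figure 5: center K via the kernel trick and project the
    # samples onto the nonnull space of the total scatter (kernel PCA).
    N = len(X)
    K = rbf_kernel(X, sigma)
    one = np.full((N, N), 1.0 / N)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    keep = vals > 1e-8                          # nonzero eigenvalues only
    Y = vecs[:, keep] * np.sqrt(vals[keep])     # projected training samples
    # Steps 4 and 7: scatter matrices of Y, then the Fisher criterion.
    mu = Y.mean(axis=0)
    Sb = np.zeros((Y.shape[1], Y.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Yc = Y[y == c]
        d = (Yc.mean(axis=0) - mu)[:, None]
        Sb += len(Yc) * (d @ d.T)                                # between-class
        Sw += (Yc - Yc.mean(axis=0)).T @ (Yc - Yc.mean(axis=0))  # within-class
    # Eigenvectors of Sw^{-1} Sb via Cholesky whitening of (Sw + eps*I).
    L = np.linalg.cholesky(Sw + 1e-6 * np.eye(Sw.shape[0]))
    Linv = np.linalg.inv(L)
    w, U = np.linalg.eigh(Linv @ Sb @ Linv.T)
    V = Linv.T @ U[:, np.argsort(w)[::-1][:n_components]]
    return Y @ V                                # discriminant features Z
```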

Figure 6: Sample images from USTB dataset3.


Figure 7: Sample images of UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel K(x, y) | Parameters in kernel functions | Rank-1 recognition rate (USTB dataset3 / UND dataset)
Linear: K(x, x_i) = (x · x_i) | — | 75.44% / 74.22%
Polynomial: K(x, x_i) = [a(x · x_i) + b]^d | a = 0.001, b = 1, d = 2 | 89.11% / 87.33%
RBF: K(x, x_i) = exp(−γ‖x − x_i‖²) | γ = 0.0001 | 96.46% / 94%
Cosine: k̃(x, y) = k(x, y) / √(k(x, x) k(y, y)), with k(x, y) = [a(x · y) + b]^d | a = 0.001, b = 1, d = 2 | 92.41% / 90.22%
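The four kernel functions compared in Table 3 can be written out directly; the default parameter values below are the ones the table reports:

```python
import numpy as np

def linear(x, y):
    # K(x, y) = x . y
    return float(x @ y)

def polynomial(x, y, a=0.001, b=1.0, d=2):
    # K(x, y) = [a (x . y) + b]^d
    return (a * float(x @ y) + b) ** d

def rbf(x, y, gamma=0.0001):
    # K(x, y) = exp(-gamma ||x - y||^2)
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def cosine(x, y, a=0.001, b=1.0, d=2):
    # Polynomial kernel normalized to unit self-similarity.
    k = lambda u, v: (a * float(u @ v) + b) ** d
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))
```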

Table 4: Performance comparison between ear recognition with or without ear normalization.

Dataset | Rank-1 rate without normalization | Rank-1 rate with normalization
USTB dataset3 | 93.58% | 96.46%
UND dataset | 91.11% | 94%

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets is best with the RBF kernel.

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. In the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment, and all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

Figure 8: Framework flow of the ear authentication system. In the training process, ear images from the dataset go through ear normalization and Gabor feature extraction to build the ear feature set; in online authentication, a still image or video sequence goes through ear detection, ear tracking and extraction, ear normalization, and Gabor feature extraction before the final ear authentication step.

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU: i5 M520, 2.40 GHz; RAM: 2.00 GB). There is a total of 59 subjects registered. For each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process. Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, of which the time consumption for the ear detector to extract the pure ear is about 150 ms. Figure 10


Figure 9: Ear authentication system: (a) authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

Figure 10: ROC curve of the ear authentication system (genuine acceptance rate versus false acceptance rate).

shows the ROC curve of the authentication performance. The EER is about 4%. The experimental result shows that ear biometrics is a feasible candidate for human identification.
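An EER like the reported ~4% is read off the point of the ROC where the false acceptance rate equals the false rejection rate. A minimal sketch of that computation, assuming similarity scores where higher means more alike:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    # Scan every candidate threshold; FAR is the fraction of impostor
    # scores accepted, FRR the fraction of genuine scores rejected.
    # Return the rate at the threshold where the two are closest.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)
    for t in thresholds:
        far = np.mean(impostor >= t)
        frr = np.mean(genuine < t)
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2
```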

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing the ear part based on the long axis of the outer ear contour and (2) extracting the discriminating Gabor features of the ear images using the Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will be focused on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for


Figure 11: Comparison of the number of weak classifiers on each layer (left ear detector, right ear detector, method in [6], and method in this paper).

the earlobe landmarks, and make the searching process less dependent on the initial model shape, and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.

[2] A. Abaza, A. Ross, C. Hebert et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.

[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.

[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.

[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.

[6] L. Yuan and F. Zhang, "Ear detection based on improved Adaboost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.

[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.

[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.

[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).

[11] http://www.ustb.edu.cn/resb/

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision & Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.


Table 1: Representative feature extraction methods and their performance evaluation.

Structural feature extraction methods:

[13] Burge and Burger (1997) use the main curve segments to form a Voronoi diagram and use an adjacency-graph-matching-based algorithm for authentication; but the curve segments will be affected by changes in camera-to-ear orientation or lighting variation. Dataset: —. Performance: —.

[14] Moreno et al. (1999) used feature points of the outer ear contour and information obtained from ear shape and wrinkles for ear recognition; the compression network is applied for classification. Dataset: 28 subjects, 168 images, 6 images for each subject. Performance: Rank-1: 93%.

[15] Mu et al. (2004) proposed a long-axis-based shape and structural feature extraction method: the shape feature consisted of the curve fitting parameters of the outer ear contour, the structural feature was composed of ratios of the length of key sections to the length of the long axis, and a nearest neighborhood classifier was used for recognition. Dataset: USTB dataset2, 77 subjects, 4 images for each subject. Performance: Rank-1: 85%.

[16] Choras (2005) proposed a geometrical feature extraction method based on the number of pixels that have the same radius in a circle with the centre in the centroid and on the main curves. Dataset: 240 images. Performance: —.

Local feature extraction methods:

[17] Hurley et al. (2005) proposed the force field transformation method. The ear images are treated as an array of mutually attracting particles that act as the source of a Gaussian force field. The force field transforms of the ear images were taken, and the force fields were then converted to convergence fields. Then Fourier-based cross-correlation techniques were used to perform multiplicative template matching on ternary thresholded convergence maps. Dataset: XM2VTS face profile subset (252 subjects). Performance: Rank-1: 99.2%.

[18] Nanni and Lumini (2007) proposed a local approach. A multimatcher system was proposed where each matcher was trained using features extracted from the convolution of each subwindow with a bank of Gabor filters. The best matchers, corresponding to the most discriminative subwindows, were selected by Sequential Forward Floating Selection, where the fitness function was related to the optimization of the ear recognition performance. Ear recognition was made using sum rule based decision level fusion. Dataset: UND collection E (114 subjects). Performance: Rank-1: 80%; EER: 4.3%.

[19] Bustard and Nixon (2010) proposed an ear registration and recognition method by treating the ear as a planar surface and creating a homography transform using SIFT feature matches. Ear recognition under partial occlusion was discussed in this paper, and the relationship between occlusion percentage and recognition rate was presented. Dataset: XM2VTS face profile dataset (63 subjects). Performance: Rank-1: 92% (30% occlusion from above); 92% (30% occlusion from the left side).

[20] Arbab-Zavar and Nixon (2011) proposed a model-based approach for ear recognition. The model was a partwise description of the ear derived by a stochastic clustering on a set of scale invariant features of the training set. The outer ear curves were further analyzed with a log-Gabor filter. Ear recognition was made by fusing the model-based and outer ear metrics. Dataset: XM2VTS face profile dataset (63 subjects). Performance: Rank-1: 89.4% (30% occlusion from above).

Holistic feature extraction methods:

[21] Chang et al. (2003) used standard PCA to compare face and ear and concluded that ear and face did not have much difference in recognition performance. Dataset: Human ID Database (197 subjects). Performance: Rank-1: 70.5% for face, 71.6% for ear.

[22] Yuan et al. (2006) proposed an improved Nonnegative Matrix Factorization with Sparseness Constraint (INMFSC) for ear recognition with occlusion. The ear image was divided into three parts with no overlapping, and INMFSC was applied for feature extraction. The final classification was based on a Gaussian-model-based classifier. Dataset: USTB dataset3 (79 subjects). Performance: Rank-1: ~91% (for 10% occlusion from above).

[23] Dun and Mu (2009) proposed an ICA-based ear recognition method through nonlinear adaptive feature fusion. Firstly, two types of complementary features are extracted using ICA. These features are then concatenated with different weights to form a high-dimensional fused feature. Then the feature dimension was reduced by Kernel PCA. The final decision was made by a nearest neighbor classifier. Dataset: USTB dataset3 (79 subjects) and USTB dataset4 (150 subjects). Performance: Rank-1: ≥90% (for pose variation within 15°).

[24] Wang et al. (2008) proposed ear recognition based on Local Binary Patterns. Ear images were decomposed by the Haar wavelet transform. Then Uniform LBP combined with block-based and multiresolution methods was applied to describe the texture features. Finally, the texture features are classified by the nearest neighbor method. Dataset: USTB dataset3 (79 subjects). Performance: Rank-1: ≥92% (for pose variation within 20°).

[25] Zhou et al. (2010) proposed ear recognition via sparse representation. Gabor features are used to develop a dictionary. Classification is performed by extracting features from the test data and using the dictionary for representing the test data. The class of the test data is then determined based upon the involvement of the dictionary entries in its representation. Dataset: UND G subset (39 subjects). Performance: Rank-1: 98.46% (4 images for training and 2 images for testing).

that of the ear to be authenticated. So many appearance-based methods will not work in this situation. This means that there exists a gap between ear detection and feature extraction. This gap is automatic ear normalization, very much like face normalization, which means that we have to set up a standard to normalize the ear into the same size. In this paper, we combine ear detection and ear normalization into one stage, named ear enrollment. To the best of our knowledge, the research on ear enrollment is still an open area.

The rest of this paper is organized as follows. Section 2 describes our automated ear enrollment approach. Section 3 details the feature extraction approach using Gabor filters and Kernel Fisher Discriminant Analysis. Section 4 conducts ear recognition and ear authentication experiments to evaluate the proposed methods. Finally, concluding remarks are drawn in Section 5.

2. Ear Enrollment

This section will detail the ear enrollment process, which includes ear detection and ear normalization. The ear detection approach, based on our modified Adaboost algorithm, detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. We have made some modifications compared with our previous work on ear detection [6]. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size.

2.1. Automatic Ear Detector Based on Modified Adaboost Algorithm. In our previous work [6], we proposed an ear detection approach under complex background which has two stages: offline cascaded classifier training and online ear detection. The cascaded classifier is composed of multiple strong classifiers, and each strong classifier is composed of multiple weak classifiers. The training process of a strong classifier is as follows.

(1) Given N example images (x_1, y_1), ..., (x_n, y_n), where x ∈ R^k and y_i ∈ {−1, 1} for negative and positive examples, respectively.

(2) Initialize weights D_1(x_i) = 1/n, i = 1, 2, ..., n.

(3) Repeat for t = 1, 2, ..., T (T is the total number of weak classifiers) and do the following:

(i) train the weak classifiers with weights D_t and get the regression function h_t: X → {−1, +1}; the nearest neighbor classifier is used as the weak classifier;

(ii) get the error rate of the regression function: ε_t = Σ_{i=1}^{n} D_t(x_i)[h_t(x_i) ≠ y_i];

(iii) set the weight of the weak classifier: α_t = (1/2) ln((1 − ε_t)/ε_t);

(iv) update the weights of the training samples:

D_{t+1}(x_i) = (D_t(x_i)/Z_t) × { e^{−α_t}, if h_t(x_i) = y_i;  e^{α_t}, if h_t(x_i) ≠ y_i },  (1)

where Z_t is the normalization factor so that Σ_{i=1}^{n} D_{t+1}(x_i) = 1.

(4) After T rounds of training, the final strong classifier is

H(x) = sign(Σ_{t=1}^{T} α_t h_t(x) − Th),  (2)

where Th is the decision threshold decided by the false classification rate.
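Steps (1)-(4) can be sketched with a minimal discrete AdaBoost. A decision stump stands in here for the paper's nearest-neighbor weak learner, so this is an illustration of the boosting loop rather than the authors' exact detector:

```python
import numpy as np

def train_stump(X, y, D):
    # Weak learner: best single-feature threshold/polarity under weights D.
    best = (0, 0.0, 1, 1.0)                  # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = D[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, T=10):
    n = len(X)
    D = np.full(n, 1.0 / n)                  # step (2): uniform initial weights
    ensemble = []
    for _ in range(T):
        f, thr, pol, eps = train_stump(X, y, D)          # step (i)
        eps = min(max(eps, 1e-10), 1 - 1e-10)            # step (ii), clipped
        alpha = 0.5 * np.log((1 - eps) / eps)            # step (iii)
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        D = D * np.exp(-alpha * y * pred)                # step (iv), eq. (1)
        D /= D.sum()                                     # normalize by Z_t
        ensemble.append((f, thr, pol, alpha))
    return ensemble

def strong_classify(x, ensemble, Th=0.0):
    # Eq. (2): sign of the weighted vote minus the decision threshold Th.
    s = sum(a * (1 if p * (x[f] - thr) >= 0 else -1)
            for f, thr, p, a in ensemble)
    return 1 if s - Th >= 0 else -1
```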

The training process of the algorithm mentioned above is time-consuming. Also, the false rejection rate and the false acceptance rate need to be lowered for real application scenarios. Based on the structural features of the ear itself, we propose the following four improvements on the traditional AdaBoost algorithm in view of its deficiency.

Improvement 1. In the Adaboost algorithm, the selection of the optimum classification threshold of the Haar-like features is very important for the weak classifier learning algorithm. This procedure is time-consuming, so we propose a "segment-selecting method" to choose the optimum threshold of weak classifiers.

Step 1. We divide the feature value space, composed of the feature values of all the training samples for each Haar-like feature, into n parts with r feature values per part. Suppose that the original feature value space is g(i) (i = 0, ..., N), where N is the total number of positive and negative samples. The new feature value space is g(k) (k = 0, r, 2r, ..., nr).

Step 2. In the new space, search for the optimum threshold g(j). Then, back in the original feature value space and centered at g(j), we search the region from g(j − r) to g(j + r) to find the optimum threshold.
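The two-step coarse-to-fine search can be sketched as below. The weighted-error objective used to score each candidate threshold is an assumption consistent with the boosting framework, not a detail spelled out in the paper:

```python
import numpy as np

def weighted_error(values, labels, weights, thr):
    # Weighted misclassification of the stump "predict +1 iff value >= thr".
    pred = np.where(values >= thr, 1, -1)
    return weights[pred != labels].sum()

def segment_select_threshold(values, labels, weights, r=8):
    # Step 1: scan only every r-th candidate in the sorted feature values.
    g = np.sort(values)
    coarse = g[::r]
    j = int(np.argmin([weighted_error(values, labels, weights, t)
                       for t in coarse])) * r
    # Step 2: refine in the original space within [g(j - r), g(j + r)].
    lo, hi = max(j - r, 0), min(j + r, len(g) - 1)
    fine = g[lo:hi + 1]
    errs = [weighted_error(values, labels, weights, t) for t in fine]
    return float(fine[int(np.argmin(errs))])
```

Compared with scanning all N candidates, this examines roughly N/r + 2r of them, which is where the training-time saving comes from.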

Improvement 2. We apply a strategy which can reduce the false acceptance rate by means of changing the weight distribution of weak classifiers.

The strong classifier is composed of weighted weak classifiers. The smaller the error rate a weak classifier possesses, the bigger the weight assigned to it. The error rate is decided by the training samples, in which the positive and negative samples are equally important. Ear detection experimental results show that, with the traditional Adaboost algorithm, the false acceptance rate is not acceptable. So we improve the training procedure of the weak classifiers by proposing that the weight distribution among the weak classifiers not only is decided by the total error rate but also is concerned with the negative samples.

So we improve the Adaboost algorithm by including a term k·e^{q_t} to give higher weights to those weak classifiers that have a lower false acceptance rate on negative samples. Here q_t is the weight sum of those negative samples that have been classified correctly, representing the classification ability of the weak classifier on negative samples, and k is used to constrain the impact of q_t on the weight of the weak classifier. The modified Adaboost algorithm is as follows.

Step 1. Given a weak classifier learning algorithm and a training sample set (x_1, y_1), ..., (x_n, y_n), where x_i is the training sample feature vector and y_i ∈ {−1, +1} for negative and positive examples, respectively.

Step 2. Initialize the weights D_1(x_i) = 1/n, i = 1, 2, ..., n.

Step 3. Repeat for t = 1, 2, ..., T and do the following.

(1) Train the weak classifier learning algorithm with the weights D_t, and return the weak classifier h_t: X → {−1, +1}.

(2) Compute the error rate

ε_t = Σ_{i=1}^{n} D_t(x_i)[h_t(x_i) ≠ y_i],  (3)

and get the weight sum of those negative samples that are classified correctly as

q_t = Σ_{i=1}^{n} D_t(x_i)[y_i = −1, h_t(x_i) = −1].  (4)

(3) Get the updating parameter α_t and the weight parameter of the weak classifier α_{t-new}:

α_t = (1/2) ln((1 − ε_t)/ε_t),
α_{t-new} = (1/2) ln((1 − ε_t)/ε_t) + k·e^{q_t},  (5)

where k is a constant value. The value of k should guarantee that the newly added weak classifier will reduce the upper boundary of the minimum error rate. In this paper, k is set to 0.018.

(4) Update the weights:

D_{t+1}(x_i) = (D_t(x_i)/Z_t) × { e^{−α_t}, if h_t(x_i) = y_i;  e^{α_t}, if h_t(x_i) ≠ y_i },  (6)

where Z_t is a normalization factor chosen so that Σ_{i=1}^{n} D_{t+1}(x_i) = 1.

Step 4. After T loops, output the strong classifier

H(x) = sign(Σ_{t=1}^{T} α_{t-new} h_t(x) − Th),  (7)

where Th is the threshold corresponding with the error rate.
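The modified weak-classifier weight of eq. (5) differs from standard AdaBoost only by the bonus term k·e^{q_t}, which can be written out directly:

```python
import numpy as np

def weak_weight(eps, q, k=0.018):
    # Eq. (5): standard AdaBoost weight alpha_t plus a bonus k * e^{q}
    # for weak classifiers that correctly classify a larger weight-share
    # q of the negative samples (lower false acceptance rate).
    alpha = 0.5 * np.log((1 - eps) / eps)
    return alpha + k * np.exp(q)
```

With this weighting, two weak classifiers with the same total error rate no longer receive the same vote: the one that handles negatives better is favored, which is exactly the mechanism Improvement 2 describes.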

Improvement 3. We apply a new parameter, called the elimination threshold, to improve the robustness of the detector and prevent overfitting.

By analyzing human ear samples, we find that the global structure of human ears is similar: the shape of the outer contour is almost oval, and all human ears have similar shapes of helix, antihelix, and concha. These similar global features are helpful for the training of an ear/non-ear classifier. But if we look into more details, we find that each ear has its unique features or measures on different ear components. The differences in details make the Adaboost based two-class classifier more difficult to construct. Here we regard those ear samples that present special detail components as "difficult samples". These samples will get more weight during the weak classifier learning process, because the weak classifier will always try to get them classified correctly, and this will ultimately incur overfitting. In order to prevent overfitting, we apply a new parameter called the "elimination threshold Hw" to improve the robustness of the ear detector. During the training process, the ear samples with weight greater than Hw will be eliminated.
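The elimination step slots into the boosting loop as a filter on the sample weights; the value of Hw below is a placeholder, since the paper does not report the threshold it used:

```python
import numpy as np

def eliminate_difficult(X, y, D, Hw=0.01):
    # Improvement 3: drop "difficult samples" whose boosting weight has
    # grown past the elimination threshold Hw, then renormalize the
    # weights of the remaining samples.
    keep = D <= Hw
    return X[keep], y[keep], D[keep] / D[keep].sum()
```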

Improvement 4. We propose the "single-ear-detection strategy" in view of the asymmetry of the left ear and the right ear. This strategy trains the detector with samples of single ears. After the training process, we get a left ear detector and a right ear detector, which are called "single-ear detectors". On an input image, both detectors are applied to locate the ear part. In our previous work, both left ear and right ear images were used for training; we call this the "dual-ear detector".

The Scientific World Journal 5

Table 2: Detection performance comparison: FRR and FAR.

Test dataset | Number of ear images | Previous work [6] (dual-ear detector): FRN/FRR, FAN/FAR | This paper (dual-ear detector): FRN/FRR, FAN/FAR | This paper (left + right ear detector): FRN/FRR, FAN/FAR
CAS-PEAL | 166 | 5/3.0%, 6/3.6% | 5/3.0%, 2/1.2% | 3/1.8%, 2/1.2%
UMIST | 48 | 1/2.1%, 0/0% | 0/0%, 0/0% | 1/2.1%, 0/0%
USTB220 | 220 | 1/0.5%, 5/2.3% | 0/0%, 5/2.3% | 0/0%, 1/0.5%

Figure 1: Performance comparison between the single-ear detector (left) and the dual-ear detector in this paper (false acceptance rate versus layer number).

Compared with the dual-ear detector, the single-ear detector can improve the detection performance.

Table 2 shows the experimental results on three ear image datasets. These datasets are the same as used in our previous work [6]. In Table 2, FRN means false rejection number, FRR means false rejection rate, FAN means false acceptance number, and FAR means false acceptance rate. In Figure 11 we compare the number of weak classifiers on each layer among the dual-ear detector in [6], the dual-ear detector in this paper, and the single-ear detector in this paper. As we can see from Figure 11, compared with the dual-ear detector, the single-ear detector contains fewer weak classifiers on each layer. With fewer weak classifiers, the training time can be reduced to about 50% (CPU 3.0 GHz, RAM 2 GB; the number of positive training samples is 11000, the number of negative training samples is 5000, and the sample size is 60 × 120). The training time for the left ear detector, right ear detector, and dual-ear detector is about 11.1 hours, 10.0 hours, and 21.7 hours, respectively.

For each layer we compare the detection performance between the single-ear detector and the dual-ear detector in this paper. For each layer, the difference in detection rate between these two detectors is very limited (not more than 1%), but they differ in false acceptance rate: as shown in Figure 1, the single-ear detector has a lower false acceptance rate, which means that it performs better on the negative samples. Figure 2(a) shows an example of detection results on the registered images of one subject; Figure 2(b) shows the extracted ear part.

2.2. Ear Normalization Using Active Shape Model. As we can see from the second image in Figure 2(b), the segmented image contains some background information other than the pure ear, so we need to further extract the pure ear for the subsequent ear feature extraction.

We apply an automatic ear normalization method based on an improved Active Shape Model (ASM) [8]. In the offline training step, ASM is used to create the Point Distribution Model from the landmark points on the ear images of the training set. This model is then applied to the test ear images to search for the outer ear contour. The final step is to normalize the ear image to a standard size and direction according to the long axis of the outer ear contour. The long axis is defined as the line crossing through the two points



Figure 2: Ear detection experimental examples.


Figure 3: Example images of ear normalization.

which have the longest distance on the ear contour. After normalization, the long axes of different ear images will be normalized to the same length and the same direction. This process is shown in Figure 3(a). The size of the normalized image is 60 × 117 pixels; this ratio of 1.95 is set based on statistical research.

After converting the color images to gray images, we use histogram equalization to eliminate lighting variations among different subjects. Figure 3(b) shows some example images after ear normalization.
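A minimal sketch of this normalization pipeline (long-axis search, rotation angle, histogram equalization), using plain NumPy; the helper names and the toy contour are our own, and the actual ASM contour search and image resampling are omitted:

```python
import numpy as np

def long_axis(contour):
    """Return the two contour points with the largest pairwise distance,
    i.e. the 'long axis' used as the normalization reference."""
    d = np.linalg.norm(contour[:, None, :] - contour[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return contour[i], contour[j]

def axis_angle(p, q):
    """Rotation (degrees) that would make the long axis vertical (mod 180)."""
    dy, dx = q[1] - p[1], q[0] - p[0]
    return 90.0 - np.degrees(np.arctan2(dy, dx))

def equalize_hist(gray):
    """Plain-NumPy histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size
    return (cdf[gray] * 255).astype(np.uint8)

# toy contour: an ellipse whose long (length-60) axis lies along y
t = np.linspace(0.0, 2.0 * np.pi, 100)
contour = np.stack([10.0 * np.cos(t), 30.0 * np.sin(t)], axis=1)
p, q = long_axis(contour)
angle = axis_angle(p, q)  # rotation needed before resizing to 60 x 117
```

After the rotation, cropping and scaling the ear so the long axis has a fixed length yields the 60 × 117 normalized image described above.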

3. Ear Recognition Based on Gabor Features

For feature extraction, a Gabor filter is applied on the ear images to extract spatial local features of different orientations and scales. The Gabor features are of high dimension, so Kernel Fisher Discriminant Analysis is further applied for dimension reduction. Then a distance based classifier is applied for ear recognition.

3.1. Gabor Feature Extraction. The ear has its distinct features compared with the face, such as the texture features on different orientations. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. A two-dimensional Gabor function is defined as [9]:

$$g(x, y) = \exp\left(-\frac{x'^2 + y'^2}{2\sigma^2}\right)\cos\left(\frac{2\pi x'}{\lambda} + \phi\right),$$

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix},$$

$$\frac{\sigma}{\lambda} = \frac{1}{\pi}\sqrt{\frac{\ln 2}{2}} \cdot \frac{2^{b} + 1}{2^{b} - 1} \quad (8)$$

Here λ is the wavelength of the cosine factor of the Gabor filter kernel; its value is specified in pixels. θ specifies the orientation of the normal to the parallel stripes of the Gabor function; its value is specified in degrees. σ is the standard



Figure 4: Gabor ear image. (a) The real part of the Gabor kernel. (b) The magnitude spectrum of the Gabor feature of the ear image on 4 orientations. (c) The magnitude spectrum of the Gabor feature of the ear image on 8 orientations.

deviation of the Gaussian factor, and b is the half-response spatial frequency bandwidth.

The Gabor feature of an ear image is the convolution of the ear image and the Gabor kernel function, as shown in

$$J(x, y; m, n) = \iint I(x', y')\, g_{mn}(x - x',\, y - y')\, dx'\, dy' \quad (9)$$

where m and n stand for the scale and orientation of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients. Here we keep the magnitude of the Gabor filter response as the ear feature. Figure 4 shows the Gabor feature of the ear image. Figure 4(a) shows the real part of the Gabor kernel on 3 scales (λ = 20, 25, and 30, resp., from the first to the third row; θ = 0, π/4, π/2, and 3π/4, resp., from the first to the fourth column). Figure 4(b) shows the magnitude spectrum of the Gabor features corresponding to Figure 4(a), and Figure 4(c) shows the magnitude spectrum of the Gabor features corresponding to 5 scales and 8 orientations. In this way we can get the texture features on different orientations of the ear image. As we can see from Figure 4, the texture features of the human ear are salient enough on the four directions in Figure 4(b), so we choose these 4 orientations for further application in Section 3.2.
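A hedged sketch of this filter bank: the kernel follows Eq. (8) with σ derived from λ and the bandwidth b, and the convolution of Eq. (9) is done circularly via the FFT. The window half-size, b = 1, φ = 0, and the requirement that the image be at least kernel-sized are our assumptions:

```python
import numpy as np

def gabor_kernel(lam, theta, b=1.0, phi=0.0, half=15):
    """Gabor kernel of Eq. (8): wavelength lam (pixels), orientation theta
    (radians); sigma follows from lam and the bandwidth b."""
    sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * (2**b + 1) / (2**b - 1)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xp**2 + yp**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xp / lam + phi)

def gabor_magnitude_features(img, lams=(20, 25, 30),
                             thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Convolve img with the 3-scale x 4-orientation bank and keep the
    magnitude of each response (circular convolution via the FFT)."""
    out = []
    F = np.fft.fft2(np.asarray(img, float))
    for lam in lams:
        for th in thetas:
            k = gabor_kernel(lam, th)
            pad = np.zeros_like(img, dtype=float)
            pad[:k.shape[0], :k.shape[1]] = k
            # center the kernel at the origin so the response is not shifted
            pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
            out.append(np.abs(np.fft.ifft2(F * np.fft.fft2(pad))))
    return out
```

For the 60 × 117 normalized ear this yields 12 magnitude maps, whose coefficients feed the downsampling step of Section 3.2.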

3.2. Dimension Reduction Using KFDA. For an H × W pixels image, the dimension of the Gabor feature is H × W × m × n. So a downsampling scheme is applied for dimension reduction of the high-dimensional Gabor features. Suppose the downsampling factor is r, and the Gabor coefficients are normalized to zero mean and unit variance. Then we concatenate all the coefficients $G^{r}_{uv}$ to form the Gabor feature set $G^{r}$:

$$G^{r} = \left((G^{r}_{00})^{t}, (G^{r}_{01})^{t}, \ldots, (G^{r}_{23})^{t}\right)^{t} \quad (10)$$

Suppose that r = 64; then the feature dimension of an ear image with the size of 60 × 117 will be 1317. The downsampling factor is empirically selected and set to 64 for all the experiments in this paper. The feature dimension then needs to be further reduced. In this paper, the Full Space Kernel Fisher Discriminant Analysis (FSKFDA) is used for feature reduction [10]. FSKFDA makes full use of the discriminant information in the full space of the within-class scatter, namely, the discriminant information in both the null space and the non-null space of the within-class scatter. Figure 5 shows the detailed flowchart of this method.
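The downsampling-and-concatenation step of (10) can be sketched as below; whether normalization happens before or after downsampling is not specified in the text, so the order here is an assumption (the length 1317 matches 60 × 117 × 12 / 64 rounded up):

```python
import numpy as np

def gabor_feature_vector(responses, r=64):
    """Concatenate the magnitude responses, keep every r-th coefficient,
    and normalize to zero mean and unit variance (cf. Eq. (10))."""
    v = np.concatenate([g.ravel() for g in responses])[::r]
    return (v - v.mean()) / v.std()

rng = np.random.default_rng(0)
responses = [rng.random((60, 117)) for _ in range(12)]  # 3 scales x 4 orientations
feat = gabor_feature_vector(responses)  # length 1317, as stated in the text
```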

4. Experimental Results

4.1. Ear Recognition Experiment. In this experiment we select two datasets: the USTB dataset3 [11] and a subset of UND collection J2 [12]. The USTB dataset3 contains 79 subjects, and the images of each subject are taken under orientation variation. Five images for each subject are selected. These images are 10° and 5° of left rotation and 0°, 5°, and 10° of right rotation, respectively. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 2/395 and a false alarm rate of zero on this USTB dataset3. Figure 6 shows five source images and normalized ear images of one subject.

In UND collection J2, we selected 150 subjects which possess more than 6 images per subject. Six images per subject are selected for the experiment. There exist illumination and pose variations in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows six source images and normalized ear images of one subject.

In the training stage, we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, the test image in the probe set is projected to the feature space to get its feature vector. Then the cosine distance measure based classifier is applied for classification. In (11), x and y represent two Gabor feature vectors:

$$\delta_{\cos}(x, y) = \frac{-x^{T} y}{\|x\|\, \|y\|} \quad (11)$$
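A minimal sketch of the cosine-distance nearest-neighbor classifier built on (11); the gallery layout and the labels are illustrative:

```python
import numpy as np

def cosine_distance(x, y):
    """Eq. (11): more negative means more similar."""
    return -(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def nearest_neighbor(probe, gallery, labels):
    """Assign the label of the gallery feature at minimum cosine distance."""
    d = [cosine_distance(probe, g) for g in gallery]
    return labels[int(np.argmin(d))]

gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = ["subject-A", "subject-B"]
match = nearest_neighbor(np.array([0.9, 0.1]), gallery, labels)  # "subject-A"
```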

For the kernel function in KFDA, we apply linear kernel, polynomial kernel, RBF kernel, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out


[Figure 5 flowchart, summarized: (1) map the Gabor feature set X into the feature space F via the nonlinear projection φ(·), using the kernel trick (e.g., $k(x_i, x_j) = \exp(-\|x_i - x_j\|^2/\sigma^2)$) to compute the kernel matrix; (2) compute the total scatter matrix $S_t^{\phi}$ of the projected training samples and its non-null space (the eigenvectors $V_t$ corresponding to its nonzero eigenvalues); (3) project the training samples onto this non-null space, $Y = (V_t^{\phi})^T X^{\phi}$, and compute the between-class scatter $S_b^{\phi}$ and within-class scatter $S_w^{\phi}$ of the projection Y; (4) compute the optimal discriminant vectors in the null space of $S_w^{\phi}$ (project $S_b^{\phi}$ into the null space and take the eigenvectors of the d largest eigenvalues, giving $V_1$) and, following the Fisher criterion, in the non-null space of $S_w^{\phi}$ (the eigenvectors of the t largest eigenvalues of $(S_w^{\phi})^{-1} S_b^{\phi}$, giving $V_2$); (5) combine $V = [V_1, V_2]^T$ into the final optimal discriminant projection matrix and extract the feature vector $Z = V Y$.]

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis.
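For illustration only, here is the standard two-class kernel Fisher discriminant with a regularized within-class scatter; it is a simplification, not the full-space (null-space plus non-null-space) treatment of [10], and all names and parameter values are ours:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1e-4):
    """RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_direction(X, y, gamma=1e-4, reg=1e-3):
    """Two-class kernel Fisher discriminant: dual coefficients alpha such
    that a sample z projects to rbf_kernel(z, X) @ alpha."""
    K = rbf_kernel(X, X, gamma)
    m1 = K[:, y == 1].mean(axis=1)     # kernelized class means
    m2 = K[:, y == -1].mean(axis=1)
    N = np.zeros((len(y), len(y)))     # kernelized within-class scatter
    for c in (1, -1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    N += reg * np.eye(len(y))          # regularization stands in for the null-space step
    return np.linalg.solve(N, m1 - m2)

# two toy classes; the projected class means must come out separated
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
alpha = kfda_direction(X, y, gamma=0.5)
proj = rbf_kernel(X, X, 0.5) @ alpha
```

The difference of the projected class means equals $(m_1 - m_2)^T N^{-1} (m_1 - m_2)$, which is positive because the regularized scatter is positive definite.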

Figure 6: Sample images from USTB dataset3.


Figure 7: Sample images from the UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel        Definition                                                Parameters                 USTB dataset3   UND dataset
Linear        $K(x, x_i) = (x \cdot x_i)$                               none                       75.44%          74.22%
Polynomial    $K(x, x_i) = [a(x \cdot x_i) + b]^d$                      a = 0.001, b = 1, d = 2    89.11%          87.33%
RBF           $K(x, x_i) = \exp(-\gamma \|x - x_i\|^2)$                 γ = 0.0001                 96.46%          94%
Cosine        $\tilde{k}(x, y) = k(x, y)/\sqrt{k(x, x)\,k(y, y)}$,      a = 0.001, b = 1, d = 2    92.41%          90.22%
              with $k(x, y) = [a(x \cdot y) + b]^d$

Table 4: Rank-1 recognition rate with and without ear normalization (the original table also shows example enrolled ear images for each scenario).

Dataset          Without normalization    With normalization
USTB dataset3    93.58%                   96.46%
UND dataset      91.11%                   94%

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets performs best with the RBF kernel.
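The leave-one-out rank-1 protocol can be sketched as follows (each image serves once as the probe against a gallery of all remaining images); the toy features and labels are our own:

```python
import numpy as np

def cosine_distance(x, y):
    return -(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def rank1_leave_one_out(features, labels, dist):
    """Each image serves once as the probe; the gallery is every other
    image. A hit is a nearest neighbor with the probe's subject label."""
    n = len(labels)
    hits = 0
    for i in range(n):
        d = [dist(features[i], features[j]) if j != i else np.inf
             for j in range(n)]
        hits += labels[int(np.argmin(d))] == labels[i]
    return hits / n

features = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
labels = ["A", "A", "B", "B"]
rate = rank1_leave_one_out(features, labels, cosine_distance)  # 1.0
```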

Table 4 compares the rank-1 recognition result using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. We can see from the pictures in Table 4 that, in the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment; all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

[Figure 8 block diagram: a still image or video sequence passes through ear detection, ear tracking and extraction, ear normalization, and Gabor feature extraction; the offline training process applies ear normalization and Gabor feature extraction to the ear image dataset to build the ear feature set, against which online ear authentication is performed.]

Figure 8: Framework flow of the ear authentication system.

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU i5 M520 2.40 GHz, RAM 2.00 GB). There are a total of 59 subjects registered. For each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process, and Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, in which the time consumption for the ear detector to extract the pure ear is about 150 ms. Figure 10



Figure 9: Ear authentication system. (a) Authentication scenario. (b) Ear enrollment interface. (c) Ear authentication interface.

[Plot: genuine acceptance rate versus false acceptance rate, both from 0 to 1.]

Figure 10: ROC curve of the ear authentication system.

shows the ROC curve of the authentication performance. The EER is about 4%. The experimental result shows that ear biometrics is a feasible candidate for human identification.

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing the ear part based on the long axis of the outer ear contour and (2) extracting the discriminating Gabor features of the ear images using the Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will be focused on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for


[Plot: number of weak classifiers (0-50) versus layer number (0-18); curves: left ear detector, right ear detector, method in [6], and method in this paper.]

Figure 11: Comparison of the number of weak classifiers on each layer.

the earlobe landmarks, and make the searching process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.

[2] A. Abaza, A. Ross, C. Hebert et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.

[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.

[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.

[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.

[6] L. Yuan and F. Zhang, "Ear detection based on improved AdaBoost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.

[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.

[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.

[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).

[11] http://www.ustb.edu.cn/resb/

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision and Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.



Table 1: Continued.

[24]: Wang et al. (2008) proposed ear recognition based on Local Binary Patterns. Ear images were decomposed by the Haar wavelet transform; then uniform LBP combined with block-based and multiresolution methods was applied to describe the texture features. Finally, the texture features are classified by the nearest neighbor method. Dataset: USTB dataset3 (79 subjects). Performance: rank-1 ≥ 92% (for pose variation within 20°).

[25]: Zhou et al. (2010) proposed ear recognition via sparse representation. Gabor features are used to develop a dictionary; classification is performed by extracting features from the test data and using the dictionary to represent them, and the class of the test data is then determined based upon the involvement of the dictionary entries in its representation. Dataset: UND G subset (39 subjects). Performance: rank-1 98.46% (4 images for training and 2 images for testing).

that of the ear to be authenticated. So many appearance based methods will not work in this situation. This means that there exists a gap between ear detection and feature extraction. This gap is automatic ear normalization, very much like face normalization, which means that we have to set up a standard to normalize the ear into the same size. In this paper we combine ear detection and ear normalization into one stage named ear enrollment. To the best of our knowledge, the research on ear enrollment is still an open area.

The rest of this paper is organized as follows. Section 2 describes our automated ear enrollment approach. Section 3 details the feature extraction approach using Gabor filters and Kernel Fisher Discriminant Analysis. Section 4 conducts ear recognition and ear authentication experiments to evaluate the proposed methods. Finally, concluding remarks are drawn in Section 5.

2. Ear Enrollment

This section will detail the ear enrollment process, which includes ear detection and ear normalization. The ear detection approach, based on our modified Adaboost algorithm, detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. We have made some modifications compared with our previous work on ear detection [6]. Then the Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size.

2.1. Automatic Ear Detector Based on Modified Adaboost Algorithm. In our previous work [6], we proposed an ear detection approach under complex background which has two stages: offline cascaded classifier training and online ear detection. The cascaded classifier is composed of multiple strong classifiers, and each strong classifier is composed of multiple weak classifiers. The training process of a strong classifier is as follows.

(1) Given $N$ example images $(x_1, y_1), \ldots, (x_n, y_n)$, where $x \in \mathbb{R}^k$ and $y_i \in \{-1, 1\}$ for negative and positive examples, respectively.

(2) Initialize the weights $D_1(x_i) = 1/n$, $i = 1, 2, \ldots, n$.

(3) Repeat for $t = 1, 2, \ldots, T$, where $T$ is the total number of weak classifiers, and do the following:

(i) train the weak classifiers with weights $D_t$ and get the regression function $h_t: X \rightarrow \{-1, +1\}$; the nearest neighbor classifier is used as the weak classifier;

(ii) get the error rate of the regression function $\varepsilon_t = \sum_{i=1}^{n} D_t(x_i)\,[h_t(x_i) \neq y_i]$;

(iii) set the weight of the weak classifier $\alpha_t = \frac{1}{2}\ln((1 - \varepsilon_t)/\varepsilon_t)$;

(iv) update the weights of the training samples:

$$D_{t+1}(x_i) = \frac{D_t(x_i)}{Z_t} \times \begin{cases} e^{-\alpha_t}, & \text{if } h_t(x_i) = y_i \\ e^{\alpha_t}, & \text{if } h_t(x_i) \neq y_i \end{cases} \quad (1)$$

where $Z_t$ is the normalization factor ensuring $\sum_{i=1}^{n} D_{t+1}(x_i) = 1$.

(4) After $T$ rounds of training, the final strong classifier is

$$H(x) = \operatorname{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x) - \mathrm{Th}\right) \quad (2)$$

where Th is the decision threshold decided by the false classification rate.
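One boosting round and the final vote of (1)-(2) can be sketched as below, assuming the weak-classifier outputs on the training set are precomputed:

```python
import numpy as np

def adaboost_round(D, h_out, y):
    """One boosting round on precomputed weak-classifier outputs h_out:
    weighted error, classifier weight alpha, and the update of Eq. (1)."""
    eps = D[h_out != y].sum()
    alpha = 0.5 * np.log((1 - eps) / eps)
    D_new = D * np.exp(np.where(h_out == y, -alpha, alpha))
    return alpha, D_new / D_new.sum()   # division by Z_t keeps the sum at 1

def strong_classifier(alphas, weak_outputs, threshold):
    """Eq. (2): sign of the alpha-weighted vote minus the threshold Th."""
    return np.sign(np.asarray(alphas) @ np.asarray(weak_outputs) - threshold)

D = np.full(4, 0.25)
y = np.array([1, 1, -1, -1])
alpha, D2 = adaboost_round(D, np.array([1, -1, -1, -1]), y)
# the single misclassified sample ends up holding half of the total weight
```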

The training process of the algorithm mentioned above is time consuming. Also, the false rejection rate and false acceptance rate need to be lowered for real application scenarios. Based on the structural features of the ear itself, we propose the following four improvements on the traditional AdaBoost algorithm in view of these deficiencies.

Improvement 1. In the Adaboost algorithm, the selection of the optimum classification threshold of the Haar-like features is very important for the weak classifier learning algorithm. This procedure is time consuming, so we propose a "segment-selecting method" to choose the optimum threshold of the weak classifiers.

Step 1. We divide the feature value space, composed of the feature values of all the training samples for each Haar-like feature, into n parts with r feature values per part. Suppose that the original feature value space is g(i) (i = 0, ..., N), where N is the total number of positive and negative samples. The new feature value space is g(k) (k = 0, r, 2r, ..., nr).

Step 2. In the new space, search for the optimum threshold g(j). Then, back in the original feature value space and centered at g(j), we search the region from g(j − r) to g(j + r) to find the optimum threshold.
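A sketch of this two-step coarse-to-fine search; the cost function is abstracted as a callable, which is our own framing of the weighted-error criterion:

```python
import numpy as np

def segment_select_threshold(values, cost, r):
    """Two-step search of Improvement 1: scan every r-th sorted feature
    value for the best coarse threshold g(j), then refine over the
    original values in [g(j - r), g(j + r)]."""
    g = np.sort(np.asarray(values, float))
    coarse = np.arange(0, len(g), r)
    j = coarse[int(np.argmin([cost(g[i]) for i in coarse]))]
    lo, hi = max(j - r, 0), min(j + r, len(g) - 1)
    fine = g[lo:hi + 1]
    return fine[int(np.argmin([cost(v) for v in fine]))]

# toy cost: distance to an 'ideal' split at 7.3 over values 0.0 ... 9.9
best = segment_select_threshold(np.arange(100) * 0.1,
                                lambda v: abs(v - 7.3), r=10)
```

Only about n + 2r cost evaluations are needed instead of N, which is the source of the training-time saving.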

Improvement 2. We apply a strategy which can reduce the false acceptance rate by means of changing the weight distribution of the weak classifiers.

The strong classifier is composed of weighted weak classifiers: the smaller the error rate a weak classifier possesses, the bigger the weight assigned to it. The error rate is decided by the training samples, where the positive and negative samples are equally important. Ear detection experimental results show that, with the traditional Adaboost algorithm, the false acceptance rate is not acceptable. So we improve the training procedure of the weak classifiers by proposing that the weight distribution among the weak classifiers not only is decided by the total error rate but also is concerned with the negative samples.

So we improve the Adaboost algorithm by including a term $k e^{q_t}$ that gives higher weights to those weak classifiers that have a lower false acceptance rate on negative samples. Here $q_t$ is the weight sum of those negative samples that have been classified correctly, representing the classification ability of the weak classifier on negative samples, and $k$ is used to constrain the impact of $q_t$ on the weight of the weak classifier. The modified Adaboost algorithm is as follows.

Step 1. Given a weak classifier learning algorithm and a training sample set $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i$ is the training sample feature vector and $y_i \in \{-1, +1\}$ for negative and positive examples, respectively.

Step 2. Initialize the weights $D_1(x_i) = 1/n$, $i = 1, 2, \ldots, n$.

Step 3. Repeat for $t = 1, 2, \ldots, T$ and do the following.

(1) Train the weak classifier learning algorithm with the weights $D_t$, and return the weak classifier $h_t: X \rightarrow \{-1, +1\}$.

(2) Compute the error rate

$$\varepsilon_t = \sum_{i=1}^{n} D_t(x_i)\,[h_t(x_i) \neq y_i] \quad (3)$$

and get the weight sum of those negative samples that are classified correctly:

$$q_t = \sum_{i=1}^{n} D_t(x_i)\,[y_i = -1,\; h_t(x_i) = -1] \quad (4)$$

(3) Get the updating parameter $\alpha_t$ and the weight parameter of the weak classifier $\alpha_{t\text{-new}}$:

$$\alpha_t = \frac{1}{2}\ln\left(\frac{1 - \varepsilon_t}{\varepsilon_t}\right), \qquad \alpha_{t\text{-new}} = \frac{1}{2}\ln\left(\frac{1 - \varepsilon_t}{\varepsilon_t}\right) + k e^{q_t} \quad (5)$$

where $k$ is a constant value; its value should guarantee that the newly added weak classifier reduces the upper boundary of the minimum error rate. In this paper $k$ is set to 0.018.

(4) Update the weights:

$$D_{t+1}(x_i) = \frac{D_t(x_i)}{Z_t} \times \begin{cases} e^{-\alpha_t}, & \text{if } h_t(x_i) = y_i \\ e^{\alpha_t}, & \text{if } h_t(x_i) \neq y_i \end{cases} \quad (6)$$

where $Z_t$ is a normalization factor chosen so that $\sum_{i=1}^{n} D_{t+1}(x_i) = 1$.

Step 4. After T loops, output the strong classifier

$$H(x) = \operatorname{sign}\left(\sum_{t=1}^{T} \alpha_{t\text{-new}}\, h_t(x) - \mathrm{Th}\right), \qquad (7)$$

where Th is the threshold corresponding with the error rate.
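The training loop above can be sketched in code. The following is a minimal NumPy sketch, not the authors' implementation: one-dimensional threshold stumps stand in for the Haar-feature weak learners used in cascade training, and all function names are illustrative.

```python
import numpy as np

def train_stump(X, y, w):
    """Exhaustive search over 1-D threshold stumps; returns the stump
    (err, feature index, threshold, polarity) with minimum weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def stump_predict(stump, X):
    _, j, thr, pol = stump
    return np.where(pol * (X[:, j] - thr) >= 0, 1, -1)

def modified_adaboost(X, y, T=10, k=0.018, eps=1e-10):
    n = len(y)
    D = np.full(n, 1.0 / n)                      # Step 2: uniform weights
    ensemble = []
    for _ in range(T):                           # Step 3
        stump = train_stump(X, y, D)             # (1) train weak classifier
        pred = stump_predict(stump, X)
        err = D[pred != y].sum()                 # (2) error rate, Eq. (3)
        q = D[(y == -1) & (pred == -1)].sum()    #     q_t, Eq. (4)
        alpha = 0.5 * np.log((1 - err + eps) / (err + eps))
        alpha_new = alpha + k * np.exp(q)        # (3) Eq. (5)
        D = D * np.exp(-alpha * y * pred)        # (4) Eq. (6): e^{-a} if correct
        D = D / D.sum()                          #     normalization by Z_t
        ensemble.append((alpha_new, stump))
    return ensemble

def strong_classify(ensemble, X, Th=0.0):
    # Step 4, Eq. (7): weighted vote of weak classifiers against threshold Th.
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.where(score - Th >= 0, 1, -1)
```

Note that only the classifier weight α_{t-new} changes relative to standard AdaBoost; the sample-weight update in Eq. (6) still uses the unmodified α_t.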

Improvement 3. We apply a new parameter called the elimination threshold to improve the robustness of the detector and prevent overfitting.

By analyzing human ear samples, we find that the global structure of human ears is similar: the shape of the outer contour is almost oval, and all human ears have a similar shape of helix, antihelix, and concha. These similar global features are helpful for the training of an ear/non-ear classifier. But if we look into more detail, we find that each ear has its unique features or measures on different ear components. The differences in detail make the Adaboost based two-class classifier more difficult to construct. Here we regard those ear samples that present special detail components as "difficult samples". These samples will get more weight during the weak classifier learning process, because the weak classifier will always try to get them classified correctly. This will ultimately incur overfitting. In order to prevent overfitting, we apply a new parameter called the "elimination threshold Hw" to improve the robustness of the ear detector. During the training process, the ear samples with weight greater than Hw will be eliminated.
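A minimal sketch of this elimination step, assuming the threshold Hw is supplied by the caller (the paper does not report its value); applied between boosting rounds, it removes over-weighted "difficult samples" and renormalizes the remaining weights:

```python
import numpy as np

def eliminate_difficult_samples(X, y, D, Hw):
    """Drop training samples whose weight exceeds the elimination
    threshold Hw, then renormalize the remaining weights to sum to one."""
    keep = D <= Hw
    D_kept = D[keep]
    return X[keep], y[keep], D_kept / D_kept.sum()
```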

Improvement 4. We propose the "single-ear-detection strategy" in view of the asymmetry of the left ear and right ear. This strategy trains the detector with samples of single ears. After the training process, we get a left ear detector and a right ear detector, which are called "single-ear detectors". On an input image, both detectors are applied to locate the ear part. In our previous work, both left ear and right ear images were used for training; we call this the "dual-ear detector".

The Scientific World Journal 5

Table 2: Detection performance comparison (FRN/FRR and FAN/FAR).

| Test dataset | Number of ear images | Previous work [6] (dual-ear): FRN/FRR | FAN/FAR | Proposed (dual-ear): FRN/FRR | FAN/FAR | Proposed (left + right ear detector): FRN/FRR | FAN/FAR |
| CAS-PEAL | 166 | 5/3.0% | 6/3.6% | 5/3.0% | 2/1.2% | 3/1.8% | 2/1.2% |
| UMIST | 48 | 1/2.1% | 0/0% | 0/0% | 0/0% | 1/2.1% | 0/0% |
| USTB220 | 220 | 1/0.5% | 5/2.3% | 0/0% | 5/2.3% | 0/0% | 1/0.5% |

Figure 1: Performance comparison between single-ear detector and dual-ear detector (false acceptance rate versus layer number, for the single-ear detector (left) and the dual-ear detector in this paper).

Compared with the dual-ear detector, the single-ear detector can improve the detection performance.

Table 2 shows the experimental results on three ear image datasets. These datasets are the same as used in our previous work [6]. In Table 2, FRN means false rejection number, FRR means false rejection rate, FAN means false acceptance number, and FAR means false acceptance rate. In Figure 11 we compare the number of weak classifiers on each layer among the dual-ear detector in [6], the dual-ear detector in this paper, and the single-ear detector in this paper. As we can see from Figure 11, compared with the dual-ear detector, the single-ear detector contains fewer weak classifiers on each layer. With fewer weak classifiers, the training time can be reduced by about 50% (CPU 3.0 GHz, RAM 2 GB; the number of positive training samples is 11000, the number of negative training samples is 5000, and the sample size is 60 × 120). The training time for the left ear detector, right ear detector, and dual-ear detector is about 11.1 hours, 10.0 hours, and 21.7 hours, respectively.

For each layer, we compare the detection performance between the single-ear detector and the dual-ear detector in this paper. For each layer, the difference in detection rate between these two detectors is very limited (not more than 1%), but they differ in false acceptance rate. As shown in Figure 1, the single-ear detector has a lower false acceptance rate, which means that it performs better on the negative samples. Figure 2(a) shows an example of detection results on the registered images of one subject, and Figure 2(b) shows the extracted ear part.

2.2. Ear Normalization Using Active Shape Model. As we can see from the second image in Figure 2(b), the segmented image contains some background information other than the pure ear, so we need to further extract the pure ear for the subsequent ear feature extraction.

We apply an automatic ear normalization method based on an improved Active Shape Model (ASM) [8]. In the offline training step, ASM was used to create the Point Distribution Model using the landmark points on the ear images of the training set. This model was then applied to the test ear images to search for the outer ear contour. The final step was to normalize the ear image to standard size and direction according to the long axis of the outer ear contour. The long axis was defined as the line crossing through the two points which have the longest distance on the ear contour. After normalization, the long axes of different ear images will be normalized to the same length and same direction. This process is shown in Figure 3(a). The size of the normalized image is 60 × 117 pixels; this aspect ratio of 1.95 is set based on statistical research.

Figure 2: Ear detection experimental examples.

Figure 3: Example images of ear normalization.
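The long-axis definition above can be computed directly from the contour landmarks. A brute-force sketch (illustrative, not the authors' code; it is O(N²) in the number of contour points, which is acceptable for ASM contours of a few dozen landmarks):

```python
import numpy as np

def long_axis(contour):
    """Find the two contour points with the largest pairwise Euclidean
    distance; the line through them defines the long axis of the ear."""
    pts = np.asarray(contour, dtype=float)
    # Pairwise distance matrix between all contour points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    return pts[i], pts[j], d[i, j]
```

The angle of the returned segment then gives the rotation needed to align all ears to the same direction, and its length gives the scale factor.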

After converting the color images to gray images, we use histogram equalization to eliminate lighting variations among different subjects. Figure 3(b) shows some example images after ear normalization.
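Histogram equalization as used here is the standard technique; a NumPy sketch for 8-bit grayscale images:

```python
import numpy as np

def histogram_equalize(gray):
    """Map each gray level through the normalized cumulative histogram,
    spreading the intensities over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[hist > 0][0]                 # cdf value of first used level
    lut = np.round(255.0 * (cdf - cdf_min)
                   / max(cdf[-1] - cdf_min, 1)).astype(np.uint8)
    return lut[gray]
```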

3. Ear Recognition Based on Gabor Features

For feature extraction, a Gabor filter is applied to the ear images to extract spatial local features of different directions and scales. The Gabor features are of high dimension, so Kernel Fisher Discriminant Analysis is further applied for dimension reduction. Then a distance based classifier is applied for ear recognition.

3.1. Gabor Feature Extraction. The ear has its distinct features compared with the face, such as the texture features on different orientations. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. A two-dimensional Gabor function is defined as [9]:

$$g(x, y) = \exp\left(-\frac{x'^2 + y'^2}{2\sigma^2}\right)\cos\left(\frac{2\pi x'}{\lambda} + \phi\right),$$

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix},$$

$$\frac{\sigma}{\lambda} = \frac{1}{\pi}\sqrt{\frac{\ln 2}{2}}\cdot\frac{2^b + 1}{2^b - 1}. \qquad (8)$$

λ is the wavelength of the cosine factor of the Gabor filter kernel; its value is specified in pixels. θ specifies the orientation of the normal to the parallel stripes of a Gabor function; its value is specified in degrees. σ is the standard deviation of the Gaussian factor. b is the half-response spatial frequency bandwidth.

Figure 4: Gabor ear image. (a) The real part of the Gabor kernel; (b) the magnitude spectrum of the Gabor feature of the ear image on 4 orientations; (c) the magnitude spectrum of the Gabor feature of the ear image on 8 orientations.

The Gabor feature of an ear image is the convolution of the ear image and the Gabor kernel function, as shown in

$$J(x, y; m, n) = \iint I(x', y')\, g_{mn}(x - x',\, y - y')\, dx'\, dy', \qquad (9)$$

where m and n stand for the number of scales and orientations of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients. Here we keep the magnitude of the Gabor filter response as the ear feature. Figure 4 shows the Gabor features of the ear image. Figure 4(a) shows the real part of the Gabor kernel on 3 scales (λ = 20, 25, and 30, resp., from the first to the third row; θ = 0, π/4, π/2, and 3π/4, resp., from the first to the fourth column). Figure 4(b) shows the magnitude spectrum of the Gabor features corresponding to Figure 4(a), and Figure 4(c) shows the magnitude spectrum of the Gabor features corresponding to 5 scales and 8 orientations. In this way we can obtain the texture features on different orientations of the ear image. As we can see from Figure 4, the texture features of the human ear are salient enough on the four orientations in Figure 4(b), so we choose these 4 orientations for further application in Section 3.2.
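Equations (8) and (9) can be sketched in NumPy as follows. Note that this sketch uses only the real (cosine) Gabor kernel as defined in Eq. (8), so "magnitude" here is the absolute value of the real response; a complex Gabor filter would combine cosine and sine parts. The kernel size and all function names are illustrative, not from the paper.

```python
import numpy as np

def gabor_kernel(lam, theta, phi=0.0, b=1.0, size=31):
    """Real 2-D Gabor kernel of Eq. (8); sigma is derived from the
    wavelength lam and the half-response bandwidth b."""
    sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * (2**b + 1) / (2**b - 1)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xp**2 + yp**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xp / lam + phi))

def conv2_same(img, ker):
    """'Same'-size 2-D convolution via zero-padded FFT (odd-sized kernels)."""
    H, W = img.shape
    kh, kw = ker.shape
    fh, fw = H + kh - 1, W + kw - 1
    full = np.real(np.fft.ifft2(np.fft.fft2(img, (fh, fw))
                                * np.fft.fft2(ker, (fh, fw))))
    top, left = (kh - 1) // 2, (kw - 1) // 2
    return full[top:top + H, left:left + W]

def gabor_magnitude_features(img, lams=(20, 25, 30),
                             thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Convolve the image with every (scale, orientation) kernel (Eq. (9))
    and keep the magnitude of each response: m*n maps per image."""
    return np.stack([np.abs(conv2_same(img, gabor_kernel(l, t)))
                     for l in lams for t in thetas])
```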

3.2. Dimension Reduction Using KFDA. For an H × W pixel image, the dimension of the Gabor feature is H × W × m × n, so a downsampling scheme is applied for dimension reduction of the high-dimensional Gabor features. Suppose the downsampling factor is r and the Gabor coefficients are normalized to zero mean and unit variance. Then we concatenate all the coefficients G^r_{uv} to form the Gabor feature set G^r:

$$G^r = \left(\left(G^r_{00}\right)^t, \left(G^r_{01}\right)^t, \cdots, \left(G^r_{23}\right)^t\right)^t. \qquad (10)$$

Suppose that r = 64; the feature dimension of an ear image with the size of 60 × 117 will be 1317. The downsampling factor is empirically selected and set to 64 for all the experiments in this paper. So the feature dimension needs to be further reduced. In this paper, the Full Space Kernel Fisher Discriminant Analysis (FSKFDA) is used for feature reduction [10]. FSKFDA makes full use of the discriminant information in the full space of the within-class scatter, namely, the discriminant information in the null space and nonnull space of the within-class scatter. Figure 5 shows the detailed flowchart of this method.
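The downsampling-and-concatenation step of Eq. (10) might look as follows. The paper does not specify exactly how the factor r = 64 is applied; this sketch assumes a simple stride over each flattened magnitude map, which reproduces the reported dimensionality to within rounding:

```python
import numpy as np

def gabor_feature_vector(feature_maps, r=64):
    """Downsample each magnitude map by a factor r (here: keep every r-th
    pixel of the flattened map), normalize each to zero mean and unit
    variance, and concatenate into one vector, as in Eq. (10)."""
    vecs = []
    for G in feature_maps:
        v = np.asarray(G, dtype=np.float64).ravel()[::r]
        v = (v - v.mean()) / (v.std() + 1e-12)    # zero mean, unit variance
        vecs.append(v)
    return np.concatenate(vecs)
```

For a 60 × 117 image with 3 scales and 4 orientations, each of the 12 maps contributes ceil(7020/64) = 110 coefficients, for a vector of length 1320 — close to the 1317 stated in the text.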

4. Experimental Results

4.1. Ear Recognition Experiment. In this experiment we select two datasets: the USTB dataset3 [11] and a subset of UND collection J2 [12]. The USTB dataset3 contains 79 subjects, and the images of each subject are taken under orientation variation. Five images for each subject are selected. These images are 10° and 5° of left rotation and 0°, 5°, and 10° of right rotation, respectively. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 2/395 and a false alarm rate of zero on this USTB dataset3. Figure 6 shows five source images and normalized ear images of one subject.

In UND collection J2, we selected 150 subjects which possess more than 6 images per subject. Six images are selected for the experiment. There exist illumination and pose variations in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows six source images and normalized ear images of one subject.

In the training stage, we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, the test image in the probe set is projected into the feature space to get its feature vector. Then the cosine distance measure based distance classifier is applied for classification. In (11), x and y represent two Gabor feature vectors:

$$\delta_{\cos}(x, y) = \frac{-x^T y}{\|x\|\,\|y\|}. \qquad (11)$$

For the kernel function in KFDA, we apply linear kernel, polynomial kernel, RBF kernel, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis. The flowchart proceeds as follows: map the Gabor feature set X into the feature space F through the nonlinear projection φ(·) and, using the kernel trick, compute the total scatter matrix of the projected training samples; compute the nonnull space of the total scatter matrix from the eigenvectors corresponding to its nonzero eigenvalues and project the training samples onto it, giving Y; compute the between-class scatter matrix S_b^φ and the within-class scatter matrix S_w^φ of the projection Y; compute the optimal discriminant vectors in the null space of S_w^φ (project S_b^φ into the null space and take the eigenvectors corresponding to the d largest eigenvalues) and, according to the Fisher criterion, in the nonnull space of S_w^φ (take the eigenvectors corresponding to the t largest eigenvalues of (S_w^φ)^{-1} S_b^φ); combine the discriminant vectors from the null space and the nonnull space to form the final optimal discriminant projection matrix V; and extract the feature vector of the Gabor data as Z = VY.

Figure 6: Sample images from USTB dataset3.


Figure 7: Sample images of the UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

| Kernel | Parameters in kernel functions | Rank-1 rate, USTB dataset3 | Rank-1 rate, UND dataset |
| Linear: K(x, x_i) = (x · x_i) | — | 75.44% | 74.22% |
| Polynomial: K(x, x_i) = [a(x · x_i) + b]^d | a = 0.001, b = 1, d = 2 | 89.11% | 87.33% |
| RBF: K(x, x_i) = exp(−γ‖x − x_i‖²) | γ = 0.0001 | 96.46% | 94% |
| Cosine: K(x, y) = k(x, y)/√(k(x, x) k(y, y)), with k(x, y) = [a(x · y) + b]^d | a = 0.001, b = 1, d = 2 | 92.41% | 90.22% |

Table 4: Performance comparison between ear recognition with or without ear normalization (the original table also shows example enrolled ear images for each scenario).

| Dataset | Rank-1 rate without normalization | Rank-1 rate with normalization |
| USTB dataset3 | 93.58% | 96.46% |
| UND dataset | 91.11% | 94% |

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets is best with the RBF kernel.
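The leave-one-out protocol can be sketched as follows (illustrative; `dist` is any distance function, e.g. the cosine distance of Eq. (11)):

```python
import numpy as np

def leave_one_out_rank1(features, labels, dist):
    """Each sample in turn serves as the probe while all others form the
    gallery; returns the fraction of probes whose nearest gallery sample
    has the correct label (rank-1 recognition rate)."""
    feats = [np.asarray(f, dtype=float) for f in features]
    n = len(labels)
    correct = 0
    for i in range(n):
        d = [dist(feats[i], feats[j]) if j != i else np.inf
             for j in range(n)]
        correct += labels[int(np.argmin(d))] == labels[i]
    return correct / n
```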

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. We can see from the pictures in Table 4 that in the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment, and all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

Figure 8: Framework flow of the ear authentication system. (Training process: ear image dataset → ear detection → ear normalization → Gabor feature extraction → ear feature set. Online authentication: still image or video sequence → ear tracking and extraction → ear normalization → Gabor feature extraction → ear authentication.)

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU i5 M520, 2.40 GHz; RAM 2.00 GB). There is a total of 59 subjects registered; for each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process, and Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, of which the time consumption for the ear detector to extract the pure ear is about 150 ms. Figure 10

Figure 9: Ear authentication system. (a) Authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

Figure 10: ROC curve of the ear authentication system (genuine acceptance rate versus false acceptance rate).

shows the ROC curve of the authentication performance. The EER is about 4%. The experimental results show that ear biometrics is a feasible candidate for human identification.
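The EER and the ROC points can be computed from genuine and impostor score sets by sweeping an acceptance threshold; a sketch assuming match scores are distances (accept when score ≤ threshold):

```python
import numpy as np

def far_frr(genuine, impostor, t):
    """Accept when a match score (distance) is <= t."""
    far = float(np.mean(np.asarray(impostor) <= t))   # false acceptance rate
    frr = float(np.mean(np.asarray(genuine) > t))     # false rejection rate
    return far, frr

def eer(genuine, impostor, thresholds):
    """Equal error rate: FAR at the threshold where |FAR - FRR| is minimal.
    The ROC curve plots GAR = 1 - FRR against FAR over the same sweep."""
    best_t = min(thresholds,
                 key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                   - far_frr(genuine, impostor, t)[1]))
    return far_frr(genuine, impostor, best_t)[0]
```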

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing the ear part based on the long axis of the outer ear contour, and (2) extracting the discriminating Gabor features of the ear images using the Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will be focused on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for

Figure 11: Comparison of the number of weak classifiers on each layer (left ear detector, right ear detector, method in [19], and method in this paper).

the earlobe landmarks, and make the searching process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.

[2] A. Abaza, A. Ross, C. Hebert, et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.

[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.

[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.

[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.

[6] L. Yuan and F. Zhang, "Ear detection based on improved Adaboost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.

[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.

[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.

[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).

[11] http://www.ustb.edu.cn/resb/

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision & Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.


Page 4: Research Article Ear Recognition Based on Gabor Features

4 The Scientific World Journal

the original feature value space is 119892(119894) (119894 = 0 119873) where119873 is the total number of positive and negative samples Thenew feature value space is 119892(119896) (119896 = 0 119903 2119903 119899)

Step 2 In the new space search the optimum threshold 119892(119895)Then back to the original feature value space centered with119892(119895) we search the region from 119892(119895minus 119903) to 119892(119895+ 119903) to find outthe optimum threshold

Improvement 2We apply a strategywhich can reduce the falseacceptance rate bymeans of changing theweight distributionof weak classifiers

The strong classifier is composed of weighted weak clas-sifiers The smaller error rate a weak classifier possesses thebigger the weight assigned to a weak classifier The error rateis decided by the training samples The positive and negativesamples are equally important Ear detection experimentalresults show that with the traditional Adaboost algorithmthe false acceptance rate is not acceptable So we improve thetraining procedure of the weak classifiers by proposing thatthe weight distribution among the weak classifiers not only isdecided by the total error rate but also is concerned with thenegative samples

So we improve the Adaboost algorithm by including aparameter 119896119890119902119905 to give higher weights to those weak classifiersthat have lower false acceptance rate on negative samples 119902

119905

is the weight sum of those negative samples that have beenclassified correctly representing the classification ability ofthe weak classifier to negative samples 119896 is used to constrainthe impact of 119902

119905on the weight of the weak classifier The

modified Adaboost algorithm is as follows

Step 1 Given a weak classifier learning algorithm and atraining sample set (119909

1 119910

1) (119909

119899 119910

119899) where 119909

119894is the

training sample feature vector 119910119894isin minus1 +1 for negative and

positive examples respectively

Step 2 Initialize the weights1198631(119909

119894) = 1119899 119894 = 1 2 119899

Step 3 Repeat for 119905 = 1 2 119879 119889119900 the following

(1) train the weak classifier learning algorithm with theweights 119863

119905 and return the weak classifier ℎ

119905 119883 rarr

minus1 +1

(2) compute the error rate

120576

119905=

119899

sum

119894=1

119863

119905(119909

119894) [ℎ

119905(119909

119894) = 119910

119894] (3)

get the weight sum of those negative samples that areclassified correctly as

119902

119905=

119899

sum

119894=1

119863

119905(119909

119894) [119910

119894= minus1 ℎ

119905(119909

119894) = minus1] (4)

(3) get the updating parameter 120572119905and the weight param-

eter of weak classifier 120572119905minusnew

119886

119905=

1

2

ln(1 minus 120576

119905

120576

119905

)

119886

119905minusnew =

1

2

ln(1 minus 120576

119905

120576

119905

) + 119896119890

119902119905

(5)

where 119896 is a constant value The value of 119896 shouldguarantee that the new added weak classifier willreduce the upper boundary of the minimum errorrate In this paper 119896 is set to 0018

(4) Update the weights

119863

119905+1(119909

119894) =

119863

119905(119909

119894)

119885

119905

times

119890

minus120572119905 if ℎ

119905(119909

119894) = 119910

119894

119890

120572119905 if ℎ

119905(119909

119894) = 119910

119894

(6)

where 119885

119905is a normalization factor chosen so that

sum

119899

119894=1

119863

119905(119909

119894) = 1

Step 4 After 119879 loops output the strong classifier

119867(119909) = sign(119879

sum

119894=1

120572

119905minusnewℎ119905 (119909) minusTh) (7)

where Th is the threshold corresponding with the error rate

Improvement 3 We apply a new parameter called eliminationthreshold to improve the robustness of the detector andprevent overfitting

By analyzing human ear samples we find that the globalstructure of human ears is similar the shape the outercontour is almost oval all human ears have similar shapeof helix antihelix and concha These similar global featuresare helpful for the training of an earnon-ear classifier Butif we look into more details we find that each ear has itsunique features or measures on different ear componentsThe differences on details make the Adaboost based two-classclassifiermore difficult to constructHere we regard those earsamples that present special detail components as ldquodifficultsamplesrdquo These samples will get more weights during theweak classifier learning process because the weak classifierwill always try to get them classified correctly This willultimately incur overfitting In order to prevent overfittingwe apply a new parameter called ldquoelimination threshold Hwrdquoto improve the robustness of the ear detector During thetraining process the ear samples with weight greater thanHwwill be eliminated

Improvement 4 We propose the ldquosingle-ear-detection strat-egyrdquo in view of the asymmetry of left ear and right earThis strategy trains the detector with the samples of singleears After the training process we get left ear detector andright ear detector which are called ldquosingle-ear detectorrdquo Onan input image both detectors are applied to locate the earpart In our previous work both left ears and right earsimages are used for training We call this ldquodual-ear detectorrdquo

The Scientific World Journal 5

Table 2 Detection performance comparison FRR and FAR

Test dataset Number of earimages

Method in ourprevious work [6](dual-ear detector)

Proposed method in thispaper (dual-ear detector)

Proposed method in thispaper (left ear detector +right ear detector)

FRNFRR FANFAR FRNFRR FANFAR FRNFRR FANFARCAS-PEAL 166 530 636 530 212 318 212UMIST 48 121 00 00 00 121 00USTB220 220 105 523 00 523 00 105

[Figure 1 plots the false acceptance rate (vertical axis, 0 to 0.4) against the cascade layer number (horizontal axis, 0 to 18) for the single-ear detector (left) and the dual-ear detector in this paper.]

Figure 1: Performance comparison between single-ear detector and dual-ear detector.

Compared with the dual-ear detector, the single-ear detector can improve the detection performance.

Table 2 shows the experimental results on three ear image datasets. These datasets are the same as those used in our previous work [6]. In Table 2, FRN means false rejection number, FRR means false rejection rate, FAN means false acceptance number, and FAR means false acceptance rate. In Figure 11 we compare the number of weak classifiers on each layer among the dual-ear detector in [6], the dual-ear detector in this paper, and the single-ear detector in this paper. As we can see from Figure 11, compared with the dual-ear detector, the single-ear detector contains fewer weak classifiers on each layer. With fewer weak classifiers, the training time can be reduced by about 50% (CPU 3.0 GHz, RAM 2 GB; the number of positive training samples is 11000, the number of negative training samples is 5000, and the sample size is 60 × 120). The training time for the left ear detector, right ear detector, and dual-ear detector is about 11.1 hours, 10.0 hours, and 21.7 hours, respectively.

For each layer, we compare the detection performance between the single-ear detector and the dual-ear detector in this paper. For each layer, the difference in detection rate between these two detectors is very limited (not more than 1%), but they differ in false acceptance rate. As shown in Figure 1, the single-ear detector has a lower false acceptance rate, which means that it performs better on the negative samples. Figure 2(a) shows an example of detection results on the registered images of one subject. Figure 2(b) shows the extracted ear part.

2.2. Ear Normalization Using Active Shape Model. As we can see from the second image in Figure 2(b), the segmented image contains some background information other than the pure ear, so we need to further extract the pure ear for the subsequent ear feature extraction.

We apply an automatic ear normalization method based on an improved Active Shape Model (ASM) [8]. In the offline training step, ASM was used to create the Point Distribution Model using the landmark points on the ear images of the training set. This model was then applied to the test ear images to search for the outer ear contour. The final step was to normalize the ear image to a standard size and direction according to the long axis of the outer ear contour. The long axis was defined as the line crossing through the two points


Figure 2: Ear detection experimental examples.

Figure 3: Example images of ear normalization.

which have the longest distance on the ear contour. After normalization, the long axes of different ear images will be normalized to the same length and same direction. This process is shown in Figure 3(a). The size of the normalized image is 60 × 117 pixels; this aspect ratio of 1.95 is set based on statistical research.

After converting the color images to gray images, we use histogram equalization to eliminate lighting variations among different subjects. Figure 3(b) shows some example images after ear normalization.
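The photometric part of this step (the geometric alignment needs the ASM landmarks and is omitted here) can be sketched as a plain histogram equalization; the function name and the numpy-only formulation are our own, not taken from the paper.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image (H x W uint8).

    Spreads the intensity histogram over the full 0..255 range so that
    enrolled ear images from differently lit subjects become comparable.
    """
    hist = np.bincount(gray.ravel(), minlength=256)  # intensity histogram
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                        # first nonzero bin
    # classic equalization mapping: rescale the CDF to the 0..255 range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]                                 # apply lookup table
```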

3. Ear Recognition Based on Gabor Features

For feature extraction, a Gabor filter bank is applied to the ear images to extract spatial local features at different orientations and scales. The Gabor features are of high dimension, so Kernel Fisher Discriminant Analysis is further applied for dimension reduction. Then a distance based classifier is applied for ear recognition.

3.1. Gabor Feature Extraction. The ear has distinct features compared with the face, such as texture features at different orientations. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. A two-dimensional Gabor function is defined as [9]:

$$
g(x, y) = \exp\!\left(-\frac{x'^2 + y'^2}{2\sigma^2}\right)\cos\!\left(\frac{2\pi x'}{\lambda} + \phi\right),
$$
$$
\begin{bmatrix} x' \\ y' \end{bmatrix} =
\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix},
$$
$$
\frac{\sigma}{\lambda} = \frac{1}{\pi}\sqrt{\frac{\ln 2}{2}}\cdot\frac{2^b + 1}{2^b - 1}. \tag{8}
$$

λ is the wavelength of the cosine factor of the Gabor filter kernel; its value is specified in pixels. θ specifies the orientation of the normal to the parallel stripes of a Gabor function; its value is specified in degrees. σ is the standard



Figure 4: Gabor ear image: (a) the real part of the Gabor kernel; (b) the magnitude spectrum of the Gabor feature of the ear image on 4 orientations; (c) the magnitude spectrum of the Gabor feature of the ear image on 8 orientations.

deviation of the Gaussian factor, and b is the half-response spatial frequency bandwidth.
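A kernel following equation (8) can be sketched as below; this is our own illustration under assumed defaults (kernel window size, phase φ = 0, bandwidth b = 1), with σ derived from λ and b via the relation in (8).

```python
import numpy as np

def gabor_kernel(lam, theta, bandwidth=1.0, phi=0.0, size=31):
    """Real 2D Gabor kernel following equation (8).

    lam       : wavelength of the cosine factor, in pixels
    theta     : orientation of the normal to the parallel stripes, radians
    bandwidth : half-response spatial-frequency bandwidth b
    sigma is derived from lam and b via the sigma/lambda relation in (8).
    """
    sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * \
        (2.0 ** bandwidth + 1) / (2.0 ** bandwidth - 1)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates x'
    y_rot = -x * np.sin(theta) + y * np.cos(theta)   # rotated coordinates y'
    envelope = np.exp(-(x_rot ** 2 + y_rot ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_rot / lam + phi)
    return envelope * carrier
```

Sweeping `lam` over the scales and `theta` over the orientations named below yields the filter bank whose real parts are shown in Figure 4(a).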

The Gabor feature of an ear image is the convolution of the ear image and the Gabor kernel function, as shown in (9):

$$
J(x, y; m, n) = \int I(x', y')\, g_{mn}(x - x',\, y - y')\, dx'\, dy' \tag{9}
$$

where m and n stand for the scale and orientation indices of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients. Here we keep the magnitude of the Gabor filter response as the ear feature. Figure 4 shows the Gabor feature of the ear image. Figure 4(a) shows the real part of the Gabor kernel on 3 scales (λ = 20, 25, and 30, respectively, from the first to the third row; θ = 0, π/4, π/2, and 3π/4, respectively, from the first to the fourth column). Figure 4(b) shows the magnitude spectrum of the Gabor features corresponding to Figure 4(a), and Figure 4(c) shows the magnitude spectrum of the Gabor features corresponding to 5 scales and 8 orientations. In this way we can get the texture features at different orientations of the ear image. As we can see from Figure 4, the texture features of the human ear are salient enough on the four directions in Figure 4(b), so we choose these 4 orientations for further use in Section 3.2.
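The magnitude responses of equation (9) can be computed efficiently in the frequency domain. The sketch below is our own: it uses FFT-based circular convolution, which only approximates (9) near the image borders, and the function name is assumed.

```python
import numpy as np

def gabor_magnitude(image, kernels):
    """Magnitude responses |J(x, y; m, n)| of equation (9) for a bank of
    real Gabor kernels, computed with FFT-based circular convolution.

    image   : 2D float array (the normalized ear image)
    kernels : list of 2D kernel arrays, one per (scale, orientation) pair
    Returns an array of shape (len(kernels), H, W).
    """
    h, w = image.shape
    out = np.empty((len(kernels), h, w))
    fimg = np.fft.fft2(image)                        # image spectrum, reused
    for i, k in enumerate(kernels):
        fk = np.fft.fft2(k, s=(h, w))                # zero-pad kernel to image size
        out[i] = np.abs(np.fft.ifft2(fimg * fk))     # |convolution| as the feature
    return out
```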

3.2. Dimension Reduction Using KFDA. For an H × W pixel image, the dimension of the Gabor feature is H × W × m × n. So a downsampling scheme is applied for dimension reduction of the high-dimensional Gabor features. Suppose the downsampling factor is r, and the Gabor coefficients are normalized to zero mean and unit variance. Then we concatenate all the coefficients G^r_{uv} to form the Gabor feature set G^r:

$$
G^r = \left( (G^r_{00})^t,\, (G^r_{01})^t,\, \cdots,\, (G^r_{23})^t \right)^t \tag{10}
$$

Suppose that r = 64; then the feature dimension of an ear image of size 60 × 117 will be 1317. The downsampling factor is empirically selected and set to 64 for all the experiments in this paper. The feature dimension thus needs to be further reduced. In this paper the Full Space Kernel Fisher Discriminant Analysis (FSKFDA) is used for feature reduction [10]. FSKFDA makes full use of the discriminant information in the full space of the within-class scatter, namely, the discriminant information in both the null space and the nonnull space of the within-class scatter. Figure 5 shows the detailed flowchart of this method.

4. Experimental Results

4.1. Ear Recognition Experiment. In this experiment we select two datasets: the USTB dataset3 [11] and a subset of UND collection J2 [12]. The USTB dataset3 contains 79 subjects, and the images of each subject are taken under orientation variation. Five images for each subject are selected. These images are at 10° and 5° of left rotation and 0°, 5°, and 10° of right rotation, respectively. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 2/395 and a false alarm rate of zero on this USTB dataset3. Figure 6 shows the five source images and normalized ear images of one subject.

In UND collection J2 we selected the 150 subjects which possess more than 6 images per subject. Six images are selected for the experiment. There exist illumination and pose variations in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows the six source images and normalized ear images of one subject.

In the training stage we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, the test image in the probe set is projected onto the feature space to get its feature vector. Then the cosine distance based classifier is applied for classification. In (11), x and y represent two Gabor feature vectors:

$$
\delta_{\cos}(x, y) = \frac{-x^{T} y}{\|x\|\,\|y\|} \tag{11}
$$
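The distance of equation (11) and the resulting nearest-neighbor rule can be sketched in a few lines; the function names are our own, and the 1-NN decision rule is the straightforward reading of "distance based classifier" above.

```python
import numpy as np

def cosine_distance(x, y):
    """delta_cos of equation (11): negated cosine similarity, so that
    smaller values mean more similar feature vectors."""
    return -float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def nearest_neighbor(probe, gallery, labels):
    """Classify a probe feature vector by the gallery vector with the
    smallest cosine distance (1-NN recognition)."""
    d = [cosine_distance(probe, g) for g in gallery]
    return labels[int(np.argmin(d))]
```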

For the kernel function in KFDA, we apply linear, polynomial, RBF, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out


[Figure 5 flowchart, summarized: (1) map the Gabor feature set X into the feature space F via the nonlinear projection Φ(·), evaluated through the kernel matrix (kernel trick); (2) compute the total scatter matrix of the projected training samples and its nonnull space, spanned by the eigenvectors with nonzero eigenvalues; (3) project the training samples onto this nonnull space to obtain Y, and compute the between-class scatter S_b and within-class scatter S_w of the projection; (4) compute the optimal discriminant vectors in the null space of S_w (eigenvectors of S_b projected into that null space) and in the nonnull space of S_w (via the Fisher criterion); (5) combine the two sets of discriminant vectors into the final projection matrix V and extract the feature vector Z = V·Y.]

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis.

Figure 6: Sample images from USTB dataset3.


Figure 7: Sample images of UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel       K(x, y)                                       Parameters                 USTB dataset3   UND dataset
Linear       K(x, x_i) = (x · x_i)                         —                          75.44%          74.22%
Polynomial   K(x, x_i) = [a(x · x_i) + b]^d                a = 0.001, b = 1, d = 2    89.11%          87.33%
RBF          K(x, x_i) = exp(−γ ||x − x_i||²)              γ = 0.0001                 96.46%          94%
Cosine       K(x, y) = k(x, y) / √(k(x, x) k(y, y)),       a = 0.001, b = 1, d = 2    92.41%          90.22%
             with k(x, y) = [a(x · y) + b]^d

Table 4: Performance comparison between ear recognition with or without ear normalization. (The original table also shows example enrolled ear images for each condition.)

Dataset         Rank-1, without normalization   Rank-1, with normalization
USTB dataset3   93.58%                          96.46%
UND dataset     91.11%                          94%

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets is best with the RBF kernel.
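The four kernels of Table 3 can be written out directly with the parameter values the table reports; the function names are our own, and the cosine kernel is the table's normalized polynomial kernel.

```python
import numpy as np

def linear_kernel(x, y):
    """K(x, y) = (x . y)"""
    return x @ y

def polynomial_kernel(x, y, a=0.001, b=1.0, d=2):
    """K(x, y) = [a (x . y) + b]^d with Table 3's a, b, d as defaults."""
    return (a * (x @ y) + b) ** d

def rbf_kernel(x, y, gamma=0.0001):
    """K(x, y) = exp(-gamma ||x - y||^2) with Table 3's gamma as default."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def cosine_kernel(x, y, a=0.001, b=1.0, d=2):
    """Polynomial kernel normalized by its values on (x, x) and (y, y)."""
    k = lambda u, v: polynomial_kernel(u, v, a, b, d)
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))
```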

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. We can see from the pictures in Table 4 that in the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment; all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

[Figure 8 flowchart: online authentication takes a still image or video sequence through ear detection, ear tracking and extraction, ear normalization, and Gabor feature extraction to ear authentication; the training process takes the ear image dataset through ear normalization and Gabor feature extraction to build the ear feature set.]

Figure 8: Framework flow of the ear authentication system.

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU i5 M520 2.40 GHz, RAM 2.00 GB). There are a total of 59 subjects registered. For each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process. Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, of which the time for the ear detector to extract the pure ear is about 150 ms. Figure 10



Figure 9: Ear authentication system: (a) authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

[Figure 10 plots the genuine acceptance rate (vertical axis, 0 to 1) against the false acceptance rate (horizontal axis, 0 to 1).]

Figure 10: ROC curve of the ear authentication system.

shows the ROC curve of the authentication performance. The EER is about 4%. The experimental result shows that ear biometrics is a feasible candidate for human identification.
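An EER of the kind reported above is typically estimated by sweeping the decision threshold until the false acceptance and false rejection rates cross; the sketch below is our own generic estimator (assuming higher score = better match), not the system's evaluation code.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER from genuine and impostor match-score arrays.

    Sweeps the decision threshold over all observed scores and returns
    the operating point where FAR and FRR are closest (their average).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, None
    for t in thresholds:
        far = np.mean(impostor >= t)      # impostors wrongly accepted
        frr = np.mean(genuine < t)        # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            best_eer = (far + frr) / 2
    return best_eer
```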

5. Conclusion

In this paper an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing the ear part based on the long axis of the outer ear contour; and (2) extracting the discriminating Gabor features of the ear images using the Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will be focused on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for


[Figure 11 plots the number of weak classifiers (vertical axis, 0 to 50) against the layer number (horizontal axis, 0 to 18) for the left ear detector, the right ear detector, the method in [19], and the method in this paper.]

Figure 11: Comparison of the number of weak classifiers on each layer.

the earlobe landmarks, and make the searching process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.

[2] A. Abaza, A. Ross, C. Hebert et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.

[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.

[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.

[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.

[6] L. Yuan and F. Zhang, "Ear detection based on improved AdaBoost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.

[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.

[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.

[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).

[11] http://www.ustb.edu.cn/resb/

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision and Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 5: Research Article Ear Recognition Based on Gabor Features

The Scientific World Journal 5

Table 2 Detection performance comparison FRR and FAR

Test dataset Number of earimages

Method in ourprevious work [6](dual-ear detector)

Proposed method in thispaper (dual-ear detector)

Proposed method in thispaper (left ear detector +right ear detector)

FRNFRR FANFAR FRNFRR FANFAR FRNFRR FANFARCAS-PEAL 166 530 636 530 212 318 212UMIST 48 121 00 00 00 121 00USTB220 220 105 523 00 523 00 105

0 2 4 6 8 10 12 14 16 180

005

01

015

02

025

03

035

04

Layer number

False

acc

epta

nce

rate

Single-ear detector (le)Dual-ear detector in this paper

Figure 1 Performance comparison between single-ear detector and dual-ear detector

Compared with the dual-ear detector the single-ear detectorcan improve the detection performance

Table 2 shows the experimental results on three ear imagedatasets These datasets are the same as used in our previouswork [6] In Table 2 FRN means false rejection numberFRR means false rejection rate FAN means false acceptancenumber and FAR means false acceptance rate In Figure 11we compare the number of weak classifiers on each layeramong the dual-ear detector in [6] dual-ear detector in thispaper and single-ear detector in this paper As we can seefrom Figure 11 compared with dual-ear detector the single-ear detector contains less number of weak classifiers on eachlayer With less number of weak classifiers the training timecan be reduced to 50 (CPU 30GHz RAM 2G the numberof positive training samples is 11000 the number of negativetraining samples is 5000 the sample size is 60 lowast 120) Thetraining time for the left ear detector right ear detector anddual-ear detector is about 111 hours 100 hours and 217 hoursrespectively

For each layer we compare the detection performancebetween the single-ear detector and the dual-ear detector inthis paper For each layer the difference of detection rate

between these twodetectors is very limited notmore than 1but they have difference on the false acceptance rate as shownin Figure 1 the single-ear detector has lower false acceptancerate which means that it performs better on the negativesample Figure 2(a) shows an example of detection results onthe registered images of one subject Figure 2(b) shows theextracted ear part

22 Ear Normalization Using Active Shape Model As we cansee from the second figure in Figure 2(b) the segmentedimage contains some background information other than thepure ear so we need to further extract the pure ear for thefollowed ear feature extraction

We apply an automatic ear normalization method basedon improved Active Shape Model (ASM) [8] In the offlinetraining step ASM was used to create the Point DistributedModel using the landmark points on the ear images of thetraining set This model was then applied on the test earimages to search the outer ear contour The final step wasto normalize the ear image to standard size and directionaccording to the long axis of outer ear contour The longaxis was defined as the line crossing through the two points

6 The Scientific World Journal

(a) (b)

Figure 2 Ear detection experimental examples

(a)

(b)

Figure 3 Example images of ear normalization

which have the longest distance on the ear contour Afternormalization the long axes of different ear images will benormalized to the same length and same direction Thisprocess is shown in Figure 3(a) The size of the normalizedimage is 60 lowast 117 pixels This ratio of 195 is set based onstatistical research

After converting the color images to gray images weuse histogram equalization to eliminate lighting variationsamong different subjects Figure 3(b) shows some exampleimages after ear normalization

3 Ear Recognition Based on Gabor Features

For feature extraction Gabor filter is applied on the earimages to extract spatial local features of different directionsand scales The Gabor features are of high dimension soKernel Fisher Discriminant Analysis is further applied fordimension reductionThendistance based classifier is appliedfor ear recognition

31 Gabor Feature Extraction The ear has its distinct featurescompared with the face such as the texture features on dif-ferent orientations For its eminent characteristics in spatiallocal feature exaction and orientation selection Gabor basedear is presented in this paper A two-dimensional Gaborfunction is defined as [9]

119892 (119909 119910) = exp(minus119909

1015840

2

+ 119910

1015840

2

2120590

2

) cos(21205871199091015840

120582

+ 120601)

[

119909

1015840

119910

1015840] = [

cos 120579 sin 120579minus sin 120579 cos 120579] [

119909

119910

]

120590

120582

=

1

120587

radic

ln 22

2

119887

+ 1

2

119887

minus 1

(8)

120582 is the wavelength of the cosine factor of the Gaborfilter kernel its value is specified in pixels 120579 specifies theorientation of the normal to the parallel stripes of a Gaborfunction Its value is specified in degrees 120590 is the standard

The Scientific World Journal 7

(a) (b) (c)

Figure 4 Gabor ear image (a) the real part of the Gabor kernel (b) the magnitude spectrum of the Gabor feature of the ear image on 4orientations (c) the magnitude spectrum of the Gabor feature of the ear image on 8 orientations

deviation of the Gaussian factor 119887 is the half-response spatialfrequency bandwidth

The Gabor feature of an ear image is the convolution ofthe ear image and the Gabor kernel function as shown in

119869 (119909 119910119898 119899) = int I (1199091015840 1199101015840) 119892119898119899

(119909 minus 119909

1015840

119910 minus 119910

1015840

) 119889119909

1015840

119889119910

1015840

(9)

where119898 and 119899 stand for the number of scale and orientationof the Gabor wavelet For each pixel (119909 119910) there are 119898 lowast 119899

Gabor coefficients Here we keep the magnitude of the Gaborfilter as the ear feature Figure 4 shows the Gabor feature ofthe ear image Figure 4(a) shows the real part of the Gaborkernel on 3 scales (120582 = 20 25 and 30 resp from the firstto the third row 120579 = 0 1205874 1205872 and 31205874 resp from thefirst to the fourth column) Figure 4(b) shows the magnitudespectrum of the Gabor features corresponding to Figure 4(a)and 4(c) shows themagnitude spectrumof theGabor featurescorresponding to 5 scales and 8 orientations By this way wecan get the texture features on different orientations of theear image As we can see form Figure 4 the texture featuresof the human ear are salient enough on the four directionsin Figure 4(b) so we choose these 4 orientations for futureapplication in Section 32

32 Dimension Reduction Using KFDA For a 119867 lowast 119882 pixelsimage the dimension of the Gabor feature is 119867 lowast 119882 lowast 119898 lowast

119899 So the downsampling scheme is applied for dimensionreduction of the high-dimensional Gabor features Supposethe downsampling factor is 119903 and the Gabor coefficientsare normalized to zero mean and unit variance Then weconcatenate all the coefficients119866119903

119906V to form the Gabor featureset 119866119903

119866

119903

= ((119866

119903

00

)

119905

(119866

119903

01

)

119905

sdot sdot sdot (119866

119903

23

)

119905

)

119905

(10)

Suppose that 119903 = 64 the feature dimension of an earimage with the size of 60lowast117will be 1317The downsamplingfactor is empirically selected and set to 64 for all theexperiments in this paper So the feature dimension needsto be further reduced In this paper the Full Space Kernel

Fisher Discriminant Analysis (FSKFDA) is used for featurereduction [10] FSKFDA makes full use of the discriminantinformation in the full space of the within-class scatternamely the discriminant information in the null space andnonnull space of the within-class scatter Figure 5 shows thedetailed flowchart of this method

4 Experimental Results

41 Ear Recognition Experiment In this experiment weselect two datasets the USTB dataset3 [11] and a subsetof UND collection J2 [12] The USTB dataset3 contains 79subjects and the images of each subject are taken under ori-entation variation Five images for each subject are selectedThese images are 10∘ 5∘ of left rotation and 0∘ 5∘ and 10∘ ofright rotation respectively Using the ear detection methodmentioned in Section 21 we get the miss rate of 2395 andthe false alarm rate of zero on this USTB dataset3 Figure 6shows five source images and normalized ear images of onesubject

In UND collection J2, we selected 150 subjects that have more than 6 images per subject, and six images per subject are used for the experiment. There are illumination and pose variations in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows six source images and normalized ear images of one subject.

In the training stage, we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, each test image in the probe set is projected onto the feature space to get its feature vector. Then a distance classifier based on the cosine distance measure is applied for classification. In (11), x and y represent two Gabor feature vectors:

δ_cos(x, y) = −(x^T y) / (‖x‖ · ‖y‖).    (11)
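A minimal sketch of this classification step, assuming a plain nearest-neighbor rule over the projected gallery vectors (the toy vectors below are illustrative):

```python
import numpy as np

def delta_cos(x, y):
    # the negated cosine similarity of (11): smaller values mean closer matches
    return -float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def nearest_neighbor(probe, gallery, labels):
    """Distance classifier: assign the label of the gallery vector that
    minimizes the cosine distance of (11) to the probe feature vector."""
    d = [delta_cos(probe, g) for g in gallery]
    return labels[int(np.argmin(d))]

# toy gallery of two projected feature vectors
gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = ["subject A", "subject B"]
print(nearest_neighbor(np.array([0.9, 0.1]), gallery, labels))  # subject A
```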

For the kernel function in KFDA, we apply the linear kernel, polynomial kernel, RBF kernel, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out

8 The Scientific World Journal

[Figure 5 (flowchart): dimension reduction by FSKFDA. The Gabor feature set X is mapped by a nonlinear projection Φ(·); the total scatter matrix of the projected training samples is computed via the kernel trick; the samples are projected onto the non-null space of the total scatter, giving Y; the between-class and within-class scatter matrices of Y are formed; optimal discriminant vectors are computed in both the null space and the non-null space of the within-class scatter and combined into the final projection matrix V = [V1, V2]^T; the extracted feature vector is Z = VY.]

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis.
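The full-space procedure of Figure 5 extracts discriminant vectors from both the null space and the non-null space of the within-class scatter. As a rough illustration of the underlying kernel Fisher discriminant idea only, here is a much-simplified two-class, regularized variant (not the paper's full-space algorithm; the RBF bandwidth and regularizer are illustrative):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # pairwise RBF kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfd_fit(X, y, gamma=0.5, reg=1e-3):
    """Two-class kernel Fisher discriminant (regularized, Mika-style).
    A simplified stand-in for the full-space FSKFDA of Figure 5."""
    K = rbf(X, X, gamma)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]   # class kernel means
    N = reg * np.eye(len(K))                          # regularized within-class scatter
    for c in (0, 1):
        Kc = K[:, y == c]
        l = Kc.shape[1]
        N += Kc @ (np.eye(l) - np.ones((l, l)) / l) @ Kc.T
    return np.linalg.solve(N, m[0] - m[1])            # expansion coefficients alpha

def kfd_project(alpha, X_train, X_new, gamma=0.5):
    # 1-D discriminant projection of new samples
    return rbf(X_new, X_train, gamma) @ alpha

# toy demo: two well-separated 2-D clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
alpha = kfd_fit(X, y)
proj = kfd_project(alpha, X, X)
print(proj[:20].mean() > proj[20:].mean())  # class 0 projects higher: True
```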

Figure 6: Sample images from USTB dataset3.


Figure 7: Sample images of the UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel K(x, y)                                        Parameters in kernel functions   USTB dataset3   UND dataset
Linear:     K(x, x_i) = (x · x_i)                     none                             75.44%          74.22%
Polynomial: K(x, x_i) = [a(x · x_i) + b]^d            a = 0.001, b = 1, d = 2          89.11%          87.33%
RBF:        K(x, x_i) = exp(−γ ‖x − x_i‖²)            γ = 0.0001                       96.46%          94%
Cosine:     k~(x, y) = k(x, y) / √(k(x, x) k(y, y)),  a = 0.001, b = 1, d = 2          92.41%          90.22%
            with k(x, y) = [a(x · y) + b]^d
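For reference, the four kernels of Table 3 with the paper's parameter settings can be written as:

```python
import numpy as np

# The four kernel functions of Table 3, with the empirically chosen
# parameters from the paper (a = 0.001, b = 1, d = 2; gamma = 0.0001).
def k_linear(x, xi):
    return float(x @ xi)

def k_poly(x, xi, a=0.001, b=1.0, d=2):
    return (a * float(x @ xi) + b) ** d

def k_rbf(x, xi, gamma=1e-4):
    return float(np.exp(-gamma * np.sum((x - xi) ** 2)))

def k_cosine(x, y, a=0.001, b=1.0, d=2):
    # polynomial kernel normalized to unit self-similarity
    k = lambda u, v: (a * float(u @ v) + b) ** d
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))

x = np.array([1.0, 2.0, 3.0])
print(k_rbf(x, x), k_cosine(x, x))  # both equal 1 for identical inputs
```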

Table 4: Performance comparison between ear recognition with and without ear normalization. (The example ear images shown in the original table are omitted here.)

Dataset         Rank-1 rate without normalization   Rank-1 rate with normalization
USTB dataset3   93.58%                              96.46%
UND dataset     91.11%                              94%

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets is best with the RBF kernel.

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. We can see from the pictures in Table 4 that in the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment; all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

[Figure 8 (flowchart): a still image or video sequence undergoes ear detection, ear tracking and extraction, ear normalization, and Gabor feature extraction; the training process builds the ear feature set from the ear image dataset, and online authentication matches probe features against it.]

Figure 8: Framework flow of the ear authentication system.

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU: i5 M520, 2.40 GHz; RAM: 2.00 GB). A total of 59 subjects are registered; for each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process, and Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, of which about 150 ms is spent by the ear detector extracting the pure ear. Figure 10


Figure 9: Ear authentication system. (a) Authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

[Figure 10 (plot): genuine acceptance rate versus false acceptance rate.]

Figure 10: ROC curve of the ear authentication system.

shows the ROC curve of the authentication performance. The EER is about 4%. The experimental result shows that the ear biometric is a feasible candidate for human identification.
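The EER quoted above is the operating point where the false acceptance and false rejection rates coincide. A small, generic sketch of how such an EER can be estimated from genuine and impostor match scores (the scores below are illustrative, not the paper's data):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER from genuine and impostor match scores (higher score
    = more similar); it is the point on the ROC where FAR and FRR cross."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))   # impostors wrongly accepted
        frr = float(np.mean(genuine < t))     # genuine users wrongly rejected
        best = min(best, max(far, frr))       # closest approach to FAR = FRR
    return best

genuine = np.array([0.9, 0.8, 0.5, 0.6])
impostor = np.array([0.4, 0.7, 0.3, 0.2])
print(equal_error_rate(genuine, impostor))  # 0.25: FAR and FRR cross at 1 in 4
```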

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing the ear based on the long axis of the outer ear contour; and (2) extracting the discriminating Gabor features of the ear images using Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will focus on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for

[Figure 11 (plot): number of weak classifiers (0–50) on each layer (layers 0–18) for the left ear detector and the right ear detector, comparing the method in [19] with the method in this paper.]

Figure 11: Comparison of the number of weak classifiers on each layer.

the earlobe landmarks, and make the searching process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.
[2] A. Abaza, A. Ross, C. Hebert et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.
[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.
[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.
[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.
[6] L. Yuan and F. Zhang, "Ear detection based on improved AdaBoost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.
[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.
[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.
[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.
[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).
[11] http://www.ustb.edu.cn/resb/
[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html
[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.
[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.
[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.
[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision & Image Analysis, no. 3, pp. 84–95, 2005.
[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.
[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.
[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.
[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.
[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.
[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.
[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.
[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.
[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.



Figure 2: Ear detection experimental examples.

Figure 3: Example images of ear normalization.

which have the longest distance on the ear contour. After normalization, the long axes of different ear images will be normalized to the same length and direction. This process is shown in Figure 3(a). The size of the normalized image is 60 × 117 pixels; this ratio of 1.95 is set based on statistical research.

After converting the color images to gray images, we use histogram equalization to eliminate lighting variations among different subjects. Figure 3(b) shows some example images after ear normalization.
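The equalization step above can be sketched with plain numpy; this is a generic 8-bit histogram equalization (assuming a non-constant image), not code from the paper:

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization of an 8-bit grayscale image, as used here to
    reduce lighting variation between subjects (assumes a non-constant image)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[hist > 0].min()              # cdf at the first occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[gray]

# a low-contrast stand-in for a 60 x 117 gray ear image
gray = np.random.default_rng(0).integers(60, 120, (117, 60), dtype=np.uint8)
eq = hist_equalize(gray)
print(eq.min(), eq.max())  # intensities spread to the full 0-255 range
```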

3. Ear Recognition Based on Gabor Features

For feature extraction, a Gabor filter bank is applied to the ear images to extract spatial local features on different directions and scales. The Gabor features are of high dimension, so Kernel Fisher Discriminant Analysis is further applied for dimension reduction. Then a distance based classifier is applied for ear recognition.

3.1. Gabor Feature Extraction. The ear has distinct features compared with the face, such as texture features on different orientations. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. A two-dimensional Gabor function is defined as [9]:

g(x, y) = exp(−(x′² + y′²) / (2σ²)) · cos(2π x′/λ + φ),

[x′]   [ cos θ   sin θ] [x]
[y′] = [−sin θ   cos θ] [y],

σ/λ = (1/π) · √(ln 2 / 2) · (2^b + 1) / (2^b − 1).    (8)

λ is the wavelength of the cosine factor of the Gabor filter kernel; its value is specified in pixels. θ specifies the orientation of the normal to the parallel stripes of the Gabor function; its value is specified in degrees. σ is the standard


Figure 4: Gabor ear image. (a) The real part of the Gabor kernel; (b) the magnitude spectrum of the Gabor features of the ear image on 4 orientations; (c) the magnitude spectrum of the Gabor features of the ear image on 8 orientations.

deviation of the Gaussian factor. b is the half-response spatial frequency bandwidth.

The Gabor feature of an ear image is the convolution ofthe ear image and the Gabor kernel function as shown in

119869 (119909 119910119898 119899) = int I (1199091015840 1199101015840) 119892119898119899

(119909 minus 119909

1015840

119910 minus 119910

1015840

) 119889119909

1015840

119889119910

1015840

(9)

where m and n stand for the number of scales and orientations of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients.
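Equations (8) and (9) can be sketched directly in Python. The kernel support size (31 × 31) and the use of scipy's convolve2d are our assumptions, and only the real (cosine) part of the kernel is used, as in (8):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(lam, theta, b=1.0, phi=0.0, half=15):
    """Real-valued Gabor kernel of (8). sigma is tied to the wavelength lam
    via the bandwidth b: sigma/lam = (1/pi) sqrt(ln2/2) (2^b+1)/(2^b-1).
    The 31 x 31 support (half = 15) is a choice, not specified in the paper."""
    sigma = lam / np.pi * np.sqrt(np.log(2) / 2) * (2 ** b + 1) / (2 ** b - 1)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yp = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xp ** 2 + yp ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xp / lam + phi)

def gabor_magnitudes(img, lams=(20, 25, 30),
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """The convolution of (9) for every scale/orientation pair; the magnitude
    of each response map is kept as the ear feature."""
    return [np.abs(convolve2d(img, gabor_kernel(l, t), mode="same"))
            for l in lams for t in thetas]

img = np.random.rand(117, 60)       # stand-in for a normalized 60 x 117 ear
feats = gabor_magnitudes(img)
print(len(feats), feats[0].shape)   # 12 response maps, one per scale/orientation
```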


[23] W Dun and Z Mu ldquoICA-based ear recognition methodthrough nonlinear adaptive feature fusionrdquo Journal ofComputer-Aided Design and Computer Graphics vol 21no 3 pp 383ndash388 2009

12 The Scientific World Journal

[24] Y Wang Z-C Mu and H Zeng ldquoBlock-based and multi-resolutionmethods for ear recognition using wavelet transformand uniform local binary patternsrdquo in Proceedings of the 19thInternational Conference on Pattern Recognition (ICPR rsquo08)Tampa Fla USA December 2008

[25] J Zhou S Cadavid and M Abdel-Mottaleb ldquoHistograms ofcategorized shapes for 3D ear detectionrdquo in Proceedings ofthe 4th IEEE International Conference on Biometrics TheoryApplications and Systems (BTAS rsquo10) pp 278ndash282 WashingtonDC USA September 2010


The Scientific World Journal 7

Figure 4: Gabor ear image. (a) The real part of the Gabor kernel; (b) the magnitude spectrum of the Gabor features of the ear image on 4 orientations; (c) the magnitude spectrum of the Gabor features of the ear image on 8 orientations.

deviation of the Gaussian factor; b is the half-response spatial frequency bandwidth.

The Gabor feature of an ear image is the convolution of the ear image and the Gabor kernel function, as shown in

J(x, y; m, n) = ∬ I(x′, y′) g_mn(x − x′, y − y′) dx′ dy′,    (9)

where m and n stand for the number of scales and orientations of the Gabor wavelet. For each pixel (x, y) there are m × n Gabor coefficients. Here we keep the magnitude of the Gabor filter response as the ear feature. Figure 4 shows the Gabor features of the ear image. Figure 4(a) shows the real part of the Gabor kernel on 3 scales (λ = 20, 25, and 30, respectively, from the first to the third row; θ = 0, π/4, π/2, and 3π/4, respectively, from the first to the fourth column). Figure 4(b) shows the magnitude spectrum of the Gabor features corresponding to Figure 4(a), and Figure 4(c) shows the magnitude spectrum of the Gabor features corresponding to 5 scales and 8 orientations. In this way we can obtain the texture features of the ear image on different orientations. As we can see from Figure 4, the texture features of the human ear are salient enough on the four directions in Figure 4(b), so we choose these 4 orientations for further use in Section 3.2.
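The filtering in (9), with the 3 scales and 4 orientations of Figure 4(a), can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the kernel window size and the coupling σ = λ/2 are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(lam, theta, sigma, size=31):
    """Complex 2D Gabor kernel: Gaussian envelope times a complex sinusoid.
    lam: carrier wavelength; theta: orientation in radians."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * 2.0 * np.pi * xr / lam)
    return envelope * carrier

def gabor_magnitude(img, scales=(20, 25, 30), n_orient=4):
    """Stack of |img * g_mn| responses, one per (scale, orientation) pair,
    computed by zero-padded FFT convolution and cropped back to 'same' size."""
    feats = []
    for lam in scales:
        for k in range(n_orient):
            g = gabor_kernel(lam, theta=k * np.pi / n_orient, sigma=0.5 * lam)
            s0 = img.shape[0] + g.shape[0] - 1
            s1 = img.shape[1] + g.shape[1] - 1
            full = np.fft.ifft2(np.fft.fft2(img, (s0, s1)) *
                                np.fft.fft2(g, (s0, s1)))
            r0, c0 = g.shape[0] // 2, g.shape[1] // 2
            same = full[r0:r0 + img.shape[0], c0:c0 + img.shape[1]]
            feats.append(np.abs(same))               # keep the magnitude
    return np.stack(feats)                           # shape: (m*n, H, W)

ear = np.random.rand(60, 117)   # stand-in for a normalized 60 x 117 ear image
J = gabor_magnitude(ear)
print(J.shape)                  # (12, 60, 117): 3 scales x 4 orientations
```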

3.2. Dimension Reduction Using KFDA. For an H × W pixels image, the dimension of the Gabor feature is H × W × m × n, so a downsampling scheme is applied for dimension reduction of the high-dimensional Gabor features. Suppose the downsampling factor is r and the Gabor coefficients are normalized to zero mean and unit variance. Then we concatenate all the coefficients G^r_uv to form the Gabor feature set G^r:

G^r = ((G^r_00)^t, (G^r_01)^t, ⋯, (G^r_23)^t)^t.    (10)

Suppose that r = 64; the feature dimension of an ear image of size 60 × 117 will then be 1317. The downsampling factor is empirically selected and set to 64 for all the experiments in this paper. The feature dimension therefore needs to be further reduced. In this paper, Full Space Kernel Fisher Discriminant Analysis (FSKFDA) is used for feature reduction [10]. FSKFDA makes full use of the discriminant information in the full space of the within-class scatter, namely, the discriminant information in both the null space and the non-null space of the within-class scatter matrix. Figure 5 shows the detailed flowchart of this method.
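The paper does not spell out exactly how the factor r = 64 is applied. One reading that reproduces the reported 1317-dimensional vector for a 60 × 117 image (60 × 117 × 12 = 84240 coefficients; ⌈84240/64⌉ = 1317) is to keep every 64th coefficient of the concatenated, normalized responses, as in this sketch (the helper name is ours):

```python
import numpy as np

def downsampled_gabor_vector(responses, r=64):
    """responses: (m*n, H, W) stack of Gabor magnitude images.
    Normalize each response to zero mean / unit variance, concatenate,
    then keep every r-th coefficient (one plausible reading of 'factor r')."""
    chans = [((g - g.mean()) / g.std()).ravel() for g in responses]
    full = np.concatenate(chans)      # length H*W*m*n
    return full[::r]                  # downsample by r

J = np.random.rand(12, 60, 117)       # 3 scales x 4 orientations, 60 x 117 ear
v = downsampled_gabor_vector(J, r=64)
print(v.shape)                        # (1317,) = ceil(60*117*12 / 64)
```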

4. Experimental Results

4.1. Ear Recognition Experiment. In this experiment we select two datasets: the USTB dataset3 [11] and a subset of UND collection J2 [12]. The USTB dataset3 contains 79 subjects, and the images of each subject are taken under orientation variation. Five images are selected for each subject: 10° and 5° of left rotation, and 0°, 5°, and 10° of right rotation, respectively. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 2/395 and a false alarm rate of zero on this USTB dataset3. Figure 6 shows the five source images and the normalized ear images of one subject.

From UND collection J2 we selected the 150 subjects that possess more than 6 images per subject, and six images per subject are used in the experiment. There exist illumination and pose variations in this subset. Using the ear detection method mentioned in Section 2.1, we get a miss rate of 13/900 and a false alarm rate of 15/900 on this UND dataset. Figure 7 shows the six source images and the normalized ear images of one subject.

In the training stage, we use the Gabor + KFDA method to get the feature space of the gallery set. In the test stage, each test image in the probe set is projected into the feature space to get its feature vector. Then a distance classifier based on the cosine distance measure is applied for classification. In (11), x and y represent two Gabor feature vectors:

δ_cos(x, y) = −(x^T y) / (‖x‖ ‖y‖).    (11)
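A minimal nearest-neighbor classifier using the distance of (11) can be sketched as follows; the two-vector gallery and the subject labels are illustrative stand-ins:

```python
import numpy as np

def cosine_distance(x, y):
    """delta_cos(x, y) = -(x . y) / (||x|| ||y||), as in (11); smaller = closer."""
    return -float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def nearest_neighbor(probe, gallery, labels):
    """Assign the probe the label of the gallery feature at minimal distance."""
    d = [cosine_distance(probe, g) for g in gallery]
    return labels[int(np.argmin(d))]

gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["subject_A", "subject_B"]
result = nearest_neighbor(np.array([0.9, 0.1]), gallery, labels)
print(result)  # subject_A
```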

For the kernel function in KFDA, we apply linear, polynomial, RBF, and cosine kernel functions, as shown in Table 3. We conduct a leave-one-out

Figure 5: Dimension reduction using the Full Space Kernel Fisher Discriminant Analysis. The flowchart proceeds as follows:

(1) Map the Gabor feature set X into the feature space F through the nonlinear projection φ(·). All inner products are evaluated through the kernel trick; for example, for the RBF kernel, k(x_i, x_j) = exp(−‖x_i − x_j‖² / σ²).
(2) In F, compute the total scatter matrix of the projected training samples, S_t^φ = Σ_{i=1..C} Σ_{j=1..N_i} (φ(x_ij) − u^φ)(φ(x_ij) − u^φ)^T = Φ_t Φ_t^T, where Φ_t = [φ(x_11) − u^φ, …, φ(x_{C N_C}) − u^φ] and u^φ is the global mean of the projected samples.
(3) Compute the non-null space of S_t^φ, spanned by the eigenvectors V_t corresponding to its nonzero eigenvalues, and project the training samples onto it: Y = V_t^T Φ_t. Through the kernel trick, both Φ_t^T Φ_t and Y are computed from the centered kernel matrix K.
(4) Compute the between-class scatter matrix S_b^φ and the within-class scatter matrix S_w^φ of the projection Y.
(5) Compute the null space Ψ⊥ of S_w^φ (spanned by the eigenvectors V_sw⊥ corresponding to its zero eigenvalues) and the non-null space Ψ (spanned by the eigenvectors V_sw corresponding to its nonzero eigenvalues).
(6) In the null space Ψ⊥: project S_b^φ into Ψ⊥, that is, S̄_b = (V_sw⊥)^T S_b^φ V_sw⊥, and compute the eigenvectors V_sb corresponding to its d largest eigenvalues; the optimal discriminant vectors in the null space are V_1 = V_sw⊥ V_sb.
(7) In the non-null space Ψ: project S_b^φ and S_w^φ into Ψ, that is, S̄_b = (V_sw)^T S_b^φ V_sw and S̄_w = (V_sw)^T S_w^φ V_sw, and compute the eigenvectors V_wb corresponding to the t largest eigenvalues of S̄_w^{−1} S̄_b, according to the Fisher criterion; the optimal discriminant vectors in the non-null space are V_2 = V_sw V_wb.
(8) Combine the two sets into the final optimal discriminant projection matrix V = [V_1, V_2]^T and extract the feature vector of the Gabor data as Z = V Y.
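As a rough illustration of the kernel discriminant machinery, one might write classical regularized KFDA in the dual form: maximize aᵀMa / aᵀNa over dual coefficients a, where M and N are the between- and within-class scatters expressed through the kernel matrix. This is the plain KFDA baseline with an RBF kernel, not the full-space variant of Figure 5, and all names and parameters below are our own illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kfda_fit(X, y, gamma, dim=1, reg=1e-6):
    """Regularized dual KFDA: between-class scatter M, within-class scatter N."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    mu = K.mean(axis=1, keepdims=True)        # mean kernel column
    M = np.zeros((n, n))
    N = np.zeros((n, n))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        nc = len(idx)
        Kc = K[:, idx]
        mu_c = Kc.mean(axis=1, keepdims=True)
        M += nc * (mu_c - mu) @ (mu_c - mu).T                       # between
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T  # within
    N += reg * np.eye(n)                      # regularize ill-conditioned N
    evals, evecs = np.linalg.eig(np.linalg.solve(N, M))
    order = np.argsort(-evals.real)
    return X, evecs[:, order[:dim]].real, gamma

def kfda_transform(model, Xnew):
    Xtr, W, gamma = model
    return rbf_kernel(Xnew, Xtr, gamma) @ W

# Toy check: two well-separated Gaussian classes in 5-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(6.0, 1.0, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
Z = kfda_transform(kfda_fit(X, y, gamma=0.05), X)[:, 0]
m0, m1 = Z[y == 0].mean(), Z[y == 1].mean()
print("class means along the first discriminant:", round(float(m0), 3), round(float(m1), 3))
```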

Figure 6: Sample images from USTB dataset3.

Figure 7: Sample images from the UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel K(x, y)                                            Parameters               USTB dataset3   UND dataset
Linear:      K(x, x_i) = (x · x_i)                        —                        75.44%          74.22%
Polynomial:  K(x, x_i) = [a(x · x_i) + b]^d               a = 0.001, b = 1, d = 2  89.11%          87.33%
RBF:         K(x, x_i) = exp(−γ‖x − x_i‖²)                γ = 0.0001               96.46%          94%
Cosine:      K(x, y) = k(x, y) / √(k(x, x) k(y, y)),
             with k(x, y) = [a(x · y) + b]^d              a = 0.001, b = 1, d = 2  92.41%          90.22%

Table 4: Performance comparison between ear recognition with and without ear normalization (the original table also shows sample enrolled ear images for each scenario).

Dataset         Rank-1 rate, without normalization   Rank-1 rate, with normalization
USTB dataset3   93.58%                               96.46%
UND dataset     91.11%                               94%

identification experiment. The averaged rank-1 recognition performance when adopting each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters in the kernel functions are empirically selected in this paper. As we can see from Table 3, the recognition rate on both datasets is best with the RBF kernel.
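For reference, the four kernels of Table 3 are straightforward to write down; the defaults below are the paper's empirical parameter choices, while the sample vectors are our own:

```python
import numpy as np

# The four kernels of Table 3, with the paper's parameter choices as defaults.
def linear(x, xi):
    return float(x @ xi)

def polynomial(x, xi, a=0.001, b=1.0, d=2):
    return (a * float(x @ xi) + b) ** d

def rbf(x, xi, gamma=1e-4):
    return float(np.exp(-gamma * np.sum((x - xi) ** 2)))

def cosine_poly(x, y, a=0.001, b=1.0, d=2):
    """Cosine-normalized polynomial kernel: k(x, y) / sqrt(k(x, x) k(y, y))."""
    k = lambda u, v: polynomial(u, v, a, b, d)
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))

x = np.array([1.0, 2.0])
y = np.array([2.0, 1.0])
print(linear(x, y), round(rbf(x, y), 4), round(cosine_poly(x, y), 4))
```

Note that the cosine-normalized kernel always returns 1 for identical inputs, which makes it insensitive to the overall magnitude of the feature vectors.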

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As we can see from Table 4, ear recognition with ear normalization performs better than ear recognition without ear normalization. We can see from the pictures in Table 4 that, in the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear locations and the pure ear size may differ among the enrolled images. On the other hand, in the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment, so all of the enrolled ears have the same size, which helps to enhance the recognition performance.

4.2. Ear Authentication Experiment. We have designed an ear authentication system with the proposed method. Figure 8 shows the framework flow of the ear authentication system.

Figure 8: Framework flow of the ear authentication system. Training process: ear image dataset → ear normalization → Gabor feature extraction → ear feature set. Online authentication: still image or video sequence → ear detection → ear tracking and extraction → ear normalization → Gabor feature extraction → ear authentication.

Figure 9(a) shows the application scenario. The system is composed of a camera (resolution 640 × 480) and a computer (CPU: i5 M520, 2.40 GHz; RAM: 2.00 GB). There are a total of 59 subjects registered; for each subject, 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process, and Figure 9(c) shows the ear authentication process. The time consumption for authenticating a subject is about 180 ms, of which the time for the ear detector to extract the pure ear is about 150 ms. Figure 10 shows the ROC curve of the authentication performance; the EER is about 4%. The experimental results show that the ear biometric is a feasible candidate for human identification.

Figure 9: Ear authentication system. (a) Authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

Figure 10: ROC curve (genuine acceptance rate versus false acceptance rate) of the ear authentication system.
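The reported ~4% EER is the operating point where the false acceptance rate and the false rejection rate coincide. A simple way to estimate it from genuine/impostor score samples can be sketched as follows; the Gaussian score distributions here are synthetic stand-ins, not the system's actual scores:

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: sweep thresholds and find the point where the
    false-accept rate (impostors at/above threshold) is closest to the
    false-reject rate (genuines below threshold)."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)      # false acceptance rate
        frr = np.mean(genuine < t)        # false rejection rate
        best = min(best, max(far, frr))   # EER ~ min over thresholds
    return best

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 500)   # synthetic match scores
impostor = rng.normal(0.5, 0.1, 500)  # synthetic non-match scores
score = eer(genuine, impostor)
print(round(float(score), 3))
```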

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are the following: (1) automatically detecting the ear part from the input image and normalizing it based on the long axis of the outer ear contour and (2) extracting the discriminating Gabor features of the ear images using Full Space Kernel Fisher Discriminant Analysis. Experimental results show that we can achieve automatic ear recognition based on 2D images with the proposed method.

Our future work will focus on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of the earlobe localization, generate deliberate models for the earlobe landmarks, and make the searching process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and the real-time performance of the proposed method.

Figure 11: Comparison of the number of weak classifiers on each layer (left ear detector and right ear detector; the method in [19] versus the method in this paper).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.
[2] A. Abaza, A. Ross, C. Hebert et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.
[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.
[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.
[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.
[6] L. Yuan and F. Zhang, "Ear detection based on improved AdaBoost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.
[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.
[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.
[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.
[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).
[11] USTB ear database, http://www.ustb.edu.cn/resb/.
[12] UND biometrics datasets, http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html.
[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.
[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.
[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.
[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision & Image Analysis, no. 3, pp. 84–95, 2005.
[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.
[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.
[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.
[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.
[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.
[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.
[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.
[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.
[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.


[11] httpwwwustbeducnresb[12] httpwww3ndedusimcvrlCVRLData Setshtml[13] M Burge andW Burger ldquoEar biometrics in machine visionrdquo in

Proceedings of the 21stWorkshop of the Australian Association forPattern Recognition 1997

[14] B Moreno A Sanchez and J F Velez ldquoOn the use of outerear images for personal identification in security applicationsrdquoin Proceedings of the IEEE 33rd Annual International CarnahanConference on Security Technology pp 469ndash476 October 1999

[15] Z C Mu L Yuan and Z G Xu ldquoShape and structuralfeature based ear recognitionrdquo in Proceedings of InternationalConference on Advances in Biometric Personal Authenticationvol 3338 of LNCS pp 663ndash670 Springer 2004

[16] M Choras ldquoEar biometrics based on geometrical featureextractionrdquo Electronics Letters on Computer Vision amp ImageAnalysis no 3 pp 84ndash95 2005

[17] D J Hurley M S Nixon and J N Carter ldquoForce field featureextraction for ear biometricsrdquo Computer Vision and ImageUnderstanding vol 98 no 3 pp 491ndash512 2005

[18] L Nanni and A Lumini ldquoA multi-matcher for ear authentica-tionrdquo Pattern Recognition Letters vol 28 no 16 pp 2219ndash22262007

[19] J D Bustard and M S Nixon ldquoToward unconstrained earrecognition from two-dimensional imagesrdquo IEEE Transactionson Systems Man and Cybernetics Part A vol 40 no 3 pp 486ndash494 2010

[20] B Arbab-Zavar and M S Nixon ldquoOn guided model-basedanalysis for ear biometricsrdquo Computer Vision and Image Under-standing vol 115 no 4 pp 487ndash502 2011

[21] K Chang K W Bowyer S Sarkar and B Victor ldquoComparisonand combination of ear and face images in appearance-basedbiometricsrdquo IEEE Transactions on Pattern Analysis andMachineIntelligence vol 25 no 9 pp 1160ndash1165 2003

[22] L Yuan Z-C Mu Y Zhang and K Liu ldquoEar recognition usingimproved non-negative matrix factorizationrdquo in Proceedings ofthe 18th International Conference on Pattern Recognition (ICPRrsquo06) pp 501ndash504 Hong Kong August 2006

[23] W Dun and Z Mu ldquoICA-based ear recognition methodthrough nonlinear adaptive feature fusionrdquo Journal ofComputer-Aided Design and Computer Graphics vol 21no 3 pp 383ndash388 2009

12 The Scientific World Journal

[24] Y Wang Z-C Mu and H Zeng ldquoBlock-based and multi-resolutionmethods for ear recognition using wavelet transformand uniform local binary patternsrdquo in Proceedings of the 19thInternational Conference on Pattern Recognition (ICPR rsquo08)Tampa Fla USA December 2008

[25] J Zhou S Cadavid and M Abdel-Mottaleb ldquoHistograms ofcategorized shapes for 3D ear detectionrdquo in Proceedings ofthe 4th IEEE International Conference on Biometrics TheoryApplications and Systems (BTAS rsquo10) pp 278ndash282 WashingtonDC USA September 2010

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 9: Research Article Ear Recognition Based on Gabor Features

The Scientific World Journal

Figure 7: Sample images of the UND dataset.

Table 3: Rank-1 recognition rate of the proposed method when adopting different kernel functions.

Kernel                                                      Parameters               Rank-1 rate (USTB dataset 3 / UND dataset)
Linear:     K(x, x_i) = (x · x_i)                           —                        75.44% / 74.22%
Polynomial: K(x, x_i) = [a(x · x_i) + b]^d                  a = 0.001, b = 1, d = 2  89.11% / 87.33%
RBF:        K(x, x_i) = exp(−γ‖x − x_i‖²)                   γ = 0.0001               96.46% / 94%
Cosine:     K~(x, y) = K(x, y) / √(K(x, x) K(y, y)),        a = 0.001, b = 1, d = 2  92.41% / 90.22%
            with K(x, y) = [a(x · y) + b]^d

Table 4: Performance comparison between ear recognition with and without ear normalization.

Dataset          Rank-1 rate without normalization   Rank-1 rate with normalization
USTB dataset 3   93.58%                              96.46%
UND dataset      91.11%                              94%

identification experiment. The averaged rank-1 recognition performance for each kernel function is listed in Table 3 (the feature dimension is reduced to 100). The parameters of the kernel functions are selected empirically. As Table 3 shows, the RBF kernel gives the best recognition rate on both datasets.
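The four kernels of Table 3 can be written as short functions; a minimal NumPy sketch (function names are ours; the parameter defaults are the empirical values from Table 3, and the cosine kernel is the polynomial kernel normalized by its self-similarities):

```python
import numpy as np

def linear_kernel(x, y):
    # K(x, y) = (x . y)
    return np.dot(x, y)

def poly_kernel(x, y, a=0.001, b=1.0, d=2):
    # K(x, y) = [a (x . y) + b]^d
    return (a * np.dot(x, y) + b) ** d

def rbf_kernel(x, y, gamma=0.0001):
    # K(x, y) = exp(-gamma ||x - y||^2)
    return np.exp(-gamma * np.sum((x - y) ** 2))

def cosine_kernel(x, y, a=0.001, b=1.0, d=2):
    # Polynomial kernel normalized by the self-similarities K(x, x) and K(y, y).
    k = lambda u, v: poly_kernel(u, v, a, b, d)
    return k(x, y) / np.sqrt(k(x, x) * k(y, y))

# Example: kernel values between two (illustrative) Gabor feature vectors.
x = np.array([0.2, 0.5, 0.1])
y = np.array([0.1, 0.4, 0.3])
print(linear_kernel(x, y), rbf_kernel(x, y), cosine_kernel(x, y))
```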

Table 4 compares the rank-1 recognition results using the RBF kernel in two scenarios: with ear normalization and without ear normalization. As Table 4 shows, ear recognition with normalization outperforms ear recognition without normalization. In the "ear images without normalization" scenario, although the extracted ear images are set to the same size, the ear location and the size of the pure ear region may differ among the enrolled images. In the "ear images with normalization" scenario, only the pure ear part is extracted and used for enrollment, so all enrolled ears have the same size, which helps to enhance recognition performance.
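The gain in Table 4 comes from enrolling only the pure ear region at a fixed size. A dependency-free sketch of that crop-and-rescale step (the function name and fixed output size are illustrative; a real system would use proper interpolation and the paper's long-axis alignment):

```python
import numpy as np

def normalize_ear(image, bbox, out_size=(60, 100)):
    """Crop the detected ear region and rescale it to a fixed size.

    bbox = (top, left, height, width). Nearest-neighbor resampling keeps
    the sketch dependency-free.
    """
    t, l, h, w = bbox
    crop = image[t:t + h, l:l + w]
    out_w, out_h = out_size
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return crop[rows][:, cols]

# Example: every enrolled ear ends up with the same shape.
img = np.arange(200 * 200).reshape(200, 200)
ear = normalize_ear(img, bbox=(40, 60, 120, 80))
print(ear.shape)
```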

4.2. Ear Authentication Experiment. We have designed an ear authentication system based on the proposed method. Figure 8 shows the framework of the ear authentication system.

Figure 8: Framework of the ear authentication system. Training process: ear detection, ear tracking and extraction, ear normalization, and Gabor feature extraction are applied to the ear image dataset to build the ear feature set. Online authentication: the same detection, normalization, and Gabor feature extraction steps are applied to a still image or video sequence, followed by ear authentication against the feature set.

Figure 9(a) shows the application scenario. The system is composed of a camera (640 × 480 resolution) and a computer (Intel Core i5 M520 CPU, 2.40 GHz, 2.00 GB RAM). A total of 59 subjects are registered, and for each subject 6 images are stored in the gallery set. Figure 9(b) shows the ear enrollment process, and Figure 9(c) shows the ear authentication process. Authenticating a subject takes about 180 ms, of which about 150 ms is spent by the ear detector extracting the pure ear.

Figure 9: Ear authentication system. (a) Authentication scenario; (b) ear enrollment interface; (c) ear authentication interface.

Figure 10: ROC curve of the ear authentication system (genuine acceptance rate versus false acceptance rate).

Figure 10 shows the ROC curve of the authentication performance. The EER is about 4%. The experimental results show that ear biometrics is a feasible candidate for human identification.
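An ROC curve and EER like those in Figure 10 are obtained by sweeping a decision threshold over genuine and impostor match scores; a minimal sketch with synthetic distance scores (the score distributions are illustrative, not the system's):

```python
import numpy as np

def roc_and_eer(genuine, impostor):
    """Sweep a distance threshold; return (FAR, GAR) arrays and the EER.

    genuine/impostor hold match *distances*: accept when distance <= threshold.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])  # false acceptance rate
    gar = np.array([(genuine <= t).mean() for t in thresholds])   # genuine acceptance rate
    frr = 1.0 - gar
    i = np.argmin(np.abs(far - frr))      # EER: where FAR crosses FRR
    eer = (far[i] + frr[i]) / 2.0
    return far, gar, eer

rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.1, 500)    # genuine pairs: small distances
impostor = rng.normal(0.8, 0.1, 500)   # impostor pairs: large distances
far, gar, eer = roc_and_eer(genuine, impostor)
print(f"EER = {eer:.3f}")
```

Plotting `gar` against `far` gives the ROC curve; the EER is the operating point where the two error rates are equal.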

5. Conclusion

In this paper, an ear recognition system based on 2D images is proposed. The main contributions of the proposed method are: (1) automatically detecting the ear part from the input image and normalizing it based on the long axis of the outer ear contour, and (2) extracting discriminating Gabor features from the ear images using Full Space Kernel Discriminant Analysis. Experimental results show that automatic ear recognition from 2D images can be achieved with the proposed method.

Our future work will focus on two aspects: (1) in the ear normalization stage, we need to improve the accuracy of earlobe localization, build more deliberate models for the earlobe landmarks, and make the search process less dependent on the initial model shape; and (2) in the ear authentication stage, we need a larger dataset to verify the matching accuracy and real-time performance of the proposed method.

Figure 11: Comparison of the number of weak classifiers on each layer (left and right ear detectors; method in [19] versus method in this paper).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61300075 and the Fundamental Research Funds for China Central Universities under Grant no. FRF-SD-12-017A.

References

[1] A. Pflug and C. Busch, "Ear biometrics: a survey of detection, feature extraction and recognition methods," IET Biometrics, vol. 1, no. 2, pp. 114–129, 2012.

[2] A. Abaza, A. Ross, C. Hebert, et al., "A survey on ear biometrics," ACM Transactions on Embedded Computing Systems, vol. 9, no. 4, article 39, 2010.

[3] S. M. S. Islam, M. Bennamoun, R. A. Owens, and R. Davis, "A review of recent advances in 3D ear and expression-invariant face biometrics," ACM Computing Surveys, vol. 44, no. 3, article 14, 2012.

[4] L. Yuan, Z. C. Mu, and F. Yang, "A review of recent advances in ear recognition," in Proceedings of the 6th Chinese Conference on Biometric Recognition, vol. 7098 of LNCS, pp. 252–259, Springer, Beijing, China, December 2011.

[5] A. Iannarelli, Ear Identification, Forensic Identification Series, Paramount Publishing Company, Fremont, Calif, USA, 1989.

[6] L. Yuan and F. Zhang, "Ear detection based on improved AdaBoost algorithm," in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 2414–2417, Baoding, China, July 2009.

[7] A. Abaza, C. Hebert, and M. A. F. Harrison, "Fast learning ear detection for real-time surveillance," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 1–6, Washington, DC, USA, September 2010.

[8] L. Yuan and Z. C. Mu, "Ear recognition based on 2D images," in Proceedings of the 1st International Conference on Biometrics: Theory, Applications and Systems, Washington, DC, USA, 2007.

[9] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572–581, 2004.

[10] L. Yuan, Z.-C. Mu, and H. Zeng, "Partially occluded ear recognition based on local features," Journal of University of Science and Technology Beijing, vol. 32, no. 4, pp. 530–535, 2010 (Chinese).

[11] http://www.ustb.edu.cn/resb/

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision and Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 10: Research Article Ear Recognition Based on Gabor Features

10 The Scientific World Journal

(a) (b)

(c)

Figure 9 Ear authentication system (a) authentication scenario (b) ear enrollment interface (c) ear authentication interface

0 02 04 06 08 10

01

02

03

04

05

06

07

08

09

1

False acceptance rate

Gen

uine

acc

epta

nce

rate

Figure 10 ROC curve of the ear authentication system

shows the ROC curve of the authentication performanceTheEER rate is about 4The experimental result shows that earbiometric is a feasible candidate for human identification

5 Conclusion

In this paper an ear recognition system based on 2D imagesis proposedThemain contributions of the proposed methodare the following (1) automatically detecting the ear part

from the input image and normalizing the ear part basedon the long axis of the outer ear contour and (2) extractingthe discriminating Gabor features of the ear images usingthe Full Space Kernel Discriminant Analysis Experimentalresults show that we can achieve automatic ear recognitionbased on 2D images with the proposed method

Our future work will be focused on two aspects (1) inthe ear normalization stage we need to improve the accuracyof the earlobe localization generate deliberate models for

The Scientific World Journal 11

0

5

10

15

20

25

30

35

40

45

50

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Num

ber o

f w

eak

clas

sifier

s

Layer number

Le ear detector

Right ear detector

Method in [19]

Method in this paper

Figure 11 Comparison of the number of weak classifiers on eachlayer

the earlobe landmarks and make the searching process lessdependent on the initial model shape and (2) in the earauthentication stage we need a larger dataset to testify thematching accuracy and the real-time performance of theproposed method

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Thiswork is supported by theNational Natural Science Foun-dation of China under Grant no 61300075 and FundamentalResearch Funds for China Central Universities under Grantno FRF-SD-12-017A

References

[1] A Pflug and C Busch ldquoEar biometrics a survey of detectionfeature extraction and recognition methodsrdquo IET Biometricsvol 1 no 2 pp 114ndash129 2012

[2] A Abaza A Ross C Hebert et al ldquoA survey on ear biometricsrdquoACM Transactions on Embedded Computing Systems vol 9 no4 article 39 2010

[3] S M S Islam M Bennamoun R A Owens and R Davis ldquoAreview of recent advances in 3d ear and expression-invariantface biometricsrdquo ACMComputing Surveys vol 44 no 3 article14 2012

[4] L Yuan Z C Mu and F Yang ldquoA review of recent advances inear recognitionrdquo in Proceedings of the 6th Chinese Conference onBiometric Recognition vol 7098 of LNCS pp 252ndash259 SpringerBeijing China December 2011

[5] A Iannerelli Ear Identification Forensic Identification SeriesParamount Publishing Company Fremont Calif USA 1989

[6] L Yuan and F Zhang ldquoEar detection based on improvedadaboost algorithmrdquo in Proceedings of the International Con-ference on Machine Learning and Cybernetics pp 2414ndash2417Baoding China July 2009

[7] A Abaza C Hebert and M A F Harrison ldquoFast learning eardetection for real-time surveillancerdquo in Proceedings of the 4thIEEE International Conference on Biometrics Theory Applica-tions and Systems (BTAS rsquo10) pp 1ndash6 Washington DC USASeptember 2010

[8] L Yuan and Z C Mu ldquoEar recognition based on 2D imagesrdquoin Proceedings of 1st International Conference on BiometricsTheory Applications and Systems Washington DC USA 2007

[9] C Liu ldquoGabor-based kernel PCAwith fractional power polyno-mial models for face recognitionrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 26 no 5 pp 572ndash5812004

[10] L Yuan Z-CMu andHZeng ldquoPartially occluded ear recogni-tion based on local featuresrdquo Journal of University of Science andTechnology Beijing vol 32 no 4 pp 530ndash535 2010 (Chinese)

[11] httpwwwustbeducnresb[12] httpwww3ndedusimcvrlCVRLData Setshtml[13] M Burge andW Burger ldquoEar biometrics in machine visionrdquo in

Proceedings of the 21stWorkshop of the Australian Association forPattern Recognition 1997

[14] B Moreno A Sanchez and J F Velez ldquoOn the use of outerear images for personal identification in security applicationsrdquoin Proceedings of the IEEE 33rd Annual International CarnahanConference on Security Technology pp 469ndash476 October 1999

[15] Z C Mu L Yuan and Z G Xu ldquoShape and structuralfeature based ear recognitionrdquo in Proceedings of InternationalConference on Advances in Biometric Personal Authenticationvol 3338 of LNCS pp 663ndash670 Springer 2004

[16] M Choras ldquoEar biometrics based on geometrical featureextractionrdquo Electronics Letters on Computer Vision amp ImageAnalysis no 3 pp 84ndash95 2005

[17] D J Hurley M S Nixon and J N Carter ldquoForce field featureextraction for ear biometricsrdquo Computer Vision and ImageUnderstanding vol 98 no 3 pp 491ndash512 2005

[18] L Nanni and A Lumini ldquoA multi-matcher for ear authentica-tionrdquo Pattern Recognition Letters vol 28 no 16 pp 2219ndash22262007

[19] J D Bustard and M S Nixon ldquoToward unconstrained earrecognition from two-dimensional imagesrdquo IEEE Transactionson Systems Man and Cybernetics Part A vol 40 no 3 pp 486ndash494 2010

[20] B Arbab-Zavar and M S Nixon ldquoOn guided model-basedanalysis for ear biometricsrdquo Computer Vision and Image Under-standing vol 115 no 4 pp 487ndash502 2011

[21] K Chang K W Bowyer S Sarkar and B Victor ldquoComparisonand combination of ear and face images in appearance-basedbiometricsrdquo IEEE Transactions on Pattern Analysis andMachineIntelligence vol 25 no 9 pp 1160ndash1165 2003

[22] L Yuan Z-C Mu Y Zhang and K Liu ldquoEar recognition usingimproved non-negative matrix factorizationrdquo in Proceedings ofthe 18th International Conference on Pattern Recognition (ICPRrsquo06) pp 501ndash504 Hong Kong August 2006

[23] W Dun and Z Mu ldquoICA-based ear recognition methodthrough nonlinear adaptive feature fusionrdquo Journal ofComputer-Aided Design and Computer Graphics vol 21no 3 pp 383ndash388 2009

12 The Scientific World Journal

[24] Y Wang Z-C Mu and H Zeng ldquoBlock-based and multi-resolutionmethods for ear recognition using wavelet transformand uniform local binary patternsrdquo in Proceedings of the 19thInternational Conference on Pattern Recognition (ICPR rsquo08)Tampa Fla USA December 2008

[25] J Zhou S Cadavid and M Abdel-Mottaleb ldquoHistograms ofcategorized shapes for 3D ear detectionrdquo in Proceedings ofthe 4th IEEE International Conference on Biometrics TheoryApplications and Systems (BTAS rsquo10) pp 278ndash282 WashingtonDC USA September 2010

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: Research Article Ear Recognition Based on Gabor Features

The Scientific World Journal 11

0

5

10

15

20

25

30

35

40

45

50

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Num

ber o

f w

eak

clas

sifier

s

Layer number

Le ear detector

Right ear detector

Method in [19]

Method in this paper

Figure 11 Comparison of the number of weak classifiers on eachlayer

the earlobe landmarks and make the searching process lessdependent on the initial model shape and (2) in the earauthentication stage we need a larger dataset to testify thematching accuracy and the real-time performance of theproposed method

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

Thiswork is supported by theNational Natural Science Foun-dation of China under Grant no 61300075 and FundamentalResearch Funds for China Central Universities under Grantno FRF-SD-12-017A

References

[1] A Pflug and C Busch ldquoEar biometrics a survey of detectionfeature extraction and recognition methodsrdquo IET Biometricsvol 1 no 2 pp 114ndash129 2012

[2] A Abaza A Ross C Hebert et al ldquoA survey on ear biometricsrdquoACM Transactions on Embedded Computing Systems vol 9 no4 article 39 2010

[3] S M S Islam M Bennamoun R A Owens and R Davis ldquoAreview of recent advances in 3d ear and expression-invariantface biometricsrdquo ACMComputing Surveys vol 44 no 3 article14 2012

[4] L Yuan Z C Mu and F Yang ldquoA review of recent advances inear recognitionrdquo in Proceedings of the 6th Chinese Conference onBiometric Recognition vol 7098 of LNCS pp 252ndash259 SpringerBeijing China December 2011

[5] A Iannerelli Ear Identification Forensic Identification SeriesParamount Publishing Company Fremont Calif USA 1989

[6] L Yuan and F Zhang ldquoEar detection based on improvedadaboost algorithmrdquo in Proceedings of the International Con-ference on Machine Learning and Cybernetics pp 2414ndash2417Baoding China July 2009

[7] A Abaza C Hebert and M A F Harrison ldquoFast learning eardetection for real-time surveillancerdquo in Proceedings of the 4thIEEE International Conference on Biometrics Theory Applica-tions and Systems (BTAS rsquo10) pp 1ndash6 Washington DC USASeptember 2010

[8] L Yuan and Z C Mu ldquoEar recognition based on 2D imagesrdquoin Proceedings of 1st International Conference on BiometricsTheory Applications and Systems Washington DC USA 2007

[9] C Liu ldquoGabor-based kernel PCAwith fractional power polyno-mial models for face recognitionrdquo IEEE Transactions on PatternAnalysis and Machine Intelligence vol 26 no 5 pp 572ndash5812004

[10] L Yuan Z-CMu andHZeng ldquoPartially occluded ear recogni-tion based on local featuresrdquo Journal of University of Science andTechnology Beijing vol 32 no 4 pp 530ndash535 2010 (Chinese)

[11] http://www.ustb.edu.cn/resb/.

[12] http://www3.nd.edu/~cvrl/CVRL/Data_Sets.html.

[13] M. Burge and W. Burger, "Ear biometrics in machine vision," in Proceedings of the 21st Workshop of the Australian Association for Pattern Recognition, 1997.

[14] B. Moreno, A. Sanchez, and J. F. Velez, "On the use of outer ear images for personal identification in security applications," in Proceedings of the IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469–476, October 1999.

[15] Z. C. Mu, L. Yuan, and Z. G. Xu, "Shape and structural feature based ear recognition," in Proceedings of the International Conference on Advances in Biometric Personal Authentication, vol. 3338 of LNCS, pp. 663–670, Springer, 2004.

[16] M. Choras, "Ear biometrics based on geometrical feature extraction," Electronic Letters on Computer Vision and Image Analysis, no. 3, pp. 84–95, 2005.

[17] D. J. Hurley, M. S. Nixon, and J. N. Carter, "Force field feature extraction for ear biometrics," Computer Vision and Image Understanding, vol. 98, no. 3, pp. 491–512, 2005.

[18] L. Nanni and A. Lumini, "A multi-matcher for ear authentication," Pattern Recognition Letters, vol. 28, no. 16, pp. 2219–2226, 2007.

[19] J. D. Bustard and M. S. Nixon, "Toward unconstrained ear recognition from two-dimensional images," IEEE Transactions on Systems, Man, and Cybernetics, Part A, vol. 40, no. 3, pp. 486–494, 2010.

[20] B. Arbab-Zavar and M. S. Nixon, "On guided model-based analysis for ear biometrics," Computer Vision and Image Understanding, vol. 115, no. 4, pp. 487–502, 2011.

[21] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor, "Comparison and combination of ear and face images in appearance-based biometrics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1160–1165, 2003.

[22] L. Yuan, Z.-C. Mu, Y. Zhang, and K. Liu, "Ear recognition using improved non-negative matrix factorization," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 501–504, Hong Kong, August 2006.

[23] W. Dun and Z. Mu, "ICA-based ear recognition method through nonlinear adaptive feature fusion," Journal of Computer-Aided Design and Computer Graphics, vol. 21, no. 3, pp. 383–388, 2009.

12 The Scientific World Journal

[24] Y. Wang, Z.-C. Mu, and H. Zeng, "Block-based and multi-resolution methods for ear recognition using wavelet transform and uniform local binary patterns," in Proceedings of the 19th International Conference on Pattern Recognition (ICPR '08), Tampa, Fla, USA, December 2008.

[25] J. Zhou, S. Cadavid, and M. Abdel-Mottaleb, "Histograms of categorized shapes for 3D ear detection," in Proceedings of the 4th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS '10), pp. 278–282, Washington, DC, USA, September 2010.
