Gait Recognition System Tailored for Arab Costume of the Gulf Region

Tamer Shanableh, Department of Computer Science and Engineering, American University of Sharjah, [email protected]

Khaled Assaleh, Department of Electrical Engineering, American University of Sharjah, [email protected]

Layla Al-Hajjaj, Computer Science Program, American University of Sharjah, [email protected]

AbdulWahab Kabani, Computer Science Program, American University of Sharjah, [email protected]

Abstract - Existing work on gait recognition focuses on casual (western) costumes and is hence not suitable for the Gulf region, where long gowns are worn by both men and women. This paper proposes a gait recognition solution that is suitable for both Gulf costumes and casual costumes. The solution is based on computing an adaptive image prediction between consecutive images. The resultant predictions are accumulated into one image and transformed using either the Discrete Cosine Transform (DCT) or the Radon transform. The feature vectors of the gait are computed from such transformed images. Feature modeling based on polynomial networks follows. The proposed solution is tested on a dataset of around 100 participants with mixed genders and mixed costumes. The proposed system yields impressive classification rates approaching 100% accuracy.

Keywords - Human identification; computer vision; motion analysis; gait biometric

1. INTRODUCTION

In biometrics, people are identified based on their characteristics such as voice, iris, fingerprint, hand geometry and face. It has been reported that such identification can also be based on the way that a human walks [1]. Such a biometric is referred to as gait. Basically, video cameras are used to acquire video sequences of individuals, who are then recognized based on the way they walk. Gait recognition has a number of attractive characteristics when compared to existing biometrics [2]. For example, it does not require physical contact as required by fingerprint or hand recognition. It also does not require high image resolution or special image acquisition conditions as required by face recognition, for instance. Lastly, it is non-intrusive and can recognize people at a distance without their knowledge or direct involvement.

In 2005, a research group from the University of South Florida issued a human gait recognition challenge [3]. The group compiled a dataset of video sequences with different covariates such as camera viewing angle, walking surface type, carrying conditions (where a person can be carrying a briefcase, for example), shoe type (where walking in heels, for instance, will affect the gait) and video capturing time. For the latter, most video sequences were acquired in a second round six months after the first shooting. The dataset contains data for experiments of increasing difficulty levels. A total of 1,870 video sequences were acquired from a total of 122 participants. The dataset is publicly available and is used for benchmarking new solutions in gait recognition.


However, this dataset and consequently all the previously proposed solutions are based on a western style of dress code. Such solutions are likely to fail when applied to the Gulf style of dress code, which includes white/black robes, head gear/scarves and veils. This is because the feature extraction methods exploit the gait cycle, which depends on the movements of the legs.

This paper proposes an efficient feature extraction and classification scheme for gait recognition for the Arab costume of the Gulf region. The proposed scheme is shown to work for western costume as well.

In general, gait recognition based on video sequences is divided into a number of steps:

1. Segmentation: this step entails identifying the pixel locations belonging to the subject to be identified. The segmented images are binarized, resulting in what is known as "silhouette frames". One approach to this segmentation is through background modeling and separation. For instance, in [3] it was proposed to extract bounding boxes of the subjects and then compute the mean vector and covariance matrix of background pixels. The pixels of the bounding box containing the subject are then classified into either foreground or background using Mahalanobis distances from the background model. The distances are then classified into foreground or background based on their likelihoods, which are estimated using an Expectation Maximization (EM) procedure. Variants of this segmentation algorithm are also reported in the literature. For instance, in [5] the pixels' Mahalanobis distances from the background model were thresholded into either foreground or background without the need for computing likelihoods through the EM algorithm (a minimal sketch of this variant is given at the end of this section). Other approaches include extracting principal components of silhouette boundary vector variations [6] or Fourier descriptors [7].

2. Feature extraction: this step can be preceded by what is known as gait cycle estimation, where a gait cycle is the set of images starting from the right heel touching the ground all the way to where it touches the ground again. This information can be used to segment the sequences into cycles and then align them using various techniques such as Population Hidden Markov Models (PHMMs) [4]. Features are then based on averaged subsequences [8]. Other feature extraction techniques are reported for characterizing gait dynamics such as stride, stride speed, length and cadence [9]. Others used static body information such as ratios of various body parts [9]. Feature vectors composed of the amplitude of the spectrum of key silhouette frames are also reported in [2]. More recently, dynamic features from averaged silhouette cycles are extracted by Gabor-based Discriminative Common Vectors (DCV) analysis [10]. Likewise, [11] proposed the use of Kernel-based Principal Component Analysis (KPCA) to extract gait features. In other approaches, the human body components are studied separately and feature vectors are extracted accordingly [12].

3. Feature modeling and similarity measures: here the extracted features are compared against the stored entries in the dataset. Reported measures include Euclidean distances in the Linear Discriminant Analysis (LDA) space [4, 13], symmetric group theoretic distances [14] and the normalized Euclidean distance between the projection centroids of two gait sequences [15]. Dynamics of the gait sequences can also be modeled by Hidden Markov Models (HMMs) as reported in [16].

The existing work on gait recognition is, however, based on identifying people in western or casual costumes, namely pants and shorts. Such solutions are not suitable for identifying individuals in the Gulf region of the Middle East. The local dress code in the Gulf region for males includes robes and head gear. Likewise, the dress code for females includes robes and head scarves or face veils. Examples of such costumes are shown in Figure 1. It is clear that the gap between the legs of the individuals is concealed, hence all the techniques based on gait cycles do not apply. Note that this problem can also be present if the individuals are dressed in long skirts; hence the problem is not specific to the Gulf costumes. We propose a solution that applies to both casual and Gulf costumes based on accumulating the adaptive prediction errors of consecutive images, as shall be explained in Section 3.

The rest of this paper is organized as follows. The compiled dataset and data acquisition procedure are described in Section 2. Feature extraction and motion representation for casual and Gulf costumes are presented in Section 3. The classification problem is then formalized using polynomial networks in Section 4. Experimental results are presented and discussed in Section 5 prior to arriving at the final conclusion in Section 6.
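As an illustrative aside, the following minimal Python sketch shows the thresholded-Mahalanobis segmentation variant of [5] referred to in step 1 above. The function and parameter names are our own, and the single shared colour covariance is a simplifying assumption; [3] additionally models pixels individually and refines the labels with EM-estimated likelihoods.

```python
import numpy as np

def segment_silhouette(frame, bg_mean, bg_cov, threshold=3.0):
    """Binarize a frame into a "silhouette frame" by thresholding each
    pixel's Mahalanobis distance from a background colour model, in the
    spirit of the variant in [5] (names and defaults are ours).

    frame   : (H, W, 3) current image
    bg_mean : (H, W, 3) per-pixel mean of background training frames
    bg_cov  : (3, 3) colour covariance, shared across pixels here for
              simplicity
    """
    diff = (frame.astype(np.float64) - bg_mean).reshape(-1, 3)
    cov_inv = np.linalg.inv(bg_cov)
    # Squared Mahalanobis distance of every pixel from the model.
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    # Pixels far from the background model are declared foreground.
    return (np.sqrt(d2) > threshold).reshape(frame.shape[:2])
```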

2. DATASET DESCRIPTION

Although the purpose of this research is to devise a method of gait recognition for individuals in Gulf costumes, we nonetheless need to verify that the proposed solution is also applicable to recognizing individuals in casual costumes (mainly with pants or shorts). As such, the same system can be deployed for identifying individuals with mixed costumes.

Similar to the setup reported in [3], the camera was positioned 10 meters away from the walking subjects. However, a single digital camera was used with one view only. The video capturing took place in the rotunda of one of our lecture buildings. Examples of participants with different costumes are shown in Figure 1.


Fig. 1. Example participants with different costumes. (a) Female with Gulf costume (b) Male with Gulf costume (c) Female with casual costume (d) Male with casual costume.

A total of 103 subjects participated in the data collection. All participants are undergraduate students of the same age group, between 18 and 22 years old. Out of the 103 subjects, 53 participated in Gulf costumes (33 females and 20 males). Another 50 subjects participated in casual costumes (11 females and 39 males).

Each participant was asked to walk naturally across the rotunda back and forth a total of 8 times, of which 4 instances were captured walking from right to left and 4 in the other direction.

3. FEATURE EXTRACTION

The existing literature on gait recognition heavily depends on the extraction of gait cycles. Such a cycle can be defined as the sequence of images from the point at which the right heel of an individual touches the ground until it touches the ground again. The extraction of gait cycles depends on observing the gap between the two legs in a video sequence that starts from a certain position of the legs and ends after a full cycle at the same position. Unfortunately, with the Gulf costume, the gap between the legs is not at all apparent; hence a different approach to feature extraction must be sought.

We propose to extract the motion of an individual and accumulate it into one or two images. Feature vectors that describe the motion can then be extracted from such images. Note that, as mentioned in the dataset description, the subjects are walking in front of a static background that contains various stationary objects. Hence the preprocessing step of object extraction and segmentation is not needed in this case. On the other hand, in the absence of a stationary background, the preprocessing shall entail identifying the pixel locations belonging to the subject. The segmented images are usually binarized, resulting in what is known as "silhouette frames". One approach to this segmentation is through background modeling and separation as described above [3].

In this paper we base our feature extraction on techniques used in digital video coding, where we compute the forward prediction error between successive images. That is, each image is subtracted from its immediate previous image. The resultant prediction error can be thresholded to filter out image differences that did not result from the motion of the individual. The threshold can be set to the 50th or the 75th percentile of the non-zero pixels of the prediction error image. The thresholded prediction error images can then be accumulated into one image, which we refer to as the Accumulated Predictions (AP) image. For a better representation of the individual's motion, the forward prediction error between two consecutive images can be represented using two images: one for the positive differences and the other for the negative differences. Each prediction error image is then thresholded separately. In this case we end up with two AP images, which can be referred to as the positive AP and negative AP images.

Note that covered and uncovered background will appear as motion relative to the individual and will thus be represented as such in the AP images. To minimize the appearance of such relative motion, we propose to use either the previous image or the future image in computing the prediction error for a given image. The prediction in these cases is referred to as forward prediction and backward prediction, respectively. The decision between the use of forward or backward prediction can be based on computing the Sum of Absolute Differences (SAD) of the prediction error; the prediction source that minimizes the SAD is selected. The result of implementing this technique is shown in Figure 2. The figure shows that the appearance of the background objects is now minimized, as desired.
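The following Python sketch summarizes the computation just described. The function name and the choice of accumulating binary threshold masks (rather than thresholded magnitudes, which the text leaves open) are our assumptions:

```python
import numpy as np

def accumulated_predictions(frames, percentile=75):
    """Build the positive and negative Accumulated Prediction (AP)
    images of a grayscale sequence, choosing forward or backward
    prediction per frame by minimum SAD. A sketch of Section 3; the
    function name and the accumulation of binary masks are ours."""
    frames = [f.astype(np.int32) for f in frames]
    pos_ap = np.zeros_like(frames[0])
    neg_ap = np.zeros_like(frames[0])
    for t in range(1, len(frames) - 1):
        fwd = frames[t] - frames[t - 1]  # forward prediction error
        bwd = frames[t] - frames[t + 1]  # backward prediction error
        # Keep the prediction source with the smaller SAD so that
        # covered/uncovered background contributes less relative motion.
        err = fwd if np.abs(fwd).sum() <= np.abs(bwd).sum() else bwd
        # Split into positive and negative differences and threshold
        # each at the chosen percentile of its non-zero magnitudes.
        for part, ap in ((np.maximum(err, 0), pos_ap),
                         (np.maximum(-err, 0), neg_ap)):
            nz = part[part > 0]
            if nz.size:
                ap += (part >= np.percentile(nz, percentile)).astype(np.int32)
    return pos_ap, neg_ap
```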

Fig. 2. AP images of a motion sequence with adaptive forward/backward prediction. (a) Negative AP image (b) Positive AP image.

Once the AP images are computed, the next step is to extract spatial domain features. Following the authors' work on sign language recognition as reported in [17], these features can be based on either the two-dimensional Discrete Cosine Transform (DCT) coefficients or the Radon transform coefficients. An important property of the DCT is its energy compaction: most of the image energy is concentrated in the top left corner of the transformed image. This fact is utilized in image and video coding, where such low-frequency coefficients are quantized with a finer quantization step size in comparison with the high-frequency content. Therefore, in terms of spatial feature extraction, we propose to represent our AP image using the top left DCT coefficients. These coefficients can be selected in a zig-zag scanning manner, starting from the top left corner and progressing inwards towards the bottom right corner. The scanning process can select a predefined number of coefficients. This number is known as the DCT cutoff, which can be selected empirically. The process of DCT transformation followed by zig-zag scanning is also known as zonal coding. Note that the zonal coding is applied to both the negative and positive AP images. The resultant vectors of the zonal coding process are then interleaved to generate the final feature vector.

As mentioned previously, the second approach to feature extraction is based on the Radon transformation. Essentially, the AP images are projected at a given angle; the result is a one-dimensional curve that reflects the integral of pixel lines across the direction of the projection angle. Typically the projection is done on either the horizontal or the vertical image axis. To smooth and reduce the size of the computed image projection, a one-dimensional DCT transformation can be used, followed by ideal low-pass filtering with a given frequency cutoff. Hence the projection can be represented using a few low-frequency coefficients.
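Both feature extractors might be sketched as follows (Python with SciPy; the function names and the axis convention for "horizontal" versus "vertical" projection are our assumptions):

```python
import numpy as np
from scipy.fftpack import dct

def zigzag_indices(n):
    """Zig-zag scan order of an n-by-n grid, top-left outwards."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def zonal_features(ap_image, cutoff=60):
    """Zonal coding: 2-D DCT of an AP image, keeping the first
    `cutoff` low-frequency coefficients in zig-zag order."""
    c = dct(dct(ap_image.astype(float), axis=0, norm='ortho'),
            axis=1, norm='ortho')
    n = min(c.shape)
    return np.array([c[i, j] for i, j in zigzag_indices(n)[:cutoff]])

def projection_features(ap_image, axis=1, cutoff=60):
    """Projection of an AP image onto one axis (the Radon-style
    horizontal/vertical projection), smoothed and shortened by a 1-D
    DCT acting as an ideal low-pass filter with the given cutoff."""
    profile = ap_image.astype(float).sum(axis=axis)
    return dct(profile, norm='ortho')[:cutoff]
```

For the final feature vector, the zonal (or projection) features of the positive and negative AP images would then be interleaved element by element, as described above.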

4. CLASSIFICATION

A polynomial network provides a parameterized nonlinear map which nonlinearly expands a sequence of feature vectors to a higher dimensionality and maps them to a target output sequence. Training of a polynomial network consists of two main stages. The first stage involves expanding the training feature vectors via polynomial expansion with the aim of improving the separation of the different classes in the expanded feature vector space. The second stage entails computing the weights of the polynomial network applied to the expanded feature vectors. Polynomial networks have been used successfully in biomedical signal separation [18].

In a polynomial network setting, the gait recognition problem can be formulated as follows. The response variables, which represent the $M$ individuals of the training dataset (where each individual can be referred to as a class in this case), are denoted by $M$ vectors, i.e. $Q = \{\mathbf{q}_m \mid m = 1, 2, \ldots, M\}$. For a given class of feature vectors, say class $i$, the corresponding $\mathbf{q}$ vector will contain binary values, with '1's indicating individuals belonging to class $i$ and '0's for the rest of the individuals or participants.

The feature vector at repetition $j$ of class $m$ is composed of $l$ feature variables and is denoted by $\mathbf{x}_j^m = [x_j^m(0)\; x_j^m(1)\; \ldots\; x_j^m(l-1)]$. Consequently, the feature vectors in the training set are denoted by the matrix $X$, where:

$$X = \begin{bmatrix} x_1^1(0) & x_1^1(1) & \cdots & x_1^1(l-1) \\ x_2^1(0) & x_2^1(1) & \cdots & x_2^1(l-1) \\ \vdots & \vdots & & \vdots \\ x_J^M(0) & x_J^M(1) & \cdots & x_J^M(l-1) \end{bmatrix} \qquad (1)$$

We wish to perform a nonlinear mapping between the feature vector matrix $X$ and the response variables $Q = \{\mathbf{q}_m \mid m = 1, 2, \ldots, M\}$. In polynomial networks, the dimensionality of the feature vectors in matrix $X$ is first expanded into an $r$-th order. The dimensionality expansion can be achieved by a reduced multivariate polynomial expansion as proposed in [16]. The expansion of $X$ into the $r$-th order is denoted by the matrix $P \in \mathbb{R}^{n \times k}$, where $k$ is the dimensionality of the expanded feature vector, which is defined as [16]:

$$k = 1 + r + l(2r - 1) \qquad (2)$$

The mapping between $P$ and $Q$ is then achieved by using a least-squared error objective criterion:

$$\mathbf{w}_m^{opt} = \arg\min_{\mathbf{w}} \|P\mathbf{w} - \mathbf{q}_m\|_2 \qquad (3)$$

where $\|\cdot\|_2$ denotes the $l_2$ norm. Minimizing the objective function results in:

$$\mathbf{w}_m^{opt} = (P^T P)^{-1} P^T \mathbf{q}_m \qquad (4)$$

Note that the model weights are computed using a non-iterative least squares method, which is a clear advantage when it comes to computational complexity. Consequently, the training process results in a set of weights $\{\mathbf{w}_m^{opt} \mid m = 1, 2, \ldots, M\}$. To classify a feature vector representing the walk of an individual, we compute the inner product of its expanded feature vector with each of the weight vectors. This results in a score sequence $s_m,\; m = 1, 2, \ldots, M$. The class label of the feature vector is then determined by $\arg\max_m(s_m)$.
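A compact Python sketch of the training and classification procedure follows. The exact RM expansion term set is our reading of the reduced multivariate polynomial model (after Toh et al.), chosen so that the expanded dimensionality matches Eq. (2); we also solve the least-squares problem with a numerically safer routine than the explicit inverse of Eq. (4):

```python
import numpy as np

def rm_expand(X, r=2):
    """Reduced multivariate (RM) polynomial expansion of order r.
    For l input features this produces k = 1 + r + l(2r - 1) terms,
    matching Eq. (2); the exact term set follows the common RM model
    of Toh et al. and is our reading of the expansion used here."""
    n, l = X.shape
    s = X.sum(axis=1, keepdims=True)               # x_1 + ... + x_l
    cols = [np.ones((n, 1))]                       # bias term
    cols += [s ** k for k in range(1, r + 1)]      # r sum-power terms
    cols += [X ** k for k in range(1, r + 1)]      # l*r element-wise powers
    cols += [X * s ** (k - 1) for k in range(2, r + 1)]  # l(r-1) cross terms
    return np.hstack(cols)

def train(X, labels, r=2):
    """Least-squares weights of Eq. (4), one weight vector per class."""
    P = rm_expand(X, r)
    Q = np.eye(labels.max() + 1)[labels]           # one-hot targets q_m
    # Solve min ||P w - q_m||_2 for all m at once; lstsq avoids forming
    # (P^T P)^{-1} explicitly.
    W, *_ = np.linalg.lstsq(P, Q, rcond=None)
    return W

def classify(x, W, r=2):
    """Score s_m is the inner product with each weight vector; the
    class label is argmax_m(s_m)."""
    scores = rm_expand(x[None, :], r) @ W
    return int(np.argmax(scores))
```

Using `lstsq` rather than forming $(P^T P)^{-1}$ directly also sidesteps the ill-conditioning at high expansion orders noted in Section 5.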

5. EXPERIMENTAL RESULTS

In the following results we validate the proposed feature extraction schemes on Gulf costumes and compare the results against those obtained on casual costumes, which are similar to what is reported in the literature. Common to all of the results to follow, we report the gait classification rate obtained from training and testing the system with feature vectors of different lengths according to the DCT cutoff, as explained in Section 3 above. Unless otherwise stated, the classification results are obtained using a least-squares classifier without polynomial expansion.

We start the experimental results section by comparing different approaches to spatial domain feature extraction. In Figure 3, the female and male datasets are mixed; this also includes mixing different directions of walking (i.e. from left to right and vice versa). For each individual, 75% of the walking samples are used for training and 25% for testing; hence the testing data is unseen by the training model. The figure shows that the Radon transformation with horizontal projections of the AP images results in the highest classification rates. Intuitively this makes sense, because the horizontal projection represents the shape of the accumulated motion from both the front and the rear of the body of an individual. At a DCT cutoff of 60 coefficients, the classification results are very close to 100%. This should not come as a surprise, as similar results have been reported in [3]. On the other hand, the figure shows that the Radon transformation with vertical projection of the AP images results in very poor classification. This is because such projections can only describe the height of the individual and the sinusoidal-like motion of the head during the walk; clearly such features are not enough for identifying an individual. Interleaving the feature vectors of both aforementioned projections, though, results in an acceptable outcome, as shown in the figure. Feature extraction using zonal coding resulted in moderate classification results. This can be justified by the fact that the AP images contain plenty of high frequencies; hence describing such images whilst discarding most of the high-frequency content through zonal coding does not result in accurate and precise feature vectors. Lastly, it is worth mentioning that the above discussion applies equally to both Gulf and casual costumes. However, in the latter scenario the classification scores resulting from the horizontal projections are a bit more accurate. This is not a surprise, as the Gulf costume conceals some details of the body motion.
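For concreteness, the evaluation protocol (a per-individual 75/25 split with a sweep over DCT cutoffs) might be scripted as follows. This hypothetical sketch reuses the `train` and `classify` helpers above with a first-order expansion as a stand-in for the plain least-squares classifier; everything beyond the split ratio and the cutoff sweep is our assumption:

```python
import numpy as np

def evaluate(features, labels, cutoffs=range(10, 61, 10), seed=0):
    """Sweep the DCT cutoff: for each individual, 75% of the walking
    samples train the least-squares model and the unseen 25% test it."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    rates = {}
    for cutoff in cutoffs:
        X = features[:, :cutoff]                 # truncate feature vectors
        train_idx, test_idx = [], []
        for m in np.unique(labels):              # per-individual split
            idx = rng.permutation(np.where(labels == m)[0])
            n_train = int(round(0.75 * len(idx)))
            train_idx += list(idx[:n_train])
            test_idx += list(idx[n_train:])
        W = train(X[train_idx], labels[train_idx], r=1)
        preds = [classify(x, W, r=1) for x in X[test_idx]]
        rates[cutoff] = float(np.mean(preds == labels[test_idx]))
    return rates
```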

Fig. 3. Comparison between various spatial feature extraction approaches (classification rate versus length of feature vector): horizontal projection, vertical projection, interleaved projection and zonal coding. (a) Gulf costume (b) Casual costume.

In Figure 4 we interleave the two directions of walking in terms of system training and testing. That is, we train the system on one direction of walking and test it on the other direction. This experiment is carried out in a cross-validation manner and the average classification result is reported. In this experiment, 50% of the feature vectors belong to each direction of walking; hence the training-to-testing ratio is set as such. The spatial feature extraction approach is the Radon transformation with horizontal projections. Clearly, as shown in the figure, classification based on a different direction of walking is less accurate; nonetheless, at a DCT cutoff of 60, a classification rate of around 90% is achieved. The figure also shows that higher classification results are obtained when training and testing are based on a second order reduced model polynomial expansion. At low DCT cutoffs the enhancement is quite evident. However, at higher dimensionality, and due to the low number of training samples per individual (4 in this experiment), the expanded feature vector matrix becomes ill-conditioned, thus affecting the matrix inverse operation in the computation of the model weights. Again, the same discussion applies to both the Gulf and the casual costume.

Fig. 4. Classification rates using different training approaches based on the direction of walking: training on mixed directions of walking, training on a different direction of walking, and training with 2nd order reduced model expansion (classification rate versus length of feature vector). (a) Gulf costume (b) Casual costume.

6. CONCLUSION

This paper proposed a solution for gait recognition with non-western costumes. In particular, the work was concerned with Gulf costumes for both genders. The proposed solution was also tested on casual costumes and was shown to work equally well. As such, the same system can be deployed for identifying individuals with mixed costumes without the need for a customized solution for a particular costume. The paper proposed to accumulate the prediction errors of consecutive video images using an adaptive forward/backward prediction scheme. This was needed to counteract the relative motion of the background objects. Once the motion is accumulated into one or two images, spatial feature extraction is applied. It was shown that the Radon transformation with horizontal image projections results in precise and concise feature vectors that are linearly separable. The experimental results revealed that the proposed solution achieves accurate classification rates and works equally well for both of the aforementioned costumes.

REFERENCES


[1] S.V. Stevenage, M.S. Nixon, and K. Vince, "Visual analysis of gait as a cue to identity," Applied Cognitive Psychology, vol. 13, pp. 513-526, Dec. 1999.
[2] G. Zhao, R. Chen, G. Liu, and H. Li, "Amplitude spectrum-based gait recognition," Proc. Int'l Conf. Automatic Face and Gesture Recognition, pp. 23-28, 2004.
[3] S. Sarkar, P. Jonathon Phillips, Z. Liu, I. Robledo, P. Grother, and K.W. Bowyer, "The human id gait challenge problem: data sets, performance, and analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(2), pp. 162-177, Feb. 2005.
[4] Z. Liu and S. Sarkar, "Improved gait recognition by gait dynamics normalization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(6), June 2006.


[5] P.J. Phillips, S. Sarkar, I. Robledo, P. Grother, and K. Bowyer, "The gait identification challenge problem: data sets and baseline algorithm," Proc. Int'l Conf. Pattern Recognition, pp. 385-388, 2002.
[6] L. Wang, T. Tan, H. Ning, and W. Hu, "Silhouette analysis-based gait recognition for human identification," IEEE Trans. Pattern Analysis and Machine Intelligence, 25(12), pp. 1505-1518, Dec. 2003.
[7] S.D. Mowbray and M.S. Nixon, "Automatic gait recognition via Fourier descriptors of deformable objects," Proc. Conf. Audio Visual Biometric Person Authentication, pp. 566-573, 2003.
[8] Z. Liu and S. Sarkar, "Simplest representation yet for gait recognition: averaged silhouette," Proc. Int'l Conf. Pattern Recognition, vol. 4, pp. 211-214, 2004.
[9] A. Johnson and A. Bobick, "A multi-view method for gait recognition using static body parameters," Proc. Int'l Conf. Audio and Video-Based Biometric Person Authentication, pp. 301-311, 2001.
[10] X. Yang, Y. Zhou, T. Zhang, G. Shu, and J. Yang, "Gait recognition based on dynamic region analysis," Signal Processing, 88(9), pp. 2350-2356, Sept. 2008.
[11] J. Wu, J. Wang, and L. Liu, "Feature extraction via KPCA for classification of gait patterns," Human Movement Science, 26(3), pp. 393-411, June 2007.
[12] N. Boulgouris and Z. Chi, "Human gait recognition based on matching of body components," Pattern Recognition, 40(6), pp. 1763-1770, June 2007.
[13] J. Han and B. Bhanu, "Statistical feature fusion for gait-based human recognition," Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 842-847, June 2004.
[14] Y. Liu, R. Collins, and Y. Tsin, "Gait sequence analysis using frieze patterns," Proc. European Conf. Computer Vision, pp. 657-671, May 2002.
[15] L. Wang, T. Tan, H. Ning, and W. Hu, "Silhouette analysis-based gait recognition for human identification," IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12), Dec. 2003.
[16] M.-H. Cheng, M.-F. Ho, and C.-L. Huang, "Gait analysis for human identification through manifold learning and HMM," Pattern Recognition, 41(8), pp. 2541-2553, Aug. 2008.
[17] T. Shanableh and K. Assaleh, "Telescopic vector composition and polar accumulated motion residuals for feature extraction in Arabic Sign Language recognition," EURASIP Journal on Image and Video Processing, vol. 2007, Article ID 87929, 10 pages, 2007. doi:10.1155/2007/87929.
[18] K. Assaleh and H. Al-Nashash, "A novel technique for the extraction of fetal ECG using polynomial networks," IEEE Transactions on Biomedical Engineering, 52(6), pp. 1148-1152, June 2005.
