

Research Article
Patch Classifier of Face Shape Outline Using Gray-Value Variance with Bilinear Interpolation

Seokhoon Kang 1 and Seonwoon Kim 2

1 Department of Embedded Systems Engineering, University of Incheon, Incheon 406-772, Republic of Korea
2 TecAce Solutions Inc., Seoul 153-782, Republic of Korea

Correspondence should be addressed to Seokhoon Kang; hana@incheon.ac.kr

Received 12 January 2015; Accepted 26 February 2015

Academic Editor: Wei Wu

Copyright © 2016 S. Kang and S. Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes a method to classify whether the landmarks that form the outline of a face shape model, in shape-model-based approaches, are properly fitted to feature points. Through this method, the reliability of the information can be determined in the process of managing and using the shape. The face image enlarged by the image sensor is processed with bilinear interpolation. We use the gray-value variance, which reflects the texture of skin, to classify landmarks. The gray-value variance is calculated over the skin area of a patch constructed around each landmark. To make the system robust to pose, we project the face image onto the frontal face shape model, and areas with insufficient pixel information are filled in with bilinear interpolation. When the fitting is done properly, a low variance is calculated over the smooth skin texture. For a misaligned landmark, on the other hand, the variance is high because of the background and the gradient along the facial contour. We propose a classifier using this characteristic and, as a result, classify the true and false landmarks with an accuracy of 83.32% through the patch classifier.

1. Introduction

The approaches that apply shape models to faces match the shape model to objects in an image through shapes or textures, and they have shown good results with various methods [1–3]. In addition, research is in progress that considers real-world illumination, the impact of the background, and the case where part of a face is occluded [4, 5]. However, there are still known limitations in the techniques that use the shape model. Representatively, the feature detection and tracking techniques, which estimate an approximation [1] with the least-squares solution that calculates the tracking parameters used in shape alignment, each have their own limitations [6].

The purpose of face-related techniques is not simply to fit shapes. Mainly using the location information of the landmarks, they are used in systems that perform face recognition [7], gaze estimation [8], and expression recognition [9]. If a landmark is not properly placed on a feature point, each of these systems runs into serious problems. Furthermore, when tracking proceeds over

a continuous image from a video stream, when a landmark of the shape stays in the background, when the face is occluded, and when the object deforms in a way that cannot be generated by variations of the shape model, situations occur in which accurate fitting cannot be achieved under the environmental changes.

Therefore, we decided that another verification method is required in addition to the optimization strategies and the shape alignment method. If verification is performed prior to using the result of shape fitting, the reliability of the current shape can be determined. For example, it can be decided whether or not to use the result before using the data. Also, if verification shows that the fitting result in the video stream is not reliable, the current tracking can be abandoned and restarted from the search process, or a search for a better spot for the landmark can be induced.

To achieve this, we propose a method of verifying the landmarks constituting the outline of the face. It may not seem sufficient to run the verification process only on the outline landmarks of the face shape model. However,

Hindawi Publishing Corporation, Journal of Sensors, Volume 2016, Article ID 3825931, 10 pages. http://dx.doi.org/10.1155/2016/3825931


every landmark is associated with the others in the form of the shape model. Thus, verification on the outline landmarks can help to judge the fitting of the overall shape model.

Landmarks on the face contour have a feature that makes them easier to distinguish than other landmarks, because they are located at the boundary between the background and other objects in the two-dimensional image. Our approach is to classify landmarks by analyzing the combination of texture and shape around each landmark. If a patch is configured to contain the pixel information around each landmark, the patch will include the background, the skin of the face, and the boundary gradient. Therefore, by analyzing the texture features of the patches around the landmarks of the face outline, the result placed on the facial contour can be verified.

We name the classifier that classifies landmarks using the features described above the "patch classifier." The patch classifier configures patches around the landmarks of the shape model and classifies whether the fitting is correct by analyzing the features obtained from the gray-value variance calculation.

Most shape models go through the fitting process under pose variations. In this case, some pixel information is not visible, so the content of missing pixels is filled in using bilinear interpolation. Since the variance represents the relationship with the average, bilinear interpolation is suitable for filling in the missing information without distorting the variance results.

This paper is developed in the following flow. First, in Section 2, we review the approaches associated with our system. In Section 3, the patch classifier that we propose is introduced. In Section 4, the verification of the patch classifier is shown through experiments. Finally, Section 5 refers to the conclusion, and Section 6 concludes with the contributions and suggestions for future research.

2. Related Work

2.1. Face Shape Model and Fitting Method. Various approaches that match a shape model with an image, and techniques that align the shape with the image object, have been studied and have shown good performance. The representative shape methods are the Active Shape Models [1, 10–12] as the starting point, the Active Appearance Models [2, 13–15], which use texture, and the Constrained Local Model [3, 6, 16], which uses the patch texture around a landmark. These studies commonly estimate the position of each landmark at a feature point. They then update the optimized shape parameters, transform the reference shape, and generate a shape that fits the object.

Starting with these techniques, more extensive research directions have been created. Typically, by aligning the nonrigid shape model on the face image and distinguishing the expression, the emotion of a person is estimated [9, 17]. A pose is estimated using the geometric information of the shape [18]. And, by using the pose and eye position information of the shape, research on gaze tracking has been carried out [8]. Also, by generating

the frontal face image from a nonfrontal face, it is used to identify faces [7, 19]. Beyond these, various studies are underway to make good use of the shape in a number of areas. In this paper, by judging whether the result can be trusted before using the shape model, the shape can be utilized more usefully.

2.2. Gray-Value Variance. Gray-value variance is an attribute from which texture can be calculated as a feature. In Tracking-Learning-Detection, the gray-value variance of a patch is used as the first of the three stages of the object detection method [20]. If a patch produced by a scanning window has a variance less than 50% of that of the target object, it is rejected from the candidate group. Such a patch is commonly a nonobject patch, determined to come from background such as sky, street, and so forth.

Also, in [21], similar to the previous paper, a cascade-filter-based tracking algorithm using multiple features is proposed for tracking objects. In the cascade stage, if the gray-value variance is lower than 80 or greater than 120, the candidate is rejected.

Fingerprint image segmentation research has proposed an algorithm that uses the gray-level variance as a threshold [22]. In these ways, the gray-value variance is used to filter objects or the background in object detection research.

2.3. Most Related Approaches. In [7], which suggests the view-based active shape model, the content of boundary extraction is dealt with. In that research, the whole active shape model is initialized with Procrustes analysis; however, due to individual differences, a process of extracting the boundary of the face is added in order to handle the case where the shape drifts away from the actual face area. It is limited by the boundary of the square that includes the top, bottom, and sides from the eyebrows to the chin and the end of the nose. The boundary is formulated as a curve running from the bottom to the top of the rectangular area, in which a combination of edge strength and smoothness is optimized. The optimal curve is found by dynamic programming, as in seam carving. A seam means a path of least importance. First, the importance function is measured by examining neighboring pixels of the image. Seam carving consists of finding the path of minimum energy cost; here, instead, the optimal curve maximizes rather than minimizes the edge strength. Later, the boundary of the AAM points is updated using the curve points.

That study differs from our purpose, but it is consistent in that it draws its result through the face outline. However, it extracts the contour of a face using the edge components of the face outline. The method of using edges represents the form feature of an object, but it cannot avoid the influence of real-life backgrounds.

3. Patch Classifier

The patch classifier proposed in this paper has the role of classifying whether each landmark that constitutes the face outline is


Figure 1: Flow of the patch classifier. From the current 2D shape and the reference shape, the frontal face image is generated, patches are constructed, the gray-value variance is computed, and the result is redefined (e.g., [1 1 0 1 1] -> [1 1 1 1 1]).

properly placed on the outline of the face image. The key idea of the patch classifier is that the inside of the shape composed of the outline landmarks consists of skin. Since the region inside the outer face boundary is composed only of smooth skin texture, the gray-value variance calculated there shows a low value. To calculate this, we construct the area around each landmark as a patch. In addition, the outside of the shape composed of the landmarks that constitute the face contour is treated with a mask. A patch that excludes its mask should include neither the background nor the gradient boundary of the face. Using this, the patch classifier judges whether the landmark is properly placed on the facial contour. Prior to this, it is necessary to create a frontal image of the face so that the method works for any pose of the shape model. Figure 1 shows the flow in a simple picture. We use the current shape from the tracking method and the trained reference shape. First, the frontal face image is generated by warping the image from the current shape to the reference shape. Then we construct patches from the set of pixels in the region around each landmark of the reference shape, and the gray-value variance is calculated to analyze the characteristics of each patch. Finally, we decide the classification results using the calculated gray-value variance and the geometric features of the face shape model.

This process is described in the following flow. First, Section 3.1 deals with the process that configures the patches after creating a frontal face image composed of gray values in the preprocessing step. Section 3.2 explains the process of calculating the variance of each patch. Finally, Section 3.3 describes the process of redetermining the variance calculation result by considering the relationships between landmarks.

3.1. Preprocessing. A frontal face image is produced from the shape and image being tracked, in order to compute the variance for the patch classifier. Then, as preprocessing for the variance calculation, the patches made of the gray values around the landmarks are constructed.

3.1.1. Frontal Face Shape Model. The shape model is basically driven by the calculation of the mean shape and its parameters [1].

In this paper, we use the current shape, the image, and the reference shape of the Point Distribution Model (PDM) used in the shape alignment. The reference shape is the mean shape of the training set used when the PDM is trained, and it is generally a frontal face. The shape model is utilized only for the face, with the background removed. Since our approach is a pose-invariant system, the frontal image is prepared so that the patch is calculated under the same conditions regardless of pose.
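As a minimal sketch (assuming the training shapes are already aligned, e.g. by Procrustes analysis), the reference shape is simply the per-point mean over the training shapes:

```python
import numpy as np

def mean_shape(shapes):
    """Reference shape of the PDM: the per-landmark mean over the
    aligned training shapes (each shape is an (L, 2) array of
    landmark coordinates)."""
    return np.mean(np.stack(shapes), axis=0)

a = np.array([[0.0, 0.0], [2.0, 0.0]])
b = np.array([[2.0, 2.0], [4.0, 2.0]])
print(mean_shape([a, b]))  # [[1. 1.] [3. 1.]]
```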

3.1.2. Generate 2D Frontal Face Image. Using the reference shape, the current face image is transformed into the frontal image. In a number of studies, the Piecewise Affine Warp [23] is used to produce a face image. This technique is a texture mapping which maps different areas from the image. First, it configures the Delaunay triangulation in the currently tracked shape and in the reference shape. Then, using bilinear interpolation with the barycentric coordinates, it maps each source triangle to the target triangle. When a pose variation occurs in the shape model, the areas of the 2D face image that are not visible are filled with bilinear interpolation. Bilinear interpolation is an extension of simple linear interpolation; it fills the empty area in a way that does not distort the gray-value variance that will be calculated later.
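A sketch of the bilinear interpolation step (pure NumPy, not the authors' implementation): a pixel at fractional coordinates is the distance-weighted average of its four integer neighbors.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample grayscale image `img` at fractional coordinates (x, y)
    using bilinear interpolation (an extension of simple linear
    interpolation along both axes)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the four neighboring pixels.
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bottom = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bottom

img = np.array([[0.0, 100.0],
                [100.0, 200.0]])
print(bilinear_sample(img, 0.5, 0.5))  # center of a 2x2 grid -> 100.0
```

Because each interpolated pixel stays within the range of its neighbors, filling occluded areas this way avoids introducing spurious gradients into the later variance calculation.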

In general, when generating the frontal face image through the Piecewise Affine Warp, the area other than the face region is excluded by the mask. That is, the area outside the set of triangle regions is treated with a mask. The mask can be obtained as the convex hull of the reference shape. Figure 2 shows an example of the result of this process.

This mask is not used solely for the Piecewise Affine Warp transformation. In the variance calculation, the mask area is excluded so that only the skin area is calculated. In this way, the outer boundary of the face is naturally reviewed: the background is included in the area outside the lines connecting neighboring landmarks, while only skin is included inside the boundary.

3.1.3. Construct Patch. Each patch specifies a rectangular area around a landmark of the frontal face shape. The size of the rectangular area determines the size of the patch in proportion to the whole size of the shape model. All the created patches have to include part of the boundary of the face contour.
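A sketch of the patch construction; the paper states only that the patch size is proportional to the shape size, so the `ratio` parameter here is an assumption:

```python
import numpy as np

def construct_patch(image, landmark, shape_width, ratio=0.1):
    """Cut a square patch centered on one landmark. The patch size is
    proportional to the overall shape width (`ratio` is a hypothetical
    value; boundary clamping is omitted for brevity)."""
    half = max(1, int(shape_width * ratio) // 2)
    x, y = int(round(landmark[0])), int(round(landmark[1]))
    return image[y - half:y + half + 1, x - half:x + half + 1]

image = np.zeros((100, 100))
patch = construct_patch(image, (50, 50), shape_width=100)
print(patch.shape)  # (11, 11)
```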

The reason to specify the patch is to reduce the effect of the skin or the amount of light when the variance is calculated. The gray-value variance is a number that indicates the extent to which the gray values of the image are spread from the average; that is, the value indicates the distribution of the pixels in the image. Therefore, if we specified the range of the image as the full face, a large numerical value would be acquired due to the gradients caused by illumination, eyes, nose, mouth, and so forth. If the variance is calculated locally by configuring patches, on the other hand, it is possible to reduce the effect of the gradient of the overall light on the face. Because we only calculate the variance to determine whether the texture inside the shape model is smooth, calculating on a limited scale is more stable and efficient.

Figure 2: Examples of frontal face images.

3.2. Gray-Value Variance. This section describes the details of calculating the variance for each patch produced by the preceding preprocessing. Variance is a value that measures how much a set of values is spread out. Low variance means that the values are gathered closely around the average; a high variance indicates that the values lie far from the average. That is, if there is no change in the gray values of the generated patch, the variance will be low. As a result, a low gray-value variance means a smooth texture. Conversely, if the calculated variance is high, it means that the background or some other object is included in the patch. Thus, we calculate the variance of the patch centered on each landmark and determine by it whether the patch contains skin texture. For this, the gray-value variance \sigma_i of the i-th landmark l_i is calculated as follows. First, the mean of the gray-level values of image I, excluding the mask M, is calculated by

E(I, M) = (1/N) \sum_{x=0}^{n} \sum_{y=0}^{n} i(x, y) m(x, y),   (1)

where i(x, y) is the gray-level value at coordinate (x, y) of the image I, m(x, y) is the pixel at coordinate (x, y) of the mask M, N is the number of pixels for which the mask value at (x, y) is 1, and n is the size of the image. The image must be grayscale, and the mask consists of 0s and 1s. Equation (1) therefore computes the mean while excluding points where the mask value is 0.

The mask has already been used in the preprocessing. The ideal result is that the mask is cut out along the face contour. With this, the gray-value variance of the frontal face image, excluding the mask, is

\sigma_k = E(P_k^2, M_k) - E^2(P_k, M_k),   (2)

Figure 3: Gray-value variance of the patch around the ground-truth landmarks in the Multi-PIE (x-axis: gray-value variance; y-axis: number of landmarks, x10^5).

where P_k is the patch that contains the n x n gray-level values centered at the k-th landmark. With this formula we can calculate the variance of the range that should contain only the gray values of skin, as noted above. We have collected the appropriate range of variances using manually placed landmarks. Table 1 shows the patch, the actual gray values, and the gray-value variance. The mask area M is shown as black in the patch, where the gray value is 0.
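Equations (1) and (2) can be sketched directly in NumPy; here `patch` holds the gray values and `mask` is 1 on skin and 0 elsewhere (a sketch, not the authors' code):

```python
import numpy as np

def masked_mean(patch, mask):
    """Equation (1): mean of the gray values over pixels where mask is 1."""
    return (patch * mask).sum() / mask.sum()

def masked_variance(patch, mask):
    """Equation (2): sigma_k = E(P_k^2, M_k) - E^2(P_k, M_k)."""
    return masked_mean(patch ** 2, mask) - masked_mean(patch, mask) ** 2

mask = np.ones((5, 5))
smooth = np.full((5, 5), 120.0)       # uniform skin texture
print(masked_variance(smooth, mask))  # 0.0

edge = smooth.copy()
edge[:, :2] = 10.0                    # background leaks into the patch
print(masked_variance(edge, mask))    # 2904.0: far above a skin-only value
```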

We measured the gray-value variance on the face images of the Multi-PIE database [24] with the ground-truth landmarks, in order to determine numerical values for the calculation results on general images with correctly positioned landmarks. The face images of the Multi-PIE database contain a variety of light directions, many people, and even beards. The results are shown in Figure 3. This graph shows, for each gray-value variance range on the x-axis, the number of landmarks whose gray-value variance falls in that range. In the measurement over the landmarks of the entire face outline, approximately 87.52% of the gray-value variances showed a value of less than 200. Through this result, we came to believe that we are likely to be able to determine the possibility


Table 1: An example of gray-value variance. Each row shows the 5 x 5 gray values of the patch around the k-th landmark (the mask area appears as gray value 0; the patch image column is omitted) and the computed variance.

k = 0, variance 14.9106:
0 0 0 0 125
0 0 0 32 126
0 0 0 32 123
0 0 0 55 127
0 0 0 73 129

k = 2, variance 93.8496:
0 0 0 110 119
0 0 0 117 119
0 0 0 110 121
0 0 0 120 122
0 0 0 88 109

k = 6, variance 137.96:
0 119 142 148 139
0 113 139 132 141
0 0 120 132 127
0 0 0 122 134
0 0 0 0 106

k = 9, variance 105.604:
120 113 112 110 108
103 98 111 102 101
0 0 0 79 0
0 0 0 0 0
0 0 0 0 0

k = 11, variance 98.7288:
91 81 71 67 75
80 63 64 61 70
61 59 58 0 0
68 52 0 0 0
0 0 0 0 0

k = 13, variance 32.2301:
102 90 85 78 0
90 88 87 0 0
88 87 80 0 0
86 84 0 0 0
85 79 0 0 0

of whether the fitting is correctly done by the proposed method. Some landmarks show a high gray-value variance because of a gradient caused by the direction of the light source or by a beard. Since this phenomenon can occur frequently in the real world, the patch classifier determines the classification through one more step.

3.3. Considering the Relation between Landmarks. As described above, whether a landmark is placed in the correct position can be estimated through the calculation of the variance of the gray-value patch. However, environmental factors such as the amount of light, shadow, skin characteristics, and beards may cause different results. In addition, the variance calculation itself has a weak point. For example, assume that a frontal face is being tracked in front of a single-color background. If a patch contains only the background because the face alignment failed, the result is a very low variance, and the incorrect fitting result will likely be classified as normal. To solve this problem, we redefine the result of the variance through the relationships between the landmarks of the shape.

In the first situation, traced in Figure 4(a), the gray-value variance of one landmark is calculated to be over 300, so it is classified as an incorrect result. In this case, it has a high variance because of the gradient of the facial contour. However, the RMS error against the manually placed landmark location is actually very small, and this point needs to be judged as a correct fitting.

Since these patches are created from the landmarks that constitute a face, features of the face can be used. The landmarks whose variances are calculated lie on the curve that forms the face boundary. Also, because the shape alignment technique keeps the shape close to the reference shape by transforming it with the parameters, it is difficult for a single landmark alone to take a protruding position. Thus, the result at such an outlying location is treated the same as the results of the neighboring landmarks.
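A minimal sketch of this redefinition rule (the paper does not give the exact procedure, so this is one plausible reading): an isolated decision that disagrees with both contour neighbors is overridden by them, matching the [1 1 0 1 1] -> [1 1 1 1 1] example in Figure 1.

```python
def redefine_by_neighbors(results):
    """Override a single landmark decision that disagrees with both of
    its neighbors along the face contour (0 = false, 1 = true)."""
    out = list(results)
    for i in range(1, len(results) - 1):
        if results[i - 1] == results[i + 1] != results[i]:
            out[i] = results[i - 1]
    return out

print(redefine_by_neighbors([1, 1, 0, 1, 1]))  # [1, 1, 1, 1, 1]
```

A run of two or more disagreeing landmarks is left untouched, since a whole stretch of the contour escaping the face is genuine evidence of a fitting failure.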

A different situation with the same phenomenon, a protruding result, is shown in Figure 4(b). Here a very low variance is obtained, since the patches around the protruding landmarks remain on a solid-colored background, while a single landmark starting to escape shows a high variance. Up to this point, it is the same as


Figure 4: Examples of situations that require considering the relationship between landmarks. The first column shows the result of fitting, the second column the frontal face image, and the third column parts of the classified results using the gray-value variance (1 is true, 0 is false). (a) False negative case. (b) False positive cases.

the previous situation. In order to prevent this, the mean of the whole patch, without excluding the mask used in the Piecewise Affine Warp, is

E'(I) = (1/n^2) \sum_{x=0}^{n} \sum_{y=0}^{n} I(x, y).   (3)

Therefore, the gray-value variance of the whole patch is

\sigma'_i = E'(p_i^2) - E'^2(p_i).   (4)

This calculation computes the variance of the whole patch, including the background, face, and so on. If the landmark stays on a solid-colored background, then, since no face contour crosses the patch, a similarly low variance is retained, which reveals the false positive.
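Equations (3) and (4), the unmasked counterpart of equation (2), can be sketched as:

```python
import numpy as np

def full_patch_variance(patch):
    """Equations (3)-(4): mean and variance over the whole n x n patch,
    mask included; a low value here flags a patch lying entirely on a
    flat background."""
    n2 = patch.size
    mean = patch.sum() / n2                      # equation (3)
    return (patch ** 2).sum() / n2 - mean ** 2   # equation (4)

flat_background = np.full((5, 5), 40.0)
print(full_patch_variance(flat_background))  # 0.0: no contour crosses the patch
```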

4. Experimental Results

In this paper, the patch classifier described above classifies correct fitting results by calculating the variance near each landmark. We experimented on the CMU Multi-PIE database [24] to evaluate the accuracy of the result. Also, after applying the patch classifier to an actual shape fitting technique on the video frames of FGNet [25], the measured gray-value variance and the classification result are checked against the RMS error. In the same way, the patch classifier is reviewed on real-life images with complex backgrounds.

Verification of the basic patch classifier proceeds on the Multi-PIE database. Multi-PIE has various poses, facial expression changes, and face images under different illuminations. 68 ground-truth landmark points are annotated for each image. Each image is 640 x 480 in size. Face images exist for a total of 346 people in various poses and, at the same

Figure 5: Shape RMS error to the face contour.

Figure 6: ROC curve according to a wide range of variance thresholds on the Multi-PIE (x-axis: false positive rate, 1 - specificity; y-axis: true positive rate, sensitivity). The red line is the ROC curve measured only by the threshold of the gray-value variance on the Multi-PIE. The blue line is the adjusted ROC curve considering both the threshold of the gray-value variance and the features of the curve of the facial shape.

time, 20 sets of lighting directions are composed. It is also made up of face images taken from cameras in a total of 15 directions. We use a total of 75,360 images, utilizing a set of three directions. Since this database is primarily used for face shape fitting performance comparisons, we carry out the experiment on the gray-value variance compared with the ground-truth landmarks.

In this paper, because we determine whether landmarks are correctly positioned on the facial contour, errors are estimated by the distance between the existing ground-truth landmark and the face contour, not by the RMS error. An example is shown in Figure 5.
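A sketch of this error measure (assumption: the contour is treated as a polyline through neighboring ground-truth landmarks, and the error is the distance from the fitted landmark to its nearest segment):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    # Project p onto the segment and clamp to its endpoints.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def contour_error(point, contour):
    """Distance from a fitted landmark to the ground-truth contour polyline."""
    return min(point_to_segment(point, contour[i], contour[i + 1])
               for i in range(len(contour) - 1))

contour = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
print(contour_error((5.0, 2.0), contour))  # 2.0
```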

In order to find an appropriate variance threshold \sigma to distinguish between true and false in the patch classifier, the Receiver Operating Characteristic (ROC) curves for a wide range of gray-value variances are presented in Figure 6.


Figure 7: Each landmark's error (distance to the ground truth) and gray-value variance over the frames of the FGNet database. (a) 4th landmark. (b) 3rd landmark. (c) 12th landmark. (d) 17th landmark.

The curve is calculated only for the outline landmarks of the shape model. The ground truth landmarks of the Multi-PIE are used for each sample image, and the same image goes through the fitting process using the technique of [6]. We then apply the patch classifier around the landmarks of the facial contour for this fitting. The fitting result is defined as true when the distance to the ground truth landmark is less than 3 pixels and defined as false when it is greater than that. In the ROC curve, the red mark is the case in which the result is determined only by the gray-value variance; the blue mark is the result of also considering the relationship between the landmarks. By considering the relationship between the landmarks, the true positive rate improved compared with the result before the correction, and the false positive rate also became much lower. The critical gray-value variance at which the classifier shows the best performance is 228. If classification is done using this critical value, the true positive rate is 87.71% and the true negative rate is 71.84%.
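As a minimal sketch of this fixed-threshold decision (the function name and example values are our own; only the 228 threshold comes from the text):

```python
import numpy as np

# Critical gray-value variance reported in the text; patches with a
# variance above it are treated as misfit landmark candidates.
CRITICAL_VARIANCE = 228.0

def classify_landmarks(variances, threshold=CRITICAL_VARIANCE):
    """Return True for landmarks whose patch variance is at or below the threshold."""
    variances = np.asarray(variances, dtype=float)
    return variances <= threshold

# Example: three smooth-skin patches and one patch crossing into the background.
labels = classify_landmarks([120.0, 85.5, 310.2, 199.9])
print(labels.tolist())  # [True, True, False, True]
```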

The experiment is performed on continuous images using the calculated critical value of the gray-value variance. Figure 7 shows the distance between the fitted landmarks and the ground truth landmarks in the FGNet talking face video database [25], together with the gray-value variance. As in the Multi-PIE experiment, the fitting technique of [23] was used. The graph at the top of each panel represents the distance between the landmark resulting from the fitting and the ground truth landmark. As shown in the graph, this fitting technique maintains a distance of approximately 10 pixels on the whole during tracking. Accordingly, it is confirmed that the gray-value variance for each landmark stays below a value of 300. As shown in Figure 7(a), if the fitting result is stable, the gray-value variance is also stable, and Figures 7(b) and 7(c) show the relationship between the gray-value variance and the error. As shown in Figure 7(d), the 17th landmark, which lies at one end of the outline landmarks of the face, does not have a stable gray-value variance. Since this position does not contain only skin, due to hair, the value is bound to be high. However, by considering the relationship between the landmarks of the shape as mentioned, the result of the patch classifier can be returned as true according to the neighboring landmarks.

8 Journal of Sensors

Figure 8: Example of the result of the patch classifier against a complicated background. The red points represent failures of fitting and the green points represent successes.

Finally, the result of applying the patch classifier to real images is shown in Figure 8. The green dots represent the landmarks for which normal fitting was made, and the red dots represent the landmarks classified as fitting failures by the patch classifier. In the upper right of each image, the frontal image whose gray-value variance is measured after configuring the patches is placed. The images show the patch classifier catching the cases in which the shape cannot keep up or a landmark stays on the background because of rotation; it can be seen in the frontal face image in the upper right that a non-face contour area is included. This method showed better performance against a complex background, since the gray-value variances in a complex background are higher than those in a simple background. Even when the face rotates about the yaw axis, our approach can classify landmarks properly.

5. Conclusion

In this study, the patch classifier has been studied, which determines the fitting result of the landmarks that constitute the face contour lines of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To achieve pose-invariant performance, it transforms the image to the frontal shape model; in this transformation, we were able to fill pixels hidden by a pose using bilinear interpolation without distorting the measured gray-value variance. Further, in order to reduce interference from illumination and the like, a patch is configured around each landmark, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the gray-value variances of the landmarks are confirmed to show a value of less than or equal to 200. Texture and shape features are dealt with in the existing fitting methods, and this study showed the potential for using these features for verification. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it was able to classify the correct fitting result with a probability of 83.32%. We could verify whether a landmark is properly placed on the face outline in a simple and effective way.

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are composed of geometric relations, we can estimate the fitting state of the shape being tracked. Estimation of the fitting state might be helpful in verifying the reliability of the shape model used in various directions and in determining its usage.

First, there is the case of using it as a measure for improving the tracking of the shape model itself. Systems related to a shape model consisting of landmarks flow in two steps: after the initial search for a face area and the initial fitting, only the shape fitting process occurs. If a landmark fails to fit a facial feature during tracking, it can happen that the shape repeats the incorrect fitting. In this case, by analyzing the current status with this research and returning to the initial phase of the system, recovery is possible.

It can also be used for correction of the shape fitting result. The fitting technique keeps track of each feature and maintains a natural form through the shape alignment; in this process, the outer landmarks directly evaluate and find the better fitting points. Furthermore, after verification of the shape, it can be used in the various systems that take advantage of the shape model; the systems of gaze tracking, face recognition, motion recognition, and facial expression recognition are applicable. The role of the landmarks in these systems is critical: if landmarks are incorrectly fitted, more serious problems might occur in these systems.

In this way, the patch classifier can achieve better results when applied after the shape fitting. In addition, by utilizing the failed data, it contributes to a way of escaping from bigger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38-59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054-3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollar, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513-1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270-2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200-215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937-944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156-1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924-933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models - 'smart snakes'," in Proceedings of the British Machine Vision Conference (BMVC '92), Leeds, UK, September 1992, pp. 266-275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330-2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838-1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135-164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707-1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929-938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177-190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1-8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701-1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409-1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICMT '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107-1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155-171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879-888, 2005.


Every landmark is associated with the others through the form of the shape model. Thus, verification on the outline landmarks can help to evaluate the fitting of the overall shape model.

Landmarks of a face contour have a feature suitable for being distinguished, compared with other landmarks, because they are located at the boundary between the face and the background or other objects in the two-dimensional image. Our approach is to classify landmarks through the analysis of the combination of a texture and a shape for each landmark. If a patch is configured that contains the pixel information around a landmark, this patch will include the background, the skin of the face, and the boundary gradient. Therefore, by analyzing texture features of the patches around the landmarks of the face outline, whether the result is placed on the facial contour can be verified.

We named the classifier that classifies landmarks using the features described above the "patch classifier." The patch classifier configures patches for the landmarks of the shape model and classifies whether the fitting is correct or not by analyzing the features obtained from the gray-value variance calculation.

Most shape models go through the fitting process in accordance with variations in pose. At this point, cases occur in which some pixel information is not visible, so the content of a missing pixel is filled in using bilinear interpolation. Since the variance represents the relationship with the average, bilinear interpolation is suitable for filling in the missing information without altering the variance results.

This paper is developed in the following flow. First, in Section 2, we review the approaches associated with our system. In Section 3, the patch classifier that we propose is introduced. In Section 4, the verification of the patch classifier is shown through experiments. Finally, Section 5 presents the conclusion, and Section 6 closes with the contributions and suggestions for future research.

2. Related Work

2.1. Face Shape Model and Fitting Method. Various approaches have been studied that show good performance by matching the shape model with an image and aligning the shape with the image object. The representative shape methods start with the Active Shape Models [1, 10-12], followed by the Active Appearance Models [2, 13-15], which use the texture, and the Constrained Local Model [3, 6, 16], which uses the patch texture around a landmark. These studies commonly estimate the position of each landmark, that is, each feature point. They update the optimized shape parameters, go through the process of transforming the reference shape, and generate a shape that fits the object.

Starting with these techniques, more extensive research directions have been created. Typically, by aligning the nonrigid shape model on a face image and distinguishing the expression, the emotion of a person is estimated [9, 17]. A pose can be estimated using the geometric information of the shape [18], and by using the pose and eye position information of the shape, research on gaze tracking has been achieved [8]. Also, by generating the frontal face image from a nonfrontal face, the shape is used to identify faces [7, 19]. Beyond these, various studies are underway to make good use of the shape in a number of areas. In this paper, by judging whether the result can be trusted before the shape model is used, the shape model can be utilized more reliably.

2.2. Gray-Value Variance. The gray-value variance is an attribute from which texture can be calculated as a feature. In Tracking-Learning-Detection, the gray-value variance of a patch is used in the first of the three stages of the object detection method [20]. If a patch scanned by the scanning window has a variance less than 50% of that of the target object, it is rejected from the candidate group. Such a patch is commonly a non-object patch, determined to come from background such as sky, street, and so forth.

Also, in [21], similarly to the previous paper, a cascade-filter-based tracking algorithm with multiple features is proposed for object tracking. In the cascade stage, if the gray-value variance is lower than 80 or greater than 120, the candidate is rejected.

Fingerprint image segmentation research has proposed an algorithm using the gray-level variance as a threshold [22]. In these ways, the gray-value variance is used to filter out objects or background in object detection research.

2.3. Most Related Approaches. In [7], which suggests the view-based active shape model, the content of boundary extraction is dealt with. In this research, the whole active shape model is initialized with Procrustes analysis; however, due to individual differences, a process of extracting the boundary of the face is added in order to handle the case in which the shape deviates from the actual face area. It is limited by the boundary of the square that includes the top, bottom, and sides, from the eyebrows to the chin and the end of the nose. Within the rectangular area, from bottom to top, a running curve that optimizes a combination of edge strength and smoothness is formulated. The optimal curve is found by dynamic programming, as in seam carving. A seam means a path of least importance: first, an importance function is measured from the neighboring pixels of the image, and seam carving consists of finding the path of minimum energy cost. Here, in contrast, the optimal curve maximizes the edge strength instead of minimizing it. The boundary AAM points are then updated using the curve points.

That study is different from our purpose, but it is consistent in that it draws its result through the face outline. However, it extracts the contour of a face by using the edge components of the face outline. The method of using edges represents the form feature of an object, but it cannot avoid the influence of real-life backgrounds.

3. Patch Classifier

The patch classifier proposed in this paper has the role of classifying whether a landmark that constitutes the face outline is properly placed on the outline of the face image.

Figure 1: Flow of the patch classifier. Using the current 2D shape from tracking and the trained reference shape, a frontal face image is generated; patches are constructed; the gray-value variance is computed; and the result is redefined (e.g., a per-landmark result [1 1 0 1 1] becomes [1 1 1 1 1]).

The key idea of the patch classifier is that the inside of the shape composed of the outline landmarks consists of skin. Since the region inside the outer face boundary is composed only of smooth skin texture, it shows a low value when the gray-value variance is calculated. To calculate this, we construct the area around each landmark as a patch. In addition, the outside of the shape composed of the landmarks that constitute the face contour is treated with a mask. A patch excluding its mask should not include the background or the gradient boundary of the face. Using this, the patch classifier judges whether each landmark is properly placed on the facial contour or not. Prior to these steps, it is necessary to create a frontal image of the face so that the method works for any pose of the shape model. Figure 1 shows this flow in a simple picture. We use the current shape from the tracking method and the trained reference shape. First, the frontal face image is generated by warping the image from the current shape onto the reference shape. Then we construct patches from the set of pixels in the region around each landmark of the reference shape. Next, the gray-value variance is calculated to analyze the characteristics of each patch. Finally, we decide the classification results using the calculated gray-value variance and the geometric features of the face shape model.

This process is described in the following flow. First, Section 3.1 deals with the process of configuring the patches after creating a frontal face image composed of gray values in the preprocessing step. Section 3.2 explains the process of calculating the variance of each patch. Finally, Section 3.3 describes the process of redetermining the variance calculation result by considering the relationship between landmarks.

3.1. Preprocessing. A frontal face image is produced from the shape and the image that are being tracked in order to compute the variance for the patch classifier. Next, the preprocessing for the variance calculation constructs, for each landmark, a patch made of gray values.

3.1.1. Frontal Face Shape Model. The shape model is basically driven through the calculation of the mean shape and the parameters [1].

In this paper, we use the current shape, the current image, and the reference shape of the Point Distribution Model (PDM) used in the shape alignment. The reference shape is the mean shape of the training set used when the PDM is trained, and it generally corresponds to a frontal face. The shape model is utilized only for the face, with the background removed. Since our approach is a pose-invariant system, the frontal image is prepared so that the patch is calculated under the same conditions regardless of the pose.

3.1.2. Generate 2D Frontal Face Image. Using the reference shape, the current face image is transformed into the frontal image. In a number of studies, the Piecewise Affine Warp [23] is used to produce a face image. This technique is a texture mapping that maps one area of the image to another. First, it configures Delaunay triangulations on the shape currently being tracked and on the reference shape. Then, using bilinear interpolation with the center coordinates, it maps each source triangle to the target triangle. When a pose variation occurs in the shape model, the parts of the 2D face image that are not visible are filled with bilinear interpolation. Bilinear interpolation is an extension of simple linear interpolation; it fills the empty area in a way that does not distort the gray-value variance, which will be calculated later.
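As a sketch of the interpolation step (our own minimal implementation, not the authors' code; border clamping is an assumption), bilinear sampling at a fractional coordinate looks like this:

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Sample a grayscale image at fractional coordinates (x, y)
    by bilinear interpolation, clamping at the image border."""
    h, w = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four surrounding pixels.
    top = (1.0 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1.0 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1.0 - fy) * top + fy * bottom

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 15.0, the average of the four pixels
```

Because every filled pixel is a convex combination of its neighbors, the filled region stays close to the local mean, which is why the later variance measurement is not distorted.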

In general, when the frontal face image is generated through the Piecewise Affine Warp, the area other than the face region is excluded by a mask; that is, the area outside the set of triangle regions is treated as masked. The mask can be obtained as the convex hull of the reference shape. Figure 2 shows an example of the result of this process.
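One way to rasterize such a mask from an ordered contour, as a hedged stand-in for the convex-hull step (the even-odd fill below is our own minimal choice, not the paper's implementation):

```python
import numpy as np

def polygon_mask(points, h, w):
    """Rasterize a binary mask that is 1 inside the closed contour `points`
    using an even-odd ray-casting fill over an ordered polygon."""
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        if y0 == y1:
            continue  # horizontal edges never cross a horizontal scan ray
        # Toggle pixels whose leftward ray crosses this edge.
        crossing = ((y0 > ys) != (y1 > ys)) & (
            xs < (x1 - x0) * (ys - y0) / (y1 - y0) + x0
        )
        mask ^= crossing
    return mask.astype(np.uint8)

# A square contour: interior pixels are flagged 1, exterior pixels 0.
m = polygon_mask([(1, 1), (8, 1), (8, 8), (1, 8)], 10, 10)
print(int(m.sum()))  # 49 interior pixels
```

In practice a library routine over the convex hull of the landmark points would serve the same purpose; the point is only that the mask is 1 on skin and 0 elsewhere.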

This mask is not used solely for the Piecewise Affine Warp transformation. In the variance calculation, the mask area is excluded so that only the skin area is used. In this way, the outer boundary of the face is naturally examined: the background is included in the area outside the lines connecting neighboring landmarks, while only skin is included inside the boundary.

3.1.3. Construct Patch. Each patch specifies a rectangular area around a landmark of the frontal face shape. The size of the rectangular area is determined in proportion to the whole size of the shape model, and every created patch has to include a part of the face contour boundary.

Figure 2: Examples of the frontal face image.

The reason to specify the patch is to reduce the effect of the skin or the amount of light when the variance is calculated. The gray-value variance is a number that indicates the extent to which the gray values of an image are spread from the average; that is, the value indicates the distribution of the pixels in the image. Therefore, if we specify the range of the image as the full face, a large numerical value must be acquired, due to the gradients from the illumination, eyes, nose, mouth, and so forth. If, on the other hand, the variance is calculated locally by configuring a patch, it is possible to reduce the effect of the gradient of the overall light on the face. Because we only calculate the variance to determine whether the texture inside the shape model is smooth, calculating on a limited scale is more stable and efficient.
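A patch-extraction step consistent with this description might look as follows; the proportionality ratio is an assumed value for illustration, since the text does not give one:

```python
import numpy as np

def construct_patch(image, landmark, shape_width, ratio=0.1):
    """Extract a square patch centered on a landmark of a grayscale image.
    The patch side is proportional to the overall shape size."""
    half = max(1, int(shape_width * ratio) // 2)
    x, y = int(landmark[0]), int(landmark[1])
    h, w = image.shape
    # Clamp the window to the image bounds.
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    return image[y0:y1, x0:x1]

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
patch = construct_patch(img, (5, 5), shape_width=10)
print(patch.shape)  # (3, 3)
```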

3.2. Gray-Value Variance. This section describes the details of calculating a variance for each patch obtained via the preceding preprocessing. Variance is a value that measures how widely a set of values is spread out. A low variance means that the values are gathered closely around the average; a high variance indicates that the values lie far from the average. That is, if there is no change in the gray values within the generated patch, the variance will be low. As a result, a low variance of the gray values means a smooth texture. Conversely, a high calculated variance means that the background or some other object is included in the patch. Thus, we calculate the variance of the patch centered on each landmark and determine from it whether skin texture is detected. For this, the gray-value variance σ_i for the i-th landmark l_i is calculated as follows. First, the mean of the gray-level values of image I, excluding the mask M, is calculated by

E(I, M) = (1/N) Σ_{x=0}^{n} Σ_{y=0}^{n} i(x, y) m(x, y), (1)

where i(x, y) is the gray-level value at coordinate (x, y) of the image I, m(x, y) is the value at coordinate (x, y) of the mask M, N is the number of pixels for which the mask value at (x, y) is 1, and n is the size of the image. The image should be grayscale, and the mask consists of 0s and 1s. Equation (1) therefore computes the mean while excluding the points where the mask value is 0.

The mask has already been used in the preprocessing; the ideal result is a mask cut out along the face contour. With this, the gray-value variance of the frontal face image, excluding the mask, is

σ_k = E(P_k^2, M_k) - E^2(P_k, M_k), (2)

Figure 3: Gray-value variance of the patches around the ground truth landmarks in the Multi-PIE (x-axis: gray-value variance; y-axis: number of landmarks, ×10^5).

where P_k is the patch that contains the n × n gray-level values centered on the k-th landmark. With this formula, as noted above, we can calculate the variance of the region that should contain only the gray values of the skin. We have collected the appropriate range of variances through manually placed landmarks. Table 1 shows example patches, their actual gray values, and the resulting gray-value variances. The mask M area appears black in the patch, with a gray value of 0.
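Equations (1) and (2) translate directly into NumPy; this sketch assumes the patch and its binary mask are already given as arrays:

```python
import numpy as np

def masked_mean(patch, mask):
    """E(I, M) of equation (1): mean of gray values where the mask is 1."""
    n = mask.sum()
    return float((patch * mask).sum()) / n

def masked_variance(patch, mask):
    """Equation (2): E(P^2, M) - E^2(P, M) over the unmasked skin pixels."""
    patch = patch.astype(float)
    return masked_mean(patch ** 2, mask) - masked_mean(patch, mask) ** 2

# Toy 3x3 patch: the left column is masked out (0 in the mask), and the
# remaining pixels are smooth skin-like values, so the variance is small.
p = np.array([[0, 120, 124], [0, 118, 122], [0, 121, 119]])
m = np.array([[0, 1, 1], [0, 1, 1], [0, 1, 1]])
print(round(masked_variance(p, m), 2))  # 3.89
```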

We measured the gray-value variance from the face images of the Multi-PIE database [24] with the ground truth landmarks, in order to determine the typical values for general images with correctly positioned landmarks. The face images of the Multi-PIE database contain a variety of lighting directions, a large number of people, and even beards. The results are shown in Figure 3. This graph shows the number of landmarks whose gray-value variance falls into each range shown on the x-axis. In the measurement over the landmarks of the entire face outline, approximately 87.52% of the gray-value variances showed a value of less than 200. Through this result, we come to believe that we can determine whether the fitting is correctly done with the proposed method. Some landmarks show a high gray-value variance due to gradients caused by the direction of the light source or a beard. Since this phenomenon can occur even more frequently in the real world, the patch classifier determines the classification through one more step.

Table 1: An example of gray-value variance. Each patch is a 5 × 5 set of gray values around the k-th landmark; masked pixels (shown as black in the patch) have a gray value of 0.

k = 0: gray values [0 0 0 0 125; 0 0 0 32 126; 0 0 0 32 123; 0 0 0 55 127; 0 0 0 73 129], variance 1491.06
k = 2: gray values [0 0 0 110 119; 0 0 0 117 119; 0 0 0 110 121; 0 0 0 120 122; 0 0 0 88 109], variance 93.8496
k = 6: gray values [0 119 142 148 139; 0 113 139 132 141; 0 0 120 132 127; 0 0 0 122 134; 0 0 0 0 106], variance 137.96
k = 9: gray values [120 113 112 110 108; 103 98 111 102 101; 0 0 0 79 0; 0 0 0 0 0; 0 0 0 0 0], variance 105.604
k = 11: gray values [91 81 71 67 75; 80 63 64 61 70; 61 59 58 0 0; 68 52 0 0 0; 0 0 0 0 0], variance 98.7288
k = 13: gray values [102 90 85 78 0; 90 88 87 0 0; 88 87 80 0 0; 86 84 0 0 0; 85 79 0 0 0], variance 32.2301

3.3. Considering the Relation between Landmarks. As described above, whether a landmark is placed in the correct position can be estimated through the calculation of the variance of the gray-value patch. However, the environment, such as the amount of light, shadows, skin characteristics, and beards, may cause different results. In addition, there is a weak point in what is calculated from the variance. For example, let us assume that a frontal face is being tracked in front of a single-color background. If a patch contains only the background due to a failure of the face alignment, the result indicates a very low variance, and the incorrect fitting result will most likely be classified as normal. To solve this problem, we redefine the result of the variance through the relationship between the landmarks of the shape.

As a result of tracking in the first situation, shown in Figure 4(a), the gray-value variance of one landmark is calculated as over 300, so it is classified as an incorrect result. In this case, it came to have a high variance due to the gradient of the facial contour. However, the RMS error between this landmark and the manually placed location is actually very small, and this point needs to be determined as a correct fitting.

Since these patches are created from the landmarks that constitute a face, features of the face can be used. The landmarks whose variances are calculated form the curve of the face boundary. Also, because the shape alignment technique keeps the reference shape by transforming it with its parameters, it is difficult for only one landmark to take a protrusive position. Thus, the result at a protrusive location is treated the same as the results of the neighboring landmarks.
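One plausible reading of this neighbor rule, with a function name and voting scheme of our own choosing, is to overrule any isolated verdict that disagrees with both of its neighbors on the contour:

```python
def redefine_results(results):
    """Overrule an isolated verdict that disagrees with both neighbors.

    `results` holds one preliminary 0/1 verdict per outline landmark, in
    contour order. This exact voting rule is our own illustration of the
    "protrusive landmark" correction described in the text.
    """
    fixed = list(results)
    for i in range(1, len(results) - 1):
        left, right = results[i - 1], results[i + 1]
        if left == right and results[i] != left:
            fixed[i] = left  # treat the protrusive landmark like its neighbors
    return fixed

print(redefine_results([1, 1, 0, 1, 1]))  # [1, 1, 1, 1, 1]
```

Note that this reproduces the redefinition step of Figure 1, where a per-landmark result [1 1 0 1 1] becomes [1 1 1 1 1].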

Figure 4: Examples of situations that require considering the relationship between landmarks: (a) a false negative case; (b) false positive cases. The first column shows the fitting result, the second the frontal face image, and the third the per-landmark classification results from the gray-value variance (1 = true, 0 = false).

In the same phenomenon, a different situation with a protrusive result is shown in Figure 4(b). This result obtained a very low variance, since the patches around the protrusive landmarks remain on a background of a solid color, while a single landmark starting to escape showed a high variance. Up to this point, it is the same as

the previous situation In order to prevent this the mean tothe patch that does not exclude themask used in the PiecewiseAffine Wrap after the landmark is

1198641015840(119868) =

1

1198992

119899

sum

119909=0

119899

sum

119910=0

119868 (119909 119910) (3)

Therefore the gray-value variance to the whole patch is

$\sigma'_i = E'(p_i^2) - E'^2(p_i)$ (4)

This calculation yields the variance of the patch including the background, the face, and so on. If the landmark stays on a solid-colored background, a similar variance will be retained, since no face contour is present.
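Equations (3) and (4) amount to an ordinary variance taken over the full n × n patch with no mask applied. A minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def whole_patch_variance(patch):
    """Gray-value variance over the entire patch, mask included:
    E'(I) per equation (3), then sigma' = E'(p^2) - E'(p)^2 per equation (4)."""
    p = patch.astype(np.float64)
    mean = p.mean()                      # E'(I): plain average over all n*n pixels
    return (p ** 2).mean() - mean ** 2   # equation (4)

# A patch lying entirely on a uniform background keeps a low variance,
# which is what the comparison against neighbouring landmarks relies on.
background = np.full((5, 5), 60.0)
print(whole_patch_variance(background))  # 0.0
```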

4. Experimental Results

The patch classifier described above classifies the correctness of a fitting result by calculating the variance around each landmark. We experimented on the CMU Multi-PIE database [24] to evaluate the accuracy of the result. In addition, after applying the patch classifier to an actual shape fitting technique on the video frames of FGNet [25], the measured gray-value variance and the classification result are checked against the RMS error. In the same way, the patch classifier is reviewed on real-life images with a complex background.

Verification of the basic patch classifier proceeds on the Multi-PIE database. Multi-PIE has face images with various poses, facial expression changes, and illuminations; 68 ground-truth landmark points are provided as annotations. Each image is 640 × 480 pixels. Face images of 346 people exist in various poses and, at the same


Figure 5: Shape RMS error to the face contour.

[ROC plot: true positive rate (sensitivity) versus false positive rate (1 - specificity)]

Figure 6: ROC curves over a wide range of variance thresholds on Multi-PIE. The red line is the ROC curve measured only by the threshold of the gray-value variance; the blue line is the adjusted ROC curve considering both the gray-value variance threshold and the features of the facial shape curve.

time, 20 illumination directions. It is also made up of face images taken from cameras in a total of 15 directions. We use 75,360 images in total by utilizing a set of three directions. Since this database is primarily used for comparing face shape fitting performance, we carry out the experiment on the gray-value variance compared against the ground-truth landmarks.

In this paper, because we determine whether landmarks are correctly positioned on the facial contour, errors are estimated by the distance between the existing ground-truth landmark and the face contour, not by the RMS error. An example is shown in Figure 5.

In order to find an appropriate variance threshold σ to distinguish between true and false in the patch classifier, Receiver Operating Characteristic (ROC) curves for a wide range of gray-value variances are presented in Figure 6.
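Selecting the threshold can be framed as a sweep over candidate variances, predicting a fit as "true" whenever its gray-value variance stays below the threshold (per Section 3.2) and recording one (FPR, TPR) point per candidate. A minimal sketch; the function name is ours:

```python
import numpy as np

def roc_points(variances, is_true, thresholds):
    """For each threshold, predict 'true fit' when variance < threshold and
    compute the (false positive rate, true positive rate) pair."""
    variances = np.asarray(variances, dtype=float)
    is_true = np.asarray(is_true, dtype=bool)
    pts = []
    for t in thresholds:
        pred = variances < t
        tpr = (pred & is_true).sum() / max(is_true.sum(), 1)
        fpr = (pred & ~is_true).sum() / max((~is_true).sum(), 1)
        pts.append((fpr, tpr))
    return pts
```

The paper reports that a threshold of 228 gave the best operating point on this curve.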


[Figure 7 panels: (a) 4th landmark, (b) 3rd landmark, (c) 12th landmark, (d) 17th landmark; each panel plots the distance to the ground truth and the gray-value variance per frame]

Figure 7: Each landmark's error and gray-value variance in the FGNet database.

The curve is calculated only for the outline landmarks of the shape model. The ground-truth landmarks of Multi-PIE are used for each sample image, and the fitting process is run on the same image using the technique of [6]. We then apply the patch classifier around the landmarks of the facial contour for this fitting. The fitting result is defined as true when the distance to the ground-truth landmark is less than 3 pixels and as false when it is greater. In the ROC curve, the red mark is the case where the result is determined only by the gray-value variance; the blue mark is the result of also considering the relationship between the landmarks. By considering the relationship between the landmarks, the true positive rate improved over the uncorrected result, and the false positive rate became much lower. The critical gray-value variance at which the classifier shows the best performance is 228. Classifying with this critical value, the true positive rate is 87.71% and the true negative rate is 71.84%.

The experiment then proceeds on continuous images using the computed critical value of the gray-value variance. Figure 7 shows the distance between the fitted landmarks and the ground-truth landmarks in the FGNet talking face video database [25], together with the gray-value variance. As in the Multi-PIE experiment, the fitting technique of [23] was used. The graph at the top of each panel represents the distance between the fitted landmark and the ground-truth landmark. As shown in the graph, this fitting technique maintains a distance of approximately 10 pixels throughout tracking. Accordingly, the gray-value variance for each landmark is confirmed to stay below 300. As shown in Figure 7(a), if the fitting result is stable, the gray-value variance is also stable. Figures 7(b) and 7(c) show the relationship between


Figure 8: Example of the result of the patch classifier in a complicated background. The red points represent failures of fitting and the green points successes.

gray-value variance and error. As shown in Figure 7(d), the 17th landmark, at one end of the outline landmarks of the face, does not have a stable gray-value variance. Since this position contains not only skin but also hair, the value is bound to be high. However, by considering the relationship between the landmarks of the shape as mentioned above, the patch classifier can return true for it according to its neighboring landmarks.

Finally, the result of applying the patch classifier to real images is shown in Figure 8. The green dots represent the landmarks for which a normal fitting is made, and the red dots represent the landmarks classified as fitting failures by the patch classifier. In the upper right of each image, the frontal image whose gray-value variance is measured after constructing the patches is shown. The images illustrate that the patch classifier detects the cases in which the shape cannot keep up or a landmark stays on the background due to rotation. It can be seen in the frontal face images in the upper right that non-face contour areas are included. The method showed better performance on complex backgrounds, where the gray-value variances are higher than on simple backgrounds. When the face rotates about the yaw axis, our approach can still classify landmarks properly.

5. Conclusion

In this study, the patch classifier has been presented, which judges the fitting result of the landmarks that form the face contour lines of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To achieve performance invariant to pose, it transforms the shape to the frontal shape model. In this transformation, we were able to fill pixels hidden by the pose using bilinear interpolation without distorting the measured gray-value variance. Further, in order to reduce interference from illumination and the like, a patch is configured around each landmark, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the landmarks' gray-value variances are confirmed to be less than or equal to 200. Whereas existing fitting methods deal with texture and shape features for fitting itself, this study showed the potential of using these features for verification. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it was able to classify the correct fitting results with an accuracy of 83.32%. We could thus verify, in a simple and effective way, whether a landmark of the face outline is correctly placed.

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are composed of geometric relations, we can estimate the fitting state of the shape being tracked. Estimating the fitting state might help in verifying the reliability of the shape model and in determining how it is used, in various directions.

First, it can be used as a measure for improving the tracking of the shape model itself. Systems built around a shape model consisting of landmarks proceed in two steps: after the initial search for a face area and the initial fitting, only the shape fitting process runs. If a landmark fails to fit a facial feature during tracking, the shape may repeat the incorrect fitting. In this case, by analyzing the current status with this research and returning the system to its initial phase, recovery is possible.

It can also be used to correct the shape fitting result. The fitting technique keeps track of each feature and maintains a natural form through the shape alignment. In this process, the outer landmarks can directly evaluate and find better fitting points. After verification of the shape, furthermore, it can serve various systems that take advantage of the shape model, such as gaze tracking, face recognition, motion recognition, and facial expression recognition. The role of the landmarks in these systems is critical; if landmarks are incorrectly fitted, more serious problems might occur.

In this way, the patch classifier can achieve better results when applied after shape fitting. In addition, by utilizing the failed data, it contributes to a way of escaping from bigger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054–3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513–1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200–215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937–944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156–1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924–933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models - 'smart snakes'," in Proceedings of the British Machine Vision Conference (BMVC '92), pp. 266–275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330–2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838–1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707–1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929–938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICMT '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107–1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155–171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879–888, 2005.



Figure 1: Flow of the patch classifier. From the current 2D shape and the reference shape, generate the frontal face image, construct the patches, compute the gray-value variance, and redefine the result (e.g., [1 1 0 1 1] -> [1 1 1 1 1]).

properly placed on the outline of the face image. The key idea of the patch classifier is that the inside of the shape composed of the outline landmarks consists of skin. Since the region inside the face outline is composed only of smooth skin texture, it shows a low value when the gray-value variance is calculated. To calculate this, we construct patches around the landmarks. In addition, the outside of the shape composed of the landmarks that constitute the face contour is treated with a mask. A patch that excludes its mask should include neither the background nor the gradient boundary of the face. Using this, the patch classifier judges whether the landmark is properly placed on the facial contour or not. Prior to these steps, it is necessary to create a frontal image of the face so that the method works for any pose of the shape model. Figure 1 shows the flow in a simple picture. We use the current shape being tracked and the trained reference shape. First, the frontal face image is generated by warping the image from the current shape onto the reference shape. Then, we construct patches from the set of pixels in the region around each landmark of the reference shape. Next, the gray-value variance is calculated to analyze the characteristics of each patch. Finally, we decide the classification result using the calculated gray-value variance and the geometric features of the face shape model.
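The flow above can be sketched end to end. This is a simplified toy, not the authors' implementation: the frontal warp is assumed to have already been applied, the patch half-size (2) and the threshold (228, taken from the experiments in Section 4) are illustrative, and the neighbor rule is a plausible majority vote on an isolated disagreement.

```python
import numpy as np

def classify_landmarks(frontal, landmarks, threshold=228.0, half=2):
    """Toy patch classifier: per-landmark gray-value variance thresholding
    followed by a neighbour-relation correction (Sections 3.2 and 3.3)."""
    flags = []
    for (x, y) in landmarks:
        patch = frontal[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        var = (patch ** 2).mean() - patch.mean() ** 2
        flags.append(1 if var < threshold else 0)
    # Neighbour rule: a single landmark disagreeing with both of its
    # contour neighbours adopts their value.
    for i in range(1, len(flags) - 1):
        if flags[i - 1] == flags[i + 1] != flags[i]:
            flags[i] = flags[i - 1]
    return flags

# Smooth synthetic "skin" gives a low variance, hence 1, at every landmark.
frontal = np.full((40, 40), 120.0)
print(classify_landmarks(frontal, [(10, 20), (20, 20), (30, 20)]))  # [1, 1, 1]
```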

This process is described in the following flow. First, Section 3.1 deals with the preprocessing that constructs the patches after creating a frontal face image composed of gray values. Section 3.2 explains the calculation of the variance of each patch. Finally, Section 3.3 describes how the variance calculation result is redefined by considering the relationship between landmarks.

3.1. Preprocessing. A frontal face image is produced from the shape and image currently being tracked, in order to compute the variance for the patch classifier. Next, as preprocessing for the variance calculation, a patch made of the gray values around each landmark is constructed.

3.1.1. Frontal Face Shape Model. The shape model is basically expressed through the mean shape and its parameters [1].

In this paper, we use the current shape, the image, and the reference shape of the Point Distribution Model (PDM) used in the shape alignment. The reference shape is the mean of the shapes in the training set used when the PDM is trained, and it generally corresponds to a frontal face. The shape model covers only the face, with the background removed. Since our approach is a pose-invariant system, the frontal image ensures that the patches are computed under the same conditions regardless of the pose.

3.1.2. Generate 2D Frontal Face Image. Using the reference shape, the current face image is transformed into a frontal image. In a number of studies, the Piecewise Affine Warp [23] is used to produce a face image. This technique is a texture mapping that maps different areas from the image. First, it constructs a Delaunay triangulation on the shape currently being tracked and on the reference shape. Then, using bilinear interpolation corrected with the center coordinates, it maps each specific triangle to the target triangle. When a pose variation occurs in the shape model, the parts of the 2D face image that are not visible are filled in with bilinear interpolation. Bilinear interpolation is an extension of simple linear interpolation; it fills the empty area in a way that does not distort the gray-value variance calculated later.
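The bilinear interpolation step can be illustrated in isolation: sampling a grayscale image at a real-valued coordinate is a weighted mean of the four surrounding pixels. A minimal sketch (the function name is ours):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample grayscale image `img` at real-valued (x, y) by bilinear
    interpolation over the four surrounding integer pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 15.0
```

Because the interpolated value never leaves the range of its four neighbours, filling hidden areas this way keeps the patch smooth rather than injecting spurious variance.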

In general, when the frontal face image is generated through the Piecewise Affine Warp, the area other than the face region is excluded by a mask. That is, the area outside the set of triangle regions is treated with the mask. The mask can be obtained as the convex hull of the reference shape. Figure 2 shows an example of the result of this process.

This mask is not used solely for the Piecewise Affine Warp transformation. In the variance calculation, the mask area is excluded so that only the skin area is used. In this way, the outer boundary of the face is naturally reviewed: the background lies outside the lines connecting neighboring landmarks, while only skin lies inside the boundary.

3.1.3. Construct Patch. Each patch specifies a rectangular area around a landmark of the frontal face shape. The size of the rectangular area, and thus of the patch, is set in proportion to the overall size of the shape model. Every created patch has to include a boundary of the face contour.
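As a sketch, cropping an n × n patch around a landmark looks like the following; the size 5 is illustrative (the paper sets it proportionally to the shape size), and the function name is ours:

```python
import numpy as np

def construct_patch(frontal, landmark, size=5):
    """Cut a size x size gray-value patch centred on a landmark (x, y)
    of the frontal face image."""
    h = size // 2
    x, y = int(round(landmark[0])), int(round(landmark[1]))
    return frontal[y - h:y + h + 1, x - h:x + h + 1]

frontal = np.arange(100, dtype=np.float64).reshape(10, 10)
print(construct_patch(frontal, (5, 5)).shape)  # (5, 5)
```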

The reason to specify the patch is to reduce the effect of the skin or the amount of light when the variance is calculated. The gray-value variance is a number that indicates the extent to which the gray values of the image are spread from the average; that is, the value indicates the distribution of the pixels in the image. Therefore, if we specified the range of the image as the full face, a large value would inevitably be acquired due to the gradients caused by illumination, eyes, nose, mouth, and so forth. If, on the other hand, the variance is calculated locally by configuring a patch, the effect of the overall gradient of light on the face can be reduced. Because we only calculate the variance to determine whether the texture inside the shape model is smooth, calculating on a limited scale is more stable and efficient.

Figure 2: Examples of frontal face images.

3.2. Gray-Value Variance. This section describes the details of calculating a variance for each patch after the preceding preprocessing. Variance is a value that measures how far a set of numbers is spread out. A low variance means the values gather closely around the average; a high variance indicates that the values lie far from the average. That is, if there is no change in the gray values of the generated patch, the variance will be low; a low gray-value variance therefore means a smooth texture. Conversely, if the calculated variance is high, the background or some other object is included in the patch. Thus, we calculate the variance of the patch centered on each landmark and detect the skin texture from it. For this, the gray-value variance $\sigma_i$ for the $i$th landmark $l_i$ is calculated as follows. First, the mean of the gray-level values for image $I$, excluding the mask $M$, is calculated by

$E(I, M) = \frac{1}{N} \sum_{x=0}^{n} \sum_{y=0}^{n} i(x, y)\, m(x, y)$ (1)

where $i(x, y)$ is the gray-level value at coordinate $(x, y)$ of the image $I$, $m(x, y)$ is the value at coordinate $(x, y)$ of the mask $M$, $N$ is the number of pixels whose mask value is 1, and $n$ is the size of the image. The image must be grayscale, and the mask consists of 0s and 1s, so (1) computes the mean while excluding the points where the mask value is 0.

The mask has already been used in the preprocessing; ideally, it is cut out along the face contour. With it, the gray-value variance of the frontal face image excluding the mask is

$\sigma_k = E(P_k^2, M_k) - E^2(P_k, M_k)$ (2)

Figure 3: Gray-value variance of the patches around the ground-truth landmarks in Multi-PIE (histogram of the number of landmarks versus gray-value variance).

where $P_k$ is the patch containing $n \times n$ gray-level values centered on the $k$th landmark. With this formula, we can calculate the variance of a range that should contain only the gray values of the skin, as noted above. We have collected the appropriate range of variances using manually placed landmarks. Table 1 shows patches, the actual gray values, and the resulting gray-value variances. The mask $M$ area is shown as black in the patch, with gray value 0.

We measured the gray-value variance on the face images of the Multi-PIE database [24] with the ground-truth landmarks, in order to determine typical values both for general images and for correctly positioned landmarks. The face images of the Multi-PIE database contain a variety of light directions, many people, and even beards. The results are shown in Figure 3. The graph counts the number of landmarks whose gray-value variance falls in each range shown on the x-axis. In the measurement over the landmarks of the entire face outline, approximately 87.52% of the gray-value variances showed a value of less than 200. Through this result, we come to believe that we are likely to determine the possibility


Table 1: An example of gray-value variance.

k    Gray-values (5 x 5 patch; mask area shown as 0)                                                Variance
0    0 0 0 0 125 | 0 0 0 32 126 | 0 0 0 32 123 | 0 0 0 55 127 | 0 0 0 73 129                       1491.06
2    0 0 0 110 119 | 0 0 0 117 119 | 0 0 0 110 121 | 0 0 0 120 122 | 0 0 0 88 109                  9384.96
6    0 119 142 148 139 | 0 113 139 132 141 | 0 0 120 132 127 | 0 0 0 122 134 | 0 0 0 0 106         137.96
9    120 113 112 110 108 | 103 98 111 102 101 | 0 0 0 79 0 | 0 0 0 0 0 | 0 0 0 0 0                 1056.04
11   91 81 71 67 75 | 80 63 64 61 70 | 61 59 58 0 0 | 68 52 0 0 0 | 0 0 0 0 0                      9872.88
13   102 90 85 78 0 | 90 88 87 0 0 | 88 87 80 0 0 | 86 84 0 0 0 | 85 79 0 0 0                      3223.01

of whether the fitting is correctly done with the proposed method. Some landmarks show a high gray-value variance because of a gradient along the direction of the light source, or because of a beard. Since this phenomenon occurs even more frequently in the real world, the patch classifier determines the classification through one more step.

33 Considering Relation between Landmarks As describedabove the result of whether it is placed in the correct positionof the landmark can be estimated through the calculationof the variance of the gray-value patch However due tothe environment like the amount of light shadow skincharacteristics and the beard may cause different results Inaddition there is a weak point in what will be calculated fromthe variance For example let us assume that the front faceis being tracked in front of a single color background If thepatch only contains the background due to the failure of theface alignment the result indicates a very low variance andthe incorrect fitting result more likely will be classified asnormal To solve this problem we redefine the result of thevariance through the relationship between the landmarks ofthe shape

As a result of the trace for the first situation in Figure 4(a)gray-value variance of one landmark is calculated over 300it is classified as an incorrect result In this case due to thegradient of the facial contour it came to have a high varianceHowever the RMS error of the manually placed landmarkand location is actually a very small number and it is the pointneeded to be determined as the correct fitting

Since these patches are created with the landmarks thatconsist of a face features of a face can be usedThe landmarksof which variances are calculated have the shape of curve thatforms the face boundary Also because it keeps the referenceshape by transforming using the parameters in the shapealignment technique it is difficult that only one landmarktakes the protrusive shape Thus the result of the protrusivelocation is treated as the same as the result of the peripherallandmark

As the same phenomenon of this the different situationin the protrusive result is shown in Figure 4(b) This resultobtained the very low value of variance since the aroundpatches of protrusive landmarks remain on the backgroundof a solid color Further a single landmark to start escapingshowed a high variance Until this situation it is the same as

6 Journal of Sensors

1

1

1

1

1

0

1

1

1

(a) False negative case

1

1

1

0

1

1

11

1

(b) False positive cases

Figure 4 Examples of the situation that needs to consider therelationship between landmarks The first column represents resultof fitting second column represents frontal face image and thirdrepresents parts of classified results using gray-value variance (1 istrue 0 is false)

the previous situation In order to prevent this the mean tothe patch that does not exclude themask used in the PiecewiseAffine Wrap after the landmark is

1198641015840(119868) =

1

1198992

119899

sum

119909=0

119899

sum

119910=0

119868 (119909 119910) (3)

Therefore the gray-value variance to the whole patch is

1205901015840

119894= 1198641015840(1199012

119894) minus 11986410158402(119901119894) (4)

This calculation will calculate the variance to the patchincluding the background face and so on If the landmarkstays in a solid colored background since the face contourdoes not exist the similar variance will be retained

4 Experimental Results

In this paper the patch classifier as described above classifiesthe correct fitting result by calculating the variance of eachlandmark nearby We did an experiment based on the CMUMulti-PIE database [24] to evaluate the accuracy of the resultAlso after performing by applying a patch classifier to theactual shape fitting technique in the video frame of FGNet[25] according to the RMS error the measurement of thegray-value variance and the result of the classification arechecked As the same experiment the patch classifier in real-life images with a complex background is reviewed

Verification of the basic patch classifier proceeds inthe Multi-PIE database Multi-PIE has various pose facialexpression changes and face images to the illumination68 ground truth landmark points for this have attached tocomments Each image is the 640 times 480 size A total of faceimages for 346 people exist in various poses and at the same

Figure 5: Shape RMS error to the face contour.

[Figure 6 plot: true positive rate (sensitivity) versus false positive rate (1 − specificity), both from 0 to 1.]

Figure 6: ROC curves over a wide range of variance thresholds on Multi-PIE. The red line is the ROC curve measured only by the threshold of the gray-value variance; the blue line is the adjusted ROC curve that considers the threshold of the gray-value variance together with the features of the facial shape curve.

time, 20 sets of lighting directions are composed. It is also made up of face images taken from cameras in a total of 15 directions. We use a total of 75,360 images by utilizing a set of three directions. This database is primarily used in face shape fitting performance comparisons, so we carry out the experiment on the gray-value variance compared with the ground-truth landmarks.

Because this paper determines whether landmarks are correctly positioned on the facial contour, errors are estimated by the distance between the ground-truth landmark and the face contour, not by the RMS error. An example is shown in Figure 5.
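The contour-distance error can be illustrated with a short sketch; `dist_to_contour` and the polyline representation of the ground-truth contour are hypothetical helpers, not part of the paper's code.

```python
def point_segment_dist(px, py, ax, ay, bx, by):
    # Distance from point (px, py) to the segment from (ax, ay) to (bx, by).
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    seg_len2 = vx * vx + vy * vy
    # Clamp the projection parameter t to stay on the segment.
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, (wx * vx + wy * vy) / seg_len2))
    dx, dy = px - (ax + t * vx), py - (ay + t * vy)
    return (dx * dx + dy * dy) ** 0.5

def dist_to_contour(point, contour):
    # Minimum distance from a fitted landmark to the polyline through
    # the ground-truth contour points.
    return min(point_segment_dist(point[0], point[1], a[0], a[1], b[0], b[1])
               for a, b in zip(contour, contour[1:]))
```

A fitted landmark is then scored by its distance to the nearest point of the ground-truth contour polyline rather than to a single corresponding landmark.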

In order to find an appropriate variance threshold σ to distinguish between true and false in the patch classifier, Receiver Operating Characteristic (ROC) curves over a wide range of gray-value variance thresholds are presented in Figure 6.


[Figure 7 panels: (a) 4th landmark; (b) 3rd landmark; (c) 12th landmark; (d) 17th landmark. Each panel plots the distance to the ground truth and the gray-value variance against the frame number.]

Figure 7: Each landmark's error and gray-value variance in the FGNet database.

The curve is calculated only for the outline landmarks of the shape model. The ground-truth landmarks of Multi-PIE are used for each sample image, and the fitting process is carried out on the same image using the technique of [6]. We then apply the patch classifier around each landmark of the facial contour for this fitting. The fitting result is defined as true when the distance to the ground-truth landmark is less than 3 pixels and as false when it is greater. In the ROC curve, the red mark is the case in which the result is determined only by the gray-value variance. The blue mark is the result of also considering the relationship between the landmarks. By considering the relationship between the landmarks, it was identified that the true positive rate improved over the uncorrected result and that the false positive rate also became much lower. The critical gray-value variance at which the classifier shows the best performance is 228. If classification uses this critical value, the true positive rate is 87.71% and the true negative rate is 71.84%.
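A threshold sweep of this kind can be sketched as follows. The paper does not state its selection criterion, so this illustration assumes Youden's J statistic (TPR minus FPR) as one plausible way to pick the operating point; all names here are our own.

```python
def roc_points(variances, labels, thresholds):
    # labels: True means the landmark is correctly fitted. The classifier
    # predicts "true" when the gray-value variance is at or below the threshold.
    pts = []
    for t in thresholds:
        tp = sum(1 for v, y in zip(variances, labels) if y and v <= t)
        fn = sum(1 for v, y in zip(variances, labels) if y and v > t)
        fp = sum(1 for v, y in zip(variances, labels) if not y and v <= t)
        tn = sum(1 for v, y in zip(variances, labels) if not y and v > t)
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        pts.append((t, tpr, fpr))
    return pts

def best_threshold(points):
    # Youden's J statistic (TPR - FPR): one common way to choose
    # the operating point on an ROC curve.
    return max(points, key=lambda p: p[1] - p[2])[0]

# Toy data: low variances for correct landmarks, high for incorrect ones.
pts = roc_points([50, 100, 150, 400, 500],
                 [True, True, True, False, False],
                 [0, 200, 600])
print(best_threshold(pts))   # 200
```

With real Multi-PIE measurements, such a sweep is how a critical value like 228 would be located on the red curve of Figure 6.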

The experiment is processed on continuous images using the calculated critical value of the gray-value variance. Figure 7 shows the distance between fitted landmarks and ground-truth landmarks in the FGNet talking face video database [25], together with the gray-value variance. As in the Multi-PIE experiment, the fitting technique of [23] was used. The graph at the top of each panel represents the distance between the fitted landmark and the ground-truth landmark. As shown in the graph, this fitting technique maintains a distance of approximately 10 pixels overall during tracking. Accordingly, it is confirmed that the gray-value variance for each landmark stays below 300. As shown in Figure 7(a), if the fitting result is stable, the gray-value variance is also stable. Figures 7(b) and 7(c) show the relationship between gray-value variance and error. As shown in Figure 7(d), the gray-value variance of the 17th landmark, at one end of the face outline landmarks, does not appear stable. Since this position does not contain only skin, due to hair, the value is necessarily high. However, by considering the relationship between the landmarks of the shape as mentioned, the result of the patch classifier can still be returned as true according to the neighboring landmarks.

Figure 8: Example of the patch classifier result on a complicated background. The red points represent failures of fitting and the green points represent successes.

Finally, the result of applying the patch classifier to real images is shown in Figure 8. The green dots represent landmarks for which normal fitting was achieved, and the red dots represent landmarks classified as fitting failures by the patch classifier. In the upper right of each image, the frontal image whose gray-value variance is measured after configuring the patches is placed. The images show that the patch classifier detects the cases in which the shape cannot keep up or a landmark stays on the background because of rotation. It can be seen that the frontal face image in the upper right includes non-face-contour area. The method showed better performance on complex backgrounds: the gray-value variances on a complex background are higher than those on a simple background. When the face rotates about the yaw axis, our approach can still classify landmarks properly.

5. Conclusion

In this study, the patch classifier has been studied, which determines the fitting result of the landmarks that make up the face contour lines of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To make the performance invariant to pose, it transforms the image to the frontal shape model. In this transformation, we were able to fill pixels hidden by pose using bilinear interpolation without modifying the gray-value variance measurement. Furthermore, in order to reduce interference from illumination and the like, a patch is configured around each landmark, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the landmarks' gray-value variances are confirmed to be less than or equal to 200. The existing fitting approaches deal with texture and shape features, and this study showed the potential for using those features for verification as well. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it was able to classify the correct fitting result with a probability of 83.32%. We could verify, in a simple and effective way, whether a landmark of the face outline is correctly placed via the gray-value variance.

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are composed of geometric relations, we can estimate the fitting state of the shape being tracked. Estimation of the fitting state might be helpful in verifying the reliability of the shape model used in various directions and in determining its usage.

First, it can be used as a measure for improving the tracking of the shape model itself. Systems related to a shape model consisting of landmarks flow in two steps: after the initial search for a face area and the initial fitting, only the shape fitting process occurs. If a landmark fails to fit a facial feature during tracking, it can occur that the shape repeats the incorrect fitting. In this case, by analyzing the current status with this research and returning the system to its initial phase, recovery is possible.

It can also be used to correct the shape fitting result. The fitting technique keeps track of each feature and maintains a natural form through shape alignment. In this process, the outer landmarks can directly evaluate and find better fitting points. After verification of the shape, it can furthermore be used in the various systems that take advantage of the shape model: gaze tracking, face recognition, motion recognition, and facial expression recognition are all applicable. The role of the landmark in such systems is critical; if landmarks are incorrectly fitted, more serious problems might occur in these systems.

In this way, the patch classifier can achieve better results when applied after shape fitting. In addition, by utilizing the failed data, it contributes to a way of escaping from bigger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models—their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054–3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513–1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200–215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937–944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156–1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924–933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models—'smart snakes'," in BMVC92: Proceedings of the British Machine Vision Conference, Leeds, UK, 22–24 September 1992, pp. 266–275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330–2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838–1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707–1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929–938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICM '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107–1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155–171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879–888, 2005.




Figure 2: Examples of frontal face images.

acquired due to the gradient caused by the illumination, eyes, nose, mouth, and so forth. If the variance is instead calculated locally by configuring a patch, it is possible to reduce the effect of the gradient of the overall lighting on the face. Because we only calculate the variance to determine whether the texture inside the shape model is smooth, calculating on a limited scale is more stable and efficient.

3.2. Gray-Value Variance. This section describes the details of calculating a variance for each patch produced by the preceding preprocessing. Variance measures how spread out a set of values is: a low variance means the values are gathered closely around the average, while a high variance indicates that the values lie far from the average. That is, if there is no change in the gray values within the generated patch, the variance will be low. As a result, a low gray-value variance means a smooth texture. Conversely, a high variance means that the background or some other object is included in the patch. Thus, we calculate the variance over the patch centered on each landmark and determine skin texture by this value. For this, the gray-value variance σ_i of the i-th landmark l_i is calculated as follows. First, the mean of the gray-level values, excluding the mask M, for image I is calculated by

\[ E(I, M) = \frac{1}{N} \sum_{x=0}^{n} \sum_{y=0}^{n} i(x, y)\, m(x, y) \tag{1} \]

where i(x, y) is the gray-level value at coordinate (x, y) of the image I, m(x, y) is the pixel at coordinate (x, y) of the mask M, N is the number of pixels whose mask value is 1, and n is the size of the image. The image must be grayscale, and the mask consists of 0s and 1s. Equation (1) thus computes the mean while excluding the points where the mask value is 0.

The mask has been produced in the preprocessing step; ideally, it is cut out along the face contour. Accordingly, the gray-value variance of the frontal face image, excluding the mask, is

\[ \sigma_k = E(P_k^2, M_k) - E^2(P_k, M_k) \tag{2} \]

[Figure 3 plot: number of landmarks (×10^5) versus gray-value variance.]

Figure 3: Gray-value variance of the patch around the ground-truth landmarks in Multi-PIE.

where P_k is the patch containing the n × n gray-level values centered on the k-th landmark. Through this formula, we can calculate the variance of the region that should contain only the gray values of skin, as noted above. We have collected the appropriate range of variance using manually placed landmarks. Table 1 shows patches, their actual gray values, and the resulting gray-value variance. The mask M area is shown as black in the patch, where the gray value is 0.
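Equations (1) and (2) amount to a masked mean and variance. The sketch below is our own illustration (not the authors' code) that treats the patch and mask as nested lists of equal size.

```python
def masked_variance(patch, mask):
    """Masked gray-value variance, following equations (1)-(2).

    Only pixels where mask == 1 contribute, so background pixels cut
    away along the face contour are ignored.
    """
    vals = [p for row_p, row_m in zip(patch, mask)
              for p, m in zip(row_p, row_m) if m == 1]
    n = float(len(vals))                  # N in equation (1)
    mean = sum(vals) / n                  # E(P, M), eq. (1)
    return sum(v * v for v in vals) / n - mean * mean   # sigma, eq. (2)

# Hypothetical 2x2 patch whose lower-right pixel is masked out:
print(masked_variance([[100, 100], [100, 0]], [[1, 1], [1, 0]]))   # 0.0
```

A correctly placed landmark, whose unmasked pixels are all smooth skin, yields a small value of this kind; Table 1 shows measured examples.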

We measured the gray-value variance from the face images of the Multi-PIE database [24] and the ground-truth landmarks in order to determine the typical values obtained in general images when landmarks are in the correct positions. The face images of the Multi-PIE database contain a variety of light directions, many people, and even beards. The results are shown in Figure 3. This graph shows, for each gray-value variance range on the x-axis, the number of landmarks whose variance falls in that range. In the measurement over all landmarks of the face outline, approximately 87.52% of the gray-value variances showed a value of less than 200. Through this result, we came to believe that we can likely determine the possibility


Table 1: An example of gray-value variance.

k  | Gray values (5 × 5 patch; rows separated by "/", masked pixels shown as 0)                                         | Variance
0  | 0 0 0 0 125 / 0 0 0 32 126 / 0 0 0 32 123 / 0 0 0 55 127 / 0 0 0 73 129                                           | 1491.06
2  | 0 0 0 110 119 / 0 0 0 117 119 / 0 0 0 110 121 / 0 0 0 120 122 / 0 0 0 88 109                                      | 93.8496
6  | 0 119 142 148 139 / 0 113 139 132 141 / 0 0 120 132 127 / 0 0 0 122 134 / 0 0 0 0 106                             | 137.96
9  | 120 113 112 110 108 / 103 98 111 102 101 / 0 0 0 79 0 / 0 0 0 0 0 / 0 0 0 0 0                                     | 105.604
11 | 91 81 71 67 75 / 80 63 64 61 70 / 61 59 58 0 0 / 68 52 0 0 0 / 0 0 0 0 0                                          | 98.7288
13 | 102 90 85 78 0 / 90 88 87 0 0 / 88 87 80 0 0 / 86 84 0 0 0 / 85 79 0 0 0                                          | 32.2301

of whether the fitting is correctly done with the proposed method. Some landmarks show a high gray-value variance because of a gradient caused by the direction of the light source or by a beard. Since this phenomenon can occur even more frequently in the real world, the patch classifier determines the classification through one more step.

3.3. Considering the Relation between Landmarks. As described above, whether a landmark is placed in the correct position can be estimated through the variance of the gray values in its patch. However, environmental factors such as the amount of light, shadow, skin characteristics, and beards may cause different results. In addition, there is a weak point in what is calculated from the variance. For example, assume that a frontal face is being tracked in front of a single-color background. If a patch contains only the background because of a failure of the face alignment, the result indicates a very low variance, and the incorrect fitting result will more likely be classified as normal. To solve this problem, we redefine the result of the variance through the relationship between the landmarks of the shape.

In the first situation, traced in Figure 4(a), the gray-value variance of one landmark is calculated to be over 300, so it is classified as an incorrect result. In this case, the gradient of the facial contour produced the high variance. However, the RMS error between this landmark and the manually placed location is actually very small, and this is a point that should be judged as a correct fitting.

Since these patches are created from the landmarks that make up a face, the features of a face can be used. The landmarks whose variances are calculated lie on the curve that forms the face boundary. Also, because the shape alignment technique keeps the reference shape by transforming with its parameters, it is difficult for a single landmark to take a protrusive position. Thus, the result at a protrusive location is treated the same as the result of its peripheral landmarks.
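One minimal way to express this neighbor rule in code is shown below. The exact correction rule is not specified in the paper, so `refine_labels` is a hypothetical sketch that simply overrides a single label that disagrees with both of its contour neighbors.

```python
def refine_labels(labels):
    """Neighbor-consistency correction over contour landmark labels.

    labels: list of 1 (classified true) / 0 (classified false) along the
    face outline. If one landmark's label disagrees with both of its
    immediate neighbors, adopt the neighbors' label. This is an
    assumption-laden stand-in for the paper's relation step.
    """
    out = list(labels)
    for i in range(1, len(labels) - 1):
        if labels[i - 1] == labels[i + 1] != labels[i]:
            out[i] = labels[i - 1]
    return out

print(refine_labels([1, 1, 0, 1, 1]))   # [1, 1, 1, 1, 1]
```

This captures both cases in Figure 4: a lone 0 amid 1s (false negative) is promoted, and a lone 1 amid 0s (false positive from a solid background) is demoted.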


Also it can be used as a correction of the shape fittingresult The fitting technique keeps track of each feature andmaintains the natural form through the shape alignmentIn this process the outer landmark directly evaluates andfinds the better fitting points After verification of the shapefurthermore it can be used in various systems to takeadvantage of the shape model The systems of gaze trackingface recognition motion recognition and facial expressionrecognition are applicable The role of the landmark in thesystem is critical If landmarks are incorrectly fitted a moreserious problem might occur in these systems

Like this the patch classifier can achieve the better resultsby applying after the shape fitting In addition by utilizingthe failed data it contributes to the way of escaping from thebigger problem

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] T F Cootes C J Taylor D H Cooper and J Graham ldquoActiveshapemodelsmdashtheir training and applicationrdquoComputer Visionand Image Understanding vol 61 no 1 pp 38ndash59 1995

[2] T F Cooles G J Edwards and C J Taylor ldquoActive appearancemodelsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 23 no 6 pp 681ndash685 2001

[3] D Cristinacce and T Cootes ldquoAutomatic feature localisationwith constrained local modelsrdquo Pattern Recognition vol 41 no10 pp 3054ndash3067 2008

[4] X P Burgos-Artizzu P Perona and P Dollar ldquoRobust facelandmark estimation under occlusionrdquo inProceedings of the 14thIEEE International Conference on Computer Vision (ICCV rsquo13)pp 1513ndash1520 IEEE December 2013

[5] H Drira B Ben Amor A Srivastava M Daoudi and R Slamaldquo3D Face recognition under expressions occlusions and posevariationsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 9 pp 2270ndash2283 2013

[6] JM Saragih S Lucey and J F Cohn ldquoDeformablemodel fittingby regularized landmark mean-shiftrdquo International Journal ofComputer Vision vol 91 no 2 pp 200ndash215 2011

[7] A Asthana T K Marks M J Jones K H Tieu and M VRohith ldquoFully automatic pose-invariant face recognition via

3D pose normalizationrdquo in Proceedings of the IEEE Interna-tional Conference on Computer Vision (ICCV rsquo11) pp 937ndash944November 2011

[8] C Xiong L Huang and C Liu ldquoGaze estimation based on3D face structure and pupil centersrdquo in Proceedings of the 22ndInternational Conference on Pattern Recognition (ICPR rsquo14) pp1156ndash1161 IEEE Stockholm Sweden August 2014

[9] C Cao Y Weng S Lin and K Zhou ldquo3D shape regression forreal-time facial animationrdquoACMTransactions on Graphics vol32 no 4 article 41 2013

[10] B vanGinneken A F Frangi J J Staal BM Ter Haar Romenyand M A Viergever ldquoActive shape model segmentation withoptimal featuresrdquo IEEE Transactions on Medical Imaging vol21 no 8 pp 924ndash933 2002

[11] T F Cootes and C J Taylor ldquoActive shape modelsmdashlsquosmartsnakesrsquordquo in BMVC92 Proceedings of the British Machine VisionConference organised by the British Machine Vision Association22ndash24 September 1992 Leeds pp 266ndash275 Springer LondonUK 1992

[12] C A R Behaine and J Scharcanski ldquoEnhancing the perfor-mance of active shape models in face recognition applicationsrdquoIEEETransactions on Instrumentation andMeasurement vol 61no 8 pp 2330ndash2333 2012

[13] F Dornaika and J Ahlberg ldquoFast and reliable active appearancemodel search for 3-D face trackingrdquo IEEE Transactions onSystems Man and Cybernetics Part B Cybernetics vol 34 no4 pp 1838ndash1853 2004

[14] J Matthews and S Baker ldquoActive appearance models revisitedrdquoInternational Journal of Computer Vision vol 60 no 2 pp 135ndash164 2004

[15] A U Batur and M H Hayes ldquoAdaptive active appearancemodelsrdquo IEEE Transactions on Image Processing vol 14 no 11pp 1707ndash1721 2005

[16] D Cristinacce and T Cootes ldquoFeature detection and trackingwith constrained local modelsrdquo in Proceedings of the 17thBritish Machine Vision Conference (BMVC rsquo06) pp 929ndash938September 2006

[17] X Cao Y Wei F Wen and J Sun ldquoFace alignment by explicitshape regressionrdquo International Journal of Computer Vision vol107 no 2 pp 177ndash190 2014

[18] M D Breitenstein D Kuettel T Weise L Van Gool andH Pfister ldquoReal-time face pose estimation from single rangeimagesrdquo inProceedings of the 26th IEEEConference onComputerVision and Pattern Recognition (CVPR rsquo08) pp 1ndash8 June 2008

[19] Y Taigman M Yang M Ranzato and LWolf ldquoDeepface clos-ing the gap to human-level performance in face verificationrdquoin Proceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo14) pp 1701ndash1708 IEEE ColumbusOhio USA June 2014

[20] Z Kalal K Mikolajczyk and J Matas ldquoTracking-learning-detectionrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 34 no 7 pp 1409ndash1422 2012

[21] S Hu S Sun X Ma Y Qin and B Lei ldquoA fast on-lineboosting tracking algorithm based on cascade filter of multi-featuresrdquo in Proceedings of the 3rd International Conference onMultimedia Technology (ICM rsquo13) Atlantis Press GuangzhouChina November 2013

[22] I SMsiza F VNelwamondo andTMarwala ldquoOn the segmen-tation of fingerprint images how to select the parameters of ablock-wise variancemethodrdquo in Proceedings of the InternationalConference on Image Processing Computer Vision and PatternRecognition (IPCV rsquo12) pp 1107ndash1113 July 2012

10 Journal of Sensors

[23] C A Glasbey and K V Mardia ldquoA review of image-warpingmethodsrdquo Journal of Applied Statistics vol 25 no 2 pp 155ndash1711998

[24] R Gross I Matthews J Cohn T Kanade and S Baker ldquoMulti-PIErdquo in Proceedings of the 8th IEEE International Conference onAutomatic Face and Gesture Recognition (FG rsquo08) September2008

[25] T F Cootes C J Twining V Petrovic R Schestowitz and CJ Taylor ldquoGroupwise construction of appearance models usingpiece-wise affine deformationsrdquo in Proceedings of 16th BritishMachine Vision Conference (BMVC rsquo05) vol 5 pp 879ndash8882005



Journal of Sensors 5

Table 1: Examples of the gray-value variance (each patch is 5 × 5; zero entries are pixels masked out as non-skin).

k = 0, variance = 1491.06:
    0    0    0    0  125
    0    0    0   32  126
    0    0    0   32  123
    0    0    0   55  127
    0    0    0   73  129

k = 2, variance = 93.8496:
    0    0    0  110  119
    0    0    0  117  119
    0    0    0  110  121
    0    0    0  120  122
    0    0    0   88  109

k = 6, variance = 137.960:
    0  119  142  148  139
    0  113  139  132  141
    0    0  120  132  127
    0    0    0  122  134
    0    0    0    0  106

k = 9, variance = 105.604:
  120  113  112  110  108
  103   98  111  102  101
    0    0    0   79    0
    0    0    0    0    0
    0    0    0    0    0

k = 11, variance = 98.7288:
   91   81   71   67   75
   80   63   64   61   70
   61   59   58    0    0
   68   52    0    0    0
    0    0    0    0    0

k = 13, variance = 32.2301:
  102   90   85   78    0
   90   88   87    0    0
   88   87   80    0    0
   86   84    0    0    0
   85   79    0    0    0
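The variances in Table 1 can be reproduced under the assumption, suggested by the zero entries, that zero-valued pixels mark the masked non-skin area and that the variance is the population variance over the remaining skin pixels. A minimal sketch (`skin_variance` is a hypothetical helper name, not from the paper):

```python
def skin_variance(patch):
    """Population variance of the gray values over the skin area only.

    Zero-valued pixels are treated as masked-out non-skin pixels, an
    assumption consistent with the zero entries in Table 1.
    """
    vals = [v for row in patch for v in row if v != 0]
    n = len(vals)
    if n == 0:
        return 0.0
    mean = sum(vals) / n
    # E[x^2] - E[x]^2 over the skin pixels only
    return sum(v * v for v in vals) / n - mean * mean

# 5 x 5 patch for k = 11 from Table 1
patch_k11 = [
    [91, 81, 71, 67, 75],
    [80, 63, 64, 61, 70],
    [61, 59, 58,  0,  0],
    [68, 52,  0,  0,  0],
    [ 0,  0,  0,  0,  0],
]
print(round(skin_variance(patch_k11), 4))  # 98.7289, matching Table 1's 98.7288 up to rounding
```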

of whether the fitting is correctly done in the proposed method. Some landmarks show a high gray-value variance because of gradients caused by the direction of the light source or by a beard. Since this phenomenon occurs frequently in the real world, the patch classifier makes its final decision through one more step.

3.3. Considering the Relation between Landmarks. As described above, whether a landmark is placed at the correct position can be estimated by calculating the variance of the gray values in its patch. However, environmental factors such as the amount of light, shadows, skin characteristics, and beards may cause different results. In addition, the variance itself has a weak point. For example, assume that a frontal face is being tracked in front of a single-color background. If a patch contains only the background because the face alignment failed, the variance is very low, and the incorrect fitting result is likely to be classified as normal. To solve this problem, we refine the result of the variance using the relationships between the landmarks of the shape.

In the first situation, shown in Figure 4(a), the gray-value variance of one landmark is calculated to be over 300, so it is classified as an incorrect result. In this case, the high variance comes from the gradient of the facial contour. However, the RMS error between this landmark and the manually placed ground-truth location is actually very small, so this point should be judged as a correct fitting.

Since these patches are created around the landmarks that make up a face, features of the face can be used. The landmarks whose variances are calculated form the curve of the face boundary. Moreover, because the shape alignment keeps the shape close to the reference shape through its transformation parameters, it is difficult for a single landmark to protrude on its own. Thus, the result at a protruding location is treated the same as the results of its neighboring landmarks.
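The paper describes this neighbor rule as an idea rather than an exact formula; one plausible realization (our assumption) is to overrule a single landmark whose label disagrees with both of its neighbors along the outline:

```python
def refine_with_neighbors(results):
    """Re-label each landmark using its two neighbors along the outline.

    `results` is the list of per-landmark booleans produced by the
    variance threshold. A landmark whose label disagrees with both of
    its neighbors takes the neighbors' label, since the shape alignment
    makes an isolated protrusion of one landmark unlikely.
    (Hypothetical majority rule, not an exact formula from the paper.)
    """
    refined = list(results)
    for i in range(1, len(results) - 1):
        left, right = results[i - 1], results[i + 1]
        if left == right and results[i] != left:
            refined[i] = left
    return refined

print(refine_with_neighbors([True, True, False, True, True]))
# [True, True, True, True, True] -- the isolated failure is overridden
```

Two or more adjacent failures are left untouched, which matches the intent that only a single protruding landmark is implausible under the shape constraint.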

A different situation with the same phenomenon, a protruding result, is shown in Figure 4(b). Here the variance is very low because the patches around the protruding landmarks remain on a solid-color background, while the single landmark that starts to escape shows a high variance. Up to this point, it is the same as


[Figure 4: Examples of situations that require considering the relationship between landmarks. (a) False negative case; (b) false positive cases. The first column shows the fitting result, the second the frontal face image, and the third the per-landmark classification by gray-value variance (1 is true, 0 is false).]

the previous situation. To prevent this, the mean of the patch around the landmark, computed without excluding the mask used in the piecewise affine warp, is

$$E'(I) = \frac{1}{n^2} \sum_{x=0}^{n} \sum_{y=0}^{n} I(x, y). \qquad (3)$$

Therefore, the gray-value variance over the whole patch is

$$\sigma'_i = E'(p_i^2) - E'^2(p_i). \qquad (4)$$

This computes the variance over the whole patch, including the background, the face, and so on. If the landmark stays on a solid-color background, the face contour is absent from the patch, so a similarly low variance is retained.
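Equations (3) and (4) amount to the standard E[I²] − E[I]² identity taken over all n² pixels of the patch, mask included. A minimal sketch (helper names are ours, not from the paper):

```python
def patch_mean(patch):
    # E'(I) = (1/n^2) * sum over x, y of I(x, y), over the whole n x n patch
    n = len(patch)
    return sum(v for row in patch for v in row) / (n * n)

def whole_patch_variance(patch):
    # sigma'_i = E'(p_i^2) - E'(p_i)^2, background pixels included
    squared = [[v * v for v in row] for row in patch]
    m = patch_mean(patch)
    return patch_mean(squared) - m * m

# A patch lying entirely on a solid-color background has zero variance,
# which is exactly the case equation (4) is meant to expose.
uniform = [[50] * 5 for _ in range(5)]
print(whole_patch_variance(uniform))  # 0.0
```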

4. Experimental Results

The patch classifier described above classifies the correctness of the fitting result by calculating the variance around each landmark. We performed an experiment on the CMU Multi-PIE database [24] to evaluate the accuracy of the result. We also applied the patch classifier to an actual shape-fitting technique on the video frames of FGNet [25] and checked the measured gray-value variance and the classification result against the RMS error. In the same manner, the patch classifier is reviewed on real-life images with a complex background.

Verification of the basic patch classifier proceeds on the Multi-PIE database. Multi-PIE contains face images with various poses, facial expressions, and illumination conditions, annotated with 68 ground-truth landmark points. Each image is 640 × 480 pixels. Face images of 346 people exist in various poses, and at the same

[Figure 5: Shape RMS error to the face contour.]

[Figure 6: ROC curves (true positive rate/sensitivity vs. false positive rate/1 − specificity) over a wide range of variance thresholds on Multi-PIE. The red line is the ROC curve measured only by the threshold on the gray-value variance; the blue line is the adjusted ROC curve that also considers the features of the curve of the facial shape.]

time, 20 sets of light directions are included. The database is also made up of face images taken from cameras in a total of 15 directions. We use 75,360 images in total, utilizing a set of three camera directions. Since this database is primarily used for comparing face-shape fitting performance, we carry out the gray-value variance experiment against the ground-truth landmarks.

In this paper, because we determine whether landmarks are correctly positioned on the facial contour, errors are estimated by the distance between the fitted landmark and the face contour through the ground-truth landmarks, not by the point-to-point RMS error. An example is shown in Figure 5.
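The paper does not give a formula for this distance-to-contour error; one natural realization (a sketch under that assumption) is the minimum distance from the fitted landmark to the polyline through the ground-truth outline points:

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Projection parameter of p onto the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def contour_error(landmark, contour):
    """Distance from a fitted landmark to the ground-truth contour,
    treated as the polyline through the ground-truth outline points."""
    return min(point_segment_distance(landmark, contour[i], contour[i + 1])
               for i in range(len(contour) - 1))

contour = [(0, 0), (10, 0), (20, 5)]
print(contour_error((5, 3), contour))  # 3.0 -- perpendicular drop onto the first segment
```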

To find an appropriate variance threshold σ that distinguishes true from false in the patch classifier, Receiver Operating Characteristic (ROC) curves for a wide range of gray-value variances are presented in Figure 6.


[Figure 7: Each landmark's error and gray-value variance in the FGNet database: (a) 4th landmark; (b) 3rd landmark; (c) 12th landmark; (d) 17th landmark. Each panel plots, per frame, the distance between the fitted landmark and the ground truth (top) and the gray-value variance (bottom).]

The curve is calculated only for the outline landmarks of the shape model. The ground-truth landmarks of Multi-PIE are used for each sample image, and the fitting process is run on the same images using the technique of [6]. We then apply the patch classifier around the facial-contour landmarks of this fitting result. The fitting result is defined as true when the distance to the ground-truth landmark is less than 3 pixels and as false when it is greater. In the ROC curve, the red mark is the case in which the result is determined only by the gray-value variance; the blue mark is the result of also considering the relationship between the landmarks. By considering this relationship, the true positive rate improved over the uncorrected result, and the false positive rate also became much lower. The critical gray-value variance at which the classifier shows the best performance is 228. If it is classified

by using this critical value, the true positive rate is 87.71% and the true negative rate is 71.84%.
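The thresholding step itself is simple: a landmark passes when its gray-value variance stays at or below the critical value found from the ROC analysis. A minimal sketch (the function name is ours):

```python
CRITICAL_VARIANCE = 228.0  # best-performing threshold reported for Multi-PIE

def classify_landmarks(variances, threshold=CRITICAL_VARIANCE):
    """True = fitting judged correct (low skin variance);
    False = fitting judged failed (high variance from background or
    facial-contour gradients)."""
    return [v <= threshold for v in variances]

# Variances in the range of Table 1 pass; values above the threshold fail.
print(classify_landmarks([93.8, 137.9, 322.3, 1491.1]))
# [True, True, False, False]
```

In the full method this raw labeling is then refined by the neighbor relationship described in Section 3.3.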

The experiment is also conducted on continuous images using the calculated critical value of the gray-value variance. Figure 7 shows the distance between the fitted landmarks and the ground-truth landmarks in the FGNet talking-face video database [25], together with the gray-value variance. As in the Multi-PIE experiment, the fitting technique of [23] was used. The graph at the top represents the distance between the landmark resulting from the fitting and the ground-truth landmark. As shown in the graph, this fitting technique maintains a distance of approximately 10 pixels overall during tracking. Accordingly, it is confirmed that the gray-value variance for each landmark stays below a value of 300. As shown in Figure 7(a), if the fitting result is stable, the gray-value variance is also stable. Figures 7(b) and 7(c) show the relationship between


[Figure 8: Example of the patch classifier's results against a complicated background. The red points represent fitting failures and the green points represent successes.]

the gray-value variance and the error. As shown in Figure 7(d), the 17th landmark, which lies at one of the two ends of the outline landmarks, does not show a stable gray-value variance. Since this position contains not only skin but also hair, the value is necessarily high. However, by considering the relationship between the landmarks of the shape, as mentioned above, the patch classifier can still return true for this landmark based on its neighbors.

Finally, the result of applying the patch classifier to real images is shown in Figure 8. The green dots represent landmarks for which a normal fitting was made, and the red dots represent landmarks classified as fitting failures by the patch classifier. In the upper right of each image, the frontal image whose gray-value variance is measured after configuring the patch is placed. The images show that the patch classifier detects the cases in which the shape cannot keep up or a landmark stays on the background because of rotation. It can be seen that the frontal face image in the upper right includes non-face contour areas. The method showed better performance against complex backgrounds: the gray-value variances in a complex background are higher than those in a simple background. When the face rotates about the yaw axis, our approach can still classify landmarks properly.

5. Conclusion

In this study, the patch classifier has been proposed, which judges the fitting result of the landmarks that form the face contour lines of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To make the performance invariant to pose, the shape is transformed to the frontal shape model. In this transformation, we were able to fill pixels hidden by the pose using bilinear interpolation, without modifying the gray-value variance measurement. Further, to reduce interference from illumination and the like, a patch is configured around each landmark, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the landmarks' gray-value variances are confirmed to be less than or equal to 200. The texture and shape features dealt with by existing fitting approaches thereby showed their potential for the use made of them in this study. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it was able to classify the fitting result correctly with a probability of 83.32%. We could verify, in a simple and effective way, whether a landmark of the face outline is correctly placed through its gray-value variance.

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are bound by geometric relations, we can estimate the fitting state of the shape being tracked. Estimating the fitting state can help in verifying the reliability of the shape model used in various settings and in deciding how to use it.

First, it can be used as a measure for improving the tracking of the shape model itself. Systems based on a landmark shape model flow in two steps: after the initial search for a face area and the initial fitting, only the shape-fitting process runs. If a landmark fails to fit a facial feature during tracking, the shape may repeat the incorrect fitting. In this case, by analyzing the current status with this method and returning the system to its initial phase, recovery is possible.
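The two-step flow with this recovery path can be sketched as follows. This is our own hypothetical structure, not an API from the paper: `detect`, `fit_step`, and `verify` are assumed callbacks for face detection with initial fitting, incremental fitting, and the patch classifier, and the failure ratio that triggers re-detection is an arbitrary choice.

```python
def track(frames, fit_step, detect, verify, max_failed_ratio=0.5):
    """Hypothetical tracking loop: after the initial detect-and-fit,
    only incremental fitting runs; if the patch classifier reports that
    too many outline landmarks failed, fall back to re-detection."""
    shape = None
    shapes = []
    for frame in frames:
        shape = detect(frame) if shape is None else fit_step(frame, shape)
        shapes.append(shape)
        results = verify(frame, shape)  # per-landmark True/False labels
        if results.count(False) > max_failed_ratio * len(results):
            shape = None  # force re-detection on the next frame
    return shapes

# Toy demonstration: fitting "fails" on frame 2, so frame 3 re-detects.
def detect(frame):
    return "init"

def fit_step(frame, shape):
    return "fit"

def verify(frame, shape):
    return [True] * 4 if frame != 2 else [False, False, False, True]

print(track([1, 2, 3], fit_step, detect, verify))  # ['init', 'fit', 'init']
```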

It can also be used to correct the shape-fitting result. The fitting technique keeps track of each feature and maintains a natural form through the shape alignment; in this process, the outer landmarks can be evaluated directly to find better fitting points. After verification of the shape, furthermore, it can be used in the various systems that take advantage of the shape model, such as gaze tracking, face recognition, motion recognition, and facial expression recognition. The role of the landmarks in these systems is critical; if landmarks are incorrectly fitted, more serious problems can occur in them.

In this way, the patch classifier can achieve better results when applied after the shape fitting. In addition, by utilizing the failed data, it contributes to escaping from bigger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models – their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054–3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513–1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200–215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937–944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156–1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924–933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models – 'smart snakes'," in BMVC92: Proceedings of the British Machine Vision Conference, 22–24 September 1992, Leeds, pp. 266–275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330–2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838–1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707–1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929–938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICM '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107–1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155–171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879–888, 2005.



6 Journal of Sensors

Figure 4: Examples of situations that require considering the relationship between landmarks. The first column shows the result of fitting, the second column the frontal face image, and the third the classified results using the gray-value variance (1 is true, 0 is false). (a) False negative case. (b) False positive cases.

the previous situation. In order to prevent this, the mean of the patch that does not exclude the mask used in the piecewise affine warp around the landmark is

$$E'(I) = \frac{1}{n^2} \sum_{x=0}^{n} \sum_{y=0}^{n} I(x, y). \tag{3}$$

Therefore, the gray-value variance over the whole patch is

$$\sigma'_i = E'(p_i^2) - E'^2(p_i). \tag{4}$$

This calculation yields the variance of the whole patch, including the background, the face, and so on. If the landmark lies on a solid-colored background, no face-contour gradient is present, so a similarly low variance is retained.
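Equations (3) and (4) amount to the standard variance identity var = E(p²) − E(p)² evaluated on an n × n patch around a landmark. A minimal sketch of that computation (the patch size n = 11 and the function name are assumptions for illustration; the paper's exact patch configuration is not given in this excerpt):

```python
import numpy as np

def patch_gray_variance(image, landmark, n=11):
    """Gray-value variance of an n x n patch centered on a landmark,
    following Eqs. (3)-(4): sigma'_i = E'(p_i^2) - E'^2(p_i)."""
    x, y = landmark
    half = n // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    mean = patch.sum() / (n * n)            # E'(I), Eq. (3)
    mean_sq = (patch ** 2).sum() / (n * n)  # E'(I^2)
    return mean_sq - mean ** 2              # Eq. (4)
```

On smooth skin texture the returned variance is low; when the patch straddles the background or the facial-contour gradient, it is high, which is the property the classifier exploits.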

4. Experimental Results

In this paper, the patch classifier described above classifies the correctness of the fitting result by calculating the variance around each landmark. We experimented on the CMU Multi-PIE database [24] to evaluate the accuracy of the result. In addition, after applying the patch classifier to an actual shape fitting technique on the video frames of FGNet [25], the measured gray-value variance and the classification result are checked against the RMS error. In the same manner, the patch classifier is also reviewed on real-life images with a complex background.

Verification of the basic patch classifier proceeds on the Multi-PIE database. Multi-PIE contains face images with various poses, facial expressions, and illumination changes; 68 ground truth landmark points are attached as annotations. Each image is 640 × 480 pixels. In total, face images of 346 people exist in various poses and, at the same

Figure 5: Shape RMS error to the face contour.

Figure 6: ROC curve for a wide range of variance thresholds on Multi-PIE (x-axis: false positive rate, 1 − specificity; y-axis: true positive rate, sensitivity). The red line is the ROC curve measured only by the gray-value variance threshold on Multi-PIE; the blue line is the adjusted ROC curve that additionally considers the features of the facial shape curve.

time, 20 sets of illumination directions are composed. It is also made up of face images taken from cameras in a total of 15 directions. We use 75,360 images in total, utilizing a set of three directions. This database is primarily used in face shape fitting performance comparisons, so we carry out the experiment on the gray-value variance compared against the ground truth landmarks.

In this paper, because we determine whether landmarks are correctly positioned on the facial contour or not, errors are estimated by the distance between the existing ground truth landmark and the face contour, not by the RMS error. An example is shown in Figure 5.
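The contour-distance error above is the distance from a fitted landmark to the polyline through the ground truth contour points. One plausible sketch of that measure (the function names are illustrative; the paper does not specify its exact implementation):

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_to_contour(landmark, contour):
    """Distance from a fitted landmark to the polyline through the
    ground truth face-contour points."""
    return min(point_segment_distance(landmark, contour[i], contour[i + 1])
               for i in range(len(contour) - 1))
```

Unlike the pointwise RMS error, this measure does not penalize a landmark for sliding along the contour, only for leaving it.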

In order to find an appropriate variance threshold σ to distinguish between true and false in the patch classifier, the Receiver Operating Characteristic (ROC) curves for a wide range of gray-value variances are presented in Figure 6.
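The ROC curve in Figure 6 is obtained by sweeping the variance threshold and measuring, at each candidate value, how many correctly fitted landmarks fall below it (true positives) and how many misaligned landmarks also fall below it (false positives). A minimal sketch of that sweep, assuming "true" means the variance is below the threshold:

```python
def roc_points(variances, labels, thresholds):
    """(FPR, TPR) pairs for classifying a landmark as correctly fitted
    when its gray-value variance is below the threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in thresholds:
        tp = sum(1 for v, y in zip(variances, labels) if v < t and y)
        fp = sum(1 for v, y in zip(variances, labels) if v < t and not y)
        pts.append((fp / neg, tp / pos))
    return pts
```

The operating point reported later in the text (threshold 228, true positive rate 87.71%, true negative rate 71.84%) corresponds to the point on this curve closest to the top-left corner.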


Figure 7: Error and gray-value variance for each landmark in the FGNet database. In each panel, one plot shows the gray-value variance and the other the distance between the fitted landmark and the ground truth, both per frame. (a) 4th landmark. (b) 3rd landmark. (c) 12th landmark. (d) 17th landmark.

The curve is calculated only for the outline landmarks of the shape model. The ground truth landmarks of Multi-PIE are used for each sample image, and the fitting process for the same image uses the technique of [6]. We then apply the patch classifier around the landmarks of the facial contour for this fitting. The fitting result is defined as true when the distance to the ground truth landmark is less than 3 pixels and as false when it is greater. In the ROC curve, the red mark is the case where the result is determined only by the gray-value variance, and the blue mark is the result of also considering the relationship between the landmarks. By considering the relationship between the landmarks, the true positive rate improved over the uncorrected result, and the false positive rate also became much lower. The critical gray-value variance at which the classifier shows the best performance is 228. Classifying by this critical value, the true positive rate is 87.71% and the true negative rate is 71.84%.

The experiment then proceeds on continuous images using the calculated critical value of the gray-value variance. Figure 7 shows the distance between the fitted landmarks and the ground truth landmarks in the FGNet talking face video database [25], together with the gray-value variance. As in the Multi-PIE experiment, the fitting technique of [23] was used. The graph at the top represents the distance between the landmark resulting from the fitting and the ground truth landmark. As shown in the graph, this fitting technique maintains a distance of approximately 10 pixels overall during tracking. Accordingly, it is confirmed that the gray-value variance for each landmark stays below a value of 300. As shown in Figure 7(a), if the fitting result is stable, the gray-value variance can be seen to be stable as well. Figures 7(b) and 7(c) show the relationship between


Figure 8: Example of the result of the patch classifier against a complicated background. Red points represent fitting failures and green points represent successes.

gray-value variance and error. As shown in Figure 7(d), the 17th landmark, which lies at one end of the outline landmarks of the face, does not show a stable gray-value variance. Since this position does not contain only skin, due to hair, the value is necessarily high. However, by considering the relationship between the landmarks of the shape as mentioned, the patch classifier can still return true according to the neighboring landmarks.
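The neighbor-based correction described above can be sketched as a simple smoothing pass over the per-landmark true/false flags along the face outline: an isolated verdict whose two outline neighbors agree is overridden, as in the false negative case of Figure 4(a). The exact correction rule is not fully specified in this excerpt, so the following is one plausible realization:

```python
def correct_with_neighbors(flags):
    """Override an isolated verdict (1 = true, 0 = false) whose two
    neighbors on the face outline both hold the opposite verdict."""
    out = list(flags)
    for i in range(1, len(flags) - 1):
        if flags[i - 1] == flags[i + 1] != flags[i]:
            out[i] = flags[i - 1]
    return out
```

Runs of two or more agreeing verdicts are left untouched, so a genuinely misaligned stretch of the contour is still reported as false.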

Finally, the result of applying the patch classifier on real images is shown in Figure 8. The green dots represent landmarks for which a normal fitting was made, and the red dots represent landmarks classified as fitting failures by the patch classifier. In the upper right of each image, the frontal image whose gray-value variance is measured after configuring the patch is placed. The images show that the patch classifier detects the cases where the shape cannot keep up, or where a landmark stays on the background because of rotation. It can be seen that the frontal face image in the upper right includes non-face-contour areas. This method showed better performance against complex backgrounds, since the gray-value variances there are higher than against simple backgrounds. Even when the face rotates around the yaw axis, our approach can classify the landmarks properly.

5. Conclusion

In this study, the patch classifier has been studied, which determines the fitting result of the landmarks that make up the face contour lines of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To make the performance invariant to pose, the image is transformed to the frontal shape model. In this transformation, we were able to fill pixels hidden by the pose using bilinear interpolation, without modification in measuring the gray-value variance. Further, in order to reduce interference from illumination and the like, a patch configured around each landmark is used, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the landmark gray-value variances are confirmed to be less than or equal to 200. The texture and shape features dealt with by existing fitting approaches showed their potential for use in this study as well. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it was able to classify the correct fitting result with a probability of 83.32%. We could verify, in a simple and effective way, whether a landmark of the face outline is properly placed via its gray-value variance.

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are composed of geometric relations, we can estimate the fitting state of the shape being tracked. Estimation of the fitting state might be helpful in verifying the reliability of the shape model used in various directions and in determining its usage.

First, it can be used as a measure for improving the tracking of the shape model itself. Systems related to a shape model consisting of landmarks flow in two steps: after the initial search of a face area and the initial fitting, only the shape fitting process occurs. If a landmark fails to fit the facial feature during tracking, the shape may repeat the incorrect fitting. In this case, by analyzing the current status with this research and returning to the initial phase of the system, recovery is possible.

It can also be used to correct the shape fitting result. The fitting technique keeps track of each feature and maintains a natural form through shape alignment. In this process, the outer landmarks directly evaluate and find better fitting points. After verification of the shape, furthermore, it can be used in various systems that take advantage of the shape model: gaze tracking, face recognition, motion recognition, and facial expression recognition are applicable. The role of the landmark in such systems is critical; if landmarks are incorrectly fitted, more serious problems might occur.

In this way, the patch classifier can achieve better results by being applied after the shape fitting. In addition, by utilizing the failed data, it contributes a way of escaping from bigger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054–3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollár, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513–1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200–215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937–944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156–1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924–933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models - 'smart snakes'," in BMVC92: Proceedings of the British Machine Vision Conference, Leeds, UK, 22–24 September 1992, pp. 266–275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330–2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838–1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707–1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929–938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICM '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107–1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155–171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879–888, 2005.


Page 7: Research Article Patch Classifier of Face Shape …downloads.hindawi.com/journals/js/2016/3825931.pdfFace Shape Model and Fitting Method. Showing good performances by the method of

Journal of Sensors 7

0102030405060708090100

3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000

01503004506007509001050120013501500

Frame

3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000

Frame

Gra

y-va

lue v

aria

nce

Dist

ance

bet

wee

ngr

ound

trut

h

(a) 4th landmark

0102030405060708090100

01503004506007509001050120013501500

Gra

y-va

lue v

aria

nce

Dist

ance

bet

wee

ngr

ound

trut

h

2500 3000 3500 4000 4500 5000

Frame

2500 3000 3500 4000 4500 5000

Frame

(b) 3th landmark

0102030405060708090100

3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 40000

1503004506007509001050120013501500

Frame

3000 3100 3200 3300 3400 3500 3600 3700 3800 3900 4000

Frame

Gra

y-va

lue v

aria

nce

Dist

ance

bet

wee

ngr

ound

trut

h

(c) 12th landmark

0102030405060708090100

01503004506007509001050120013501500

Frame

Frame

Gra

y-va

lue v

aria

nce

Dist

ance

bet

wee

ngr

ound

trut

h

2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000

2000 2100 2200 2300 2400 2500 2600 2700 2800 2900 3000

(d) 17th landmark

Figure 7 Each landmark error and gray-value variance in the FGNet database

The curve is calculated only for the outline of the landmarkof the shape model It is used as the ground truth landmarkof the Multi-PIE for each sample image and goes throughthe fitting process for the same image using the techniqueof [6] We thus apply the patch classifier to the around ofthe landmark of the facial contour for this fitting The fittingresult is defined as true when the distance of the groundtruth landmark is less than 3 pixels and defined as falsewhen it is greater than that In ROC curve the red markis the case that the result is determined only by the gray-value variance The blue mark is the result of consideringthe relationship between the landmarks By considering therelationship between the landmarks it was identified thatthe result was improved in the true positive rate before thecorrection and also the false positive rate became muchlower The critical gray-value variance in which the classifiershows the best performance is showed as 228 If it is classified

by using this critical value the true positive rate is 8771 andthe true negative rate is 7184

The experiment is processed for continuous images usingthe calculated critical value of the gray-value variance InFigure 7 the result for the distance between fitted landmarksand ground truth landmarks in the FGNet talking face videodatabase [25] and the gray-value variance is showed As thesame as in the experiment of Multi-PIE the fitting technique[23] was used The graph at the top represents the distancebetween the landmark the result of the fitting and theground truth landmark As shown in the graph this fittingtechnique maintains the distance of approximately 10 pixelson the whole in tracking According to this it is confirmedthat the gray-value variance for each landmark shows lessthan the value of 300 As shown in Figure 7(a) if the fittingresult is stable the gray-value variance can be seen to be sta-ble And Figures 7(b) and 7(c) show the relationship between

8 Journal of Sensors

Figure 8 Example of the result of patch classifier in the complicated backgroundThe red point represents the failure of fitting and the greenpoint represents the success

gray-value variance and error As shown in Figure 7(d) the17th landmark which are the gray-value variance at both endsof the landmarks of the outline of the face do not seem tobe stable Since this position does not only contain the skindue to hairs the value must be high However according toconsidering the relationship between the landmarks of theshape as mentioned the result of the patch classifier can bereturned as true according to neighbor landmarks

Finally the result of applying the patch classifier on realimages is shown in Figure 8 The green dots represent thelandmarks of which normal fitting is made and the reddots represent the landmarks that are classified by the fittingfailure by the patch classifier In the upper right of each imagethe frontal image is placed that its gray-value variance ismeasured after configuring the patch The image representsthe result that the patch classifier classifies the case that theshape cannot keep up with or the landmark stays on thebackground by the rotation It can be identified that in thefrontal face image in the upper right the nonface contourarea is included This method showed better performance incomplex background The gray-value variances in complexbackground are higher than the gray-value variances insimple background When the face rotated yaw axis ourapproach can classify landmarks properly

5 Conclusion

In this study the patch classifier has been studied whichdetermines the fitting result of landmarks that consist of

the face contour lines of the face shape model The patchclassifier approaches to estimate a texture of skin through agray-value variance measurement To show the performanceinvariant to pose it transforms to the frontal shape modelIn this transformation we were able to fill a hidden pixel bya pose using the bilinear interpolation without modificationin measuring the gray-value variance Further in order toreduce the interference with the illumination or the like themethod of configuring a patch around a landmark is usedA gray-value variance is calculated only in this patch If youapply this approach to theMulti-PIE database approximately8752 of the gray-value variance of the landmark is con-firmed to appear as a value of less than or equal to 200 Thetexture and shape features are dealt with in the approachof the existing fitting methods and showed the potential forthe use of the features in this study In addition by applyingthis technique to the outskirts of the landmark face shapemodel we proposed a method to classify whether a fitting issuccessful or not As a result it was able to classify the correctfitting result in the probability of 8332 We could verifywhether a gray-value variance is placed on the landmark ofthe face outline in the simple and effective way

6 Limitations and Future Work

Weused the outline landmarks bywhich the success or failureof fitting can be determined in the relatively simple way Asmentioned above since landmarks of the shape model arecomposed of geometric relations we can estimate the fitting

Journal of Sensors 9

state of the shape being tracked Estimation of the fitting statemight be helpful in verifying the reliability of the shapemodelused in various directions and determining the usage

First there is a case that uses it as ameasure for improvingthe tracking of the shape model itself The systems that arerelated to the shape model consisting of the landmarks areflowed in two steps After the initial searching of a face areaand the initial fitting process of shape fitting process onlyoccurs If a landmark fails to fit the facial feature in tracking itcan be occur that a shape repeats the incorrect fitting In thiscase with analyzing the current status and returning to theinitial phase of the system by using this research the recoveryis possible

Also it can be used as a correction of the shape fittingresult The fitting technique keeps track of each feature andmaintains the natural form through the shape alignmentIn this process the outer landmark directly evaluates andfinds the better fitting points After verification of the shapefurthermore it can be used in various systems to takeadvantage of the shape model The systems of gaze trackingface recognition motion recognition and facial expressionrecognition are applicable The role of the landmark in thesystem is critical If landmarks are incorrectly fitted a moreserious problem might occur in these systems

Like this the patch classifier can achieve the better resultsby applying after the shape fitting In addition by utilizingthe failed data it contributes to the way of escaping from thebigger problem

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] T F Cootes C J Taylor D H Cooper and J Graham ldquoActiveshapemodelsmdashtheir training and applicationrdquoComputer Visionand Image Understanding vol 61 no 1 pp 38ndash59 1995

[2] T F Cooles G J Edwards and C J Taylor ldquoActive appearancemodelsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 23 no 6 pp 681ndash685 2001

[3] D Cristinacce and T Cootes ldquoAutomatic feature localisationwith constrained local modelsrdquo Pattern Recognition vol 41 no10 pp 3054ndash3067 2008

[4] X P Burgos-Artizzu P Perona and P Dollar ldquoRobust facelandmark estimation under occlusionrdquo inProceedings of the 14thIEEE International Conference on Computer Vision (ICCV rsquo13)pp 1513ndash1520 IEEE December 2013

[5] H Drira B Ben Amor A Srivastava M Daoudi and R Slamaldquo3D Face recognition under expressions occlusions and posevariationsrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 35 no 9 pp 2270ndash2283 2013

[6] JM Saragih S Lucey and J F Cohn ldquoDeformablemodel fittingby regularized landmark mean-shiftrdquo International Journal ofComputer Vision vol 91 no 2 pp 200ndash215 2011

[7] A Asthana T K Marks M J Jones K H Tieu and M VRohith ldquoFully automatic pose-invariant face recognition via

3D pose normalizationrdquo in Proceedings of the IEEE Interna-tional Conference on Computer Vision (ICCV rsquo11) pp 937ndash944November 2011

[8] C Xiong L Huang and C Liu ldquoGaze estimation based on3D face structure and pupil centersrdquo in Proceedings of the 22ndInternational Conference on Pattern Recognition (ICPR rsquo14) pp1156ndash1161 IEEE Stockholm Sweden August 2014

[9] C Cao Y Weng S Lin and K Zhou ldquo3D shape regression forreal-time facial animationrdquoACMTransactions on Graphics vol32 no 4 article 41 2013

[10] B vanGinneken A F Frangi J J Staal BM Ter Haar Romenyand M A Viergever ldquoActive shape model segmentation withoptimal featuresrdquo IEEE Transactions on Medical Imaging vol21 no 8 pp 924ndash933 2002

[11] T F Cootes and C J Taylor ldquoActive shape modelsmdashlsquosmartsnakesrsquordquo in BMVC92 Proceedings of the British Machine VisionConference organised by the British Machine Vision Association22ndash24 September 1992 Leeds pp 266ndash275 Springer LondonUK 1992

[12] C A R Behaine and J Scharcanski ldquoEnhancing the perfor-mance of active shape models in face recognition applicationsrdquoIEEETransactions on Instrumentation andMeasurement vol 61no 8 pp 2330ndash2333 2012

[13] F Dornaika and J Ahlberg ldquoFast and reliable active appearancemodel search for 3-D face trackingrdquo IEEE Transactions onSystems Man and Cybernetics Part B Cybernetics vol 34 no4 pp 1838ndash1853 2004

[14] J Matthews and S Baker ldquoActive appearance models revisitedrdquoInternational Journal of Computer Vision vol 60 no 2 pp 135ndash164 2004

[15] A U Batur and M H Hayes ldquoAdaptive active appearancemodelsrdquo IEEE Transactions on Image Processing vol 14 no 11pp 1707ndash1721 2005

[16] D Cristinacce and T Cootes ldquoFeature detection and trackingwith constrained local modelsrdquo in Proceedings of the 17thBritish Machine Vision Conference (BMVC rsquo06) pp 929ndash938September 2006

[17] X Cao Y Wei F Wen and J Sun ldquoFace alignment by explicitshape regressionrdquo International Journal of Computer Vision vol107 no 2 pp 177ndash190 2014

[18] M D Breitenstein D Kuettel T Weise L Van Gool andH Pfister ldquoReal-time face pose estimation from single rangeimagesrdquo inProceedings of the 26th IEEEConference onComputerVision and Pattern Recognition (CVPR rsquo08) pp 1ndash8 June 2008

[19] Y Taigman M Yang M Ranzato and LWolf ldquoDeepface clos-ing the gap to human-level performance in face verificationrdquoin Proceedings of the IEEE Conference on Computer Vision andPattern Recognition (CVPR rsquo14) pp 1701ndash1708 IEEE ColumbusOhio USA June 2014

[20] Z Kalal K Mikolajczyk and J Matas ldquoTracking-learning-detectionrdquo IEEE Transactions on Pattern Analysis and MachineIntelligence vol 34 no 7 pp 1409ndash1422 2012

[21] S Hu S Sun X Ma Y Qin and B Lei ldquoA fast on-lineboosting tracking algorithm based on cascade filter of multi-featuresrdquo in Proceedings of the 3rd International Conference onMultimedia Technology (ICM rsquo13) Atlantis Press GuangzhouChina November 2013

[22] I SMsiza F VNelwamondo andTMarwala ldquoOn the segmen-tation of fingerprint images how to select the parameters of ablock-wise variancemethodrdquo in Proceedings of the InternationalConference on Image Processing Computer Vision and PatternRecognition (IPCV rsquo12) pp 1107ndash1113 July 2012

10 Journal of Sensors


Figure 8: Example of the patch classifier result against a complicated background. The red points represent fitting failures and the green points represent successes.

gray-value variance and error. As shown in Figure 7(d), the gray-value variances of the 17th landmark and of the other landmarks at both ends of the facial outline do not appear stable. Because these positions contain hair as well as skin, the variance is necessarily high. However, when the geometric relationship between the landmarks of the shape is taken into account, as mentioned above, the patch classifier can still return true for such a landmark based on its neighbors.

Finally, the result of applying the patch classifier to real images is shown in Figure 8. The green dots represent landmarks for which the fitting succeeded, and the red dots represent landmarks that the patch classifier marks as fitting failures. In the upper right of each image is the frontal image whose gray-value variance is measured after the patch is configured. The figure shows the classifier detecting the cases in which the shape cannot keep up with the face or a landmark stays on the background because of rotation. It can be seen that the frontal face image in the upper right includes a nonface contour area. The method performed better against complex backgrounds, where the gray-value variances are higher than against simple backgrounds. Even when the face rotates about the yaw axis, our approach classifies the landmarks properly.
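The per-landmark decision described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch half-size is an assumed value, the function names are our own, and the variance threshold of 200 follows the figure reported in the conclusion.

```python
import numpy as np

def patch_variance(gray_image, landmark, half_size=8):
    """Gray-value variance of a square patch centered on a landmark.

    gray_image: 2-D array (frontalized grayscale face image).
    landmark:   (x, y) pixel coordinates of the landmark.
    """
    x, y = int(round(landmark[0])), int(round(landmark[1]))
    h, w = gray_image.shape
    # Clamp the patch to the image bounds.
    x0, x1 = max(0, x - half_size), min(w, x + half_size + 1)
    y0, y1 = max(0, y - half_size), min(h, y + half_size + 1)
    patch = gray_image[y0:y1, x0:x1].astype(np.float64)
    return patch.var()

def classify_landmark(gray_image, landmark, threshold=200.0):
    """True = smooth skin-like texture (fitting accepted);
    False = likely background or contour gradient (misaligned)."""
    return patch_variance(gray_image, landmark) <= threshold

# A flat, skin-like patch yields a low variance; a patch straddling a
# strong contour yields a high one.
skin = np.full((64, 64), 128, dtype=np.uint8)
edge = np.zeros((64, 64), dtype=np.uint8)
edge[:, 32:] = 255  # hard edge through the patch center
print(classify_landmark(skin, (32, 32)))  # True
print(classify_landmark(edge, (32, 32)))  # False
```

A misaligned outline landmark captures background pixels and the facial contour gradient inside its patch, which is exactly what drives the variance above the threshold.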

5. Conclusion

In this study, the patch classifier has been studied, which determines the fitting result of the landmarks that form the facial contour of the face shape model. The patch classifier estimates the texture of skin through a gray-value variance measurement. To make the performance invariant to pose, the image is transformed to the frontal shape model; in this transformation, pixels hidden by the pose could be filled using bilinear interpolation without modifying the gray-value variance measurement. Furthermore, to reduce interference from illumination and the like, a patch is configured around each landmark, and the gray-value variance is calculated only within this patch. Applying this approach to the Multi-PIE database, approximately 87.52% of the landmark gray-value variances were confirmed to be 200 or less. Texture and shape features are already dealt with by existing fitting methods, and this study showed the potential for reusing those features. In addition, by applying this technique to the outline landmarks of the face shape model, we proposed a method to classify whether a fitting is successful or not. As a result, it classified the fitting result correctly with a probability of 83.32%. In this simple and effective way, we could verify, through the gray-value variance, whether a landmark is correctly placed on the face outline.
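The bilinear interpolation mentioned above fills areas with insufficient pixel information when the face is warped onto the frontal shape, since the warp produces sub-pixel source coordinates. A minimal sketch of sampling one gray value at a fractional position (the function name is our own, not from the paper):

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Gray value at the fractional position (x, y), computed as the
    distance-weighted blend of the four surrounding pixels."""
    h, w = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate along x on the top and bottom rows, then along y.
    top = (1.0 - fx) * image[y0, x0] + fx * image[y0, x1]
    bottom = (1.0 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1.0 - fy) * top + fy * bottom

img = np.array([[0.0, 100.0],
                [100.0, 200.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 100.0, the mean of the four corners
```

Because the blend is smooth, filling hidden pixels this way does not introduce artificial edges that would inflate the gray-value variance of a correctly fitted patch.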

6. Limitations and Future Work

We used the outline landmarks, by which the success or failure of fitting can be determined in a relatively simple way. As mentioned above, since the landmarks of the shape model are bound by geometric relations, we can estimate the fitting state of the shape being tracked. Estimating the fitting state can help verify the reliability of the shape model in various settings and decide how its output is used.

First, it can serve as a measure for improving the tracking of the shape model itself. Systems built around a landmark-based shape model run in two steps: after the initial search for the face area and the initial fitting, only the shape fitting process runs. If a landmark fails to fit a facial feature during tracking, the shape may repeat the incorrect fitting. In this case, by analyzing the current status with our method and returning the system to its initial phase, recovery is possible.
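The two-step loop with recovery described above might be sketched as follows. All function names here are hypothetical placeholders for a concrete detector, fitter, and the patch classifier; the failure budget and the 17-landmark outline are assumptions for illustration.

```python
def track_with_recovery(frames, detect_face, fit_shape, classify_outline,
                        max_failures=5):
    """Hypothetical tracking loop that returns to the initial detection
    phase when the patch classifier rejects too many outline landmarks."""
    shape = None
    for frame in frames:
        if shape is None:
            # Initial phase: face-area search, then initial shape fitting.
            shape = fit_shape(frame, detect_face(frame))
        else:
            # Tracking phase: refit starting from the previous shape.
            shape = fit_shape(frame, shape)
        # classify_outline returns one True/False verdict per outline landmark.
        failed = sum(1 for ok in classify_outline(frame, shape) if not ok)
        if failed > max_failures:
            shape = None  # discard the drifting shape; redetect on next frame
        else:
            yield shape
```

The key point is that a rejected shape is never propagated: instead of letting the fitter repeat the incorrect fitting, the loop falls back to detection, which is the recovery path this section argues for.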

It can also be used to correct the shape fitting result. The fitting technique tracks each feature and maintains a natural form through shape alignment. In this process, the outer landmarks can be evaluated directly to find better fitting points. After verification, furthermore, the shape can be used in the various systems that take advantage of the shape model: gaze tracking, face recognition, motion recognition, and facial expression recognition are all applicable. The role of the landmarks in these systems is critical; if landmarks are incorrectly fitted, more serious problems can occur downstream.

In this way, the patch classifier can achieve better results when applied after shape fitting. In addition, by exploiting the failed data, it helps the system escape from larger problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding, vol. 61, no. 1, pp. 38–59, 1995.

[2] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681–685, 2001.

[3] D. Cristinacce and T. Cootes, "Automatic feature localisation with constrained local models," Pattern Recognition, vol. 41, no. 10, pp. 3054–3067, 2008.

[4] X. P. Burgos-Artizzu, P. Perona, and P. Dollar, "Robust face landmark estimation under occlusion," in Proceedings of the 14th IEEE International Conference on Computer Vision (ICCV '13), pp. 1513–1520, IEEE, December 2013.

[5] H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.

[6] J. M. Saragih, S. Lucey, and J. F. Cohn, "Deformable model fitting by regularized landmark mean-shift," International Journal of Computer Vision, vol. 91, no. 2, pp. 200–215, 2011.

[7] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. V. Rohith, "Fully automatic pose-invariant face recognition via 3D pose normalization," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 937–944, November 2011.

[8] C. Xiong, L. Huang, and C. Liu, "Gaze estimation based on 3D face structure and pupil centers," in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1156–1161, IEEE, Stockholm, Sweden, August 2014.

[9] C. Cao, Y. Weng, S. Lin, and K. Zhou, "3D shape regression for real-time facial animation," ACM Transactions on Graphics, vol. 32, no. 4, article 41, 2013.

[10] B. van Ginneken, A. F. Frangi, J. J. Staal, B. M. ter Haar Romeny, and M. A. Viergever, "Active shape model segmentation with optimal features," IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 924–933, 2002.

[11] T. F. Cootes and C. J. Taylor, "Active shape models - 'smart snakes'," in BMVC92: Proceedings of the British Machine Vision Conference, Leeds, 22–24 September 1992, pp. 266–275, Springer, London, UK, 1992.

[12] C. A. R. Behaine and J. Scharcanski, "Enhancing the performance of active shape models in face recognition applications," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 8, pp. 2330–2333, 2012.

[13] F. Dornaika and J. Ahlberg, "Fast and reliable active appearance model search for 3-D face tracking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 4, pp. 1838–1853, 2004.

[14] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135–164, 2004.

[15] A. U. Batur and M. H. Hayes, "Adaptive active appearance models," IEEE Transactions on Image Processing, vol. 14, no. 11, pp. 1707–1721, 2005.

[16] D. Cristinacce and T. Cootes, "Feature detection and tracking with constrained local models," in Proceedings of the 17th British Machine Vision Conference (BMVC '06), pp. 929–938, September 2006.

[17] X. Cao, Y. Wei, F. Wen, and J. Sun, "Face alignment by explicit shape regression," International Journal of Computer Vision, vol. 107, no. 2, pp. 177–190, 2014.

[18] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, "Real-time face pose estimation from single range images," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, June 2008.

[19] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, Ohio, USA, June 2014.

[20] Z. Kalal, K. Mikolajczyk, and J. Matas, "Tracking-learning-detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 7, pp. 1409–1422, 2012.

[21] S. Hu, S. Sun, X. Ma, Y. Qin, and B. Lei, "A fast on-line boosting tracking algorithm based on cascade filter of multi-features," in Proceedings of the 3rd International Conference on Multimedia Technology (ICM '13), Atlantis Press, Guangzhou, China, November 2013.

[22] I. S. Msiza, F. V. Nelwamondo, and T. Marwala, "On the segmentation of fingerprint images: how to select the parameters of a block-wise variance method," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV '12), pp. 1107–1113, July 2012.

[23] C. A. Glasbey and K. V. Mardia, "A review of image-warping methods," Journal of Applied Statistics, vol. 25, no. 2, pp. 155–171, 1998.

[24] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG '08), September 2008.

[25] T. F. Cootes, C. J. Twining, V. Petrovic, R. Schestowitz, and C. J. Taylor, "Groupwise construction of appearance models using piece-wise affine deformations," in Proceedings of the 16th British Machine Vision Conference (BMVC '05), vol. 5, pp. 879–888, 2005.
