

Surface UP-SR for an Improved Face Recognition Using Low Resolution Depth Cameras

Djamila Aouada, Kassem Al Ismaeil, Kedija Kedir Idris, Björn Ottersten
Interdisciplinary Centre for Security, Reliability and Trust

University of Luxembourg
{djamila.aouada, kassem.alismaeil, bjorn.ottersten}@uni.lu, [email protected]

Abstract

We address the limitation of low resolution depth cameras in the context of face recognition. Considering a face as a surface in 3-D, we reformulate the recently proposed Upsampling for Precise Super-Resolution algorithm as a new approach on three-dimensional points. This reformulation allows an efficient implementation, and leads to a largely enhanced 3-D face reconstruction. Moreover, combined with a dedicated face detection and representation pipeline, the proposed method provides an improved face recognition system using low resolution depth cameras. We show experimentally that this system increases the face recognition rate as compared to directly using the low resolution raw data.¹

1. Introduction

In the past ten to fifteen years, research on automatic face recognition has actively moved from 2-D to 3-D data, mostly acquired using high resolution (HR) laser scanners. Multiple approaches have been developed for this kind of data. Until recently, the race was about designing sensors to capture data with higher levels of detail and higher resolutions [1]. Today, much more affordable and less bulky depth cameras with 3-D capabilities have become accessible. They are, however, of limited resolution, and present a high level of noise. Some examples are the 3D MLI by IEE S.A. with a resolution of (56 × 64) [2], and the PMD CamBoard nano with a resolution of (120 × 165) [3]. Because of their low resolution (LR) and the noisy nature of the acquired data, previously defined 3-D face recognition algorithms are no longer ensured to be as effective [9].

The multi-frame super-resolution (SR) framework is an appropriate solution, where it becomes possible to recover a higher resolution frame by fusing multiple LR ones. It has

¹ This work was supported by the National Research Fund, Luxembourg, under the CORE project C11/BM/1204105/FAVE/Ottersten.

been successfully used in the case of 2-D face images [5, 6]. Similar efforts have been undertaken for 3-D facial data. In [11], a learning-based method has been proposed to directly find the mapping between an LR image and its corresponding HR image without using multiple frames. In [12], Pan et al. proposed to use facial features in a Maximum A Posteriori SR framework.

Depth facial data may also benefit from the SR framework. Recently, Berretti et al. proposed to use SR on facial depth images once back-projected in 3-D, and defined the superfaces approach [9]. The SR algorithm they deployed is similar in principle to the initial blurred estimate provided in the enhanced Shift & Add algorithm proposed by Al Ismaeil et al. in [7]. Later on, this work was extended to the dynamic case, where the considered multiple realizations were ordered frames constituting a video sequence [8]. This approach is referred to as Upsampling for Precise Super-Resolution (UP-SR). Its key component is a prior upsampling of the observed data, which is proven to enhance the registration of frames over time. In addition, it uses a bilateral total variation framework as a smoothness condition. In [16], a similar concept of temporal fusion was considered for 3-D facial data enhancement. However, the increase in resolution was induced from temporal data accumulation without a real SR formulation or upsampling. Moreover, smoothness was ensured by bilateral filtering as a post-processing operation, and was not included in the optimization objective function.

The contribution of this paper is twofold: first, we reformulate UP-SR on 3-D point clouds constituting the facial surface, similarly to the work in [9]. However, by performing the deblurring phase of UP-SR, 3-D face reconstruction results are maintained, if not enhanced. Second, we show experimentally that using these results for 3-D face recognition clearly improves the recognition rate as compared to using raw LR acquisitions.
This second contribution requires a fully dedicated pipeline for automatic face acquisition from depth cameras. Moreover, level curves equidistant from the nose tip and radially sampled are considered


as facial features for matching and comparison.

The remainder of the paper is organized as follows: Section 2 reviews the UP-SR algorithm. Its adaptation to facial depth data on a surface is given in Section 3. Our proposed face recognition pipeline is detailed in Section 4, which includes a description of the considered level curves. The experimental setup and results are summarized in Section 5. Finally, we conclude with Section 6.

2. Background

In what follows, we review the UP-SR algorithm. We represent all images in lexicographic vector form. Let us consider an HR depth image x of size n, and N observed LR images y_k, k = 0, . . . , (N − 1), of size m, such that n = r · m, where r is the SR factor. Every frame y_k is an LR, noisy, and deformed realization of x, modeled as follows:

y_k = D H W_k x + n_k,    k = 0, . . . , (N − 1),    (1)

where W_k is an (n × n) invertible matrix corresponding to the geometric motion between x and y_k. We assume that y_0 is the reference frame, for which W_0 = I_n. The point spread function of the depth camera is modeled by the (n × n) space- and time-invariant blurring matrix H. The matrix D of dimension (m × n) represents the downsampling operator, and the vector n_k is an additive noise at time k which follows a white multivariate Laplace distribution of mean zero and covariance Σ = σ²I_m, with I_m being the identity matrix of size (m × m).

One of the key components of UP-SR is to upsample the observed LR images prior to any operation. We define the resulting r-times upsampled image as:

y_k↑ = U · y_k,    (2)

where U is an (n × m) upsampling matrix. This makes it possible to directly solve the problem of undefined pixels in the SR initialization phase. It also leads to a more accurate and robust estimation of the motion W_k, as it is now computed between y_k↑ and y_0↑. The subsequent registration of frames to the reference is consequently enhanced:

ỹ_k↑ = W_k^{-1} y_k↑,    (3)

where ỹ_k↑ denotes the registered upsampled frame.
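As a toy illustration, the observation model (1) and the upsample-then-register steps (2)-(3) can be simulated on a 1-D signal. The blur, decimation, and motion below are illustrative choices, not the paper's calibrated operators, and the motion is assumed known here, whereas the paper estimates W_k by optical flow:

```python
import numpy as np

rng = np.random.default_rng(0)
r, m = 2, 32
n = r * m
x = np.sin(np.linspace(0, 4 * np.pi, n))     # unknown HR signal

def H(v):                                     # circular 3-tap blur (block circulant)
    return (np.roll(v, -1) + v + np.roll(v, 1)) / 3.0

def D(v):                                     # decimation by r
    return v[::r]

def W(v, shift):                              # circular shift as a toy motion W_k
    return np.roll(v, shift)

# One LR observation y_k = D H W_k x + n_k with Laplace noise, as in eq. (1)
y_k = D(H(W(x, 4))) + rng.laplace(scale=0.01, size=m)

# Upsample as in (2), then undo the motion to register to the reference grid as in (3)
U = lambda v: np.repeat(v, r)                 # nearest-neighbour choice for U
y_k_up = U(y_k)
y_k_reg = W(y_k_up, -4)                       # apply W_k^{-1} (motion assumed known)

assert y_k_up.shape == (n,) and y_k_reg.shape == (n,)
```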

Without loss of generality, both H and W_k are assumed to be block circulant matrices. Choosing the upsampling matrix U to be the transpose of D, the product UD = A defines a new block circulant blurring matrix B = AH. We have, therefore, BW_k = W_kB. As a result, the estimation of x may be decomposed into two steps: estimation of a blurred HR image z = Bx, followed by a deblurring step. The data model in (1) becomes

ỹ_k↑ = z + ν_k,    k = 0, . . . , (N − 1),    (4)

Algorithm 1: UP-SR

1. Choose the reference frame y_0.
for k = 1, · · · , N − 1 do
    2. Compute y_k↑ using (2).
    3. Find W_k by optical flow estimation.
    4. Compute ỹ_k↑ using (3).
end for
5. Find ẑ by applying the median estimator (5).
6. Deduce x̂ by deblurring using (6).

Table 1. Classical Upsampling for Precise Super-Resolution

where ν_k = W_k^{-1} U n_k is an additive noise vector of length n. Using the L1-norm ‖·‖_1, the corresponding Maximum Likelihood estimate of z is

ẑ = arg min_z Σ_{k=0}^{N−1} ‖z − ỹ_k↑‖_1.    (5)

The result in (5) is, by definition, the pixel-wise temporal median estimator ẑ = med_k{ỹ_k↑}.

To recover x from z, an iterative optimization is performed as a deblurring step. Considering a regularization term Γ(x), chosen to be the bilateral total variation (Bilateral TV) given in [13], we find

x̂ = arg min_x ( ‖Bx − z‖_1 + λΓ(x) ),    (6)

where λ is the regularization parameter. The UP-SR algorithm is given in Table 1, and summarized in Figure 1.
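Putting the steps together, a minimal sketch of the UP-SR estimate, median fusion of the registered upsampled frames followed by deblurring, might look as follows. The `register` and `deblur` callables are placeholders standing in for the paper's optical-flow registration (3) and Bilateral-TV deblurring (6):

```python
import numpy as np

def upsample(y, r):
    """Nearest-neighbour upsampling, one simple choice for the matrix U in (2)."""
    return np.repeat(y, r)

def up_sr(frames, r, register=lambda y_up, ref: y_up, deblur=lambda z: z):
    """Sketch of Algorithm 1 (UP-SR). `register` stands in for optical-flow
    registration as in (3); `deblur` for the Bilateral-TV step as in (6)."""
    ref = upsample(frames[0], r)
    registered = [ref] + [register(upsample(y, r), ref) for y in frames[1:]]
    z_hat = np.median(np.stack(registered), axis=0)   # median fusion, eq. (5)
    return deblur(z_hat)                              # deblurring, eq. (6)

rng = np.random.default_rng(1)
frames = [np.ones(32) + rng.laplace(scale=0.1, size=32) for _ in range(5)]
x_hat = up_sr(frames, r=2)
assert x_hat.shape == (64,)
# The temporal median suppresses the heavy-tailed Laplace noise
assert abs(x_hat.mean() - 1.0) < 0.1
```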

3. Surface Upsampling for Precise Super-Resolution

The different steps of UP-SR as described in Section 2 may be directly applied to LR depth images y_k of faces, such as those illustrated in Figure 2 (a). The resulting reconstructed face x̂ is shown in Figure 2 (c). While it is of higher resolution, it presents artifacts that we argue are caused by applying UP-SR on gridded depth data. To remedy these artifacts, we propose in what follows to back-project the frames y_k, k = 0, · · · , N − 1, to R³ using the intrinsic parameters of the camera used for the acquisition. We end up with N corresponding point clouds Y_k = {p_i^k = (x_i^k, y_i^k, z_i^k) ∈ R³, i = 1, · · · , m}, as shown in Figure 2 (b). The objective is now to reconstruct an HR point cloud X = {q_i = (x_i, y_i, z_i) ∈ R³, i = 1, · · · , n} belonging to the surface S of the original face, i.e., X ⊂ S. We adapt the algorithm in Table 1 to point clouds, and define a modified version of the UP-SR algorithm that we refer to as SurfUP-SR. The two main


Figure 1. UP-SR steps on depth data and on a surface in 3-D.

Figure 2. Face reconstruction with UP-SR using (a) depth images,(b) point clouds. The corresponding results are shown in (c) and(d), respectively.

phases are maintained: 1) estimation of Z, a blurred version of X; 2) deblurring by optimization as in (6). The steps of upsampling and registration need to be adapted, as described in the following sections. An illustration of the differences between UP-SR and SurfUP-SR is given in Figure 1.
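The back-projection to R³ mentioned above uses the pinhole camera model. A short sketch is given below; the intrinsics (focal lengths fx, fy and principal point cx, cy) are illustrative values, not the actual calibration of the cameras used in the paper:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth image to an (H*W, 3) point cloud
    using pinhole intrinsics: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((120, 165), 0.6)   # flat plane at 0.6 m on a PMD-sized grid
cloud = backproject(depth, fx=180.0, fy=180.0, cx=82.0, cy=60.0)
assert cloud.shape == (120 * 165, 3)
```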

3.1. Surface Upsampling

Assuming that the point cloud Y_k is a sampling of a surface S_k, the upsampling of Y_k may be reformulated as a problem of interpolating the surface S_k from scattered points. The surface S_k may be defined implicitly by a function f as: f(x, y, z) = 0, ∀ p = (x, y, z) ∈ S_k, or equivalently by using the interpolant P_f as:

P_f(x, y) = z,    ∀ p = (x, y, z) ∈ S_k.    (7)

The m points in Y_k verify (7); hence, they form a system of m equations from which P_f may be defined. A solution using kernel regression has been proposed in [14]. An efficient GPU implementation has been given in [15]. We used the Matlab scatteredInterpolant function in our implementation. Once P_f is found, it is used to define (r − 1) · m additional points belonging to S_k at chosen (x, y)-positions. As a result, a denser point cloud Y_k↑ containing a total of n points is found, such that

Y_k↑ = Y_k ∪ {p_i^k = (x_i^k, y_i^k, z_i^k) ∈ R³, i = m + 1, · · · , n},    (8)

with (x_i^k, y_i^k) ∈ [−1, 1] × [−1, 1].
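In Python, the role of Matlab's scatteredInterpolant can be played by scipy.interpolate.griddata. The toy surface and sampling below are illustrative only; the four corner points are added so that every query position in [−1, 1]² lies inside the convex hull, where linear interpolation is defined:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
m = 200
# Scattered (x, y) positions of the cloud Y_k, plus the four corners of [-1, 1]^2
xy = np.vstack([rng.uniform(-1, 1, size=(m - 4, 2)),
                [[-1, -1], [-1, 1], [1, -1], [1, 1]]])
z = np.exp(-(xy[:, 0]**2 + xy[:, 1]**2))             # toy depth surface P_f(x, y)

# (r - 1) * m new (x, y) positions for r = 2, as in eq. (8)
new_xy = rng.uniform(-1, 1, size=(m, 2))
new_z = griddata(xy, z, new_xy, method='linear')     # evaluate P_f at new points
assert not np.isnan(new_z).any()                     # all queries inside the hull

# Denser cloud Y_k↑ = Y_k ∪ {new points}
dense = np.vstack([np.column_stack([xy, z]),
                   np.column_stack([new_xy, new_z])])
assert dense.shape == (2 * m, 3)
```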

3.2. Surface Registration

The motion estimation and registration steps in UP-SR are replaced by directly using classical 3-D point cloud registration techniques. We use the iterative closest point (ICP) algorithm to rigidly register each point cloud Y_k↑ to the reference Y_0↑. This is done by estimating the optimal transformation parameters, namely, the 3-D rotation R_k, translation t_k, and global scaling factor α_k that minimize the distance Err(·) between the transformed and the reference point clouds, such that

[R_k, t_k, α_k] = arg min_{R,t,α} Err(αR Y_k↑ + t, Y_0↑).    (9)

The registered point cloud Ỹ_k↑ is then computed as:

Ỹ_k↑ = α_k R_k Y_k↑ + t_k.    (10)
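Once correspondences are fixed, each ICP iteration solves exactly this scaled-rigid problem, for which a closed form (Umeyama's solution) exists. The sketch below recovers (R, t, α) from known correspondences; it is one alignment step, not the paper's full iterative ICP with scaling:

```python
import numpy as np

def similarity_align(P, Q):
    """Return (R, t, alpha) minimizing ||alpha * R @ P + t - Q||^2,
    with corresponding 3-D points as columns of P and Q (Umeyama's closed form)."""
    mu_p, mu_q = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Qc @ Pc.T)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    alpha = np.trace(np.diag(S) @ D) / (Pc**2).sum()
    t = mu_q - alpha * R @ mu_p
    return R, t, alpha

# Sanity check: recover a known similarity transform
rng = np.random.default_rng(2)
P = rng.random((3, 50))
R_true, _ = np.linalg.qr(rng.random((3, 3)))
if np.linalg.det(R_true) < 0:                 # ensure a proper rotation
    R_true = -R_true
Q = 1.5 * R_true @ P + np.array([[0.1], [0.2], [0.3]])
R, t, alpha = similarity_align(P, Q)
assert np.allclose(alpha * R @ P + t, Q, atol=1e-8)
```

In ICP, this step alternates with re-estimating nearest-neighbour correspondences until convergence.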

With these modifications, the new SurfUP-SR algorithm is given in Table 2. Its visual impact is shown in the example of Figure 2 (d).

4. Proposed Face Recognition Pipeline

Our proposed pipeline is composed of three main stages: preprocessing of raw data, feature extraction, and matching.

4.1. Preprocessing

The preprocessing step is an essential step in the design of a face recognition system, as it affects the performance of


Figure 3. Preprocessing step of the facial acquisition pipeline using a depth camera.

Algorithm 2: SurfUP-SR

1. Choose the reference frame Y_0.
for k = 1, · · · , N − 1 do
    2. Compute Y_k↑ using (8).
    3. Estimate R_k, t_k, and α_k using ICP as in (9).
    4. Compute Ỹ_k↑ using (10).
end for
5. Find Ẑ by applying the median estimator (5).
6. Deduce X̂ by deblurring using (6).

Table 2. Surface Upsampling for Precise Super-Resolution

the system significantly. We implement fast and efficient techniques to detect the face region and the nose tip for an effective segmentation and alignment. We apply a face detection algorithm on the amplitude or 2-D image only; then, we map the face region to the corresponding depth image to obtain the corresponding 3-D facial region. In this work, the Viola-Jones [19] face detection algorithm is used for its computational efficiency and high detection rate. Once we detect the depth face region, we detect the nose tip, represented by the point with the smallest depth value. The nose tip is used as a basic feature for our segmentation and alignment. Using a spherical cropping centered at the nose tip, we discard the ears, hair, and part of the neck. Finally, ICP registration is used for alignment.
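The nose-tip detection and spherical cropping can be sketched as below; the 90 mm cropping radius is an illustrative value, not taken from the paper:

```python
import numpy as np

def crop_face(points, radius=0.09):
    """points: (N, 3) array in metres, camera looking along +z.
    Nose tip = point of smallest depth; keep points within `radius` of it."""
    nose_tip = points[np.argmin(points[:, 2])]          # closest point to camera
    keep = np.linalg.norm(points - nose_tip, axis=1) <= radius
    return points[keep], nose_tip

pts = np.random.default_rng(3).uniform(-0.2, 0.2, size=(1000, 3)) + [0, 0, 0.6]
face, nose = crop_face(pts)
assert face.shape[1] == 3
assert np.all(np.linalg.norm(face - nose, axis=1) <= 0.09)
```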

4.2. Feature extraction

We use spherical curves and their radial discretization as features to represent each face. A spherical curve is obtained by intersecting the facial surface with a sphere. In order to obtain smoother and continuous curves, we apply the

Figure 4. Feature extraction step using: (a) Observed LR 3-D face with texture from amplitude or 2-D images. (b) Extracted level curves.

interpolation technique proposed in [18]. Spherical curves are discretized radially by slicing the spherical intersection curves with a plane that is parallel to the face normal and that intersects the spherical curves radially at uniform angles. Each face is represented by an indexed collection of M × L points in 3-D, where M denotes the number of curves per sample face and L is the number of points in each curve. We end up with a feature vector of size M × L × 3 for each face. An example of the extracted feature curves is shown in Figure 4.
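A crude stand-in for this representation picks, for each of M sphere radii and L uniform angular sectors around the nose tip, the point closest to the sphere; the exact intersection and the interpolation of [18] are replaced here by this nearest-point approximation, and the radii are illustrative:

```python
import numpy as np

def level_curve_features(points, nose_tip, M=4, L=16, radii=None):
    """Approximate M x L x 3 level-curve features around the nose tip."""
    p = points - nose_tip
    r = np.linalg.norm(p, axis=1)
    theta = np.arctan2(p[:, 1], p[:, 0])              # angle in the x-y plane
    radii = radii if radii is not None else np.linspace(0.02, 0.08, M)
    feats = np.zeros((M, L, 3))
    for i, rad in enumerate(radii):
        for j, ang in enumerate(np.linspace(-np.pi, np.pi, L, endpoint=False)):
            # points whose angle falls in this sector (wrapped difference)
            diff = np.angle(np.exp(1j * (theta - ang)))
            cand = np.where(np.abs(diff) < np.pi / L)[0]
            if cand.size:
                # within the sector, take the point closest to the sphere of radius `rad`
                feats[i, j] = p[cand[np.argmin(np.abs(r[cand] - rad))]]
    return feats.reshape(-1)                          # feature vector of length M*L*3

pts = np.random.default_rng(4).normal(scale=0.05, size=(2000, 3))
f = level_curve_features(pts, nose_tip=np.zeros(3))
assert f.shape == (4 * 16 * 3,)
```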

4.3. Matching

The matching step aims to associate each probe 3-D face with the closest 3-D face in the database by comparing their extracted features. The comparison is carried out with an appropriate distance measure on the space of the extracted feature curves. We choose the cosine distance in our experiments, as we found it to be the best performing one. This is confirmed by the survey of Smeets et al. [20].
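Nearest-neighbour matching under the cosine distance can be sketched as follows; the gallery identifiers and toy vectors are for illustration only:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cos(angle between a and b); 0 for parallel feature vectors."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match(probe, gallery):
    """gallery: dict id -> feature vector; return the id with the smallest
    cosine distance to the probe feature vector."""
    return min(gallery, key=lambda gid: cosine_distance(probe, gallery[gid]))

gallery = {"s1": np.array([1.0, 0.0, 0.0]), "s2": np.array([0.0, 1.0, 0.0])}
probe = np.array([0.9, 0.1, 0.0])
assert match(probe, gallery) == "s1"
```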


Figure 5. 3-D face reconstruction results. (a) 3-D laser scan ground truth. (b) One of the LR 3-D faces. (c) Results of the superfaces algorithm. (d) Results of the proposed SurfUP-SR algorithm. (e) 3-D error map corresponding to the 3-D LR face. (f) 3-D error map corresponding to the superfaces results. (g) 3-D error map corresponding to the proposed SurfUP-SR.

5. Experiments

We evaluate the performance of the proposed system for both 3-D face reconstruction and recognition. First, to evaluate the quality of the reconstructed 3-D faces, we use the publicly available superfaces dataset [17]. It was acquired using the well-known Kinect camera [4]. A sequence of 2-D and depth images is provided for 20 different subjects. Moreover, an HR scanned version of each subject is available as ground truth. The dataset contains only one realization per subject, which makes it unsuitable for recognition purposes. We therefore built our own real dataset using 10 subjects, with two different realizations for each subject. This dataset was acquired using the PMD CamBoard nano time-of-flight camera, with a resolution of (120 × 165) pixels [3].

5.1. Reconstruction

In order to evaluate the quality of the reconstructed faces, we use the above-mentioned superfaces dataset [17]. The faces in the depth frames are of low resolution due to the distance of the subject from the camera. To improve their quality, we conduct the following test. We apply SurfUP-SR, and show the results for two subjects (01 and 19) using 5 LR frames. An LR frame for each subject is shown in Figure 5 (b), first and second rows, respectively. The obtained results show that the proposed algorithm provides visually improved HR 3-D faces, as seen in Figure 5 (d), as compared to the LR captured data in Figure 5 (b). Moreover, our algorithm provides better visual results than the recently proposed superfaces algorithm [9], shown in Figure 5 (c). This is due to the fact that SurfUP-SR includes an additional deblurring step. Our results are of sufficient quality for many applications, such as 3-D face recognition. In order to provide a quantitative evaluation,

Figure 6. Extracted level curves from 3-D faces for: (a) Ground truth. (b) LR. (c) superfaces. (d) SurfUP-SR.

we measure the reconstruction error of SurfUP-SR and superfaces against the laser-scanned ground truth. In Figure 5 (f) and (g), we may see the color-coded reconstruction errors of the superfaces method [9] and of SurfUP-SR, respectively. As expected, the obtained results show that SurfUP-SR is at least as good as superfaces, and sometimes better. Moreover, looking at the error scale bar in Figure 5, we note that in most areas the errors are below 0.5 cm.

5.2. Recognition

In order to test the impact of SurfUP-SR on a face recognition algorithm, we evaluate the performance of the pipeline presented in Section 4 on the raw LR faces in our database. We then run the same pipeline on the super-resolved faces of our database. We may see in Figure 6 the enhancement brought by SurfUP-SR to the quality of the extracted feature curves. Indeed, their extraction from LR faces leads to noisy curves. For the same subject, these curves become


Figure 7. Confusion matrices. (a) Using the LR 3-D observed faces. (b) Using the 3-D faces super-resolved by the proposed SurfUP-SR.

smoother and less noisy when extracted from super-resolved data. The quality of these curves directly affects the final result of the face recognition algorithm. The corresponding confusion matrices are given in Figure 7 (a) and Figure 7 (b). We notice an improvement in the recognition rate from 50% to 80% when super-resolving. This confirms the importance of a higher resolution for an increased recognition rate, and the effectiveness of the proposed SurfUP-SR.

6. Conclusion

In this paper, we proposed a new multi-frame super-resolution algorithm, SurfUP-SR, which improves the 3-D face recognition rate using low resolution, cost-effective depth cameras. We reformulated the UP-SR algorithm on a 3-D point cloud instead of its original formulation on a depth image. In addition, we provided a fully automatic 3-D face acquisition pipeline for depth cameras. An experimental evaluation of SurfUP-SR using a real low resolution 3-D face dataset has been carried out. The obtained results show a clear enhancement in the resolution and quality of the captured low resolution 3-D faces. Moreover, we showed the impact of the proposed algorithm in decreasing the 3-D reconstruction error and, most importantly, in increasing the 3-D face recognition rate.

References

[1] T. Beeler, B. Bickel, P. Beardsley, B. Sumner, M. Gross, "High-Quality Single-Shot Capture of Facial Geometry," ACM Transactions on Graphics, vol. 29, pp. 40:1-40:9, 2010.

[2] http://www.iee.lu/technologies

[3] http://www.pmdtec.com

[4] http://www.primesense.com/

[5] F. Lin, C. Fookes, V. Chandran, S. Sridharan, "Super-resolved faces for improved face recognition from surveillance video," in Advances in Biometrics, pp. 1-10, Springer Berlin Heidelberg, 2007.

[6] C. Fookes, F. Lin, V. Chandran, S. Sridharan, "Evaluation of image resolution and super-resolution on face recognition performance," Journal of Visual Communication and Image Representation, vol. 23, no. 1, pp. 75-93, 2012.

[7] K. Al Ismaeil, D. Aouada, B. Mirbach, B. Ottersten, "Depth super-resolution by enhanced shift & add," in Proceedings of the 15th International Conference on Computer Analysis of Images and Patterns, pp. 27-29, 2013.

[8] K. Al Ismaeil, D. Aouada, B. Mirbach, B. Ottersten, "Dynamic super-resolution of depth sequences with non-rigid motions," in Proceedings of the IEEE International Conference on Image Processing, pp. 15-18, 2013.

[9] S. Berretti, A. Del Bimbo, P. Pala, "Superfaces: A Super-resolution Model for 3D Faces," 5th Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment, vol. 7583/2012, pp. 73-82, 2012.

[10] M. Ebrahimi, E. Vrscay, "Multi-frame super-resolution with no explicit motion estimation," in Proceedings of the IEEE International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV), pp. 455-459, 2008.

[11] S. Peng, G. Pan, Z. Wu, "Learning-based super-resolution of 3D face model," in Proceedings of the IEEE International Conference on Image Processing (ICIP), vol. 2, pp. 382-385, 2005.

[12] G. Pan, S. Han, Z. Wu, Y. Wang, "Super-Resolution of 3D Face," European Conference on Computer Vision, vol. 3952, pp. 389-401, 2006.

[13] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Fast andRobust Multi-Frame Super-Resolution,” IEEE Transactionson Image Processing, vol. 13, pp. 1327-1344, 2004.

[14] H. Takeda, S. Farsiu, P. Milanfar, "Kernel regression for image processing and reconstruction," IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 349-365, 2007.

[15] S. Cuomo, A. Galletti, G. Giunta, A. Starace, "Surface reconstruction from scattered points via RBF interpolation on GPU," Federated Conference on Computer Science and Information Systems, vol. abs/1305.5179, pp. 433-440, 2013.

[16] M. Hernandez, J. Choi, G. Medioni, "Laser Scan Quality 3-D Face Modeling Using a Low-Cost Depth Camera," in Proceedings of the European Signal Processing Conference (EUSIPCO), pp. 1995-1999, 2012.

[17] http://www.micc.unifi.it/vim/datasets/4d-faces/

[18] D. Aouada, H. Krim, "Squigraphs for fine and compact modeling of 3-D shapes," IEEE Transactions on Image Processing, vol. 19, pp. 306-321, 2010.

[19] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 511-518, 2001.

[20] D. Smeets, P. Claes, J. Hermans, D. Vandermeulen, P. Suetens, "A Comparative Study of 3-D Face Recognition Under Expression Variations," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 5, pp. 710-727, Sept. 2012.