
Robust Sclera Recognition System With Novel Sclera Segmentation and Validation Techniques

S. Alkassar, Student Member, IEEE, W. L. Woo, Senior Member, IEEE, S. S. Dlay, and J. A. Chambers, Fellow, IEEE

Abstract—Sclera blood veins have been investigated recently as a biometric trait which can be used in a recognition system. The sclera is the white and opaque outer protective part of the eye. This part of the eye has visible blood veins which are randomly distributed. This feature makes these blood veins a promising factor for eye recognition. The sclera has an advantage in that it can be captured using a visible-wavelength camera. Therefore, applications which may involve the sclera are wide ranging. The contribution of this paper is the design of a robust sclera recognition system with high accuracy. The system comprises new sclera segmentation and occluded eye detection methods. We also propose an efficient method for vessel enhancement, extraction, and binarization. In the feature extraction and matching process stages, we additionally develop an efficient method that is orientation, scale, illumination, and deformation invariant. The obtained results using the UBIRIS.v1 and UTIRIS databases show an advantage in terms of segmentation accuracy and computational complexity compared with state-of-the-art methods due to Thomas, Oh, Zhou, and Das.

Index Terms—Biometrics, feature extraction, pattern recognition, sclera recognition, wavelet transforms.

I. INTRODUCTION

RECENT research on biometrics has shown an increased interest in new human traits rather than the typical biometrics [1]. Human recognition systems using blood vessel patterns, for instance, have been investigated using the retina [2], palms [3], fingers [4], conjunctival vasculature [5], and sclera [6]. The sclera can be defined as the white and opaque outer protective part of the eye. It consists of four tissue layers: 1) the episclera; 2) stroma; 3) lamina fusca; and 4) endothelium [7], which surround the iris. The iris is the colored tissue around the pupil. This is shown in Fig. 1. The sclera has visible blood veins which are randomly distributed in different orientations and layers, making them a promising factor for the improvement of an eye recognition system [8].

Manuscript received October 5, 2015; accepted November 22, 2015. The work of S. Alkassar was supported by the Ministry of Higher Education and Scientific Research in Iraq. This paper was recommended by Associate Editor K. Huang.

S. Alkassar is with the School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, U.K., and also with the Electronics Engineering College, University of Mosul, Mosul 41002, Iraq (e-mail: [email protected]; [email protected]).

W. L. Woo, S. S. Dlay, and J. A. Chambers are with the School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU, U.K. (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMC.2015.2505649

Fig. 1. Eye structure consisting of pupil, iris, and sclera region.


Typical sclera recognition systems involve sclera segmentation, blood vessel enhancement, feature extraction, and a matching process, as shown in Fig. 2. In addition, a summary of recent published work on the single sclera modality is given in Table I. Sclera segmentation has evolved from the manual segmentation applied by Derakhshani et al. [5] and Derakhshani and Ross [9], which is an unreliable approach for real-time applications because of the human supervision required for the segmentation process and the expensive processing time. In the semiautomated method suggested by Crihalmeanu et al. [10] based on K-means clustering, the eyelids included in the resulting sclera image were manually corrected. After that, two automated strategies were suggested based on sclera pixel thresholding and a sclera shape contour to extract sclera regions. For the sclera pixel thresholding technique, Thomas et al. [11] and Zhou et al. [6], [12], [13] converted the color images into the HSV color space and the sclera region was extracted as it has low hue, low saturation, and high intensity in the HSV space. Then, another mask was created from a skin detection method, and the convex hull of each mask was calculated and fused for the final sclera mask. Oh and Toh [14] used the HSV color space with histogram equalization and low-pass filtering in order to extract the sclera. For grayscale images, Otsu's thresholding method was applied to detect sclera regions as the intensity of the sclera area is different from the background. In contrast, the sclera shape contour technique was utilized in [15]–[17], which depends on the convergence of the contours through the sclera region. A time-adaptive active contour was used to extract the sclera regions.

For the enhancement of blood vessels, Derakhshani et al. [5] and Derakhshani and Ross [9] utilized contrast-limited adaptive histogram equalization (CLAHE) with a region growing method to extract the binary network of the sclera blood vessels.



Fig. 2. Typical sclera recognition system design.

TABLE I: RECENT WORKS RELATED TO SCLERA-BASED RECOGNITION USING A SINGLE SCLERA MODALITY

A bank of Gabor filters was used in [6], [12], and [13], whereas adaptive histogram equalization with the discrete Haar wavelet was used in [17] to enhance the vessel patterns. For the feature extraction process, Zhou et al. [6], [12], [13] proposed a line segment descriptor based on the iris centroid. Oh and Toh [14] used an angular grid with local binary patterns, whereas Derakhshani et al. [5] and Derakhshani and Ross [9] suggested minutiae detection and a matching method for sclera recognition.

Several issues and challenges remain for sclera recognition which may affect the system performance. These are as follows.

1) Sclera segmentation with pixel thresholding could be affected by the noise and distortion present in sclera images.

2) The effects of the sclera boundary on the convergence of the sclera shape contour.

3) Occluded or partially occluded and noisy images are discarded manually.

4) The enhancement of blood vessels and the feature extraction algorithm should be invariant to nonlinear blood vessel movement [6].

5) Robust user template registration and an efficient matching procedure are required.

To mitigate these limitations, we propose the following novel contributions to achieve a practical sclera recognition system based only on visible-wavelength illumination.
1) Occluded eye image detection.
2) Adaptive sclera shape contour segmentation based on active contours without edges.
3) Efficient image enhancement and vessel map extraction.
4) Robust sclera feature extraction with template registration and matching.

The organization of this paper is as follows. In Section II, we propose a new unsupervised sclera segmentation and occluded eye detection method. Section III discusses the enhancement, extraction, and mapping of blood vessels. Section IV includes the feature extraction process and user template registration, while Section V discusses the matching and decision steps. In Section VI, we introduce our evaluation results, and finally we present the conclusions in Section VII.

II. PROPOSED SCLERA SEGMENTATION

Sclera segmentation is the initial and the most challenging step in a sclera recognition system. The accuracy of the sclera recognition system could be degraded if the segmentation process fails to extract the correct sclera regions from an eye image. Some incorrect sclera segmentation scenarios include segmenting the sclera with some parts of the iris, eyelids, and eyelashes. Table I shows sclera segmentation techniques developed from both manual and automated segmentation processes.


Fig. 3. Iris segmentation process.

Two strategies, sclera pixel thresholding and sclera shape contour, have been adopted, each having advantages and disadvantages. We focus on the sclera shape contour technique in this paper, as sclera pixel thresholding involves multiple steps to remove noise and segment the sclera region, thus increasing the complexity and processing time. We propose an adaptive approach for unsupervised, fully automated sclera segmentation by using active contours without edges [20] with a novel occluded eye detection method.

A. Iris Segmentation

Iris center estimation has an essential role in our proposed sclera segmentation method. Although the sclera recognition system does not depend directly on the iris for its implementation, locating the position of the iris center within the eye image plays a crucial part in our proposed sclera segmentation. There is a significant amount of literature on iris segmentation [21]–[25] in which the iris region is modeled as circular boundaries; our focus is not to improve these methods but to extract the iris center and radius. We use the integro-differential operator suggested by Daugman [21], which acts as a circular edge detector. The integro-differential operator is defined as

\[
\arg\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r}\, ds \right| \tag{1}
\]

where I(x, y) is the grayscale level of the eye image, (r, x0, y0) are the iris radius and center coordinates, the symbol ∗ is the convolution operator, and Gσ(r) is a Gaussian smoothing function of scale σ. The colored image is first converted into grayscale format and down-sampled by a factor of 0.25 to enhance the processing time. This is shown in Fig. 3.
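As an illustration, the following is a minimal NumPy/SciPy sketch of the search in (1): it evaluates the circular line integral of I(x, y)/(2πr) over candidate circles, takes the radial derivative, smooths it with a Gaussian, and keeps the maximizing (r, x0, y0). The helper names and the grid of candidate centers are illustrative assumptions, not details taken from the original description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_mean(img, x0, y0, r, n=360):
    """Mean grayscale intensity along a circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_operator(img, candidates, radii, sigma=2.0):
    """Return (r, x0, y0) maximising |G_sigma * d/dr of the circular mean|, as in (1)."""
    best, best_score = None, -np.inf
    for (x0, y0) in candidates:                      # candidate centre positions (assumed grid)
        means = np.array([circle_mean(img, x0, y0, r) for r in radii])
        grad = np.abs(gaussian_filter1d(np.gradient(means), sigma))
        idx = int(np.argmax(grad))
        if grad[idx] > best_score:
            best_score, best = grad[idx], (radii[idx], x0, y0)
    return best
```

In practice the candidate centers would be restricted to a coarse grid on the down-sampled grayscale image to keep the search tractable.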

B. Occluded Eye Detection for Sclera Validation

One major challenge in the sclera segmentation process is sclera validation, where the segmented sclera is automatically verified without any human supervision. Factors such as an occluded eye image and a small sclera area will affect the automation process and increase the error rate. Many researchers have discarded these images manually or applied sclera validation after the sclera segmentation step. However, if the sclera image has an occluded eye, then validating the sclera after segmentation is inefficient. Therefore, we propose a sclera validation process that isolates poor samples in the enrollment and verification stages. The proposed method is applied before sclera segmentation to detect partially or fully occluded eye images and to make a validation decision on whether a sufficient sclera region is present.

Based on the iris radius and center coordinates (r, x0, y0), two arc areas of intensities are specified according to the following:

\[
\mathrm{Arc} = \mathrm{RGB}(x_0 + r\cos\theta,\; y_0 + r\sin\theta) \tag{2}
\]

where θ ∈ [−π/3 : π/3] ∪ [−2π/3 : 2π/3] with uniform increment steps of 0.1° and is set in this range to check the status where the eyelids are partially closed. Then, the RGB intensities of each pixel in these two arc areas are classified by heuristic rules into skin or nonskin labels using the color distance map (CDM) proposed in [26]. We explain this method by applying it to the right arc Arc_r; the same operation is applied to the left arc Arc_l. First, two skin clusters for natural illumination and flash illumination conditions are defined as

\[
\mathrm{CDM}_1 = \begin{cases}
1, & R > 95,\ G > 40,\ B > 20,\\
   & \max(R, G, B) - \min(R, G, B) > 15,\\
   & |R - G| > 15,\ R > G,\ R > B\\
0, & \text{otherwise}
\end{cases} \tag{3}
\]
\[
\mathrm{CDM}_2 = \begin{cases}
1, & R > 220,\ G > 210,\ B > 170,\\
   & |R - G| \le 15,\ B < R,\ B > G\\
0, & \text{otherwise.}
\end{cases} \tag{4}
\]

Then, the nonskin map is created based on these clusters as

\[
s_r(i) = \begin{cases}
1, & \text{if } \mathrm{CDM}_1(i) \,\|\, \mathrm{CDM}_2(i) = 0\\
0, & \text{otherwise}
\end{cases} \tag{5}
\]

where the symbol ‖ refers to the logical OR operator, 1 refers to a nonskin pixel, and 0 to a skin pixel. The same operation is repeated on Arc_l to produce s_l. Next, from the s_r and s_l vectors, four subarcs up_r with θ ∈ [π/6 : π/3], up_l with θ ∈ [5π/6 : 2π/3], down_r with θ ∈ [−π/6 : −π/3], and down_l with θ ∈ [−5π/6 : −2π/3] are used to define decision parameters for the nonskin pixels as

\[
\mathrm{nsk}_{\mathrm{up}}(i) = \mathrm{up}_r(i) \cap \mathrm{up}_l(i) \tag{6}
\]
\[
\mathrm{nsk}_{\mathrm{down}}(i) = \mathrm{down}_r(i) \cap \mathrm{down}_l(i) \tag{7}
\]

where ∩ is the logical AND operator and θ is set within this range empirically to enhance the processing time. Then, the percentage of nonskin pixels is calculated for both nsk_up and nsk_down as

\[
\mathrm{per}_{\mathrm{nsk_{up}}} = \frac{\text{No. of 1s in } \mathrm{nsk_{up}}}{\text{Total no. of elements in } \mathrm{nsk_{up}}} \tag{8}
\]
\[
\mathrm{per}_{\mathrm{nsk_{down}}} = \frac{\text{No. of 1s in } \mathrm{nsk_{down}}}{\text{Total no. of elements in } \mathrm{nsk_{down}}}. \tag{9}
\]


According to per_nsk_up and per_nsk_down, the final decision to accept or reject the eye image is defined as
\[
\text{Eye image} = \begin{cases}
\text{Accept}, & \text{if } \mathrm{per}_{\mathrm{nsk_{up}}} \ge 0.6 \,\cap\, \mathrm{per}_{\mathrm{nsk_{down}}} \ge 0.3\\
\text{Reject}, & \text{otherwise}
\end{cases} \tag{10}
\]

where 0.6 and 0.3 are set empirically for optimum performance. After that, if the eye image is validated as having a sufficient sclera area, then the iris binary mask is applied to segment the iris and the image is ready for the sclera segmentation. Otherwise, no further processing time is required and the image is rejected. The proposed validation process for an ideal eye image and a partially occluded eye image is shown in Fig. 4.
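The following is a small Python sketch of the validation procedure in (2)–(10), assuming an RGB eye image and a previously estimated iris circle (x0, y0, r). The thresholds are those quoted above; the array layout, the helper names, and the handling of image coordinate conventions are illustrative assumptions rather than details of the published implementation.

```python
import numpy as np

def cdm_nonskin(rgb):
    """Nonskin indicator per (3)-(5): 1 = nonskin pixel, 0 = skin pixel (rgb: (N, 3) uint8)."""
    R, G, B = rgb[:, 0].astype(int), rgb[:, 1].astype(int), rgb[:, 2].astype(int)
    cdm1 = ((R > 95) & (G > 40) & (B > 20)
            & (rgb.max(1).astype(int) - rgb.min(1).astype(int) > 15)
            & (np.abs(R - G) > 15) & (R > G) & (R > B))
    cdm2 = (R > 220) & (G > 210) & (B > 170) & (np.abs(R - G) <= 15) & (B < R) & (B > G)
    return (~(cdm1 | cdm2)).astype(int)

def arc_pixels(img, x0, y0, r, theta):
    """RGB intensities sampled on the iris circle at the given angles, as in (2)."""
    xs = np.clip(np.round(x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def validate_eye(img, x0, y0, r, step=np.deg2rad(0.1)):
    """Accept/reject decision of (6)-(10); sign conventions may need flipping for image axes."""
    up_r   = cdm_nonskin(arc_pixels(img, x0, y0, r, np.arange(np.pi / 6, np.pi / 3, step)))
    up_l   = cdm_nonskin(arc_pixels(img, x0, y0, r, np.arange(2 * np.pi / 3, 5 * np.pi / 6, step)))
    down_r = cdm_nonskin(arc_pixels(img, x0, y0, r, np.arange(-np.pi / 3, -np.pi / 6, step)))
    down_l = cdm_nonskin(arc_pixels(img, x0, y0, r, np.arange(-5 * np.pi / 6, -2 * np.pi / 3, step)))
    n = min(len(up_r), len(up_l), len(down_r), len(down_l))
    nsk_up = up_r[:n] & up_l[:n]                       # (6): nonskin on both upper subarcs
    nsk_down = down_r[:n] & down_l[:n]                 # (7)
    per_up, per_down = nsk_up.mean(), nsk_down.mean()  # (8), (9): fraction of nonskin pixels
    return per_up >= 0.6 and per_down >= 0.3           # decision rule (10)
```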

C. Sclera Segmentation

Having presented the sclera validation methodology, we next describe the proposed full procedure for sclera segmentation.

1) Seed-Based Initialization of Contours: For a frontal-looking iris, two initial seeds for the left and right contours are initialized depending on the iris radius and center coordinates (r, x0, y0). The center positions of these seeds are calculated as

\[
C_{rs}(x, y) = (x_0 + 1.35 \times r,\; y_0) \tag{11}
\]
and
\[
C_{ls}(x, y) = (x_0 - 1.35 \times r,\; y_0) \tag{12}
\]

where the offset 1.35 × r is set to ensure that the right and left seed centers are outside the iris. The height of the initial contours spans from the point on the iris circumference with θ = π/12 to the point with θ = −π/12, and the width is set to r/2 to make the contours converge inside the sclera regions. The initial seed positions are depicted in Fig. 5(a).

2) Active Contours Without Edges: The basic idea in snakes or active contour models is to evolve a curve subject to image forces in order to detect salient objects [27]–[30]. For instance, a curve is initialized around an object to be detected and the curve will move toward that object until its boundary is detected. However, a major problem with these active contour models is that they have to rely on an edge function which depends on the image gradient. These models can detect only objects with edges defined by the gradients that stop the curve evolution. In practice, the discrete gradients are bounded. Therefore, the stopping function is never zero on the edges and the curve may pass through the boundary. Another problem is that if the image is contaminated with noise, it will require a strong isotropic smoothing Gaussian function to remove the noise at the cost of smoothing the edges as well.

In contrast, the active contours without edges model [20] has no stopping edge function, i.e., the gradient of the image is not adopted for the stopping process; instead, the Mumford–Shah [31] segmentation techniques are adopted. The advantage of this model is its ability to detect contours independent of gradients, for instance, objects with very smooth boundaries or with discontinuous boundaries.

Fig. 4. Proposed occluded and partially occluded eye image detection. (a) and (b) Ideal eye image and partially occluded eye image, respectively, with the angles depicted where nsk_up and nsk_down are calculated. (c) Histograms of skin and nonskin pixels in the nsk_up and nsk_down vectors.

Summarizing the development in [20], let an eye image in grayscale I_0 be an open subset of R^2 with boundary ∂I_0, and let the evolving curve C in I_0 represent the boundary of the sclera region ω of I_0, i.e., ω ⊂ I_0 and C = ∂ω. Then, the terms inside(C) and outside(C) equal the area regions of ω and I_0\ω̄, respectively, where the symbol \ denotes removal of the ensuing term and ω̄ is the closure of ω. Assuming that I_0 has two regions of approximately piecewise-constant intensities I_0^i and I_0^o, the sclera to be detected is the region with value I_0^i.


Fig. 5. Segmentation process for the sclera. (a) Initial seeds for the right and left contours for an iris-centered eye. (b) Sclera segmented template.

The fitting term of the active contour model is represented by the energy function F(c1, c2, C), which is minimized when the curve C is on the boundary of ω and is defined as

\[
F(c_1, c_2, C) = F_1(C) + F_2(C) + \mu \cdot \mathrm{Length}(C) + \nu \cdot \mathrm{Area}(\mathrm{inside}(C)) \tag{13}
\]

where F1(C) and F2(C) equal
\[
F_1(C) = \lambda_1 \int_{\mathrm{inside}(C)} |I_0(x, y) - c_1|^2 \, dx\, dy \tag{14}
\]
\[
F_2(C) = \lambda_2 \int_{\mathrm{outside}(C)} |I_0(x, y) - c_2|^2 \, dx\, dy \tag{15}
\]

where μ ≥ 0, ν ≥ 0, and λ1, λ2 > 0 are fixed parameters, and c1 and c2 are the averages of I0 inside and outside C, respectively. Further details of the minimization procedure are in [20]. The parameters λ1 = λ2 = 1 and ν = 0 are fixed for best performance, whereas the time step t is set to 0.1. μ is the controlling parameter of the evolving curve C: the smaller the value of μ, the more objects of different sizes can be detected; as μ increases, the curve C detects only large objects. In our case, we set μ empirically high, to 0.2, so that the contour detects the sclera boundary rather than the blood veins inside the sclera.

Then, the iris-segmented image is converted to the blue channel and down-sampled by a factor of 0.2 to enhance the processing time, and the final sclera binary map is created. The initial contours converge toward the sclera boundaries in all directions and stop after a number of iterations i = 200. The final sclera template is shown in Fig. 5(b).
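A possible sketch of this segmentation step is given below using scikit-image's chan_vese, which implements active contours without edges. The seed placement follows (11) and (12) and the parameters match those quoted above (μ = 0.2, λ1 = λ2 = 1, dt = 0.1, 200 iterations); the rectangular seed shape, the function names, and the exact keyword names (which vary across scikit-image versions, e.g., max_num_iter versus max_iter) are assumptions of this sketch.

```python
import numpy as np
from skimage.segmentation import chan_vese
from skimage.transform import rescale

def segment_sclera(blue_channel, x0, y0, r, mu=0.2, iters=200, scale=0.2):
    """Sclera mask from seed-initialised active contours without edges (Section II-C)."""
    img = rescale(blue_channel.astype(float), scale, anti_aliasing=True)  # down-sample by 0.2
    x0, y0, r = x0 * scale, y0 * scale, r * scale

    # Initial level set: positive inside two rectangular seeds left/right of the iris (11)-(12)
    phi0 = np.full(img.shape, -1.0)
    half_h = max(int(r * np.sin(np.pi / 12)), 1)      # from theta = pi/12 down to -pi/12
    half_w = max(int(r / 4), 1)                       # total width r/2
    for cx in (x0 + 1.35 * r, x0 - 1.35 * r):
        cx, cy = int(cx), int(y0)
        phi0[max(cy - half_h, 0):cy + half_h, max(cx - half_w, 0):cx + half_w] = 1.0

    # Active contours without edges (Chan-Vese); the area term (nu = 0) is implicit here
    return chan_vese(img, mu=mu, lambda1=1, lambda2=1, dt=0.1,
                     max_num_iter=iters, init_level_set=phi0)
```

Seeding the level set on both sides of the iris lets the two contours grow outward into the left and right sclera regions instead of converging on the darker iris or eyelid areas.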

III. SCLERA BLOOD VESSEL ENHANCEMENT AND MAPPING

The main purpose of the vessel enhancement is to isolate the blood vessels in the sclera from their background. This process has two stages. In the first stage, the green layer of the RGB image is extracted as it leads to better contrast between the sclera blood vessels and the background [32]. Then, CLAHE is applied to the sclera regions as it enhances the green layer of the colored image [5].

For the vessel extraction, which is the second stage, we propose the 2-D isotropic undecimated wavelet transform (IUWT) as it is robust and well adapted to astronomical data and biological images where objects are more or less isotropic [33]. To extract W = {w1, . . . , wJ, cJ}, where wj are the wavelet coefficients at scale j and cJ are the coefficients at the coarsest resolution, a subtraction between two adjacent sets of coefficient scales is applied as

\[
w_{j+1}[k, l] = c_j[k, l] - c_{j+1}[k, l] \tag{16}
\]

where

\[
c_{j+1}[k, l] = \left( h^{(j)} h^{(j)} * c_j \right)[k, l] \tag{17}
\]

and h^(j)[k] = [1, 4, 6, 4, 1]/16 is the nonorthogonal Astro filter bank with k = −2, . . . , 2. At each scale j, one wavelet set is obtained which has the same resolution as the sclera image. This feature avoids the dimensionality increase introduced by using Gabor filters with different scales and orientations and yields an efficient processing time. The reconstruction is obtained by co-addition of the J wavelet scales as

\[
c_0[k, l] = c_J[k, l] + \sum_{j=1}^{J} w_j[k, l]. \tag{18}
\]

The vessel segmentation can be initiated simply by adding the wavelet levels that give the best vessel contrast and applying a thresholding process. The threshold to detect the vessels is set empirically to identify the lowest 30% of the coefficients. This is likely to misclassify some nonvessel pixels as vessel pixels; however, a cleaning process can be achieved simply by calculating the area of each misclassified region and setting a threshold to remove these undesired pixels. This is shown in Fig. 6(a).
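A minimal sketch of (16)–(18) and the thresholding step is shown below, assuming the input is the CLAHE-enhanced green layer of the segmented sclera. The à trous filter upsampling, the particular scales that are summed, and the connected-component cleanup via scikit-image are assumptions of this sketch rather than details stated in the text.

```python
import numpy as np
from scipy.ndimage import convolve1d
from skimage.morphology import remove_small_objects

def iuwt(img, levels=4):
    """Isotropic undecimated wavelet transform (16)-(17) with the [1,4,6,4,1]/16 filter."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    c = img.astype(float)
    wavelets = []
    for j in range(levels):
        hj = np.zeros(len(h) + (len(h) - 1) * (2 ** j - 1))
        hj[::2 ** j] = h                                   # a trous: insert 2^j - 1 zeros
        c_next = convolve1d(convolve1d(c, hj, axis=0, mode='reflect'),
                            hj, axis=1, mode='reflect')
        wavelets.append(c - c_next)                        # w_{j+1} = c_j - c_{j+1}
        c = c_next
    return wavelets, c                                     # {w_1..w_J, c_J}; summing gives c_0 (18)

def vessel_map(sclera_green, scales=(1, 2), min_area=30):
    """Binary vessel map: sum selected scales, keep lowest 30% of coefficients, clean up."""
    w, _ = iuwt(sclera_green)
    resp = sum(w[j] for j in scales)
    thresh = np.percentile(resp, 30)                       # vessels are dark -> most negative
    vessels = resp <= thresh
    return remove_small_objects(vessels, min_size=min_area)  # drop small misclassified blobs
```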

The thickness variation in the sclera vessels due to the physiological status of a person [6] affects the recognition process. Therefore, these vessels must be transformed into a single-pixel skeleton map. For the thinning process, a morphological thinning algorithm is applied, and binary morphological operations are applied to remove the exterior pixels from the detected vascular vessel map and to create a one-pixel skeleton running along the center of the vessels. The binary skeleton map of the sclera vessels is shown in Fig. 6(b).

IV. SCLERA FEATURE EXTRACTION AND TEMPLATE REGISTRATION

A. Sclera Feature Extraction

The feature extraction process in the sclera recognition system involves producing a reliable mathematical model in order to identify individuals.


Fig. 6. Sclera blood vessel enhancement and mapping. (a) Applying IUWT for the vessel extraction. (b) Morphological thinning process.

We propose a new method for sclera pattern feature extraction based on Harris corner and edge detection [34]. This algorithm detects the interest points (IPs) represented by the corner response R. Some of these IPs include Y, T, L, and X vessel corner formations which supply a significant 2-D texture for pattern recognition.

The corner response R is defined as

\[
R = \det(A) - k\, \mathrm{tr}^2(A) \tag{19}
\]

where det is the determinant of matrix A, k is a constant set to 0.04, tr is the matrix trace, and A is the image structure matrix computed from the image derivatives as

\[
A = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \tag{20}
\]

where Ix and Iy are the partial derivatives in x and y, respectively. The corner response value is positive when a corner region is detected, small if a flat region is detected, and negative if an edge region is detected within the smooth circular Gaussian window. The steps for the IP extraction process are as follows.
1) Compute the first derivatives in x and y.
2) Apply the Gaussian smoothing filter to remove the noisy response due to a binary window function.
3) Find the points with a large corner response function R, i.e., R > thC.
4) Take the points of local maxima of R, where thC is set to 0.01 for best performance.
The extracted IPs in the sclera vessel map are shown in Fig. 7.
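For illustration, a compact sketch of these four steps on the binary vessel map is given below; it uses k = 0.04 and thC = 0.01 as quoted above, while the Sobel derivatives, the Gaussian window width, the normalization of R before thresholding, and the non-maximum-suppression window size are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, sobel

def harris_interest_points(vessel_map, k=0.04, th_c=0.01, sigma=1.5):
    """Interest points from the Harris response R = det(A) - k * tr(A)^2, per (19)-(20)."""
    img = vessel_map.astype(float)
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)      # step 1: derivatives in x and y
    ixx = gaussian_filter(ix * ix, sigma)                # step 2: Gaussian-weighted entries of A
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det_a = ixx * iyy - ixy * ixy
    tr_a = ixx + iyy
    r = det_a - k * tr_a ** 2                            # corner response (19)
    r = r / (r.max() + 1e-12)                            # normalise so th_c is comparable (assumed)
    local_max = (r == maximum_filter(r, size=7)) & (r > th_c)   # steps 3-4
    ys, xs = np.nonzero(local_max)
    return np.column_stack([xs, ys])                     # IP locations as (x, y)
```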

After the IP locations have been extracted, the characteristic information such as the magnitude, phase, and orientation within a specific window surrounding each feature point is calculated. We exploit the 2-D monogenic signal method [35] to extract the local information of these IPs. An analytic signal is constructed by using the Riesz transform. The analytic signal is isotropic and therefore provides a split of identity where the information is orthogonally decomposed into geometric and energetic information for the blood vein IPs in terms of local amplitude, local phase, and local orientation.

Fig. 7. Applying Harris detector to extract IP features.

If the coordinates of the vertical and horizontal filters are u = (u1, u2), then the Riesz kernels in the frequency domain are multiplied with a log-Gabor scale mask to calculate the vertical and horizontal filters, respectively, as

\[
h_1(u) = \frac{i u_1}{|u|} G(|u|), \qquad h_2(u) = \frac{i u_2}{|u|} G(|u|) \tag{21}
\]

where |u| = √(u1² + u2²) represents the radius of frequency values from the center and G(|u|) is the monogenic scale space defined by using the log-Gabor filter for a wavelength 1/f0 as

\[
G(|u|) = \exp\!\left( -\frac{\left(\log(|u|/f_0)\right)^2}{2\left(\log(k/f_0)\right)^2} \right) \tag{22}
\]

where 1/f0 = 0.1 and k = 0.2 is the ratio of the standard deviation of the Gaussian describing the log-Gabor filter's transfer function in the frequency domain to the filter center frequency. If f(I) represents the input image in the frequency domain, then the monogenic signal of the input image fM(I) can be defined as

\[
f_M(I) = f(I) + h(u) * f(I) \tag{23}
\]

where ∗ is the convolution operator. For the monogenic signal representation in the image domain, the three-tuple {p(I), q1(I), q2(I)} is defined as

\[
p(I) = (f * G_a)(I), \quad q_1(I) = (f * h_1)(I), \quad q_2(I) = (f * h_2)(I) \tag{24}
\]

where Ga is the log-Gabor filter in the image domain. The amplitude information is ignored as the vessel image is in binary form, whereas the monogenic phase ϕ(I), which lies in the range −π/2 ≤ ϕ ≤ π/2, is calculated as

\[
\varphi(I) = \tan^{-1}\! \frac{p(I)}{\sqrt{q_1(I)^2 + q_2(I)^2}}. \tag{25}
\]

Finally, for each IP location, a window patch of size 19 × 19 pixels is stored along with the analytic signal information, and the sclera template consists of the following components:

\[
S_t = \{\mathrm{IP.locations},\ \mathrm{IP.phases}\}. \tag{26}
\]
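The sketch below illustrates (21)–(26) with NumPy FFTs: it builds the log-Gabor scale mask and the two Riesz kernels in the frequency domain, recovers p, q1, and q2 by inverse transforms, computes the local phase of (25), and cuts 19 × 19 phase patches around the interest points. Treating f0 = 0.1 and k = 0.2 as normalized frequency-domain quantities, and the patch-extraction helper itself, are assumptions of this sketch.

```python
import numpy as np

def monogenic_phase(img, f0=0.1, k=0.2):
    """Local phase of the monogenic signal (21)-(25) using a log-Gabor scale mask."""
    rows, cols = img.shape
    u1, u2 = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(u1 ** 2 + u2 ** 2)
    radius[0, 0] = 1.0                                   # avoid log(0) and division by zero at DC

    log_gabor = np.exp(-(np.log(radius / f0)) ** 2 / (2 * np.log(k / f0) ** 2))   # (22)
    log_gabor[0, 0] = 0.0
    h1 = (1j * u1 / radius) * log_gabor                  # Riesz kernels scaled by G(|u|), (21)
    h2 = (1j * u2 / radius) * log_gabor

    F = np.fft.fft2(img.astype(float))
    p = np.real(np.fft.ifft2(F * log_gabor))             # band-passed even part
    q1 = np.real(np.fft.ifft2(F * h1))                   # odd (Riesz) parts
    q2 = np.real(np.fft.ifft2(F * h2))
    return np.arctan2(p, np.sqrt(q1 ** 2 + q2 ** 2))     # local phase (25), in [-pi/2, pi/2]

def ip_phase_patches(phase, ips, half=9):
    """19x19 phase window around each interest point, as stored in the template (26)."""
    patches = []
    for x, y in ips:
        if half <= y < phase.shape[0] - half and half <= x < phase.shape[1] - half:
            patches.append(phase[y - half:y + half + 1, x - half:x + half + 1])
    return patches
```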


B. Sclera Feature Template Registration

The investigation of nonlinear blood vein movement in the sclera carried out by Zhou et al. [6] has shown that these vessels move slightly as the eye moves. To overcome this limitation and produce a user template invariant to blood vessel movement, we propose a new method for user template registration. This method is initiated with the alignment of the user sclera templates to a reference point. For the three training templates St1, St2, and St3, the reference points are r1(x, y), r2(x, y), and r3(x, y), which represent the radius of the iris. Then, these templates are aligned as follows.

1) Let r1(x, y) be the point to which St2 and St3 are to be aligned.
2) For St2, subtract (r1(x, y), r2(x, y)) to extract the shift values (hr, vr) in the horizontal and vertical dimensions.
3) Apply a circular shift where St2 is shifted in both directions as St2(x + hr, y + vr).
4) Repeat for St3.

If (hr, vr) are both positive, St2 is shifted to the right and to the bottom; if (hr, vr) are both negative, St2 is shifted to the left and to the top; if hr is positive and vr is negative, St2 is shifted to the right and to the top; and if hr is negative and vr is positive, St2 is shifted to the left and to the bottom.
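A small sketch of this alignment is given below, assuming each template is held as a dictionary containing its iris reference point and its IP locations (an illustrative data layout, not the paper's). The shift (hr, vr) is the difference of reference points and is applied to the stored IP coordinates.

```python
import numpy as np

def align_template(st_ref, st_other):
    """Shift one sclera template so its iris reference point matches the reference template.

    Each template is assumed to be a dict with 'ref' = reference point (x, y) and
    'ip_locations' = (N, 2) array of interest-point coordinates (illustrative structure).
    """
    hr, vr = np.subtract(st_ref['ref'], st_other['ref'])      # shift values (h_r, v_r)
    aligned = dict(st_other)
    aligned['ip_locations'] = st_other['ip_locations'] + np.array([hr, vr])
    aligned['ref'] = tuple(st_ref['ref'])
    return aligned

# Example: align St2 and St3 to St1's reference point r1(x, y)
# st2_aligned = align_template(st1, st2)
# st3_aligned = align_template(st1, st3)
```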

Next, we propose a new method for feature matching based on a local descriptor to generate putative matching pairs pm between these sclera feature sets, and random sample consensus (RANSAC) [36] to select the homogeneous feature sets fin. RANSAC uses the minimum number of feature points to estimate the initial feature sets and proceeds to maximize the number of pairs within the set with consistent data points, thus decreasing the processing time. The proposed local descriptor includes generating a correlation matrix between two sclera templates (St1, St2). The correlation matrix holds the correlation strength of every feature set in St1 relative to St2. Then, we search for pm1 pairs that correlate maximally in both directions; pm1 can be calculated as follows.

1) Set a radius for the correlation window rc = (w − 1)/2.
2) Calculate the distances of St1.IP.location(i) to all St2.IP.location.
3) Specify the pair points that have a distance < dmax.
4) Normalize the phase information of the selected pair point windows to unit vector form and measure the correlation using a dot product.
5) Find pm1 = arg max corr(St1, St2), where dmax is set empirically to 50 for best performance.

Once the pm1 pairs for these templates have been specified, the process of finding fin1 for pm1 can be defined as follows.

1) Randomly select the minimum number of feature points m = arg min(pm1).
2) Normalize each set of points so that the origin is at the centroid and the mean distance from the origin is √2.
3) Calculate the Sampson error distances [37] between these sets and determine how many feature points from the templates fit within a predefined Sampson error tolerance < ε, where ε is set empirically to 0.002.
4) If the ratio of the number of fin1 over the total number of pm1 exceeds a predefined threshold b, re-estimate the model parameters using all the identified fin1 and terminate.
5) Otherwise, repeat steps 1–4 for a maximum of N iterations.

Fig. 8. Most homogeneous feature set St_f, represented in the red square, extracted from fin1, fin2, and fin3.

The decision to stop selecting new feature subsets is based on the number of iterations N required to ensure, with probability z = 0.99, that at least one randomly chosen sample contains no fout points. N can be set as
\[
N = \frac{\log(1 - z)}{\log(1 - b)} \tag{27}
\]

where b = ((No. of fin)/(No. of pm))^s represents the probability that all s selected feature points are fin1 feature points and s = 8 is the number of points needed to fit a fundamental matrix [36]. This method uses a minimum number of feature points to estimate the homogeneous pair set and maximizes this set with consistent feature points. After fin1, fin2, and fin3 have been extracted and grouped into one set, the final sclera user template St_f is created by removing the duplicated points. This is shown in Fig. 8. The angle φ between each St_f point and the iris center is calculated and is used in the matching process to mitigate outlier points being paired incorrectly.
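As a worked example of the stopping criterion in (27), the short sketch below computes the adaptive iteration count with z = 0.99 and s = 8, taking b as the probability that all s sampled points are inliers; the function name and the edge-case handling are assumptions of this sketch.

```python
import numpy as np

def ransac_iterations(n_inliers, n_putative, z=0.99, s=8):
    """Adaptive RANSAC iteration count N from (27), with b = (n_inliers / n_putative) ** s."""
    w = n_inliers / float(n_putative)
    b = w ** s
    if b <= 0.0:
        return np.inf                      # no inliers found yet: keep sampling
    if b >= 1.0:
        return 1
    return int(np.ceil(np.log(1.0 - z) / np.log(1.0 - b)))

# e.g. 60 homogeneous pairs out of 100 putative matches:
# ransac_iterations(60, 100) -> 272 iterations
```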

V. SCLERA MATCHING AND DECISION PROCESS

The matching and decision process between two sclera templates is the final stage, where the St_f of an enrolled user is compared in terms of local orientation and local phase with the St_t of any test template. First, the putative feature sets pm are extracted from St_f and St_t. Then these pair sets are compared in terms of the location and the angle to the iris center. For the decision process, we propose a two-stage decision method using the Euclidean distance and the angle difference to classify the matching results. For each pair, two parameters are defined as

\[
D_p = \begin{cases}
1, & \text{if } \mathrm{dist}(S_{t_f}, S_{t_t}) \le th_p\\
0, & \text{otherwise}
\end{cases} \tag{28}
\]
\[
D_\phi = \begin{cases}
1, & \text{if } |\phi_f - \phi_t| \ge th_\phi\\
0, & \text{otherwise}
\end{cases} \tag{29}
\]


Fig. 9. Examples of the matching process output between two sclera templates belonging to (a) the same individual and (b) different individuals.

Algorithm 1: Decision Process Between Two Sclera Template Pairs

if (Dφ == 1) then
    if (Dp == 1) then
        Pair accepted;
    else
        Pair rejected;
    end
else
    Pair rejected;
end

where th_p = 20 and th_φ = 5° are set empirically for best performance. The decision to accept or reject each pair is shown in Algorithm 1.

Finally, the decision process is calculated as

\[
D_f = \begin{cases}
\text{Accept}, & \text{if } \dfrac{\text{no. of accepted pairs}}{\text{no. of } pm} \ge th_f\\
\text{Reject}, & \text{otherwise}
\end{cases} \tag{30}
\]

where th_f is set empirically to 0.6. Some examples of the matching results between two users are shown in Fig. 9: pm points are represented by a red cross and fin points by a blue cross. The matching between two sclera templates belonging to the same individual is shown in Fig. 9(a), whereas Fig. 9(b) shows the matching between two sclera templates belonging to different individuals. Df is dramatically higher (Df = 89.51%) when matching two sclera templates of the same user than when matching templates of different users (Df = 49%).
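The compact sketch below mirrors the pair test of (28), (29), and Algorithm 1 and the final ratio test of (30), with the inequalities written exactly as given above; the list-of-tuples representation of the putative pairs is an illustrative assumption.

```python
import numpy as np

def pair_decision(p_f, p_t, phi_f, phi_t, th_p=20.0, th_phi=5.0):
    """Accept a putative pair if both the distance test (28) and the angle test (29) pass."""
    d_p = np.linalg.norm(np.subtract(p_f, p_t)) <= th_p          # Euclidean distance test (28)
    d_phi = abs(phi_f - phi_t) >= th_phi                         # angle-difference test (29)
    return d_phi and d_p                                         # Algorithm 1

def match_templates(pairs, th_f=0.6):
    """Final decision (30): accept if the accepted-pair ratio reaches th_f.

    pairs: list of (p_f, p_t, phi_f, phi_t) putative pairs (illustrative structure).
    """
    if not pairs:
        return False, 0.0
    accepted = sum(pair_decision(*p) for p in pairs)
    ratio = accepted / len(pairs)
    return ratio >= th_f, ratio
```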

VI. EXPERIMENTAL RESULTS

A. Experimental Methodology

The evaluation of a biometric system basically includes calculating the false acceptance rate (FAR), the false rejection rate (FRR), and the equal error rate (EER) [38].

FAR and FRR are defined as

\[
\mathrm{FAR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}} \times 100\% \tag{31}
\]
\[
\mathrm{FRR} = \frac{\mathrm{FN}}{\mathrm{FN} + \mathrm{TP}} \times 100\% \tag{32}
\]

where FP is the false positive match, TP is the true positive match, FN is the false negative match, and TN is the true negative match. EER denotes the error rate at the threshold t where FAR(t) = FRR(t). In addition, we used the receiver operating characteristic (ROC) curve to evaluate the performance of our system, where the ROC is a plot of the FAR versus the genuine acceptance rate (GAR). GAR is calculated as

\[
\mathrm{GAR} = 1 - \mathrm{FRR}. \tag{33}
\]
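A small sketch of these measures is shown below, assuming arrays of genuine and impostor matching scores where a higher score indicates a better match; the grid search used to locate the EER threshold is an assumption of this sketch.

```python
import numpy as np

def eval_rates(genuine_scores, impostor_scores, threshold):
    """FAR (31), FRR (32), and GAR (33) at a given decision threshold (in percent)."""
    far = np.mean(np.asarray(impostor_scores) >= threshold) * 100.0   # FP / (FP + TN)
    frr = np.mean(np.asarray(genuine_scores) < threshold) * 100.0     # FN / (FN + TP)
    return far, frr, 100.0 - frr

def equal_error_rate(genuine_scores, impostor_scores, n_steps=1000):
    """EER: error rate at the threshold where FAR(t) = FRR(t), approximated on a grid."""
    ts = np.linspace(min(np.min(genuine_scores), np.min(impostor_scores)),
                     max(np.max(genuine_scores), np.max(impostor_scores)), n_steps)
    rates = np.array([eval_rates(genuine_scores, impostor_scores, t)[:2] for t in ts])
    i = int(np.argmin(np.abs(rates[:, 0] - rates[:, 1])))
    return (rates[i, 0] + rates[i, 1]) / 2.0
```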

B. Experimental Results Using the UBIRIS Database

We utilized the UBIRIS.v1 database [39] to evaluate the performance of our proposed method. This database is composed of 1877 eye RGB images collected from 241 individuals in two sessions. In the first session, which consists of 1214 images from 241 users, noise factors such as reflections, luminosity, and contrast were minimized as the images were captured inside a dark room. In the second session, which is composed of 663 images and involved only 135 of the 241 users, the capture location was changed in order to introduce a natural luminosity factor, which introduced more reflection, contrast, luminosity, and focus problems. The UBIRIS.v1 database includes images captured by a vision system with or without minimizing the collaboration of the subjects. Some images have poor quality conditions such as an irritated sclera, a closed eye, severe blurring, an uncentered or uncropped eye area, and poor lighting. Examples are shown in Fig. 10. The evaluation process has been carried out in single-session and multisession contexts. For the single-session context, we used three images for training and two for testing per user, whereas the multisession scenario uses session 1 images for training and session 2 images for testing and vice versa.

Fig. 10. UBIRIS.v1 database poor quality images. (a) Irritated sclera. (b) Occluded eye. (c) Partially occluded eye. (d) Uncropped eye image. (e) Insufficient sclera region. (f) High light exposure image.

TABLE II: CSV RATE ON UBIRIS.V1

1) Sclera Validation Method Evaluation: The performance of the proposed sclera validation is measured by computing the correct sclera validation (CSV) rate, where the correctly validated images are subjectively evaluated against the eye image decision in (10) and CSV is calculated as
\[
\mathrm{CSV} = \frac{\text{Number of CAS} + \text{Number of CRS}}{\text{Total number of images}} \times 100\% \tag{34}
\]

where CAS is the correctly accepted sclera image and CRS is the correctly rejected sclera image. As shown in Table II, our proposed method has removed the partially or fully occluded images and thus avoids expensive processing time and any human intervention. However, some limitations of our method can be noted: 1) the detection process depends on a skin detection algorithm which is specifically designed for RGB images and thus cannot be used on grayscale images; and 2) the process of finding Arc_r and Arc_l depends strongly on the iris segmentation algorithm for the iris center, so a failure in extracting the iris center will cause errors in the validation process.

2) Active Contour Methods for Sclera Segmentation Comparison: We compared our proposed sclera segmentation method using active contours without edges with state-of-the-art active contour models such as the geodesic [30], balloons [40], and gradient vector flow [41] active contours in terms of accuracy and complexity, as shown in Table III. We used MATLAB (version R2013a) on a personal computer with an Intel Core i5 3.0 GHz processor and 8.0 GB RAM for implementing these algorithms. Our proposed algorithm is intended for computer-based applications and thus the hardware complexity is not discussed. For a fair comparison, the number of iterations in all models is set to 200 and the sclera images are down-sampled with a factor of 0.2. For the subjective evaluation results shown in Fig. 12, our proposed method overcomes the active contour model problem by not relying on the edge function, which depends on the image gradient to stop the curve evolution. These drawbacks appear severely in noisy images and in the case of an irregular sclera shape.

TABLE III: COMPARISON OF SCLERA SEGMENTATION USING DIFFERENT ACTIVE CONTOUR MODELS IN TERMS OF ACCURACY AND COMPLEXITY ON UBIRIS.V1

Fig. 11. R-square values for the different active contour methods evaluated at iteration i = 200.

In terms of accuracy, our adapted method achieved 98.65% for session 1 and 95.3% for session 2, which is significantly higher than the other active contour methods. In addition, our proposed method presented a significantly lower processing time among these methods (0.003 and 0.010 s for sessions 1 and 2, respectively).

In addition, we utilized the supervised evaluation method used by Proenca [42] to evaluate our proposed sclera segmentation method. First, we manually segmented and labeled 200 sclera images from both sessions. Then, R², an objective measure of goodness-of-fit, is calculated as

\[
R^2 = 1 - \frac{\sum \left( y_i - \hat{y}_i \right)^2}{\sum \left( y_i - \bar{y} \right)^2} \tag{35}
\]

where y_i are the desired sclera pixel values resulting from the manual segmentation, ȳ is their mean, and ŷ_i are the actual sclera pixel values resulting from the above-mentioned active contour segmentation methods. As shown in Fig. 11, our proposed method has superiority over the compared methods, supporting that active contours without edges, which has no stopping edge function, is reliable for sclera segmentation.
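A short sketch of (35) on binary masks is given below; treating the manually labeled mask and the automatically segmented mask as flattened 0/1 pixel vectors is an assumption of this sketch.

```python
import numpy as np

def r_square(manual_mask, predicted_mask):
    """Goodness-of-fit R^2 (35) between a manual sclera mask and a segmented mask."""
    y = np.asarray(manual_mask, dtype=float).ravel()          # desired sclera pixel values
    y_hat = np.asarray(predicted_mask, dtype=float).ravel()   # actual segmented pixel values
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```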

3) State-of-the-Art Sclera Recognition Method Comparison:


Fig. 12. Sclera regions extracted using different active contour models applied on the same image. (a) Proposed active contours without edges. (b) Geodesic. (c) Balloons. (d) Gradient vector flow active contours.

Fig. 13. UBIRIS.v1 ROC curve using single-session and multisession comparisons.

We compared our proposed sclera recognition method with the state-of-the-art methods on the UBIRIS.v1 database, as shown in Table IV. In addition, we plotted the ROC curves for these comparison scenarios in Fig. 13. The comparison in terms of EER includes single-session 1 and 2, session 1 training with session 2 testing, and session 2 training with session 1 testing. In terms of human supervision, the authors in [6], [11], [14], [17], and [18], whose learning-based information is shown in Table I, discarded session 2 manually as this session has more noisy images which affect the system evaluation, while Das et al. [19] stated that they used all the images, even the occluded eye images, but without including any occluded eye detection method. For the segmentation speed in terms of active contours, Das et al. [19] used a balloon-based active contour with a segmentation speed of 0.14 s.

In contrast, we proposed a method for occluded and partially occluded eye images which increases the robustness of the system.

TABLE IV: STATE-OF-THE-ART COMPARISON OF RECENT WORK ON THE UBIRIS.V1 DATABASE

TABLE V: EER COMPARISON OF IRIS AND SCLERA RECOGNITION SYSTEMS ON THE UBIRIS.V1 DATABASE

In addition, the segmentation speed of the proposed sclera segmentation is significantly lower (0.003 s). In terms of the nonlinear blood vein movement effect, the methods in [14], [17], and [19] neither discuss nor suggest any method to overcome this limitation, whereas our proposed system has an adaptive user template registration to extract the most homogeneous features for a final user template.

4) Sclera Versus Iris Recognition Using the UBIRIS Database: We compared our system with iris recognition systems using the UBIRIS.v1 database in order to analyze both iris and sclera recognition systems using visible-wavelength images. For iris recognition, the system suggested by Proença and Alexandre [43] and the traditional Daugman method were used, while for the sclera recognition system, the Zhou et al. [6] method was used in the comparison. We used 800 images for evaluating our system since these systems chose the 800 images with the best quality and least noise to evaluate their accuracy; in addition, we evaluated the proposed system on 800 randomly selected images. As shown in Table V, our system results are better (EER = 1.23%) compared to the Daugman method (EER = 3.72%) and to the Proença and Alexandre method (EER = 2.38%) using the iris in the visible wavelength.


Fig. 14. UTIRIS right and left eye images. The upper row represents the right eye images whereas the lower row represents the left eye images.

TABLE VI: COMPARISON OF SCLERA SEGMENTATION USING DIFFERENT ACTIVE CONTOUR MODELS IN TERMS OF ACCURACY AND COMPLEXITY ON THE UTIRIS DATABASE

In addition, our system results are promising compared to Zhou's method for both the best and the random 800-image selections using the proposed sclera recognition system.

C. Experimental Results Using the UTIRIS Database

We used the UTIRIS database [44] to evaluate the accuracy of our proposed segmentation method. The UTIRIS database consists of two distinct sessions of visible-wavelength and near-infrared images for 79 individuals with five images from both left and right eyes, as shown in Fig. 14. We discarded the near-infrared images as our focus is on visible-wavelength images. The dimension of the RGB images is 2048 × 1360. Some of these images have off-angle iris positions and have been discarded manually. First, we resized the RGB images to 600 × 800 using bilinear interpolation and utilized the same evaluation process parameters as on UBIRIS.v1. As shown in Table VI, the accuracy of our proposed active contours without edges method is decreased compared with the UBIRIS.v1 database because some of the UTIRIS images are defocused in the sclera regions, as this database was created for evaluating the role of iris pigmentation in an iris recognition system; therefore, the sclera regions were not considered important in the authors' study. However, the processing time remains significantly lower compared to the other active contour models. In addition, the EER has been calculated (EER = 6.67%) and the ROC curve is shown in Fig. 15.

D. Computational Load of the Proposed System

This section provides the computational load of the proposed system in each step, as shown in Table VII. The properties of the simulation program and PC are mentioned in Section VI-B2.

Fig. 15. UTIRIS database ROC curve.

TABLE VII: TIME COMPLEXITY OF THE PROPOSED SYSTEM

The computational complexity of each step is calculated by running the simulation program 20 times using both the UBIRIS.v1 and UTIRIS databases, and the average processing time is recorded.

E. Empirical Parameter Tuning Analysis

To analyze the tuning criteria for the system controlling parameters, we summarized these variables according to their location as follows.

1) Occluded Eye Detection for Sclera Validation: This part has three controlling parameters: θ, per_nsk_up, and per_nsk_down. For θ, increasing the angle width provides more accurate results at the expense of processing time, while for per_nsk_up and per_nsk_down, the higher the threshold values, the more sclera samples are rejected.
2) Sclera Segmentation: This part has two controlling parameters: μ and the number of iterations i. Increasing i means more curve convergence, with the possibility that the fitting curve may pass through the sclera border, whereas μ is discussed in Section II-C2.
3) Sclera Matching and Decision Process: Finally, the decision process is determined by first detecting pair sets depending on th_p and th_φ, whereas th_f sets the final decision. Increasing the values of th_p and th_φ is more likely to produce false pair sets, while th_f determines the FRR and FAR: the higher th_f, the higher the FRR and the lower the FAR, and vice versa.

VII. CONCLUSION

In this paper, a novel occluded eye detection method has been proposed to discard the noisy images.


Sclera segmentation has been achieved through an adaptive active contours without edges method. In addition, a new method for blood vein extraction and mapping has been suggested based on the IUWT. For the nonlinear movement of the blood veins, a new user template registration is proposed to overcome this effect and create a robust sclera feature template. The results using the UBIRIS.v1 and UTIRIS databases showed that our proposed method has a lower processing time with high segmentation accuracy. In addition, the proposed method minimizes the human supervision during the recognition process, introducing more robustness. This paper has not evaluated the use of off-angle sclera images, which needs to be considered as future work. In addition, considering different versions of the RANSAC algorithm to select the homogeneous feature sets and comparing the efficiency and processing time of each version is recommended.

REFERENCES

[1] M. Tistarelli and M. S. Nixon, "Advances in biometrics," in Proc. 3rd Int. Conf. (ICB), vol. 5558. Alghero, Italy, Jun. 2009, pp. 1260–1269.
[2] R. B. Hill, "Retina identification," in Biometrics: Personal Identification in Networked Society. New York, NY, USA: Springer Science+Business Media, 2002, pp. 123–141.
[3] C.-L. Lin and K.-C. Fan, "Biometric verification using thermal images of palm-dorsa vein patterns," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 2, pp. 199–213, Feb. 2004.
[4] N. Miura, A. Nagasaka, and T. Miyatake, "Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification," Mach. Vis. Appl., vol. 15, no. 4, pp. 194–203, 2004.
[5] R. Derakhshani, A. Ross, and S. Crihalmeanu, "A new biometric modality based on conjunctival vasculature," in Proc. Artif. Neural Netw. Eng. (ANNIE), 2006, pp. 1–8.
[6] Z. Zhou, E. Y. Du, N. L. Thomas, and E. J. Delp, "A new human identification method: Sclera recognition," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 42, no. 3, pp. 571–583, May 2012.
[7] C. Oyster, The Human Eye: Structure and Function. Sunderland, MA, USA: Sinauer Associates Inc., 1999. [Online]. Available: http://books.google.co.uk/books?id=n9yoJQAACAAJ
[8] R. M. Broekhuyse, "The lipid composition of aging sclera and cornea," Ophthalmologica, vol. 171, no. 1, pp. 82–85, 1975.
[9] R. Derakhshani and A. Ross, "A texture-based neural network classifier for biometric identification using ocular surface vasculature," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Orlando, FL, USA, 2007, pp. 2982–2987.
[10] S. Crihalmeanu, A. Ross, and R. Derakhshani, "Enhancement and registration schemes for matching conjunctival vasculature," in Proc. 3rd IAPR/IEEE Int. Conf. Biometrics (ICB), Alghero, Italy, 2009, pp. 1240–1249.
[11] N. L. Thomas, Y. Du, and Z. Zhou, "A new approach for sclera vein recognition," in Proc. SPIE Defen. Security Sens., Orlando, FL, USA, 2010, Art. ID 770805.
[12] Z. Zhou, E. Y. Du, N. L. Thomas, and E. J. Delp, "A comprehensive approach for sclera image quality measure," Int. J. Biometrics, vol. 5, no. 2, pp. 181–198, 2013.
[13] Z. Zhou, E. Y. Du, C. Belcher, N. L. Thomas, and E. J. Delp, "Quality fusion based multimodal eye recognition," in Proc. IEEE Int. Conf. Syst., Man, Cybern. (SMC), Seoul, Korea, 2012, pp. 1297–1302.
[14] K. Oh and K.-A. Toh, "Extracting sclera features for cancelable identity verification," in Proc. 5th IAPR Int. Conf. Biometrics (ICB), New Delhi, India, 2012, pp. 245–250.
[15] M. H. Khosravi and R. Safabakhsh, "Human eye sclera detection and tracking using a modified time-adaptive self-organizing map," Pattern Recognit., vol. 41, no. 8, pp. 2571–2593, 2008.
[16] S. Crihalmeanu and A. Ross, "Multispectral scleral patterns for ocular biometric recognition," Pattern Recognit. Lett., vol. 33, no. 14, pp. 1860–1869, 2012.
[17] A. Das, U. Pal, M. A. F. Ballester, and M. Blumenstein, "Sclera recognition using dense-SIFT," in Proc. 13th Int. Conf. Intell. Syst. Design Appl. (ISDA), Bangi, Malaysia, 2013, pp. 74–79.
[18] Y. Lin, E. Y. Du, Z. Zhou, and N. L. Thomas, "An efficient parallel approach for sclera vein recognition," IEEE Trans. Inf. Forensics Security, vol. 9, no. 2, pp. 147–157, Feb. 2014.
[19] A. Das, U. Pal, M. A. F. Ballester, and M. Blumenstein, "A new efficient and adaptive sclera recognition system," in Proc. IEEE Symp. Comput. Intell. Biometrics Identity Manage. (CIBIM), Orlando, FL, USA, 2014, pp. 1–8.
[20] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Trans. Image Process., vol. 10, no. 2, pp. 266–277, Feb. 2001.
[21] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, Jan. 2004.
[22] J. Daugman, "New methods in iris recognition," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 5, pp. 1167–1175, Oct. 2007.
[23] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Trans. Signal Process., vol. 46, no. 4, pp. 1185–1188, Apr. 1998.
[24] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.
[25] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.
[26] J. Kovac, P. Peer, and F. Solina, "2D versus 3D colour space face detection," in Proc. 4th EURASIP Conf. Focused Video/Image Process. Multimedia Commun., vol. 2. Zagreb, Croatia, 2003, pp. 449–454.
[27] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis., vol. 1, no. 4, pp. 321–331, 1988.
[28] V. Caselles, F. Catté, T. Coll, and F. Dibos, "A geometric model for active contours in image processing," Numer. Math., vol. 66, no. 1, pp. 1–31, 1993.
[29] R. Malladi, J. A. Sethian, and B. C. Vemuri, "Topology-independent shape modeling scheme," in Proc. SPIE Int. Symp. Opt. Imag. Instrum., 1993, pp. 246–258.
[30] V. Caselles, R. Kimmel, and G. Sapiro, "Geodesic active contours," Int. J. Comput. Vis., vol. 22, no. 1, pp. 61–79, 1997.
[31] D. Mumford and J. Shah, "Optimal approximations by piecewise smooth functions and associated variational problems," Commun. Pure Appl. Math., vol. 42, no. 5, pp. 577–685, 1989.
[32] C. G. Owen, T. J. Ellis, A. R. Rudnicka, and E. G. Woodward, "Optimal green (red-free) digital imaging of conjunctival vasculature," Ophthal. Physiol. Opt., vol. 22, no. 3, pp. 234–243, 2002.
[33] J.-L. Starck, J. Fadili, and F. Murtagh, "The undecimated wavelet decomposition and its reconstruction," IEEE Trans. Image Process., vol. 16, no. 2, pp. 297–309, Feb. 2007.
[34] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. Alvey Vis. Conf., vol. 15. Manchester, U.K., 1988, pp. 147–151.
[35] M. Felsberg and G. Sommer, "The monogenic signal," IEEE Trans. Signal Process., vol. 49, no. 12, pp. 3136–3144, Dec. 2001.
[36] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
[37] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge, U.K.: Cambridge Univ. Press, 2003.
[38] T. Dunstone and N. Yager, Biometric System and Data Analysis: Design, Evaluation, and Data Mining. New York, NY, USA: Springer, 2008.
[39] H. Proença and L. A. Alexandre, "UBIRIS: A noisy iris image database," in Image Analysis and Processing—ICIAP. Berlin, Germany: Springer, 2005, pp. 970–977.
[40] L. D. Cohen, "On active contour models and balloons," CVGIP: Image Understand., vol. 53, no. 2, pp. 211–218, 1991.
[41] N. Paragios, O. Mellina-Gottardo, and V. Ramesh, "Gradient vector flow fast geodesic active contours," in Proc. 8th IEEE Int. Conf. Comput. Vis. (ICCV), vol. 1. Vancouver, BC, Canada, 2001, pp. 67–73.
[42] H. Proenca, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1502–1516, Aug. 2010.
[43] H. Proença and L. A. Alexandre, "Toward noncooperative iris recognition: A classification approach using multiple signatures," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 607–612, Apr. 2007.
[44] M. S. Hosseini, B. N. Araabi, and H. Soltanian-Zadeh, "Pigment melanin: Pattern for iris recognition," IEEE Trans. Instrum. Meas., vol. 59, no. 4, pp. 792–804, Apr. 2010.


S. Alkassar (S'14) received the B.Sc. degree in computer engineering from the University of Technology, Baghdad, Iraq, in 2006, and the M.Sc. degree in communications and signal processing from Newcastle University, Newcastle upon Tyne, U.K., in 2011, where he is currently pursuing the Ph.D. degree with the School of Electrical and Electronic Engineering.

His current research interests include signal processing, image processing, biometrics security, and pattern recognition.

W. L. Woo (M'09–SM'11) was born in Malaysia. He received the B.Eng. degree (First Class Hons.) in electrical and electronics engineering and the Ph.D. degree in statistical signal processing from Newcastle University, Newcastle upon Tyne, U.K., in 1993 and 1998, respectively.

He is currently a Senior Lecturer with the School of Electrical and Electronic Engineering, Newcastle University. He is currently the Director of Operations with the International Branch of Newcastle University, Singapore. He acts as a consultant to a number of industrial companies that involve the use of statistical signal and image processing techniques. He has an extensive portfolio of relevant research supported by a variety of funding agencies. His current research interests include mathematical theory and algorithms for nonlinear signal and image processing, machine learning for signal processing, blind source separation, multidimensional signal processing, and signal/image deconvolution and restoration. He has published over 250 papers on the above topics in various journals and international conference proceedings.

Dr. Woo was a recipient of the IEE Prize and the British Scholarship in 1998 for his research work. He currently serves on the editorial boards of several international signal processing journals. He actively participates in international conferences and workshops, and serves on their organizing and technical committees. He is a Senior Member of the Institution of Engineering and Technology.

S. S. Dlay received the B.Sc. degree (Hons.) in electrical and electronic engineering and the Ph.D. degree in VLSI design from Newcastle University, Newcastle upon Tyne, U.K., in 1979 and 1984, respectively.

In recognition of his major achievements, he was appointed to the Personal Chair of Signal Processing Analysis in 2006. He has authored over 250 research papers on topics ranging from biometrics and security to biomedical signal processing and the implementation of signal processing architectures.

Prof. Dlay serves on several editorial boards and has played an active role in numerous international conferences in terms of serving on technical and advisory committees as well as organizing special sessions.

J. A. Chambers (S'83–M'90–SM'98–F'11) received the Ph.D. and D.Sc. degrees in signal processing from the Imperial College of Science, Technology and Medicine (Imperial College London), London, U.K., in 1990 and 2014, respectively.

He is a Professor of Signal and Information Processing with the School of Electrical and Electronic Engineering, Newcastle University, Newcastle upon Tyne, U.K.

Prof. Chambers has served as an Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING from 1997 to 1999 and 2004 to 2007, and as a Senior Area Editor from 2011 to 2015. He is a Fellow of the Royal Academy of Engineering, U.K.