Contactless and Pose Invariant Biometric Identification Using Hand Surface
Vivek Kanhangad, Ajay Kumar, Senior Member, IEEE, and David Zhang, Fellow, IEEE


Page 1

Contactless and Pose Invariant Biometric

Identification Using Hand Surface

Vivek Kanhangad, Ajay Kumar, Senior Member, IEEE, and David Zhang, Fellow, IEEE

Page 2

Image acquisition approaches:
1) Constrained and contact-based
2) Unconstrained and contact-based
3) Unconstrained and contact-free

The key contributions: (1) a fully automatic hand identification approach; (2) the proposed dynamic fusion of match scores.

INTRODUCTION

Page 3

INTRODUCTION

Page 4

3-D AND 2-D HAND POSE NORMALIZATION

Page 5

Locate the palm center using the distance transform and detect the local minima points (finger valleys).

3-D AND 2-D HAND POSE NORMALIZATION
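A minimal sketch of the palm-center step described above, assuming the hand has already been segmented into a binary mask; the valley detection shown here (local minima of the contour's radial distance from the palm center) is a simplified stand-in, not necessarily the paper's exact procedure:

```python
import cv2
import numpy as np

def palm_center(hand_mask):
    """Palm center as the point farthest from the hand boundary.

    hand_mask: uint8 binary image (hand = 255, background = 0).
    Returns ((x, y), radius) at the distance-transform maximum.
    """
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, max_val, _, max_loc = cv2.minMaxLoc(dist)
    return max_loc, max_val

def finger_valleys(hand_mask, center, num=4):
    """Rough valley detection: local minima of the contour's radial
    distance from the palm center (illustrative only)."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1)   # (N, 2) points
    radii = np.linalg.norm(contour - np.asarray(center), axis=1)
    minima = [i for i in range(1, len(radii) - 1)
              if radii[i] < radii[i - 1] and radii[i] < radii[i + 1]]
    minima.sort(key=lambda i: radii[i])                       # deepest first
    return contour[minima[:num]]
```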

Page 6

3-D AND 2-D HAND POSE NORMALIZATION

3-D plane fitting: the palm plane z = α1 + α2x + α3y is fit to the 3-D palm points using iteratively reweighted least squares (IRLS).

α = [α1, α2, α3]^T,  Xi = [1, xi, yi],  ri = zi - Xiα
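A sketch of the IRLS plane fit under these definitions; the Huber-style reweighting used here is an assumption for illustration (the paper specifies its own weighting function). The fitted plane's normal, used on the next slide, is proportional to [α2, α3, -1].

```python
import numpy as np

def fit_palm_plane_irls(x, y, z, iters=20, eps=1e-6):
    """Fit the palm plane z = a1 + a2*x + a3*y by iteratively reweighted
    least squares, down-weighting large residuals so that points off the
    palm plane (fingers, noise) have little influence.
    Returns alpha = [a1, a2, a3]."""
    X = np.column_stack([np.ones_like(x), x, y])       # X_i = [1, x_i, y_i]
    w = np.ones_like(z, dtype=float)
    alpha = np.zeros(3)
    for _ in range(iters):
        sw = np.sqrt(w)
        alpha_new, *_ = np.linalg.lstsq(sw[:, None] * X, sw * z, rcond=None)
        r = z - X @ alpha_new                           # r_i = z_i - X_i alpha
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + eps
        c = 1.345 * scale                               # Huber tuning constant
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))
        if np.allclose(alpha_new, alpha, atol=1e-9):
            break
        alpha = alpha_new
    return alpha
```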

Page 7

3-D AND 2-D HAND POSE NORMALIZATION

The normal vector to the fitted plane is n = [nx, ny, nz]; the pose correction angles are θx = -arctan(ny/nz) and θy = arctan(nx/nz).
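A sketch of the pose correction itself: the rotation that brings the estimated palm normal onto the z-axis is applied to the hand point cloud. This version builds the rotation directly with Rodrigues' formula, which is equivalent to the successive x- and y-axis tilts given above (up to sign conventions).

```python
import numpy as np

def pose_correct(points, normal):
    """Rotate the hand point cloud so the fitted palm-plane normal aligns
    with the z-axis (palm parallel to the x-y plane).

    points: (N, 3) array of [x, y, z]; normal: [nx, ny, nz].
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                        # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), float(np.dot(n, z))
    if s < 1e-12:                             # already aligned, or flipped
        return points if c > 0 else points * np.array([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    R = np.eye(3) + K + K @ K * ((1 - c) / s**2)   # Rodrigues' formula
    return points @ R.T
```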

Page 8

3-D AND 2-D HAND POSE NORMALIZATION

The pose corrected images are resampled using bicubic interpolation, followed by hole filling.

Fig. 5. (a) Sample intensity images with varying pose in our database. (b) Corresponding pose corrected and resampled images. (c) Pose corrected images after hole filling.
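A sketch of the resampling and hole-filling step, using SciPy's scattered-data interpolation (its piecewise-cubic method stands in for the bicubic interpolation mentioned above), with the remaining holes filled by nearest-neighbour values:

```python
import numpy as np
from scipy.interpolate import griddata

def resample_and_fill(xyz, grid_step=1.0):
    """Resample pose-corrected 3-D points onto a uniform x-y grid and
    fill the holes (NaNs) left by occluded or missing data.

    xyz: (N, 3) pose-corrected points. Returns a 2-D depth map.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    gx, gy = np.meshgrid(np.arange(x.min(), x.max(), grid_step),
                         np.arange(y.min(), y.max(), grid_step))
    depth = griddata((x, y), z, (gx, gy), method='cubic')    # resampling
    holes = np.isnan(depth)
    if holes.any():                                          # hole filling
        depth[holes] = griddata((x, y), z, (gx[holes], gy[holes]),
                                method='nearest')
    return depth
```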

Page 9

A. 3-D Palmprint

3-D palmprints are extracted from the range images of the hand.

Compute the shape index at every point on the palm surface; each point can then be classified into one of nine surface types.

The index of the surface category is then binary encoded using four bits to obtain a SurfaceCode representation.

The computation of similarity between two feature matrices (SurfaceCodes) is based upon the normalized Hamming distance.

HAND FEATURE EXTRACTION
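A sketch of the 3-D palmprint SurfaceCode described above, assuming principal-curvature maps k1 >= k2 have already been estimated on the palm surface; the shape-index formula and 4-bit encoding follow the description, while the exact quantization boundaries are an assumption.

```python
import numpy as np

def surface_code(k1, k2):
    """Quantize the shape index into nine surface types and encode each
    type index in four bits (four binary bit planes).

    k1, k2: principal-curvature maps (k1 >= k2) of the palm surface.
    Returns a boolean array of shape (4, H, W): the SurfaceCode.
    """
    # Shape index rescaled to [0, 1].
    si = 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
    surf_type = np.clip((si * 9).astype(np.uint8), 0, 8)      # 9 surface types
    bits = np.unpackbits(surf_type[None, ...], axis=0, bitorder='little')[:4]
    return bits.astype(bool)

def surface_code_distance(code_a, code_b, mask=None):
    """Normalized Hamming distance between two SurfaceCodes
    (0 = identical, 1 = completely different)."""
    diff = code_a ^ code_b
    if mask is not None:                     # ignore invalid (hole) pixels
        diff = diff[:, mask]
    return float(diff.mean())
```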

Page 10

A. 3-D Palmprint

Page 11

B. 2-D Palmprint

Use a bank of six Gabor filters oriented in different directions.

The index of the dominant filter orientation at each pixel is binary encoded to form a feature representation (CompCode).

The similarity between two CompCodes is computed using the normalized Hamming distance.
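A CompCode-style sketch: the dominant Gabor orientation index at each pixel becomes the feature, and two codes are compared with a normalized, wrap-around Hamming-type distance. The filter parameters below are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def comp_code(palm_img, num_orients=6, ksize=35, sigma=5.6, lam=18.0, gamma=0.7):
    """Index of the Gabor orientation with the strongest (most negative)
    even-filter response at each pixel. Returns an int map in [0, num_orients)."""
    img = palm_img.astype(np.float32)
    responses = []
    for k in range(num_orients):
        theta = k * np.pi / num_orients
        # Even (cosine) Gabor kernel, psi = 0.
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lam, gamma, 0)
        responses.append(cv2.filter2D(img, -1, kern))
    return np.argmin(np.stack(responses), axis=0)

def comp_code_distance(code_a, code_b, num_orients=6):
    """Normalized distance between two CompCodes, with wrap-around handling
    of the orientation indices (0 = identical)."""
    d = np.abs(code_a.astype(int) - code_b.astype(int))
    d = np.minimum(d, num_orients - d)
    return float(d.mean()) / (num_orients // 2)
```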

Page 12

C. 3-D Hand Geometry

20 cross-sectional finger segments are extracted at uniformly spaced distances along the finger length.

Curvature and orientation features are computed for each cross-sectional segment.
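A simple sketch of per-segment features, assuming each cross-sectional segment is given as an ordered set of 2-D points sampled across the finger width; taking curvature from a quadratic fit and orientation from the principal direction is only one plausible way to realize "curvature and orientation", not the paper's exact definition.

```python
import numpy as np

def cross_section_features(curve):
    """Curvature and orientation of one cross-sectional finger segment.

    curve: (M, 2) ordered points of the segment. Curvature is evaluated at
    the segment midpoint from a quadratic fit; orientation is the angle of
    the segment's principal (largest-variance) direction.
    """
    t = np.linspace(-1.0, 1.0, len(curve))
    coef_u = np.polyfit(t, curve[:, 0], 2)               # u(t), quadratic fit
    coef_v = np.polyfit(t, curve[:, 1], 2)               # v(t), quadratic fit
    d1 = np.array([np.polyval(np.polyder(coef_u), 0.0),  # first derivatives
                   np.polyval(np.polyder(coef_v), 0.0)])
    d2 = np.array([2.0 * coef_u[0], 2.0 * coef_v[0]])    # second derivatives
    curvature = abs(d1[0] * d2[1] - d1[1] * d2[0]) / (np.linalg.norm(d1) ** 3 + 1e-12)
    centered = curve - curve.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    orientation = float(np.arctan2(vt[0, 1], vt[0, 0]))
    return float(curvature), orientation
```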

Page 13

D. 2-D Hand Geometry

The hand geometry features include finger lengths and widths, finger perimeter, finger area, and palm width.

The matching score between two hand geometry feature vectors is computed using the Euclidean distance.
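A minimal sketch of this matching step, assuming the geometry features are concatenated into fixed-length vectors in the same order for both hands:

```python
import numpy as np

def hand_geometry_score(feat_a, feat_b):
    """Euclidean distance between two 2-D hand-geometry feature vectors
    (finger lengths and widths, finger perimeters, finger areas, palm
    width); lower scores indicate a better match."""
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    return float(np.linalg.norm(a - b))
```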

Page 14

Weighted sum rule based fusion: dynamically weight each match score based upon the quality of the corresponding modality. When the quality of the hand geometry modality is low, ignore the hand geometry information and rely only on the palmprint match scores.

w1, w2, and w3 are empirically set to 0.4, 0.4, and 0.2.

DYNAMIC FUSION
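A sketch of the dynamic fusion rule above, with several labeled assumptions: the weights are taken to correspond to the 2-D palmprint, 3-D palmprint, and hand geometry scores respectively, the geometry-quality decision is represented by a boolean flag derived from the estimated pose, and the palmprint-only fallback renormalizes the remaining weights.

```python
def fuse_scores(s_palm_2d, s_palm_3d, s_geom, geom_reliable,
                weights=(0.4, 0.4, 0.2)):
    """Dynamic weighted-sum fusion of match scores (distances; lower = better).

    geom_reliable: False when the estimated hand pose suggests the finger
    edges are occluded; the geometry score is then ignored and only the
    palmprint scores are combined (renormalization is an assumption here).
    """
    w1, w2, w3 = weights
    if geom_reliable:
        return w1 * s_palm_2d + w2 * s_palm_3d + w3 * s_geom
    return (w1 * s_palm_2d + w2 * s_palm_3d) / (w1 + w2)
```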

Page 15

DYNAMIC FUSION

Page 16

V. EXPERIMENTAL RESULTS

A. Dataset Description

The database currently contains 1140 right hand images (3-D and the corresponding 2-D) acquired from 114 subjects.

Page 17

A leave-one-out strategy is employed.

In order to generate genuine match scores, each sample is matched against all the remaining samples of the same user.

B. Verification Results
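A sketch of how genuine and impostor scores could be generated under this protocol; the impostor pairing used here (all cross-subject pairs) is an assumption, since the slide only describes the genuine matches.

```python
import numpy as np

def genuine_impostor_scores(features, labels, match):
    """All-pairs matching: scores between samples of the same subject are
    genuine, scores between different subjects are impostor.

    features: list of feature representations; labels: subject ID per
    sample; match(a, b): match score (distance) between two samples.
    """
    genuine, impostor = [], []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            score = match(features[i], features[j])
            (genuine if labels[i] == labels[j] else impostor).append(score)
    return np.asarray(genuine), np.asarray(impostor)
```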

Page 18

B. Verification Results

Page 19

B. Verification Results

Fig. 11. ROC curves for (a) the 3-D hand/finger geometry and (b) the 2-D hand geometry matching before and after pose correction. (c) ROC curves for the combination of 2-D palmprint, 3-D palmprint, and 3-D hand geometry matching scores using the weighted sum rule and the proposed dynamic approach.

Page 20

B. Verification Results

Page 21

The palmprint features (2-D as well as 3-D) are more suitable to be utilized than the hand geometry features.

The hand (finger) geometry features suffer from loss of crucial information due to occlusion around the finger edges.

The proposed dynamic combination approach achieves a relative performance improvement of 60% in terms of EER over the case when the features are combined using the weighted sum rule.

C. Discussion
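For reference, a small sketch of how the EER behind that comparison can be computed from the genuine and impostor score distributions, assuming distance scores where lower means a better match:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false accept rate (impostor
    scores accepted) equals the false reject rate (genuine scores rejected),
    for distance scores with an accept-if-below-threshold rule."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    idx = int(np.argmin(np.abs(far - frr)))
    return (far[idx] + frr[idx]) / 2.0
```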

Page 22

The slow acquisition speed, cost, and size of this scanner are its main drawbacks.

As part of our future work, we intend to investigate alternative 3-D imaging technologies that can overcome these drawbacks.

We are also exploring a dynamic feature level combination in order to further improve the performance.

CONCLUSION