Face Recognition in Hyperspectral Images
Z. Pan, G. Healey, M. Prasad and B. Tromberg, University of California
Published in IEEE Trans. on PAMI, Vol. 25, No. 12, December 2003.
Introduction
What is a hyperspectral image?
RGB: Red, Green, Blue channels
0.4–0.7 µm (visible electromagnetic spectrum)
Introduction
UV = Ultraviolet
Vis = Visible
NIR = Near infrared
SWIR = Short wavelength infrared
MWIR = Medium wavelength infrared
LWIR = Long wavelength infrared
What is a hyperspectral image?
Introduction
“Hyperspectral cameras provide useful discriminants for human face that cannot be obtained by other imaging methods.”
Introduction
• The utility of using near-infrared (NIR) hyperspectral images for face recognition is studied;
• Spectral measurements over the NIR allow sensing subsurface tissue structures;
• Subsurface tissue:
– Significantly different from person to person,
– Relatively stable over time,
– Nearly invariant to face orientations and expressions.
Introduction
“Significantly different from person to person”
Introduction
“Nearly invariant to face orientations”
Data Collection
• 200 subjects;
• 31 spectral bands (0.7–1.0 µm);
• Tunable filter;
• 468×498 spatial resolution;
• Uniform illumination;
• 10 seconds per image.
Data Collection
Data Collection
7 images for each subject and at most 5 regions (17×17 pixels) sampled:
20 subjects took part in multiple imaging sessions:
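The region sampling described above can be sketched as follows; the function name and the (height, width, bands) array layout are assumptions for illustration, not from the paper:

```python
import numpy as np

def region_spectrum(cube, row, col, size=17):
    """Mean spectrum of a size x size region of a hyperspectral cube.

    cube: array of shape (H, W, B), one image plane per spectral band;
    (row, col): top-left corner of the sampled square region.
    Returns a length-B vector of average intensities, one per band.
    """
    patch = cube[row:row + size, col:col + size, :]
    return patch.mean(axis=(0, 1))

# Toy cube with the paper's dimensions: 468 x 498 pixels, 31 NIR bands.
rng = np.random.default_rng(0)
cube = rng.random((468, 498, 31))
spec = region_spectrum(cube, 100, 200)
print(spec.shape)  # (31,)
```

Averaging each 17×17 patch down to one spectrum per region is what makes the later region-to-region distances cheap to compute.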
Experiments
Setup
– Cumulative Match Characteristic (CMC) curves.
– Minimum Mahalanobis distance from query i to gallery face j:

D(i, j) = Σx ωx Dx(i, j)

where ωx is 1 or 0 depending on whether region x was sampled, and Dx(i, j) is computed from the average intensities of the sampled region x of i and j.
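The matching rule above can be sketched as a minimal example; the per-region distance here is a plain squared Euclidean distance between average spectra standing in for the paper's Mahalanobis distance, and all names are illustrative:

```python
import numpy as np

def match_score(query_spectra, gallery_spectra, sampled):
    """Distance between a query and one gallery face, summed over regions.

    query_spectra, gallery_spectra: (R, B) mean spectra for R face regions;
    sampled: boolean mask of length R (omega_x: was region x sampled?).
    Squared Euclidean distance per region stands in for the paper's
    Mahalanobis distance D_x(i, j).
    """
    d = np.sum((query_spectra - gallery_spectra) ** 2, axis=1)  # D_x(i, j)
    return np.sum(d[sampled])  # sum of omega_x * D_x(i, j)

def rank_of_true_match(query, gallery, sampled, true_id):
    """Rank of the correct gallery entry, as tallied for CMC curves."""
    scores = [match_score(query, g, sampled) for g in gallery]
    order = np.argsort(scores)  # ascending: best match first
    return int(np.where(order == true_id)[0][0]) + 1

# Toy data: 3 gallery subjects, 5 regions, 31 bands; query is subject 1.
rng = np.random.default_rng(1)
gallery = rng.random((3, 5, 31))
query = gallery[1] + 0.001 * rng.random((5, 31))
sampled = np.array([True, True, True, False, False])
print(rank_of_true_match(query, gallery, sampled, true_id=1))  # 1
```

A CMC curve at rank n is then simply the fraction of queries whose true match lands at rank n or better.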
First Experiment
- Verification of the utility of various tissue types for hyperspectral face recognition;
- Only frontal images were used (Gallery: fg; Query: fa, fb).
First Experiment
Better performance is achieved when different tissues are combined
First Experiment
Changes in expression do not significantly impact the hyperspectral discriminants
First Experiment
The forehead is the region least affected by changes of expression
Second Experiment
- Examination of the impact of changes in face orientation on hyperspectral face recognition;
- Only frontal gallery images were used (Gallery: fg; Query: all other poses).
Second Experiment
45° – 75% for n = 1 and 94% for n = 5;
90° – 80% for n = 10.
The distance function assumes that tissue spectral reflectance does not depend on photometric angles.
Second Experiment
Performance degrades as the size of the subset considered increases.
Analyses of First and Second Experiment
Analyses of First and Second Experiment
Third Experiment
- Examination of the variance of hyperspectral discriminants over time;
- 20 subjects imaged between 3 days and 5 weeks after the first session;
- The same 200-subject gallery is used.
Third Experiment
- Similar results for images from different times;
- Significant reduction of performance compared to “single day” images.
Third Experiment
The difference in performance can be attributed to changes in subject condition:
- blood flow;
- water concentration;
- blood oxygenation;
- melanin concentration;
Also:
- sensor characteristics.
Questions?
Face Recognition Based on Fitting a 3D Morphable Model
V. Blanz and T. Vetter
Published in IEEE Trans. on PAMI, Vol. 25, No. 9, September 2003.
Introduction
• Color values in a face image depend not only on the person's identity but also on pose and illumination;
• Goal: separate the characteristics of a face (shape and texture) from conditions of image acquisition;
• The conditions may be described consistently across the entire image by a small set of extrinsic parameters;
Introduction
• The algorithm developed combines deformable 3D models with CG simulations of illumination and projection;
• It makes face shape and texture fully independent of extrinsic parameters;
• Given a single image of a person, the algorithm automatically estimates face 3D shape, texture, and all relevant 3D scene parameters.
Model-Based Recognition
Morphable Model
• Vector space constructed such that any “convex combination” of shape and texture vectors Si and Ti describes a human face;
• Continuous changes in model parameters generate a smooth transition that moves the initial surface toward a final one;
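The combination described above can be sketched directly; the stacked-vector layout and function name are illustrative assumptions:

```python
import numpy as np

def morph(shapes, textures, weights):
    """Convex combination of example faces.

    shapes, textures: (m, 3n) arrays, one stacked (x, y, z) / (r, g, b)
    vector per example face; weights: length-m, non-negative, summing
    to 1, so the result stays inside the span of plausible faces.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return w @ shapes, w @ textures

# Toy model: 4 example faces, 100 vertices each (300 stacked values).
rng = np.random.default_rng(2)
S = rng.random((4, 300))
T = rng.random((4, 300))
new_shape, new_texture = morph(S, T, [0.25, 0.25, 0.25, 0.25])
print(new_shape.shape)  # (300,)
```

Varying the weights continuously moves the generated surface smoothly between the example faces, which is exactly the "smooth transition" property the slide refers to.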
Database of 3D Laser Scans
• Laser scans of 200 faces were used to create the morphable model;
Correspondence
• Establish dense point-to-point correspondence between each face and a reference face:
• Generalization of “Optical Flow” to 3D surfaces is used to determine the vector field:
I0(h, φ) ≈ I(h + δh(h, φ), φ + δφ(h, φ)), with flow field v(h, φ) = (δh(h, φ), δφ(h, φ)).
Generalized Optical Flow
To find the face vector field, the following expression must be minimized over a neighborhood R (5×5):

E = Σ(h, φ)∈R ‖I(h + δh, φ + δφ) − I0(h, φ)‖²
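The windowed minimization above can be illustrated with a brute-force block-matching sketch: for one point, search over integer displacements for the one minimizing the 5×5 sum of squared differences. The paper instead solves a linearized system; this exhaustive search is only a stand-in:

```python
import numpy as np

def flow_at(I0, I1, h, phi, max_shift=2, half=2):
    """Flow (dh, dphi) at one point, minimizing the windowed SSD
    sum over R of ||I1(h + dh, phi + dphi) - I0(h, phi)||^2
    by exhaustive search over integer shifts.
    """
    ref = I0[h - half:h + half + 1, phi - half:phi + half + 1]  # 5x5 window
    best, best_err = (0, 0), np.inf
    for dh in range(-max_shift, max_shift + 1):
        for dp in range(-max_shift, max_shift + 1):
            cand = I1[h + dh - half:h + dh + half + 1,
                      phi + dp - half:phi + dp + half + 1]
            err = np.sum((cand - ref) ** 2)
            if err < best_err:
                best, best_err = (dh, dp), err
    return best

# I1 is I0 shifted by (1, 2), so the recovered flow should be (1, 2).
rng = np.random.default_rng(3)
I0 = rng.random((40, 40))
I1 = np.roll(I0, shift=(1, 2), axis=(0, 1))
print(flow_at(I0, I1, 20, 20))  # (1, 2)
```

On the real scans the "pixels" are cylindrical surface samples carrying both radius and color, not plain image intensities.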
Face Vectors
• One scanned face is chosen as reference I0
• Reference shape and texture vectors are defined from conversion of each cylindrical coordinate to Cartesian coordinates:
Face Vectors
• For a novel scan I, the flow field from I0 to I is computed and converted to Cartesian coordinates (S and T).
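The cylindrical-to-Cartesian conversion mentioned above follows the standard formula; the choice of the vertical axis here is an assumption about the scan geometry:

```python
import numpy as np

def cyl_to_cart(h, phi, r):
    """Map cylindrical head-scan coordinates to Cartesian.

    h: height along the cylinder axis, phi: azimuth angle (radians),
    r: radius (distance from the axis). Axis convention assumed:
    y is the vertical (cylinder) axis.
    """
    x = r * np.cos(phi)
    z = r * np.sin(phi)
    y = h
    return x, y, z

x, y, z = cyl_to_cart(h=1.0, phi=0.0, r=2.0)
print(x, y, z)  # 2.0 1.0 0.0
```

Applying this per vertex turns each registered scan into one long stacked shape vector S, ready for PCA.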
Principal Component Analysis
• PCA is performed on Si and Ti
• Shape and texture eigenvectors (si and ti) and variances (σS and σT) are computed:
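A minimal PCA over the stacked vectors can be written via SVD of the centered data matrix; the dimensions are toy values and the helper name is an assumption:

```python
import numpy as np

def pca(X):
    """PCA of row-stacked vectors X (m examples x d dimensions).

    Returns the mean vector, the principal directions (rows of Vt,
    sorted by decreasing variance), and the variance along each one.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    variances = s ** 2 / (X.shape[0] - 1)
    return mean, Vt, variances

rng = np.random.default_rng(4)
S = rng.random((200, 300))       # 200 scans, stacked shape vectors
mean, components, var = pca(S)
print(components.shape)          # (200, 300): at most m directions
```

Running the same routine on the texture vectors Ti gives the eigenvectors ti and variances σT; a face is then encoded by its coefficients along these directions.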
Model Fitting
• Given a novel face image, the shape coefficients αi and texture coefficients βi are found that provide a reconstruction of the 3D shape;
• Pose, camera focal length, and light intensity, color, and direction are automatically found.
Model Fitting
Model Fitting
• Optimization of shape coefficients αi and texture coefficients βi, along with pose angles, translation and focal length parameters, Lambertian light intensity and direction, contrast, and gains and offsets of the color channels (ρ);
• Cost Function:

E(α, β, ρ) = f(EI, EF), minimized over all parameters,

where EI is the image reconstruction error and EF the feature-point error;
• Optimization method: Stochastic Newton Algorithm.
• Similar to the stochastic gradient descent algorithm; makes use of the first derivative of E.
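The stochastic flavor of the optimization can be illustrated with a plain stochastic gradient descent sketch; the paper's Stochastic Newton Algorithm additionally uses curvature information, so this is only a simplified stand-in with made-up names:

```python
import numpy as np

def fit(error_grad, coeffs, n_iters=500, batch=40, lr=0.01, seed=0):
    """Coefficient fitting by stochastic gradient descent.

    error_grad(coeffs, idx) returns the gradient of the fitting error
    evaluated on a random subset idx of image locations, which keeps
    each iteration cheap; a stand-in for the Stochastic Newton step.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        idx = rng.integers(0, 1000, size=batch)  # random sample of pixels
        coeffs = coeffs - lr * error_grad(coeffs, idx)
    return coeffs

# Toy quadratic error with its minimum at coeffs == target.
target = np.array([1.0, -2.0, 0.5])
grad = lambda c, idx: 2.0 * (c - target)  # idx unused in this toy example
fitted = fit(grad, np.zeros(3))
print(np.round(fitted, 2))  # close to target
```

Evaluating the error on a small random subset of locations each step is what keeps fitting a full rendering model tractable.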
Experiments
• Model fitting and identification were tested on PIE (4488 images) and FERET (1940 images) databases;
• None of the faces are in the model database;
• Feature points manually defined:
• Gallery and Query recognition approach.
Results of Model Fitting
Results of Model Fitting
Results of Recognition
• Metrics used for comparison:
– Sum of Mahalanobis distances: dM = ‖c1 − c2‖²
– Cosine of the angle between two vectors: dA = ⟨c1, c2⟩ / (‖c1‖ · ‖c2‖)
– Maximum-Likelihood and LDA
• c is a face, represented by shape and texture coefficients;
dW is superior because it takes fitting inaccuracy into account (different coefficients being estimated for the same subject).
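The two metrics listed above are short one-liners on coefficient vectors; this sketch assumes the coefficients are already rescaled so that the squared norm acts as a Mahalanobis-style distance:

```python
import numpy as np

def d_mahalanobis(c1, c2):
    """Sum of squared coefficient differences; in the variance-rescaled
    PCA space of the model this behaves as a Mahalanobis distance."""
    return np.sum((c1 - c2) ** 2)

def d_angle(c1, c2):
    """Cosine of the angle between two coefficient vectors:
    <c1, c2> / (||c1|| * ||c2||); larger means more similar."""
    return np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(d_mahalanobis(a, b))  # 2.0
print(d_angle(a, b))        # 0.0
```

Note the opposite polarity: dM is a distance (smaller is better) while dA is a similarity (larger is better), so ranking code must sort them in opposite directions.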
Results of Recognition
Results of Recognition
Results of Recognition
Comment
• The fitting process depends on user interaction and takes 4.5 minutes on a 2 GHz Pentium 4.
Questions?