Towards Automated Pose Invariant 3D Dental Biometrics

Xin ZHONG 1, Deping YU 1, Kelvin W C FOONG 2, Terence SIM 3, Yoke San WONG 1 and Ho-lun CHENG 3

1. Mechanical Engineering, National University of Singapore, 117576, [email protected]
2. Faculty of Dentistry, National University of Singapore, 119083
3. School of Computing, National University of Singapore, 117417

978-1-4577-1359-0/11/$26.00 ©2011 IEEE

Abstract

A novel pose invariant 3D dental biometrics framework is proposed in this paper for human identification by matching dental plasters. Using 3D data overcomes a number of key problems that plague 2D methods. As best as we can tell, our study is the first attempt at 3D dental biometrics. The framework includes a multi-scale feature extraction algorithm for extracting pose invariant feature points and a triplet-correspondence algorithm for pose estimation. Preliminary experiments achieve 100% rank-1 accuracy when matching 7 postmortem (PM) samples against 100 ante-mortem (AM) samples. In addition, in a fully automated 3D dental identification test, the accuracy is 71.4% at rank 1 and 100% at rank 4. Compared with existing algorithms, the feature point extraction algorithm and the triplet-correspondence algorithm are faster and more robust for pose estimation, and the retrieval time for a single subject is significantly reduced. Furthermore, we find that the investigated dental features are discriminative and useful for identification. The high accuracy, fast retrieval speed and streamlined identification process suggest that the developed 3D framework is well suited for practical dental biometrics applications. Finally, the limitations and future research directions are discussed.

1. Introduction

Dental biometrics utilizes dental features for victim identification. The use of teeth in postmortem (PM) identification has gained increasing attention over the last half-century. In forensic dentistry, the postmortem identification of a deceased individual is based on dental records when other evidence about the victim (e.g. clothing, jewelry, pocket contents, gender, estimated age, height, build, color of skin, scars, moles, tattoos, abnormalities, DNA, fingerprints, iris, etc.) is not available [1]. Due to the survivability and diversity of dental features, identification by dental records outperforms identification by DNA [2] in severe conditions and mass disasters, because DNA is fragile: its structure is easily altered or destroyed by time, heat, chemicals or other forces. Traditionally, identification based on dental radiograph comparison is labor-intensive and low in efficiency. There are several computer-aided postmortem (PM) identification systems, such as the well-known CAPMI [3] and WinID [4]. However, these systems are text-based record-search tools and do not provide a high level of automation, as the feature extraction, coding and image comparison are still carried out manually. Extensive efforts have been put into research towards automated two-dimensional (2D) radiograph-based dental identification in the last decade. The 2D framework mainly involves four steps [5]: image segmentation [6], feature extraction [6, 7], atlas registration [8, 9] and matching [10, 11].

Fig. 1 Dental plasters: (a) an AM mandibular plaster of a live person; (b) a PM mandibular plaster of a dry skull

However, many unsolved problems and challenges limit the identification capability and accuracy of the 2D methodology, including: 1) radiographs are often blurred, making it very difficult to extract tooth contours accurately and with minimal geometric distortion; moreover, this process is often time-consuming. Chen et al. [12] reported that 14 of the 25 subjects in their database could not be identified due to poor image quality, variation of the dental structure and an insufficient number of AM images for matching. 2) 2D radiographs are projections of 3D teeth. Distortions in tooth shape arising from different imaging angles are often significant, which causes incorrect matching: tooth contours extracted from genuine samples (paired PM and AM samples of a victim) cannot be matched together. In contrast, 3D dental identification based on digitized dental plasters is able to overcome the aforementioned limitations because 1) laser-scanned 3D dental plasters are high-resolution surface data; and 2) projection from 3D to 2D is not required, so no distortion of the tooth shape occurs. The problem arising from different imaging angles in 3D is what we call the pose variation problem, which we aim to solve in this paper.

Trends in 3D dental biometrics There has been much interest and development in the investigation of 3D dental biometrics. With the development of real-time scanning and of 3D reconstruction technologies from 2D images or video sequences, the acquisition of 3D models has become effortless and fast, and 3D biometrics is receiving increasingly more attention than 2D biometrics. For instance, 3D face and ear recognition [13, 14] have shown a promising future. In addition, there are emerging dental research works on 3D reconstruction of teeth from CT images [15] and on 3D automatic teeth segmentation for dental biometrics [16].

Therefore, the present study aims to investigate a 3D identification scheme in dental biometrics by matching dental plasters, such as the two shown in Fig. 1.

Our paper makes the following contributions:
1. We propose a novel 3D pose invariant dental biometrics framework. As best as we can tell, ours is the first attempt at 3D dental biometrics; all existing works use only 2D images. It overcomes a number of key hurdles in traditional 2D methods, thus making our method more useful.
2. Our method is fast, can be fully automatic, and thus can be used for rapid identification of large groups of people. A 2D system takes about 1.7 hours to retrieve one subject from 33 subjects and 7 hours to retrieve from 133 subjects (PC with a 2.99 GHz Pentium 4 processor) [17]. In contrast, our method takes only 25 minutes on average to retrieve one subject from 100 subjects (PC with a Core 2 Duo CPU, 2.33 GHz, 1.96 GB RAM).
3. Our method is faster and more robust to pose variations, as shown in Experiments III and IV.
4. The dental arch (the curving structure formed by the teeth in their normal position), the tooth crown shape and the arrangement of teeth (neighboring tooth positions) are used directly, without projection to 2D, in our study. We discover that these dental features are discriminative enough to establish potential identities among individuals without tedious single-tooth segmentation and contour extraction. In addition, our method is more robust because the dental arch can still be used for identification even when individual teeth have been damaged.

2. System Approach

An overview of the 3D dental biometrics framework is shown in Fig. 2.

Ante-mortem (AM) database The AM database comprises 100 mandibular teeth samples scanned using a Minolta VIVID 900 surface laser scanner (Konica-Minolta Corporation, Osaka, Japan).

Postmortem (PM) database The PM samples used to match against the AM samples consist of 7 plasters of mandibular teeth, separately prepared and scanned by a different investigator using the same scanner, without knowledge of the previous scanning parameters.

The initial orientations are seldom the same when genuine samples are prepared and scanned by different investigators. In addition, even the appearances of genuine samples differ, as can be seen in Fig. 3: the PM sample in Fig. 3(b) looks smooth compared with its AM sample (Fig. 3(a)), e.g. some holes are present in the AM sample. The reasons for these differences could be 1) the physical dental plasters are made

by different investigators; 2) the different scanning resolutions; and 3) handling errors during scanning.

Fig. 2 An overview of the 3D dental biometrics framework: AM and PM plaster casts of the dentitions are digitized, decimated and segmented by a PCA plane (manually or automatically for the PM samples, Experiments I and II), feature points are detected, correspondences are established (algorithm comparison in Experiments III and IV), and fine matching yields a matching score and rank list

To facilitate efficient and accurate matching of corresponding AM and PM samples, preprocessing of the digitized samples is required to reduce their size. The preprocessing comprises three operations: 1) decimation of both AM and PM samples; 2) PCA-plane segmentation of the 100 AM samples; and 3) manual/auto PCA-plane segmentation of the 7 PM samples (two experiments).

Decimation of both AM and PM samples Each digitized sample is 14~30 MB, comprising 340k~400k triangles, and so is decimated by 90% to achieve higher computational speed; only 10% of the original mesh is used for identification in our present study. We want to show that a competitive accuracy can be achieved by our proposed approaches even after such large-scale decimation. The decimation algorithm in [18] was utilized. The decimated samples are shown in Fig. 3.

Fig. 3 Difference between genuine samples after decimation: (a) AM sample of victim I; (b) PM sample of victim I

PCA-plane segmentation of the 100 AM samples For a large AM database, automatic segmentation is necessary. As best as we know, no fully automatic 3D segmentation method achieves a promising accuracy; this is the most tedious and time-consuming step in both 3D and 2D dental biometrics. Some researchers are working towards this goal in orthodontics planning studies. Kondo et al. [19] proposed a highly automatic tooth segmentation method in which the dental arch is used to calculate a panoramic range image; however, four reference points need to be manually specified by the user at the beginning. Kronfeld et al. [20] presented a highly automatic method for separating teeth from the mesh model by applying an active contour algorithm; however, they reported that manual adjustment is still needed when the initial snakes are not appropriately located at the transition between teeth and gum. Both methods fail where the boundary between tooth and gum is very smooth, or in severe malocclusion cases. In this study, instead of single-tooth segmentation, a fast automatic processing method is proposed for a large AM database to eliminate the bottom part of the plaster, which does not contain tooth information. The Principal Component Analysis (PCA) plane passing through the centroid of the plaster was calculated for each AM plaster, as shown in Fig. 4(a). A dental plaster is segmented by its PCA plane into a crown part and a bottom part, as illustrated in Fig. 4(b) and Fig. 4(c) respectively.
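The PCA-plane step can be sketched in a few lines. The following is an illustrative NumPy version (our own sketch, not the authors' implementation; the function name and array layout are assumptions): the plane normal is taken as the direction of least variance, and vertices are split by the sign of their signed distance to the plane.

```python
import numpy as np

def pca_plane_segment(vertices):
    """Split a vertex set by the PCA plane through its centroid.

    The plane normal is the principal direction of least variance
    (third eigenvector of the covariance matrix), so the plane roughly
    separates the crown part from the bottom of the plaster.
    Returns two boolean masks, one per side of the plane.
    """
    v = np.asarray(vertices, dtype=float)
    centered = v - v.mean(axis=0)            # plane passes through the centroid
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues
    normal = eigvecs[:, 0]                   # direction of least variance
    side = centered @ normal                 # signed distance to the plane
    return side >= 0.0, side < 0.0
```

Which of the two masks is the crown depends on the arbitrary sign of the eigenvector; in practice the side containing the tooth geometry would be selected.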

Manual/auto segmentation of the 7 PM samples The gum and teeth of the PM samples are to be exactly segmented. This is performed manually because segmentation of a PM sample that still carries tooth gum differs from segmentation of the mandibular teeth of a human skull, as shown in Fig. 5(a). Most 3D segmentation methods detect the interstice between gum and teeth (the gingival margin) by computing the points of minimum curvature on the mesh. If this minima rule is applied to the mandibular teeth of a skull as shown in Fig. 5(a), the dashed line in Fig. 5(b) will be detected, which is the interstice between the teeth and the alveolar bone, instead of the expected solid line, which is the real interstice between teeth and gum (the gingival margin) shown in Fig. 5(b). Thus a portion of the tooth root, which does not exist in the corresponding AM plaster sample, would be included in the PM sample and would produce errors in the matching process. According to forensic dentists' experience, gum begins to decay within two or three days after death, so it is quite common to see PM samples without gums. For these reasons, manual segmentation is implemented to segment the mandibular teeth of skulls along the gingival margin. The segmented teeth are shown in Fig. 5(c). In addition, we also test a fully automatic identification process in Experiment II by applying the same PCA-plane segmentation method to the 7 PM samples.

Fig. 4 PCA-plane segmentation of an AM sample: (a) PCA plane; (b) segmented tooth crown; (c) bottom part of the dental plaster

Fig. 5 Manual segmentation of a human skull: (a) a human skull; (b) the expected interstices (solid line) and the interstices obtained by the minimum curvature rule (dashed line); (c) a set of manually segmented mandibular teeth of a human skull

Feature point detection The principle of key (salient) feature point detection is well established in 2D image processing [21, 22]. During the last decade, several studies have extended it to the 3D domain [23-25]. Inspired by these studies, a multi-scale feature point detection algorithm is presented to extract feature points on digitized dental surfaces. Fig. 6 shows the differences between the existing work [23-25] and this work. The main steps are given below.

The first step of feature point detection is to compute multi-scale representations of the dental mesh surface by applying N Gaussian filters to it. For each vertex v in the surface model, the neighborhood N(v, σ) is the set of points x_i within distance σ. As the Euclidean distance gives better results than the geodesic distance [23],

N(v, σ) = { x : ||x − v|| < σ, x a vertex }   (1)

is used for calculating the neighborhood points. A Gaussian-weighted representation G(v, σ) of the surface model is obtained using

G(v, σ) = [ Σ_{x_i ∈ N(v, 2σ)} x_i exp(−||x_i − v||² / (2σ²)) ] / [ Σ_{x_i ∈ N(v, 2σ)} exp(−||x_i − v||² / (2σ²)) ].   (2)
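As an illustration of Eqs. (1)-(2), the Gaussian-weighted representation can be computed directly over a vertex array. This brute-force NumPy sketch is our own (the paper does not specify its implementation); a k-d tree would be used for the neighborhood query in practice.

```python
import numpy as np

def gaussian_representation(vertices, sigma):
    """G(v, sigma) of Eqs. (1)-(2): for each vertex v, the Gaussian-
    weighted mean of the vertices x_i in the neighborhood N(v, 2*sigma),
    i.e. those within Euclidean distance 2*sigma of v."""
    v = np.asarray(vertices, dtype=float)
    out = np.empty_like(v)
    for i in range(len(v)):
        d2 = np.sum((v - v[i]) ** 2, axis=1)           # squared distances to v
        w = np.where(d2 < (2.0 * sigma) ** 2,          # restrict to N(v, 2*sigma)
                     np.exp(-d2 / (2.0 * sigma ** 2)), 0.0)
        out[i] = (w[:, None] * v).sum(axis=0) / w.sum()
    return out
```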

The second step of feature point detection is the saliency map computation of the dental mesh surface. To compute the mesh saliency, the Difference-of-Gaussian (DoG) for each vertex v is defined as

DoG_i(v) = G(v, σ_i) − G(v, kσ_i),   (3)

the difference between its Gaussian-weighted representations at scale σ_i and at scale kσ_i. DoG(v) is actually a 3D vector that denotes the displacement between different scales. Six scales were used, σ_i ∈ {1ε, 2ε, 3ε, 4ε, 5ε, 6ε}, where ε is 0.3% of the length of the diagonal of the bounding box of the dental surface model. In order to promote the small number of distinctive high peaks while suppressing the large number of similar high peaks in the saliency map, each saliency map is normalized using the non-linear suppression operator S proposed by Itti et al. [21].

The third step is boundary effect removal. The following algorithm is applied: 1) search for the boundary vertices; 2) search for the vertices within distance 2σ_6 of the boundary vertices; 3) set the saliency of all these vertices to zero.

The fourth step is feature point extraction. The saliency map at each scale is processed such that each saliency value is set to zero unless it is larger than the saliency of 85% of its neighboring vertices. The final saliency map for the surface model is then obtained by adding the saliency maps at all six scales, followed by a normalization process. Finally, a vertex whose saliency value is a local maximum and larger than 60% of the global maximum is detected as a salient point.
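The steps above can be condensed into a simplified sketch. This is our own illustrative version with several stated simplifications: k = 2 is assumed in Eq. (3); Itti's non-linear suppression, the per-scale 85% rule and the boundary removal are omitted; and the local-maximum test uses a plain radius of 2ε.

```python
import numpy as np

def _gaussian_rep(v, sigma):
    # Gaussian-weighted representation of Eq. (2), brute force.
    out = np.empty_like(v)
    for i in range(len(v)):
        d2 = np.sum((v - v[i]) ** 2, axis=1)
        w = np.where(d2 < (2.0 * sigma) ** 2,
                     np.exp(-d2 / (2.0 * sigma ** 2)), 0.0)
        out[i] = (w[:, None] * v).sum(axis=0) / w.sum()
    return out

def detect_feature_points(vertices, eps):
    """Sum the DoG magnitudes of Eq. (3) over six scales s_i = i*eps,
    then keep vertices that are local maxima (radius 2*eps) above 60%
    of the global maximum saliency."""
    v = np.asarray(vertices, dtype=float)
    saliency = np.zeros(len(v))
    for i in range(1, 7):
        sigma = i * eps
        dog = _gaussian_rep(v, sigma) - _gaussian_rep(v, 2.0 * sigma)
        saliency += np.linalg.norm(dog, axis=1)
    feature_idx = []
    for i in range(len(v)):
        d2 = np.sum((v - v[i]) ** 2, axis=1)
        neigh = (d2 < (2.0 * eps) ** 2) & (d2 > 0)
        if saliency[i] >= np.max(saliency[neigh], initial=0.0) and \
           saliency[i] > 0.6 * saliency.max():
            feature_idx.append(i)
    return feature_idx, saliency
```

On a real dental mesh, the omitted suppression and boundary steps matter; without them, boundary vertices tend to dominate, which is exactly the effect Fig. 6 illustrates.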

As shown in Fig. 6(a), edge points are detected as feature points by the existing work [23-25]. More feature points generally require more computational time for finding correspondences at the next stage, and the edge points are not feature points of the tooth shape. The feature points detected by this work, with the edge effect removed, are shown in Fig. 6(b). Later, we compare the number of extracted points and the total time for matching genuine and imposter samples: matching one PM sample to its genuine AM sample is about six times faster using the feature point detection algorithm of this work. The results are shown in Experiment III in the next section.

Fig. 6 Feature points on dental meshes: (a) existing work; (b) this work

Correspondence Let P' and Q' be the feature points extracted from the PM dental surface and the AM dental surface respectively. For each feature point p_i ∈ P' and q_i ∈ Q', the respective saliency values S(p_i) and S(q_i) were already calculated in the feature point detection stage. The following triplet-constraint algorithm is presented to find the best transformation. This step finds three feature points in both the PM and AM samples with similar saliency values and similar relative positions in Euclidean space for correspondence.

1) For any feature point p ∈ P', select the salient points q as potential correspondences if |S(p) − S(q)| < ε, where ε is a threshold value, set to 0.1 in our tests. A set of potential correspondences is thus determined for each feature point, designated C(p_1), …, C(p_n).
2) For each pair of feature points (p_i, p_j), choose any q_i ∈ C(p_i), q_j ∈ C(p_j) and take the point pair (q_i, q_j) that minimizes the distance root mean squared (dRMS) error

dRMS²(P', Q') = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} ( ||p_i − p_j|| − ||q_i − q_j|| )²   (4)

as the associated correspondence pair, resulting in a set E_2 of two-point correspondences. E_2 is then sorted in ascending order of dRMS error, and any e ∈ E_2 whose dRMS error is larger than a threshold ε_dRMS is discarded.
3) For each two-point correspondence e ∈ E_2, add another potential correspondence pair (p_k, q_k) that minimizes the dRMS error. In this way a set E_3 of triplet-point correspondences is formed. E_3 is then sorted in ascending order of dRMS error, and any e ∈ E_3 whose dRMS error is larger than the threshold ε_dRMS is discarded.
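To make the dRMS criterion concrete, the following sketch (our own brute-force version; the paper builds triplets incrementally through E_2 and E_3 rather than enumerating them) filters candidates by saliency and evaluates Eq. (4) on candidate triplets directly:

```python
import itertools
import numpy as np

def drms(p, q):
    """Distance root-mean-squared error, Eq. (4): compares all intra-set
    pairwise distances, so it is invariant to rigid motion of either set."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    dp = np.linalg.norm(p[:, None] - p[None, :], axis=2)
    dq = np.linalg.norm(q[:, None] - q[None, :], axis=2)
    return np.sqrt(np.sum((dp - dq) ** 2)) / len(p)

def best_triplet(P, SP, Q, SQ, eps=0.1):
    """Return the PM/AM index triplets with minimal dRMS error, keeping
    only candidates whose saliency values differ by less than eps."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cand = [[j for j in range(len(Q)) if abs(SP[i] - SQ[j]) < eps]
            for i in range(len(P))]
    best, best_err = None, np.inf
    for tri in itertools.combinations(range(len(P)), 3):
        for js in itertools.product(*(cand[i] for i in tri)):
            if len(set(js)) < 3:          # the three matches must be distinct
                continue
            e = drms(P[list(tri)], Q[list(js)])
            if e < best_err:
                best, best_err = (tri, js), e
    return best, best_err
```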

For each triplet-point correspondence in E_3, a rotation and translation matrix can be obtained by the Singular Value Decomposition (SVD) method, and the corresponding coordinate root mean square (cRMS) error is then computed using

cRMS²(P, Q) = min_{R,t} (1/n) Σ_{i=1}^{n} ||R p_i + t − q_i||².   (5)

Finally, E_3 is sorted in order of ascending cRMS error, and the first triplet-point correspondence in E_3, the one with minimal cRMS error, is taken as the best triplet-point correspondence. Fig. 7 shows the correspondence in genuine samples. We compare the existing work [26] with this work in Experiment IV in the next section and show that this work is more robust with regard to pose invariance.

Fig. 7 Triplet-point correspondence in genuine samples
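For a fixed triplet correspondence, the minimizing R and t in Eq. (5) have a closed form via SVD (the Kabsch method). An illustrative NumPy sketch (function names are our own):

```python
import numpy as np

def rigid_transform_svd(p, q):
    """Least-squares rotation R and translation t with R @ p_i + t ≈ q_i
    (the SVD method used to evaluate Eq. 5)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation, det(R) = +1
    t = cq - R @ cp
    return R, t

def crms(p, q):
    """Coordinate RMS error after the optimal rigid alignment (Eq. 5)."""
    R, t = rigid_transform_svd(p, q)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(np.mean(np.sum((p @ R.T + t - q) ** 2, axis=1)))
```

With the optimal alignment available in closed form, cRMS ranks triplet candidates by how well a rigid motion can superimpose them.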

Fine matching With the initial position estimated by feature point correspondence, fine comparisons are carried out using the iterative closest point (ICP) algorithm, first developed by Besl and McKay [27] and by Chen and Medioni [28]. The results of genuine matching and imposter matching are shown in Fig. 8. The comparison shows that genuine samples require fewer iterations and that their matching error is much smaller.

Fig. 8 Fine matching: (a) genuine samples; (b) imposter samples
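A minimal point-to-point ICP iteration, in the spirit of Besl and McKay [27], can be sketched as follows (our own illustrative version: brute-force nearest neighbors and no outlier rejection, both of which real implementations replace):

```python
import numpy as np

def icp(src, dst, iters=30, tol=1e-9):
    """Minimal point-to-point ICP: repeatedly pair each source point
    with its nearest destination point, then solve the rigid alignment
    in closed form by SVD.  Returns the aligned source and final mean
    squared nearest-neighbor error."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbors
        d2 = np.sum((src[:, None] - dst[None, :]) ** 2, axis=2)
        nn = dst[np.argmin(d2, axis=1)]
        err = float(np.mean(np.min(d2, axis=1)))
        if prev_err - err < tol:          # converged
            break
        prev_err = err
        # closed-form rigid alignment of src onto its paired points
        cs, cn = src.mean(axis=0), nn.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (nn - cn))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        src = (src - cs) @ R.T + cn
    return src, err
```

Because ICP only converges locally, the triplet-correspondence stage above is what supplies the pose initialization that makes this step reliable.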

3. Experimental Results

Towards the development of an automatic 3D dental identification system, an automatic segmentation method for PM samples is also desirable. In our preliminary study, it is interesting to investigate the identification accuracy when the whole process is automated. Therefore, two experiments are designed.

Experiment I Identification process with human interaction in PM segmentation Experimental results show fully correct priority ranking based on matching the 7 manually segmented PM samples to the database of 100 AM samples: at rank 1, 100% accuracy was achieved. The retrieval performance curve, shown in Fig. 9, is used to evaluate the accuracy of the experiment. The x axis represents the rank of the retrieved subjects; since each of the 7 PM samples is identified against 100 AM samples, each PM sample has 100 possible ranks. The y axis indicates the cumulative number of correct retrievals at each rank.
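The retrieval performance (cumulative match characteristic) curve can be computed from the rank at which each PM query's correct AM subject is retrieved. A small self-contained sketch with hypothetical ranks (not the paper's data, though chosen to be consistent with the accuracies reported for Experiment II):

```python
def cmc_curve(ranks, gallery_size):
    """Cumulative match characteristic: for each rank k (1-indexed),
    the fraction of queries whose correct match is at rank <= k."""
    n = len(ranks)
    return [sum(r <= k for r in ranks) / n for k in range(1, gallery_size + 1)]

# Hypothetical ranks for seven queries against a gallery of 100 subjects:
# five retrieved at rank 1, one at rank 2, one at rank 4.
curve = cmc_curve([1, 1, 1, 1, 1, 2, 4], 100)
print(round(curve[0], 3), round(curve[1], 3), curve[3])   # → 0.714 0.857 1.0
```

With these ranks the sketch reproduces the 71.4% rank-1, 85.7% rank-2 and 100% rank-4 figures reported below for the fully automated experiment.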

Experiment II Fully automated identification process without human interaction In Experiment I, PM segmentation is the only manual part of the whole identification process. In Experiment II, the same 7 PM samples are segmented using the same PCA-plane segmentation method as the AM samples; that is, a portion of gum is not exactly segmented away and remains attached to the teeth. Undoubtedly, the gum and plaster portions introduce errors, but the identification process becomes fully automated, and we thereby test the identification accuracy under a rough segmentation condition. The results are shown in Fig. 9. Five out of seven samples achieved rank-1 accuracy (5/7 = 71.4%); 6 out of 7 achieved rank-2 accuracy (6/7 = 85.7%); and at rank 4, 100% accuracy was achieved.

Experiment III Feature point extraction algorithm comparison We compare the number of extracted points and the computational time for matching two samples between the existing algorithm and this work, using the same computer; the initial positions of the two samples are the same. We test both genuine and imposter samples. The results are shown in Table 1. All timings in this paper include the time (in seconds) for model importing and visualization. Using this work, the total computational time for matching one pair of samples is reduced to about 1/6~1/5 (139/22 = 6.3; 151/29 = 5.2).

Experiment IV Correspondence algorithm comparison regarding pose invariance We compare the similar existing work, the greedy algorithm of [26], with this work and show that ours is more robust to pose variations. The results are shown in Table 2. The rotation variation is designed to simulate the possible real rotations in scanning: there is a base plane (almost parallel to the principal plane calculated in Fig. 4), and the plaster is placed on this plane with the teeth facing the scanner, so most rotation variation is around the normal of this plane. We therefore increased the rotation by 30 degrees at a time, up to 360 degrees, and we also tested imposter samples. The results show that this work always gave the correct matching, while the existing work [26] failed in most cases. The reason is that the previous work was developed for general shapes, such as animal shapes, which have visually salient points at ear tips, mouths, claws and nose tips; it fails on dental meshes with their highly similar convex, concave and saddle points.

We have run further experiments to show that our triplet-constraint algorithm is indeed robust: we injected destructive noise, and our algorithm was still able to correctly locate the corresponding points, even when significant noise was added. Due to page limitations, we are unable to give further details.

Fig. 9 Comparison of identification accuracy between the user-intervention process (Experiment I) and the fully automated process (Experiment II)

Table 1 Feature extraction algorithm comparison (Experiment III)

                                            Number of points        Total time (s)
Genuine samples    Existing work [23-25]    AM I: 118, PM I: 103    139
                   This work                AM I: 48,  PM I: 30      22
Imposter samples   Existing work [23-25]    AM II: 139, PM I: 103   151
                   This work                AM II: 74,  PM I: 30     29

Table 2 Correspondence algorithm comparison (Experiment IV): alignment results of the existing work [26] and this work under rotations of 30, 60, 90 and 180 degrees (the resulting poses are shown as figures in the original table, omitted here)

4. Conclusions and Future Work

A novel pose invariant 3D dental biometrics framework has been proposed in this paper. As best as we know, our work is the first attempt at 3D dental biometrics; all existing works use only 2D images. A feature point extraction algorithm and a triplet-correspondence algorithm are developed for pose estimation of dental meshes. Experimental results show that the developed algorithms are faster and more robust than the existing ones for pose estimation. We also streamline the identification process by using 3D dental features directly, avoiding tedious single-tooth segmentation and contour extraction. We find that the discriminability of these dental features is sufficient to establish potential identities: 100% rank-1 accuracy is achieved with user interaction in segmentation when retrieving 7 subjects from 100 subjects, and 71.4% rank-1 accuracy is achieved with a fully automated identification process. The single-subject retrieval time has also been significantly reduced compared with the 2D identification framework. There is no 3D dental biometrics benchmark database, and the 2D database is not publicly available [29]; although the comparisons with 2D are therefore not based on the same dataset, our preliminary work aims to provide a new vision for dental biometrics through a 3D identification framework that overcomes the limitations of previous 2D work while streamlining the whole identification process. The retrieval efficiency, accuracy and capability demonstrate the feasibility of the proposed 3D framework.

However, there are some limitations. The data used in this work are dental plasters, which only capture the tooth crown shapes; the tooth root and dental work (e.g. tooth fillings), which are also useful for dental identification, are not available. The sample size is still small. Therefore, our future work could include 1) sample acquisition from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) images; 2) further testing on a larger database; and 3) the development of other efficient geometrically invariant feature extraction and correspondence algorithms.

5. References

[1] D.R. Senn, P.G. Stimson, Forensic Dentistry, 2nd ed., CRC Press, Taylor & Francis Group, 2010.
[2] Dental records beat DNA in tsunami IDs, New Scientist, 2516:12 (2005).
[3] L. Lorton, M. Rethman, R. Friedman, The computer-assisted postmortem identification (CAPMI) system: a computer-based identification program, Journal of Forensic Science, 33 (1988) 977-984.
[4] J. McGivney, WinID3 dental identification system, 2006.
[5] H. Chen, A.K. Jain, Automatic forensic dental identification, in: Handbook of Biometrics, 2008, pp. 231-251.
[6] A.K. Jain, H. Chen, Matching of dental X-ray images for human identification, Pattern Recognition, 37 (2004) 1519-1532.
[7] H. Chen, A.K. Jain, Tooth contour extraction for matching dental radiographs, in: Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004, pp. 522-525.
[8] M.H. Mahoor, M. Abdel-Mottaleb, Automatic classification of teeth in bitewing dental images, in: International Conference on Image Processing (ICIP '04), 2004, pp. 3475-3478.
[9] P.L. Lin, Y.H. Lai, P.W. Huang, An effective classification and numbering system for dental bitewing radiographs using teeth region and contour information, Pattern Recognition, 43 (2010) 1380-1392.
[10] O. Nomir, M. Abdel-Mottaleb, Fusion of matching algorithms for human identification using dental X-ray radiographs, IEEE Transactions on Information Forensics and Security, 3 (2008) 223-233.
[11] O. Nomir, M. Abdel-Mottaleb, Hierarchical contour matching for dental X-ray radiographs, Pattern Recognition, 41 (2008) 130-138.
[12] H. Chen, A.K. Jain, Dental biometrics: alignment and matching of dental radiographs, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27 (2005) 1319-1326.
[13] X. Lu, A.K. Jain, Deformation modeling for robust 3D face matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, 30 (2008) 1346-1357.
[14] H. Chen, B. Bhanu, Efficient recognition of highly similar 3D objects in range images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31 (2009) 172-179.
[15] S. Tohnak, A.J.H. Mehnert, M. Mahoney, S. Crozier, Synthesizing dental radiographs for human identification, Journal of Dental Research, 86 (2007) 1057-1062.
[16] D. Mairaj, S.D. Wolthusen, C. Busch, Teeth segmentation and feature extraction for odontological biometrics, in: Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2010, pp. 323-328.
[17] H. Chen, Automatic Forensic Identification Based on Dental Radiographs, PhD Thesis, Department of Computer Science and Engineering, Michigan State University, 2007.
[18] W.J. Schroeder, J.A. Zarge, W.E. Lorensen, Decimation of triangle meshes, SIGGRAPH Computer Graphics, 26 (1992) 65-70.
[19] T. Kondo, S.H. Ong, K.W.C. Foong, Tooth segmentation of dental study models using range images, IEEE Transactions on Medical Imaging, 23 (2004) 350-362.
[20] T. Kronfeld, D. Brunner, G. Brunnett, Snake-based segmentation of teeth from virtual dental casts, Computer-Aided Design and Applications, 7 (2010) 221-233.
[21] L. Itti, C. Koch, E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 20 (1998) 1254-1259.
[22] D.G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, 60 (2004) 91-110.
[23] C. Lee, A. Varshney, D. Jacobs, Mesh saliency, in: ACM SIGGRAPH 2005 Papers, ACM, 2005, pp. 659-666.
[24] Y.-S. Liu, M. Liu, D. Kihara, K. Ramani, Salient critical points for meshes, in: ACM Symposium on Solid and Physical Modeling (SPM 2007), ACM, Beijing, China, 2007, pp. 277-282.
[25] U. Castellani, M. Cristani, S. Fantoni, V. Murino, Sparse points matching by combining 3D mesh saliency with statistical descriptors, Computer Graphics Forum, 27 (2008) 643-652.
[26] N. Gelfand, N.J. Mitra, L.J. Guibas, H. Pottmann, Robust global registration, in: Proceedings of the Third Eurographics Symposium on Geometry Processing, Eurographics Association, Vienna, Austria, 2005, p. 197.
[27] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14 (1992) 239-256.
[28] Y. Chen, G. Medioni, Object modeling by registration of multiple range images, in: Proceedings of the 1991 IEEE International Conference on Robotics and Automation, vol. 3, 1991, pp. 2724-2729.
[29] CJIS Division, ADIS: digitized radiographic images (database), August 2002.