International Journal of Legal Medicine, ISSN 0937-9827, Volume 129, Number 3
Int J Legal Med (2015) 129:569–581. DOI 10.1007/s00414-014-1074-1

Ground truth data generation for skull–face overlay





TECHNICAL NOTE


O. Ibáñez & F. Cavalli & B. R. Campomanes-Álvarez & C. Campomanes-Álvarez & A. Valsecchi & M. I. Huete

Received: 29 April 2014 / Accepted: 28 August 2014 / Published online: 30 September 2014
© Springer-Verlag Berlin Heidelberg 2014

Abstract Objective and unbiased validation studies over a significant number of cases are required to get a more solid picture of craniofacial superimposition reliability. It will not be possible to compare the performance of existing and upcoming methods for craniofacial superimposition without a common forensic database available to the research community. Skull–face overlay is a key task within craniofacial superimposition that has a direct influence on the subsequent task devoted to evaluating the skull–face relationships. In this work, we present the procedure used to create such a dataset for the first time. We have also created a database with 19 skull–face overlay cases, for which we are trying to overcome the legal issues that would allow us to make it public. The quantitative analysis made in the segmentation and registration stages, together with the visual assessment of the 19 face-to-face overlays, allows us to conclude that the results can be considered a gold standard. With such a ground truth dataset, a new horizon opens for the development of new automatic methods whose performance can now be objectively measured and compared against previous and future proposals. Additionally, other uses are expected to be explored to better understand the visual evaluation process of craniofacial relationships in craniofacial identification. It could also be very useful as a starting point for further studies on the prediction of the resulting facial morphology after corrective or reconstructive interventions in maxillofacial surgery.

Keywords Forensic anthropology · Craniofacial superimposition · Computer-aided craniofacial superimposition · Skull–face overlay · Ground truth · Craniofacial relationships

    Introduction

Anthropologists have focused their attention on determining the identity of a missing person when skeletal information becomes the last resort for the forensic assessment [1, 2]. Craniofacial superimposition (CFS) [3], one of the approaches in craniofacial identification [4, 5], involves the superimposition of a skull (or a skull model) with a number of antemortem images of an individual and the analysis of their morphological correspondence.

Regardless of the technological means considered, we distinguished three different stages for the whole CFS process in [6]: (i) the first stage involves the acquisition and processing of the skull (or skull 3D model) and the antemortem facial images, together with the craniometric and facial landmark location; (ii) the second stage is the skull–face overlay (SFO), which focuses on achieving the best possible superimposition of the skull and a single antemortem image of the missing person. This process is repeated for each available photograph, obtaining different overlays. SFO thus corresponds to what has traditionally been known as the adjustment of the skull size and its orientation with respect to the facial photograph [3, 7]; and (iii) the third stage accomplishes the decision-making.

O. Ibáñez (*) · C. Campomanes-Álvarez
Department of Computer Science and Artificial Intelligence, University of Granada, 18014 Granada, Spain
e-mail: [email protected]

O. Ibáñez · B. R. Campomanes-Álvarez · A. Valsecchi
European Centre for Soft Computing, 33600 Mieres, Asturias, Spain

F. Cavalli
Research Unit of Paleoradiology and Allied Sciences, Ospedali Riuniti di Trieste, Trieste, Italy

M. I. Huete
Physical Anthropology Laboratory, University of Granada, 18012 Granada, Spain


Based on the superimpositions achieved in the SFO stage, the degree of support for the skull and the face belonging to the same person or not (exclusion) is determined by considering the different factors studying the relationship between the skull and the face: the morphological correlation, the matching between the corresponding landmarks according to the soft tissue depth, and the consistency between asymmetries.

Since the first documented use of CFS for identification purposes [8], the technique has been under continuous development. Although the foundations of the CFS method were laid by the end of the nineteenth century [9, 10], the associated procedures evolved as new technology became available. Therefore, three main approaches have been developed: photographic, video, and computer-aided superimposition [3, 6, 11].

The first superimpositions involved acquiring the negative of the original facial photograph and marking the facial landmarks on it. The same task was done with a photograph of the skull. Then, both negatives were overlapped and the positive was developed. This procedure was called photographic superimposition [3]. Many authors further developed photographic superimposition techniques to improve the scale and the orientation of the skull and the facial images [12–14].

Video superimposition was introduced in 1976 [15]. Instead of marking photographs, tracings, or drawings in order to properly superimpose the skull and the face, video cameras provide a live image of the object (skull, photograph) in focus. These systems present an enormous advantage over the former photographic superimposition procedure by minimizing several problems associated with it. The video superimposition technique continued evolving [16–18], and it became the most broadly employed method.

The popularization, huge development, and larger number of possibilities offered by computers turned them into the next generation of CFS systems. Two different system categories arise within this group [6]. Nonautomatic computer-aided methods use the computer for storing and/or visualizing the data [7, 11, 19, 20], but they do not exploit its computational capacity to automate human tasks. Automatic computer-aided methods use a computer program to accomplish some CFS subtask itself [21, 22].

Computer-aided methods are attracting increasing attention from both practitioners and researchers [23]. In general, they are considered the most promising approaches and, in particular, automatic methods represent the most appropriate tool to increase the objectivity and reliability of the CFS technique.

Numerous factors have an impact at the various stages of the CFS method and can potentially introduce biases and affect the outcome and reliability of the superimposition. Many of these difficulties can be tackled with computer-aided automatic solutions, enhancing CFS reliability. In many cases, the only available photograph is distorted or of poor quality; computer vision algorithms can be implemented to enhance the quality of such photographs. Modern superimposition techniques, aside from high-quality photographs, also require accurate 3D models of the skull, which can be acquired with medical technology equipment such as a CT scanner or a laser range scanner in combination with appropriate software [6].

The most time-consuming and challenging step in CFS is positioning the skull to match the orientation and pose seen in a target photograph [16, 24]. To perform the overlay of the face and skull, most methods rely on the matching of a number of cranial and facial landmarks. For nonautomatic systems, whether photo-, video-, or computer-based, this process is usually slow and conducted by trial and error. Adjusting the size and orienting the images can take hours to arrive at a good overlay. The works developed by authors such as [22, 25–27] serve as examples of how computer algorithms can automate SFO and accommodate the uncertainty/fuzziness of some facial landmarks [28], improving CFS reliability by reducing the subjectivity and time inherent to nonautomated methods. The success of the final identification strongly relies on an accurate superimposition, since this is the step prior to analyzing the anatomical correspondence between the face and the skull. Thus, reaching an accurate overlay is of paramount importance before continuing with the final decision-making stage. However, there is no single objective and reliable method in the literature to determine whether the achieved superimposition is correct or not. This is because we are trying to overlay two different objects (a skull and a face), and this in itself introduces an inherent uncertainty [26].

The latter fact affects both nonautomatic and automatic methods, independently of the technological means employed (photo, video, or computer). In the case of computer-aided automatic methods, all existing SFO approaches are evaluated using the distance between pairs of corresponding landmarks [21, 22, 25–27]. This is clearly an unsatisfactory evaluation, since it considers neither the depth of the soft tissue nor the morphological matching of the face and the skull. Thus, visual evaluation represents the only meaningful available resource, despite being a subjective and expert-dependent procedure.

In the current contribution, we aim to address this important gap. We have created a ground truth dataset that will make it possible to measure and compare the performance of automatic SFO methods following an objective and reliable procedure to assess the SFO achieved. With such a ground truth dataset, a new horizon opens for the development of novel automatic methods whose performance can now be objectively measured and compared against previous and future proposals.

    Material and methods

With the goal of achieving a number of ground truth SFOs, frontal and lateral photographs were taken of patients whose heads had just been scanned with cone beam computed tomography (CBCT); further details about the data acquisition are provided in the Data acquisition subsection.


The DICOM images resulting from the CBCT machine were automatically processed to obtain the corresponding 3D face and 3D skull models (see the Automatic segmentation of cone beam computed tomography data subsection for a detailed description of the segmentation algorithm). After positioning homologous points in both the 3D face model and the photograph, the former was automatically projected onto the latter so that the two perfectly match (the fundamentals and method followed for this task are described in the Image registration process subsection). Then, the parameters originating that perfect match between the 3D face model and the photograph were applied to the 3D skull model, resulting in a perfect SFO. The latter is the ground truth projection of the skull over the face photograph. Thus, for each case, we record the 2D location (x and y pixels) of some landmarks previously marked on the 3D skull model as the ground truth data to compare with. Figure 1 graphically shows an overview of the whole ground truth data creation process. Finally, in order to validate the ground truth dataset using real distances (in millimeters), a method to estimate distances in 3D space is applied between 3D facial points and 2D facial points backprojected using the registration transformation (see the Real distances estimation subsection).
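The following sketch summarizes this pipeline in code form. It is illustrative only: the callables it receives (segment_cbct, pick_model_points, register_3d_2d, project_point) are hypothetical placeholders standing in for the segmentation and registration procedures described in the next subsections, not the authors' implementation.

```python
from typing import Callable, Sequence, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]

def build_ground_truth_case(
    cbct_volume,
    photo_points: Sequence[Point2D],
    skull_landmarks_3d: Sequence[Point3D],
    segment_cbct: Callable,       # CBCT volume -> (face mesh, skull mesh)
    pick_model_points: Callable,  # face mesh -> homologous 3D points for photo_points
    register_3d_2d: Callable,     # (3D points, 2D points) -> geometric transformation f
    project_point: Callable,      # (f, 3D point) -> 2D pixel position
):
    """Return the ground truth 2D positions (pixels) of cranial landmarks for one case."""
    # 1. Segment the CBCT scan into a 3D face model and a 3D skull model.
    face_mesh, skull_mesh = segment_cbct(cbct_volume)

    # 2. Register the 3D face model onto the photograph using homologous points.
    model_points = pick_model_points(face_mesh)
    f = register_3d_2d(model_points, photo_points)

    # 3. Apply the same transformation to the skull landmarks: their projections
    #    onto the photograph are recorded as the ground truth SFO data.
    return [project_point(f, p) for p in skull_landmarks_3d]
```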

    Data acquisition

The subjects were submitted to CBCT for clinical purposes.

During the same clinical session, the patient underwent CBCT and, subsequently, photographs for clinical documentation were taken in the two orthogonal projections (front and side) and, in one case, in an oblique projection. The patient was asked to maintain a neutral expression. The photographs were taken at a distance of between 1 and 1.5 m, using a digital camera with a CCD with a minimum resolution of 4 Mpx. The patients gave their informed consent to the use of their clinical data, anonymized, for study and research purposes.

The data were acquired in multiple locations. Thus, acquisitions were obtained with different equipment but with minimum requirements for the acquisition characteristics (orthostatic position, acquisition field 15×15 mm, voxel 0.3×0.3×0.3 mm).

Nine different persons, without facial disorders, were scanned in total. For eight of them, two photographs, frontal and lateral views, were taken. In one case, the one shown in Figs. 6 and 7, three photographs, frontal, lateral, and oblique views, were taken.

Automatic segmentation of cone beam computed tomography data

In CT scans, the grey level of a tissue is a function of its radiodensity, usually measured in Hounsfield Units (HU). This means that by selecting the voxels having grey values in a specific range, one can easily separate regions of the image having different radiodensities, such as air, soft tissue, and bony tissue.

Fig. 1 Overview of the ground truth data creation process


In CBCT, however, the relation between grey level and radiodensity is inaccurate, so that regions having the same density appear with different grey values, depending on their position relative to the organ being scanned [29–31].

Segmentation of the face surface: the difference in radiodensity between the head and the surrounding air volume is quite large. Voxels corresponding to air have values between −1,000 and −800 HU, while the head volume has a radiodensity above −400 HU. Thus, despite the accuracy issue we have mentioned earlier, the head volume can be segmented by simply selecting the voxels having HU higher than a threshold. The actual threshold values used in each image were found manually, but the process is quick and straightforward nevertheless. The face surface is then created by computing the polygonal mesh surrounding the head volume. This step was performed using the well-known marching cubes algorithm followed by smoothing and decimation [32]. Both the segmentation and the mesh computation were performed using the Slicer software package [33].
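A minimal sketch of this thresholding-plus-marching-cubes step, using scikit-image rather than Slicer; the −400 HU threshold is an illustrative value (the actual thresholds were chosen manually per image), and the smoothing and decimation steps are omitted.

```python
import numpy as np
from skimage import measure

def extract_face_surface(hu_volume: np.ndarray, threshold_hu: float = -400.0):
    """Segment the head from the surrounding air and mesh its outer surface."""
    # Air is roughly -1000 to -800 HU; everything above the threshold is kept as head.
    head_mask = hu_volume > threshold_hu
    # Marching cubes over the binary mask yields the polygonal face surface.
    verts, faces, normals, values = measure.marching_cubes(
        head_mask.astype(np.float32), level=0.5)
    return verts, faces
```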

Segmentation of the skull surface: the grey level values associated with soft and bone tissues have a significant overlap (see Fig. 2). Therefore, the simple approach used to segment the head volume cannot be applied. In addition, CBCT shows significant levels of noise and artifacts with grey levels in a similar range to that of bone.

The grey level alone is not a reliable indicator of a voxel belonging to bone. To overcome this issue, our method considers the texture of the image, i.e., the patterns of grey levels that occur between a voxel and the surrounding ones. We used the well-known approach introduced by Haralick in [34] for texture classification. For each voxel v, one computes the distribution of grey levels that occur between v and its neighbors. This results in a co-occurrence matrix from which a series of numerical texture descriptors are computed, each measuring a certain characteristic of the texture of the image around v. For instance, texture correlation measures the degree of dependence between the grey values of neighboring voxels, resulting in high values when the grey levels in an area have a regular or gradual variation and in small values when the grey level changes suddenly or in an irregular fashion. We considered three descriptors, namely energy, inertia, and correlation [34].
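The co-occurrence computation can be illustrated on a 2D grey-level patch with scikit-image (the paper works on 3D voxel neighbourhoods; the number of grey levels and neighbour offsets below are illustrative assumptions, not the study's settings).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_descriptors(patch: np.ndarray) -> np.ndarray:
    """Energy, inertia (contrast) and correlation of a small grey-level patch."""
    levels = 32
    # Quantise the patch so the co-occurrence matrix stays small.
    edges = np.linspace(patch.min(), patch.max() + 1e-6, levels)
    q = (np.digitize(patch, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([
        graycoprops(glcm, "energy").mean(),
        graycoprops(glcm, "contrast").mean(),     # "inertia" in Haralick's terminology
        graycoprops(glcm, "correlation").mean(),
    ])
```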

Texture descriptors are able to characterize the appearance of a tissue to a much larger extent than the grey level alone. However, to segment the skull, we need to find a criterion to tell apart bony and non-bony voxels based on their grey level and texture descriptors. Instead of attempting to design the criterion manually, we adopted a classic machine learning (ML) [35] approach, in which the solution to the problem is learned from a series of examples. In ML, such a criterion is known as a classifier, as it provides a way to classify objects (in this case, the texture descriptor values of a specific voxel) into a number of classes (bony or non-bony tissue). A learner, instead, is the kind of algorithm that creates a classifier automatically from a set of examples, in this case, a set of already classified voxels.

Among several well-known classifiers, we chose decision trees [36], which are graphical tree-like models representing a decision-making process. Each internal node of the tree represents a condition C. For every possible outcome of C, there is a branch leading to another condition or to a leaf node, which indicates a class. To classify an instance of the problem X, one begins at the top of the tree, evaluates the conditions, and takes the associated branches until a leaf node is reached; the class of X is that of the terminal node. In our specific case, each decision is the value of a texture feature being greater or smaller than a given threshold, so the possible outcomes are just true or false.

The process of learning a classifier is shown in Fig. 3. It begins with the creation of a set of examples, i.e., a set of objects that have already been classified, usually by an expert. In this case, we used a CBCT scan that had been manually segmented by a specialist (Dr. Cavalli) and computed its texture descriptors. The dataset was segmented manually, after a coarse automatic threshold-based segmentation, with the aim of eliminating residual noise artifacts. The CBCT dataset was processed with a multipurpose software package for medical/scientific imaging (Amira, Visage Imaging Inc.), and the segmentation was executed manually, slice by slice, in orthogonal projections with its specific tool to obtain a clean skull image.

Fig. 2 The histogram showing the grey level of bony and non-bony tissues, depicted in grey and black, respectively. Note the large overlapping region


Each voxel provides an example of the association between the values of the texture descriptors of a voxel and the corresponding status of belonging to bony tissue or not. We created a training set by considering all voxels of the manually segmented image. Then, the training set was fed to a decision tree learner, which computes a decision tree to correctly classify the training data. We used the reference learning algorithm for decision trees, C4.5 [37], implemented in the Weka machine learning toolkit [38].
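A minimal sketch of this training step, using scikit-learn's decision tree as a stand-in for the Weka/C4.5 learner actually employed; the feature layout and the pruning setting are assumptions made for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_bone_classifier(X: np.ndarray, y: np.ndarray) -> DecisionTreeClassifier:
    """Learn a per-voxel bone/non-bone classifier.

    X: one row per voxel of the manually segmented scan, holding its texture
       descriptors (e.g. energy, inertia, correlation); y: 1 for bone, 0 otherwise.
    """
    tree = DecisionTreeClassifier(criterion="entropy",   # information gain, as in C4.5
                                  min_samples_leaf=50)   # illustrative pruning surrogate
    return tree.fit(X, y)

def classify_voxels(tree: DecisionTreeClassifier, descriptors: np.ndarray) -> np.ndarray:
    """Label every voxel (rows of `descriptors`) as bone (1) or non-bone (0)."""
    return tree.predict(descriptors)
```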

Once the decision tree has been created, a CBCT scan is segmented by applying the classification process to all its voxels. This results in a volume of voxels marked as bone tissue. After this process, the segmented volume is refined by removing isolated sets of voxels having a size below a certain threshold. This removes some of the voxels incorrectly classified as bone due to noise and artifacts in the original image. Finally, a polygonal mesh is created from the segmented volume following the same procedure used before in the face model generation. The overall procedure is shown in Fig. 4.
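The post-processing can be sketched as follows, assuming a boolean bone mask produced by the classifier; the minimum component size is an illustrative value, since the paper does not report the threshold used.

```python
import numpy as np
from skimage import measure, morphology

def refine_and_mesh(bone_mask: np.ndarray, min_voxels: int = 500):
    """Drop small isolated voxel clusters, then mesh the remaining bone volume."""
    cleaned = morphology.remove_small_objects(bone_mask.astype(bool),
                                              min_size=min_voxels)
    # Same mesh-extraction step as for the face model.
    verts, faces, normals, values = measure.marching_cubes(
        cleaned.astype(np.float32), level=0.5)
    return verts, faces
```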

    Image registration process

The aim of this problem is to find a geometric transformation f that, when applied to the 3D face/skull model, locates it exactly in the same pose the patient's head had at the moment of the photograph. Thus, the problem of superimposing the 3D face model over the photograph is modeled following an image registration (IR) [39] approach. The photograph is technically the result of the 2D projection of a real (3D) scene that was acquired by a particular (unknown) camera [40]. In such a scene, the person (the patient in our case) was somewhere inside the camera field of view with a given pose. The goal is to replicate that original scenario. Thus, the following steps are performed by our method:

1. First, two sets of homologous points are located in both the 3D face model and the photograph. In our case, those points do not necessarily correspond to somatometric landmarks; in many cases, they are just points that can be easily and accurately located in both the 3D model and the photograph.

Fig. 3 An overview of the creation of the classifier used in the proposed segmentation method

Fig. 4 The process of segmenting the skull in a CBCT scan using a classifier


2. The face 3D model is positioned in the camera coordinate system through geometric transformations, i.e., translation, rotation, and scaling.

3. Then, a perspective projection of the 3D face model onto the 2D photograph of the face is performed (the perspective projection is related to the camera focal distance and the CCD matrix size).

4. Steps 2 and 3 of this process are performed iteratively, using different transformations, for a number of iterations given a priori (see Fig. 5 for an overview of the process).

Therefore, the described framework involves a 3D–2D IR task combining geometric transformations and a perspective projection, modeled by 12 unknown parameters [25]. An automatic method, in our case a genetic algorithm [41], iteratively searches for a transformation (the values of those unknown parameters) that minimizes the distances among the corresponding landmark pairs. The interested reader is referred to [25], which provides a detailed description of the optimizer together with the mathematical structures used for the geometric transformations and the perspective projection.
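As an illustration of this formulation, the sketch below registers a set of 3D model points to their 2D counterparts with a reduced parameterization (three rotations, three translations, uniform scale, focal length, and principal point, i.e. ten parameters rather than the twelve of [25]) and uses SciPy's differential evolution as a stand-in for the genetic algorithm; the parameter layout and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.transform import Rotation

def project(params, pts3d):
    """Pose the 3D points in camera space and apply a pinhole perspective projection."""
    rx, ry, rz, tx, ty, tz, s, f, cx, cy = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    cam = s * pts3d @ R.T + np.array([tx, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3] + np.array([cx, cy])

def landmark_error(params, pts3d, pts2d):
    """Mean Euclidean distance (pixels) between projected and target 2D points."""
    return np.mean(np.linalg.norm(project(params, pts3d) - pts2d, axis=1))

def register(pts3d, pts2d, bounds):
    """Search for the transformation parameters minimizing the landmark distances."""
    result = differential_evolution(landmark_error, bounds, args=(pts3d, pts2d),
                                    maxiter=500, seed=0)
    return result.x, result.fun   # best parameters and residual (pixels)
```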

The distance between homologous points (see Tables 2 and 3) in the superimposed images is considered to objectively validate the ground truth overlay. Once the 3D face model is perfectly superimposed over the photograph of the face, we can directly apply the same geometric transformation over the 3D skull model to superimpose it in the same manner. The final transformation, the whole picture of the skull projected in the 2D image, or the location of a set of craniometric landmarks can be used as ground truth data for comparison and method validation purposes.

    Real distances estimation

The Euclidean distance between homologous points (see Table 2) in the superimposed images is considered to objectively validate the ground truth overlay. As we are measuring distances in an image (2D plane), they are given in pixels.

Besides the error in pixels, we have included an additional estimation of the total error in millimeters (mm). By backprojecting the facial points located in the photograph, we can calculate a backprojection ray for a given geometric transformation f. Thus, we apply the inverse, $f^{-1}$, of the same transformation f we want to validate and then we choose one point of this ray as the 3D position of the 2D point. In particular, we select the point that minimizes the Euclidean distance between the ray and the facial point in the 3D model.

More formally, the geometric transformation f is calculated following the explanation in [25], $F = C \, (A\, D_1\, D_2\, D_2^{-1}\, D_1^{-1}\, A^{-1})\, S\, T\, P$, where F and C are the corresponding sets of 2D facial and 3D facial points, respectively. However, in order to backproject the 2D facial points, we use the inverse of that equation as follows: $C = F \, [(A\, D_1\, D_2\, D_2^{-1}\, D_1^{-1}\, A^{-1})\, S\, T\, P]^{-1}$.

Then, we obtain two different points, $C_0$ and $C_b$, of the backprojected ray by applying this last equation twice using different values for the z coordinate of matrix F, $z = 0$ and $z = b$, with b being any constant value. As a result, with these two points, we formulate the equation of the backprojected ray as follows: $\vec{r} = \vec{r}_0 + t\,\vec{v} = \langle x_{C_0},\, y_{C_0},\, z_{C_0} \rangle + t\, \langle x_{C_b} - x_{C_0},\; y_{C_b} - y_{C_0},\; z_{C_b} - z_{C_0} \rangle$.

Finally, as already explained, the estimation error (in millimeters) is defined as the minimum Euclidean distance between a 3D facial point and the backprojected ray generated by $f^{-1}$ from its corresponding actual point in the photograph. Table 3 depicts the individual and average estimation errors (in millimeters) for corresponding 3D and 2D points in all the cases.
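A minimal sketch of this convention, assuming the two backprojected points C0 and Cb are already available as 3D coordinates; the distance is taken to the infinite line supporting the ray.

```python
import numpy as np

def point_to_ray_distance(p3d: np.ndarray, c0: np.ndarray, cb: np.ndarray) -> float:
    """Minimum Euclidean distance between a 3D facial point and the backprojected ray.

    c0 and cb are the two points obtained by backprojecting the 2D facial point
    with z = 0 and z = b, respectively.
    """
    v = cb - c0                                   # ray direction
    t = np.dot(p3d - c0, v) / np.dot(v, v)        # parameter of the closest point on the line
    closest = c0 + t * v
    return float(np.linalg.norm(p3d - closest))
```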

    Results

Two different experimental results are provided in this section: on the one hand, a quantitative analysis of the performance of our 3D skull segmentation method for CBCT data; on the other hand, the validation of the ground truth data.

    Segmentation method evaluation

Fig. 5 The IR optimization process

A reliable way to assess the quality of an automatic segmentation is to compare it with one performed by an expert, assuming that the latter is completely, or at least very, accurate.


When such ground truth segmentation data is available, the automatic segmentation can be evaluated quantitatively by some measure of the agreement between the automatic and manual results, such as the Dice coefficient [34].
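For two binary segmentation masks, the Dice coefficient can be computed as below (a generic illustration, not the authors' code).

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentations (e.g. automatic vs. manual)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```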

In this study, however, no such data is available. Two images have been manually segmented by an expert, but the segmentation is not accurate enough to be considered ground truth. Indeed, the segmentations were actually performed with the aim of creating a mesh from the segmented volumes, rather than accurately marking the bony tissue in the images. This means that, for instance, while the exterior of the vertebrae has been segmented precisely, the internal part was not marked as bone, as the process of creating a mesh only involves the outermost voxels of each structure.

Bearing this issue in mind, we have performed two kinds of tests to assess the quality of our automatic segmentation technique. Let us consider the two images having manual segmentations, A and B, and recall that one such segmentation is required in the process of learning the classifier. First, we tested the accuracy of the classifier learned over a sample of an image to segment other data from the same image. This result shows whether the texture descriptors and the kind of classifier employed are able to characterize the status of a tissue. Second, we tested the accuracy of the classifier learned on A to segment B and vice versa. This indicates the ability of the approach to generalize the information gathered on one image to another one, and in the end, it shows that the method is robust and can perform properly regardless of the actual image used for learning the classifier.

While the second test is straightforward, the first test employs a standard technique for testing classifiers called cross-validation (CV) [42].

The idea is to consider the set of examples created from the voxels of the image that has been manually segmented. The set is split into two subsets, called the training and testing sets. The former is used to train a decision tree, while the latter is used to evaluate it. The data is split at random into n subsets called folds. n−1 folds are used to train the classifier, while the remaining fold is used for testing. The process is repeated n times, so that each fold plays the role of the test data. Finally, the results of the classifier are averaged over the n runs.
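A compact illustration of this n-fold scheme with a stock toolkit (scikit-learn here, whereas the study used Weka); X and y are the per-voxel texture descriptors and bone labels of the manually segmented scan.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def cv_accuracy(X: np.ndarray, y: np.ndarray, n_folds: int = 10) -> float:
    """Average classification accuracy over n_folds cross-validation runs."""
    scores = cross_val_score(DecisionTreeClassifier(criterion="entropy"),
                             X, y, cv=n_folds, scoring="accuracy")
    return float(scores.mean())
```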

The results of the tests are reported in Table 1. We measured the accuracy of the classification, i.e., the percentage of voxels that were classified correctly, which in this context is equivalent to the Dice coefficient of the corresponding segmentations. The results are excellent, with 94.7 % being the lowest value, and describe a quite clear picture. From the first two tests, one can conclude that the texture descriptors are effectively discriminating the texture of bony tissue from that of soft tissue and air. Moreover, decision trees are actually able to express the relationship between the texture descriptor values and bony tissue. This validates the overall classification-based approach and, notably, it does so despite the classifier having been trained with imprecise data.

Validation of the skull–face overlay ground truth

In order to quantitatively and objectively assess the ground truth SFO data, we have to analyze the 3D–2D face overlays employed for its generation. Tables 2 and 3 show, for each 3D face–2D face overlay problem, the distance between corresponding points in the final superimposition image. Mean and maximum distances are also reported. The distance used in every case is the Euclidean distance.

Fig. 6 Example of 3 out of the 19 3D face–2D face superimpositions. In particular, from left to right, ID-5-F, ID-5-L, and ID-5-O. Green and red dots represent the points used to register the face model over the photograph

Fig. 7 Example of the ground truth of three 3D skull–face overlays corresponding to the three cases shown in Fig. 6 (from left to right, ID-5-F, ID-5-L, and ID-5-O). Red dots were employed to locate some craniometric landmarks on the 3D model


Table 2 shows Euclidean distances measured in pixels, $d_p = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$, where $(x_1, y_1)$ and $(x_2, y_2)$ represent the pixel coordinates of two points, while Table 3 shows Euclidean distances measured in millimeters, $d_{mm} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2}$, where $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ represent the 3D (real) coordinates of two points.

As Table 2 shows, the mean error ranges from 1.79 to 4.38 pixels. If we compare these error values with the total area the face occupies in the pictures (from around 450×800 to 600×950 pixels), we can conclude that the error is not significant.

Similarly, Table 3 shows mean errors ranging from 0.06 to 2.14 mm, and only in two cases is the mean distance above 1 mm. Although the real distances depicted here are estimations and cannot be considered precise measurements, they confirm the precision of the matching between the 3D face model and the facial photograph. The sources of error and uncertainty affecting both the pixel- and millimeter-based measurements are discussed in the final Discussion and conclusions section.

Together with the quantitative validation, we also performed a visual assessment of the resulting 3D face–2D face matching. In case of a significant difference in the articulation of the mandible and/or mouth between the model and the photograph, the visual assessment does not consider their region of influence. Visually checking each case, we conclude that in all the cases the 3D face model perfectly overlays the face in the photograph. Figure 6 shows three different overlays corresponding to the ID-5-F, ID-5-L, and ID-5-O cases. The analysis of these three cases can serve as an example for the remaining ones. In case ID-5-F, while the visual assessment concluded a perfect matching (the mouth area was not subject to evaluation), the mean distance between landmarks is 3 pixels, with a maximum distance of 6 pixels. However, these distances are observed to be not significant, since all the homologous points are touching each other. In the remaining two cases (ID-5-L and ID-5-O), the visual assessment indicated a wrong matching in the mandible and mouth areas. This can be explained again by the different apertures of the mouth. The overall matching is again perfect, an evaluation supported by the insignificant mean distances between facial points (partially or completely overlapping), 2.22 and 1.84 pixels (0.50 and 0.08 mm), respectively. The corresponding SFOs are depicted in Fig. 7.

    Discussion and conclusions

Objective and unbiased validation studies over a significant number of cases are required to get a more solid picture of CFS reliability. It will not be possible to compare the performance of existing and upcoming methods for CFS without a common forensic database available to the research community. Skull–face overlay is a key task within CFS that has a direct influence on the subsequent task devoted to evaluating the skull–face relationships. In the present work, we have presented the procedure used to create such a dataset for the first time. We have also created a database with 19 SFO cases, for which we are trying to overcome the legal issues that would allow us to make it public. The quantitative analysis made in the segmentation and registration stages, together with the visual assessment of the 19 face-to-face overlays, allows us to conclude that the results can be considered a gold standard.

Within this study, we preferred to employ CBCT rather than multiple detector computed tomography (MDCT) for the following reasons:

(a) In CBCT, it is possible to obtain a scan in the orthostatic position, which is more physiological than the clinostatic acquisition of MDCT, where a certain degree of deformation of the soft tissues takes place due to gravity [43]

(b) In CBCT, there is no systematic error when comparing average homologous landmark coordinates in conventional digital cephalograms and CBCT-generated cephalograms [44]

(c) The low exposure dose of CBCT [45], which makes this type of investigation very common in clinical practice. It must be emphasized, however, that our subjects were undergoing CBCT for clinical purposes: to submit a subject to an X-ray examination, albeit with a very low dose, without a reasoned clinical need clashes with the principles of justification and ALARA [46]

CBCT equipment is designed for the clinical study of bone and teeth rather than for the soft parts of the face. Therefore, it also has some disadvantages: (i) the modest statistics of the image, with a signal-to-noise ratio lower than that of MDCT due to the low acquisition dose; (ii) the maximum field of view (FOV), optimized for the acquisition of the jaw or malar region, does not allow the acquisition of the whole skull up to the vertex; and (iii) the displayed grey levels in CBCT systems are arbitrary and do not allow for the assessment of bone quality as performed with Hounsfield Units (HU) in medical CT.

Table 1 Accuracy of the proposed segmentation method

                 10-fold CV over A   10-fold CV over B   Training A, test B   Training B, test A
Accuracy/DICE    98.4 %              97.8 %              94.7 %               94.9 %


Table 2 Euclidean distance between corresponding points of the 3D face model (once projected onto the photograph) and the photograph of the face. In the first column on the left, F is used for frontal view cases, L for lateral view cases, and O for the oblique view case

Case     3D–2D face point matching Euclidean distances (pixels)                           Max. dist.   Mean dist.
ID-1-F   0, 0, 1, 1, 1, 1.41, 1.41, 2.24, 3.61, 4.12, 5, 5, 5.39                          5.39         2.40
ID-1-L   0, 2.83, 2.83, 2.83, 3.16, 3.16, 4, 4.12, 4.47, 5, 6.08, 6.40, 6.40, 7.07, 7.28  7.28         4.38
ID-2-F   1, 1.41, 1.41, 2.24, 2.83, 4, 4.47, 5.39, 5.83, 7, 7.28, 9.01                    9.01         4.32
ID-2-L   0, 0, 1.41, 4, 5, 5.39, 6.71, 8.25                                               8.25         3.84
ID-3-F   0, 0, 1.41, 2, 2.24, 2.24, 2.24, 2.83, 3.61, 5.66, 6.08, 6.71, 7.21, 7.21        7.21         3.53
ID-3-L   0, 0, 1.41, 2.24, 2.24, 2.24, 3, 3.16                                            3.16         1.79
ID-4-F   0, 1, 2, 2.24, 2.83, 4.12, 4.12, 4.24, 5.10, 5.66, 6.40, 7.07, 10                10           4.21
ID-4-L   0, 0, 0, 1, 1, 2, 2, 9.90                                                        9.90         1.99
ID-5-F   1, 1, 1, 2.24, 3, 3.16, 3.16, 4, 4.24, 5, 5, 5.83, 6.40, 6.40                    6.40         3.67
ID-5-L   0, 0, 1, 2.83, 3, 3.16, 3.16, 3.16, 3.61                                         3.61         2.21
ID-5-O   0, 0, 0, 0, 2.24, 9.22, 9.49                                                     9.49         2.99
ID-6-F   0, 1, 1.41, 1.41, 2, 3.16, 4.12, 5, 5, 5.39, 5.83                                5.83         3.12
ID-6-L   0, 0, 0, 1, 3.61, 7.21                                                           7.21         1.97
ID-7-F   0, 0, 1, 1, 2.24, 2.24, 2.83, 3, 3.16, 5, 9.22                                   9.22         2.69
ID-7-L   0, 0, 0, 1.41, 3.61, 5.66, 5.66, 5.83, 6.40, 9.06                                9.06         3.76
ID-8-F   0, 0, 1, 1.41, 2, 2.24, 2.24, 2.24, 2.83, 4, 6.71                                6.71         2.24
ID-8-L   0, 0, 3.61, 4.47, 5.39, 8.06                                                     8.06         3.59
ID-9-F   0, 1, 1, 1.41, 1.41, 2, 2, 2.83, 3.61, 3.61, 3.61                                3.61         2.04
ID-9-L   0, 0, 1, 1, 1.41, 2.24, 5.10, 9                                                  9            2.47


Table 3 Estimated real distance in millimeters between corresponding points of the 3D face model (once projected onto the photograph) and the photograph of the face. In the first column on the left, F is used for frontal view cases, L for lateral view cases, and O for the oblique view case

Case     3D–2D face point matching Euclidean distances (mm)                                              Max. dist.   Mean dist.
ID-1-F   0.00, 0.00, 0.04, 0.06, 0.12, 0.14, 0.14, 0.15, 0.15, 0.20, 0.24, 0.24, 0.28                    0.28         0.14
ID-1-L   0.07, 1.32, 1.41, 1.55, 1.86, 1.92, 1.94, 2.06, 2.17, 2.18, 2.83, 3.02, 3.05, 3.06, 3.62        3.62         2.13
ID-2-F   0.04, 0.04, 0.06, 0.07, 0.08, 0.08, 0.09, 0.10, 0.11, 1.22, 2.15, 3.16                          3.16         0.56
ID-2-L   0.00, 0.00, 0.03, 0.06, 0.07, 0.08, 0.09, 0.10                                                  0.10         0.05
ID-3-F   0.00, 0.00, 0.02, 0.10, 0.10, 0.12, 0.13, 0.13, 0.15, 0.16, 0.83, 0.98, 1.08, 1.53              1.53         0.38
ID-3-L   0.01, 0.01, 0.50, 0.55, 0.66, 0.71, 0.84, 1.59                                                  1.59         0.61
ID-4-F   0.04, 0.05, 0.05, 0.05, 0.06, 0.07, 0.07, 0.08, 0.08, 0.09, 0.09, 0.09, 0.09                    0.09         0.07
ID-4-L   0.00, 0.00, 0.03, 0.05, 0.09, 0.09, 0.11, 0.13                                                  0.13         0.06
ID-5-F   0.10, 0.19, 0.21, 0.37, 0.39, 0.46, 0.50, 0.52, 0.53, 0.55, 0.58, 0.65, 1.45, 1.69              1.69         0.58
ID-5-L   0.00, 0.07, 0.21, 0.29, 0.30, 0.30, 0.45, 1.23, 1.63                                            1.63         0.50
ID-5-O   0.00, 0.01, 0.01, 0.09, 0.13, 0.14, 0.17                                                        0.17         0.08
ID-6-F   0.00, 0.28, 0.31, 0.34, 0.42, 0.59, 0.91, 0.91, 1.24, 1.39, 2.24                                2.24         0.78
ID-6-L   0.00, 0.00, 0.00, 0.79, 1.04, 1.72                                                              1.72         0.59
ID-7-F   0.01, 0.04, 0.05, 0.06, 0.06, 0.11, 0.11, 0.11, 0.16, 0.18                                      0.18         0.09
ID-7-L   0.00, 0.00, 0.29, 0.59, 1.00, 1.52, 1.62, 1.72, 2.43, 6.40                                      6.40         1.56
ID-8-F   0.00, 0.00, 0.30, 0.49, 0.58, 0.60, 0.62, 0.65, 0.70, 0.99, 1.86                                1.86         0.62
ID-8-L   0.00, 0.00, 0.07, 0.07, 0.10, 0.10                                                              0.10         0.06
ID-9-F   0.02, 0.04, 0.06, 0.08, 0.08, 0.09, 0.12, 0.12, 0.15, 0.15, 0.17                                0.17         0.10
ID-9-L   0.00, 0.00, 0.03, 0.08, 0.09, 0.10, 0.11, 0.14, 0.16                                            0.16         0.08


In [47], the authors demonstrated that it should be possible to derive one from the other. However, the practical application of this method is almost lacking. As the authors pointed out, their results were obtained in an ideal situation in which the location and size of each material in the 3D dental phantom, and the subsequent size of the ROI to sample, were known. Additionally, at low kilovolt and milliampere settings on CBCT machines, quantum noise may be sufficient to interfere with the estimation of the actual grey level. Another assumption of that study is that the photon beam in CBCT machines obeys the laws of narrow-beam attenuation. Contrary to this assumption, CBCT machines operate with an area detector, which is not collimated like fan-beam medical CT. In conclusion, there is a need for ad hoc CBCT segmentation methods.

Thus, we designed, implemented, and tested a novel method for skull segmentation in CBCT data using decision tree classifiers and texture information. This method allows us to obtain automatic, accurate, unbiased, and expert-independent 3D skull segmentations from CBCT data.

These medical data of the head allow us to obtain a reliable and accurate ground truth. As introduced before, this is possible thanks to the presence of the corresponding 3D face model together with the skull. The 3D face model gives us the possibility to superimpose a face (3D model) over the same face (photograph), i.e., to superimpose the same object acquired by different sensors. The process of finding a geometric transformation that overlays two images taken under different conditions (at different times, from different viewpoints, and/or by different sensors) is called image registration (IR) [39]. Several works reviewing the state of the art on IR methods have been contributed in the last few years [48–50]. Although an extensive survey of every aspect of the IR framework is beyond the scope of this work, we would like to briefly describe the key concepts of the IR methodology in order to achieve a better understanding of our work. There is no universal design for a hypothetical IR method that could be applied to all registration tasks, since various considerations on the particular application must be taken into account. Nevertheless, IR methods usually require the four following components (see Fig. 5): two input images, named scene and model; a registration transformation f, a parametric function relating the two images; a similarity metric, which measures the closeness or degree of fitting between the transformed scene image and the model image; and an optimizer, which looks for the optimal transformation f inside the defined solution search space. The case of SFO is a complex task because we are trying to reproduce the original scenario with an important number of unknowns coming from two different sources [25]: (i) the camera configuration: at the moment of the acquisition, there were different parameters that have an influence on the SFO problem, in particular, the focal length and the distance from the camera to the person; and (ii) the 3D face model.

This face model will have a specific orientation, resolution, and size given by the technical features of the scanner considered. Hence, a 3D–2D IR process where all these unknown parameters have to be estimated seems to be the most natural and appropriate formulation.

The technical procedure followed is the best scientific approach to the problem; it is quite robust and accurate, and it is automatic and expert-independent (unbiased). Thus, it serves as a general procedure to develop ground truth datasets with different medical data (CBCT but also CT). However, the resulting ground truth dataset we have generated still presents some limitations:

(1) CBCT data have one major limitation, the reduced field of view. In our case, all our 3D scans lack an important part of the head, the upper part. This has a negative influence on the reliability of the facial superimpositions carried out. Although we have quantitatively and qualitatively demonstrated the accurate matching in the validation section, the absence of a part of the face model makes it impossible to conclude that a perfect matching has been achieved. Similarly, if this dataset is employed for the comparison of SFO methods, conclusions about how different methods match the upper part cannot be reached. This is an important part, since many practitioners rely significantly on the cranial outline during forensic practice, and its absence will make the SFO process more difficult. As a result, there could be methods performing badly in these conditions that could perform better in a more realistic situation where all the cranial data are included within the 3D skull model

(2) The validation of the CBCT segmentation method did not make use of ground truth data. From the two tests explained in the Segmentation method evaluation subsection, it follows that the results are highly consistent with the manual segmentations. Note that even a perfect segmentation would have resulted in some error, as we are not comparing with ground truth data, so the automatic segmentations could actually be better than the manual ones. Also, the actual image used for learning the classifier had a negligible effect on the final results, indicating the robustness of the approach

(3) The correspondence among pairs of facial points in the photograph and in the 3D face model is not perfect. In an ideal situation, the distance between points should always be zero. However, in the real situation, we have to take into account two different sources of error that are almost impossible to overcome:

a. Facial point location error: this is related to the extremely difficult task of locating the points in the exact same place (pixel) in both the 3D model and the photograph


b. Facial pose error: it is really difficult, if not impossible, for the patient to have the same pose (mandible, mouth, and eye aperture) at the two different acquisition moments (scanning and photographing). In addition, some of the subjects were asked to smile for the photograph (to be able to evaluate teeth matching in CFS). Thus, small facial changes have to be assumed. Additionally, homologous points were located in face regions where this effect is expected to be insignificant. For example, in most of the cases, we avoided locating points in the mandible area of the face to reduce the effect of the facial pose error

Thus, the errors depicted in Tables 2 and 3 can be attributed to the two sources of error described above. In fact, these mean and maximum error values are in any case explained by facial landmark positioning error studies [28, 51]. Notice that it is not possible to calculate the precise 3D position of a 2D point from a single image, since the depth information (z coordinate) is unknown. Nevertheless, we employ a convention, the minimum Euclidean distance between a 3D facial point and the backprojected ray, that allows us to estimate distances in millimeters with the clear advantage of being independent of the image resolution.

As a result of the methodology proposed, once the 3D face model is perfectly superimposed over the photograph of the face, we can directly apply the same geometric transformation over the 3D skull model to superimpose it in the same manner. Thus, the geometric transformation, the whole picture of the skull projected in the 2D image, or the location of a set of craniometric landmarks can be used as ground truth data for comparison and method validation purposes.

Concerning the utility of this ground truth dataset, or of the technical process described to generate new ones, there are a few potential applications apart from the direct practical use for automatic SFO method assessment and comparison. It could be really useful for studying the discriminative power of the different criteria for assessing the skull–face relationship. In fact, this represents the first use of the database in the framework of the European project "The new methodologies and protocols of forensic identification by craniofacial superimposition" (MEPROCS, www.meprocs.eu). A subset of the 19 ground truth SFOs, together with an equivalent number of manually obtained SFOs of negative cases, was provided to several participants who were asked to evaluate a number of morphological criteria for each case. Using the results obtained from this study, MEPROCS partners are trying to establish a ranking of importance of morphological criteria for CFS. This study, of main forensic interest, may also have an important outcome in the field of maxillofacial surgery and orthodontics, not only to improve our knowledge of craniofacial relationships but also as a starting point for further studies on the prediction of the resulting facial morphology after corrective or reconstructive interventions [52].

Acknowledgments We would like to thank all the participants who gave us permission to work with both their head scans and facial photographs, Drs. Luca Contardo and Domenico Dalessandri for the support provided during image acquisition and head scanning, and the University Hospital of Trieste and Ortoscan for supporting this research. This work has been supported by the Spanish Ministerio de Economía y Competitividad under the SOCOVIFI2 project (refs. TIN2012-38525-C01/C02, http://www.softcomputing.es/socovifi/), the Andalusian Department of Innovación, Ciencia y Empresa under project TIC2011-7745, the Principality of Asturias Government under the project with reference CT13-55, and the European Union's Seventh Framework Programme for research, technological development and demonstration under the MEPROCS project (Grant Agreement No. 285624), including European Development Regional Funds (EDRF). Mrs. C. Campomanes-Álvarez's work has been supported by Spanish MECD FPU grant AP-2012-4285. Dr. Ibáñez's work has been supported by Spanish MINECO Juan de la Cierva Fellowship JCI-2012-15359.

    References

1. Burns KR (2012) Forensic anthropology training manual, 3rd edn. Pearson Education, Upper Saddle River

2. Cattaneo C (2007) Forensic anthropology: development of a classical discipline in the new millennium. Forensic Sci Int 165(2–3):185–193

3. Yoshino M (2012) Craniofacial superimposition. In: Wilkinson C, Rynn C (eds) Craniofacial identification. Cambridge University Press, Cambridge, pp 238–253

4. Aulsebrook WA, Iscan MY, Slabbert JM, Beckert P (1995) Superimposition and reconstruction in forensic facial identification: a survey. Forensic Sci Int 75(2–3):101–120

5. Stephan CN (2009) Craniofacial identification: techniques of facial approximation and CFS. In: Blau S, Ubelaker DH (eds) Handbook of forensic anthropology and archaeology. Left Coast Press, California, pp 304–321

6. Damas S, Cordón O, Ibáñez O, Santamaría J, Alemán I, Botella M (2011) Forensic identification by computer-aided CFS: a survey. ACM Comput Surv 43(4):27

7. Al-Amad S, McCullough M, Graham J, Clement J, Hill A (2006) Craniofacial identification by computer-mediated superimposition. J Forensic Odontostomatol 24(2):47–52

8. Glaister J, Brash JC (1937) Medico-legal aspects of the Ruxton case. E and S Livingstone, Edinburgh

9. Galton F (1896) The Bertillon system of identification. Nature 54:569–570

10. Broca P (1875) Instructions craniologiques et craniométriques de la Société d'Anthropologie de Paris [in French]. Masson G (ed), Paris

11. Ubelaker DH, Bubniak E, O'Donnell G (1992) Computer-assisted photographic superimposition. J Forensic Sci 37(3):750–762

12. Dorion RB (1983) Photographic superimposition. J Forensic Sci 28(3):724–734

13. Brocklebank LM, Holmgren CJ (1989) Development of equipment for the standardization of skull photographs in personal identifications by photographic superimposition. J Forensic Sci 34(5):1214–1221

14. Maat GJ (1989) The positioning and magnification of faces and skulls for photographic superimposition. Forensic Sci Int 41(3):225–235

15. Helmer R, Grüner O (1976) Vereinfachte Schädelidentifizierung nach dem Superprojektionsverfahren mit Hilfe einer Video-Anlage [in German]. Z für Rechtsmedizin 80(3)

16. Fenton TW, Heard AN, Sauer NJ (2008) Skull-photo superimposition and border deaths: identification through exclusion and the failure to exclude. J Forensic Sci 53(1):34–40

17. Seta S, Yoshino M (1993) A combined apparatus for photographic and video superimposition. In: Iscan MY, Helmer R (eds) Forensic analysis of the skull. Wiley, New York, pp 161–169

18. Lan Y, Cai D (1993) Technical advances in skull-photo superimposition. In: Iscan MY, Helmer R (eds) Forensic analysis of the skull. Wiley, New York

19. Pesce Delfino V, Colonna M, Vacca E, Potente F, Introna F (1986) Computer-aided skull/face superimposition. Am J Forensic Med Pathol 7(3):201–212

20. Ricci A, Marella GL, Apostol MA (2006) A new experimental approach to computer-aided face/skull identification in forensic anthropology. Am J Forensic Med Pathol 27(1):46–49

21. Ghosh AK, Sinha P (2001) An economised craniofacial identification system. Forensic Sci Int 117(1–2):109–119

22. Nickerson BA, Fitzhorn PA, Koch SK, Charney M (1991) A methodology for near-optimal computational superimposition of two-dimensional digital facial photographs and three-dimensional cranial surface meshes. J Forensic Sci 36(2):480–500

23. Huete MI, Kahana T, Ibáñez O (2014) Past, present, and future of CFS: literature and international surveys. University of Granada, Spain, Tech. Rep. DECSAI 201401. Submitted to Legal Medicine

24. Ubelaker DH (2000) A history of Smithsonian-FBI collaboration in forensic anthropology, especially in regard to facial imagery [abstract]. Forensic Sci Commun 2(4)

25. Ibáñez O, Ballerini L, Cordón O, Damas S, Santamaría J (2009) An experimental study on the applicability of evolutionary algorithms to CFS in forensic identification. Inf Sci 179(23):3998–4028

26. Ibáñez O, Cordón O, Damas S, Santamaría J (2011) Modeling the skull–face overlay uncertainty using fuzzy sets. IEEE Trans Fuzzy Syst 19(5):946–959

27. Ibáñez O, Cordón O, Damas S (2012) A cooperative coevolutionary approach dealing with the skull–face overlay uncertainty in forensic identification by CFS. Soft Comput 18(5):797–808

28. Campomanes-Álvarez B, Ibáñez O, Navarro F, Alemán I, Cordón O, Damas S (2014) Dispersion assessment in the location of facial landmarks on photographs. Int J Legal Med, in press

29. De Vos W, Casselman J, Swennen GR (2009) Cone-beam computerized tomography (CBCT) imaging of the oral and maxillofacial region: a systematic review of the literature. Int J Oral Maxillofac Surg 38:609–625

30. Swennen GRJ, Schutyser F (2006) Three-dimensional cephalometry: spiral multi-slice vs cone-beam computed tomography. Am J Orthod Dentofac Orthop 130:410–416

31. Katsumata A, Hirukawa A, Noujeim M, Okumura S, Naitoh M, Fujishita M, Ariji E, Langlais RP (2006) Image artifact in dental cone-beam CT. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 101:652–657

32. Botsch M, Kobbelt L, Pauly M, Alliez P, Levy B (2010) Polygon mesh processing. AK Peters

33. Fedorov A, Beichel R, Kalpathy-Cramer J, Finet J, Fillion-Robin JC, Pujol S, Bauer C, Jennings D, Fennessy F, Sonka M, Buatti J, Aylward SR, Miller JV, Pieper S, Kikinis R (2012) 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging 30(9):1323–1341

34. Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern 3(6):610–621

35. Mitchell T (1997) Machine learning. McGraw Hill

36. Loh WY (2011) Classification and regression trees. Wiley Interdiscip Rev Data Min Knowl Discov 1(1):14–23

37. Quinlan JR (1993) C4.5: programs for machine learning. Morgan Kaufmann

38. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The WEKA data mining software: an update. ACM SIGKDD Explor Newsl 11(1):10–18

39. Zitova B, Flusser J (2003) Image registration methods: a survey. Image Vision Comput 21(11):977–1000

40. Faugeras O (1993) Three-dimensional computer vision: a geometric viewpoint. MIT Press, Cambridge

41. Talbi E (2009) Metaheuristics: from design to implementation. Wiley

42. Geisser S (1993) Predictive inference. Chapman and Hall, New York

43. Swennen GRJ, Schutyser F (2006) Three-dimensional cephalometry: spiral multi-slice vs cone-beam computed tomography. Am J Orthod Dentofac Orthop 130:410–416

44. Grauer D, Cevidanes LSH, Styner MA, Heulfe I, Harmon ET, Zhu H, Proffit WR (2010) Accuracy and landmark error calculation using cone-beam computed tomography-generated cephalograms. Angle Orthod 80(2):286–294

45. Loubele M, Bogaerts R, Van Dijck E, Pauwels R, Vanheusden S, Suetens P, Marchal G, Sanderink G, Jacobs R (2009) Comparison between effective radiation dose of CBCT and MSCT scanners for dentomaxillofacial applications. Eur J Radiol 71(3):461–468

46. Moores BM, Regulla D (2011) A review of the scientific basis for radiation protection of the patient. Radiat Prot Dosim 147(1–2):22–29

47. Mah P, Reeves TE, McDavid WD (2010) Deriving Hounsfield units using grey levels in cone beam computed tomography. Dentomaxillofac Radiol 39:323–335

48. Damas S, Cordón O, Santamaría J (2011) Medical image registration using evolutionary computation: an experimental study. IEEE Comput Intell Mag 6(4):26–42

49. Goshtasby AA (2005) 2-D and 3-D image registration for medical, remote sensing, and industrial applications. Wiley-Interscience

50. Salvi J, Matabosch C, Fofi D, Forest J (2007) A review of recent range image registration methods with accuracy evaluation. Image Vis Comput 25(5):578–596

51. Cummaudo M, Guerzoni M, Marasciuolo L, Gibelli D, Cigada A, Obertová Z, Ratnayake M, Poppa P, Gabriel P, Ritz-Timme S, Cattaneo C (2013) Pitfalls at the root of facial assessment on photographs: a quantitative study of accuracy in positioning facial landmarks. Int J Legal Med 127:699–706

52. Plooij JM, Maal TJ, Haers P, Borstlap WA, Kuijpers-Jagtman AM, Bergé SJ (2011) Digital three-dimensional image fusion processes for planning and evaluating orthodontics and orthognathic surgery: a systematic review. Int J Oral Maxillofac Surg 40(4):341–345

