
AIX-MARSEILLE UNIVERSITY

Identification number of the thesis:

Doctoral school: Physics and Material Sciences (352)

Fresnel Institute, Fraunhofer IIS, AREVA

HYPERSPECTRAL IMAGERY ALGORITHMS FOR THE

PROCESSING OF MULTIMODAL DATA :

Application for metal surface inspection in an industrial

context by means of multispectral imagery, infrared

thermography and stripe projection techniques

THESIS

presented and defended publicly on December 19, 2013 to obtain the degree of

Doctor of Aix-Marseille University

speciality "Optics, Photonics and Image Processing"

by:

Mohammed Seghir Benmoussat

Composition of the jury :

Reviewers : Mr. Franck Marzani Laboratoire Le2i - Université de Bourgogne, France

Mr. Xavier Maldague Université Laval (Québec), Canada

Examiners : Mr. Yannick Caulier AREVA Reactors & Services, France

Ms. Mireille Guillaume Ecole Centrale Marseille, France

Mr. Jean Sequeira LSIS, Aix Marseille Universités, France

Mr. Klaus Spinnler Fraunhofer Institute, Fürth, Germany


AIX-MARSEILLE UNIVERSITÉ

Numéro d’identification de la thèse :

Ecole Doctorale Physique et Science de la Matière (352)

Institut Fresnel, Fraunhofer iis, Areva

ALGORITHMES DE L’IMAGERIE HYPERSPECTRALE

POUR LE TRAITEMENT DE DONNÉES

MULTIMODALES :

Application pour l’inspection de surfaces métalliques dans un

contexte industriel par moyen de l’imagerie multispectrale, la

thermographie infrarouge et des techniques de projection de

franges

THÈSE

présentée et soutenue publiquement le 19 Décembre 2013

en vue d’obtenir le grade de

Docteur de l’Université d’Aix-Marseille

spécialité «Optique, Photonique et Traitement d’Image »

par :

Mohammed Seghir Benmoussat

Composition du jury :

Rapporteurs : Mr. Franck Marzani Laboratoire Le2i - Université de Bourgogne, France

Mr. Xavier Maldague Université Laval (Québec), Canada

Examinateurs : Mr. Yannick Caulier AREVA Reactors & Services, France

Mme. Mireille Guillaume Ecole Centrale de Marseille, France

Mr. Jean Sequeira LSIS, Aix Marseille Universités, France

Mr. Klaus Spinnler Fraunhofer Institute, Fürth, Allemagne


Acknowledgement

First of all, I thank God, who gave me the strength and the will to see this thesis through. I wish to express here my deep gratitude to all those who, in one way or another, have been involved in the development of this work.

I would like to thank in a special way my thesis advisor, Mireille Guillaume, for providing an excellent research environment and for the precious freedom she always gave me to try out my own ideas, and especially for her help and support during all these years. I am also very grateful to Yannick Caulier, co-advisor of this thesis, and Klaus Spinnler. Without them this thesis would not have been possible. They proposed the subject of the thesis and offered me everything I needed to complete this work but, more importantly, they also gave me the strength to believe in it. It has been both an honor and a pleasure to work with them. I would also like to thank the jury members; in particular Franck Marzani and Xavier Maldague, for their interest in my work, for accepting the demanding task of reviewing it, and for the relevance of their observations and the accuracy of their assessments; and Jean Sequeira, for doing me the honor of presiding over the jury.

This work was carried out at the Fraunhofer Institute (Fürth, Germany), in collaboration with the Fresnel Institute (Marseille, France) and AREVA (Chalon-sur-Saône, France). I express my gratitude to the Bavarian Research Foundation (BFS: Bayerische Forschungsstiftung) and to Thomas Wenzel for supporting this research. And, of course, my deepest gratitude goes to all my colleagues at the Fraunhofer IIS, and in the GSM and HIPE teams, particularly my colleagues in the PRP department, for providing a very pleasant environment and a stimulating atmosphere.

Finally, I would like to extend my thanks to all my family and friends. To my father and mother, my wife and my children Younes and Anfal. To Sofiane and Nassima, Chakib, Bénali and Kamel; and to my sister Fadia and her husband and daughters Imen and Radjaa. Thanks for putting up with me all these years! You have been my strength during the preparation of this thesis. Thanks for your words of encouragement, your understanding, your loyal support. Thanks for all the moments we shared together. Thanks for your precious friendship.


Acronyms

ACE Adaptive cosine estimator
AIC Akaike information criterion
AMF Adaptive matched filter
AOTF Acousto-optic tunable filter
ATC Absolute thermal contrast
AVT Automated visual testing
CAD Computer-aided design
CFAR Constant false alarm rate
CP Cooling part
DAC Differential absolute contrast
DC Direct current
DFT Discrete Fourier transform
EOF Empirical orthogonal functions
ERT Early recorded thermogram
FAR False alarm rate
FFT Fast Fourier transform
FIR Far IR
GLRT Generalized likelihood ratio test
HCP Heating and cooling part
HOS Higher order statistics
HP Heating part
HSI Hyperspectral imagery
HySime Hyperspectral signal identification by minimum error
IC Independent component
ICA Independent component analysis
iPP Inverse fringe projection
IRT Infrared thermography
LAP List of anomaly pixels
LCTF Liquid crystal tunable filter
LED Light emitting diode
LNC List of all neighbor curves
LP Light pattern
LRT Likelihood ratio test
LT Lock-in thermography
LWIR Long wave IR
MCD Most common distance
MDL Minimum description length
MNF Maximum noise fraction
MT Magnetic particle testing
MWIR Mid wave IR
NB Naive Bayes
NDE Nondestructive evaluation
NDT Non-destructive testing
NIR Near infrared
NN Nearest-neighbor
PC Principal component
PCA Principal component analysis
PCT Principal component thermography
PD Probability of detection
PFA Probability of false alarm
PGP Prism-grating-prism
PML Polarized monochromatic light
PPT Pulsed phase thermography
PSC Pseudo-spectral-cubes
PT Pulsed thermography
PWL Polarized white light
RARX Regularized adaptive RX
ROC Receiver operating characteristic
ROI Region of interest
RT Radiographic testing
RX Reed and Xiaoli
SAM Spectral angle mapper
SH Step-heating
SID Spectral information divergence
SNR Signal to noise ratio
SV Singular value
SVD Singular value decomposition
SWIR Short wave IR
TAU Kendall's τ
TT Thermography testing
UML Unpolarized monochromatic light
UT Ultrasonic testing
UWL Unpolarized white light
VD Virtual dimension
VT Vibrothermography
WP Workpiece
WSP Without saturated pixel


Table of contents

Acknowledgement
Acronyms
Abstract
Résumé
Introduction
1 Hyperspectral imagery
1.1 Introduction
1.2 Hyperspectral imagery
1.2.1 Spectral reflectance
1.2.2 Spectral libraries
1.2.3 Hyperspectral images
1.2.4 Advantages and disadvantages of HSI
1.3 Representation of hyperspectral images
1.4 Denoising and dimensionality reduction
1.4.1 Denoising and dimensionality reduction algorithms
1.4.1.1 Singular value decomposition (SVD)
1.4.1.2 Maximum noise fraction (MNF)
1.4.2 Estimation of the subspace signal dimension
1.4.2.1 AIC and MDL
1.4.2.2 HySime
1.5 Detection methods in multivariate imaging
1.5.1 Supervised detection
1.5.1.1 Target detection
1.5.1.2 Spectral distance measures
1.5.2 Unsupervised detection
1.5.3 Principle of detection with CFAR
1.6 Conclusion
2 Surface defect detection on flat metal parts using multi-component images
2.1 Introduction
2.2 Image acquisition system
2.2.1 Light sources
2.2.2 Polarized light
2.2.3 Light perception and color interpretation
2.2.4 Modalities
2.3 Algorithms
2.4 Experiments
2.5 Results and discussion
2.6 Conclusion
3 Surface and sub-surface defect detection on nuclear parts using thermography images
3.1 Introduction
3.1.1 Motivation of thermography testing
3.1.2 History of previous works
3.1.3 Goal and new contributions
3.1.4 Structure of the chapter
3.2 State of the art
3.2.1 Existing methods of defect detection in IRT
3.2.1.1 Image normalization
3.2.1.2 Thermal contrast techniques
3.2.1.3 Pulsed phase thermography (PPT)
3.2.1.4 Principal component thermography (PCT)
3.2.2 Limitations
3.3 Proposed method
3.3.1 Problem formulation and approach
3.3.1.1 Construction of hypertemporal cubes
3.3.1.2 Detection algorithms
3.3.1.3 Dimension reduction
3.3.1.4 Is there a best direction?
3.3.1.5 Flowchart of the proposed method
3.3.2 Methods, experiments and results (application on nuclear components examination)
3.3.2.1 Experimental setup
3.3.2.2 Choosing the temporal ROI
3.3.2.3 Detection after singular value decomposition (SVD)
3.3.2.4 Detection after maximum noise fraction (MNF)
3.3.2.5 Detection after independent component analysis (ICA)
3.3.2.6 Choice of the virtual dimension, K
3.3.2.7 Proposed one-dimensional approach with principal component analysis (PCA)
3.3.2.8 Performance analysis
3.4 Conclusion
4 Structured light for the inspection of free-form metal surfaces
4.1 Introduction
4.1.1 Motivation of structured light
4.1.2 History of previous works
4.1.3 Goal and new contributions
4.1.4 Structure of the chapter
4.2 State of the art of surface inspection methods
4.2.1 Inspection of cylindrical reflective metallic surfaces
4.2.2 Inverse fringe projection
4.2.3 Actual limitations
4.3 Proposed method for the inspection of free-form surfaces with phase-shifting technique
4.3.1 Problem formulation and approach description
4.3.2 Choice of ROI and pattern generation
4.3.3 Phase-shifting and camera recording
4.3.4 Fringe analysis and defect detection
4.3.4.1 Phase-image calculation
4.3.4.2 Stripe detection
4.3.4.3 Curve analysis and defect detection
4.3.5 Experimental results (application on car wheels)
4.3.5.1 Experimental setup
4.3.5.2 Inspected workpieces
4.3.5.3 Experimental results and discussion
4.3.6 Performance analysis
4.4 Conclusion
Concluding remarks
Appendices
A Fundamentals of infrared thermography (IRT)
A.1 Thermal energy
A.1.1 Heat transfer
A.1.2 Latent heat
A.1.3 Conduction
A.1.4 Convection
A.1.5 Radiation
A.2 Infrared systems fundamentals
A.2.1 Thermal emission
A.2.2 Spectral bands
A.2.3 Choice of infrared band
A.2.4 Infrared radiation from the object to the image
A.2.5 Detectors
A.3 Infrared thermography
A.3.1 Overview of nondestructive testing methods
A.3.2 Infrared thermography in the NDT&E scene
A.3.3 IRT system
A.3.4 Conditions for using IRT
A.3.5 Advantages and difficulties of IRT
A.3.6 Pulsed thermography
A.3.7 Lock-in thermography
A.3.8 The complete thermogram sequence
B Bernoulli-Gaussian model
C Additional results obtained with SVD
D Additional results obtained with MNF
E Additional results obtained with ICA
F Principal components obtained with SVD
G Principal components obtained with MNF
References


Abstract

The work presented in this thesis deals with the quality control and inspection of industrial metallic surfaces. The purpose is the generalization and application of hyperspectral imagery (HSI) methods to multimodal data such as multi-channel optical images and multi-temporal thermographic images. We also propose a multi-phase method to detect 3D defects with structured light. The objective is to develop reliable algorithms, implementable in an industrial context, for the detection of different types of defects in metal parts such as nuclear components and car wheel surfaces.

First, we recall some basics of HSI data processing. The Hughes phenomenon, related to the high dimensionality of the processed data spaces, is addressed. Some classical denoising and dimensionality reduction methods, as well as the algorithms used to estimate the signal subspace dimension, are presented. Then, the target detection and anomaly detection algorithms most commonly used in HSI applications are reviewed. In the first application, data cubes are built from multi-component images to detect surface defects within flat metallic parts. Different lighting modalities, white and monochromatic light in combination with polarization, are used to construct the data cubes. Both supervised and unsupervised approaches are investigated to detect the defects. The best performances are obtained with multi-wavelength illuminations in the visible and near infrared ranges, and detection using the spectral angle mapper with the mean spectrum as a reference.

The second application concerns the use of thermographic imaging for the inspection of nuclear metal components to detect surface and subsurface defects. The multi-temporal data cubes are built from thermal images recorded during the heating and cooling steps of the inspected specimen, using induction thermography with lock-in and pulsed techniques. A method based on the variation of energy, or of the signal-to-noise ratio (SNR), is proposed to estimate the virtual dimension of the data spaces, which are reduced by means of the singular value decomposition (SVD) and maximum noise fraction (MNF) algorithms. Anomaly detection algorithms are then applied to the reduced data cubes to detect the defects. A one-dimensional approach is also proposed, based on using the kurtosis as a contrast criterion to select one principal component (PC), as the best candidate contrast map, from the first PCs obtained after reducing the original data cube with the principal component analysis (PCA) algorithm. The proposed PCA-1PC method gives good performance with non-noisy and homogeneous data, while SVD combined with anomaly detection algorithms gives the most consistent results and is quite robust to perturbations such as an inhomogeneous background.

Finally, an approach based on fringe analysis and structured light techniques, in the case of deflectometric recordings, is presented for the inspection of reflective free-form metal surfaces. It avoids the drawbacks of the classical methods, which consist in projecting an inverse fringe pattern created from the computer-aided design (CAD) model of the inspected surface and require an accurate calibration step of the whole system. After determining the parameters describing the sinusoidal stripe patterns according to the defect characteristics and the acquisition system, the proposed approach consists in projecting a list of phase-shifted sinusoidal patterns and calculating the phase-images from the corresponding recorded patterns. Defect location is based on detecting and analyzing the stripes within the phase-images: if a distortion or discontinuity appears along the stripes, a defect is present. Finally, a study on the choice of the fringe parameters is carried out in order to optimize the detection of the defects according to their characteristics.

Keywords: Hyperspectral imagery, Dimensionality reduction, Defect and anomaly detection, Surface inspection, Polarization, Illumination, Non-destructive testing, Pulsed thermography, Lock-in thermography, Induction thermography, Deflectometry, Structured light, Stripe projection, Phase-shifting, Fringe analysis.


Résumé

LE travail présenté dans cette thèse porte sur le contrôle qualité et l’inspection de surfaces

métalliques industrielles. Nous proposons de généraliser et d’appliquer des méthodes

de l’imagerie hyperspectrale à des données multimodales comme des images optiques multi-

canales, et des images thermographiques multi-temporelles. Nous avons également proposé une

méthode multi-phase pour détecter des défauts 3D en illumination ’structurée’. L’objectif est

de développer des algorithmes fiables et réalisables dans un milieu industriel pour la détection

de différents types de défauts dans des pièces métalliques tels que des composantes nucléaires

et des surfaces des jantes de voitures.

Tout d’abord nous rappelons des bases du traitement des données hyperspectrales. Le

phénomène de Hughes, lié à la grande dimensionnalité de l’espace de données traitées, est

mentionné. Puis les méthodes classiques de débruitage et de réduction de dimensionnalité

ainsi que des algorithmes d’estimation de la dimension du sous-espace signal sont présentés.

Ensuite, les algorithmes classiques de détection de cibles et de détection d’anomalies sont

présentés. Dans la première application, les cubes de données sont construits à partir d’images

multi-composantes pour détecter des défauts de surface dans des pièces métalliques planes. Les

différentes modalités d’éclairage : lumière blanche et monochromatique en combinaison avec la

polarisation sont utilisées pour construire les cubes de données. Les approches supervisées et

non-supervisées sont étudiées pour détecter les défauts présents. Les meilleures performances

sont obtenues avec les éclairages multi-longueurs d’ondes dans le visible et le proche infrarouge,

et la détection du défaut en utilisant l’angle spectral, avec le spectre moyen comme référence.

La deuxième application concerne l’utilisation de l’imagerie thermographique pour l’ins-

pection de composants métalliques nucléaires afin de détecter des défauts de surface et sub-

surface. Les cubes de données multi-temporels sont construits à partir des images thermiques

enregistrées durant les phases de chauffage et de refroidissement de l’échantillon inspecté, en

utilisant la thermographie inductive avec deux techniques, la thermographie de lock-in et la


thermographie pulsée. Une méthode basée sur la variation d’énergie ou du rapport signal à

bruit est proposée pour estimer la dimension virtuelle des espaces de données, qui sont ré-

duites en utilisant la décomposition en valeurs singulières (SVD) et l’algorithme de fraction

de bruit maximal (MNF). Des algorithmes de détection d’anomalies sont appliqués sur les

cubes réduits afin de détecter les défauts. Puis, une approche monodimensionnelle est pro-

posée, basée sur l’utilisation du coefficient d’aplatissement en tant que critère de contraste

pour sélectionner la composante principale (CP), en tant que meilleure carte de contraste,

parmi les premières composantes obtenues après réduction du cube de données en utilisant

l’algorithme de l’analyse en composantes principales (ACP). La méthode proposée ACP-1CP

donne de bonnes performances avec des données non-bruitées et homogènes, cependant la

SVD avec les algorithmes de détection d’anomalies est très robuste aux perturbations telles

que fond inhomogène et donne des résultats plus constants.

Finalement, une approche, basée sur les techniques d’analyse de franges et la lumière

structurée dans le cas d’enregistrements déflectométriques est présentée, dans le but d’ins-

pecter des surfaces métalliques réfléchissantes et à forme libre. Cette approche surmonte les

inconvénients des méthodes classiques qui consistent à projeter un modèle de franges inverse

créé en se basant sur la connaissance à priori du modèle conceptuel de la surface inspectée

et nécessitant une étape précise de calibrage du système entier. Après avoir déterminé les

paramètres décrivant les modèles de franges sinusoïdaux en fonction des caractéristiques des

défauts et du système d’acquisition, l’approche proposée consiste à projeter une liste de mo-

tifs sinusoïdaux déphasés et à calculer l’image de phase des motifs enregistrés. La localisation

des défauts est basée sur la détection et l’analyse des franges dans les images de phase : si

une distorsion ou une discontinuité apparaît le long des franges, alors un défaut est désormais

présent. A la fin, une étude sur le choix des paramètres de franges est réalisée à fin d’optimiser

la détection des défauts en fonction de leurs caractéristiques.

Mots clés : Imagerie hyperspectrale, Réduction de dimension, Détection de défauts et

d’anomalie, Inspection de surfaces, Polarisation, Eclairage, Contrôle non-destructif, Thermo-

graphie pulsée, Thermographie de lock-in, Thermographie inductive, Déflectométrie, Lumière

structurée, Projection de franges, Décalage de phase, Analyse des franges.


Introduction

Quality control of industrial pieces is a challenging domain. Different kinds and shapes of metallic parts (steel, stainless steel, aluminum, silver, etc.) are used in several industrial areas such as automotive, aviation and nuclear, and are very often inspected during the production process for quality control purposes. Controls include product inspection, where inspectors are provided with lists and descriptions of unacceptable product defects such as cracks or surface blemishes. The objective is to fulfill the specific requirements in terms of inspection time and measurement resolution, while taking into account additional challenges such as handling large surfaces or complex geometries. The inspection process consists of determining whether a product deviates from a given set of specifications.

Human operators have several disadvantages compared to intelligent machines, in terms of subjectivity, productivity, consistency, repeatability, etc. Automated quality control systems offer various advantages, including the ability to work in hazardous environments 24 hours a day and, for some tasks, to perform faster measurements with higher accuracy and consistency than humans.

Hyperspectral imagery (HSI) algorithms were originally developed for the detection and identification of targets from hyperspectral sensors. The detection and identification are based on the spectral signature of the target. Many supervised and unsupervised HSI algorithms have been proposed in the literature. HSI sensors provide image data containing spatial and spectral information, and this information is used to address such detection tasks. The basic idea behind HSI stems from the fact that, for any given material, the amount of radiation that is reflected, absorbed, or emitted varies with wavelength. HSI sensors measure the radiance of the materials within each pixel area in a very large number of contiguous spectral wavelength bands. HSI has been widely used for the detection of targets in military and civil applications.

The objective of this thesis is to evaluate the interest of applying HSI methods in an industrial context using multimodal imagery. Multivariate imagery, thermographic imagery and fringe projection are the applications proposed in this thesis to inspect metallic surfaces. Supervised and unsupervised approaches are considered.

This thesis is based on a collaboration between the Fraunhofer IIS in Fürth, Germany; the Fresnel Institute in Marseille, France; and AREVA in Chalon, France. The thesis was carried out at the Fraunhofer Institute, which is mainly concerned with the inspection of flat and free-form surfaces by means of optical imagery and fringe projection techniques. Car wheels are the parts inspected in the context of this thesis, and an industrial inspection system based on the stripe projection technique is available in the laboratory for recordings and experiments. AREVA is concerned with the inspection of nuclear metallic parts by means of infrared thermographic imagery, and an experimental setup is provided to acquire thermal images with different thermography techniques. Within the Fresnel Institute, some researchers work on hyperspectral signal processing. During this thesis, I was a member of the MAP and HIPE groups, which focus their research respectively on surface analysis by power photonics and on hyperfrequencies.

Overview

The aim of the presented work is to propose different approaches for the detection of

different types of defects in metal components. The originality of these approaches lies in the

combination of HSI methods with different imaging modalities.

In chapter 1 we present the HSI algorithms used in this thesis, starting with an overview of the HSI technique. We explain how hyperspectral images are obtained and represented, and what the basic elements of an imaging spectrometer are. We recall the "Hughes phenomenon", related to the high dimensionality of hyperspectral images, and we explain the need for a data space reduction step. We present the main dimensionality reduction algorithms and subspace dimension estimation methods dedicated to HSI images. We describe in detail the detection methods used in multi- and hyper-spectral imaging, including supervised and unsupervised approaches.

Chapter 2 presents a first application of HSI algorithms to multispectral imagery. The recorded data are formed from multivariate images obtained with different lighting modalities. White and monochromatic light sources are used and combined with polarization to illuminate flat metallic surfaces containing dent and scratch defects. The inspection task consists of detecting defects within the inspected surfaces and evaluating the influence of the modalities used on the false alarm rates. We use supervised and unsupervised approaches.

Chapter 3 presents a new approach based on the application of HSI algorithms to thermographic images. We recall the state of the art of thermal energy, infrared systems and infrared thermography. We present different existing methods dedicated to defect detection in thermographic imagery. We describe the experimental setup used. We compare different data space reduction techniques and we present a one-dimensional reduction version.

Chapter 4 presents a new approach for the inspection of free-form surfaces with fringe projection techniques. We start by presenting the state of the art of free-form surface inspection methods in structured light, and we describe the proposed approach based on the phase-shifting technique and fringe analysis.


Chapter 1: Hyperspectral imagery

1.1 Introduction

Hyperspectral sensors allow the simultaneous acquisition of a large amount of information, in particular the quasi-continuous acquisition of spectral data for all the pixels of the scene, from the ultraviolet to the near infrared, which corresponds to several hundred images associated with different spectral bands of the same scene. This very large amount of data inevitably increases the processing complexity and makes traditional image processing methods less effective. In order to exploit this information, it was necessary to develop new methods of hyperspectral image processing. Multi-linear algebra tools are particularly well suited for analyzing this type of multidimensional data. Indeed, numerically, data that depend on several parameters can be modeled as a multi-way array, where each mode corresponds to one parameter. For example, a hyperspectral image can be represented by an array with three modes: two modes for the spatial information (indexes of the spatial coordinates of the pixels) and one mode for the spectral information (index of the spectral band). Each mode of the array is associated with a sampled physical quantity. These multidimensional arrays allow a unique and global representation of the spectral data.

The treatment of these arrays requires the use of multi-linear algebra tools. Several methods based on linear algebra tools have been proposed in the literature [1], [2] for the analysis of hyperspectral images. The most common applications are:

– Detection, for searching for and identifying specific objects.

– Classification, for characterizing and grouping objects according to a criterion.

– Unmixing, for estimating the spectra and/or the abundance of each material in the scene.


All these applications usually require preprocessing of the hyperspectral data in order to enhance their performance, such as spectral dimension reduction or denoising of the data, which are corrupted by noise from various phenomena such as atmospheric disturbance, sensor noise, etc.

The remainder of this chapter is organized as follows. Section 1.2 recalls some basics of hyperspectral imagery. The tensor representation of hyperspectral images is presented in section 1.3. Section 1.4 describes the denoising and dimensionality reduction algorithms most commonly applied to hyperspectral images. Particular attention is given to the singular value decomposition (SVD) and the maximum noise fraction (MNF) algorithms. Some methods for estimating the signal subspace are also presented in this section. Section 1.5 reports the multivariate imaging detection methods used in this thesis. Target detection, spectral distance measures and anomaly detection algorithms are detailed. Section 1.6 concludes this chapter.

1.2 Hyperspectral imagery

Hyperspectral Imaging (HSI) is a technique that combines digital imaging with spectroscopy. HSI collects and processes information from across the electromagnetic spectrum as a function of the wavelength, and produces hyperspectral images with instruments called imaging spectrometers. Spectral imagers form and analyze the spectral radiances at each pixel in the scene, where the objects are characterized by their spatial shape and their spectral radiance. The spectral radiance of an object is its reflected light intensity as a function of wavelength, which is indicative of the material forming the object. Although the human eye can distinguish spatial shape very well, it does not discern spectral radiance characteristics nearly as precisely. Instead, the human eye perceives only a dominant portion of the spectral radiance, which is perceived as the color of the object [3].

The spectral reflectance of an object is obtained by the ratio between the radiance and the scene illumination. In principle, most objects can be identified by their spectral reflectance alone. The choice of the relevant spectral bands is crucial for the analysis. Indeed, the deflections of the spectral curves mark the wavelength ranges for which the material selectively absorbs the incident energy. These features are commonly called absorption bands. The overall shape of a spectral curve and the position and strength of absorption bands can in many cases be used to identify and discriminate different materials. For example, vegetation has higher reflectance in the near infrared range and lower reflectance of red light than soils. However, because the human eye is a relatively unsophisticated observer, it is not always possible to distinguish objects based upon their perceived colors.

Multispectral remote sensors such as the Landsat Thematic Mapper and SPOT XS produce images with a few relatively broad wavelength bands. Hyperspectral remote sensors, on the other hand, collect image data simultaneously in hundreds or thousands of narrow, adjacent spectral bands. These measurements make it possible to derive a continuous spectrum for each image cell, as shown in Figure 1.1.

Hyperspectral images contain a wealth of data, but interpreting them requires an understanding of exactly what properties of materials we are trying to measure, and how they relate to the measurements actually made by the hyperspectral sensor.

Figure 1.1 — A plot of the brightness values versus wavelength shows the continuous spectrum for the image cell, which can be used to identify surface materials.

The basic elements of an imaging spectrometer are shown in Figure 1.2. The development of these complex sensors has involved the convergence of two related but distinct technologies: spectroscopy and the remote imaging of Earth and planetary surfaces [4].

Spectroscopy is the study of light that is emitted by or reflected from materials and of its variation in energy with wavelength. As applied to the field of optical remote sensing, spectroscopy deals with the spectrum of sunlight that is diffusely reflected (scattered) by materials at the Earth's surface. Instruments called spectrometers (or spectroradiometers) are used to make ground-based or laboratory measurements of the light reflected from a test material. An optical dispersing element such as a grating or prism in the spectrometer splits this light into many narrow, adjacent wavelength bands, and the energy in each band is measured by a separate detector. By using hundreds or even thousands of detectors, spectrometers can make spectral measurements of bands as narrow as 0.01 µm over a wide wavelength range, typically at least 0.4 to 2.4 µm (visible through middle infrared wavelength ranges) in remote-sensing applications. For other applications, such as astronomy, different wavelength ranges can be used (e.g. UV, visible, infrared and radio wavelength ranges).

Remote imagers are designed to focus and measure the light reflected from many adjacent

areas on the Earth’s surface. Recent advances have allowed the design of imagers that have

spectral ranges and resolutions comparable to ground-based spectrometers.

Figure 1.2 — Schematic diagram of the basic elements of an imaging spectrometer. Some sensors use multiple detector arrays to measure hundreds of narrow wavelength (λ) bands.

1.2.1 Spectral reflectance

In reflected-light spectroscopy, the fundamental property that we want to obtain is spectral reflectance: the ratio of reflected energy to incident energy as a function of wavelength. Reflectance varies with wavelength for most materials because energy at certain wavelengths is scattered or absorbed to different degrees. These reflectance variations are clear when we compare spectral reflectance curves (plots of reflectance versus wavelength) for different materials, as in Figure 1.3. The overall shape of a spectral curve and the position and strength of absorption bands can in many cases be used to identify and discriminate different materials. For example, vegetation has higher reflectance in the near infrared range and lower reflectance in the red range than soils.

Figure 1.3 — Representative spectral reflectance curves for several common Earth surface materials over the visible light to reflected infrared spectral range.
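As a simple illustration of the reflectance ratio described above, the following sketch computes a per-band reflectance cube from a raw radiance measurement, assuming a white-reference and a dark measurement are available; the array names and the flat-field style correction are illustrative assumptions, not a procedure taken from this thesis.

```python
import numpy as np

def spectral_reflectance(raw, white, dark):
    """Per-band reflectance from measured radiance (illustrative sketch).

    raw, white, dark: arrays of shape (rows, cols, bands) holding the raw
    measurement, a white-reference measurement of the scene illumination,
    and a dark (shutter-closed) measurement, respectively.
    """
    # Reflectance is the ratio of reflected to incident energy per wavelength;
    # the dark frame removes the sensor offset before taking the ratio.
    num = raw.astype(np.float64) - dark
    den = np.clip(white.astype(np.float64) - dark, 1e-9, None)
    return np.clip(num / den, 0.0, 1.0)
```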

1.2.2 Spectral libraries

Several libraries of reflectance spectra of natural and man-made materials are available for public use. These libraries provide a source of reference spectra that can aid the interpretation of hyperspectral and multispectral images.

– ASTER Spectral Library: this library has been made available by NASA as part of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imaging instrument program. It includes spectral compilations from NASA's Jet Propulsion Laboratory, Johns Hopkins University, and the United States Geological Survey (Reston). The ASTER spectral library currently contains nearly 2000 spectra, including minerals, rocks, soils, man-made materials, water, and snow. Many of the spectra cover the entire wavelength region from 0.4 to 14 µm. The library is accessible interactively via the World Wide Web at http://speclib.jpl.nasa.gov.

– USGS Spectral Library: the United States Geological Survey Spectroscopy Lab in Denver, Colorado has compiled a library of about 500 reflectance spectra of minerals and a few plants over the wavelength range from 0.2 to 3.0 µm. This library is accessible online at http://speclab.cr.usgs.gov/spectral.lib04/spectral-lib04.html.


1.2.3 Hyperspectral images

Color images have three layers (or bands), each carrying different information. It is possible to make even more layers by using smaller wavelength bands, say 20 nm wide between 400 nm and 800 nm. Then each pixel would be a spectrum of 21 wavelength bands. This is a multivariate image. The 21 wavelength bands in the example are called the image variables, and in general there are K variables. An I × J image in K variables would form a three-way array of size I × J × K.

Figure 1.4 — (a) An I × J image in K variables is an I × J × K array of data. (b) The I × J × K image can be presented as K slices, where each slice is a grey-value image. (c) The I × J × K image can be presented as an image of vectors. In special cases, the vectors can be shown and interpreted as spectra.

With more than three wavelength bands, simple color representation is not possible, but

some artificial color images may be made by combining any three bands. In that case the

colors are not real and are called pseudo-colors.
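For illustration, a pseudo-color composite can be sketched as below; the band indices are arbitrary placeholders and the per-channel stretching is just one common display convention, not a recommendation from this thesis.

```python
import numpy as np

def pseudo_color(cube, bands=(30, 20, 10)):
    """Build a pseudo-color image from three bands of an I x J x K cube.

    The band indices are arbitrary here; the resulting colors are not real,
    which is why such composites are called pseudo-colors.
    """
    rgb = cube[:, :, list(bands)].astype(np.float64)
    # Stretch each channel independently to [0, 1] for display.
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= np.clip(rgb.max(axis=(0, 1), keepdims=True), 1e-9, None)
    return rgb
```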

Many imaging techniques make it possible to create multivariate images, and their number is constantly growing. The number of variables available is also constantly growing. From about 100 variables upwards, the name hyperspectral images was coined in the field of satellite and airborne imaging, but hyperspectral imaging is also available in laboratories and hospitals. Images as in Figure 1.4 with K = 2 or more are multivariate images. Multivariate images can also be mixed mode, e.g. a UV wavelength image, a near infrared (NIR) image and a polarization image in white light. In this case, the vector of three variables is not really a spectrum.

So, hyperspectral images are characterized by:

– many wavelength bands (or other variable bands), often more than 100;

– the possibility of expressing a pixel as a spectrum, with spectral interpretation, spectral transformation, spectral data analysis, etc.

Many principles from physics can be used to generate multivariate and hyperspectral images [5]. Other variables, e.g. time, can be used to generate sequences of images and construct multivariate images. Examples of making NIR optical images are used to illustrate some general principles.

A classical spectrophotometer consists of a light source, a monochromator or filter system

to disperse the light into wavelength bands, a sample presentation unit and a detection system

including both a detector and digitalization/storage hardware and software. The most common

sources for broad spectral NIR radiation are tungsten halogen or xenon gas plasma lamps.

Light emitting diodes and tunable lasers may also be used for illumination with less broad

wavelength bands. In this case, more diodes or more lasers are needed to cover the whole NIR

spectral range (780−2500 nm). For broad spectral sources, selection of wavelength bands can

be based on specific bandpass filters based on simple interference filters, liquid crystal tunable

filters (LCTFs), or acousto-optic tunable filters (AOTFs), or the spectral energy may be

dispersed by a grating device or a prism-grating-prism (PGP) filter. Scanning interferometers

can also be used to acquire NIR spectra from a single spot.

A spectrometer camera designed for hyperspectral imaging has the hardware components

listed above for acquisition of spectral information plus additional hardware necessary for the

acquisition of spatial information. The spatial information comes from measurement directly

through the spectrometer optics or by controlled positioning of the sample, or by a combination

of both. Three basic camera configurations are used based on the type of spatial information

acquired; they are called point scan, line scan or plane scan.

1.2.4 Advantages and disadvantages of HSI

The primary advantage of hyperspectral imaging is that, because an entire spectrum is acquired at each point, the operator holds all the available information from the dataset to be mined. HSI can also take advantage of the spatial relationships among the different spectra in a neighborhood, allowing more elaborate spectral-spatial models for a more accurate segmentation and classification of the image.

The primary disadvantages are cost and complexity. Fast computers, sensitive detectors, and large data storage capacities are needed for analyzing hyperspectral data. Moreover, in high dimensionality, it becomes difficult to perform accurate parameter estimation, for example in the Bayesian context, and distance measures lose their efficiency in distinguishing between different vectors (see section 1.4).

1.3 Representation of hyperspectral images

By the nature of hyperspectral data, in which each pixel is a vector, the data are typically represented by a hyperspectral cube. Because of this cubic representation of the data, it is natural to consider the use of tensors of order 3 as a mathematical model for the analysis of hyperspectral images. Typically, the spatial dimensions are associated with the 1-mode and 2-mode of the tensor, respectively, and the spectral dimension is associated with the 3-mode of the tensor, see Figure 1.5.

Figure 1.5 — illustration of the tensor representation of a hyperspectral image.

The folding matrix in the spectral mode (3-mode) of a tensor is of major interest for the study of the data compared to the folding matrices of the spatial modes. Indeed, the folding matrix in the spectral mode allows a concrete physical representation of the spectral data, where each column of the folding matrix represents the spectrum of a pixel, unlike the two folding matrices of the spatial modes, which are more difficult to interpret, see Figure 1.6.

Indeed, the 3-mode folding matrix is currently used for spectral analysis. However, introducing spatial information often allows the performance to be increased [6].


Figure 1.6 — Folding matrix of the spectral mode, corresponding to the column matrix of all the spectral pixels of a hyperspectral image.
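A minimal sketch of this spectral-mode unfolding (and its inverse), assuming the cube is stored as a NumPy array with the spectral bands on the last axis; the function names are illustrative only.

```python
import numpy as np

def unfold_spectral_mode(cube):
    """3-mode (spectral) unfolding of an I x J x K hyperspectral cube.

    Returns a K x (I*J) matrix in which each column is the spectrum of one
    pixel, matching the folding matrix illustrated in Figure 1.6.
    """
    I, J, K = cube.shape
    # Move the spectral axis first, then flatten the two spatial axes.
    return cube.transpose(2, 0, 1).reshape(K, I * J)

def fold_spectral_mode(matrix, shape):
    """Inverse operation: rebuild the I x J x K cube from its 3-mode unfolding."""
    I, J, K = shape
    return matrix.reshape(K, I, J).transpose(1, 2, 0)
```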

1.4 Denoising and dimensionality reduction

In the case of HSI, the number of acquired images can reach hundreds or even exceed a thousand. A good spectral resolution and a sufficiently large bandwidth are necessary for an accurate analysis of the information. Indeed, this large number of images involves manipulating vector spaces of high dimension. In the more demanding algorithms, the large dimension of the vector spaces implies high computation time and storage requirements, and involves a decrease in the accuracy of the statistical estimation for a fixed number of samples. This phenomenon is well known in hyperspectral image processing under the names of "Hughes phenomenon" and "curse of dimensionality" [7], [8]. Most often, the acquisition step is followed by a signal space dimensionality reduction, in which a signal subspace is estimated. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity, and in data storage. This is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing.

Noise in the data is not specific to hyperspectral data but is a well-known image and signal processing problem. Indeed, during the acquisition of multivariate images by a sensor (HSI camera, IR camera or other), several phenomena disturb the acquisition process. The disturbances can be directly related to the quality of the sensor (such as electronic noise, photon noise, or aberrations of the optical system) or related to the environment in which the acquisition takes place (such as atmospheric disturbances) [9], [10], [11], [12]. Even at low power, this acquisition noise affects the efficiency of the detection and classification algorithms based on spectral identification. In order to reduce the nuisance of these phenomena for the detection and classification methods, a denoising pre-processing step is commonly used to limit their influence on the detection and classification results [13].

The denoising and spectral dimensionality reduction steps are crucial preprocessing stages for the analysis of hyperspectral images. Indeed, they directly affect the efficiency of the detection and classification methods. In recent years, numerous works have shown the interest of denoising and spectral dimensionality reduction as pre-processing steps for the classification of hyperspectral images [3], [10], [13].

1.4.1 Denoising and dimensionality reduction algorithms

In hyperspectral image processing, the spectral dimensionality reduction and denoising methods are two processes with distinct goals; however, they are based on a similar approach, which consists in using the signal subspace. Usually the denoising methods tend to separate the signal subspace from the noise subspace, while the spectral dimension reduction methods seek to estimate the signal subspace in order to work in this reduced space, of smaller dimension than the original data space. In any case, the two methods can be combined when both denoising and dimensionality reduction of the data are needed. The most popular denoising and/or dimensionality reduction methods are the singular value decomposition (SVD), the maximum noise fraction (MNF) [11] and hyperspectral signal identification by minimum error (HySime) [2].

Traditionally, SVD and MNF are used for dimension reduction, but they are also useful for visually identifying the dominant image components. HySime combines dimensionality reduction and denoising, and also automatically estimates the virtual dimension (VD) of the reduced data space.

1.4.1.1 Singular value decomposition (SVD)

The SVD of a matrix is a linear algebra method for matrix factorization, with many useful applications in signal processing and statistics. The factorization consists of finding, in each of the two spaces corresponding to the two dimensions of the matrix, an orthonormal basis in which the row and column vectors of the matrix can be expressed.

Let A ∈ R^{I_1×I_2} be a rectangular matrix; then there exists a factorization of A of the form:

A = U Σ V^T    (1.1)

where

U = [u_1, ..., u_{I_1}] ∈ R^{I_1×I_1}    (1.2)

U is an orthogonal matrix containing the left singular vectors. The vectors {u_k}_{k=1,...,I_1} ∈ R^{I_1} constitute an orthonormal basis of the space E^{(1)}, of dimension I_1, associated with the matrix, and are the eigenvectors of the symmetric matrix A A^T.

V = [v_1, ..., v_{I_2}] ∈ R^{I_2×I_2}    (1.3)

In a similar way, V is an orthogonal matrix containing the right singular vectors. The vectors {v_k}_{k=1,...,I_2} ∈ R^{I_2} constitute an orthonormal basis of the space E^{(2)}, of dimension I_2, associated with the matrix, and are the eigenvectors of the symmetric matrix A^T A.

Σ = diag(λ_1, ..., λ_k) ∈ R^{I_1×I_2}    (1.4)

Σ is a diagonal matrix containing the k ordered singular values (SVs), λ_1 ≥ ... ≥ λ_k, of the matrix A. The SVs λ_k correspond to the square roots of the eigenvalues β_k of the symmetric matrix A^T A.

The decomposition of the matrix A, Eq. (1.1), can be written as:

A = ∑_{i=1}^{I_1} λ_i u_i v_i^T = ∑_{i=1}^{I_1} β_i^{1/2} A_i    (1.5)

where β_i^{1/2} = λ_i is the i-th SV and A_i = u_i v_i^T is the corresponding proper image.

In the case of HSI, the SVD decomposes the data space into a set of k components, arranged in descending order of the corresponding SVs, where the first components contain the maximum variance and represent the most characteristic variability of the data. The spatial and spectral information are extracted from the original matrix in a compact manner. SVD is close to principal component analysis (PCA), with the difference that SVD simultaneously provides the principal components in both the row and column spaces.

The columns of matrix U are the left singular vectors, which represent the spectral variation of the data set. The columns of matrix V are the right singular vectors, which represent the spatial variation of the data set. Usually, the original data can be adequately represented with only a few components. The reduced space A_R is obtained as:

A_R = ∑_{k=1}^{K} λ_k u_k v_k^T    (1.6)

where K is the desired dimension of the reduced data space.
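As a minimal sketch (not the implementation used in this thesis), the rank-K reduction of Eq. (1.6) can be applied to a hyperspectral cube unfolded in the spectral mode, for example with NumPy:

```python
import numpy as np

def svd_reduce(cube, K):
    """Rank-K approximation of a hyperspectral cube via the SVD, Eq. (1.6).

    The cube is unfolded in the spectral mode (each column is a pixel
    spectrum), truncated to its K dominant singular components, and folded
    back to its original I x J x L shape.
    """
    I, J, L = cube.shape
    A = cube.transpose(2, 0, 1).reshape(L, I * J)       # spectral-mode unfolding
    U, s, Vt = np.linalg.svd(A, full_matrices=False)    # A = U diag(s) V^T
    A_r = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]         # sum of the K first rank-1 terms
    return A_r.reshape(L, I, J).transpose(1, 2, 0)
```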

1.4.1.2 Maximum noise fraction (MNF)

Another popular subspace inference tool, which takes into account the noise statistics, is

the MNF transformation, where new components ordered by image quality are produced.

MNF finds non-orthogonal directions, minimizing the noise fraction (or, equivalently, maxi-

mizing the signal to noise ratio, SNR). It has been shown [11] that MNF is equivalent to

principal components when the noise variance is the same in all bands and that it reduces

to a multiple linear regression when the noise is in one band only. Noise is removed from

multispectral data by transforming to the MNF space, smoothing or rejecting the most noisy

components, and then retransforming to the original space. The MNF transformation requires

knowledge of both the signal and noise covariance matrices. Except when the noise is in one

band only, the noise covariance matrix needs to be estimated. Assume that the observed spectral vectors are given by

y = x + n     (1.7)

where x and n are L-dimensional vectors standing for the signal and the additive noise, respectively.

Assuming that the noise correlation matrix R_n, or an estimate of it, is known, this procedure consists of finding non-orthogonal directions v_i minimizing the ratio

\frac{v_i^T R_n v_i}{v_i^T R_x v_i}     (1.8)

with respect to v_i, where R_x is the observed data correlation matrix. This problem is known as the generalized Rayleigh quotient, and its solution is given by the left-hand eigenvectors v_i of R_n R_x^{-1}, for i = 1, · · · , L [14].

For independent and identically distributed (i.i.d.) noise, we have

R_n = σ_n^2 I_L     (1.9)

and

R_x^{-1} = E (Σ + σ_n^2 I_L)^{-1} E^T     (1.10)

where I_L is the L × L identity matrix and E and Σ are, respectively, the eigenvector and eigenvalue matrices of the signal correlation matrix R_x. Therefore, MNF and SVD yield the same subspace estimate. However, if the noise is not i.i.d., the directions found by the MNF transform maximize the SNR instead of the energy.
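The following Python sketch illustrates the MNF directions of Eq. (1.8) as a generalized eigenvalue problem; the shift-difference noise estimate is a common heuristic and an assumption here, not necessarily the estimator used in this work, and the toy data are illustrative only.

```python
import numpy as np
from scipy.linalg import eigh

def mnf_transform(cube, K):
    """Maximum noise fraction transform -- a minimal sketch.

    The noise is estimated with the usual shift-difference heuristic (an assumption);
    the directions minimise the noise fraction of Eq. (1.8).
    """
    Nx, Ny, N = cube.shape
    X = cube.reshape(-1, N)
    X = X - X.mean(axis=0)
    # crude noise estimate: difference between horizontally adjacent pixels
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, N) / np.sqrt(2.0)
    Rn = np.cov(noise, rowvar=False)
    Rx = np.cov(X, rowvar=False)
    # generalized eigenproblem Rn v = w Rx v ; small w means small noise fraction
    w, V = eigh(Rn, Rx)
    components = X @ V[:, :K]          # the K best-SNR components
    return components.reshape(Nx, Ny, K), w

cube = np.random.rand(40, 40, 10)
mnf_comps, noise_fractions = mnf_transform(cube, K=4)
```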

1.4.2 Estimation of the subspace signal dimension

Although the presented methods rely on different assumptions, such as the nature of the noise for the denoising methods or the mixture model of the spectra for the spectral reduction methods, all of these preprocessing methods estimate the signal subspace as part of their process. The identification of this subspace enables the representation of the spectral vectors in a low-dimensional subspace. Generally, SVD and MNF are the techniques most often used to reduce the dimensionality of hyperspectral data. The former seeks the projection that best represents the data in the least-squares sense, whereas the latter seeks the projection that minimizes the ratio of noise power to signal power. In addition, the MNF method needs an estimate of the noise covariance.

1.4.2.1 AIC and MDL

Reference techniques involve minimizing one of two criteria: the minimum description length (MDL), based on information theory [15], [16], or the Akaike information criterion (AIC), based on a Bayesian approach [17], [18]; both have also been used to infer the hyperspectral signal subspace. Both methods use the SVD together with an information criterion to determine the singular vectors associated with the dominant eigenvalues for the estimation of the signal subspace. The basic assumption is that the useful signal is corrupted by Gaussian white noise with variance σ^2.

Let {β_1, · · · , β_I} be the I eigenvalues of the covariance matrix of A, whose column vectors are the observations, sorted in decreasing order:

β_1 ≥ β_2 ≥ · · · ≥ β_i ≥ β_{i+1} ≥ · · · ≥ β_I     (1.11)

with β_i = λ_i^2.

Thus, to estimate the rank J, the data are unfolded in the spectral mode and the eigenvalues of their covariance matrix are arranged in decreasing order. The index of the eigenvalue that minimizes the considered criterion corresponds to the rank of the subspace. These criteria are given by:

AIC(k) = -2M \sum_{k'=k+1}^{I} log β_{k'} + M(I-k) log\left( \frac{1}{I-k} \sum_{k'=k+1}^{I} log β_{k'} \right) + 2k(2I-k)     (1.12)

MDL(k) = -2M \sum_{k'=k+1}^{I} log β_{k'} + M(I-k) log\left( \frac{1}{I-k} \sum_{k'=k+1}^{I} log β_{k'} \right) + \frac{1}{2} k(2I-k) log M     (1.13)

where M is the number of columns in A. However, in the case of hyperspectral images, the noise is not necessarily white and Gaussian; the AIC and MDL criteria are therefore not always suitable for estimating the rank of such signals.
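The following Python sketch evaluates the criteria of Eqs. (1.12) and (1.13) as written above and returns the minimizing ranks; it assumes that the covariance eigenvalues are strictly greater than one (so that the inner logarithm is defined), and the data layout and toy example are assumptions made for illustration only.

```python
import numpy as np

def estimate_rank_aic_mdl(A):
    """Rank estimation with the AIC / MDL criteria of Eqs. (1.12)-(1.13) -- a sketch.

    A : (I, M) matrix whose columns are the observed spectra (assumed layout).
    The eigenvalues of the covariance matrix are assumed to be larger than one.
    """
    I, M = A.shape
    cov = np.cov(A)                                    # I x I covariance, spectral mode
    beta = np.sort(np.linalg.eigvalsh(cov))[::-1]      # eigenvalues in decreasing order
    aic, mdl = [], []
    for k in range(1, I):
        tail = np.log(beta[k:])                        # log of eigenvalues beta_{k+1..I}
        common = -2 * M * tail.sum() + M * (I - k) * np.log(tail.mean())
        aic.append(common + 2 * k * (2 * I - k))
        mdl.append(common + 0.5 * k * (2 * I - k) * np.log(M))
    # the minimizing index gives the estimated rank
    return 1 + int(np.argmin(aic)), 1 + int(np.argmin(mdl))

A = 100.0 * np.random.rand(20, 500)    # 20 bands, 500 pixels (toy data)
k_aic, k_mdl = estimate_rank_aic_mdl(A)
```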

1.4.2.2 HySime

The dimensionality reduction and denoising tool hyperspectral signal identification by minimum error (HySime) [2] is an algorithm used to reduce the dimensionality of HSI data and to estimate its signal subspace. HySime is an eigen-decomposition-based method that does not depend on any tuning parameters.

HySime starts by estimating the signal and the noise correlation matrices using multiple regressions. A subset of eigenvectors of the signal correlation matrix is then used to represent

the signal subspace. This subspace is inferred by minimizing the sum of the projection error

power with the noise power, which are, respectively, decreasing and increasing functions of the

subspace dimension. Therefore, if the subspace dimension is overestimated, the noise power

term is dominant, whereas if the subspace dimension is underestimated, the projection error

power term is dominant. The overall scheme is computationally efficient, unsupervised, and fully automatic in the sense that it does not depend on any tuning parameters.
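The following Python sketch illustrates, under simplifying assumptions, the idea described above: the noise is estimated by regressing each band on the others, and the subspace dimension is chosen by minimizing the sum of the projection error power and the noise power. It is only a rough illustration of the principle, not the exact HySime estimator of [2]; all variable names and the toy data are assumptions.

```python
import numpy as np

def hysime_sketch(Y):
    """Simplified HySime-style subspace-dimension estimate -- a conceptual sketch only.

    Y : (M, N) matrix, one observed spectrum of length N per row (assumed layout).
    """
    M, N = Y.shape
    # noise estimation by multiple regression: predict each band from the others
    noise = np.empty_like(Y)
    for b in range(N):
        others = np.delete(Y, b, axis=1)
        coef, *_ = np.linalg.lstsq(others, Y[:, b], rcond=None)
        noise[:, b] = Y[:, b] - others @ coef
    Rn = noise.T @ noise / M
    Ry = Y.T @ Y / M
    Rx = Ry - Rn                              # signal correlation estimate
    w, E = np.linalg.eigh(Rx)
    E = E[:, ::-1]                            # eigenvectors by decreasing signal power
    sig_pow = np.array([e @ Rx @ e for e in E.T])
    noi_pow = np.array([e @ Rn @ e for e in E.T])
    # cost(k) = projection error power (signal left out) + noise power (kept in)
    cost = [sig_pow[k:].sum() + noi_pow[:k].sum() for k in range(N + 1)]
    return int(np.argmin(cost))

Y = np.random.rand(2000, 15) @ np.random.rand(15, 30) + 0.01 * np.random.randn(2000, 30)
print(hysime_sketch(Y))    # should be close to the simulated signal rank
```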

1.5 Detection methods in multivariate imaging

In target detection applications, the main objective is to search the pixels of an HSI data

cube for the presence of a specific material (target). Conceptually, at least at a theoretical

level, target detection can be viewed as a binary hypothesis-testing problem, where each

pixel is assigned a target or non-target label. We use the term "background" to refer to all

non-target pixels of a scene.

We start from the idea that the observed data cube X is considered as a set of M = Nx × Ny vectors in a multidimensional space, where the number of dimensions equals the number of acquired images, N. Most often, the task of a detection algorithm is to decide, by means

of a statistical hypothesis test, whether a target of interest s is present or not in a pixel-under-

test with observed pixel vector x [19]. Generally, the process of target detection involves two

steps :

– First, a contrast function (also called the decision statistic) D is constructed from the data, the assumptions, and possibly from prior knowledge. This contrast function is generally obtained with the likelihood ratio test (LRT) or the generalized likelihood ratio test (GLRT). Applied to each pixel of the data, y = D(x), the contrast function yields a grayscale detection map. The gray level of a pixel is related to the probability that it contains a target.

– Then, the values of the detection map are compared to a threshold η to obtain a binary detection mask, which indicates the presence or absence of a target in each pixel. The comparison of the detection map to the threshold η is such that:
– If D(x) < η, we decide that there is no target,
– If D(x) > η, we decide that a target is present,
which will be written in the following as:

D(x) \overset{\text{target}}{\underset{\text{no target}}{\gtrless}} η     (1.14)

We define two hypotheses, H0 in the case where there is no target and H1 in the case where

a target is present. Depending on the decision given by the detection test, four situations are

considered :

❶ There is no target (H0), and the detection test indicates that there is no target,

❷ There is no target (H0), and the detection test indicates that there is a target,

❸ A target is present (H1), and the detection test indicates that a target is present,

❹ A target is present (H1), and the detection test indicates that there is no target.

In order to study the performance of a detection test, we are interested in only two of these situations (the other two can be inferred from them), which define the probability of detection (PD) and the probability of false alarm (PFA).

– The PFA is the probability of detecting a target s when no target is present, therefore under the hypothesis H0 (situation 2):

PFA(s) = P(D(x) > η | H0) = \int_{η}^{+∞} P_0(D(x)) dx     (1.15)

where P_0(D(x)) is the probability density of the random variable D(x) under the hypothesis H0.
– The PD is the probability of detecting a target when the target is actually present, therefore under the hypothesis H1 (situation 3):

PD(s) = P(D(x) > η | H1) = \int_{η}^{+∞} P_1(D(x)) dx     (1.16)

where P_1(D(x)) is the probability density of the random variable D(x) under the hypothesis H1.

Figure 1.7 — example of the probability density of the statistic D(x) under the hypotheses (a) H0 and (b) H1. The areas of the hatched surfaces correspond to the values of (a) the PFA and (b) the PD.

Figure 1.7 plots an example of probability density of the statistic D(x) under both hy-

potheses H0 (Figure 1.7a) and H1 (Figure 1.7b). The value of the threshold is noted on the

two curves. The probability that D(x) > η is equal to the area of the hatched surface; it corresponds to the PFA in Figure 1.7a and to the PD in Figure 1.7b.

A practical question of paramount importance for a detection algorithm user is how to

set the threshold to keep the number of detection errors small. If it is possible to define the

distribution of D(x), the threshold can be fixed theoretically according to a fixed false alarm

rate.

Indeed, there is always a compromise between choosing a low threshold to increase the

probability of (target) detection PD, and a high threshold to keep the PFA low. For any

given detector, the trade-off between PD and PFA is described by the receiver operating

characteristic (ROC) curves, which plot PD(η) versus PFA(η) as a function of threshold η.

The ROC curves are usually used to characterize the performance of a detection algorithm

on a data set. These are parametric curves of the false alarm and the detection rates, obtained

from the different values of the threshold of the detection map. For different threshold values,

experimental PD and PFA are obtained as :

PD = \frac{\text{number of target pixels in the map that are greater than } η}{\text{total number of target pixels}}

PFA = \frac{\text{number of background pixels in the map that are greater than } η}{\text{total number of background pixels}}     (1.17)
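As an illustration, the following Python/NumPy sketch computes the empirical PD and PFA of Eq. (1.17) for a grid of thresholds, from which a ROC curve can be plotted; the normalization step, variable names and toy data are assumptions made for the example.

```python
import numpy as np

def empirical_roc(det_map, truth, n_thresholds=1000):
    """Empirical PD / PFA of Eq. (1.17) over a grid of thresholds -- a minimal sketch.

    det_map : detection map (any shape)
    truth   : boolean array of the same shape, True where a target pixel lies
    """
    d = (det_map - det_map.min()) / (det_map.max() - det_map.min() + 1e-12)  # to [0, 1]
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    pd, pfa = [], []
    for eta in thresholds:
        detected = d > eta
        pd.append(np.logical_and(detected, truth).sum() / truth.sum())
        pfa.append(np.logical_and(detected, ~truth).sum() / (~truth).sum())
    return np.array(pfa), np.array(pd), thresholds

# toy usage: target pixels have a higher mean response than the background
truth = np.zeros((64, 64), dtype=bool); truth[30:34, 30:34] = True
det_map = np.random.rand(64, 64) + 0.8 * truth
pfa, pd, eta = empirical_roc(det_map, truth)
```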

In the literature, many algorithms for detection and classification are proposed. We present

different approaches generally considered to detect targets in hyperspectral and multispectral

images. Taking into consideration spectral variability and receiver noise, the observations


provided by the sensor can be modeled, for the purpose of theoretical analysis, as random

vectors with specific probability distributions.

1.5.1 Supervised detection

If the goal is to detect targets characterized by a known spectrum, this spectrum can be used as a priori knowledge to build the contrast function D. In this case, a supervised target detection approach is considered.

1.5.1.1 Target detection

In the literature, the reference supervised detectors are the adaptive matched filter (AMF)

and the adaptive cosine / coherence estimator (ACE), proposed in [20], used as detectors of

constant false alarm rate (CFAR) in [20] and applied to HSI in [6]. These detectors consider

the following assumptions:

H0 : target absent
H1 : target present     (1.18)

and are based on testing the GLRT of these assumptions. Given an observed pixel, x, the

GLRT is given by the ratio of the conditional probability density functions :

Λ(x) ≜ \frac{p(x, \hat{θ}_1 | H_1)}{p(x, \hat{θ}_0 | H_0)} \overset{H_1}{\underset{H_0}{\gtrless}} η     (1.19)

where \hat{θ}_0 and \hat{θ}_1 are estimates of the parameters of the probability densities associated with the hypotheses H0 and H1, respectively. The function Λ(·) can be seen as the contrast function introduced above, and η corresponds to the decision threshold that yields a binary detection mask, i.e. a choice between H0 and H1 for each pixel. If Λ(x) is greater than

the threshold η, the "target present" hypothesis is accepted ; otherwise, the "target absent"

hypothesis is considered.

Any systematic procedure to determine ROC curves or the threshold requires specifying

the distribution of the observed spectra x under each of the two hypotheses.

Since statistical decision procedures, based on normal probability models, are simple and


often lead to good performance, we model the target and background spectra as multivariate

normal vectors. A random vector x follows a multivariate normal distribution with mean vector µ ≜ E(x) and covariance matrix Γ ≜ E[(x − µ)(x − µ)^T], denoted by x ∼ N(µ, Γ), if its probability density function is given by

p(x) = \frac{1}{(2π)^{N/2} |Γ|^{1/2}} e^{-\frac{1}{2}(x−µ)^T Γ^{-1} (x−µ)}     (1.20)

where |Γ| represents the determinant of the matrix Γ.

Consider the detection problem specified by the following hypotheses :

H0 : x ∼ N(µ_b, Γ_b), i.e. x = b, target absent
H1 : x ∼ N(µ_t, Γ_b), i.e. x = b + s, target present     (1.21)

where µb is the average spectrum of the background, s is the spectrum of the intended target,

Γb is the covariance matrix assumed common to both classes (target and background). In

practice, the spectrum of the target is extracted from spectral libraries (supervised detection).

The matched filter detector requires the mean vector and the common covariance matrix

of the target and background distributions. Furthermore, the resulting detector is optimum

(in the Bayes or Neyman-Pearson sense) only when the target and background classes follow

multivariate normal distributions with the same covariance matrix, an unlikely situation for

real-world HSI data. In practical applications, these quantities are unavailable and have to be

estimated from the available data. Under the assumption of low-probability targets, we can

use the available data x(n), n = 1, 2, · · · , N , to determine the maximum likelihood estimates

\hat{µ} = \frac{1}{N} \sum_{n=1}^{N} x(n) ≜ \hat{µ}_b     (1.22)

\hat{Γ} = \frac{1}{N} \sum_{n=1}^{N} [x(n) − \hat{µ}][x(n) − \hat{µ}]^T ≜ \hat{Γ}_b     (1.23)

of the mean vector and covariance matrix of the background. Unfortunately, there is usually

not sufficient training data to accurately determine the mean and covariance of the target.

Typically, we use a target spectral signature s from a library or the mean of a small number

of known target pixels observed under the same conditions. The resulting adaptive matched

filter (AMF) is given by

D_{AMF}(x, s) = \frac{s^T \hat{Γ}_b^{-1} x}{s^T \hat{Γ}_b^{-1} s} \overset{H_1}{\underset{H_0}{\gtrless}} η_{AMF}     (1.24)

where usually the data cube mean is removed from the target and test pixel spectra.
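As an illustration of Eqs. (1.22)-(1.24), the following Python/NumPy sketch estimates the background statistics from the whole cube, removes the cube mean, and computes the AMF detection map; the array shapes, the small regularization term and the toy data are assumptions made for the example.

```python
import numpy as np

def amf_detector(cube, target):
    """Adaptive matched filter of Eq. (1.24) -- a minimal sketch.

    cube   : (Nx, Ny, N) data cube
    target : (N,) target spectrum s, e.g. taken from a library
    The background mean and covariance are estimated from the whole cube,
    and the cube mean is removed from both the test and the target spectra.
    """
    Nx, Ny, N = cube.shape
    X = cube.reshape(-1, N)
    mu = X.mean(axis=0)
    Xc = X - mu
    s = target - mu
    G_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + 1e-9 * np.eye(N))  # regularised
    scores = (Xc @ G_inv @ s) / (s @ G_inv @ s)      # s^T G^-1 x / s^T G^-1 s per pixel
    return scores.reshape(Nx, Ny)

cube = np.random.rand(80, 80, 12)
det_map = amf_detector(cube, target=cube[40, 40, :])   # one pixel used as the target
```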

The "sub-pixel" version of this detector is based on the following model :

H0 : x ∼ N(µ_b, Γ_b), i.e. x = b, target absent
H1 : x ∼ N(Sa, σ^2 Γ_b), i.e. x = σb + Sa, target present     (1.25)

where S is a matrix that describes the spectral variability of the target, and a is unknown.

The covariance matrix of the target class is proportional to that of the background, and

the factor σ^2 is directly related to the filling rate of the target in the pixel. The adaptive cosine estimator (ACE) derived from this model is given by:

D_{ACE}(x, s) = \frac{x^T \hat{Γ}_b^{-1} S (S^T \hat{Γ}_b^{-1} S)^{-1} S^T \hat{Γ}_b^{-1} x}{x^T \hat{Γ}_b^{-1} x} \overset{H_1}{\underset{H_0}{\gtrless}} η_{ACE}     (1.26)

which, for a single target s, is reduced to

D_{ACE}(x, s) = \frac{‖ s^T \hat{Γ}_b^{-1} x ‖^2}{\left( s^T \hat{Γ}_b^{-1} s \right)\left( x^T \hat{Γ}_b^{-1} x \right)} \overset{H_1}{\underset{H_0}{\gtrless}} η_{ACE}     (1.27)

If multiple targets are considered, a specific detection mask is associated with each target.

AMF and ACE have the CFAR property [21], [22], and consequently the threshold can be

set according to the desired false alarm rate. The success of these detectors has led to further

studies [23] and variants [24]. In practice, however, knowing a priori the spectrum of the desired material is difficult or even unrealistic.
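A corresponding sketch of the single-target ACE of Eq. (1.27) is given below; the squared numerator follows the reduction of Eq. (1.26) for S = s, and the variable names, regularization and toy data are assumptions made for illustration.

```python
import numpy as np

def ace_detector(cube, target):
    """Single-target ACE of Eq. (1.27) -- a minimal sketch."""
    Nx, Ny, N = cube.shape
    X = cube.reshape(-1, N)
    mu = X.mean(axis=0)
    Xc = X - mu
    s = target - mu
    G_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + 1e-9 * np.eye(N))
    num = (Xc @ G_inv @ s) ** 2                                   # (s^T G^-1 x)^2 per pixel
    den = (s @ G_inv @ s) * np.einsum('ij,jk,ik->i', Xc, G_inv, Xc)
    return (num / (den + 1e-12)).reshape(Nx, Ny)

cube = np.random.rand(80, 80, 12)
ace_map = ace_detector(cube, target=cube[10, 10, :])
```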


1.5.1.2 Spectral distance measures

Spectral distance measures have been developed to differentiate between two pixel vectors and can also be used to detect defects. In that case, the threshold must be chosen empirically. Three such measures are recalled below, followed by a short illustrative sketch.

– Spectral angle mapper (SAM) has been widely used in multispectral and hyperspectral

image analysis as a spectral similarity measure for material identification. It calculates

the angle between two spectra and uses it as a measure of discrimination. The SAM measure is given by [19]

D_{SAM}(x, s) = cos^{-1}\left( \frac{x^T s}{‖x‖ ‖s‖} \right)     (1.28)

– Spectral information divergence (SID) has been suggested to model the spectrum of a

hyperspectral image pixel as a probability distribution and to measure the information

divergence between the probability distributions generated by the test and target spectra x and s, respectively. The resulting SID measure is given by [25]

D_{SID}(x, s) = \sum_{n=1}^{N} p_n log\left( \frac{p_n}{q_n} \right) + \sum_{n=1}^{N} q_n log\left( \frac{q_n}{p_n} \right)     (1.29)

where p = (p_1, p_2, . . . , p_N)^T and q = (q_1, q_2, . . . , q_N)^T are the two probability mass functions generated by normalizing to sum-to-unity the spectral vectors s = (s_1, s_2, . . . , s_N)^T and x = (x_1, x_2, . . . , x_N)^T, respectively, and N is the number of spectral bands.

– The Kendall’s τ (TAU) is a measure based on the concordance level between s and x,

a given target and test pixels, respectively. TAU expresses the probability of concor-

dance minus the probability of discordance between x and s. An empirical estimator of the Kendall's τ is given by [26]

D_{TAU}(x, s) = \frac{2}{N(N−1)} \sum_{n=1}^{N−1} \sum_{k=n+1}^{N} sign[(x_k − x_n)(s_k − s_n)]     (1.30)

with:

sign(u) = { 1 if u > 0 ; −1 if u < 0 ; 0 if u = 0 }     (1.31)


where (xi, si), i = 1 . . . N , is a sample of N observations of (x, s).
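Below is a small illustrative sketch of the three measures of Eqs. (1.28)-(1.31); the spectra are assumed non-negative for SID, and Kendall's τ is computed with SciPy (tau-b), which coincides with Eq. (1.30) when there are no ties. Function names and toy spectra are assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

def sam(x, s):
    """Spectral angle mapper, Eq. (1.28)."""
    c = x @ s / (np.linalg.norm(x) * np.linalg.norm(s))
    return np.arccos(np.clip(c, -1.0, 1.0))

def sid(x, s, eps=1e-12):
    """Spectral information divergence, Eq. (1.29); spectra assumed non-negative."""
    p = x / (x.sum() + eps)
    q = s / (s.sum() + eps)
    return np.sum(p * np.log((p + eps) / (q + eps))) + np.sum(q * np.log((q + eps) / (p + eps)))

def tau(x, s):
    """Kendall's tau concordance measure, Eq. (1.30), via SciPy (tau-b variant)."""
    return kendalltau(x, s)[0]

x = np.random.rand(12); s = np.random.rand(12)
print(sam(x, s), sid(x, s), tau(x, s))
```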

1.5.2 Unsupervised detection

A less restrictive approach is to detect targets in an unsupervised manner. The most

popular unsupervised detectors are the anomaly detection algorithms. In [27], [28], [29], the

authors introduce the term of "anomaly detection". This approach does not require any prior

knowledge of the spectral signature of the target of interest (defect). Anomalies are defined

with reference to a model of the background. Background models are developed adaptively

using reference data from either a local neighborhood of the test pixel or a large section of the

image. Anomalies are defined as observations that deviate in some way from the neighboring

clutter background or the image-wide clutter background, respectively [30]. This approach has

the CFAR property if the statistical parameters are known. The lack of a priori knowledge

about the spectra of the targets leads to a global detection mask, common to different spectral

classes of detected targets.

The problem of anomaly detection is typically formulated as a binary test between the two

hypotheses H0 and H1, similar to Eq. (1.18), but the spectrum s of the target is unknown.

The most commonly used anomaly detector is that of Reed and Xiaoli Yu (RX). In HSI, the RX algorithm [31] is often considered as the reference anomaly detector [30], [22]. Like the AMF and ACE, RX is based on the GLRT.

In the classical multivariate Gaussian statistical model assumption, RX is given by [31] :

D_{RX}(x) = \frac{(X s^T)^T (X X^T)^{-1} (X s^T)}{s s^T} \overset{H_1}{\underset{H_0}{\gtrless}} η_{RX}     (1.32)

where s is a vector defining the spatial extent of the anomaly, X is the matrix obtained by unfolding the hyperspectral cube along the spectral mode, and X is centered. The vector s is a

feature of the topology of the target, its geometric form as well as its size. In real situations,

this information is often unavailable. In that case, s is considered as a Dirac distribution

located at the test pixel x, and the algorithm is simplified and given by its commonly used

expression [27]:

D_{RX}(x) = (x − µ)^T Γ^{-1} (x − µ) \overset{H_1}{\underset{H_0}{\gtrless}} η_{RX}     (1.33)

where Γ and µ are, respectively, the estimated covariance matrix and mean vector of the reference background data. Basically, D_{RX}(x) estimates the Mahalanobis distance between the pixel vector x and the background mean, which is zero for centered data [32]. This algorithm has the CFAR property and is locally adaptive; it assumes that the background follows a local multivariate Gaussian distribution and is spatially uncorrelated from pixel to pixel, that the spatial signature of the target is unknown, and that the covariance matrix is unknown [31].
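A minimal sketch of the global RX detector of Eq. (1.33) is given below; the background statistics are estimated here from the whole cube (a local, sliding-window estimation is equally common), and the regularization term and toy data are assumptions made for the example.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector of Eq. (1.33) -- a minimal sketch.

    Returns the Mahalanobis distance of every pixel to the global background,
    estimated from the whole cube.
    """
    Nx, Ny, N = cube.shape
    X = cube.reshape(-1, N)
    mu = X.mean(axis=0)
    Xc = X - mu
    G_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + 1e-9 * np.eye(N))
    scores = np.einsum('ij,jk,ik->i', Xc, G_inv, Xc)    # (x-mu)^T G^-1 (x-mu) per pixel
    return scores.reshape(Nx, Ny)

cube = np.random.rand(80, 80, 12)
rx_map = rx_detector(cube)
```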

The success of RX has led to improvements. An interesting extension of RX is proposed in [33], where the spatial distribution of the target is estimated jointly with the detection. In the case of a known target shape and an unknown target spectrum, the optimal RX detector is given by Eq. (1.32). In the whitened space, this equation becomes:

D_{RX}(X) = \frac{1}{s s^T} (X s^T)^T (X s^T)

or

D_{RX}(x_i) = \frac{1}{s s^T} \sum_{l,k} s_l s_k x_l^T x_k     (1.34)

The unknown s = [s_1, s_2, . . . , s_V] is estimated from the abundance s_l of the tested pixel x_i in its neighbours, which can be calculated by the following expression:

s_l = \left( \frac{x_i^T x_l}{‖x_i‖ ‖x_l‖} \right)^2 , l = 1 . . . V     (1.35)

where x is the whitened vector and V is the number of neighbour pixels. Eq. (1.35) is known

as the ACE obtained by GLRT, and the output gives a value belonging to [0, 1], characteristic

of the percentage of the pixel filled by the target.

A regularized adaptive RX (RARX) is proposed in [33] and its expression is given by :

D_{RARX}(x_i) = \frac{1}{s s^T} \left( ‖x_i‖^2 + 2 \sum_{l∈v(i)} s_l x_i^T x_l \right)     (1.36)


The first term in Eq. (1.36) is similar to Eq. (1.33), and the second one is a linear correlation

between xi and its neighbors, weighted by the estimated spatial distribution of the target in

the neighborhood. This term linearly depends on sl.
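The following rough sketch illustrates Eqs. (1.35)-(1.36) under simplifying assumptions: a 4-neighbourhood, global whitening of the data, and the term ss^T computed over the neighbour weights only. It is an illustration of the principle rather than the implementation of [33]; names and toy data are assumptions.

```python
import numpy as np

def rarx_detector(cube):
    """Regularised adaptive RX of Eqs. (1.35)-(1.36) -- a rough sketch (4-neighbourhood)."""
    Nx, Ny, N = cube.shape
    X = cube.reshape(-1, N)
    Xc = X - X.mean(axis=0)
    # whiten the data with the inverse square root of the covariance matrix
    w, E = np.linalg.eigh(np.cov(Xc, rowvar=False) + 1e-9 * np.eye(N))
    Xw = (((Xc @ E) / np.sqrt(w)) @ E.T).reshape(Nx, Ny, N)
    out = np.zeros((Nx, Ny))
    for i in range(1, Nx - 1):
        for j in range(1, Ny - 1):
            xi = Xw[i, j]
            neigh = [Xw[i - 1, j], Xw[i + 1, j], Xw[i, j - 1], Xw[i, j + 1]]
            # abundance weights of Eq. (1.35)
            s = np.array([(xi @ xl / (np.linalg.norm(xi) * np.linalg.norm(xl) + 1e-12)) ** 2
                          for xl in neigh])
            corr = sum(sl * (xi @ xl) for sl, xl in zip(s, neigh))
            out[i, j] = (xi @ xi + 2.0 * corr) / (s @ s + 1e-12)   # Eq. (1.36)
    return out

rarx_map = rarx_detector(np.random.rand(40, 40, 8))
```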

1.5.3 Principle of detection with CFAR

In the detection problem with CFAR, the detection threshold η is determined from the

PFA chosen by the user. In order to ensure a CFAR detection, i.e. a detection with a controlled number of false alarms, this threshold η should depend only on known parameters. In addition, from Eq. (1.15) we note that, for a desired PFA, the calculation of the threshold value requires knowing the probability density of the statistic D(x) under the hypothesis H0.

1.6 Conclusion

In this chapter, particular attention has been paid to the multi-linear algebra tools that allow a unified modeling of a set of multidimensional data. The acquired hyperspectral images are stored in a 3-dimensional cube, where each pixel is an information vector; such a cube is typically represented by a tensor of order 3.

In the case of HSI, the number of acquired images can reach a thousand. This large number of images involves manipulating vector spaces of large dimension. Most often, the acquisition step is therefore followed by a dimensionality reduction of the signal space, where a signal subspace is estimated. The most popular denoising and dimensionality reduction methods, namely SVD and MNF, have been presented. The reference techniques which involve minimizing the MDL and AIC criteria, as well as the HySime algorithm, have also been addressed. These algorithms are used to estimate the VD of the subspaces.

After the preprocessing step, consisting of denoising and reducing the data space of the HSI images, target detection algorithms are often applied to the reduced data spaces. If the spectrum of the target is known a priori, supervised target detection approaches are considered. The reference supervised detectors are the AMF and ACE, used as CFAR detectors. Three spectral distance measures (SAM, SID and TAU), used as non-CFAR defect detection algorithms, have also been presented. If nothing is known about the target, unsupervised approaches are considered. The most popular unsupervised detector is the well-known anomaly detection algorithm, RX. A recent extension of this algorithm, based on estimating the spatial distribution of the target in the neighborhood, has been presented. All of the techniques presented in this chapter will be used in the next chapters for the detection of surface and subsurface defects within metallic components in different industrial applications.

CHAPTER 2: Surface defect detection on flat metal parts using multi-component images

2.1 Introduction

Illumination is a particularly important component of the image acquisition system for

both human and automatic surface inspection. The choice of the illumination technique is a

crucial step in the designing process of any machine vision system. The goal of illumination

in machine vision is to make the important features of the object to be inspected visible

and reduce undesired features [34]. The quality of these important features depends on the illumination concept, as they need to be presented with maximum contrast. The challenge of illumination is to increase the signal-to-noise ratio and to emphasize these features so as to maximize the contrast. The means to increase the contrast are the direction of the light, the choice of the spectral band of the light and the effect of polarization [35].

The HSI algorithms are mainly used in remote sensing applications, which treat three-

dimensional data (spatial and spectral information) generated from hyperspectral sensors.

HSI is a non-destructive technology that represents an attractive solution for the characterization, classification and quality control of different materials in several industrial sectors.

The studies based on the application of HSI techniques to material classification and

inspection are increasing every day [36], demonstrating that this technique represents a very

smart and promising analytical tool for quality control. However, despite these advantages, HSI is still difficult to apply systematically, especially in real-time industrial applications, because of the huge amount of data constituting a spectral image. The long computation time sometimes represents a strong constraint at the industrial level, where fast processing of the collected information is often required.

In this chapter, we propose a simple technique to produce pseudo-spectral-cubes (PSC)

using different lighting modalities (white source light, monochromatic source lights in combi-

nation with polarized light) to illuminate flat metal parts containing artificial surface defects.

First, we create four PSCs that correspond respectively to the four basic lighting modalities: (i) unpolarized white light (UWL), (ii) polarized white light (PWL), (iii) unpolarized monochromatic light (UML), and (iv) polarized monochromatic light (PML). A fifth cube is created which corresponds to the combination of these basic cubes: (v) the entirety of all modalities.

The goal of this chapter is, on the one hand, to evaluate, combine and test different lighting modalities for defect detection in an industrial context and, on the other hand, to show the possibility of applying HSI methods to PSCs obtained with multiple illumination modalities. This chapter is thus a direct application, to non-hyperspectral data, of some of the algorithms presented in chapter 1. We have used three metallic parts, which contain three types of artificial defects produced in the laboratory, and for each part we built the five PSCs presented above. We used supervised and unsupervised HSI algorithms for the detection of the defects.

The remainder of this chapter is organized as follows. Section 2.2 presents the setup used

to achieve different lighting modalities and to acquire the corresponding images. Section 2.3 recalls the HSI algorithms presented in chapter 1 that are dedicated to target detection, anomaly detection and spectral distance measurement. Section 2.4 outlines the technical specifi-

cations of the components used in the acquisition system, shows some acquired images and

explains how the targets to be detected are chosen. Section 2.5 shows the experimental results

and section 2.6 concludes this chapter.


2.2 Image acquisition system

2.2.1 Light sources

Light emitting diodes (LEDs) are diodes made of semiconducting materials, which emit

radiation with a distinctive spectral position [37]. Since LEDs have so many practical advan-

tages such as longevity, high efficiency, and thus low heat dissipation ; they are currently the

primary illumination technology used in machine vision applications [35].

The acquisition system is set as (Figures 2.1 and 2.2) :

– An LED source is used to generate white light (WL) illumination in the visible domain.
– LED array sources are used to generate monochromatic light (ML) illuminations in the visible and near-infrared domains.

2.2.2 Polarized light

Almost all light sources emit unpolarized (natural) light (LEDs, incandescent lamps, fluo-

rescent lamps). This means that the light contains electromagnetic waves that oscillate in all

directions perpendicular to the propagation direction.

– Polarized illumination (Figure 2.1a) for both white and monochromatic sources is achie-

ved using a polarization filter in front of the light sources. The polarized light interacts

with the metal part surface (scattering or reflection) that changes the polarizing pro-

perties in relation with the incoming light. To grasp this, another polarizing filter, the

analyzer, is mounted in front of the camera lens. Depending on the angular position of

this filter the incoming polarized light from the test part is analyzed. One main interest

of using polarized light is due to the interaction with the metal surface, which depends

on the 3D geometry of the defect, while only 2D information is usually available from

unpolarized illumination.

– Unpolarized illumination for both white and monochromatic sources is achieved with

the same acquisition system by just removing the polarizing filter (Figure 2.1b). The

analyzer filter could be kept and fixed to a selected degree of rotation to enhance the

image’s contrast.


Figure 2.1 — acquisition system: (a) polarized white and monochromatic lights, (b) unpolarized white and monochromatic lights.

Figure 2.2 — picture of the set-up, made of a color camera with a linear polarization filter, an LED illumination source and the inspected flat surface.

2.2.3 Light perception and colors interpretation

The light perception of machine vision cameras differs from the human eye. Machine vision

uses in most cases light with wavelengths between 380 nm and 1100 nm (from blue light to

near infrared). To interpret and determine colors, it is necessary to combine the information

of three (or four) pixels with different color sensitivity, corresponding to red, green and blue

(RGB) for example, or the complementary colors cyan, yellow and magenta, which are typical

triples to determine the color values. In practice this is realized with micro-optical mosaic

filters for single chip color cameras. These color filters cover the single pixels inside the image

sensor with a special pattern (for example the Bayer pattern) and make them spectrally

sensitive (one-chip-color camera). Another construction uses three separate sensors. Each

sensor is made sensitive for only one color (RGB). The information of all three sensors is

correlated and delivers the color information (3-CCD-cameras) [35].


Modalities              Cube (1)  Cube (2)  Cube (3)  Cube (4)  Cube (5)
White light                •         •                              •
Monochromatic light                            •          •         •
Polarized light                      •                    •         •
Unpolarized light          •                   •                    •
Number of components       3         9         12         36        60

Table 2.1 — pseudo-spectral cubes. Each PSC is constructed from the modalities marked with the black dots.

2.2.4 Modalities

In order to produce multiple data cubes representing different pseudo-spectra for the same

area of interest on the tested metallic parts, we have used the acquisition system depicted in

Figures 2.1 and 2.2. More practical details on the experiments are listed in paragraph 2.4.

The used acquisition system enables us to realize the four basic lighting modalities :

– The UWL (respectively UML) modality is realized by using a white LED (respectively monochromatic LEDs) for illumination and placing a polarizing filter (called the analyzer) in front of the camera, fixed at a selected angular position.
– The PWL (respectively PML) modality is realized by using a white LED (respectively monochromatic LEDs) for illumination, followed by a polarizing filter (called the polarizer), and by placing another polarizing filter (the analyzer) in front of the camera. The polarizer is fixed at a defined angular position.
To each of these four modalities corresponds one PSC containing the acquired images. A fifth cube is created from these basic cubes as shown in Table 2.1; the number of components varies from 3 to 60.

2.3 Algorithms

In the literature, many algorithms for the detection and classification of multispectral and

hyperspectral imagery are proposed. For the purposes of this work, we have tested and com-

pared different supervised and unsupervised HSI algorithms for detection of surface defects.


As the HSI algorithms were already presented in chapter 1, we recall their expressions in the

following.

Adaptive matched filter (AMF) and adaptive cosine estimator (ACE) are spectral target

detection algorithms, which require spectral information about the target of interest. The

AMF detector is based on the following hypothesis test, for full-pixel detection:

H0 : x ∼ N(µ_b, Γ), target absent
H1 : x ∼ N(s, Γ), target present     (2.1)

Its expression can be found in [38]:

D_{AMF}(x, s) = \frac{s^T \hat{Γ}_b^{-1} x}{s^T \hat{Γ}_b^{-1} s} \overset{H_1}{\underset{H_0}{\gtrless}} η_{AMF}     (2.2)

where x and s are respectively the spectrum of the pixel under the test and the spectrum

of the desired target ; Γ is the estimated covariance matrix, supposed to be the same for the

background and the target.

The ACE detector is based on the hypothesis that the covariance matrix of the target is

proportional to the covariance matrix of the background, modeled as a Gaussian noise.

H0 : x ∼ N(µ_b, Γ), target absent
H1 : x ∼ N(Sa, σ^2 Γ_b), target present     (2.3)

where Sa is the spectral variability of the target.

The adaptive cosine estimator (ACE) derived from this model is given by [38]:

D_{ACE}(x, s) = \frac{‖ s^T \hat{Γ}_b^{-1} x ‖^2}{\left( s^T \hat{Γ}_b^{-1} s \right)\left( x^T \hat{Γ}_b^{-1} x \right)} \overset{H_1}{\underset{H_0}{\gtrless}} η_{ACE}     (2.4)

The spectral distance measures used are:
– Spectral angle mapper (SAM), which calculates the angle between x and s and is given by [19]

D_{SAM}(x, s) = cos^{-1}\left( \frac{x^T s}{‖x‖ ‖s‖} \right)     (2.5)

– Spectral information divergence (SID), given by [19]:

D_{SID}(x, s) = \sum_{n=1}^{N} p_n log\left( \frac{p_n}{q_n} \right) + \sum_{n=1}^{N} q_n log\left( \frac{q_n}{p_n} \right)     (2.6)

where p = (p_1, p_2, . . . , p_N)^T and q = (q_1, q_2, . . . , q_N)^T are the two probability mass functions generated by normalizing to sum-to-unity the spectral vectors s = (s_1, s_2, . . . , s_N)^T and x = (x_1, x_2, . . . , x_N)^T, respectively, and N is the number of spectral bands.

– The Kendall's τ (TAU), which measures the concordance level between x and s and is given by [26]:

D_{TAU}(x, s) = \frac{2}{N(N−1)} \sum_{n=1}^{N−1} \sum_{k=n+1}^{N} sign[(x_k − x_n)(s_k − s_n)]     (2.7)

with:

sign(u) = { 1 if u > 0 ; −1 if u < 0 ; 0 if u = 0 }     (2.8)

where (x_i, s_i), i = 1 . . . N, is a sample of N observations of (x, s).

The anomaly detectors are unsupervised algorithms, which do not require any knowledge

of the spectral signature of the target of interest. Background models are developed adaptively

using reference data from either a local neighborhood of the test pixel or a large section of the

image. Anomalies are defined as observations that deviate in some way from the neighboring

clutter background or the image-wide clutter background, respectively [30]. The most commonly used algorithm is RX, given by

D_{RX}(x) = (x − µ)^T Γ^{-1} (x − µ) \overset{H_1}{\underset{H_0}{\gtrless}} η_{RX}     (2.9)

where Γ and µ are, respectively, the estimated covariance matrix and mean vector of the reference background data.

Figure 2.3 — Bayer demosaicing: (a) example of a 2×2 matrix, (b) color interpolation [39].

2.4 Experiments

The following components have been used in the acquisition system described in Fi-

gure 2.1 :

– "LUXEON Star Warm white LED" source to generate WL illumination, which emits

WL between 400 and 750 nm.

– "Super high power LED array with 12 parallel 1 mm2 power chips" sources to generate

ML illuminations in visible and near-infrared domains. Nine LED arrays are used, which

emit MLs in the following wavelengths : 470, 505, 590, 630, 780, 810, 850, 880 and

940 nm.

– "Heliopan ES 55 Pol lin. SH-PCM" linear polarizer filter to polarize white and mono-

chromatic light.

– "AVT Dolphin F145C color camera, which incorporates a Sony ICX-285AQ sensor and

is sensitive to wavelengths up to 1000 nm" to acquire color images. The camera sensor

captures the color information via the so-called primary color (RGB) filters placed over

the individual pixels in a BAYER mosaic layout (Figure 2.3a).

The red, green or blue value of each pixel is determined by interpolating two adjacent lines (Figure 2.3b).

Depending on the illumination wavelength, each primary color (RGB) filter of the camera gives a different response. The spectral response curves presented in the data sheet of the sensor used in the camera [39] provide this information. For example, at λ = 470 nm, the spectral responses of the red, green and blue filters are respectively about 3 %, 30 % and 82 %. The red component of the image acquired at 470 nm is thus almost zero; therefore, this component is not used in the construction of the PSCs. On the other hand, only the red components are used for the near-infrared wavelengths (from 780 nm to 940 nm). The construction of the PSCs is shown in Figure 2.4, and Table 2.3 shows the selected components.

Figure 2.4 — construction of the PSCs from the acquired color images, where only the components that contain relevant information are taken into consideration.
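As a hypothetical illustration of this construction, the following Python sketch stacks the selected color components of Table 2.3 into the 12-component unpolarized monochromatic cube (cube (3)); the dictionary keys, function name and stand-in images are assumptions, not part of the actual acquisition software.

```python
import numpy as np

# Selected color components per monochromatic wavelength, mirroring Table 2.3 (UML cube).
SELECTED = {'470': 'GB', '505': 'GB', '590': 'RG',
            '630': 'R', '780': 'R', '810': 'R', '850': 'R', '880': 'R', '940': 'R'}
CHANNEL = {'R': 0, 'G': 1, 'B': 2}

def build_psc(images):
    """images : dict mapping a wavelength key of SELECTED to an (Nx, Ny, 3) RGB image."""
    planes = []
    for wl, comps in SELECTED.items():
        for c in comps:
            planes.append(images[wl][:, :, CHANNEL[c]])   # keep only relevant components
    return np.dstack(planes)                               # pseudo-spectral cube

images = {wl: np.random.rand(100, 120, 3) for wl in SELECTED}   # stand-in acquisitions
psc = build_psc(images)
print(psc.shape)    # (100, 120, 12) for the UML modality
```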

We have used three flat metal parts which contain different 3D artificial defects (called hereafter Defect I, Defect II and Defect III), fabricated with different tools. The considered samples and their specifications are listed in Table 2.2. For our experiments, we considered only a part of each inspected metal part.

The images acquired in the different wavelength bands are shown in Table 2.3. Those obtained with different polarization degrees are shown in Figures 2.5, 2.6, and 2.7 for Defect I, Defect II and Defect III, respectively (these images in fact combine the different components).

Concerning the polarization modalities, three angular positions of the analyzer are used

(30 ◦, 60 ◦ and 90 ◦) in order to have three different representations for the same scene. The

same color components used for unpolarized modalities (Table 2.3) are also used for the pola-

rized modalities. So, for each image in Table 2.3 we have three other images which correspond

respectively to the three polarization degrees 30°, 60° and 90°. Since we will use pixel-wise algorithms to detect the defects, the size of the spatial region of interest does not influence the detection results; we have therefore chosen to inspect small regions of the acquired images, as shown in Table 2.2. It should be mentioned that only the estimation of the background covariance matrix depends on the selected spatial region. The spatial dimensions, in pixels, of the inspected regions for each metal part are respectively (170 × 240), (90 × 360) and (130 × 230). The spectral dimensions are shown in Table 2.1.

                        Defect I              Defect II             Defect III
Size (pixels)           170 × 240             90 × 360              130 × 230
Defect specifications   2 dent defects:       2 dent defects:       Scratch defect:
                        diameter = 2 mm       diameter ≈ 0.5 mm     width ≈ 0.2 mm
                        depth = 0.2 mm        depth ≈ 0.05 mm       length ≈ 6 mm
                                                                    depth ≈ 0.03 mm

Table 2.2 — inspected regions from the considered samples and their specifications (sample photos not reproduced here).

We have built the five cubes presented in Table 2.1 for each metal part and we have used the six algorithms presented in section 2.3 to detect the defects. We have selected four different targets, as shown in Figure 2.8, in order to evaluate the influence of the choice of the target on the detection results. Two targets, referred to hereafter as target_1 and target_2, are selected respectively in the middle and at the border of the defect. Two other targets, referred to hereafter as target_3 and target_4, are selected respectively as a background pixel and as the mean spectrum of the cube.

Choosing the target as a background spectrum or as the mean spectrum of the data cube can make the detection quasi- or totally unsupervised. For these two last targets, we consider the complement-to-one of the normalized detection map, as the objects of interest are the defects and not the background.

λ (nm)   Selected color components
WL       R, G, B
470      G, B
505      G, B
590      R, G
630      R
780      R
810      R
850      R
880      R
940      R

Table 2.3 — acquired images with the unpolarized modalities for the three inspected parts (only the selected color components are listed; the image thumbnails are not reproduced here).


Figure 2.5 — acquired images for Defect I with different polarization degrees for the different wavelength bands.

2.5 Results and discussion

The detection results of the used algorithms are provided as detection maps. The com-

parison between the detection results is done by calculating the PFA and PD for all the

defects present in each metal part. In order to calculate these probabilities, we normalize the detection maps to [0, 1] and, by varying the detection threshold η between 0 and 1 with a chosen step of 10^{-3}, we calculate PD and PFA for each value of the threshold. Once PD and PFA are calculated, we can generate the ROC curves for each detection map.

Figure 2.6 — acquired images for Defect II with different polarization degrees for the different wavelength bands.

In order to reduce the amount of information, we calculate for each case the value of the threshold η and the false alarm rate (FAR) that correspond to a fixed good detection rate (GDR). The binary detection mask can then be created by thresholding the detection map according to the calculated value of η. The best methods, combining an algorithm and a target choice, are those that give the lowest FAR.
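A minimal sketch of this evaluation procedure is given below: the detection map is normalized to [0, 1], the threshold is scanned with the 10^{-3} step mentioned above, and the first threshold reaching the chosen GDR gives the reported FAR; the variable names and toy data are assumptions made for illustration.

```python
import numpy as np

def far_at_fixed_gdr(det_map, truth, gdr=0.90, step=1e-3):
    """Threshold eta and false alarm rate at a fixed good detection rate -- a sketch."""
    d = (det_map - det_map.min()) / (det_map.max() - det_map.min() + 1e-12)
    for eta in np.arange(1.0, -step, -step):        # scan from high to low threshold
        detected = d > eta
        pd = np.logical_and(detected, truth).sum() / truth.sum()
        if pd >= gdr:                               # first threshold reaching the GDR
            far = np.logical_and(detected, ~truth).sum() / (~truth).sum()
            return eta, far
    return 0.0, 1.0

# toy usage: the target pixels receive a higher response than the background
truth = np.zeros((64, 64), dtype=bool); truth[20:24, 20:24] = True
det_map = np.random.rand(64, 64) + truth
eta, far = far_at_fixed_gdr(det_map, truth)
```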

For each algorithm and metal part, we present one detection mask among the 20 available (corresponding to five cubes and four targets, except for RX, an unsupervised algorithm, which has only 5 detection masks corresponding to the five cubes). The pair (c, t) indicates the cube 'c' and the target 't' used for the detection. AFAR (%) is the average false alarm rate of the 20 (respectively 5 for RX) detection maps for a GDR fixed at 90%. Table 2.4 shows the detection masks closest to the mean performance AFAR for the six algorithms presented in section 2.3 and for the three metal parts presented in section 2.4, where c_i (i = 1...5) denotes cube (i) and t_j (j = 1...4) denotes target_j.

Figure 2.7 — acquired images for Defect III with different polarization degrees for the different wavelength bands.

Figure 2.8 — example of different targets selected from Defect I, used to detect the defects.

Algo.          Defect I     Defect II    Defect III
AMF   (c, t)   (c3, t1)     (c4, t2)     (c5, t1)
      AFAR     74.63        49.11        63.37
ACE   (c, t)   (c4, t2)     (c3, t1)     (c3, t1)
      AFAR     76.11        61.65        69.99
TAU   (c, t)   (c4, t2)     (c2, t3)     (c4, t2)
      AFAR     54.77        27.07        37.10
SAM   (c, t)   (c4, t3)     (c3, t3)     (c4, t3)
      AFAR     00.50        00.01        01.32
SID   (c, t)   (c5, t2)     (c1, t3)     (c1, t3)
      AFAR     48.65        32.37        32.79
RX    (c)      (c1)         (c4)         (c4)
      AFAR     45.66        02.96        04.83

Table 2.4 — detection masks and AFAR corresponding to GDR = 90%.

The results show that:

– AMF and ACE have the worst detection results, because of the spectral variability of the target (defect) and background classes. The defect is represented by multiple spectra,

which makes the choice of a single representative spectrum difficult. By choosing only

one spectrum as target to be detected, AMF and ACE can detect only the pixels that

have the same spectral signature as the chosen target. On the other hand, some of the

key assumptions used for these detectors [31] may not be verified, such as :

– The background should be homogeneous and could be modeled by a multivariate

normal distribution.

– The background spectrum interfering with the test pixel spectrum should have the

same covariance matrix with the background training pixels.

– The test and training pixels should be independent.

In our case, under the assumption of low-probability targets, we have estimated the mean vector and covariance matrix of the background from the entire data cube. Nevertheless, AMF and ACE can provide a reduced FAR by choosing the pair (c1, t1/t2) and fixing the GDR to 90%.

– The estimated values of TAU belong to [0.114, 1], [0.329, 1] and [0.576, 1] respectively for Defect I, Defect II and Defect III, which means that the pixels are not discordant. TAU and SID give similar results and can perform well when t3 or t4 is chosen as the target to be detected. TAU hardly detects anything when using cube (1), the unpolarized WL modality, whose 3 components are not enough.

– RX is an unsupervised algorithm based on a local or global background model reference.

Here we compute the covariance matrix of the data cube, and define the anomalies to be

the points that are farthest from the mean spectrum, in the Mahalanobis distance sense.

The results vary a lot from one modality to another and from one defect to another,

although the AFAR value is rather good for Defects II and III. When the background is not uniform or when the defects are spread over a large surface, RX does not give satisfactory results.

– SAM, a simple spectral cosine-angle measure, is the algorithm that gives the best results, which are more constant and stable compared to the other algorithms. For an unsupervised approach (choosing t3 or t4 as target) and GDR = 90%, SAM can reach a very low FAR, down to 0% for the three defects and for all modalities.

The FAR is reduced significantly when using ML modalities compared to WL modalities for

TAU, SAM and SID. These distance measures behave in the same way with respect to the different modalities, although their results differ: modalities (3), (4) and (5) are very close to each other and are better than modality (2), which is in turn better than modality (1).

In order to compare the different modalities, we plot the ROC curves (Figure 2.9), using a semi-logarithmic scale on the PFA axis, for the detection maps of SAM (a distance measure applied in an unsupervised approach, which does not require the estimation of the background covariance matrix) and RX (an unsupervised detection algorithm, which has the CFAR property).

As a conclusion :

– SAM performs better than RX.
– Modalities (1) and (2), with white light illumination and a small number of components (resp. 3 and 9), do not perform well. The performances are improved with the ML modalities, which involve a higher number of components (resp. 12 and 36).


Figure 2.9 — ROC curves of SAM with target_4 and RX for GDR=90%

– Polarization can improve the detection in certain cases, depending on the geometry of

the defect and the used algorithm.

– Changing the wavelength band, modality (3), smooths the image and is generally better than changing the polarization degree, modality (4), which does not bring pertinent features for the detection. However, the combination of all modalities (modality (5)) offers a good compromise between them and, in some cases, even the best results.


2.6 Conclusion

In this chapter, a comparison study is presented between different lighting modalities

used to illuminate metal parts containing 3D artificial surface defects. These modalities are

combined in order to obtain data cubes, which are compared and tested by means of supervised

and unsupervised detection algorithms dedicated to hyperspectral imaging applications.

Usually, it is difficult to determine a single representative spectral signature for a real defect (target), because there can be a wide variation for the same defect. Moreover, it may be impractical or even impossible to investigate all the possible defect signatures. This is why we also investigated unsupervised methods.

The spectral angle mapper (SAM) algorithm, which is widely used for material identifica-

tion, shows interesting results in an unsupervised approach, where the target to be detected

has been chosen as a background pixel or as the mean of the data cube. Moreover, this simple

approach allows fast detection, which is an important parameter in the context of industrial

applications.

In conclusion, the use of multiple light sources to illuminate the same area of interest, which provides multiple representations with different information for the same defect, improves the detection performance. Indeed, changing the polarization degree does not bring pertinent features for the detection, whereas changing the wavelength band smooths the image and gives better detection results.

CHAPTER 3: Surface and sub-surface defect detection on nuclear parts using thermography images

3.1 Introduction

3.1.1 Motivation of thermography testing

Different kinds and shapes of metallic parts (steel, stainless steel, aluminum, etc.) are

used in several industrial areas such as the automotive, aviation, and nuclear domains. These

components are very often inspected during the production or maintenance processes for

quality control purposes. The component inspection, which is a quality control task, is defined

by Newman and Jain [40] as a process of determining if a product deviates from a given set of

specifications. The inspectors are provided with lists and descriptions of unacceptable defects

such as cracks or surface blemishes.

Depending on the quality control requirements (number of parts to be examined, size of

defects to be detected, etc.), the products are examined manually or automatically during the

production or the manufacturing processes. Various non-destructive testing (NDT) techniques

exist in the literature [41]. The choice of the appropriate technique primarily depends on the

type of anomalies to be detected. For metal parts, anomalies are often either surface defects

(scratches, dents, etc.) or subsurface defects (certain types of cracks or porosities, etc.) which


are internal discontinuities and cannot be seen visually.

Automated visual testing (AVT) concerns the surfaces and also the subsurface parts of components. As an example, the examination of weld structures of nuclear components encompasses the early detection of propagating inter-dendritic cracks.

The detection of surface and sub-surface defects in metallic industrial components by means of NDT techniques has been the subject of several research works. NDT is an examination, test, or evaluation performed on an object of any type, size, shape or material, without changing or altering that object in any way, in order to determine the absence or presence of discontinuities that may have an effect on the usefulness or serviceability of that object. NDT may also be conducted to measure other test-object characteristics, such as size, dimension, configuration, or structure, including alloy content, hardness, and grain size [41]. Nondestructive evaluation (NDE) is a term that is often used interchangeably with NDT. NDE may be used to determine material properties such as fracture toughness, formability, and other physical characteristics [42]. Nondestructive testing and evaluation (NDT&E) methods are required to be reliable, economical, sensitive, user-friendly and fast [43]. In the remainder of this chapter, the terminology NDT will be used.

Although NDT cannot guarantee that failures will not occur, it plays a significant role in minimizing the possibilities of failure. Several NDT techniques such as liquid penetrant testing, magnetic particle testing (MT), radiographic testing (RT), ultrasonic testing (UT), and thermography testing (TT) have been used for material inspection. Each of these NDT techniques has its own requirements and limitations. In liquid penetrant testing, only surface-breaking defects can be detected; surface preparation is critical, as contaminants can mask defects; relatively smooth and nonporous surfaces are required; and chemical handling precautions are necessary (toxicity, fire, waste). In MT, only ferromagnetic materials can be inspected; relatively smooth surfaces are required; paint or other nonmagnetic coverings adversely affect sensitivity; and demagnetization and post-cleaning are usually necessary. In RT, access to both sides of the structure is usually required, a relatively expensive equipment investment is needed, and there is a possible radiation hazard for personnel. In UT, the skill and training required are more extensive than for other techniques; surface finish and roughness can interfere with the inspection; thin parts may be difficult to inspect; and linear defects oriented parallel to the sound beam can go undetected. TT is an imaging technology,


which is contactless, completely non-destructive and safe. Since temperature is one of the most useful parameters indicating the structural health of an object, TT is used to

detect surface and sub-surface defects by determining the surface temperature of the object

using an IR camera.

We focus in this work on infrared thermography (IRT) techniques. IRT is a non-intrusive

temperature measuring technique for producing an image of the infrared light - invisible to

our eyes - emitted by objects due to their thermal condition. IRT is an NDT method, with

the advantages of being fast ; easy to apply ; applicable to all situations as long as there is

a temperature difference on the surface of the inspected object ; and providing non-contact,

non-interaction, real-time measurements over a large detection area - instead of point or line -

with a long range. IRT can only detect defects that cause a change in heat flow or the surface

temperature of the item.

3.1.2 History of previous works

The use of IRT as an NDT technique has been the subject of extensive research; some recent works are cited here. Bagavathiappan et al. [44] reviewed and discussed in detail various applications of IRT as a condition monitoring technique. Joao et al. [45] presented a method using IRT for the indirect identification of sectors with high current density concentration in planar microwave devices. Ludwig et al. [46] presented a study based on IRT as a tool for rapidly screening the anti-transpirant activity of chitosan in bean plants. Salaimeh et al. [47] developed an IRT approach to quantify in real time the viable bacteria in a liquid medium. In the same field, Lahiri et al. [44] employed a real-time IRT method for the measurement of temperature variations in four clinically significant gram-negative pathogenic bacteria. Tan et al. [48] presented a research work evaluating the topographical variation of the ocular surface temperature among young subjects, elderly subjects and subjects wearing contact lenses, using IRT. Sham et al. [49] proposed an algorithm based on the principle of computerized tomography for the reconstruction of unavailable or partially available temperature distributions in IRT, using the measured surrounding temperature field. Salaimeh et al. [50] evaluated the ability of IRT to quantify in real time Staphylococcus aureus in a liquid medium. Picazo-Rodenas et al. [51] proposed a methodology for the computation of the energy balance and heating curves of an induction motor taking as a basis the information provided by the application of IRT. Hu et al. [52] used IRT technology for winter wheat irrigation scheduling.

3.1.3 Goal and new contributions

In this chapter, we propose a new approach for the inspection of metallic parts in nuclear components, targeting surface and sub-surface defects. The proposed approach is based on the use of IRT techniques together with hyperspectral imagery (HSI) algorithms.

On the one hand, the HSI algorithms are generic, which means that they can be applied to different types of images. On the other hand, the data structure is similar in both techniques: the sequence of thermal images acquired in TT, where the temperature profile of each pixel is stored with respect to time, can be considered as a single hyper-component cube containing, for each pixel, a temporal response instead of a spectral response. Lock-in and pulsed thermography (LT and PT, respectively) techniques are used to heat the inspected specimen, and its thermal behavior is recorded during the heating and cooling periods.

The novelty of this work consists in the combination of these two techniques: TT and HSI detection algorithms. It is shown that this approach leads to a new unsupervised and generic examination procedure, as different defect types can be revealed with a single processing method. We illustrate our purpose with experiments on three metallic parts containing open cracks, and open and closed notches with different sizes and depths. Two thermography techniques, LT and PT, were applied to the samples, and datasets of thermal images were created. Our goal here is to apply anomaly detection algorithms to the elaborated dataset images in order to detect the surface and sub-surface anomalies existing within the inspected samples. Only unsupervised algorithms are investigated, where no prior knowledge about the defects is available.

We first show that using the whole cube is not efficient for anomaly detection, because of the high false alarm rate. Indeed, due to the high dimensionality of the hypertemporal cubes, the Hughes phenomenon applies and can be at the origin of many false alarms. We consequently try to reduce the dimension, while preserving as much of the anomaly information as possible.

Different denoising and dimensionality reduction algorithms, such as singular value decomposition (SVD), principal component analysis (PCA), maximum noise fraction (MNF) and independent component analysis (ICA), have been used to reduce the data space of the acquired data cubes, where a signal subspace is estimated in order to work in a space of smaller dimension than the original data space. After the dimensionality reduction of the data cube, HSI algorithms, originally dedicated to remote sensing applications, are applied to the reduced dataset images, where the anomalies are detected in an unsupervised way. The well-known Reed and Xiaoli Yu detector (RX) and a spatially adaptive version, the regularized adaptive RX (RARX), have been used in this work to detect the anomalies.

We propose a practical method to estimate the virtual dimensions (VDs) of the reduced data spaces, based on the evolution of the energy and the signal-to-noise ratio (SNR) of the principal components after reducing the space dimensionality. We compare this approach to other existing techniques, such as the Akaike information criterion (AIC), the minimum description length (MDL), and the hyperspectral signal identification by minimum error (HySime, used to reduce the dimensionality and to estimate the VD).

We also propose a new method, based on the combination of two criteria, the first one taking into account second-order statistical moments (the energy and the SNR, which is an energy ratio), and the second one taking into account a higher-order moment, here the kurtosis, in order to reduce the dimension of the original space to a single principal component (PC), the most non-Gaussian one. The original data are projected on the direction of the selected PC, which has enough energy to be among the first PCs and has the maximum value of kurtosis. The experimental false alarm rates (FARs) calculated from the projected data are compared with those obtained from SVD and MNF, as well as with the theoretical FARs.

3.1.4 Structure of the chapter

The remainder of this chapter is organized as follows. Section 3.2 recalls the state of the art of the theory of thermal energy transfer and the fundamentals of infrared systems, where thermal emission, thermal spectral bands, image formation and infrared detectors are discussed. The main thermography techniques dedicated to defect detection in IRT images are presented in this section, with particular attention given to two techniques, pulsed thermography and lock-in thermography. The existing methods in IRT and their limitations are also reviewed, with particular attention to the methods that are based on thermal contrast techniques, pulsed phase thermography (PPT) and principal component thermography (PCT). The proposed approach is described in Section 3.3, where the main steps of the approach, the experimental setup and the results are presented and discussed. Section 3.4 concludes this chapter.

3.2 State of the art

The fundamentals of IRT are presented in Annexe A. Particular attention is given in the first part to heat energy and transfer, since they are very important in TT and help to explain observed phenomena such as abnormal temperature patterns. The second part deals with infrared system fundamentals, and the last part presents the IRT techniques (lock-in and pulsed thermography) used in this chapter.

3.2.1 Existing methods of defect detection in IRT

Defect detection and material inspection methods in thermography have gone through several progressive steps. Classical thermography is based on the visual interpretation of the thermographic images: the heating or cooling anomalies are observed after the application of the heat. Defects that produce subtle temperature differences in the thermal images are generally not detected. Furthermore, this technique is based on temperature information only and is susceptible to emissivity variations or uneven heating. The detection of subsurface defects can be greatly enhanced by the real-time capture of a series of thermal images and the subsequent analysis of these images using various image processing algorithms, with which defects not readily observable can be detected and quantitatively characterized [53].

Various techniques, including image normalization [53], thermal contrast calculations [54], pulsed phase thermography (PPT) [54], [55], [56] and principal component thermography (PCT) [57], [58], have been developed to remove emissivity variations or uneven heating, so as to increase defect contrast and inspection depths [53], [59].


3.2.1.1 Image normalization

Image normalization is an image processing technique in which the average of the complete image sequence to be processed is divided by the average of a selected set of images in which the defect is observed in the temperature data [53]. This simple calculation minimizes uneven heating and emissivity variations while improving defect contrast. An obvious difficulty is, of course, to find those relevant images.
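As an illustration, the following minimal Python/NumPy sketch implements this normalization under simple assumptions: the sequence is stored in an array named cube of shape (Nx, Ny, N) with time along the last axis, and relevant_idx holds the (manually chosen) indices of the images in which the defect is observed; both names are introduced only for the example.

import numpy as np

def normalize_sequence(cube, relevant_idx):
    # cube: thermogram sequence, shape (Nx, Ny, N); relevant_idx: frames where the defect is visible
    mean_all = cube.mean(axis=2)                           # average of the whole sequence
    mean_relevant = cube[:, :, relevant_idx].mean(axis=2)  # average of the selected relevant images
    eps = 1e-12                                            # guard against division by zero on dead pixels
    # pixel-wise division attenuates uneven heating and emissivity variations
    return mean_all / (mean_relevant + eps)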

3.2.1.2 Thermal contrast techniques

Most of the data processing algorithms that have been developed for defect characterization use thermal contrast calculations. Various thermal contrast definitions exist [54], such as the absolute thermal contrast (ATC), the running contrast, the normalized contrast and the standard contrast. Most of them share the need for a sound area Sa, i.e. a non-defective region within the field of view. The basic definition of thermal contrast is the ATC, which measures the difference between defective and non-defective regions. ATC is defined as [54]:

\Delta T(t) = T_d(t) - T_{Sa}(t) \qquad (3.1)

Establishing this Sa is the main drawback of thermal contrast, especially if automated analysis is needed or if nothing is known about the specimen. Even when defining a Sa is straightforward, considerable variations in the results are observed when changing the location of Sa [60]. To overcome the problem of the Sa location, the differential absolute contrast (DAC) was proposed [56], [61]. DAC is based on Eq. (3.1); however, instead of finding a Sa somewhere in the image, the Sa temperature at time t is computed locally, assuming that on the first few images (at time t' in particular) the local point p behaves as a Sa. The DAC is defined as [59], [60]:

\Delta T_{DAC}(t) = T_d(t) - \sqrt{\frac{t'}{t}}\; T(t') \qquad (3.2)

The first step is to define t' as a given time value between the instant when the pulse has been launched and the precise moment when the first defective spot appears in the thermogram sequence, i.e. when there is enough contrast for the defect to be detected. Originally, the proper selection of t' requires an iterative graphical procedure. Afterwards, a modified DAC technique was proposed [56]. It is based on a finite plate model and the thermal quadrupoles theory, which includes the plate thickness L explicitly in the solution in order to extend the validity of the DAC algorithm to later times. The inverse Laplace transform is used to obtain a solution of the form:

\Delta T_{DAC,mod}(t) = T_d(t) - \frac{\mathcal{L}^{-1}\left[\coth\left(\sqrt{pL^2/\alpha}\right)\right]\big|_{t}}{\mathcal{L}^{-1}\left[\coth\left(\sqrt{pL^2/\alpha}\right)\right]\big|_{t'}}\; T(t') \qquad (3.3)

where p is the Laplace variable, α and L are respectively the thermal diffusivity and the plate

thickness of the material.
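For illustration, a minimal Python/NumPy sketch of the basic DAC of Eq. (3.2) is given below; it assumes a cube of shape (Nx, Ny, N) acquired after the pulse, a vector t of acquisition times and an index k_ref pointing to the early reference time t', all example names. The modified DAC of Eq. (3.3), which requires a numerical inverse Laplace transform, is not sketched here.

import numpy as np

def dac_contrast(cube, t, k_ref):
    # cube: thermograms after the pulse, shape (Nx, Ny, N); t: acquisition times (s), t > 0
    # k_ref: index of the early reference time t' at which every pixel still behaves as a sound area
    t_ref = t[k_ref]
    T_ref = cube[:, :, k_ref]                   # local reference temperature T(t')
    scale = np.sqrt(t_ref / t)[None, None, :]   # sqrt(t'/t), one factor per frame
    return cube - scale * T_ref[:, :, None]     # Delta T_DAC(t) for every pixel, Eq. (3.2)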

3.2.1.3 Pulsed phase thermography (PPT)

In PPT, the acquisition is accomplished in a similar way as in classical PT; the image sequence is afterwards transformed, pixel by pixel, from the time domain to the frequency domain using the one-dimensional discrete Fourier transform (DFT) [60], [54]:

F_n = \Delta t \sum_{k=0}^{N-1} T(k\Delta t)\, \exp(-j 2\pi n k / N) = \mathrm{Re}_n + j\,\mathrm{Im}_n \qquad (3.4)

where j is the imaginary unit, n designates the frequency increment (n = 0, 1, ..., N−1), ∆t is the sampling interval, T(k∆t) designates the temperature at location p in the kth image of the sequence, and Re_n and Im_n are the real and imaginary parts of the transform, respectively. The DFT can be applied to any waveform; hence it can be used with pulsed as well as lock-in data. Although very useful, Eq. (3.4) requires a lot of computation time. Usually, the fast Fourier transform (FFT) algorithm is used to Fourier transform the temperature response of the image sequence. The real and imaginary parts of the complex transform are used to estimate the amplitude A_n and the phase φ_n [54], [60], [56]:

A_n = \sqrt{\mathrm{Re}_n^2 + \mathrm{Im}_n^2}\;; \qquad \varphi_n = \tan^{-1}\!\left(\frac{\mathrm{Im}_n}{\mathrm{Re}_n}\right) \qquad (3.5)

The phase, Eq. (3.5), is of particular interest in NDE given that it is less affected than raw thermal data by environmental reflections, emissivity variations, non-uniform heating, and surface geometry and orientation. These phase characteristics are very attractive not only for qualitative inspections but also for the quantitative characterization of materials [60].

This algorithm effectively removes uneven heating and emissivity variations, and the heat defect contrast is observed in the maximum phase images. Defect contrast can also be enhanced by imaging the effective diffusivity of the sample. This data reduction algorithm involves fitting a theoretical one-dimensional temperature response model to the measured temporal temperature response, point by point. As any other thermographic technique, PPT is not without drawbacks. The noise content is considerable, especially at high frequencies. A de-noising step is therefore often required [61].
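As an illustration, the following Python/NumPy sketch computes the amplitude and phase images of Eqs. (3.4)-(3.5) with the FFT, together with the φmax image discussed later in paragraph 3.2.2; the cube array and the choice of skipping the DC term are assumptions of the example (the ∆t factor of Eq. (3.4) only rescales the amplitude and does not affect the phase).

import numpy as np

def ppt_phase_amplitude(cube):
    # cube: thermogram sequence, shape (Nx, Ny, N), time along the last axis
    F = np.fft.fft(cube, axis=2)            # 1-D DFT of each temperature profile, Eq. (3.4)
    amplitude = np.abs(F)                   # A_n, Eq. (3.5)
    phase = np.arctan2(F.imag, F.real)      # phi_n, Eq. (3.5)
    return amplitude, phase

def phase_max_image(phase):
    # phi_max: maximum phase over the useful frequency increments, for each pixel
    half = phase.shape[2] // 2              # real input: only the first N/2 increments are independent
    return phase[:, :, 1:half].max(axis=2)  # the DC term (n = 0) is skipped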

3.2.1.4 Principal component thermography (PCT)

In PCT, the thermographic data is projected from its original space to its eigenspace in order to increase its variance and reduce its covariance. The data is decomposed into a set of orthogonal statistical modes, known as empirical orthogonal functions (EOFs), obtained through singular value decomposition (SVD), where the first components contain the maximum variance. The first EOF represents the most characteristic variability of the data, the second EOF contains the second most important variability, and so on. SVD extracts the spatial and temporal information from a thermographic matrix in a compact or simplified manner. SVD is close to principal component analysis (PCA), with the difference that SVD simultaneously provides the principal components in both row and column spaces. In order to apply the SVD to thermographic data, the 3D thermogram matrix representing time and spatial variations has to be reorganized as a 2D M × N matrix A, where M is the total number of pixels and N is the total number of images. This can be done by rearranging the thermogram of each time step as a column of A, in such a way that time variations occur column-wise while spatial variations occur row-wise. Under this configuration, the matrix A can be decomposed into three matrices U, Σ and V, which facilitate the computation of the principal components, as follows [60], [57], [58]:

A = U \Sigma V^T \qquad (3.6)

The matrix U consists of the EOFs that represent the spatial variation of the data set. Each column of U gives the coordinates of the data in the space of principal components. The matrix Σ is a diagonal matrix with the singular values on its diagonal; these singular values are the eigenvalues associated with the corresponding eigenvectors in the matrix V. The columns of the matrix V, or the rows of the matrix V^T, are the principal components or eigenvectors of the data set, sorted in descending order of magnitude. Usually, the original data can be adequately represented with only a few EOFs. Typically, a 1000-thermogram sequence can be replaced by 10 or fewer EOFs [60].
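A minimal Python/NumPy sketch of this reorganization and decomposition is given below; the cube array, the centring of A and the number of retained EOFs are assumptions of the example, not a prescription.

import numpy as np

def pct(cube, n_eofs=10):
    # cube: thermogram sequence, shape (Nx, Ny, N)
    nx, ny, n = cube.shape
    A = cube.reshape(nx * ny, n)                      # M x N matrix, M = Nx*Ny pixels, Eq. (3.6)
    A = A - A.mean(axis=0, keepdims=True)             # optional centring of the data
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # columns of U: spatial EOFs, reshaped back into images for inspection
    eof_images = U[:, :n_eofs].reshape(nx, ny, n_eofs)
    return eof_images, s, Vt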

Vrabie et al. proposed in [62] another processing method based on PCT and higher-order statistics (HOS) for defect identification. In this processing method, the SVD is used to separate the recorded data cube into orthogonal subspaces as in Eq. (3.6). The authors supposed that the data matrix A is affected by an energetic mean response of the analyzed sample (a sum of exponentials for one recorded signal). This response depends on the acquisition pixel, due to the configuration of the system and the environment (relative position of the camera with respect to the sample and the excitation sources, ambient temperature, etc.). This information can be extracted by the most energetic subspace. On the other hand, the recorded data is polluted by noise (electronics, etc.). As the noise is supposed white and uncorrelated, it is generally modeled by the least energetic subspace. The decomposition of the data into the corresponding subspaces is given by:

A(k, t) = D_{trend}(k, t) + D_{useful}(k, t) + D_{noise}(k, t) = \sum_{i=1}^{m} U_i \Sigma_i V_i^T + \sum_{i=m+1}^{n} U_i \Sigma_i V_i^T + \sum_{i=n+1}^{N} U_i \Sigma_i V_i^T \qquad (3.7)

where k and t are two indices depending on N and M, respectively. The subspace D_{trend}, constructed with the first, most energetic m vectors, models the mean response of the sample and the environment; the subspace D_{noise}, constructed with the last N − n vectors, characterizes the uncorrelated white noise. The useful information is spanned by the remaining vectors in the D_{useful} subspace, and is interpreted as realizations of a stationary (supposed ergodic) random process. The result of the SVD is analyzed in terms of the choice of the numbers of singular values, m and n, to be kept for constructing the useful subspace D_{useful}. The choice of these values is made on the basis of the energies of the resulting subspaces, which depend on the recorded signals. HOS estimators - the third- and fourth-order normalized statistics, namely the skewness κ3(k) and the kurtosis κ4(k) - are then computed on this subspace for each acquisition pixel. These values can then be reshaped in a matrix format indexed by the coordinates (x, y), providing two images for the diagnosis of the analyzed sample.
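The following Python sketch illustrates one possible implementation of this subspace separation and of the computation of the skewness and kurtosis maps; the indices m and n_useful delimiting the useful subspace are left as inputs, since their choice depends on the recorded signals, and all names are introduced for the example only.

import numpy as np
from scipy.stats import skew, kurtosis

def hos_maps(cube, m, n_useful):
    # cube: thermogram sequence, shape (Nx, Ny, N)
    # trend subspace: singular triplets 1..m ; useful subspace: m+1..n_useful ; noise: the rest
    nx, ny, nt = cube.shape
    A = cube.reshape(nx * ny, nt)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    D_useful = U[:, m:n_useful] @ np.diag(s[m:n_useful]) @ Vt[m:n_useful, :]   # Eq. (3.7)
    skew_map = skew(D_useful, axis=1).reshape(nx, ny)       # kappa_3 for each pixel
    kurt_map = kurtosis(D_useful, axis=1).reshape(nx, ny)   # kappa_4 for each pixel
    return skew_map, kurt_map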

3.2.2 Limitations

All the techniques presented previously are based on data reduction algorithms, where the initial data cube containing the image sequence is reduced to only one or a few images, which are then manually analyzed by an experienced observer or automatically post-processed in order to detect the existing anomalies within the inspected part. On the automated side, different algorithms have been proposed; the most used are thresholding and edge detection operators, such as Sobel, Roberts and Canny, applied to the processed (contrast, PPT, PCT) images [60].

In the image normalization technique, the obvious difficulty is to find the relevant images used to normalize the average of the total images in the sequence. In thermal-contrast-based methods (basically, ATC and DAC), a non-defective region (sound area Sa) or the time value t' (a value between the instant when the pulse is launched and the precise moment when the first defective spot appears in the thermogram sequence) must be selected in order to calculate the thermal contrast. This is the main drawback of thermal contrast methods when automated analysis is needed or when nothing is known about the specimen and/or the anomalies.

In PPT, the pixels of the image sequence are transformed from the time domain to the frequency domain using the FFT, and the phase images are computed from the real and imaginary parts of the complex transform. Instead of analyzing the phase images φn at a particular frequency, it was found [55] that a better approach is to look at the maximum value of the phase. Such an image, φmax, is obtained by considering, for each pixel (x, y), the maximum value of the phase computed with Eq. (3.5). This data reduction algorithm, as any other thermographic technique, is not without drawbacks. The noise content is considerable, especially at high frequencies, and a de-noising step is therefore often required [61]. The φmax images calculated from the data cubes used in our experiments, presented in Table 3.4 in paragraph 3.3.2.2, are shown in Figure 3.1. The obtained φmax images are very noisy and cannot be used to detect the existing anomalies within the inspected samples.

PCT is used to reduce the dimensionality of the original data space in order to work with smaller spaces. But the choice of the dimension of the reduced data space is a critical point of this method, which we try to address and study in this work.

Figure 3.1 — φmax images calculated for the cubes a) S1 - Lockin (1 Hz), b) S1 - Pulse (1 s), c) S2 - Lockin (1 Hz), d) S2 - Pulse (1 s), e) S3 - Lockin (1 Hz) and f) S3 - Pulse (5 s), by considering, for each pixel, the maximum value of the phase computed with Eq. (3.5).

Figure 3.2 — examples of (a) a thermogram sequence, and (b) the temperature profile of the pixel p at coordinates (i, j).

3.3 Proposed method

3.3.1 Problem formulation and approach

3.3.1.1 Construction of hypertemporal cubes

In IRT, the acquired thermal images are grouped in a sequence of thermograms (Figure 3.2a), where the first two dimensions represent the spatial information (pixel positions) and the third dimension represents the variation of the temperature of each pixel over time, also called the temperature profile (Figure 3.2b). This data structure is reminiscent of hyperspectral cubes, which have the same structure except that the third dimension represents the spectral response of each pixel with respect to the wavelength. Figure 3.2 shows a thermogram sequence with respect to the acquisition time and the temperature profile of the pixel p at coordinates (i, j); ∆t is the sampling time.

This shows that, as the structure of the acquired IRT data is compatible with multi- and hyperspectral imaging (HSI) algorithms, such approaches can be used for defect detection in thermographic sequences. It is true that these algorithms were basically developed for remote sensing applications, but this does not prevent their use, as the only difference is that the spectral information is replaced by temporal information, as long as this temporal information is characteristic of the observed material and can be used as a temporal signature.


Figure 3.3 — (a) data cube dimensions, (b) thermogram at t500 and (c) temperature rise and decay curves for 3 different objects.

It has already been shown in chapter 2 and in [63] that it is possible to apply HSI algorithms for metallic defect detection on such pseudo-spectral cubes corresponding to different lighting modalities: white light and monochromatic lights in combination with polarization.

The acquired thermal images are arranged in a data cube, in ascending order of acquisition time. The first image corresponds to time t1, the second to time t2, and so on until the last image in the cube, which corresponds to time tN (Figure 3.3a).
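As a simple illustration, this cube construction can be sketched in Python/NumPy as follows; the list of 2-D thermograms and the variable names are assumptions of the example.

import numpy as np

def build_hypertemporal_cube(thermograms):
    # thermograms: list of N 2-D arrays (Nx, Ny), ordered by acquisition time t1 ... tN
    cube = np.stack(thermograms, axis=2)   # shape (Nx, Ny, N)
    return cube

# temperature profile of the pixel p at coordinates (i, j): cube[i, j, :]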

The defect detection is based on the temporal behavior of each pixel, where, from a statistical point of view, the defects or anomalies are defined as observations that deviate in some way from the background. The observed temperature profiles are composed of two main successive domains, corresponding respectively to the heating (temperature rise) and the cooling (temperature decay) processes of the specimen (Figure 3.3c). Usually, the behavior of the specimen is analyzed only either during the rising surface temperature or during the decay [55]. Most often, the temperature decay part is used to analyze the inspected parts [59], [64]. Basically, the specimen is briefly heated for a certain period of time and then allowed to cool; in parallel, its temperature profile is recorded. At time t1, before heat reaches the specimen's surface, a cold image is captured. Then the temperature of the material rises during the pulse. After the pulse, it decays because the energy - the thermal front - propagates by diffusion at the surface of the specimen. Later, the presence of an anomaly perturbs the propagation of the thermal wave, so that a temperature gradient between the anomaly area and the surrounding area is observed. Figure 3.3 illustrates the spatial and spectral dimensions and how the data cube is constructed from the thermogram sequence. It also shows a thermogram example at t500 and plots the temperature rise and decay curves for the heating tool, background and defect pixels. The limit between the heating and cooling parts is plotted with respect to the heating tool and the background (black and red lines, respectively). The limits of the defect pixels (blue line) are slightly shifted with respect to those of the heating tool and background pixels, due to the thermal front propagation and to the distance separating them from the heating tool.

Figure 3.4 — (a) thermogram at t50 and (b) temperature profile saturation for background, defect pixels and heating tool.

During the application of a heat pulse, the acquired thermograms may be temperature saturated (Figure 3.4), i.e. the reading is outside the calibration scale of the camera and no accurate measure can be computed. Saturated thermograms give no valuable information and must therefore be discarded from the processing stage [65]. Figure 3.4 shows an example of temperature saturation for another inspected metallic part.

In that case, we have two alternatives in order to keep only valuable information: either discard the saturated pixels, or discard the saturated temporal bands. In the following we test both, as sketched below.
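A minimal sketch of both alternatives is given below; the saturation level t_sat and the array names are assumptions of the example (in practice the saturation value is given by the calibration scale of the camera).

import numpy as np

def saturation_handling(cube, t_sat):
    # cube: thermogram sequence, shape (Nx, Ny, N); t_sat: assumed saturation level
    saturated = cube >= t_sat
    # alternative 1: discard the saturated pixels (their positions are recorded for later restoration)
    sat_pixels = saturated.any(axis=2)            # (Nx, Ny) boolean map of saturated pixels
    valid_profiles = cube[~sat_pixels]            # remaining pixels, shape (M_valid, N)
    # alternative 2: discard the saturated temporal bands (frames)
    sat_bands = saturated.any(axis=(0, 1))        # (N,) boolean vector of saturated frames
    cube_without_sat_bands = cube[:, :, ~sat_bands]
    return sat_pixels, valid_profiles, sat_bands, cube_without_sat_bands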

3.3.1.2 Detection algorithms

We make the assumption that we have no a priori information on the temporal signature, nor on the spatial location and shape of the defect. In that case, an anomaly detector is suited to detect the defect. We first consider the well-known RX detector [27], presented in chapter 1. Its formulation is

D_{RX}(x) = (x - \mu)^T \Gamma^{-1} (x - \mu) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta_{RX} \qquad (3.8)

where x is the pixel under test, and Γ and µ are respectively the estimated covariance matrix and mean vector of the reference background data.

Figure 3.5 — example of two histograms of the background distribution for (a) a normal distribution (band 9) and (b) a non-normal distribution (band 13), from the cube S1 - Lockin (1 Hz).
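For illustration, a minimal Python/NumPy sketch of Eq. (3.8) is given below, with the background statistics estimated globally on all the pixels; this global estimation, the use of a pseudo-inverse and the array layout (one temporal profile per row) are assumptions of the example.

import numpy as np

def rx_scores(X):
    # X: data arranged as (n_pixels, n_bands), one temporal profile per row
    mu = X.mean(axis=0)
    Xc = X - mu
    cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))   # pseudo-inverse for ill-conditioned cases
    # Mahalanobis distance of each pixel to the background statistics, Eq. (3.8)
    return np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)

# a pixel is declared anomalous when its score exceeds the threshold eta_RX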

Let us recall the basic assumptions of this detector:

– a homogeneous and Gaussian random background,

– a small-sized anomaly, with the same covariance matrix as, and a different mean from, the background.

In our experiments, the homogeneous background assumption is not perfectly satisfied, due to the heating tool (see Figure 3.4 for example), and depends on the imaging configuration. This lets us expect that the more disturbed the background, the less effective the detection will be.

The Gaussian assumption is not perfectly satisfied either. Indeed, if we plot the histogram (Figure 3.5) for some time values, it can be seen that the distribution sometimes looks far from Gaussian, which is enough to conclude that the multivariate N-component law is not Gaussian.

The consequence of these two limitations is that, when using the RX detector, we should not expect to reach the theoretical false alarm ratio.

Let us note that the small-size defect assumption is coherent with the considered application, as we look for notches and cracks. Furthermore, we can add the assumption that, despite its small size, the defect is of high energy, because heat accumulates inside it.

The second anomaly detector that we consider here takes the neighboring pixels into account, thus allowing small objects to be detected more reliably. Its formulation is:

D_{RARX}(x_i) = \frac{1}{\mathbf{s}\,\mathbf{s}^T}\left( \|x_i\|^2 + 2 \sum_{l \in v(i)} s_l\, x_i^T x_l \right) \qquad (3.9)

where s is a vector containing the estimated abundances s_l of the neighbors x_l of the tested pixel x_i, x denotes the whitened vector, and v(i) is the neighborhood of the tested pixel.
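The sketch below evaluates the score of Eq. (3.9) for one pixel of an already whitened cube, with a 4-connected neighborhood; the whitening step and the estimation of the abundances s_l are not shown, and all names are assumptions of the example.

import numpy as np

def rarx_score(x_w, i, j, s):
    # x_w: whitened cube, shape (Nx, Ny, K); (i, j): coordinates of the pixel under test
    # s: estimated abundances of the 4-connected neighbours (length-4 array)
    xi = x_w[i, j, :]
    neighbours = [x_w[i - 1, j, :], x_w[i + 1, j, :],
                  x_w[i, j - 1, :], x_w[i, j + 1, :]]
    cross = sum(sl * xi.dot(xl) for sl, xl in zip(s, neighbours))
    return (xi.dot(xi) + 2.0 * cross) / s.dot(s)   # Eq. (3.9)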

3.3.1.3 Dimension reduction

In the considered data, the number of dimensions is very high, typically around N = 1000, due to the thermal inertia of the materials and to the temporal sampling. It is well known that, in that case, parameter estimation, for example covariance matrix estimation, becomes difficult, due to the curse of dimensionality. On the other hand, the Hughes phenomenon states that, for such dimensions, quadratic distance measures are no longer efficient [66]. As the RX detector finally results in the Mahalanobis distance between the pixel under test and the background, we expect that a dimension reduction may be necessary to improve the results. Furthermore, dimension reduction usually involves a linear transformation of the data, which increases the proximity to a Gaussian distribution, due to the Central Limit Theorem.

Usually, dimension reduction based on an energy criterion must be used with care for anomaly detection, because anomalies can lie in components with low eigenvalues. In the considered application, the anomalies remain in the energetic components, due to the heat accumulation on the defects.

We have considered here several dimension reduction techniques. Some existing data reduction methods will be presented and discussed later, as well as how to choose the dimension of the subspace.


3.3.1.4 Is there a best direction?

We would like here to take into account both the parsimony of the spatial distribution of the defects and their energetic state. We have already supposed that the anomalies appear in the first principal components. Among those components, some contain mainly the background, and other(s) contain the anomalies. The background components do not help for anomaly detection, because we need high-contrast maps in order to perform efficient detection. Some authors have proposed a Bernoulli-Gaussian model to describe anomalies in multivariate Gaussian data [67].

We present this model in Annexe B, and we show that the kurtosis is a relevant criterion for choosing the best component among the first (energetic) ones.

In consequence, we propose to select the principal component with the highest kurtosis as the best candidate contrast map for the anomaly detection. As we keep only one component, we use neither the RX detector nor RARX, but simply perform a hypothesis test on the one-dimensional projected data:

H0 : the whitened and projected data follow the normal law N(0, 1).

H1 : the data do not follow the normal law.

We can calculate the theoretical false alarm probability as:

P_{FA,theo.} = 1 - \int_{-th}^{th} \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right) dx \qquad (3.10)

and choose the detection threshold th according to the desired false alarm ratio.

In conclusion, in this way we keep the advantage of constant false alarm detection, even if the test is not the more powerful likelihood ratio test, but a simple hypothesis test. Of course, the quality of the results depends, here again, on the validity of the hypotheses.
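The selection of the component and the thresholding according to Eq. (3.10) can be sketched as follows; the number of energetic components retained before the kurtosis criterion and the data layout are assumptions of the example.

import numpy as np
from scipy.stats import kurtosis, norm

def single_component_detection(pcs, n_energetic=10, pfa=1e-3):
    # pcs: whitened principal components, shape (n_pixels, K), ordered by decreasing energy
    # 1) among the first (energetic) components, keep the most non-Gaussian one
    kappa4 = kurtosis(pcs[:, :n_energetic], axis=0, fisher=False)
    best = int(np.argmax(kappa4))
    component = pcs[:, best]
    # 2) two-sided test against N(0, 1): P(|x| > th) = pfa under H0, Eq. (3.10)
    th = norm.ppf(1.0 - pfa / 2.0)
    return best, np.abs(component) > th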

3.3.1.5 Flowchart of the proposed method

The flowchart in Figure 3.6 illustrates the main steps involved in the proposed approach.

Figure 3.6 — flowchart of the presented approach.

Once the thermograms are acquired (1), a sequence of pre-treatments may be applied to the constructed data cube. First, either the entire data cube is treated, or spatial and spectral regions of interest (ROIs) can be selected (2). The spatial ROI corresponds to the area to be inspected, while the spectral ROI corresponds to the heating or cooling parts (Figure 3.3c), where only one of these two parts, or both, can be selected. We will show and discuss later that the detection results depend on the choice of the spectral ROI. Afterwards, the selected ROI may be pre-processed spatially and/or spectrally (3) by means of image processing techniques, in order to reduce the noise and to separate the anomaly spectra from those of the background. Then, a reduction of the data space (4) is often used, where a signal subspace is estimated in order to work in a space of smaller dimension than the original data space. In general, this space reduction leads to a loss of information, especially for targets that have low spatial extent (represented by only a few pixels). Some existing data reduction methods will be presented and discussed later, as well as how to choose the dimension of the subspace. Once the pre-treatment step is completed, HSI detectors are applied to the pre-treated data cube in order to detect the defects with a non-supervised approach (5). This means that no prior knowledge about the defects is used. The HSI algorithms used are presented in paragraph 3.3.1.2. The detection results are given as 2D maps, which are used to evaluate and compare (6) the detection algorithms and the selected pre-treatment methods by means of ROC curves.


Sample name       | S1                                 | S2                                    | S3
Sample photo      | (photo)                            | (photo)                               | (photo)
Type of anomalies | Open notches with different sizes  | Closed notches with different depths  | Open cracks
Material          | Inconel 600                        | Inconel 600                           | Inconel 600
Defect size       | Length: 5 mm, Depth: 2 mm          |                                       | Length: 8 mm, Width: 20 µm (approximately)

Table 3.1 — considered mockups for the thermography reference dataset.

3.3.2 Methods, experiments and results (application to nuclear components examination)

3.3.2.1 Experimental setup

In order to validate our proposed approach, we used three samples containing different anomalies, and we recorded a thermography reference dataset of images. The considered samples are listed in Table 3.1.

The specimens used are realistic, as we have:

– Electro-eroded anomalies: they are used for the qualification of inspection systems, such as Eddy current systems. Their detection is useful because they help to characterize the considered TT approach; in particular, other reference NDE techniques, such as ET, have been used on these kinds of anomalies.

– Real cracks: these are important defect types that can be found on the components of a nuclear reactor. The causes can be pressure, corrosion, etc.

The dataset was elaborated with different anomalies, where two recording processes, lock-in and pulsed thermography, were considered. The recording setup consists of an uncooled IR camera (Xenics GOBI 640 GigE), a tailored inductor (for eddy-current excitation, approximately 30 mm in diameter) and a mockup. The camera is placed at a certain distance and angle to the surface in order to have a constant lateral resolution and to avoid disturbing reflections (in the case of highly emissive surfaces, for example). Figure 3.7 depicts the recording setup.

Figure 3.7 — considered recording setup for the three inspected samples: (a) overview for sample S1, (b) side view for sample S2, and (c) overview for sample S3.

A current generator (from AMERITHERM INC.) is used to generate eddy currents (also called Foucault currents), which are electric currents induced within conductors by a changing magnetic field in the conductor. These circulating eddy currents have inductance and thus induce magnetic fields, causing heating effects. Pulsed and lock-in thermography processes were considered in these experiments. In PT, a short heating pulse is generated and applied for a few seconds (from 1 to 10 s) to the specimen through the inductor, and in LT, a sinusoidal wave of a few hertz (from 1 to 5 Hz) is generated and applied for 5 to 10 s. In both processes, LT and PT, the specimen is then left to cool for 5 to 10 s. The IR camera images the temperature variations as thermograms during the heating and cooling phases, with an acquisition frequency of 62 fps, and sends the acquired images to the computer, where the output data of each sequence is stored as a cube of size [Nx × Ny × N] (Nx × Ny signals of length N). Table 3.2 lists the recorded reference dataset images.


Samples                     | S1                           | S2                                           | S3
Anomaly type                | Open notch                   | Closed notch                                 | Open crack
Number of anomalies         | 1                            | 5 (0.2 mm to 1.0 mm depth, in 0.2 mm steps)  | 1
Number of lock-in sequences | 4 (1 to 4 Hz in 1 Hz steps)  | 3 (1 Hz, 3 Hz and 5 Hz)                      | 6 (1 to 5 Hz in 1 Hz steps, and 10 Hz)
Number of pulsed sequences  | 10 (1 to 10 s in 1 s steps)  | 2 (1 s and 5 s)                              | 3 (1 s, 5 s and 10 s)
Total number of sequences   | 14                           | 25                                           | 9

Table 3.2 — overview of the recorded reference dataset images.

During the heating process, the currents propagate in the vicinity of the inductor at the surface of the component. The heat generated by these currents therefore propagates from the inductor, and the heat flow is disturbed by the presence of an anomaly. For the sake of clarity, we distinguish the following image parts (Table 3.3):

1 - Without a defect (yellow pixels): corresponding to the heating tool.

2 - Without a defect (blue pixels): where the currents are propagating.

3 - With a defect (red pixels): where the currents are disturbed.

4 - Without a defect (green pixels): corresponding to the marker used in S2 (a piece of paper used to localize the defect).

5 - Without a defect (remaining pixels): corresponding to the background.

Table 3.3 shows these different image parts of the inspected ROI (inside the inductor), for the three samples S1, S2 and S3, resulting from the thermal front propagation process.

3.3.2.2 Choosing temporal ROI

As described in Section 3.2, the behavior of the inspected object is usually analyzed only either during the rising surface temperature or during the decay. The rising part (also called heating part) of the temperature profile corresponds to the period from the time when the heat is launched until the time when it is stopped, and the decay part (also called cooling part) corresponds to the period from the time when the heat is stopped until the time when the surface has completely cooled down. As the anomaly detection is based on the temporal behavior of the pixels [68], and in order to help us choose the relevant part (heating part, cooling part, or both together), where the anomaly pixels are well separated from the other pixels, we carried out a comparative study based on the false alarm rate (FAR) criterion, using the two HSI algorithms described in chapter 1, RX and RARX, corresponding to Eqs. (1.33) and (1.36) respectively. For this study we have chosen only two cubes (one lock-in and one pulse) per mockup and modality, from those presented in Table 3.2. The selected cubes are listed in Table 3.4, together with the names that will be used thereafter.

Samples | S1 (t100) | S2 (t70) | S3 (t100)
Images  | (thermograms, see legend)
Legend: 1 Heating tool. 2 Currents propagation area. 3 Defect area. 4 Marker.

Table 3.3 — different image regions.

The detection results are provided as detection maps, where the anomalies correspond to the higher values. The comparison between the detection results is done by fixing the good detection rate (GDR) to 90 % and calculating the corresponding FAR. Table 3.5 shows the FAR for each detection map corresponding to the selected cubes and to the three studied regions: heating part (HP), cooling part (CP), and both heating and cooling parts (HCP). The limits between the HP and CP regions are chosen with respect to the heating tool pixels.
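The way a FAR is associated with a fixed GDR can be sketched as follows; the detection map, the ground-truth defect mask and the quantile-based thresholding are assumptions made for this example.

import numpy as np

def far_at_fixed_gdr(score_map, defect_mask, gdr=0.9):
    # score_map: 2-D detection map (higher = more anomalous); defect_mask: boolean ground truth
    defect_scores = score_map[defect_mask]
    th = np.quantile(defect_scores, 1.0 - gdr)        # threshold detecting a fraction gdr of the defect pixels
    false_alarms = (score_map >= th) & ~defect_mask
    return false_alarms.sum() / (~defect_mask).sum()  # FAR over the non-defective pixels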

Samples | S1                 | S2                              | S3
Lock-in | Lock-in 1 Hz       | Lock-in 1 Hz (0.2 mm of depth)  | Lock-in 1 Hz
Name    | S1 - Lockin (1 Hz) | S2 - Lockin (1 Hz)              | S3 - Lockin (1 Hz)
Pulse   | Pulse 1 s          | Pulse 1 s (0.8 mm of depth)     | Pulse 5 s
Name    | S1 - Pulse (1 s)   | S2 - Pulse (1 s)                | S3 - Pulse (5 s)

Table 3.4 — selected data cubes for each sample from those described in Table 3.2.

                      |        RX          |        RARX
Sample \ Temporal ROI |  HP    CP    HCP   |  HP    CP    HCP
S1 - Lockin (1 Hz)    | 14.49 15.77 42.55  | 30.65 26.14 47.07
S1 - Pulse (1 s)      | 35.21 55.04 36.29  | 38.35 56.00 36.72
S2 - Lockin (1 Hz)    | 53.71 79.77 62.37  | 57.40 79.22 63.39
S2 - Pulse (1 s)      | 80.39 78.69 79.58  | 80.46 78.61 79.34
S3 - Lockin (1 Hz)    | 51.99 80.32 69.42  | 53.44 81.06 70.22
S3 - Pulse (5 s)      | 93.34 99.98 99.12  | 95.04 99.98 99.81

Table 3.5 — FAR (%) corresponding to GDR = 80 % when the HP, CP and HCP temporal ROIs are used from the selected cubes.

The results shown in Table 3.5 vary a lot from one sample to another, which complicates the choice of the ideal temporal ROI. That is why we suggest keeping the whole information provided about each pixel and choosing both regions, heating and cooling parts, of the temperature profile [68]. This avoids the problems of how to choose the temporal ROI and how to find the limits between the heating and cooling parts, and leans towards an unsupervised solution. In addition, another reason for choosing all the parts of the temperature profile is that the dimensionality reduction methods that will be applied to the data cubes, in order to work with smaller cubes in reduced data spaces, will make it possible to select only the relevant components.


3.3.2.3 Detection after singular value decomposition (SVD)

As described in chapter 1, SVD is a dimensionality reduction algorithm that projects the data source into a subspace of dimension K, where the images in the resulting cube are arranged in descending order of variance. The choice of K is based on the desired proportion of variance r retained in the first K eigenvalues, according to the equation:

r = \frac{\sum_{i=1}^{K} e_i}{\sum_{i=1}^{N} e_i} \qquad (3.11)

where e_i is the i-th eigenvalue on the diagonal of the matrix Σ described in Eq. (3.6).

gorithms, RX and RARX described in Eqs. (3.8) and (3.9), are then applied on the reduced

data cubes with different values of K, where the detection results are given as 2D masks after

thresholding the detection maps. We fixed the value of the probabilities of detection (40 %,

60 %, 80 % and 90 %) and we calculated the corresponding FARs. Since both algorithms RX

and RARX have given similar results, we show only the results of RARX.

S1 - SVD

Figures 3.8 and 3.9 plot the probabilities of false alarm for sample S1, for different values of K (K = 1, 2, ..., 15) and fixed probabilities of detection (PD = 40 %, 60 %, 80 %, 90 %).

Figure 3.8 — evolution of the FARs according to K for the fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S1 - Lockin (1 Hz) with SVD.

Figure 3.9 — evolution of the FARs according to K for the fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S1 - Pulse (1 s) with SVD.

The anomalies in S1 are mostly detected with very low FARs, in both the lock-in and pulse cubes. The FARs vary proportionally to the detection rates: the more the detection rate increases, the more the FARs increase. The optimal rates are obtained when 40 % of the pixels of the defect are detected. The FARs vary from 0 % to 0.07 % for S1 - Lockin (1 Hz), and from 0 % to 0.62 % for S1 - Pulse (1 s). These anomalies are easily detected with low FARs because the defects are located on the surface of the sample and are visible in all the acquired thermograms, which means that after the data space reduction step they are still present, and with a high energy. Also, their thermal profiles differ from those of the background and heating tool pixels, which makes them easy to detect with the anomaly detection algorithms used.

Table 3.6 shows the detection masks for the fixed detection rates (20 %, 30 %, 40 %, 60 %, 80 % and 90 %) and values of K (2, 6, 10 and 14), and reports the corresponding FARs for S1 - Lockin (1 Hz). The detection maps, the mean image of the cube, and the mask used to calculate the false alarm and detection probabilities are depicted at the top of the table.

S1 - Lockin (1 Hz), SVD — FAR (%) (the mean image, the mask, the detection maps and the thresholded detection masks are shown as images in the original table)

Detection rate | K = 2 | K = 6 | K = 10 | K = 14
20 %           | 0     | 0     | 0      | 0
30 %           | 0     | 0     | 0      | 0
40 %           | 0.07  | 0     | 0      | 0
60 %           | 0.20  | 0     | 0      | 0
80 %           | 0.41  | 0     | 0      | 0
90 %           | 0.70  | 0.10  | 0.05   | 0.04

Table 3.6 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S1 - Lockin (1 Hz) with SVD.

For K = 2, some pixels of the current propagation area are detected as false alarms from 40 % of detection onwards. Moreover, the linear form of the open notches begins to be detected for K greater than 2, with very low FARs. With a detection rate fixed at 90 %, only 6 pixels out of 14344 are detected as false alarms for K = 14.

Table 3.7 shows the detection maps according to K (2, 6, 10 and 14) for S1 - Pulse (1 s), and depicts the mean image of the cube and the mask used to calculate the false alarm and detection probabilities.

Table 3.7 — detection maps for different values of K (2, 6, 10 and 14) after reducing the dimensionality of the cube S1 - Pulse (1 s) with SVD (shown as images in the original, together with the mean image and the mask).

The detection masks corresponding to the fixed detection rates (20 %, 30 %, 40 %, 60 %, 80 % and 90 %) and their corresponding FARs are shown in Table C.1 in Annexe C. The optimal FARs are obtained with K > 6, with a detection rate that can reach 80 % of the pixels of the defect. For 90 % of detection, only 13 and 3 pixels out of 20864 are detected as false alarms for K = 10 and K = 14, respectively.

The optimal results for S1, in both the lock-in and pulse modalities, are obtained when low detection rates are fixed (from 60 % to 80 %) with K > 6. The low detection rates allow very low FARs to be obtained while the main form of the anomaly is still visible in the detection masks.

S2 - SVD

Figures 3.10 and 3.11 plot the probabilities of false alarm for sample S2, for different values of K (K = 1, 2, ..., 15) and fixed probabilities of detection (PD = 40 %, 60 %, 80 %, 90 %).

The anomalies in S2 are detected with very high FARs, in both the lock-in and pulse cubes. The FARs vary proportionally to the detection rates. The optimal rates are obtained when 40 % of the pixels of the defect are detected, for both the lock-in and pulse cubes. The minimum FAR (16.21 %) obtained for S2 - Lockin (1 Hz) is reached with K = 10 and a detection rate fixed at 40 %; this FAR represents a total of about 3000 pixels out of 19560. Likewise, the minimum FAR obtained for S2 - Pulse (1 s) represents about 810 pixels out of 19560.

Figure 3.10 — evolution of the FARs according to K for fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with SVD.

Figure 3.11 — evolution of the FARs according to K for fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S2 - Pulse (1 s) with SVD.

Tables 3.8 and 3.9 show the detection maps according to K (2, 6, 10 and 14) for both

cubes S2 - Lockin (1 Hz) and S2 - Pulse (1 s), and depict the mean image of these cubes and

the masks used to calculate the false alarm and detection probabilities.

Table 3.8 — detection maps for different values of K (2, 6, 10 and 14) after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with SVD (shown as images in the original, together with the mean image and the mask).

Table 3.9 — detection maps for different values of K (2, 6, 10 and 14) after reducing the dimensionality of the cube S2 - Pulse (1 s) with SVD (shown as images in the original, together with the mean image and the mask).

Only a few pixels of the upper part of the closed notches present in S2 are detected, as well as the pixels of the marker used to locate the position of the anomaly. For the cube S2 - Lockin (1 Hz), the saturated thermograms, which give no valuable information, are removed from the data cube before applying SVD. Figure 3.12 plots the temperature profiles of four pixels corresponding to the marker, the defect, the heating tool and the background.

As can be seen in Figure 3.12, the number of saturated thermograms is very large and can exceed 600 images in this case. On the other hand, the background, the heating tool and the lower part of the defect are not temperature saturated and contain valuable information. The removal of the saturated thermograms therefore also removes valuable information present in the non-saturated pixels.


Figure 3.12 — temperature profiles of four different regions of S2, from the cube S2 - Lockin (1 Hz), corresponding to four pixels of the marker, defect, heating tool and background.

In order to keep all the valuable information, we propose to keep all the thermograms and to discard only the saturated pixels from the dimensionality reduction and detection processes. First, the saturated pixels are identified and their positions are recorded. These pixels are not taken into account thereafter: they are removed from the initial data cube and another cube is created. Then, the same procedure is applied to this new cube without saturated pixels; the anomaly detection algorithms are applied after reducing the new cube with SVD. The recorded positions of the saturated pixels are then used to restore the detection maps, where all saturated pixels are set to zero.

The new curves of the probabilities of false alarm for the cube S2 - Lockin (1 Hz), for different values of K (K = 1, 2, ..., 15) and fixed probabilities of detection (PD = 40 %, 60 %, 80 %, 90 %), are plotted in Figure 3.13.

The results in Figure 3.13 show that masking the saturated pixels has strongly decreased the FARs. With 40 % of detection, the minimum FAR has been decreased by a factor of 3.17, from 23.74 % to 7.47 %. Table 3.10 shows the new detection maps according to K (2, 6, 10 and 14) for S2 - Lockin (1 Hz).

Figure 3.13 — evolution of the FARs according to K for fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with SVD, after removing the saturated pixels and without taking them into account in the reduction and detection procedures.

Table 3.10 — detection maps for different values of K (2, 6, 10 and 14) after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with SVD, without taking the saturated pixels into account in the reduction and detection procedures (shown as images in the original, together with the mean image and the mask).

We see in the detection maps in Table 3.10 that some pixels of the upper and lower edges of the anomaly are detected, contrary to the pixels of the middle part of the anomaly. The main reason why these pixels are not detected is that they are very far from the inductor, which means that they are not sufficiently heated and stay cold. Conversely, the pixels that are close to the inductor have been sufficiently heated. They have enough energy to be kept after the reduction procedure and are easily detected. Perhaps, if another form of inductor were used, more adapted to the anomaly shape in the case where a priori information about the defect shape is known, all the pixels of the anomaly could be detected. But with unsupervised surface inspection approaches, i.e. without any knowledge about the crack's shape, it is very difficult to investigate all inductor forms.


Figure 3.14 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S3 - Lockin (1 Hz) with SVD.

The detection masks corresponding to the fixed detection rates (20 %, 30 %, 40 %, 60 %, 80 % and 90 %) and their corresponding FARs are shown in Tables C.2 and C.3 in Annexe C for S2 - Lockin (1 Hz) and S2 - Pulse (1 s). The optimal FARs are obtained with K > 2 and very low detection rates (20 %). The pixels that have high temperature values in the currents propagation area are also detected as anomalies. These false alarm pixels make the FARs very high on the one hand; and since only the pixels of the upper and lower parts of the anomaly are detected, higher detection rates also make the FARs higher on the other hand. In order to obtain small FARs in the case of this anomaly, the detection rate should be as small as possible (since only the edge parts of the defect are detected), with K > 2. For K = 10, 1.4583 % of the pixels are detected as false alarms, corresponding to about 285 pixels out of 19560.

S3 - SVD

Figures 3.14 and 3.15 plot the probabilities of false alarm for sample S3, for different values of K (K = 1, 2, ..., 15) and fixed probabilities of detection (PD = 40 %, 60 %, 80 %, 90 %).

Concerning S3, the FARs vary proportionally to the detection rates: the more the detection rate is increased, the more the false alarm rates increase. The FARs reach 10 % when high detection rates (80 % - 90 %) are fixed. The optimal rates are obtained with lower detection rates (20 % - 40 %). The false alarm rates vary from 0.06 % to 1.68 % for S3 - Lockin (1 Hz), and from 0.18 % to 3.83 % for S3 - Pulse (5 s), for a detection of 40 % of the pixels of the defect.

Figure 3.15 — evolution of the FARs according to K for fixed detection rates (40 %, 60 %, 80 % and 90 %) after reducing the dimensionality of the cube S3 - Pulse (5 s) with SVD.

The main reason for these high false alarm rates is that, compared to S1, there is an additional class of pixels (the pixels where the heating tool is reflected on the surface), in addition to the other classes (background, defect and heating tool), which also has significant temperature values. In fact, the signal spaces of both classes (defect and reflection) are kept after the reduction of the data cube dimensionality. The assumption of a homogeneous background, used in the detection, is then not fulfilled. This is the reason why the pixels of these two classes appear in the detection masks, as shown in Tables C.4 and C.5 in Annexe C. Tables 3.11 and 3.12 show the detection maps according to K (2, 6, 10 and 14) for both cubes S3 - Lockin (1 Hz) and S3 - Pulse (5 s), and depict the mean image of these cubes and the masks used to calculate the false alarm and detection probabilities.

3.3.2.4 Detection after maximum noise fraction (MNF)

MNF is a denoising algorithm that transforms the original data cube to a new one where

the images are in descending order of SNR [69], [11]. The choice of the desired number of


[Table 3.11 layout: mean image, mask and detection maps for K = 2, 6, 10 and 14.]

Table 3.11 — detection maps for different values of K after reducing the dimensionality of the cube S3 - Lockin (1 Hz) with SVD.

[Table 3.12 layout: mean image, mask and detection maps for K = 2, 6, 10 and 14.]

Table 3.12 — detection maps for different values of K after reducing the dimensionality of the cube S3 - Pulse (5 s) with SVD.

images K is based on the number of leading images that exhibit a high SNR.
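For illustration, a minimal MNF sketch is given below; it is not the implementation of [69], [11] used here: the noise covariance is crudely estimated from horizontal pixel-to-pixel differences and the components are obtained from the corresponding generalized eigenvalue problem (the function name and the cube layout are assumptions).

import numpy as np
from scipy.linalg import eigh

def mnf(cube, K):
    """Minimal MNF sketch: noise-whitened PCA of the unfolded cube.
    cube: array of shape (rows, cols, bands); returns the K highest-SNR images."""
    r, c, n = cube.shape
    X = cube.reshape(-1, n).astype(float)
    X -= X.mean(axis=0)
    # crude noise estimate: difference between horizontally adjacent pixels
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, n)
    Cn = np.cov(noise, rowvar=False)
    Cx = np.cov(X, rowvar=False)
    # generalized eigenproblem Cx v = lambda Cn v; largest lambda = largest SNR
    w, V = eigh(Cx, Cn)
    T = V[:, np.argsort(w)[::-1][:K]]
    return (X @ T).reshape(r, c, K)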

We have applied the MNF algorithm on the original data cubes, and we varied the values

of K from 2 to 15. The anomaly detection algorithms are also applied on the reduced data

cubes with different values of K. We fixed the value of the probabilities of detection (40 %,

60 %, 80 % and 90 %) and we calculated the corresponding FARs. The results are shown in

Figures 3.16, 3.17, 3.18, 3.19, 3.20, 3.21 respectively for S1 - Lockin (1 Hz), S1 - Pulse (1

s), S2 - Lockin (1 Hz), S2 - Pulse (1 s), S3 - Lockin (1 Hz) and S3 - Pulse (5 s). Since both

algorithms RX and RARX have given similar results, we show only the results of RARX.

Table 3.13 shows the detection maps according to K (2, 6, 10 and 14) for all used cubes


Figure 3.16 — evolution of FARs according to K for fixed detection rates(40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S1 - Lockin (1 Hz) with MNF.

Figure 3.17 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S1 - Pulse (1 s) with MNF.

S1, S2 and S3, lock-in and pulse ; and depict the mean image of these cubes and the masks

used to calculate the false alarm and detection probabilities.

The detection masks from MNF corresponding to the fixed detection rates (20 %, 30

%, 40 %, 60 %, 80 % and 90 %) and their corresponding FARs for all used cubes are

shown in Tables D.1, D.2, D.3, D.4, D.5 and D.6 in Annexe D.


Figure 3.18 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with MNF.

Figure 3.19 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S2 - Pulse (1 s) with MNF.

The curves plotted in Figures 3.16, 3.17, 3.18, 3.19, 3.20 and 3.21 and the detection maps shown in Table 3.13 give results similar to those obtained when SVD is used on the same cubes. The optimal FARs are obtained when

low detection rates are chosen for the three samples S1, S2 and S3.


Figure 3.20 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S3 - Lockin (1 Hz) with MNF.

Figure 3.21 — evolution of FARs according to K for fixed detection rates (40 %, 60 %, 80% and 90 %) after reducing the dimensionality of the cube S3 - pulse (5 s) with MNF.

3.3.2.5 Detection after independent component analysis (ICA)

ICA, which finds the independent components (also called sources) by maximizing the statistical independence of the estimated components [70], [71], [72], has also been used in these experiments. ICA separates the source signal into additive subcomponents by assuming that

the subcomponents are non-Gaussian signals and that they are all statistically independent


[Table 3.13 layout: for each cube (S1, S2 and S3, lock-in and pulse), the mean image, the mask and the detection maps for K = 2, 6, 10 and 14 after MNF reduction.]

Table 3.13 — detection maps for different values of K after reducing the dimensionality of the cubes S1 - Lockin (1 Hz), S1 - Pulse (1 s), S2 - Lockin (1 Hz), S2 - Pulse (1 s), S3 - Lockin (1 Hz), and S3 - Pulse (5 s) with MNF.

from each other. Several criteria of non-Gaussianity can be used; we chose the kurtosis as a measure of contrast. Some authors have shown that this measure can highlight the anomalies contained in the components of maximum kurtosis [73], [74], [67]. The first independent components (ICs) are those of maximum kurtosis when the Fast-ICA algorithm is used.

We fixed the number of ICs K to 15 and applied the Fast-ICA algorithm to all the data cubes used for the three samples. Once the first K ICs are calculated, RX and RARX are applied to these images. The first 5 ICs of each cube and the detection results from RX and RARX are shown in Tables E.1 and E.2 in Annexe E.
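For illustration, a minimal sketch of this step with scikit-learn's FastICA is given below; this is not the exact implementation used in the experiments: the 'cube' nonlinearity serves as a kurtosis-like contrast and the components are explicitly re-ordered by decreasing kurtosis (the function name and parameters are assumptions).

import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def ica_components(cube, n_ics=15):
    """Extract n_ics independent components from the unfolded cube and
    rank them by decreasing kurtosis (most non-Gaussian first)."""
    r, c, n = cube.shape
    X = cube.reshape(-1, n).astype(float)
    ica = FastICA(n_components=n_ics, fun="cube", random_state=0)
    S = ica.fit_transform(X)                      # (pixels, n_ics) source images
    order = np.argsort(kurtosis(S, axis=0))[::-1]
    return S[:, order].reshape(r, c, n_ics)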

The results obtained with the Fast-ICA algorithm show that: i) the defect is present in the first independent components, ii) different parts of the defect are detected in different components and iii) the defect is confused with the false alarm pixels. However, even if the kurtosis criterion brings out the defect in the first independent components, the detection results are not sufficient.


3.3.2.6 Choice of the virtual dimension, K

The results from SVD and MNF vary depending on the choice of the desired number of images K. After dimensionality reduction, the images obtained with SVD are arranged in descending order of variance, and those obtained with MNF in descending order of SNR.

In order to estimate the virtual dimensionality (VD) of the signal subspace from the original data space of the used cubes, we first used two information criteria: the Akaike information criterion (AIC) and the minimum description length (MDL). These criteria are expressed for each component k as:

AIC(k) = -2M \sum_{k'=k+1}^{I} \log\beta_{k'} + 2M(I-k)\log\left(\frac{1}{I-k}\sum_{k'=k+1}^{I}\beta_{k'}\right) + 2k(2I-k)   (3.12)

MDL(k) = -M \sum_{k'=k+1}^{I} \log\beta_{k'} + M(I-k)\log\left(\frac{1}{I-k}\sum_{k'=k+1}^{I}\beta_{k'}\right) + \frac{1}{2}k(2I-k)\log M   (3.13)

where M is the number of columns in A and {β_1, · · · , β_I} are the I eigenvalues of the covariance matrix of A.
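For illustration, a minimal numpy sketch of these two criteria in the form written above is given below, assuming the eigenvalues are sorted in descending order (the function name is hypothetical).

import numpy as np

def aic_mdl(eigvals, M):
    """Evaluate AIC(k) and MDL(k), k = 1..I-1, from the descending eigenvalues
    of the covariance matrix and the number of samples M; return the minimizers."""
    beta = np.asarray(eigvals, dtype=float)
    I = len(beta)
    aic, mdl = [], []
    for k in range(1, I):
        tail = beta[k:]                      # beta_{k+1} .. beta_I
        log_sum = np.sum(np.log(tail))
        log_mean = np.log(tail.mean())
        aic.append(-2 * M * log_sum + 2 * M * (I - k) * log_mean + 2 * k * (2 * I - k))
        mdl.append(-M * log_sum + M * (I - k) * log_mean + 0.5 * k * (2 * I - k) * np.log(M))
    # the estimated VD is the value of k minimizing each criterion
    return 1 + int(np.argmin(aic)), 1 + int(np.argmin(mdl))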

Furthermore, the hyperspectral signal identification by minimum error (HySime) algorithm

gives both denoised data and an estimation of the signal dimensionality [2]. Table 3.14 reports

the original space dimension of all the used cubes and their estimated VDs with AIC, MDL

and HySime.

Cubes                 N      VD-AIC   VD-MDL   VD-HySime
S1 - Lockin (1 Hz)    939    789      309      251
S1 - Pulse (1 s)      967    857      329      228
S2 - Lockin (1 Hz)    1227   100      33       31
S2 - Pulse (1 s)      974    903      316      9
S3 - Lockin (1 Hz)    824    765      302      209
S3 - Pulse (5 s)      952    940      321      209

Table 3.14 — estimated VDs with AIC, MDL and HySime.



Figure 3.22 — estimated values of AIC, MDL and MSE according to K for the cubes a)S1 - Lockin (1 Hz), b) S1 - Pulse (1 s), c) S2 - Lockin (1 Hz), d) S2 - Pulse (1 s), e) S3 -

Lockin (1 Hz) and f) S3 - Pulse (5 s).

The estimated values of AIC, MDL and the mean squared error (MSE) obtained with HySime subspace estimation according to K are plotted in Figure 3.22, from which the minimum

values of AIC, MDL and MSE are used to select the VDs of the subspaces.

The drawback of these VD-estimation methods is that the values of K are overestimated,


Estimating the VD from the energy (respectively SNR) obtained by SVD (respectively MNF)
N : number of components in the source signal
th = 10^-8 : fixed value of the threshold for SVD (for MNF, th = 10^-4)
For i from 1 to N - 1
    If (Energy(i) - Energy(i+1)) / sum(Energy) < th
        K = i
        Break
    End if
End for

since we have experimentally shown that optimal FARs are obtained with subspaces of small dimensions. Actually, we do not want to keep all the signal information, but mainly the information relative to the anomalies. For this reason, we propose a practical method to estimate the VD based on the energy and the SNR for the SVD and MNF algorithms respectively.

The idea is to calculate the energy and the SNR of each component after applying SVD and MNF respectively, and to check their evolution according to K. The VD is obtained for SVD (respectively MNF) when the change of the energy (respectively SNR) between two adjacent components is no longer significant after normalization, knowing that the energy and the SNR of the data are in descending order of K. The fixed thresholds for SVD and MNF are respectively 10^-8 and 10^-4. This means that, starting from the first component, the first index at which the normalized difference between two adjacent components falls below the fixed threshold is retained as the virtual dimension K. This choice of K for SVD and MNF is summarized in the pseudocode box above.
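For illustration, a minimal Python sketch of this selection rule is given below (the function and variable names are hypothetical).

import numpy as np

def estimate_vd(values, th):
    """Pick K as the first index where the normalized drop between two
    adjacent components becomes smaller than the threshold.
    values: component energies (SVD) or SNRs (MNF), in descending order."""
    values = np.asarray(values, dtype=float)
    total = values.sum()
    for i in range(len(values) - 1):
        if (values[i] - values[i + 1]) / total < th:
            return i + 1          # K counted from 1, as in the pseudocode
    return len(values)

# K_svd = estimate_vd(singular_energies, th=1e-8)
# K_mnf = estimate_vd(component_snrs,   th=1e-4)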

The calculated values of the energy and SNR for all the data cubes are plotted in Figure 3.23. The curves are normalized between 0 and 1 in order to plot both in the same graph. The values of K obtained after applying the proposed algorithm to calculate the VDs of the reduced data spaces from the energy and the SNR, obtained respectively by SVD and MNF, are reported in Tables 3.15 and 3.16. The detection maps from the RX and RARX algorithms are also shown there for all the used data cubes.

The ROC curves of the detection maps shown in Tables 3.15 and 3.16 are plotted in Figure 3.24 for

the algorithm RARX.



Figure 3.23 — energy and SNR values according to K for the cubes a) S1 - Lockin (1 Hz),b) S1 - Pulse (1 s), c) S2 - Lockin (1 Hz), d) S2 - Pulse (1 s), e) S3 - Lockin (1 Hz) and f)

S3 - Pulse (5 s) after applying SVD and MNF respectively.



Figure 3.24 — ROC curves of the detection maps shown in Tables 3.15 and 3.16 for the cubes a) S1 - Lockin (1 Hz), b) S1 - Pulse (1 s), c) S2 - Lockin (1 Hz), d) S2 - Pulse (1 s), e) S3 - Lockin (1 Hz), and f) S3 - Pulse (5 s) after using RARX on the reduced cubes with SVD and MNF. In the case of S2 - Lockin (1 Hz), where saturated pixels are present in the cube, an additional

ROC curve is plotted corresponding to the detection without saturated pixels (WSPs).


[Table 3.15 layout: RX and RARX detection maps for the reduced cubes at the estimated VDs. S1 - Lockin (1 Hz): SVD (K = 6), MNF (K = 14); S1 - Pulse (1 s): SVD (K = 10), MNF (K = 7); S2 - Lockin (1 Hz): SVD (K = 8), SVD without saturated pixels (K = 5), MNF (K = 38).]

Table 3.15 — estimated VDs from the energy and SNR and their corresponding detection maps from RX and RARX for the data cubes: S1 - Lockin (1 Hz), S1 - Pulse (1 s) and S2 - Lockin (1 Hz). In the case of S2 - Lockin (1 Hz), where saturated pixels are present in the cube, the VD of the cube without saturated pixels (WSPs) and the detection maps are added.

3.3.2.7 Proposed one-dimensional approach with principal component analysis

(PCA)

In the methods (such as SVD and MNF) based on the calculation of criteria taking into account the statistical moment of order 2 (energy, and SNR, which is an energy ratio), the assumed


[Table 3.16 layout: RX and RARX detection maps for the reduced cubes at the estimated VDs. S2 - Pulse (1 s): SVD (K = 3), MNF (K = 24); S3 - Lockin (1 Hz): SVD (K = 12), MNF (K = 19); S3 - Pulse (5 s): SVD (K = 9), MNF (K = 22).]

Table 3.16 — estimated VDs from the energy and SNR and their corresponding detection maps from RX and RARX for the data cubes: S2 - Pulse (1 s), S3 - Lockin (1 Hz) and S3 - Pulse (5 s).

hypothesis is that the defect has enough energy to be in the first components after reducing

the dimensionality of the data space. Physically, this is justified, because in thermography,

the heat is accumulated at the defect pixels.

In the methods (such as ICA) based on a higher-order statistical moment (kurtosis, order 4), the assumed hypothesis is that the defect is independent of the background. With the kurtosis criterion, we try to find the non-Gaussian components. A Gaussian distribution is symmetric and has a kurtosis value (excess kurtosis) equal to zero, as shown in Figure 3.25a. The calculation of the kurtosis value therefore gives an idea of the distribution of each component. The presence


Figure 3.25 — example of two distributions a) with kurtosis = 0 and b) kurtosis > 0

of anomalies deviates from a Gaussian distribution, as shown in Figure 3.25b, and increases

the value of the kurtosis.

If we apply PCA directly to the source data cubes, since it is based on a second-order criterion, it may not work perfectly. The idea here is therefore to take both criteria (energy and kurtosis) into consideration. When we look at the first principal components shown in Tables 3.17, 3.18 and 3.19 (the first principal components for the other cubes are shown in Tables F.1, F.2 and F.3 in Annexe F, and those obtained from MNF in Tables G.1, G.2, G.3, G.4, G.5 and G.6 in Annexe G), we see that some components are more useful than others. We then chose the component with the largest kurtosis value, corresponding to the most non-Gaussian distribution. We suppose that this selected component satisfies both criteria: it contains a significant amount of energy and is the most non-Gaussian.

The main steps of the proposed approach are as follows:

❶ First, among the principal components, we look for the direction of the maximum kurtosis.

❷ Then we whiten the data cube in order to have a normal distribution, N (0, 1), in the

direction of the maximum kurtosis.

❸ We project the data into the direction of the selected component. After projection, the

obtained data is a one-dimensional vector.

❹ Then we perform a hypothesis test for each pixel:

H0 : anomaly absent, x follows a normal law, x ∼ N (0, 1).

H1 : anomaly present, x does not follow a normal law.
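As an illustration of steps ❶ to ❹, a minimal Python sketch of this one-dimensional procedure is given below; the cube layout (rows × columns × frames), the number of candidate PCs and the function name are assumptions, and the whitening step is simplified to a standardization of the selected component.

import numpy as np
from scipy.stats import kurtosis, norm

def detect_anomalies_pca1(cube, n_pcs=10, pfa=1e-3):
    """One-dimensional detection sketch: pick the most non-Gaussian PC,
    project the data onto it and threshold each pixel.
    cube: thermal sequence of shape (rows, cols, frames)."""
    r, c, n = cube.shape
    X = cube.reshape(-1, n).astype(float)
    X -= X.mean(axis=0)                          # center the temporal profiles
    # PCA via SVD of the centered data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_pcs] * s[:n_pcs]            # principal component images (as vectors)
    # choose the direction with the largest (excess) kurtosis
    k = np.argmax(kurtosis(scores, axis=0, fisher=True))
    # standardize the selected component so that it is N(0, 1) under H0
    y = scores[:, k] / scores[:, k].std()
    eta = norm.ppf(1.0 - pfa / 2.0)              # threshold for the target false alarm rate
    return (np.abs(y) > eta).reshape(r, c)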

We vary the values of the threshold η and we calculate the corresponding probability of


[Table 3.17 layout: the first six principal components, PC (1) to PC (6), of S1 - Lockin (1 Hz).]

Table 3.17 — first principal components selected from S1 - Lockin (1 Hz) after estimating the VD based on the energy distribution, with the same method as with SVD.

[Table 3.18 layout: the first five principal components, PC (1) to PC (5), of S2 - Lockin (1 Hz), without the saturated pixels.]

Table 3.18 — first principal components selected from S2 - Lockin (1 Hz) - without taking into account the saturated pixels - after estimating the VD based on the energy distribution.

false alarm (pfa), which is the probability of obtaining a value greater than η. The function used to calculate the pfa is the following:

pfa = 2\int_{\eta}^{+\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}dx = 2\left(1-\int_{-\infty}^{\eta}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}dx\right)   (3.14)

An explanation scheme of the calculation of the pfa is shown in Figure 3.26, where the

pfa is twice the integral of the distribution from η to +∞ (blue area).
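For illustration, the relation between the threshold η and the pfa given by equation (3.14), and its inverse used to set η for a target false alarm rate, can be evaluated with the standard normal CDF; a minimal sketch follows (the function names are hypothetical).

from scipy.stats import norm

def pfa_from_threshold(eta):
    # two-sided false alarm probability under H0: x ~ N(0, 1)
    return 2.0 * (1.0 - norm.cdf(eta))

def threshold_from_pfa(pfa):
    # inverse relation: eta such that 2 * (1 - Phi(eta)) = pfa
    return norm.ppf(1.0 - pfa / 2.0)

# e.g. threshold_from_pfa(1e-3) is about 3.29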


[Table 3.19 layout: the first twelve principal components, PC (1) to PC (12), of S3 - Lockin (1 Hz).]

Table 3.19 — first principal components selected from S3 - Lockin (1 Hz) after estimating the VD based on the energy distribution.

Figure 3.26 — probability of false alarm calculated from the projected data into theselected principal component.

The advantage of this approach is that we can determine the theoretical threshold value by fixing the false alarm rate, since RX is a CFAR algorithm. Figure 3.27 shows the FARs for all the used data cubes calculated from the proposed approach (PCA with one principal component, labeled PCA (1 PC)), the theoretical FAR (labeled PFA Th.), and



Figure 3.27 — FARs obtained with PCA (one principal component), theoretical FAR,SVD and MNF for the cubes a) S1 - Lockin (1 Hz), b) S1 - Pulse (1 s), c) S2 - Lockin (1Hz) without taking into account the saturated pixels, d) S2 - Pulse (1 s), e) S3 - Lockin (1

Hz) and f) S3 - Pulse (5 s).

SVD and MNF with the VDs obtained from the algorithms described in section 3.3. In addition, the FARs obtained with SVD without taking into consideration the saturated pixels for S2 - Lockin (1 Hz) are plotted in Figure 3.27c.

The obtained FARs vary from one sample and modality to another. The FAR with PCA (1 PC)


has been greatly improved for S1 - Lockin (1 Hz). MNF gives the best FARs for S1 - Pulse (1 s), followed by SVD. For S2, the FARs are very close, and for S3, SVD gives mostly better results for both lock-in and pulse modalities.

These results are mainly related to the quality of the original data cubes: the images should be acquired under ideal conditions in order to be of good quality, with no saturated pixels, a homogeneous background, low noise and few acquisition artifacts.

3.3.2.8 Performance analysis

The best performance of all the experiments is obtained with the one-direction approach PCA-1PC/hypothesis test, on defect S1 with Lockin (1 Hz). Indeed, we obtain a false alarm rate of 2 × 10−4 for 100 % of the defect detected. However, this method is not always the best; it depends on the quality of the data cube. For sample S2, all the proposed methods give similarly poor results, due to the poor quality of the acquisition.

For sample S3, SVD/RARX with Lockin (1 Hz) gives the best results, showing that this method is quite robust to perturbations such as an inhomogeneous background. Compared to detection on the original data cubes, the total inspection time when SVD is used to reduce the dimensionality of the data is significantly increased (see Table 3.20), but the detection performance is substantially improved by the use of dimension reduction. Moreover, using a reduced version of SVD, such as the truncated SVD, to determine only the first components should be much quicker and more economical than the full SVD. All simulations were done with Matlab (R2009b).
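As a rough illustration of this remark (in Python with scikit-learn rather than the Matlab environment used for the experiments), a truncated SVD computes only the K leading components of the unfolded cube, which is typically much cheaper than the full decomposition; the matrix below is a synthetic stand-in.

import time
import numpy as np
from sklearn.decomposition import TruncatedSVD

# synthetic stand-in for an unfolded 80 x 80 x 900 cube (one temporal profile per row)
X = np.random.default_rng(0).standard_normal((80 * 80, 900))
K = 6

t0 = time.perf_counter()
scores_trunc = TruncatedSVD(n_components=K, random_state=0).fit_transform(X)
t1 = time.perf_counter()
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # full decomposition, for comparison
scores_full = U[:, :K] * s[:K]
t2 = time.perf_counter()
print(f"truncated: {t1 - t0:.2f} s, full: {t2 - t1:.2f} s")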

For a data cube of 80 × 80 × 900, the one-dimensional approach with PCA needs about 2 seconds to find the component with the maximum kurtosis, project the data and detect the anomaly; the approach based on MNF needs about 3 seconds, and those based on ICA (with 15 ICs) and SVD need about 45 and 30 seconds respectively.

The proposed approach, based on selecting only the most abnormal PC with PCA, is fast in terms of computation time compared to those using SVD, MNF and ICA. However, this approach strongly depends on the quality of the original data. The direction of the data projection, selected from the PCs, has enough energy to be among the first PCs and has the maximum value of kurtosis. Taking into account a compromise between both criteria could be a good


                     Original data cube                             Reduced data cube
Sample               Size (pixels)      Detection   FAR     Size (pixels)    Reduction   Detection   FAR
                                        time (s)    (%)                      time (s)    time (s)    (%)
S1 - Lockin (1 Hz)   88 × 163 × 939     2.37        17.23   88 × 163 × 6     50.00       0.11        0
S1 - Pulse (1 s)     128 × 163 × 967    3.05        12.53   128 × 163 × 4    99.70       0.21        0.62
S2 - Lockin (1 Hz)   120 × 163 × 1227   5.19        18.83   120 × 163 × 4    139.37      0.19        10.39
S2 - Pulse (1 s)     120 × 163 × 974    3.03        38.97   120 × 163 × 3    93.45       0.18        25.26
S3 - Lockin (1 Hz)   124 × 163 × 824    2.18        29.28   124 × 163 × 7    94.49       0.20        0.07
S3 - Pulse (5 s)     124 × 163 × 952    3.02        97.96   124 × 163 × 6    103.72      0.20        0.95

Table 3.20 — comparison between original and reduced data cubes with SVD.

perspective for choosing the important components, instead of choosing only the one with maximum kurtosis.

3.4 Conclusion

In this chapter, an unsupervised approach to surface and sub-surface defect detection for the inspection of nuclear metallic components has been proposed. It is based on the use of induction thermography and HSI algorithms originally dedicated to remote sensing applications. A dataset of thermal cubes, where the temporal behavior of each image pixel is recorded, has been established for three metallic parts containing different types of anomalies, such as open cracks and open and closed notches with different sizes and depths, by means of lock-in and pulsed thermography techniques. One pulsed and one lock-in thermal cube for

each inspected sample were considered in our experiments.


Usually, either the rising or the decay part of the surface temperature profile is analyzed to inspect the specimens. In our approach, we proposed to consider both parts of the temperature

profile and to keep the whole information about the temporal behavior of each pixel. This

solves the problem of choosing the temporal ROI when automated inspections are investigated.

The space dimensions of the selected data cubes were then reduced. Some existing data denoising and dimensionality reduction algorithms, such as SVD, MNF, PCA and ICA, were used in order to work with data spaces smaller than the original ones, which can exceed 950 thermograms. Usually, the initial data cubes are reduced to only one or a few images that will

be manually analyzed by an experienced observer or automatically post-processed (mostly,

thresholding and edge detector operators are used) in order to detect the anomalies. In this

work the reduced data cubes have been analyzed by means of anomaly detection algorithms

to obtain the existing anomalies within the inspected parts, with no prior knowledge about

the defects.

A comparison study has been done on the choice of the dimension of the reduced data

spaces based on FARs corresponding to different dimensions and detection rates of the pixels of

the anomalies. The detection maps, resulting from the used algorithms RX and RARX, have

been compared for different dimensions of the reduced data spaces and different detection

rates of the anomaly pixels. The results show that the detection strategy allows detecting

compact anomalies with very low false alarm rates when low detection rates are fixed. The results are better when only three main classes are present within the data (background, heating tool and defect pixels) and no additional perturbation pixels are present in the scene, corresponding to the reflection of the heating tool on the surface or to an additional marker in the scene.

An empirical method for estimating the VD of the reduced data, based on the energy and

SNR evolution, has been presented and compared to other VD-estimation methods based on

information criteria. This can be used to estimate the VD of the sub-signals for automated

inspection approaches.

The detection maps of the used algorithms, RX and RARX, applied to these reduced cubes in an unsupervised way show interesting results, where the detection performance is considerably improved compared to when the whole data cube is considered.

Finally, a one-dimensional reduction approach has been proposed. It consists of using the


PCA algorithm to reduce the original data space, where the most abnormal component was

selected from the first PCs based on the maximum value of kurtosis. The original data were whitened, projected onto the direction of this component, and then a hypothesis test was performed, testing each pixel to determine whether or not it follows a normal distribution. We calculated the probabilities of false alarm for different threshold values. The calculated FARs have been compared to those obtained with SVD and MNF and to the theoretically calculated ones.

The proposed dimensionality reduction approach allows fast detection, which is an important

parameter in the context of industrial applications, and gives better results when the thermographic images are acquired under ideal conditions, i.e. with good quality and as little noise as possible. When the image quality is not sufficient, the approach with SVD/RARX shows

good robustness and should be preferred.


CHAPTER 4

Structured light for the inspection of free-form metal surfaces

4.1 Introduction

4.1.1 Motivation of structured light

Automated visual inspection (AVI) tasks are often concerned with surfaces, such as car

body parts, machined surfaces, painted surfaces, dies and molds, etc. Inspection of surface

defects has become a critical task for manufacturers who strive to improve product quality

and production efficiency. Surface defects can affect not only the appearances of products but

also their functionality, stability, safety, etc. Large and obvious surface defects, such as dents, scrapes and scratches, are usually inspected by AVI systems, where image processing techniques

play a crucial role.

Works on AVI have been carried out on various industrial components and products, such

as fired ceramic tiles ; glass bottles ; variable data prints used in printing industries ; cast

and welded components of foundry industries ; directionally textured surfaces which arise in

textile fabrics, natural wood and machined surfaces ; and textured surfaces found in industry,

such as milled surfaces, leather and sandpaper. With respect to the automobile and metal industries, which account for most of the AVI work done in this area, various inspection systems have been

developed to inspect mass produced custom parts, bearing rolls, sheet panels and metallic

surfaces, bumpy metallic surfaces, aluminum castings and welds, smooth chrome-plated, raw


and stamped sheet metal parts, etc.

The objects to be inspected usually have complex structures and different textures. Sur-

face defects can have different shapes, sizes, and physical aspects. Thus, the difficulty within

the machine vision domain is to build intelligent vision systems, which must be at least as

good as the human inspector in terms of quality control. The AVI tasks can be more com-

plicated with surfaces that have free-form shapes comparing to those that have flat or simple

shapes. Complex surfaces have been more and more used in the design of parts. With the

increasing and extensive application of free-form surfaces in many fields - such as design and

manufacturing of molds ; patterns ; and models in the automotive, biomedical, and aerospace

industries - the inspection process of such surfaces is becoming more and more important in

order to reduce the inspection time and costs. The inspection of parts with free-form surfaces

is becoming increasingly critical due to increasing requirements of higher precision in location

and shape estimation, as well as higher detection rates, and to the complexity of the geometry.

In the last several decades, significant research and development efforts have been made for

the design and manufacturing of products/objects consisting partially or solely of free-form

surfaces [75], [76], [77].

In automotive and metal industries, there is a need for accurately measuring the 3D shapes

of surfaces to speed up and ensure product development and manufacturing quality by using

non-contact techniques. 3D measurement constitutes an important topic in computer vision.

It has different applications, such as range sensoring, industrial inspection of manufactured

parts, reverse engineering (digitization of complex and free-form surfaces), object recognition,

3D map building, and others. Moreover, automatic inspection and recognition issues can

be converted to the 3D shape measurement of an object under inspection [78]. A variety

of optical techniques of 3D shape measurement have been proposed by a large number of

researchers, such as time/light in flight, laser scanning, laser tracking system, interferometry,

photogrammetry, Moiré and structured light methods. These techniques have the advantage

that they are contactless and can work at a distance. They differ in many aspects such as

precision, measurement time, type and complexity of the measurement object, affordability.

A very successful branch of optical shape acquisition is structured light. The structured

light methods have the following merits : non-contact and nondestructive ; easy implementa-

tion ; no need to move the object ; and fast full-field measurement. We propose in this chapter


an inspection technique for reflective free-form surfaces based on an industrial machine vision

system realized at the Fraunhofer Institute for Integrated Circuits iis in Fuerth, Germany. The

method is based on structured light and fringe analysis techniques. The task consists of the inspection of metallic car wheel surfaces for the detection of defective surface regions. The inspection system is installed at the Fraunhofer laboratory and consists of a fringe projection component and workpiece and camera positioning subsystems. A set of sinusoidal phase-

shifted fringe patterns are sequentially sent through a computer to a DLP-projector and then

projected onto a light-translucent screen. The reflection of the screen on the surface of the inspected car wheel is observed by a CCD camera and is sent to the computer as an image. The

projection system is based on a combination of deflectometric approach and phase-shifting

technique. The defect detection is based on a developed fringe analysis algorithm where the

acquired phase-shifted patterns are used to estimate the iso-phase curves, which are analyzed

to detect the defects.

4.1.2 History of previous works

One of the primary aims of machine vision is to provide surface measurement for rapidly

acquiring the three-dimensional coordinates of points on surfaces.

The term surface imaging refers to techniques that are able to measure the (x, y, z)

coordinates of points on the surface of an object. Since the surface is, in general, non-planar,

it is described in a 3D space, and the imaging problem is called 3D surface imaging. The result

of the measurement may be regarded as a map of the depth (or range) z, a function of the

position (x, y) in a Cartesian coordinate system. This process is also referred to as 3D surface

measurement, range finding, range sensing, depth mapping, or surface scanning. These terms

are used in different application fields and usually refer to loosely equivalent basic surface

imaging functionality [79].

Optical 3D shape measurement techniques have been rapidly developed in recent years.

They can be classified according to different optical principles, such as stereovision, laser

scanning, structured light, Moiré, interferometry, and triangulation [78], [80]. Structured light

and stereovision are the two most developed technologies based on triangulation principles.

They have been widely studied and significantly improved alongside the rapid development of


the digital CCD camera and LCD/DLP projector. As both of these technologies are based on

triangulation principle, they have a similar mathematical model. In other words, they have to

solve two complementary basic problems : the camera-object-projector correspondence and

the system calibration.

Stereovision uses two or more cameras to capture the images of a scene from different

positions. By comparing these images, the relative depth information is obtained in the form

of disparities, which are inversely proportional to the differences in distance to the objects.

The correspondence problem in stereovision, which is solved by a stereo matching procedure,

is one of the most active research topics in computer vision. It is widely acknowledged that

producing a high quality disparity map under passive illumination is challenging, especially

in a texture-less area. Stereovision’s performance is mostly limited by the stereo matching

process. The calibration problem in stereovision, which is solved by the camera calibration

procedure, is based on well-established procedure. Two calibration models can be used, the

linear pinhole and the nonlinear radial and tangential distortion models. They have a good

simulation effect on actual cameras, and their parameters can be calculated accurately by the

well-studied bundle adjustment algorithm e.g [81], [82].

Structured light technology uses a projector to replace one of the two cameras. It differs

from stereovision technology in expression and solution of the correspondence problem, which

is then the phase or code variation of the same pixel between reference and deformed pattern.

The absolute phase-map or code-map can be viewed as a disparity map in which depth information is encoded. However, the calibration problem is complicated by the requirement that

a relationship between phase and coordinates must be established. Moreover, the projector’s

possible nonlinearity should be calibrated to prevent nonlinear error. A key aspect for accurate measurements using fringe projection techniques is the calibration procedure to obtain

real (x, y, z)-coordinates. Several calibration methods have been proposed ; among them are

the neural network-based [83], the polynomial approach [84] and the model-based [85], [86],

[87]. In general, the neural network and polynomial techniques use a z-stage to process several

planes along the z-axis, to find a rule for phase to depth conversion, while the model-based

techniques consist of using control points to model the camera-projector system. Villa et al.

[88] presented a phase to (x, y, z)-coordinates transformation method for the calibration of a

fringe projection profilometer. Li et al. [89] presented a 3D shape measurement method based


on structured light projection applying a polynomial interpolation technique. The assumption that phase and depth coordinates follow a polynomial relation is used to calibrate the relative position between camera and projector. Batlle et al. [90] presented a survey of the most

significant coded structured light techniques employed to get 3D information.

An imaging sensor (a video camera for example) is used to acquire a 2D image of the

scene under the structured-light illumination. If the scene is a planar surface without any

3D surface variation, the pattern depicted in the acquired image is similar to that of the

projected structured-light pattern. However, when the surface in the scene is nonplanar, the

geometric shape of the surface distorts the projected structured-light pattern as seen from the

camera. The principle of structured-light 3D surface imaging techniques is to extract the 3D

surface shape, based on the information from the distortion of the projected structured-light

pattern. Accurate 3D surface profiles of objects in the scene can be computed by using various

structured-light principles and algorithms.

Jason Geng [79] provided an overview of recent advances in 3D surface imaging technologies, focusing particularly on noncontact 3D surface measurement techniques based on structured illumination. Qinghua et al. [91] presented a fast 3D reconstruction method to extract geometric properties and surface flaws of the raceway groove of bearings based on structured

light shape measurement. A three-step phase-shifting approach is used, where three digital parallel grating stripe patterns with a sinusoidal intensity distribution are projected onto the raceway groove. Three images covered by different stripes are obtained by a high-resolution CCD camera at the same local location of the raceway groove. The bearing is then rotated on a high-precision computer-controlled rotational stage to obtain the next three images at a preprogrammed location. After one cycle, all the image information is combined to obtain the 3D information in the form of a full 360° raceway groove. Some methods combining stereovision and structured light

have been proposed in order to achieve better performances. The simplest setup consists of

one projector and two cameras. The projector casts structured light which contains phase or

code information onto the object surface to assist stereo matching, and the two cameras work

the same as those in stereovision. There is no need to calibrate the projector. The structured

pattern is just used to assist stereo matching. The two cameras need to be calibrated in the

same manner as in stereovision. The essence of the combined method is stereo matching under

structured light illumination. Therefore, this kind of technology is called active stereo. Reich


et al. [92] proposed a combined method that casts phase-shifting patterns in the horizontal

and the vertical direction respectively for stereo matching. Han and Huang [93] presented a

method that combines stereovision and phase shifting techniques using two cameras and one

projector. The two cameras are set up for stereovision and the projector is used to project

phase-shifted fringe patterns onto the object in the horizontal and vertical direction. Wang

and Hu [94] proposed a method that uses epipolar constraint and binary coding structured

light to perform dense and accurate stereo matching. The correspondence in horizontal direc-

tions is determined by the former, and the latter determines the correspondence in vertical

directions.

4.1.3 Goal and new contributions

When flat or cylindrical metallic objects are inspected with structured light techniques, the patterns can be acquired perfectly horizontally or vertically by adapting the workpiece or projected stripe orientations. In this case, i.e. when the recorded light pattern geometry remains unchanged over the whole workpiece, the surface defects can be easily detected, because the stripe deformation appears only when textural (2D) or structural (3D) defects are

present. In the automobile and metal industry, the inspected objects usually have free-form surfaces. In this case, i.e. when free-form surfaces are inspected by means of structured light systems, the projected fringe patterns are also deformed because of the shape of the inspected surface. This makes the approach based on the direct interpretation of the stripe pattern for detecting surface defects more complicated or even - in case of strong geometrical deformations - impossible.

The existing works in the literature for complex shapes are mainly based on the use of a computer-aided design (CAD) model of the inspected surface to create an inverse fringe projection pattern, for which an accurate calibration of the system is needed. The problem with this approach is that the system calibration requires precise work and generally consumes a lot

of processing time. Moreover, the CAD models are usually not available, so that a preliminary

3D scanning of the surface is necessary.

Our goal in this chapter is to overcome the problem of the necessity of calibration and

unavailability of the CAD models of the inspected workpieces. The proposed approach consists


of using a structured light technique to inspect free-form surfaces by using a phase-shifting

approach in the case of deflectometric recording. A fringe analysis algorithm is proposed and developed to detect and analyze the stripes present within the recorded patterns. This proposed

inspected part.

We use the phase-shifting technique, which is usually used to calibrate structured light systems, for the inspection of free-form surfaces, and we propose and develop an automatic stripe

from the acquired phase-shifted patterns. The detected stripes are analyzed, where the surface

anomalies are localized by the discontinuities present in the stripes.

4.1.4 Structure of the chapter

The remainder of this chapter is organized as follows. Section 4.2 recalls the state of the

art in inspection approaches for free-form surfaces by means of structured light techniques. We

give a particular attention to two main applications. The first uses structured light projection

technique to inspect cylindrical reflective surfaces where the projected stripe patterns are not

deformed by the surface. The second uses the inverse fringe projection technique to correct

the deformation of the recorded stripe patterns. Section 4.3 presents the proposed approach.

The first part deals with a proposed improvement of the matching step for the inverse fringe

projection procedure. The second part deals with the new approach dedicated to inspecting reflective free-form surfaces, based on the use of deflectometry and phase-shifting techniques.

The experimental procedure, the used inspection system and the results are discussed there.

Section 4.4 concludes this chapter.

4.2 State of the art of surface inspection methods

4.2.1 Inspection of cylindrical reflective metallic surfaces

Caulier et al. [95] proposed a machine vision approach applicable in an industrial setting

for automatic surface inspection of highly reflective metallic tubes by applying a structured

illumination. In the considered industrial inspection, long cylindrical object surfaces such as


tubes or round rods of different diameters containing 3D structural and/or 2D textural surface

defects are inspected. The automatic inspection system is placed at the end of the production

line where the objects are moving with a constant speed. The requirements were twofold. First,

all the defective surfaces must be detected and second the false alarm rates have to remain

low. For the second requirement, the inspection task imposes that structural defects must

be detected and classified correctly with a 100% accuracy, no misclassifications as textural

defects are allowed.

In case of the inspection task of high-reflective metallic cylindrical objects, the use of

line-scan sensors was imposed as the surface of long workpieces moving with a constant speed

has to be inspected. The line scanning technique allows recording the whole surface without a perspective distortion along the longitudinal axis of the objects, contrary to the pure perspective projection in the case of matrix sensors.

The recording setup is operable if the image of the projected light pattern LP_reflected is characterized by a succession of vertical parallel and periodical bright and dark regions. This vertical pattern has to have a constant period d_P,px (in pixels) in the u direction of the image. The ratio of d_P,px to the period d_P,mm (in millimeters) of the pattern LP_reflected gives the image resolution in the u direction of the image coordinate system.

An image example of a cylindrical tube surface section illuminated with the above described structured lighting is shown in Figure 4.1. Here, N_r = 21 rays are necessary to illuminate the complete cross section of the surface (S_inspect). In this image, one single horizontal image

line corresponds directly to the scan line of the line-scan sensor at a certain point of time t.

The depicted image is obtained by concatenating a certain number of single line scans, where

the vertical resolution v corresponds directly to the number of line scans over a certain period

of time. All the Nr bright stripes in the image f are vertical (along the v axis) and parallel

to the moving direction of the cylindrical object.

The authors proposed in [96] a feature extraction algorithm based on the segmentation of

the fringe structures. A morphological thinning algorithm is used to compute the skeletons

of the bright and dark fringes independently. The segmentation is refined by applying a local

linear interpolation for each pixel in the skeletons. Then, for each pattern F , ten features from

the extracted bright fringes and four features from the extracted dark fringes are computed.

These features are the values of the feature vector c_m, m = 1, · · · , N_c = 14. Figure 4.2 gives


Figure 4.1 — Typical image of a specular non-defective cylindrical surface of diameter D_O = 9.5 mm obtained with the adapted structured illumination. d_P,px is the period in pixels of the depicted stripe pattern in the image [95].

Figure 4.2 — Overview of the feature extraction algorithm, which computes 14 different features from a pattern F. The output of the algorithm is the corresponding feature vector c_m [96].

an overview of the feature extraction algorithm.

The eight features c01 to c08 have been specially developed and adapted to the classification

task for cylindrical object surfaces. Six geometry-based and two intensity-based features are

considered. These are the directions of bright and dark fringes, the maximum and minimum

distances of two consecutive bright and dark fringes, and the grey levels of bright and dark

fringes.

The six remaining features, four geometry-based and two statistic-based features, were

proposed in [97] for the characterization of holographic fringe patterns. The first feature

group describes the shape, the tangent, the curvature, and the straightness of the bright

fringes, whereas the second feature group characterizes the length and the number of pixel

elements of the bright fringes in a local window.


The obtained feature vector is then used in the classification procedure. It consists of

assigning to each pattern one of the three classes (noncritical object parts class, critical depth

defects class and critical surface defects class) according to its computed feature vector c_m. The naive Bayes (NB) and the nearest-neighbor (k-NN) pattern-based supervised learning

classifiers were used.

4.2.2 Inverse fringe projection

Common fringe projection setups use straight fringe patterns, which are projected onto

an object to record deformed fringe patterns, as shown in Figure 4.3a. But inverse fringe

projection (iPP) inverts the whole process, which projects a deformed fringe pattern onto

the object and gets a straight fringe pattern on the recording plane, as shown in Figure 4.3b.

The calculation of iPP results from inversion of the light path, which is only feasible utilizing

a virtual fringe projection system by virtually swapping the functionality of the projector

and the camera. In the virtual fringe projection system, the virtual camera is used as virtual

projector emitting the structured light pattern through the former camera pixels ; and the

virtual projector is used as virtual camera for image acquisition through the former projector

pixels. If the object is deformed, what we get is no longer a straight fringe pattern. In the

location of deformation, the fringes are also distorted as shown in Figure 4.3c. Therefore, the

deformation becomes obvious and its position and size are much easier to get. This technique

is quite suitable for fast on-line and batch inspection.

– Poesch et al. [98] developed an inverse fringe projection system consisting of a real and a

virtual part to detect local and global geometry defects on complex shaped workpieces

surfaces. The proposed hardware setup is similar to that of a classical fringe projection system. It consists of a digital projector and a digital camera, both connected to a standard computer. The virtual setup is implemented as a ray-tracing computer simulation. The CAD model of an ideal specimen is used to generate a single sophisticated inverse fringe projection pattern, which is then projected onto the surface of the real

workpiece. The method requires accurate system calibration of the hardware setup to

match the system parameterization with the virtual setup. Both, camera and projector

are modeled by help of a pinhole model.

For a defined desired camera image pattern, there exists an associated iPP that, when


Figure 4.3 — differences between conventional and inverse fringe projection: (a) conventional fringe projection, (b) inverse fringe projection, (c) deformation inspection by inverse fringe projection.

Figure 4.4 — a) classical fringe pattern projected onto the surface of a defective turbine blade. Note that the fringe lines are curved in every region of the blade. b) Inverse projection pattern applied to the surface; non-straight, non-vertical lines only appear where there are geometry errors [98].

projected onto the specimen renders this desired camera image. Calculation of this iPP

requires consideration of the system calibration data and the three dimensional CAD

model of the specimen. The 3D-geometry defects can be directly extracted from a single

image captured by the real camera. Figure 4.4 shows the camera views of a conventional

fringe pattern and the corresponding iPP projected onto the surface of a turbine blade.

Various algorithms for the detection of geometry defects - such as: the direction of gradient method, detection of local defects by short-time DFT, global error detection via

skeletonizing and the sensitivity map - are used in [98]. They are divided up into space

domain algorithms and frequency domain algorithms.

– Caulier et al. [99] proposed an approach of inverse pattern determination adapted to

the surface geometry for the characterization of specular surfaces. The proposed method


Figure 4.5 — determination of the points to be matched, using horizontal and vertical sinusoidal and rectangular patterns. Each matched point is uniquely defined with an n_h + n_v length code sequence, where n_h and n_v are the number of horizontal and vertical images. LSB and MSB are respectively the least and most significant bits [99].

necessitates the determination of the correspondence between the projecting light screen

and the recording camera, and the computation of a transformation matrix permitting

the determination of the inverse pattern to be projected.

The point matching algorithm used for linking the projector and camera views is based on the combined projection of gray-coded sinusoidal and rectangular patterns. The sinusoidal patterns serve the sub-pixel determination by projecting vertical and horizontal patterns. The matched points of interest are relevant points corresponding to

the maxima of the projected sinusoidal patterns. The rectangular patterns serve the

coding of the detected relevant points by means of the natural binary code. A temporal

sequence of n_h horizontal patterns and n_v vertical patterns is projected onto the surface to be inspected (each horizontal and vertical group of patterns is composed of 1 sinusoidal and n−1 (n = n_h or n_v) rectangular patterns). Figure 4.5 shows the point-matching

determination principle.

Once the correspondence between projector points p_p and camera points p_c is determined, the transformation matrix H linking the two images can be computed. H is

modeled by a polynomial equation of degree r, by means of n corresponding reference

points, where n � nr , and nr is the minimum necessary number of points to retrieve


Figure 4.6 — computation principle of the feature c_mσ and three different regular patterns obtained by means of the proposed method [99].

the coefficients of the polynomial equation of degree r.

I_c = H · I_p     (4.1)

I_p is the irregular or inverse pattern, so that after its projection onto the surface, a regular pattern is depicted in the camera image I_c.

The optimal transformation is obtained by retrieving the optimal degree r and number n of points, in order to minimize the residual mean square error (RMSE) between the known camera points p_c and the points estimated by applying H to the projector points p_p.

The degree of regularity of the projected patterns was quantitatively estimated. The feature involved for regularity characterization is the average variance of all the detected periods in an image line i, noted c_mσ and expressed as follows:

c_mσ = mean(σ(t_i)), ∀i ∈ [0 : n_l − 1[     (4.2)

where n_l is the number of lines in image I_c and t_i is the vector of all detected periods in line i. Figure 4.6 describes the computation of the feature c_mσ and shows examples of the regular stripe patterns obtained, with the corresponding values of c_mσ.
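For illustration, a minimal sketch of the regularity feature of equation (4.2) is given below; the detection of the periods t_i via a simple local-maximum test is an assumption, and the original work may detect the periods differently.

import numpy as np

def cm_sigma(Ic):
    """Regularity feature sketch: mean, over the image lines, of the standard
    deviation of the stripe periods detected in each line. Periods are taken
    here as the distances between successive maxima of the line profile."""
    sigmas = []
    for line in np.asarray(Ic, dtype=float):
        # indices of strict local maxima along the line (simple peak test)
        peaks = np.flatnonzero((line[1:-1] > line[:-2]) & (line[1:-1] > line[2:])) + 1
        if len(peaks) > 2:
            sigmas.append(np.std(np.diff(peaks)))
    return float(np.mean(sigmas)) if sigmas else np.nan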

4.2.3 Actual limitations

The main two problems of the proposed approach in section 4.2 to inspect cylindrical

reflective metallic surfaces are (i) the necessity of having perfect horizontal/vertical stripe



Figure 4.7 — example of a flat surface without defect: a) non-vertical stripes, b) detected peaks and valleys and c) curvature feature calculated for the detected peaks and valleys in b).

orientation for the inspection of flat or cylindrical surface metals and (ii) the regularity of the

recorded stripes for the inspection of curved surfaces.

(i) When flat or cylindrical surfaces are inspected with this approach, particular attention must be paid to adapting the orientations of the projection unit and the inspected object in order to record perfectly horizontal or vertical stripes. Otherwise, the curvature feature will be non-zero and will cause a misclassification problem for non-defective stripes. These stripes will be classified as critical surface defects even if there are no surface defects on them. Non-zero values of the curvature feature can appear due to the discretization of

the non-horizontal stripes, even when no defect is present. The problem is related to the

non-perfect orientation of the stripes. An example of non-vertical stripes, without the

presence of any defect, is shown in Figure 4.7.

The calculated curvature feature (Figure 4.7c) from the detected peaks and valleys (Figure 4.7b)


leads to false alarm pixels even if the processed stripe image does not contain

any defective pixels.

(ii) When curved surfaces are inspected with this approach, the extracted feature vectors for non-defective patterns can be classified as critical surface defects. The main reason for this misclassification is that the geometry-based features, in particular, are not adapted to this kind of surface. Even if there is no surface defect, the distorted stripes (caused by the shape of the curved surface) are considered as defective. This is why this approach cannot be used for the inspection of free-form surfaces.

The main problems of the proposed approaches based on the calculation of the inverse

fringe projection pattern are (i) the lack of the CAD models of the inspected parts and (ii)

the necessity of an accurate calibration of the system. The second approach discussed in

section 4.2 tries to determine an inverse pattern adapted to the surface geometry by making

a direct link between the projector’s and camera’s reference points, and by computing a

transformation matrix permitting the determination of the inverse pattern to be projected.

Even though this approach avoids the calibration step and does not use any CAD model of the inspected part, the stripes obtained after projecting the inverse patterns remain irregular. Another drawback of this approach is that the determination of the points of interest is supervised and depends on the parameters of the projected sinusoidal patterns, such as the period and the difference between the gray levels of the peaks and valleys.

4.3 Proposed method for the inspection of free-form surfaces

with phase-shifting technique

4.3.1 Problem formulation and approach description

As discussed in the previous section, in order to obtain perfectly regular recorded stripes, an accurate calibration of the inspection system (camera and projector) must be performed and the CAD model of the inspected part should be known to generate an adapted inverse pattern. The calibration requires precise devices and measurements, and generally consumes a lot of time. A calibration artifact is used to provide a sufficient number of 3D reference points. During

calibration, the artifact is placed in various poses to provide enough 3D reference points within


the measurement volume. The camera calibration is accomplished based on the reference data

composed of the 3D reference points and their 2D camera correspondences, extracted from

the images. Unlike the camera calibration, the projector calibration normally prepares the

reference data by projecting an extra calibration pattern with known 2D references to the

calibration artifact in different poses and obtaining the 3D correspondences with the aid

of the calibrated camera. In this way, the camera calibration error unavoidably affects the

reliability of the projector reference data, and thus degrades the accuracy of the projector

calibration. The 3D CAD model of an ideal specimen (usually not available) is used to generate an adapted inverse fringe projection pattern, which is projected onto the surface of the real workpiece in order to obtain regular stripes. A calibration step is necessary in such cases.

To overcome the need for a calibration step and the unavailability of the 3D CAD model of the inspected part, we propose a new inspection approach for free-form reflective surfaces based on digital fringe projection and the phase-shifting technique. This approach is based on the use of a structured light system in which a list of sinusoidal patterns is projected and the corresponding camera views are recorded. The list of projected patterns contains a basic sinusoidal pattern and its phase-shifted counterparts. Instead of trying to obtain perfectly regular recorded stripes by using inverse stripe projection methods, we developed an algorithm based on the analysis of irregular stripes to detect the existing defects within the inspected part.

The underlying hypothesis is that the presence of a defect causes more irregular fringes than a non-defective part. When a regular fringe pattern is projected on a 3D inspected surface, the stripes of the pattern are distorted by the shape of the surface as well as by the presence of the defect in the surface. Depending on the period of the projected pattern, the defect can interrupt the continuity of the projected stripes on the surface. This is the case when the period of the stripes is close to the size of the defect. But when the defect size is considerably smaller than the stripe period, the defect does not affect the continuity of the stripes, but

distorts them. Since the proposed approach is based on the detection of the discontinuities

within the stripes in order to localize the surface defect, the optimal solution is to consider a

fringe pattern that has a period as close as possible to the defect size.

The basic pattern is a fixed regular horizontal or vertical pattern containing a sinusoidal

signal, with a certain period, duplicated in all rows (respectively, columns), in case of hori-

zontal (respectively vertical) projection. The spatial dimension of the basic pattern can be


adapted to the region to be inspected. From this basic pattern, and by shifting the phase of the sinusoidal signal, other patterns are created. The number of shifted patterns is at most equal to the vertical (resp. horizontal) size of the pattern period, in pixels. Several periods can be used, to adapt to the size of the defects.

Figure 4.8 — flowchart of the proposed approach.

The flowchart in Figure 4.8 illustrates the main steps involved in the proposed approach.

First, a region of interest (ROI) is selected (1) in order to limit the processing to the pixels that are inside this region and to define the spatial dimensions of the patterns to be

projected. Then, a sinusoidal signal is generated (2) , depending on the size of the ROI,

the number of periods (stripes) wanted and the phase-shifting step. A basic sinusoidal fringe

pattern is created and then projected (3) and its corresponding camera-view is recorded (4) .

These last three steps are sequentially repeated several times by shifting the phase of the last

projected sinusoidal pattern (5) . Once all patterns are projected and recorded, a developed

fringe analysis algorithm is used (6) to calculate the phase images, detect the stripes and

create the corresponding curves list, which is then used to detect the existing defects (7)

within the ROI.

4.3.2 Choice of ROI and pattern generation

The generation of fringe patterns starts by choosing the ROI on the projector side. This

ROI is a rectangular window included inside the projected pattern (as shown in Figure 4.9,

the red rectangular window inside the projected pattern). A black image is created inside the projected pattern, except for the ROI (part of the projected pattern) in which the set of sinusoidal fringe patterns will be created. Its position and size are adapted in such a way that the inspected region on the surface (blue region in Figure 4.9) falls within the camera view. Some parameters of the projected fringe patterns, such as the period and the shifting step, depend on the size of the selected ROI.


Figure 4.9 — example of choosing a ROI (red rectangle) in the projected pattern where the fringe patterns are created and projected onto the inspected surface.

Figure 4.9 shows an example of a ROI, where Nx and Ny are respectively the total numbers of columns and rows in pixels.

The ROI contains the sinusoidal stripe pattern to be projected. Once the size and the

position of the ROI are fixed, the basic sinusoidal stripe pattern can be generated and its

following parameters should be fixed :

– The orientation

– The amplitude A,

– The frequency f ,

– The phase-shifting δ.

The orientation of the stripes can either be horizontal or vertical. In the case of horizontal


(respectively, vertical) orientation, the columns (respectively, rows) of the projected stripes

contain the same sinusoidal signal. This means that the same signal is duplicated in all columns

or rows.

The amplitude A, frequency f and phase-shifting δ are the standard characteristics of a

sinusoidal signal S. Its most usual form as a function of position x is given as :

S(x) = A·sin(2πfx + φ)
     = A·sin(2πx/T + 2πδ/T)
     = A·sin(2π·Nbstripes·x/Nx + 2π·Nbstripes·δ/Nx) (4.3)

where :

– A, the amplitude, is the peak deviation of the function from zero.

– f , the ordinary frequency, is the number of oscillations (cycles) that occur per unit of x.
– φ, the phase, specifies where in its cycle the oscillation is at x = 0. When φ is non-zero, the entire signal appears to be shifted along the x-axis by δ pixels. A negative value represents a delay, and a positive value represents an advance.

As a sinusoidal fringe pattern will be projected, the parameters of the sinusoidal signal should be adapted to stripe projection, as follows :
– A, the amplitude, is the gray-level value; it belongs to [0, 255].
– f , the number of oscillations, is the ratio between the number of stripes (Nbstripes) and the pattern size. The minimum number of stripes is 1, and ω = Nbstripes · 2π/Nx.
– δ, the delay, is the phase shift that specifies (in pixels) where in its cycle the oscillation is at x = 0 (the first pixel). E.g. if δ = 10, the initial signal generated when δ = 0 is shifted 10 times (10 pixels). δ ∈ [0, P ], where P is the period of the stripe pattern.

We obtain the following equation for the sinusoidal signal in the case of vertical sinusoidal fringe pattern generation :

S(x) = A/2 − (A/2)·sin(2π·Nbstripes·x/Nx + 2π·Nbstripes·δ/Nx), x = 0, · · · , P (4.4)
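The sketch below generates such a vertical sinusoidal fringe pattern following Eq. (4.4); it is a minimal illustration assuming an 8-bit gray-level pattern, and the function name is ours.

import numpy as np

def vertical_fringe_pattern(Nx, Ny, A, nb_stripes, delta):
    # Sinusoidal signal of Eq. (4.4), evaluated over the ROI width Nx.
    x = np.arange(Nx)
    s = A / 2 - (A / 2) * np.sin(2 * np.pi * nb_stripes * (x + delta) / Nx)
    # Vertical stripes : the same signal is duplicated in all Ny rows.
    return np.tile(s, (Ny, 1)).astype(np.uint8)

# Example : basic pattern (delta = 0) similar to Figure 4.10b.
pattern = vertical_fringe_pattern(Nx=512, Ny=800, A=255, nb_stripes=6, delta=0)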

Figure 4.10 shows examples of sinusoidal signals and sinusoidal fringe patterns. Figures 4.10a and 4.10c plot a sinusoidal signal S with A = 255, Nbstripes = 6 and δ = 0, spread over 512 (respectively 800) pixels. Figures 4.10b and 4.10d show the corresponding vertical (respectively horizontal) patterns of (a) and (c), with Nx = 512 and Ny = 800 pixels, where the signal S is duplicated in all rows (respectively columns).

Figure 4.10 — examples of sinusoidal signals a) and c) used to generate b) vertical and d) horizontal sinusoidal fringe patterns.

4.3.3 Phase-shifting and camera recording

The list of the fringe patterns to be projected on the surface depends on the phase-shifting parameter (δ), which itself depends on the stripe orientation and the size of the ROI (Nx or Ny). As described in the previous section, the phase-shifting parameter δ belongs to the interval [0, P ], where P is the period of the fringe pattern. This means that the maximum number of phase shifts is equal to P . We note that it is not necessary to consider the whole range of the pattern size (Nx when vertical projection is considered and Ny when horizontal) : the period range is enough, since the same signals are repeated in the following periods.

Phase-shift is defined as any change that occurs in the phase of one quantity, or in the

phase difference between two or more quantities. In our case, the phase-shifting means that

the sinusoidal signal S used to generate the basic sinusoidal stripe pattern is shifted according

to a defined step (in pixel), called Shiftingstep, over the ROI horizontally or vertically.


Figure 4.11 — examples of phase-shifted signals.

Figure 4.11 shows three examples of phase-shifted signals : S1, S2 and S3, generated with A = 255, Nbstripes = 6, Nx = 100 pixels and shifted respectively with δ = 0, δ = 1 and δ = 3.

Table 4.1 shows different phase-shifted signals and their corresponding generated stripe

patterns starting with the basic stripe pattern with δ = 0.

The list of the stripe patterns to be projected is created once the shifting-step is defined.

The number of stripe patterns (Numberpatterns) constituting this list is calculated as follows :

Numberpatterns = P / Shiftingstep (4.5)

for both vertical and horizontal projections.
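As a small illustration of Eq. (4.5), the list of shift values to be projected can be built as sketched below; the numerical values and the reuse of the vertical_fringe_pattern helper from the previous sketch are assumptions of this example.

Nx, Ny, A, nb_stripes = 512, 800, 255, 6
P = Nx // nb_stripes                      # period of the stripe pattern in pixels
shifting_step = 4                         # chosen phase-shifting step in pixels
number_patterns = P // shifting_step      # Eq. (4.5)
deltas = [k * shifting_step for k in range(number_patterns)]
patterns = [vertical_fringe_pattern(Nx, Ny, A, nb_stripes, d) for d in deltas]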

Once the list of patterns is created, all the patterns are projected sequentially onto the object surface without changing the positions of the camera, the projector or the object.

The camera captures the corresponding patterns in the same projection order and records

them in a data-cube by a simple stacking.

Table 4.1 — phase-shifted signals (δ = 0, 1, 3, 5, 7, 9, 11, 13 and 15) over the period range P = 16 and the corresponding vertical stripe patterns, with A = 255, Nbstripes = 8, Nx = 100 and Ny = 128 pixels.

4.3.4 Fringe analysis and defect detection

The fringe analysis technique proposed in this chapter for the inspection of free-form reflective surfaces, to detect the existing defects within the inspected part, is based on phase-shifting methods. The phase-shifting method, also called the phase-stepping method, is a technique

that has been widely used in structured light systems for 3D shape measurement [100], [101],

[102]. In shape measurement based on fringe projection technique, phase-shifting method

is used to obtain the accurate phase values of points on the surface being measured, from

which the 3-D positions of the points can be resolved. In this research, we propose a method

based on phase-shifting technique to inspect free-form reflective surfaces. In the phase-shifting

procedure, a series of sinusoidal phase-shifted fringe patterns are generated in a computer and

sequentially projected on the surface being measured by a projector. Meanwhile, images of

the surface under the projections are recorded by the camera. From the acquired image set,

a 2-D matrix of phase values can be calculated. This matrix of phase values has the same

dimension as the individual images acquired and is usually called the "phase map" of the


surface.

There are many different phase-shifting algorithms available, owing to the varied designs of phase-shift values for the sequence of fringe patterns. However, the basic idea of the phase-shifting method can be illustrated by the classic 4-step phase-shifting algorithm, as follows :

– The 4-step phase-shifting algorithm takes a total number of four projection patterns, one

"original" sinusoidal fringe pattern and three phase-shifted versions of the "original". For

shape measurement based on digital fringe projection technique the projection patterns

are constructed as grayscale bitmaps. The intensity distributions of the patterns can be

described using the following equation :

I_n^(P)(x, y) = (I_max^(P) / 2) [1 + sin(2πx/p + (n − 1)π/2)], n = 1, · · · , 4 (4.6)

where x and y are the coordinates on the horizontal and vertical axes of the bitmaps respectively, n represents the phase-shift step, I_max^(P) is the maximum intensity in the bitmaps, p is the fringe pitch, and I_n^(P)(x, y) is the intensity distribution of the nth pattern.

– Using the four fringe patterns as defined above for projections, the corresponding images

of the surface being measured can be expressed using the following equation :

I_n(i, j) = A(i, j) + B(i, j) sin(φ(i, j) + (n − 1)π/2), n = 1, · · · , 4 (4.7)

where (i, j) are the pixel indices ; φ(i, j) is the phase value at pixel (i, j) ; A(i, j) and B(i, j) are respectively the average intensity and the intensity modulation, which are both constant for n = 1, · · · , 4 ; and I_n(i, j) is the pixel's intensity in the nth image.

– Using the four images obtained from the phase-shifting process, the "wrapped" phase

map, φ(i, j), can be calculated from the following function :

φ(i, j) = arctan*[ (I_1(i, j) − I_3(i, j)) / (I_2(i, j) − I_4(i, j)) ] (4.8)


where the function arctan*(···) has two arguments and is defined as follows :

arctan*(f/g) = arctan(f/g) if g ≥ 0
arctan*(f/g) = arctan(f/g) + π if g < 0 and f ≥ 0
arctan*(f/g) = arctan(f/g) − π if g < 0 and f < 0 (4.9)

The wrapped phase map φ(i, j) computed from Eq.(4.8) has a value range of [−π,π]

and is a 2π wrapping of the absolute phase map Φ(i, j), which has a much larger value

range depending on the number of fringes in the projection patterns. The relationship

between φ(i, j) and Φ(i, j), can be expressed using the following equation :

φ(i, j) = mod(Φ(i, j), 2π) (4.10)
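A minimal numerical sketch of this 4-step computation is given below; numpy.arctan2 directly implements the two-argument arctangent arctan* of Eq. (4.9), and the synthetic defect-free images built from Eq. (4.7) are only placeholders.

import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    # Eq. (4.8) : phi = arctan*[(I1 - I3) / (I2 - I4)], with the two-argument
    # arctangent of Eq. (4.9); the result lies in [-pi, pi].
    return np.arctan2(I1.astype(float) - I3, I2.astype(float) - I4)

# Placeholder example with synthetic images following Eq. (4.7).
i, j = np.mgrid[0:128, 0:100]
phi_true = 2 * np.pi * 8 * j / 100                        # 8 vertical fringes
I = [128 + 100 * np.sin(phi_true + n * np.pi / 2) for n in range(4)]
phi = wrapped_phase(*I)                                   # wrapped phase map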

The fringe analysis and defect detection step includes the following sub-steps :
– selection of four phase-shifted images from the recorded list of patterns,
– calculation of the corresponding wrapped phase-image, subsequently called phase-image for simplification,
– detection of the existing stripes within the phase-image and creation of a list of curves (subsequently also called stripes),
– analysis of the curves by using distance parameters,
– detection of the abnormal regions and localization of the defects.

4.3.4.1 Phase-image calculation

Figures 4.12 and 4.13 show examples of phase-images obtained respectively from projected and recorded phase-shifted vertical sinusoidal patterns I1, I2, I3 and I4, calculated with Eq. (4.8).

Figure 4.14 shows the phase signal obtained from the first row of the phase-image in Figure 4.12b, and Figure 4.15 shows the phase signals obtained from rows 10 and 130 of the phase-image in Figure 4.13b, corresponding respectively to non-defective and defective regions.


Figure 4.12 — example of (a) four vertical phase-shifted projected patterns and (b) their corresponding phase-image obtained by Eq. (4.8).

Figure 4.13 — example of (a) four vertical phase-shifted recorded patterns and (b) their corresponding phase-image obtained by Eq. (4.8).

4.3.4.2 Stripe detection

The curve (stripe) detection is based on the detection of all peaks existing in all rows (respectively columns) of the calculated phase-image when vertical (respectively horizontal) fringe projection is considered. The particularity of the phase-images, as depicted in Figures 4.12 and 4.13, is that the phase over all rows (as shown in Figures 4.14 and 4.15a) is linearly distributed and spatially continuous over the whole light projection direction. The

wrapped phase-image shows sudden jumps between peaks and valleys, which correspond to the phase being constrained to its principal value, i.e. the interval [0, 2π]. The jump locations are the

points that we want to detect and localize in order to create a list of stripes that are used

in the defect detection procedure. Once all peaks for all rows (or columns) are detected and

their positions are localized, the stripes can be formed and saved as curves which will be used


Figure 4.14 — phase-signal from the first row of the phase-image shown in Figure 4.12.

Figure 4.15 — phase-signals from a) row 10 (non-defective region) and b) row 130 (defective region) of the phase-image shown in Figure 4.13.

later to analyze the surface of the inspected part. The defect detection is based on stripe analysis, where the discontinuities within the detected stripes indicate the presence of a surface defect, as seen in Figure 4.15b. The presence of the defect on the surface perturbs the

regularity of the projected fringes.

The detection of peaks and valleys can be easily done by applying the 1-D numerical

gradient operator on each row’s signal. The 1-D gradient is able to separate the abrupt changes

in a signal S by calculating its derivative dS/dx which corresponds to the differences in x

(horizontal) direction. As shown in Figure 4.16b, the peaks can be easily detected by a simple


Figure 4.16 — example of the peak detection principle. (a) phase-image signal S corresponding to the first row in the phase-image (Figure 4.12b), (b) corresponding 1-D gradient, (c) positions of the detected peaks and (d) the detected peaks superimposed on S.

Figure 4.17 — example of the peak detection principle. (a) phase-image signal S corresponding to row 10 (non-defective region) in the phase-image (Figure 4.13b), (b) corresponding 1-D gradient, (c) positions of the detected peaks and (d) the detected peaks superimposed on S.

thresholding of the gradient signal. Figure 4.16 shows the signal S plotted in Figure 4.14 which

corresponds to the first row of the phase-image in Figure 4.12b. It also shows the result of the

1-D gradient applied on S and the detected peaks. Figures 4.17 and 4.18 show the results of

the 1-D gradient applied to the signals plotted in Figure 4.15 corresponding to non-defective

and defective regions, and the detected peaks.
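The peak detection described above can be sketched as follows; this is a minimal illustration, and the threshold value and function name are ours rather than those of the thesis software.

import numpy as np

def detect_peaks(phase_image, threshold=np.pi):
    # 1-D numerical gradient along the rows (horizontal direction) of the phase-image.
    grad = np.gradient(phase_image, axis=1)
    # The 2*pi wrapping jumps produce large-magnitude gradient values, so a simple
    # threshold on |dS/dx| marks their positions.
    return (np.abs(grad) > threshold).astype(np.uint8)   # binary (labeled) image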

The result of the peak detection step is a binary image (labeled image) where all the detected


Figure 4.18 — example of the peak detection principle. (a) phase-image signal S corresponding to row 130 (defective region) in the phase-image (Figure 4.13b), (b) corresponding 1-D gradient, (c) positions of the detected peaks and (d) the detected peaks superimposed on S.

peaks are set to 1. Figure 4.19 shows the labeled image after the detection of the peaks in the

first row of the phase-image shown in Figure 4.12, and the rows 10 and 130 in the phase-images

shown in Figure 4.13.

The next step, after detecting all the peaks in the phase-image and saving their positions, is to create a list of curves from the peaks. For this purpose, we propose a curve-creation algorithm, whose principle is as follows :

"Curve-creation algorithm"
[0] : i = −1, go to [1].
[1] : i ← i + 1, if (i < Ny) go to [2] else go to [8].
[2] : j = −1, go to [3].
[3] : j ← j + 1 ; if (j < Nx) go to [4] else go to [1].
[4] : if a labeled pixel is found, then create a new curve and go to [5] else go to [3].
[5] : add the found pixel to the current curve, set the found pixel to 0 and go to [6].
[6] : look in the neighborhood if there is another labeled pixel and go to [7].
[7] : if there is a labeled pixel, go to [5], else go to [3].
[8] : end
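A minimal Python transcription of this principle is sketched below; the 8-neighborhood choice and the data layout (one list of (row, column) pixels per curve) are assumptions of this illustration.

import numpy as np

def create_curves(labeled):
    # labeled : binary image with the detected peaks set to 1 (modified in place).
    Ny, Nx = labeled.shape
    curves = []
    for i in range(Ny):                        # steps [1]-[3] : scan rows then columns
        for j in range(Nx):
            if labeled[i, j]:                  # step [4] : a labeled pixel starts a new curve
                curve = []
                ci, cj = i, j
                while True:
                    curve.append((ci, cj))     # step [5] : add the pixel and clear it
                    labeled[ci, cj] = 0
                    # step [6] : look for another labeled pixel in the 8-neighborhood
                    neigh = [(ci + di, cj + dj)
                             for di in (-1, 0, 1) for dj in (-1, 0, 1)
                             if (di, dj) != (0, 0)
                             and 0 <= ci + di < Ny and 0 <= cj + dj < Nx
                             and labeled[ci + di, cj + dj]]
                    if not neigh:              # step [7] : no neighbor, the curve is finished
                        break
                    ci, cj = neigh[0]
                curves.append(curve)
    return curves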

Figures 4.20 and 4.21 show, respectively, the created stripes when the curve-creation al-

gorithm is applied on the phase-image shown in Figure 4.12 and Figure 4.13. Each detected

curve is plotted in a different color.


Figure 4.19 — binary images created after the detection of the peaks from a) the first row of the phase image shown in Figure 4.12, b) row 10 (non-defective region) and c) row 130 (defective region) of the phase-image shown in Figure 4.13.


Figure 4.20 — a) phase-image obtained from projected patterns and b) results of the curve-creation algorithm applied on the phase image in a).

4.3.4.3 Curves analysis and defect detection

The detected curves are stored in a structure containing a sequential number

(Phase_image_Nr) corresponding to the number of the used phase-image, the number of

found pixels (Curve_pixels_Nb), and all the coordinates of the pixels constituting the curve.

The curves analysis is based on distance computations. The curves are analyzed for each

Phase_image_Nr as follows (vertical projection is considered as an example) :

Once all the curves are detected, a list of all neighbor curves (LNC) is created, where the

redundant inputs are deleted. For each input in the list, two curves, Curve_1 and Curve_2,

are checked together. The existing defects between Curve_1 and Curve_2 (inspected region)

are detected in two steps.

Figure 4.21 — a) phase-image obtained from recorded patterns and b) results of the curve-creation algorithm applied on the phase image in a).

First, the most common distance (mcd) between the two checked curves is calculated. Then, the distance between each pixel x in the first curve and its corresponding pixel in the second one is calculated. Then the absolute value of the difference between this distance and mcd is computed, as :

Dist(x) = abs[distance(Curve_1(x), Curve_2(x)) − mcd] (4.11)

The obtained value of Dist(x) is compared to a fixed threshold value

(Distance_threshold). Practical experiments have shown that the optimal results are

obtained with Distance_threshold = 4. When the calculated distance between the curves' pixels is greater than Distance_threshold, this means that the defect in the surface does not cut the fringes but distorts them. The region between the pixels Curve_1(x) and Curve_2(x) is considered defective.

The second step consists of finding the discontinuities between Curve_1 and Curve_2. When a pixel in Curve_1 does not have a corresponding one in Curve_2, this means that there is a discontinuity between the curves at position x. The region between Curve_1 and Curve_2 at position x is considered defective. In both cases, i.e. when the distance between the curves exceeds the threshold value Distance_threshold and when a discontinuity exists between the curves, the region between these curves at position x is considered defective, and the defect is located there.

"Stripe-inspection algorithm"
[0] : For all rows, x = 1 : Nx, calculate the most common distance MCD(x) between the curves. Initialize the list of anomaly pixels (LAP) and its input number (Nb_LAP) to zero and go to [1].
[1] : For all detected curves, detect the right and/or left neighbor curves (if they exist), create a list of all neighbor curves (LNC) and go to [2].
[2] : Remove the redundant inputs in LNC, count the number of inputs (Nb_IC) in LNC, put j ← 1 and go to [3].
[3] : If (j ≤ Nb_IC) then
        Curve_1 ← first curve in LNC(j),
        Curve_2 ← second curve in LNC(j),
        go to [4].
      Else
        go to [6].
      End if
[4] : Calculate the most common distance (mcd) between the pixels of Curve_1 and Curve_2 and go to [5].
[5] : For all pixels x in Curve_1, find the corresponding pixel in Curve_2.
      If it exists
        Calculate Dist(x) = absolute value [distance(Curve_1(x), Curve_2(x)) − mcd].
        If (Dist(x) > Distance_threshold) then
          Nb_LAP ← Nb_LAP + 1.
          Save in LAP(Nb_LAP) the x and y coordinates of the current tested pixel.
        End if
      Else (it does not exist)
        Nb_LAP ← Nb_LAP + 1.
        Save in LAP(Nb_LAP) the x and y coordinates of the current tested pixel.
      End if
      Put j ← j + 1 and return to [3].
[6] : For each pixel in LAP (j = 1 : Nb_LAP), put x = x-coordinate and y = y-coordinate of LAP(j). Plot a red line between y and y ± MCD(x) and go to [7].
[7] : End.
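A simplified sketch of this stripe-inspection idea is given below. It assumes vertical stripes stored as mappings from row index to the x position of the stripe in that row, computes the most common distance (mcd) between two neighbor curves and flags the anomalous pixels; the helper name and data layout are ours, and the plotting step [6] is omitted.

import numpy as np

def inspect_neighbor_curves(curve_1, curve_2, distance_threshold=4):
    # curve_1, curve_2 : dicts mapping row index -> x position of the stripe in that row.
    rows = sorted(curve_1)
    dists = [abs(curve_1[r] - curve_2[r]) for r in rows if r in curve_2]
    if not dists:
        return [(r, curve_1[r]) for r in rows]            # no correspondence at all
    values, counts = np.unique(dists, return_counts=True)
    mcd = values[np.argmax(counts)]                       # most common distance
    anomalies = []                                        # list of anomaly pixels (LAP)
    for r in rows:
        if r not in curve_2:                              # discontinuity between the curves
            anomalies.append((r, curve_1[r]))
        elif abs(abs(curve_1[r] - curve_2[r]) - mcd) > distance_threshold:
            anomalies.append((r, curve_1[r]))             # Eq. (4.11) exceeds the threshold
    return anomalies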


Figure 4.22 — example of a defective surface where in a) the defect distorts the stripes and in b) the defect interrupts the continuity of the stripes.

Figure 4.22 shows an example of both cases : in Figure 4.22a the defect distorts the stripes, and in Figure 4.22b the defect interrupts the continuity of the stripes.

4.3.5 Experimental results (application to car wheels)

4.3.5.1 Experimental setup

In order to validate the proposed approach, we used an industrial inspection system,

called "6-axis-measurement-system", which was built to inspect car wheels (rims) surfaces.

The measurement system consists of two subsystems for the positioning of the workpiece and

the camera. Figure 4.23 provides a schematic representation of the measurement system with

its components.


Figure 4.23 — system overview of the 6-axis measurement system.

80 : Control unit (cabinet of the Xemo-MC-controller). The Xemo-MC-controller includes all electrical and electronic components located in the control panel and used to control the measurement system (mainly to power it on/off and to allow the machine to communicate with a computer).
81 : Machine frame, where all components are mounted.
- Workpiece positioning - subsystem W
10 : 1st axis (W1) - Turntable axis with a 3-jaw chuck for the workpiece holder.
20 : 2nd axis (W2) - Linear unit : linear left/right movement of the turntable.
30 : 3rd axis (W3) - Linear unit : forward/back movement of the turntable.
- Camera positioning - subsystem K
40 : 4th axis (K1) - Reversing module : pans the camera. The camera is mounted on the K1 axis.
50 : 5th axis (K2) - Rotating module : rotates the camera.
60 : 6th axis (K3) - Linear unit : lifting movement of the camera.

The system has been adapted to allow the projection of structured light patterns onto the inspected surfaces. For this purpose, three additional parts have been added to the system : (i) a DLP projector has been mounted in the upper part of the system in order to project the stripe patterns from the computer; (ii) a mirror, mounted in front of the projector at a certain distance and inclination angle, serves to reflect the stripe patterns projected by the DLP projector towards the surface, where (iii) a translucent screen has been mounted between the mirror and the inspected workpiece. Figure 4.24 shows the parts added to the measurement system to build the projection system.

Figure 4.24 — building of the projection system on the machine (rear view).

The experimental setup with all its main components is depicted in Figure 4.25. It shows a schematic experimental setup of a deflectometric inspection system. It consists of a projection unit (DLP projector), a mirror, a screen, an image acquisition unit (CCD camera), and the inspected workpiece. Sinusoidal phase-shifted fringe patterns are synchronously sent by the computer to the projector. These patterns are then fully reflected by the mirror and projected on the screen. The translucent screen provides a diffuse light source, and can be considered as a secondary source of light patterns. These diffuse light patterns are then

reflected by the reflective surfaces onto the matrix sensor of the camera and saved as images

in the computer. The pattern generation, projection and recording steps are automatically

synchronized by the computer, which is connected to the camera, the projector and the Xemo-

MC-controller.

For these purposes, a software interface has been developed in C++ that allows the user to : (i) control the positioning subsystem, (ii) generate the phase-shifted sinusoidal patterns taking into account the different parameters (size, orientation, period, intensity, shifting step), (iii) project the generated patterns, with the possibility of projecting external patterns generated with other software, and (iv) save the camera views corresponding to the patterns reflected by the surface and synchronize the projection and recording processes. The software also helps to adapt the projected patterns to the inspected area on the surface.

Figure 4.25 — schematic experimental setup.

The main difference between the projection techniques and the deflectometry principle is

that, in the former, a projector (e.g. a laser or a beamer) casts a known sequence of patterns on

the surface to be inspected (Figure 4.26a). The camera receives the mapping of the projected

pattern, which is deformed by the surface. The principle is based on the evaluation of the

triangles which are established by the projection and the imaging of patterns on the surface.

It requires that a part of the incident light is diffusely reflected, whereas the exact reflectance

or color is irrelevant. As a result, projection techniques provide the spatial positions of surface

points, which can then be combined to obtain the shape information of the surface. Common

realizations are laser triangulation, line scanning, and stripe projection.

Conversely, the principle of deflectometry can be compared to the way a human observer inspects specular surfaces : instead of looking at the surface itself, he observes the reflection of

a structured environment in the surface (Figure 4.26b). If the surface is not ideally even, the

reflection of the environment is deformed. The surface becomes part of the imaging system.

By evaluating the deformation, the human observer as well as a deflectometric inspection


Figure 4.26 — principle of projection (left) and deflectometry (right).

Figure 4.27 — Measurement sensitivity of projection (left) and deflectometry (right).

system obtains information on the shape of the surface in the form of the local inclination. The major precondition for the applicability of deflectometry is a significant specular reflection on the surface. If the surface shows mainly diffuse reflection (e.g. matte or rough surfaces), a transition from visible light to larger wavelengths (e.g. near infrared) can be helpful, since the

portion of specular surface reflection increases with the wavelength. Objects with multiple

reflections (e.g. glass plates or glass mirrors) cannot be inspected.

In contrast to projection techniques, deflectometry is sensitive to variations of the local

inclination (Figure 4.27). When the local inclination is varied, the camera observes another

point on the display in a deflectometric setup, whereas for projection techniques, a change in

the local inclination causes no direct measurement effect.

Deflectometry, as is the case with projection techniques, is based on structured light pat-

terns. The camera views the surface, but it records a reflection of the pattern generated by the

screen. In this configuration, the surface, which is part of the optical system (as said before), distorts the observed pattern [103].


Table 4.2 — inspected workpieces : AL25_S06 and AL109_S03 (for each WP and defect number, a photo and a zoom on the defect are shown).

4.3.5.2 Inspected workpieces

Table 4.2 shows examples of defective car wheel surfaces that have to be inspected (WP stands for workpiece).

4.3.5.3 Experimental results and discussion

The developed software has been used to build a dataset of stripe images for the WPs listed in Table 4.2. Once the WP is mounted in the 6-axis positioning system, a binary pattern with a black background and a white window inside is generated (see an example in Figure 4.28). The size and the position of the white window in the pattern are not important at the beginning, since they will be changed later. This pattern is projected on the screen and its corresponding camera view is recorded. Both the projected pattern and the camera view are displayed on the user's monitor, as seen in Figure 4.29, in order to help the user choose the ideal ROI.

The task of the user is therefore to adapt the projected white window to the desired ROI in the WP to be inspected by changing the size (increasing/decreasing the number of rows/columns) and the position of the white window using the developed software. Once

the white window is fixed, its position and size are then used in the generation process of

the sinusoidal patterns, in such a way that the sinusoidal patterns to be projected will be

generated in this position and with this size. The size corresponds to Nx and Ny, the number


Figure 4.28 — generated binary pattern (black background with a white window inside) used to select the ROI of the inspected area on the WP's surface.

Figure 4.29 — user's monitor consisting of the software interface (right window), projected pattern (bottom-left window) and camera view of the surface (top-left window).

of columns and rows in pixel of the stripe patterns defined in paragraph 4.3.2.

After fixing the size and position of the sinusoidal stripes, the next step consists of choosing

the orientation (horizontal or vertical projection), amplitude (A), period (number of stripes

Nbstripes) and the phase-shifting step (δ).

We have chosen the following configuration for the inspected WPs :

– Vertical and horizontal projection,

– A=255,

– Nbstripes=[4, 5, 6, 7, 8, 9, 10, 15],

– δ varying over the period range.

Application to the WP "AL25_S06"

For this defect, we have chosen a vertical projection, which is better adapted to this kind of surface, since the reflection of the light in the horizontal projection is affected by the orientation of the surface, as seen in Figure 4.30, which shows a recorded pattern corresponding to a horizontal projection with Nbstripes = 7 and δ = 0.

Figure 4.30 — example of a recorded pattern corresponding to the horizontal projection of a basic sinusoidal stripe pattern with a period of 7 stripes.

The recorded vertical patterns corresponding to different periods are shown in Table 4.3

with δ = 0 (the basic sinusoidal stripe pattern).

Table 4.3 — recorded vertical stripe patterns for different numbers of stripes (Nbstripes = 4, 5, 6, 7, 8, 9, 10 and 15) with δ = 0.


Figure 4.31 — calculated phase images with Nbstripes = 4 and δ = 50.

We recall that the defect detection is based on stripe analysis, where the detected stripes

within the recorded patterns are compared to each other. The comparison is based on calcu-

lating the distances between the pixels constituting two neighbor stripes. In the case where

one stripe passes through the defect pixels, the presence of the defect distorts the shape of

the stripe. This distortion is used to locate the position of the anomaly in the surface.

The patterns in Table 4.3 show that the larger the width of the fringes (larger than the defect), the less they are distorted by the presence of the defect. This can be clearly seen in the phase images and the detected stripes in Table 4.4. The phase images are obtained with the 4-step phase-shifting algorithm, applied on the basic sinusoidal stripe pattern and three phase-shifted versions, as described in Eq. (4.8). The phase images are used to detect the

stripes with the curve-creation algorithm described in paragraph 4.3.4.2. The periods of the

recorded patterns are automatically calculated from the phase images and are also reported

in Table 4.4.
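One simple way to obtain these periods automatically is sketched below, under our own assumption that the period can be taken as the ROI width divided by the average number of 2π wrapping jumps per row of the phase image.

import numpy as np

def estimate_period(phase_image, jump_threshold=np.pi):
    # Count the 2*pi wrapping jumps in each row and derive the stripe period.
    jumps = np.abs(np.diff(phase_image, axis=1)) > jump_threshold
    jumps_per_row = jumps.sum(axis=1)
    mean_jumps = jumps_per_row[jumps_per_row > 0].mean()
    return phase_image.shape[1] / mean_jumps              # period in pixels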

For example, in the case where Nbstripes = 4, we see in the phase and curve images in Table 4.4 that the defect is located approximately between two stripes. Since the period of the recorded pattern (approximately 119 pixels) is much larger than the defect width (about 15 pixels), the presence of the defect does not distort the stripes in the phase image. We need to shift the basic sinusoidal stripe pattern several times in order to obtain distorted fringe patterns. In this case, the first distorted fringes are obtained after a shift of δ = 50. The obtained phase and curve images with δ = 50 are shown in Figures 4.31 and 4.32.

In that case, the presence of the defect affects the regularity of the stripe passing through the defect, as shown in Figure 4.32.

Nbstripes | Period in pixels
4 | 119
5 | 93
6 | 74
7 | 60
8 | 55
9 | 49
10 | 44
15 | 29

Table 4.4 — calculated phase images for different values of Nbstripes obtained with the 4-step phase-shifting algorithm, and their corresponding calculated periods in pixels (the phase image and the detected stripes are shown for each value).

Figure 4.32 — detected stripes from the phase images with Nbstripes = 4 and δ = 50.

The disadvantage, when the period of the stripes is much larger than the width of the defect, is that we should scan the entire period range of the

stripes by shifting δ as many times as the period to obtain distorted stripes, in order to detect the presence


of the defect in the surface. The entire period range scanning is of course time consuming, which is not desirable in the case of industrial applications.

Figure 4.33 — detected stripes, with Nbstripes = 15 and δ = 0, superimposed on the recorded pattern in order to match the period of the stripes with the recorded pattern.

A solution is to consider a fringe pattern with a small period, where, at most, the period of the fringe pattern should be twice the width of the defect. Since the curve-creation algorithm detects the maxima (peaks) and the minima (valleys) in each period, as seen in Figure 4.33, two stripes are detected in each period, so the period range to scan can

be halved. Figure 4.33 shows an example of the period range calculated from the phase image

obtained with Nbstripes = 15 and δ = 0, where the stripe image is superimposed on the

recorded pattern.

Knowing that the width of the defect in the WP "AL25_S06" is about 15 pixels, the

optimal period for this defect is about 30 pixels corresponding to Nbstripes = 15. If this

period is considered, the choice of the shifting step δ is not very important, since at least

one curve should pass through the defect area and will be distorted. The distortion is used to

locate the non-normal regions between the stripes.
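As a quick numerical check, and under the assumption that the ROI width is the 444 pixels reported for this WP in section 4.3.6, the rule "period about twice the defect width" leads back to the number of stripes used here :

defect_width = 15                                # approximate defect width in pixels (WP "AL25_S06")
roi_width = 444                                  # assumed ROI width in pixels (see section 4.3.6)
optimal_period = 2 * defect_width                # about 30 pixels
nb_stripes = round(roi_width / optimal_period)   # 444 / 30 = 14.8, i.e. 15 stripes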

For Nbstripes = 15, we have used four sinusoidal patterns I1, I2, I3 and I4, depicted in

Figure 4.34a-d, to calculate the phase image φ, shown in Figure 4.35. I1 is the basic sinusoidal

stripe pattern with δ = 0. I2, I3 and I4 are three phase-shifted patterns from the basic one

with respectively δ = P/4, δ = (2× P )/4, and δ = (3× P )/4. P is the period of the stripes,

in this case P = 29 pixels.

The phase image φ is calculated as described in Eq. (4.8). The obtained image is depicted

in Figure 4.35.


Figure 4.34 — four patterns : a) basic sinusoidal stripe pattern I1 and three phase-shifted patterns b) I2, c) I3 and d) I4, used to calculate the phase image depicted in Figure 4.35.

Figure 4.35 — calculated phase image φ from the patterns I1, I2, I3 and I4, shown in Figure 4.34.

The phase image φ is then used to detect the stripes by applying the curve-creation

algorithm, where the detected stripes are plotted in different colors in Figure 4.36.

The detected anomalies (pixels marked in red) between the curves, resulting from the stripe-inspection algorithm, are shown in Figure 4.37.


Figure 4.36 — detected fringes with the curve-creation algorithm applied to the phase image φ shown in Figure 4.35.

Figure 4.37 — detected anomalies with the stripe-inspection algorithm applied to the curves image shown in Figure 4.36.

Application to the WP "AL109_S03"

The same processing steps followed for the WP "AL25_S06" have also been considered for

the WP "AL109_S03". Horizontal projection has been chosen for this defect. The recorded

horizontal patterns for different periods are shown in Table 4.5 corresponding to δ = 0 (the

basic sinusoidal stripe pattern).

Table 4.6 reports the calculated periods and shows the phase and stripe images obtained respectively with the 4-step phase-shifting and curve-creation algorithms.

Knowing that the height of the defect in the WP "AL109_S03" is about 10 pixels, the

optimal period for this defect is about 20 pixels, corresponding to Nbstripes ≥ 7.

For Nbstripes = 7, we have used the four sinusoidal patterns I1, I2, I3 and I4, depicted in Figure 4.38a-d, to calculate the phase image φ, shown in Figure 4.39. I1 is the basic sinusoidal stripe pattern with δ = 0. I2, I3 and I4 are three patterns phase-shifted from the basic one with respectively δ = P/4, δ = (2 × P)/4 and δ = (3 × P)/4, with P = 20 pixels.

Table 4.5 — recorded horizontal stripe patterns for different numbers of stripes (Nbstripes = 4, 5, 6, 7, 8, 9, 10 and 15) with δ = 0.

Figure 4.38 — four patterns : a) basic sinusoidal stripe pattern I1 and three phase-shifted patterns b) I2, c) I3 and d) I4, used to calculate the phase image depicted in Figure 4.39.


Nbstripes | Period in pixels
4 | 37
5 | 29
6 | 24
7 | 20
8 | 18
9 | 16
10 | 14
15 | 10

Table 4.6 — calculated phase images for different values of Nbstripes obtained with the 4-step phase-shifting algorithm, and their corresponding calculated periods in pixels (the phase image and the detected stripes are shown for each value).

Figure 4.39 — calculated phase image φ from the patterns I1, I2, I3 and I4, shown in Figure 4.38.

The detected stripes with the curve-creation algorithm are plotted in different colors in

Figure 4.40.


Figure 4.40 — detected fringes with the curve-creation algorithm applied to the phase image φ shown in Figure 4.39.

Figure 4.41 — detected anomalies with the stripe-inspection algorithm applied to the curves image shown in Figure 4.40.

The detected anomalies (pixels marked in red) between the curves, resulting from the stripe-inspection algorithm, are shown in Figure 4.41.

4.3.6 Performance analysis

The proposed stripe-inspection algorithm is able to detect the discontinuities within the

stripe images, shown in Figures 4.36 and 4.40 for both WPs "AL25_S06" and "AL109_S03",

respectively. The detected discontinuities indicate the presence of surface defects within the inspected WP. The detected anomalies (red pixels) in Figures 4.37 and 4.41 are obtained by applying the stripe-inspection algorithm to one phase image. When only one phase image is used, meaning that only 1/P of the period range of the stripes is considered, not all the pixels of the defect are detected; only a part of the defective area is detected. But the important point is that the algorithm indicates the presence of


the defect in the surface. However, if all the pixels of the defect are to be detected, the entire range of the period should be considered. For this, we vary the number of phase images from 1 to P with different step values. The defects detected in all phase images are grouped together, forming one single detection map, which is superimposed on one image of the recorded patterns. Tables 4.7 and 4.8 show the detected pixels, marked in red, with

different step values for both WPs "AL25_S06" and "AL109_S03" respectively.

The results in Tables 4.7 and 4.8 show that the smaller the step value, the larger the number of phase images, the more defect pixels are detected and the better the defect is located.

On the other hand, the more we increase the number of processed phase images, the longer the inspection time. This can be clearly seen in Table 4.9, which reports the times in seconds needed to inspect the WPs "AL25_S06" and "AL109_S03" with different numbers of phase images. The spatial dimensions (in pixels) of the processed images for the WPs "AL25_S06" and "AL109_S03" are respectively (164×444) and (150×342). All calculations were done with Matlab (R2009b) on a basic computer.

The inspection time without taking into account the acquisition of the images, as seen

in Figure 4.42, varies proportionally to the number of the phase images considered in the

inspection process. However, optimal results are obtained for both inspected WPs with three phase images, where the defect can be well located (as seen in Tables 4.7 and 4.8) with very low inspection times, 11.78 and 4.76 seconds, respectively for the WPs "AL25_S06" and "AL109_S03". The higher inspection time in the case of WP "AL25_S06", compared to WP "AL109_S03", is due to the spatial dimensions of the inspected area and mainly to the larger number of stripes Nbstripes, since Nbstripes = 15 and Nbstripes = 7 have been considered in the case of the WPs "AL25_S06" and "AL109_S03", respectively. To summarize, the inspection time depends on the period of the stripes (related to the size of the defect) and on the number of phase images considered in the inspection process.

Step | Number of considered phase images
29 = P | 1
15 | 2
10 | 3
5 | 6
4 | 8
3 | 10
2 | 15
1 | 29

Table 4.7 — detected pixels, marked in red, obtained with the stripe-inspection algorithm for different step values in the period range, for the WP "AL25_S06" (the detected anomalies are shown as images for each step value).

Step | Number of considered phase images
20 = P | 1
10 | 2
8 | 3
5 | 4
4 | 5
3 | 7
2 | 10
1 | 20

Table 4.8 — detected pixels, marked in red, obtained with the stripe-inspection algorithm for different step values in the period range, for the WP "AL109_S03" (the detected anomalies are shown as images for each step value).

WP "AL25_S06" : number of phase images | inspection time (s)
1 | 4.32
2 | 8.22
3 | 11.78
6 | 23.23
8 | 31.25
10 | 38.53
15 | 58.21
29 | 111.23

WP "AL109_S03" : number of phase images | inspection time (s)
1 | 1.76
2 | 3.26
3 | 4.76
4 | 6.20
5 | 7.68
7 | 10.41
10 | 14.76
20 | 29.25

Table 4.9 — inspection times (in seconds) with different numbers of phase images for the WPs "AL25_S06" and "AL109_S03".

Figure 4.42 — inspection time in seconds according to the number of considered phase images.

4.4 Conclusion

In this chapter, an approach for the inspection of free-form surfaces has been proposed. It is based on the combination of the phase-shifting technique and fringe analysis. The classical free-form inspection methods are based on inverse fringe analysis, used to invert the inspection process, which consists in projecting regular fringe patterns onto the inspected surface and recording their camera views. The 3D shape of the surface and the presence of defects on the surface disturb the regularity of the projected patterns. In the inverse fringe approach, the 3D information about the inspected part is needed in order to simulate an inverse fringe

pattern used to correct the disturbed patterns. Once this inverse fringe pattern is projected

onto the surface, the camera records a regular pattern. If some irregularities are observed,

this indicates the presence of defects in the surface. The CAD model of the inspected object is most often unavailable, so that a preliminary 3D scanning of the surface is necessary. The 3D scanning of the surface requires an accurate calibration of the system, which requires precise work and generally consumes a lot of processing time.

The approach proposed in this chapter overcomes the need for the CAD model and the 3D scanning of the inspected object and does not require a calibration step of the inspection


system. It consists of using a structured light technique (phase-shifting approach) in the case of deflectometric recording. The 4-step phase-shifting algorithm has been considered. This

algorithm requires a total number of four projection patterns, one basic sinusoidal fringe

pattern and three phase-shifted patterns of the basic one. We used several basic sinusoidal

patterns with different periods and we created a list of sinusoidal patterns shifted through the

whole range of the considered period. Two workpieces, containing different surface anomalies,

have been used in the context of this work. An industrial deflectometric inspection system has been used to send the patterns created by the computer synchronously to the projector. These patterns are then fully reflected by a mirror and projected onto a translucent screen providing a diffuse light source, which can be considered as a secondary source of light patterns.

These diffuse light patterns are then reflected by the surface onto the camera. We have used

the recorded patterns to calculate phase-images, obtained from each quadruplet of four phase-

shifted images. We developed an algorithm to detect the stripes within the calculated phase

images, where the detected stripes have been used to inspect the surface. The inspection is

based on a developed fringe analysis algorithm, which seeks to detect the abnormal regions between each pair of neighboring detected stripes. The anomalies are defined when the algorithm detects a discontinuity in a stripe or when the distance between the pixels constituting the stripes deviates too much from the most common distance between the two checked stripes, caused by

the presence of the defect in the surface. The defect is then localized in the area where the

anomalies are detected.

When the size of the defect is known, the optimal detection is obtained with patterns that have a period which is twice the size of the defect, since in one period two stripes are detected, corresponding to the peaks and valleys of the sinusoidal pattern. In that case, when the period of the pattern is twice the size of the defect, the presence of the defect in the surface will disturb the regularity of the stripes and the algorithm is able to localize its position. When no knowledge about the defect is available, the whole range of the period should be scanned. However, if the period of the pattern is too large compared to the defect size, the stripes can remain regular if they are projected away from the defect region. In this case, and in order to obtain non-continuous or distorted stripes, the whole range of the period should be considered. In that case, the inspection time becomes significant, since it depends on the period of the projected patterns and on the number of considered phase-images.


Thus, a priori knowledge of the size of the sought defect allows the inspection time to be reduced.

Finally, we have experimentally shown that the proposed method gives interesting results on images of industrial quality, even for non-planar surfaces which introduce distortion in the reflected fringes.


Concluding remarks

THIS thesis focuses on supervised and unsupervised inspection of metal surfaces with multimodal recording and processing. Particular attention is given to two main applications: the surface examination of nuclear components and of car wheels, with, respectively, thermographic imagery and stripe projection techniques. Hyperspectral imagery (HSI) algorithms, presented in chapter 1, are applied to multimodal recorded datasets. Chapters 2 and 3 deal with the application of HSI algorithms to multispectral imagery combined with polarization, and to thermographic imagery. Chapter 4 focuses on the examination of free-form surfaces by means of stripe projection techniques.

We recall in chapter 1 the "Hughes phenomenon", characteristic of HSI and related to the large dimension of the processed data space, because of which a signal-space dimensionality reduction step is often required. We describe the most commonly used denoising and dimensionality reduction algorithms applied to HSI: the singular value decomposition (SVD), which projects the data from its original space to its eigenspace, and the maximum noise fraction (MNF), which finds the non-orthogonal directions that maximize the signal-to-noise ratio (SNR). Traditionally, SVD and MNF are used for dimension reduction, but they are also useful for visually identifying the dominant image components. The hyperspectral signal identification by minimum error (HySime) algorithm is used to reduce the dimensionality of the data and to estimate automatically the virtual dimension (VD) of the reduced data space. The minimum description length (MDL) and the Akaike information criterion (AIC) are two criteria that have been used to determine the singular vectors associated with the dominant eigenvalues for the estimation of the VD of the reduced data space.
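As an illustration of the SVD-based reduction recalled above, the following sketch projects a hyperspectral cube onto its first K singular vectors. The cube shape and the value of K are arbitrary assumptions for the example, not those of the thesis experiments.

```python
import numpy as np

def svd_reduce(cube, k):
    """Project a (rows, cols, bands) cube onto its first k singular vectors."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(float)
    x = x - x.mean(axis=0)                    # remove the mean spectrum
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    scores = x @ vt[:k].T                     # pixel coordinates in the k-D eigenspace
    return scores.reshape(rows, cols, k), s

# Example: keep the 5 dominant components of a synthetic 100 x 100 x 50 cube.
cube = np.random.rand(100, 100, 50)
reduced, singular_values = svd_reduce(cube, k=5)
```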

We also recall in this chapter the multivariate imaging algorithms used for target and anomaly detection. The reference supervised detectors are the adaptive matched filter (AMF) and the adaptive cosine/coherence estimator (ACE). Another supervised target detection approach, which consists of calculating spectral distance measures between the target and tested-pixel spectra, is also used: the spectral angle mapper (SAM), the spectral information divergence (SID) and Kendall's τ measure (TAU) are employed in this thesis.


If no prior information about the spectral signature of the target is known, an unsupervised target detection approach is considered, where the most popular unsupervised detectors are anomaly detection algorithms. The basic idea of these algorithms is that anomalies are defined with reference to a model of the background, and the detection is based on statistical hypothesis tests. We used the Reed and Xiaoli Yu (RX) detector, based on the assumption that the background follows a local multivariate Gaussian distribution; RX estimates the Mahalanobis distance between the tested pixel and the background mean. We also used the regularized adaptive RX (RARX) algorithm, which estimates the spatial distribution of the target in the neighborhood of the tested pixel. AMF, ACE and RX have the constant false alarm rate (CFAR) property. These supervised and unsupervised algorithms are tested for different imaging modalities.
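A minimal sketch of the global RX detector described above is given below; the local and regularized variants (RARX) used in this work are not reproduced, and the pseudo-inverse is only one possible way to stabilise the covariance inversion.

```python
import numpy as np

def rx_scores(cube):
    """Global RX: Mahalanobis distance of each pixel spectrum to the background
    mean, using the global covariance of the cube."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(float)
    diff = x - x.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))   # pseudo-inverse for stability
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(rows, cols)

# High scores indicate pixels that are unlikely under the background model (anomalies).
```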

In chapter 2, we present a first application on multispectral images obtained with multiple illumination modalities. We propose a simple technique to produce pseudo-spectral cubes (PSC), using a white light source and monochromatic light sources in combination with polarized light, to illuminate flat metal parts containing artificial surface defects. Light emitting diodes (LEDs), emitting radiation at distinctive spectral positions, are used to illuminate the inspected surface. One LED source illuminates the surface with white light in the visible domain, and nine LED-array sources illuminate it with monochromatic light in the visible and near-infrared domains (from 470 nm to 940 nm). These two basic illumination modalities are combined with polarized and unpolarized light in order to obtain two further modalities. We evaluated the influence of these lighting modalities on the detection of the defects by means of HSI algorithms. We used the algorithms AMF, ACE, SAM, SID and TAU to detect four different targets: two are chosen at two different positions in the defect area, one is chosen as a background pixel and the last is the mean spectrum of the data cube. Choosing the target as a background spectrum or as the mean spectrum of the data cube makes the detection quasi- or totally unsupervised. For those targets, we considered the complement-to-one of the normalized detection map, since the objects of interest are the defects and not the background. We also used RX for unsupervised detection. The supervised approaches lead to very high false alarm rates. This is explained by the lack of information about the spectral signature of the defects: it is difficult to determine a single representative spectral signature for a real defect, because there can be a wide variation for the same defect, and it may be impractical or even impossible to investigate all possible defect signatures. This is why we investigated the unsupervised methods. However, SAM, which is widely used for material identification, showed interesting results when the target to be detected is chosen as a background pixel or as the mean of the data cube.
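The SAM measure and the complement-to-one step mentioned above can be sketched as follows; the cube, the target and the normalisation are synthetic placeholders, not the thesis data or code.

```python
import numpy as np

def sam_map(cube, target):
    """Spectral Angle Mapper: angle between each pixel spectrum and a target
    spectrum; small angles mean pixels similar to the target."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands).astype(float)
    t = np.asarray(target, dtype=float)
    cosang = (x @ t) / (np.linalg.norm(x, axis=1) * np.linalg.norm(t) + 1e-12)
    return np.arccos(np.clip(cosang, -1.0, 1.0)).reshape(rows, cols)

# Quasi-unsupervised use: the "target" is the mean spectrum of the cube.
cube = np.random.rand(64, 64, 10)                       # placeholder pseudo-spectral cube
angles = sam_map(cube, cube.reshape(-1, 10).mean(axis=0))
similarity = 1.0 - angles / (angles.max() + 1e-12)      # normalised detection map
defect_map = 1.0 - similarity                           # complement-to-one highlights defects
```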

In chapter 3, we present another application of HSI algorithms, on thermographic images, for the unsupervised detection of surface and subsurface anomalies within nuclear metal components. Two active infrared thermography (IRT) processes, pulsed and lock-in thermography (PT and LT, respectively), are used for the inspection of three metallic parts containing open cracks, and open and closed notches of different sizes and depths. In both processes, the specimen is heated by an inductor for a few seconds (from 1 to 10 s) and is left to cool for 5 to 10 s. The IR camera images the temperature variations as thermograms during the heating and cooling phases. The acquired thermal images are grouped into a sequence of thermograms, where the first two dimensions represent the spatial information and the third dimension represents the temperature profile of the surface. A dataset of thermal cubes is established by changing the parameters of the heating sequence. The acquired thermal cubes are very large, with the number of images per cube exceeding 900. We investigate an unsupervised approach for the detection of the anomalies, which is why only RX and RARX are used here. We show that using the whole cube is not efficient for the detection of the anomalies. We therefore propose to consider both parts of the temperature profile, keeping the whole information about the temporal behavior of each pixel, and to reduce the data space of the acquired cubes using different denoising and dimensionality reduction algorithms. SVD and MNF are used for different subspace dimensions K fixed between 2 and 15 components. The results are compared by means of false alarm rates and ROC curves. They show that the optimal detections are obtained with small signal subspaces (from 2 to 10 components). The results are better when only three main classes are present within the data (background, heating tool and defect pixels) and no additional perturbation pixels are present in the scene, such as the reflection of the heating tool on the surface or an additional marker. The AIC and MDL criteria and the HySime algorithm are used to estimate the VDs of the reduced data spaces. The estimated VDs are often overestimated (hundreds of components), which is not optimal, since we have experimentally shown that optimal false alarm rates are obtained with subspaces of small dimension. We therefore propose an empirical method for estimating the VD of the reduced data. It is based on the evolution of the energy and the SNR: these two quantities are arranged in descending order, and the VD is selected when the difference in energy/SNR between two adjacent components becomes very small. The values obtained with this method are very close to those found experimentally. All of the considered defects were at least partly detected, even for disturbed hyper-thermal images.
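The empirical VD selection rule described above can be sketched as follows; the relative threshold on the energy/SNR drop is an illustrative assumption, not the value used in the experiments.

```python
import numpy as np

def estimate_vd(values, tol=0.01):
    """Empirical virtual-dimension estimate: sort the component energies (or SNRs)
    in descending order and return the index at which the drop between two
    adjacent components becomes very small (relative to the largest value)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    drops = np.abs(np.diff(v)) / (v[0] + 1e-12)
    small = np.where(drops < tol)[0]
    return int(small[0]) + 1 if small.size else len(v)

# Example with the singular values of a reduced thermal cube (synthetic numbers).
energies = np.array([120.0, 40.0, 9.0, 1.2, 1.1, 1.05, 1.02])
vd = estimate_vd(energies)    # -> a small dimension, consistent with the experiments
```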

Finally, we propose a method for reducing the dimensionality of the data space to one. It consists of projecting the original data onto the direction of one selected principal component (PC). We use principal component analysis (PCA) to reduce the data space; the PC that has the maximum kurtosis is selected, and the data are then projected onto its direction. The kurtosis criterion informs about the Gaussianity of the data: the PC with the maximum kurtosis is considered the most abnormal component, which is supposed to correspond to the anomaly. A hypothesis test is then applied, testing each pixel to determine whether or not it follows a normal distribution. The obtained results are comparable to those obtained with SVD and MNF. The proposed one-dimensional reduction approach allows fast detection, which is an important parameter in the context of industrial applications, and gives better results when the thermographic images are acquired in ideal conditions, i.e. with good quality and noise as low as possible. When the image quality is not sufficient, the approach with SVD/RARX shows good robustness and should be preferred.
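A minimal sketch of the PCA / maximum-kurtosis projection described above is given below; the subsequent per-pixel normality hypothesis test is not reproduced, and the cube shown is a synthetic placeholder.

```python
import numpy as np

def max_kurtosis_component(cube):
    """PCA on a (rows, cols, frames) thermal cube, then keep the principal
    component whose scores have the largest kurtosis (the most non-Gaussian,
    hence most 'abnormal', direction)."""
    rows, cols, frames = cube.shape
    x = cube.reshape(-1, frames).astype(float)
    x = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)     # PCA via SVD
    scores = x @ vt.T                                    # projections on all PCs
    mu, sd = scores.mean(axis=0), scores.std(axis=0) + 1e-12
    kurt = ((scores - mu) ** 4).mean(axis=0) / sd ** 4   # (non-excess) kurtosis per PC
    best = int(np.argmax(kurt))
    return scores[:, best].reshape(rows, cols)           # 1-D projection as an image
```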

In chapter 4, we present an inspection approach for reflective free-form surfaces based on an industrial machine vision system. This approach uses structured light techniques with deflectometric recording to produce optical images, and fringe analysis to detect the anomalies within the inspected metallic surfaces (car wheels in our experiments). The inspection system is a 6-axis system which projects fringe patterns via a DLP projector onto a translucent screen. The reflection of the screen in the inspected surface is observed by a CCD camera. The proposed technique is based on the combination of a phase-shifting technique and fringe analysis. We create a regular sinusoidal pattern with a certain period and shift it through the whole range of its period. From these we create a list of phase-shifted patterns, project them with the 6-axis system and record the corresponding patterns with the CCD camera. The 3D shape of the surface and the presence of defects on the surface disturb the regularity of the projected patterns. Two car wheels, containing different surface anomalies, are used in these experiments. We use the 4-step phase-shifting algorithm to calculate the phase-images, one from each quadruplet of four phase-shifted images. We detect the stripes within the calculated phase-images with a dedicated algorithm, and the detected stripes are used to inspect the surface. The defects are located where discontinuities in a stripe are detected, or where the distance between the pixels constituting two checked stripes exceeds the most common distance between them, both effects being caused by the presence of a defect in the surface (a sketch of such a spacing check is given after this paragraph). Different periods of the created patterns are tested. The results show that, when some a priori information about the size of the defect is known, the optimal detections are obtained when the period of the recorded patterns is twice the size of the defect; in that case the defect disturbs the regularity of the stripes throughout the whole period range, and only four phase-shifted projected patterns are needed to localize it. When an unsupervised approach is used, the whole range of the period of the recorded patterns must be scanned. In that case the inspection time becomes considerable, since it depends on the period of the projected patterns and on the number of considered phase-images.
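The stripe-spacing check referred to above can be sketched as follows, assuming the detected stripes are stored as per-column row positions with NaN marking interruptions; the data layout and the relative tolerance are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def stripe_spacing_anomalies(stripe_a, stripe_b, tol=0.25):
    """Flag columns where the spacing between two neighbouring detected stripes
    deviates from the most common spacing by more than a relative tolerance,
    or where one of the stripes is interrupted (NaN position)."""
    dist = np.abs(np.asarray(stripe_b, float) - np.asarray(stripe_a, float))
    valid = ~np.isnan(dist)
    # Most common spacing, estimated from a histogram of the valid distances.
    hist, edges = np.histogram(dist[valid], bins=20)
    common = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    deviates = np.zeros_like(dist, dtype=bool)
    deviates[valid] = np.abs(dist[valid] - common) > tol * common
    return deviates | ~valid      # True where a defect-like irregularity occurs
```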

This method allows defect detection to be performed on free-form surfaces. Notably, these results were obtained without resorting to methods based on a preliminary correction of the fringes to straighten them before analysis.

Perspectives

For the multispectral approach, it might be interesting in future work to investigate hyperspectral images by using a hyperspectral sensor for the inspection of metallic surfaces. We could then perform dimension reduction on hyperspectral data, as well as carry out a deeper study of the most interesting parts of the spectral response of the inspected materials.


Indeed, in our experiments we found no pseudo-spectral signature for the inspected pieces, mainly because of the influence of the geometry of the defect, which could not be predicted.

The hyper-thermal detection showed interesting results: in this case a consistent thermal signature can be extracted, and dimensionality reduction is relevant. We proposed a robust approach, and a second one whose performance depends on the image quality. The latter method could be improved by choosing a compromise criterion between energy and kurtosis instead of the one we proposed; this could be developed in future work.

The results of the structured light detection could be enhanced by a more precise analysis of the fringes, in order to obtain a better estimate of the shape of the defect as well as to extract additional parameters. It would also be interesting to apply this approach to a wide range of pieces, in order to obtain statistical measures of performance.

An overall comparison between such different techniques as the multi-/hyperspectral approaches and the structured light approach would also be interesting, although difficult to carry out rigorously.


Appendices


ANNEXE A

Fundamentals of infrared thermography (IRT)

A.1 Thermal energy

A.1.1 Heat transfer

Heat transfer allows us to predict the energy transfer taking place between two bodies due only to a temperature difference. This science is important for all energy-related applications, such as power plants, industrial processes, refrigeration, and electronics. Heat transfer problems are very important in TT, helping to explain observed phenomena such as abnormal temperature patterns. More specifically, heat transfer is concerned with calculating the temperature distribution and heat exchanges in a given system when the operating conditions are known, and also with the opposite problem of finding the operating conditions from a known temperature distribution and heat exchanges.

Energy can be changed from one form to another. For instance, a car engine converts the

chemical energy of gasoline to thermal energy. That, in turn, produces mechanical energy,

as well as electrical energy for lights or ignition, and heat energy for the defroster or air

conditioner. During these conversions, although the energy becomes more difficult to harness,

none of it is lost. This is the first law of thermodynamics. A byproduct of nearly all energy

conversions is heat or thermal energy.

When there is a temperature difference between two objects, or when an object is changing

temperature, heat energy is transferred from the warmer areas to the cooler areas until thermal

equilibrium is reached. This is the second law of thermodynamics. A transfer of heat energy


results either in electron transfer or increased atomic or molecular vibration.

Heat energy can be transferred by any of three modes : conduction, convection, or ra-

diation. Heat transfer by conduction occurs primarily in solids, and to some extent in fluids,

as warmer molecules transfer their energy directly to cooler, adjacent ones. Convection takes

place in fluids and involves the mass movement of molecules. Radiation is the transfer of

energy between objects by electromagnetic radiation. Because it needs no transfer medium,

it can take place even in a vacuum.

Transfer of heat energy can be described as either steady-state or transient. In the steady-

state condition, heat transfer is constant and in the same direction over time. A fully warmed-

up machine under constant load transfers heat at a steady-state rate to its surroundings. In

reality, there is no such thing as true steady-state heat flow. Although we often ignore them,

there are always small transient fluctuations. A more accurate term is quasi-steady-state heat

transfer. When heat transfer and temperatures are constantly and significantly changing with

time, heat flow is said to be transient. A machine warming up or cooling down is an example.

Because thermographers are often concerned with the movement of heat energy, it is vital to

understand what type of heat flow is occurring in a given situation.

Heat energy is typically measured in British thermal units (Btu) or calories (c). A Btu is

defined as the amount of energy needed to raise the temperature of one pound of water one

degree Fahrenheit. A calorie is the amount of heat energy needed to raise the temperature of

one gram of water one degree Celsius. Temperature is a measure of the relative "hotness" of a

material compared to some known reference. There are many ways to measure temperature.

The most common is to use our sense of touch. We also use comparisons of various material

properties, including expansion (liquid and bimetal thermometers), a change in electrical vol-

tage (thermocouple), and a change in electrical resistance (bolometers). Infrared radiometers

infer a temperature measurement from detected infrared radiation.

Regardless of how heat energy is transferred, thermographers must understand that ma-

terials also change temperatures at different rates due to their thermal capacitance. Some

materials, like water, heat up and cool down slowly, while others, like air, change temperature

quite rapidly. The thermal capacitance or specific heat of a material describes this rate of

change. Without an understanding of these concepts and values, thermographers will not be

able to properly interpret their findings, especially with regard to transient heat flow situa-


tions. Although potentially confusing, these properties can also be used to our advantage.

Finding liquid levels in tanks, for example, is possible because of the differences between the

thermal capacitance of the air and the liquid.

A.1.2 Latent Heat

As materials change from one state or phase (solid, liquid or gas) to another, heat energy

is released or absorbed. When a solid changes state to a liquid, energy is absorbed in order

to break the bonds that hold it as a solid. The same thing is true, as a liquid becomes a gas ;

energy must be added to break the bonds. As gases condense into liquids, and as liquids freeze

into solids, the energy used to maintain these high-energy states is no longer needed and is

released.

This energy, which can be quite substantial, is called latent energy because it does not

result in the material changing temperature. The impact of energy released or absorbed during

phase change often affects thermographers. The temperature of a roof surface, for instance,

can change very quickly as dew or frost forms, causing problems during a roof moisture survey.

A wet surface or a rain-soaked exterior wall will not warm up until it is dry, thus masking

any subsurface thermal anomalies. On the positive side, state changes enable thermographers

to see thermal phenomena, such as whether or not solvents have been applied evenly to a

surface.

A.1.3 Conduction

Conduction is the transfer of thermal energy from one molecule or atom directly to another

adjacent molecule or atom with which it is in contact. This contact may be the result of

physical bonding, as in solids, or a momentary collision, as in fluids. Fourier’s law of conduction

describes how much heat is transferred by conduction :

Q = (k / L) × A × ∆T    (A.1)

where Q is the transferred heat, k is the thermal conductivity, L is the thickness of the material, A is the area normal to the heat flow and ∆T is the temperature difference.
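A small worked example of Fourier's law (Eq. A.1), with illustrative values and the aluminium conductivity from Table A.1 below; the geometry is an assumption chosen only for the arithmetic.

```python
# Fourier's law: Q = (k / L) * A * dT
k = 202.0        # thermal conductivity of pure aluminium, W m-1 C-1 (Table A.1)
L = 0.005        # plate thickness, m
A = 0.01         # area normal to the heat flow, m^2
dT = 15.0        # temperature difference across the plate, C

Q = (k / L) * A * dT
print(f"Conducted heat flow: {Q:.0f} W")    # ~6060 W for these illustrative values
```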


Material              k (W m−1 C−1)      Material                                 k (W m−1 C−1)
Silver (pure)         410                Chromium                                 90
Copper (pure)         385                Iron (pure)                              73
Gold                  320                Germanium                                60
Aluminum (pure)       202                Carbon steel (1% C)                      43
Silicon               150                Lead (pure)                              35
Nickel (pure)         93                 Chrome-nickel steel (18% Cr, 8% Ni)      16.3

Table A.1 — thermal conductivity value k of common metal materials at room temperature (source: adapted from [104]).

The thermal conductivity (k) is the quantity of heat energy that is transferred through

one square foot of a material, which is one inch thick, during one hour when there is a one-

degree temperature difference across it. The metric equivalent (in watts) is W m−1 C−1 and

assumes a thickness of one meter. Materials with high thermal conductivities, such as metals,

are efficient conductors of heat energy. We use this characteristic to our advantage by making

such things as cooking pans and heat sinks from metal. Differences in conductivity are the basis

for many thermographic applications, especially the evaluation of flaws in composite materials

or the location of insulation damage. Materials with low thermal conductivity values, such

as wool, fiberglass batting, and expanded plastic foams, do not conduct heat energy very

efficiently and are called insulators. Their insulating value is due primarily to the fact that

they trap small pockets of air, a highly inefficient conductor. Table A.1 lists values of k for

several common metal substances. The complete list for other common nonmetal materials

can be found in [104].

For a given substance, the thermal conductivity depends on the temperature. Consider, for example, nitrogen (gas), which has a k value of 0.1 W m−1 C−1 at 1000 K and 0.01 W m−1 C−1 at 100 K. Aluminum also exhibits huge thermal conductivity variations, reaching about 20,000 W m−1 C−1 at 10 K and flattening to 300 W m−1 C−1 in the temperature range 100 to 1000 K. Materials with such a huge thermal conductivity at low temperatures are referred to as super-thermal conductors.

The term R-value, or thermal resistance, is a measure of the resistance to conductive heat

flow. It is defined as the inverse of conductivity, or 1/k. R-value is a term that is generally

used when describing insulating materials.

Another important material property is thermal diffusivity. Thermal diffusivity is the rate

at which heat energy moves throughout the volume of a material. Diffusivity is determined

by the ratio of the material’s thermal conductivity to its thermal capacitance. Differences in

diffusivity and consequent heat flow are the basis for many active thermography applications

in TT.

A.1.4 Convection

Heat energy is transferred in fluids, either gases or liquids, by convection. During this

process, heat is transferred by conduction from one molecule to another and by the subsequent

mixing of molecules. In natural convection, this mixing or diffusing of molecules is driven by

the warmer (less dense) molecules’ tendency to rise and be replaced by more dense, cooler

molecules. Cool cream settling to the bottom of a cup of hot tea is a good example of natural

convection. Forced convection is the result of fluid movement caused by external forces such

as wind or moving air from a fan. Natural convection is quickly overcome by these forces,

which dramatically affect the movement of the fluid. Newton’s law of cooling describes the

relationship between the various factors that influence convection :

Q = h × A × ∆T    (A.2)

where Q is the heat energy, h is the coefficient of convective heat transfer, A is the area and ∆T is the temperature difference.

The coefficient of convective heat transfer is often determined experimentally or by es-

timation from other test data for the surfaces and fluids involved. The exact value depends

on a variety of factors, of which the most important are velocity, orientation, surface condi-

tion, geometry, and fluid viscosity. Changes in h can be significant due merely to a change in

orientation. The topside of a horizontal surface can transfer over 50 % more heat by natural


convection than the underside of the same surface. In both natural and forced convection,

a thin layer of relatively still fluid molecules adheres to the transfer surface. This boundary

layer, or film coefficient, varies in thickness depending on several factors, the most important

being the velocity of the fluid moving over the surface. The boundary layer has a measu-

rable thermal resistance to conductive heat transfer. The thicker the layer, the greater

the resistance. This, in turn, affects the convective transfer as well. At slow velocities, these

boundary layers can build up significantly. At higher velocities, the thickness of this layer and

its insulating effect are both diminished.

A.1.5 Radiation

In addition to heat energy transfer by conduction and convection, heat can also be trans-

ferred by radiation. Thermal infrared radiation is a form of electromagnetic energy similar

to light, radio waves, and X-rays. All forms of electromagnetic radiation travel at the speed

of light, 3 × 108 m/second. All forms of electromagnetic radiation travel in a straight line

as a waveform ; they differ only in their wavelength. Infrared radiation that is detected with

thermal imaging systems has wavelengths between approximately 2 and 15 microns (µm).

The amount of radiation and the exact wavelengths emitted depend primarily on the temperature of the object. It is this phenomenon that allows us to see radiant surfaces with

infrared sensing cameras.

Due to atmospheric absorption, significant transmission through air occurs in only two

"windows" or wavebands : the short (2− 6 µm) and long (8− 15 µm) wavebands. Both can

be used for many thermal applications. With some applications, one waveband may offer a

distinct advantage or make certain applications feasible.

The amount of energy emitted by a surface depends on several factors, as shown by the

Stefan-Boltzmann formula :

Q = σ × ε × T⁴    (A.3)

where Q is the energy transmitted by radiation, σ is the Stefan-Boltzmann constant, ε is the emissivity of the surface and T is the absolute temperature of the surface.

When electromagnetic radiation interacts with a surface several events may occur. Thermal

radiation may be reflected by the surface, just like light on a mirror. It can be absorbed by the


surface, in which case it often causes a change in the temperature of the surface. In some cases,

the radiation can also be transmitted through the surface ; light passing through a window is

a good example. The sum of these three components must equal the total amount of energy

involved. This relationship, known as the conservation of energy, is stated as follows :

α + ρ + τ = 1    (A.4)

where α is the absorbed fraction (absorptivity or absorbance), ρ is the reflected fraction (reflectance or reflectivity) and τ is the transmitted fraction (transmissivity or transmittance).

Radiation is never perfectly transmitted, absorbed, or reflected by a material. Two or three

phenomena are occurring at once. For example, one can see through a window (transmission)

and also see reflections in the window at the same time. It is also known that glass absorbs

a small portion of the radiation because the sun can cause it to heat up. For a typical glass

window, 92% of the light radiation is transmitted, 6% is reflected, and 2% is absorbed.

Infrared radiation, like light and other forms of electromagnetic radiation, also behaves in

this way. When a surface is viewed, not only radiation that has been absorbed may be seen,

but also radiation that is being transmitted through the target and/or reflected by it. Neither

the transmitted nor reflected radiation provides any information about the temperature of

the surface.

The combined radiation leaving a surface toward the infrared system is called its radiosity. The job of the thermographer is to distinguish the emitted component from the others

so that more about the target temperature can be understood.

Only a few materials transmit infrared radiation very efficiently. The lens material of the

camera is one. Transmissive materials can be used as thermal windows, allowing viewing into

enclosures. The atmosphere is also fairly transparent, at least in two wavebands. In the rest of

the thermal spectrum, water vapor and carbon dioxide absorb most thermal radiation. As can

be seen from Figure A.2, radiation is transmitted quite readily in both the short (3− 5 µm)

and long (8−14 µm) wavebands. Infrared systems have been optimized to one of these bands

or the other. Broadband systems are also available and have some response in both wavebands.

A transmission curve for glass would show us that glass is somewhat transparent in the

short waveband and opaque in the long waveband. It is surprising to try to look thermally


through a window and not be able to see much of anything. Many thin plastic films are

transparent in varying degrees to infrared radiation. A thin plastic bag may be useful as a

camera cover in wet weather or dirty environments. However, all thin plastic films are not

the same. While they may look similar, it is important to test them for transparency and

measure the degree of thermal attenuation. Depending on the exact atomic makeup of the

plastic, they may absorb strongly in very narrow, specific wavebands. Therefore, to measure

the temperature of a thin plastic film, a filter must be used to limit the radiation to those areas

where absorption (and emission) occurs. The vast majority of materials are not transparent.

Therefore, they are opaque to infrared radiation. This simplifies the task of looking at them

thermally by leaving one less variable to deal with: since no radiation is transmitted, the incident radiation is either reflected or absorbed by the surface, so that ρ + α = 1.

If ρ = 1, the surface would be a perfect reflector. Although there are no such materials, the

reflectivity of many polished shiny metals approaches this value. They are like heat mirrors.

Kirchhoff’s law says that for opaque surfaces the radiant energy that is absorbed must also

be reemitted, or α = E. By substitution, it is concluded that the energy detected from an

opaque surface is either reflected or emitted (ρ + E = 1). Only the emitted energy provides

information about the temperature of the surface. In other words, an efficient reflector is

an inefficient emitter, and vice versa. For thermographers, this simple inverse relationship

between reflectivity and emissivity forms the basis for the interpretation of nearly all that is

seen. Emissive objects reveal a great deal about their temperature. Reflective surfaces do not.

In fact, under certain conditions, very reflective surfaces typically hide their true thermal

nature by reflecting the background and emitting very little of their own thermal energy.

If E = 1, all energy is absorbed and reemitted. Such an object, which exists only in

theory, is called a blackbody. Human skin with an emissivity of 0.98 is nearly a perfect

blackbody, regardless of skin color. Emissivity is a characteristic of a material that indicates

its relative efficiency in emitting infrared radiation. It is the ratio of thermal energy emitted

by a surface to that energy emitted by a blackbody of the same temperature. Emissivity is

a value between zero and one. Most non-metals have emissivities above 0.8. Metals, on the

other hand, especially shiny ones, typically have emissivities below 0.2. Materials that are not

blackbodies are called real bodies. Real bodies always emit less radiation than a blackbody

at the same temperature. Exactly how much less depends on their emissivity.


Material                      Emissivity (E)*
Human skin                    0.98
Black paint (flat)            0.90
White paint (flat)            0.90
Paper                         0.90
Lead, oxidized                0.40
Copper, oxidized to black     0.65
Copper, polished              0.15
Aluminum, polished            0.10

* Values will vary with exact surface type and wavelength.

Table A.2 — emissivity values (source: adapted from [104]).

Several factors can affect the emissivity of a material. Besides the material type,

emissivity can also vary with surface condition, temperature, and wavelength. The emittance

of an object can also vary with the angle of view.

It is not difficult to characterize the emissivity of most materials that are not shiny metals.

Many of them have already been characterized, and their values can be found in tables such

as Table A.2. These values should be used only as a guide.

It is interesting to note that cracks, gaps, and holes emit thermal energy at a higher rate

than the surfaces around them. The same is true for visible light. The pupil of your eye is

black because it is a cavity, and the light that enters it is absorbed by it. When all light is

absorbed by a surface, we say it is "black". The emissivity of a cavity will approach 0.98 when

it is seven times deeper than it is wide. From an expanded statement of the Stefan-Boltzmann

law, the impact that reflection has on solving the temperature problem for opaque materials

can be seen :

Q = σ × ε × T⁴ + [σ × (1 − ε) × T⁴_background]    (A.5)


The second part of the equation (between brackets) represents that portion of the radiosity

that comes from the reflected energy. When using a radiometric system to make a measure-

ment, it is important to characterize and account for the influence of the reflected background

temperature.

Consider these two possible scenarios :

– When the object being viewed is very reflective, the temperature of the reflected back-

ground becomes quite significant.

– When the background is at a temperature that is extremely different from the object

being viewed, the influence of the background becomes more pronounced.

It becomes clear that repeatable, accurate radiometric measurements can be made only

when emissivities are high. This is a fundamental limitation within which all thermographers

work. Generally, it is not recommended to make temperature measurements of surfaces with

emissivities below approximately 0.50, in other words all shiny metals, except under tightly

controlled laboratory conditions. However, with a strong understanding of how heat energy

moves in materials and a working knowledge of radiation, the value of infrared thermography

as a noncontact temperature measurement tool for nondestructive evaluation is remarkable.
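The influence of the reflected background can be illustrated by inverting Eq. (A.5) to recover the surface temperature from the measured radiosity. The numbers below are purely illustrative, and real radiometric cameras apply further corrections (atmosphere, optics) that are not modelled here.

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m-2 K-4

def surface_temperature(radiosity, emissivity, t_background):
    """Invert Eq. (A.5): subtract the reflected background term, then solve for T."""
    emitted = radiosity / SIGMA - (1.0 - emissivity) * t_background ** 4
    return (emitted / emissivity) ** 0.25

# Synthetic check: a 350 K surface with emissivity 0.9 seen against a 293 K background.
t_bg = 293.0
q = SIGMA * (0.9 * 350.0 ** 4 + 0.1 * t_bg ** 4)
print(surface_temperature(q, 0.9, t_bg))     # -> 350.0 K
```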

A.2 Infrared systems fundamentals

A.2.1 Thermal emission

All bodies at a temperature above 0 K emit electromagnetic radiation. All objects are

composed of continually vibrating atoms, with higher energy atoms vibrating more frequently.

The vibration of all charged particles, including these atoms, generates electromagnetic waves.

The higher the temperature of an object, the faster the vibration, and thus the higher the spectral radiant energy. As a result, all objects continually emit radiation at a rate and with a wavelength distribution that depend upon the temperature of the object and its spectral emissivity, ε(λ) [105].

Radiant emission is usually treated in terms of the concept of a blackbody. A blackbody

is an object that absorbs all incident radiation and, conversely, according to Kirchhoff's

law, is a perfect radiator. The energy emitted by a blackbody is the maximum theoretically


Figure A.1 — Planck’s law for spectral emittance (after : [106]).

possible for a given temperature. The radiative power (or number of photons emitted) and its wavelength distribution are given by the Planck radiation law:

W(λ, T) = (2π h c² / λ⁵) × [exp(hc / (λ k_B T)) − 1]⁻¹    W/(cm² µm)    (A.6)

W(λ, T) = (2π c / λ⁴) × [exp(hc / (λ k_B T)) − 1]⁻¹    photons/(s cm² µm)    (A.7)

where λ is the wavelength, T is the temperature, h is Planck's constant, c is the velocity of light, and k_B is Boltzmann's constant.

Figure A.1 shows a plot of these curves for a number of blackbody temperatures. As the

temperature increases, the amount of energy emitted at any wavelength increases too, and

the wavelength of peak emission decreases. An interesting relationship between temperature

and wavelength is expressed in Wien’s displacement law. The product of temperature T of

a blackbody with the wavelength λmax at which it is radiating the maximum intensity is

constant:

T λmax_w = 2898 [K µm], for maximum watts
T λmax_p = 3670 [K µm], for maximum photons    (A.8)

The loci of these maxima are shown in Figure A.1. Note that for an object at an ambient


Abbreviation    Wavelength             Name of waveband
NIR             0.7 µm − 1.4 µm        near IR
SWIR            1.4 µm − 3 µm          short wave IR
MWIR            3 µm − 8 µm            mid wave IR
LWIR            8 µm − 15 µm           long wave IR
FIR             15 µm − 1000 µm        far IR

Table A.3 — IR spectral bands (after: [107])

temperature of 310 K, λmax_w and λmax_p occur at 10.0 µm and 12.7 µm, respectively. We need detectors operating near 10 µm if we expect to see room-temperature objects such as people, trees and trucks without the aid of reflected light. For hotter objects such as engines, maximum emission occurs at shorter wavelengths. Thus, the 2−15 µm waveband in the infrared or thermal region of the electromagnetic spectrum contains the maximum radiative emission for thermal imaging purposes.
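The Planck and Wien relations above can be illustrated numerically. The constants below are standard SI values, and the temperature and unit conversion are chosen only as an example.

```python
import numpy as np

H = 6.626e-34      # Planck constant, J s
C = 2.998e8        # speed of light, m/s
KB = 1.381e-23     # Boltzmann constant, J/K

def planck_exitance(wavelength_um, temperature_k):
    """Blackbody spectral exitance (Eq. A.6), returned in W/(cm^2 um)."""
    lam = wavelength_um * 1e-6                      # micrometres -> metres
    w = 2 * np.pi * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * temperature_k)) - 1.0)
    return w * 1e-4 * 1e-6                          # W/m^2/m -> W/(cm^2 um)

T = 310.0                                           # an object near ambient temperature, K
peak_um = 2898.0 / T                                # Wien's law (Eq. A.8): ~9.3 um for max watts
print(peak_um, planck_exitance(peak_um, T))
```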

A.2.2 Spectral bands

The infrared spectral band is above the visual spectral band and stretches from about

0.7 µm to 1000 µm. This very broad spectral band is subdivided into the bands shown in

Table A.3, but the limits of the respective spectral bands are somewhat arbitrary (many

proposals of division of IR range have been published).

A.2.3 Choice of infrared band

The transmission of electromagnetic radiation through the atmosphere is not constant over

the spectrum. It depends on certain weather phenomena and some atmospheric constituents

which absorb parts of the spectrum. The major radiation absorbers are water vapor, carbon

dioxide, nitrous oxide, carbon monoxide and ozone. A very common presentation of the at-

mospheric transmission is along a 1 km horizontal path at sea level ; see Figure A.2. In this


Figure A.2 — atmospheric transmission from 0.7 µm to 14 µm at sea level and the spectral bands (after: [107]).

figure, different atmospheric windows can be seen. The subdivision into the spectral bands

almost coincides with the atmospheric windows. Atmospheric absorption reduces the useful part of the MWIR band to about 3−5 µm, and the useful part of the LWIR band to the 8−14 µm wavelength range.

The division into the respective wavelength bands can also be observed on the detector

side. For example, there are detectors for the SWIR, MWIR, LWIR and NIR + SWIR spectral

bands. In general, the 8 − 14 µm band is preferred for high performance thermal imaging

because of its higher sensitivity to ambient temperature objects and its better transmission through mist and smoke. However, the 3−5 µm band may be more appropriate for hotter objects, or if sensitivity is less important than contrast.

A.2.4 Infrared radiation from the object to the image

In lens design there is a quite simple model in describing the imaging chain : an object is

transformed to the image by the optical lens system :

Object → Optical Lens System → Image.

This model can be expanded and explained in other words. The objects are the targets

and backgrounds seen by a sensor through the atmosphere. The sensor consists of the optical

system and the detector. In infrared optics and for electro optical systems there are some

additional components. These are electronics with software, a visual display and human per-

ception. So we have : Targets and backgrounds → Atmosphere → Optical system → Detector

→ Electronics with software → Human perception.


In the case of automatic target recognition, the display and human perception are not

necessary.

In [108] M.J. Riedl has given a formal description of evaluating the performance of a

complete system which he calls the "extended simplified radiometric performance equation".

This is a signal-to-noise consideration. The two elements electronics with software and human

perception are not considered. This has the advantage of being very instructive. In addition

to this one there are other formal descriptions of complete systems:

Signal-to-noise ratio = [target power − background power] × [atmospheric transmission] × [optical throughput] × [detector efficiency]

or

S/N = [W_T ε_T − W_B ε_B] × [τ_A] × [τ_0 d′ / (4 (f#)²)] × [D∗ / √∆f]    (A.9)

The target power or source power is the product of the radiant emittance W and the emissivity ε. We have the same formula for the background power. There must be a difference between the target power and the background power to detect a signal. The emissivity ε describes the emission characteristic of an object. The atmospheric transmittance τ_A is variable

and depends on concentrations of several gases and aerosols. The optical throughput is the

term that concerns the optical system. The transmittance of the optical system is τ0, d’ is

the linear size of the detector element and f# is the f-number, a common measure for the

aperture. The last term relates to the detector. The specific detectivity D∗ depends on the

type of detector : thermal, photon or photoconductive, as well as the material of the detector

and the wavelength band. The noise equivalent electrical bandwidth is abbreviated by ∆f .

A.2.5 Detectors

The detector transforms the incoming infrared radiation into an electrical signal. In order

to get a real time image of the scenery, the response time has to be short. A common parameter

used to characterize a detector is the specific detectivity D∗. It is given by

D∗ = √(A ∆f) / NEP    (A.10)


where A is the area of the photosensitive region of the detector, ∆f is the effective noise

bandwidth and NEP is the noise equivalent power. D∗ is expressed in cm·Hz^(1/2)/W, also

called "Jones". The noise equivalent power is defined as the radiant power that produces a

signal-to-noise ratio of unity at the output of a given optical detector at a given data-signaling

rate or modulation frequency, operating wavelength and effective noise bandwidth.

For a good spatial resolution, the pixel size should be quite small. A pixel size much smaller than the Airy disc, however, is not useful. The diameter of the Airy disc φAiry is given by

φAiry = 2.44 × λ × f#    (A.11)
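Equations (A.10) and (A.11) can be illustrated with a short numerical example; the detector figures below are assumptions chosen only for the sake of the arithmetic.

```python
import numpy as np

d_star = 1.0e10          # specific detectivity, cm Hz^0.5 / W ("Jones"), assumed value
area_cm2 = (25e-4) ** 2  # a 25 um square pixel expressed in cm^2
delta_f = 60.0           # effective noise bandwidth, Hz, assumed value

nep = np.sqrt(area_cm2 * delta_f) / d_star      # Eq. (A.10) rearranged, in W
print(f"NEP = {nep:.2e} W")

wavelength = 10e-6       # 10 um (LWIR)
f_number = 2.0
airy_diameter = 2.44 * wavelength * f_number    # Eq. (A.11), in metres
print(f"Airy disc diameter = {airy_diameter * 1e6:.1f} um")   # ~48.8 um
```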

There are many different types of detectors. They can be distinguished by the following

features :

❶ Operating temperature : cooled or uncooled detectors.

❷ Operating principle : thermal or photon detectors.

❸ Size of the sensitive array : single element, line or matrix detector.

❹ Characteristic spectral sensitivity.

The operating temperature of a cooled detector is in the range from about 70 K to about

200 K. The temperature during operation should be constant. The spectral sensiti-

vity of a detector material and the spectral bandwidth vary with temperature. The spectral

sensitivity is dependent on the wavelength. A cooled detector (see Figure A.3a) usually has

the following elements : a detector window, as a part of the housing ; a cold filter ; a cold

shield ; and the sensitive array, which is either a line or a matrix element and is attached

onto the cold finger. If the cold shield limits all incoming ray bundles and works as a stop

surface it is called the cold stop. The exit pupil of the optical lens system should be at the

cold shield. All rays that strike the sensitive array are inside the cone which is formed from

the sensitive array and the cold shield. There is a vacuum inside the housing. An uncooled

detector (see Figure A.3b) works at an ambient temperature. It has the following elements :

a detector window, as a part of the housing ; a filter ; and the sensitive array.

Photon detection is the process which occurs when an incident photon, being absorbed by

the detecting material, interacts with electrons that are either bound to atoms or are free. The

photon detectors show a selective wavelength dependence of the response per unit incident


Figure A.3 — a) cooled detector and b) uncooled detector (after : [107]).

radiation power. They exhibit both high signal-to-noise performance and a very fast response.

But to achieve this, the photon detectors require cryogenic cooling. Cooling requirements

are the main obstacle to the more widespread use of IR systems based on semiconductor

photodetectors making them bulky, heavy, expensive and inconvenient to use. Depending on

the nature of interaction, the class of photon detectors is further sub-divided into different

types. The most important are : intrinsic detectors (HgCdTe, InGaAs, InSb, PbS, PbSe),

extrinsic detectors (Si :As, Si :Ga), photoemissive (metal silicide Schottky barriers) detectors,

and quantum well detectors (GaAs/AlGaAs QWIPs). Thermal detection is defined as a me-

chanism that changes the measurable properties of a material due to a temperature rise in

that material caused by absorption of electromagnetic radiation. The responsivity of an ideal

thermal detector does not vary with wavelength ; the signal depends upon the radiant power

(or its rate of change) but not upon its spectral content. In pyroelectric detectors, a change in

the internal spontaneous polarization is measured, whereas in the case of bolometers a change

in the electrical resistance is measured. In contrast to photon detectors, thermal detectors

typically operate at room temperature. They are usually characterized by modest sensitivity

and slow response but they are cheap and easy to use. The reaction time of thermal detectors

is in the range of milliseconds whereas the reaction time of photon detectors is in the range

of microseconds.

There are also single element, line and matrix detectors. For single element and line de-

tectors the scenery has to be scanned. In order to obtain a two-dimensional image of the

scenery to be observed one has to employ two scanners for a single element detector and one

scanner for a line detector. Single element detectors are not so common nowadays for ima-


Figure A.4 — different detector arrays with their scan line (after : [107]).

Size in pixels    Pixel pitch [µm]    Material    Spectral response [µm]    Operating temperature [K]
320 × 256         30                  HgCdTe      7.7 − 11.0                70
320 × 256         30                  HgCdTe      3.7 − 4.8                 80 − 120
640 × 512         15                  HgCdTe      3.7 − 4.8                 80 − 120
384 × 256         25                  InSb        3.0 − 5.0                 78
640 × 512         25                  InSb        3.0 − 5.0                 78
320 × 256         30                  HgCdTe      0.8 − 2.5                 200

Table A.4 — examples of cooled detectors (after: [107]).

ging applications. A matrix detector is already two-dimensional, so scanning is not necessary.

Typical cooled and uncooled detectors are matrix arrays. The spatial resolution can be im-

proved by microscanning. This involves, for example, moving the scanner half the pixel pitch

in the horizontal and vertical directions, so that the pixels seem to be doubled horizontally

and vertically. The unit where the thermal image is displayed must have the ability to show

two times more lines and rows than the detector. In Figure A.4, a single element and line

detector with their scan lines are shown. Also depicted is a matrix detector.

Detectors can also be distinguished by their spectral response. The spectral sensitivity

depends on the detector material and the filter in front of the sensitive array. In Tables A.4

and A.5 some detectors are listed with their basic characteristics.

Today, sensors with spatial resolutions up to 2048 × 2048 pixels can be found.


Size in pixels    Pixel pitch [µm]    Material           Spectral response [µm]    Operating temperature [K]
320 × 240         45                  res. amorph. Si    7.0 − 14.0                ambient
384 × 288         35                  res. amorph. Si    8.0 − 14.0                ambient
640 × 480         25                  res. amorph. Si    8.0 − 14.0                ambient
320 × 240         30                  InGaAs             0.9 − 1.7                 ambient
320 × 256         30                  InGaAs             1.1 − 2.5                 ambient
640 × 512         25                  InGaAs             0.9 − 1.7                 ambient

Table A.5 — examples of uncooled detectors (after: [107]).

A.3 Infrared thermography

A.3.1 Overview of nondestructive testing methods

Nondestructive testing and evaluation (NDT&E) involves all inspecting techniques used

to examine a part or material or system without impairing its usefulness. There exist a wide

variety of NDT&E techniques, none of which is able to reveal all the required information. The

appropriate technique depends on the thickness and nature of the material being inspected, as

well as on the type of discontinuity that must be detected. The National Materials Advisory

Board (NMAB) Ad Hoc Committee on Nondestructive Evaluation adopted a classification

system of six major method categories [104] :

❶ Mechanical-optical (visual testing) ;

❷ Penetrating radiation (radiographic testing) ;

❸ Electromagnetic-electronic (Eddy current testing, magnetic particle testing) ;

❹ Sonic-ultrasonic (ultrasonic testing) ;

❺ Thermal and Infrared (infrared thermography) ;

❻ Chemical-analytical (liquid penetrant testing).


Each of these NDT&E techniques has appropriate and adequate treatments to inspect

the objects. Visual testing is the observation of a test object, either directly with the eyes

or indirectly using optical instruments, by an inspector to evaluate the presence of surface

discontinuities and the object’s conformance to specification. Radiographic testing is used to

inspect almost any material for surface and subsurface defects. X-rays can also be used to

locate and measure internal features, confirm the location of hidden parts in an assembly, and

measure thickness of materials. In radiographic testing, access to both sides of the structure

is usually required ; a relatively expensive equipment investment is needed ; and there is a possible radiation hazard for personnel. Eddy current testing is used to detect surface and near-surface

flaws in conductive materials, such as the metals. Eddy current inspection is also used to

sort materials based on electrical conductivity and magnetic permeability, and measures the

thickness of thin sheets of metal and nonconductive coatings such as paint. In Eddy current

testing, only conductive materials can be inspected ; ferromagnetic materials require special

treatment to address magnetic permeability ; depth of penetration is limited ; surface finish

and roughness may interfere ; and reference standards are needed for setup. Magnetic particle testing is used to inspect ferromagnetic materials (those that can be magnetized) for defects that result in a transition in the magnetic permeability of a material. Magnetic particle inspection can detect surface and near-surface defects. In magnetic particle testing, only ferromagnetic materials can be inspected ; relatively smooth surfaces are required ; paint or other nonmagnetic coverings adversely affect sensitivity ; and demagnetization and post-cleaning are usually necessary. Ultrasonic testing is used to locate surface and subsurface defects

in many materials including metals, plastics, and wood. Ultrasonic inspection is also used to

measure the thickness of materials and otherwise characterize properties of material based

on sound velocity and attenuation measurements. In UT, the skill and training required are more extensive than for other techniques ; surface finish and roughness can interfere with inspection ;

thin parts may be difficult to inspect ; and linear defects oriented parallel to the sound beam

can go undetected.

Liquid penetrant testing is used to locate cracks, porosity, and other defects that break

the surface of a material and have enough volume to trap and hold the penetrant material. It

is used to inspect large areas very efficiently and will work on most nonporous materials. In

liquid penetrant testing, only surface breaking defects can be detected ; surface preparation


is critical as contaminants can mask defects ; relatively smooth and nonporous surfaces are

required ; and chemical handling precautions are necessary (toxicity, fire, waste).

Infrared thermography is an imaging technology which is contactless, completely non-destructive and safe. Since temperature is one of the most useful parameters indicating the structural health of an object, infrared thermography is used to detect surface and

sub-surface defects by determining the surface temperature of the object using an IR camera.

There are many other methods of NDT, including optical methods such as holography,

shearography and moiré imaging ; material identification methods such as chemical spot tes-

ting, spark testing and spectroscopy ; strain gaging ; and acoustic methods such as vibration

analysis and tapping.

The objective of an NDT&E technique is to provide information about (at least one of) the following parameters [104]:

❶ discontinuities and separations (cracks, voids, inclusions, delaminations, etc.);
❷ structure or malstructure (crystalline structure, grain size, segregation, misalignment, etc.);
❸ dimensions and metrology (thickness, diameter, gap size, discontinuity size, etc.);
❹ physical and mechanical properties (reflectivity, conductivity, elastic modulus, sonic velocity, etc.);
❺ composition and chemical analysis (alloy identification, impurities, elemental distributions, etc.);
❻ stress and dynamic response (residual stress, crack growth, wear, vibration, etc.);
❼ signature analysis (image content, frequency spectrum, field configuration, etc.);
❽ abnormal sources of heat.

The terms used in this list are further defined in reference [104] with respect to the specific objectives and the specific attributes to be detected, defined and measured.

A.3.2 Infrared thermography in the NDT&E scene

Infrared and thermal testing involve temperature and heat flow measurements to predict or diagnose failure. Since before the 1960s, infrared thermography (IRT) has been successfully


used in many applications as an NDT&E technique to measure surface temperature variations in response to induced energy. IRT is a technique for producing a visible image of the infrared radiation, invisible to our eyes, emitted by objects, which reflects their thermal condition. The induced energy creates a temperature contrast at material discontinuities that can be detected by an infrared (IR) camera [43]. IR cameras detect radiation in the IR range of the electromagnetic spectrum and generate images of IR or thermal emission, called thermograms, allowing very sensitive non-contact temperature measurement [105]. IRT is also used for defect characterization, material property evaluation and inspection, since it is completely non-contact and may be faster than many other techniques in use. Owing to its non-contact character, which allows quick 2D surface mapping, it represents a powerful tool for NDE of materials and structures. IRT is used in a wide range of areas, such as agriculture [109], [110], civil engineering and architecture [111], [112], [113], diagnosis of electrical equipment [114], [115], [116], the automotive industry [117], medicine and biology [118], [119], the manufacturing industry [120], [121], food quality control [122] and the protection of historic heritage [123].

Some of the advantages of IRT are that it is noncontact, rapid, capable of imaging large

areas, applicable to complex geometries, and quantitative. The technique is safe, since only a small amount of heat is applied to the surface of the structure. Thermography has shown good

potential for detection of various defects in both metallic and composite structures. In metallic

structures, corrosion and disbonds are detectable. In composite structures, defects such as

delaminations, disbonds, gross porosity, and fiber volume fraction variations are detectable

using thermography [53].

On the basis of the source of heating, IRT is categorized into two approaches, usually indicated as passive and active thermography. The passive approach tests materials and structures which are naturally at a different (often higher) temperature than ambient. The temperature is monitored without any heating of the sample being induced by the measurement procedure. Features of the temperature distribution, such as differences with respect to a reference level, allow qualitative information about the specimen under examination to be obtained. In many industrial processes, the temperature is an essential parameter for assessing proper operation, and passive thermography aims at such measurement. Important applications of the passive approach are in production [121], [117], [120]; predictive maintenance, e.g., electronics


component inspection and power line inspection [114], [115], [116]; medicine [124], [125], [118]; forest fire detection [126]; road traffic monitoring [127]; civil engineering and architecture [113], [128], [129]; agriculture [110]; and biology [130].

Contrary to the passive approach, the active approach requires an external stimulus to induce relevant temperature differences that are not present otherwise. Knowing the characteristics of this external stimulus (for example the time t0 at which it is applied), active thermography allows both qualitative and quantitative evaluations to be obtained by monitoring the transient temperature change induced at the anomalies, by means of adequate artificial heating techniques such as flashes, direct current (DC) lamps, lasers or other light sources [123].

Depending on the mode of thermal excitation, different approaches to active thermography have been developed. There are basically four techniques widely used in NDT&E, which differ from each other mainly in the way data are acquired and/or processed [54]: pulsed thermography (PT), step heating (SH), lock-in thermography (LT), and vibrothermography (VT).

PT is one of the most popular thermal stimulation methods in IRT. One reason for this

popularity is the quickness of the inspection relying on a thermal stimulation pulse. In PT,

a short heating pulse is applied to the specimen and the cooling data is monitored in the

transient domain [54]. PT is routinely used for the quantitative evaluation of defects in both

metallic and composite specimens.

In SH, the temperature rise is monitored in the transient domain, where a long heating

pulse is applied. The sample is continuously heated, at low power. Variations of surface tem-

perature with time are related to specimen features as in pulse thermography. This technique

is sometimes referred to as long pulse thermography or time resolved infrared radiometry

(TRIR). The time-resolved part means the temperature is monitored as it evolves during and

after the heating process [131], [104]. SH finds various applications such as for coating thick-

ness evaluation (including multilayered coatings, ceramics), integrity of the coating-substrate

bond determination or evaluation of composite structures, characterization of airframe hidden

corrosion among others. More details about this technique can be found in [132], [133].

LT, also called photothermal radiometry, is carried out in the stationary domain: a modulated heat wave is launched onto the sample, travels through the bulk


by diffusion and reflects back from the defect sites [134]. The term lock-in refers to the necessity of monitoring the exact time dependence between the output signal and the reference input signal (i.e. the oscillating, also called modulated, heating). The resulting oscillating temperature field (following the oscillating thermal stimulation) is recorded in the stationary regime, that is, after the transient regime [131]. LT has been extensively used to obtain quantitative information on subsurface defects, corrosion under protective paints, and defect morphologies (circular, square-like, etc.).

In VT, external mechanical vibrations and/or ultrasonic excitation [135] are applied to the structure at a few fixed frequencies (dictated by the availability of commercial equipment), and heat is released by friction precisely at the defect locations. VT is used to find surface and near-surface defects. Also known as thermosonics and sonic IR, VT detects and locates cracks, disbonds, and delaminations using the heat generated by these defects when they are vibrated. The generated heat diffuses away from these thermal indications and radiates from the surfaces. The radiated heat is observed and measured by monitoring the surface of the structure containing the defect indications. There is significant industrial interest in vibrothermography due to its ability to rapidly and accurately detect cracks and other defects in structures; however, it has been hindered by issues such as repeatability, due in part to a lack of understanding of the physics governing the heat generation process in vibrating defects [136].

A.3.3 IRT system

The basic elements of an IRT system for NDT&E are depicted in Figure A.5: (1) a thermal excitation source; (2) a target; (3) an IR camera; (4) a signal and image analysis system (PC); and (5) the results (display).

If a thermal gradient between the scene and the object of interest exists, the target can be inspected using the passive approach. However, when the object or feature of interest is in equilibrium with the rest of the scene, it is possible to create a thermal contrast on the surface using a thermal source (1); this is known as the active approach. Thermal excitation introduces heating noise, i.e. non-uniformities due to imperfect heating. This is a well-known problem in active thermography [65].

Figure A.5 — IRT imaging system.

The target (2) is the specimen or the scene of interest. It can be, for example, a subsurface flaw in a specimen or a gas leak in a complex scene. The infrared radiation measured by the

IR camera results from the contribution of three different sources : the thermal energy emitted

from the object ; the energy reflected from the background ; and the energy transferred through

the material. Additionally, the atmosphere attenuates the oncoming thermal signatures.

A radiometer (IR camera) (3) captures the thermal signatures coming from the target. Here again, every element of the radiometer contributes to further degrading the signal, i.e. optical, electronic and electromagnetic noise. As a result, a data processing step (4) is generally required. Traditional and new IR image processing techniques are reviewed in reference [59]. These techniques are intended to reduce noise at the pre- and post-processing stages, to enhance image contrast and to retrieve useful information from the images. Finally, the resulting processed data must provide qualitative or quantitative outputs allowing the condition of the target (5) to be assessed.

A.3.4 Conditions for using IRT

The most important condition for IRT to provide useful results is that a temperature difference, or thermal contrast ∆T, exists between the feature of interest (e.g. people in a scene or an internal flaw in a specimen) and its surroundings (e.g. the scene or the specimen matrix). A second condition is to have appropriate thermal imaging equipment to produce thermal images or thermograms. In addition, an experienced thermographer is needed to interpret the thermographic results. The thermographer should have a basic


knowledge of radiation principles, the fundamentals of heat transfer, the inspected material and/or process, and the equipment. Personnel qualification and certification standards (levels I, II and III) for infrared and thermal testing exist [104], indicating that human expertise is a critical part of the thermography system. Analysis of raw thermal data is a qualitative inspection method relying on the training and experience of the thermographer.

Image processing techniques help in completing this task. The active approach is used on materials or systems that do not present significant temperature differences with respect to their surroundings. Hence, for the active approach to be effectively applied, a fourth condition must be added: the thermophysical properties of the internal defect (e.g. voids, inclusions, etc.) have to be different from those of the specimen's material. Without this condition, no defect detection is possible.

A.3.5 Advantages and difficulties of IRT

Each NDT technique has its own strengths and weaknesses. In the case of IRT, the strengths can be summarized as follows: it is fast, non-contact and safe for personnel. On the other hand, there are some difficulties specific to IRT:

– There must be a temperature difference for certain surveys;
– Difficulty in obtaining a quick, uniform and highly energetic thermal stimulation over a large surface;
– Effects of thermal losses (convective, radiative), which induce spurious contrasts affecting the reliability of the interpretation;
– Cost of the equipment;
– Capability of detecting only defects resulting in a measurable change of the thermal properties;
– Ability to inspect only a limited thickness of material under the surface;
– Emissivity problems.

More detailed discussions of the advantages and limitations of IRT, both as a general passive and active NDT method and for its derived active techniques such as PT, LT, SH and VT, can be found in [104], [54].


A.3.6 Pulsed thermography

PT is one of the most popular thermal stimulation methods in IRT. One reason for this

popularity is the quickness of the test, relying on a short thermal stimulation pulse, with a duration going from a few milliseconds for high thermal conductivity material inspection (such

as metal parts) to a few seconds for low thermal conductivity specimens (such as plastics,

graphite epoxy components).

In PT, the data acquisition and processing are carried out as depicted in Figure 5 and can be summarized as follows. Basically, it consists of briefly heating the specimen (1) and then recording the temperature decay curve. Qualitatively, the phenomenon is as follows. The temperature of the material changes rapidly after the initial thermal pulse because the thermal front propagates, by diffusion, under the surface. The presence of a discontinuity (2) reduces the diffusion rate, so that when observing the surface temperature, discontinuities appear as areas of different temperature with respect to the surrounding sound areas once the thermal front has reached them. Consequently, deeper discontinuities are observed later and with a reduced contrast. In fact, the observation time t is a function of the square of the depth z, and the contrast C decreases with the cube of the depth:

$$ t \simeq \frac{z^2}{\alpha} \qquad (A.12) $$

and

$$ C \simeq \frac{1}{z^3} \qquad (A.13) $$

where α is the thermal diffusivity of the material.
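To give an order of magnitude, the following minimal Python sketch evaluates relations (A.12) and (A.13) for defects at a few depths. The diffusivity value and the depths are illustrative assumptions (roughly those of a steel part), not parameters taken from the experiments reported in this thesis.

import numpy as np

# Order-of-magnitude estimates of the observation time t ~ z^2 / alpha (Eq. A.12)
# and of the contrast C ~ 1 / z^3 (Eq. A.13) for defects at depth z.
alpha = 4.0e-6                                 # thermal diffusivity [m^2/s] (assumed, ~steel)
depths = np.array([0.5e-3, 1.0e-3, 2.0e-3])    # defect depths z [m]

t_obs = depths**2 / alpha                      # approximate observation times [s]
contrast = 1.0 / depths**3                     # contrast, arbitrary units
contrast /= contrast[0]                        # normalised to the shallowest defect

for z, t, c in zip(depths, t_obs, contrast):
    print(f"z = {z*1e3:.1f} mm -> t ~ {t:.2f} s, relative contrast ~ {c:.3f}")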

Energy sources (3) (e.g. xenon flash tubes) are used to pulse-heat the specimen surface. The duration of the pulse may vary from a few ms (∼ 5–15 ms using flashes) to several seconds (using lamps), depending on the thermophysical properties of both the specimen and the flaw. If the temperature of the part to inspect is already higher than the ambient temperature, it can be of interest to use a cold thermal source such as line or air jets (or water jets, sudden contact with ice or snow, etc.). Obviously, following the Fourier equation of conduction, a thermal front propagates the same way whether hot or cold: what matters is the temperature differential between the thermal source and the specimen.


Figure A.6 — observation techniques for PT : (a) in reflection and (b) in transmission.

The specimen is heated from one side while thermal data is collected. There are two basic

arrangements for observation : (1) in reflection, the thermal source and detector are located

on the same side of the inspected component ; (2) in transmission, the heating source and the

detector are located one on each side of the component to inspect. Generally, the reflection

approach is used for detection of discontinuities located close to the heated surface whereas

the transmission approach allows detection of discontinuities close to the rear surface because

of the spreading effect of the thermal front. In general, resolution is higher in reflection and it

is easier to deploy given that both sides of the specimen do not need to be available. Although

deeper defects can be detected in transmission, depth information is loss since thermal waves

will travel the same distance whether their strength is reduced by the presence of a defect

or not [54]. Hence, depth quantification is not possible in transmission. Defective zones will

appear at higher or lower temperature with respect to non-defective zones on the surface,

depending on the thermal properties of both the material and the defect. The temperature

evolution on the surface is then monitored in transitory regime using an infrared camera 4 .

A thermal map of the surface or thermogram is recorded at regular time intervals forming

a 3D matrix, where the third coordinate corresponds to the time evolution. The thermogram matrix can then be processed (5) using image processing techniques.

Pulsed thermal waves

It is very important to make the link with thermal wave theory, discussed earlier. After the

heating excitation, the temperature rise ∆T at a given point of the sample surface diminishes


with time t, due to the heat diffusion into the sample, according to the expression [54] :

$$ \Delta T = \frac{Q}{e\sqrt{\pi t}} \qquad (A.14) $$

where ∆T is the temperature increase of the surface, Q is the quantity of energy absorbed by the surface, and $e = \sqrt{k \rho C}$ is the thermal effusivity of the material, with k the thermal conductivity, ρ the mass density, C the specific heat, and t the time.
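As an illustration of Eq. (A.14), the short Python sketch below evaluates the surface cooling curve of a defect-free, semi-infinite sample after a pulse. The material constants and the absorbed energy density are illustrative assumptions (roughly steel), not the values used in the experimental part of this work.

import numpy as np

k, rho, C = 50.0, 7800.0, 460.0            # conductivity [W/(m.K)], density [kg/m^3], specific heat [J/(kg.K)]
e = np.sqrt(k * rho * C)                   # thermal effusivity e = sqrt(k*rho*C)
Q = 5.0e3                                  # absorbed energy density [J/m^2] (assumed)

t = np.linspace(0.01, 2.0, 200)            # time after the pulse [s]
delta_T = Q / (e * np.sqrt(np.pi * t))     # surface temperature rise, Eq. (A.14)

print(f"effusivity e = {e:.0f}, Delta_T at t = 0.1 s: {delta_T[np.argmin(np.abs(t - 0.1))]:.2f} K")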

A.3.7 Lock-in thermography

In LT, the data acquisition and processing are carried out as depicted in Figure A.7. The inspected specimen's surface (1) is heated by means of a modulated lamp (2) in the form of periodic thermal waves, in order to generate a field of temperature oscillations, which is simultaneously and continuously recorded by a synchronized IR camera (3) and sent to the processing unit (PC) (4). After all images have been recorded over several modulation cycles, a Fourier analysis is performed at each pixel in order to compute the local amplitude and phase of that pixel. As a result, the amplitude and phase images of the oscillating temperature field are retrieved. The information derived from the set of thermographic images is then presented as a pair of images, where the amplitude image is the superposition of illumination intensity, optical surface absorption, thermal emission coefficient and thermal features, while the phase image of the temperature modulation displays the thermal features.

In particular, the phase image, which is associated with the thermal wave propagation

time, is independent of the optical properties of the sample and, consequently, it is not affected

by any possible local difference in the emissivity or light absorption that, on the contrary,

strongly influences the amplitude images. Consequently, the phase image constitutes a useful

tool for reliable quantitative characterizations.
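The per-pixel Fourier analysis described above can be sketched as follows in Python. The frame rate, modulation frequency and array shapes are assumptions chosen for the synthetic example; this is a generic illustration, not the processing chain actually implemented in this work.

import numpy as np

def lockin_amplitude_phase(stack, fs, f_mod):
    # stack: (n_frames, H, W) thermogram sequence; fs: frame rate [Hz]; f_mod: modulation frequency [Hz]
    n = stack.shape[0]
    spectrum = np.fft.rfft(stack, axis=0)          # FFT along the time axis, pixel by pixel
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_mod))           # frequency bin closest to the modulation frequency
    amplitude = 2.0 * np.abs(spectrum[k]) / n      # amplitude image A(x)
    phase = np.angle(spectrum[k])                  # phase image phi(x)
    return amplitude, phase

# Synthetic check: 200 frames at 25 Hz of a 1 Hz temperature oscillation of amplitude 0.5 K.
fs, f_mod, n = 25.0, 1.0, 200
t = np.arange(n) / fs
stack = 0.5 * np.cos(2.0 * np.pi * f_mod * t)[:, None, None] * np.ones((n, 64, 64))
A, phi = lockin_amplitude_phase(stack, fs, f_mod)
print(A.mean(), phi.mean())                        # ~0.5 and ~0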

Figure A.7 — principle of lock-in thermography using optical generation of thermal waves: the camera monitors the temperature field while the lamp intensity is modulated.

Periodic thermal waves

The one-dimensional solution of Fourier's law for a periodic thermal wave propagating through a semi-infinite homogeneous material may be expressed as:

$$ T(t, z) = T_0 \exp\left(-\frac{z}{\mu}\right) \cos\left(\frac{2\pi z}{\lambda} - \omega t\right) \qquad (A.15) $$

where $T_0$ [°C] is the initial change in temperature produced by the heat source, $\omega$ [rad/s] is the modulation frequency ($\omega = 2\pi f$, with $f$ the frequency in Hz), $\lambda$ [m] is the wavelength, and $\mu$ [m] is the diffusion length, given by:

$$ \mu = \sqrt{\frac{2\alpha}{\omega}} = \sqrt{\frac{\alpha}{\pi f}} \qquad (A.16) $$

where $\alpha = k/(\rho C_p)$ [m²/s] is the thermal diffusivity, with $k$ [W/(m·°C)] the thermal conductivity, $\rho$ [kg/m³] the density, and $C_p$ [J/(kg·°C)] the specific heat.
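The following Python sketch evaluates Eqs. (A.15) and (A.16) for an illustrative material and modulation frequency. The relation λ = 2πµ between the thermal wavelength and the diffusion length is the standard thermal-wave result and is assumed here, as are the numerical values; they are not taken from the experiments of this thesis.

import numpy as np

alpha = 4.0e-6                          # thermal diffusivity [m^2/s] (assumed, ~steel)
f = 1.0                                 # modulation frequency [Hz]
omega = 2.0 * np.pi * f
mu = np.sqrt(alpha / (np.pi * f))       # diffusion length, Eq. (A.16)
lam = 2.0 * np.pi * mu                  # thermal wavelength (standard relation lambda = 2*pi*mu)

T0 = 1.0                                # initial temperature oscillation [K]
z = np.linspace(0.0, 3.0 * mu, 100)     # depth [m]
T = T0 * np.exp(-z / mu) * np.cos(2.0 * np.pi * z / lam - omega * 0.0)   # Eq. (A.15) at t = 0

print(f"mu = {mu*1e3:.2f} mm, oscillation amplitude at z = mu: {T0*np.exp(-1):.3f} K")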

Amplitude and phase from LT

Amplitude and phase delay data are available when the periodic waveform is known [137].

When the intensity of the modulation I is sinusoidal and the resulting surface temperature

modulation S as well, amplitude and phase can be recovered from 4 images per modulation

cycle (see Figure A.8). If these images are symbolized by S1(x) to S4(x) where x denotes the

pixel address, then the amplitude image A(x) and phase image φ(x) are given by [54]:

$$ A(x) = \sqrt{\left(S_1(x) - S_3(x)\right)^2 + \left(S_2(x) - S_4(x)\right)^2} \qquad (A.17) $$

$$ \phi(x) = \arctan\left(\frac{S_1(x) - S_3(x)}{S_2(x) - S_4(x)}\right) \qquad (A.18) $$
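A minimal Python sketch of the four-image retrieval of Eqs. (A.17) and (A.18) is given below. The synthetic images are assumptions used only to check the formulas; note that, as written, the recovered amplitude is proportional to (twice) the true oscillation amplitude.

import numpy as np

def four_point_lockin(S1, S2, S3, S4):
    # S1..S4: four thermograms taken at quarter-period intervals within one modulation cycle
    amplitude = np.sqrt((S1 - S3) ** 2 + (S2 - S4) ** 2)   # Eq. (A.17)
    phase = np.arctan2(S1 - S3, S2 - S4)                   # Eq. (A.18), quadrant-safe arctangent
    return amplitude, phase

# Synthetic check: a sinusoid of amplitude 2 and phase 0.3 rad sampled at quarter periods.
A_true, phi_true = 2.0, 0.3
S = [A_true * np.sin(phi_true + k * np.pi / 2.0) * np.ones((4, 4)) for k in range(4)]
A, phi = four_point_lockin(*S)
print(A[0, 0], phi[0, 0])    # ~4.0 (i.e. 2*A_true) and ~0.3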

A.3.8 The complete thermogram sequence

The complete thermogram sequence is composed of five distinctive elements depicted in

Figure A.9. At time t0, before the heat reaches the specimen's surface, a cold image (1) is captured.

The cold image can be used to eliminate spurious reflections due to emissivity variations


Figure A.8 — principle of computation of phase and amplitude images in lock-in thermography. During each heating cycle of the specimen (light intensity I, top), the infrared camera records four images (S1 to S4). Following this process, for any pixel x within the field of view, it becomes possible to reconstruct the local thermal wave.

and to reduce fixed pattern artifacts. This is attractive for thermal data visualization and

quantification although it is less useful when working with phase delay images. During (and

shortly after) the application of a heat pulse, the acquired thermograms could be temperature

saturated (2), i.e. the reading is out of the calibration scale and no accurate measurement can be

computed. The actual number of saturated thermograms depends on the sampling frequency

and on the thermal properties of the material being inspected : low conductivity materials

stay saturated longer than high conductivity materials, and more thermograms, saturated

or not, will be recorded using high sampling rates. Saturated thermograms give no valuable

information and therefore can be safely discarded from the processing stage.

The first useful thermogram that comes into sight after saturation is known as the early recorded thermogram (ERT) (3). Ideally, defects are not yet visible on the ERT; however, this condition is not always encountered in practice, especially when inspecting shallow defects in high conductivity materials using low sampling frequencies and/or when strong non-uniform heating is present. Normally, this situation does not constitute a problem for defect detection purposes. However, since depth is a function of time, $z \sim t^{1/2}$ [54], special care must be taken in order to perform quantitative analysis. Starting at the ERT at t1, all subsequent thermograms are of interest for defect inspection and constitute the thermogram sequence (4). The last acquired image, at tN (5), corresponds to the last recorded thermogram. From this point on, temperature variations are considered negligible.

A deviation from the $t^{1/2}$ dependency over the useful part of the thermogram sequence provides an indication of the presence of a defective area. This is in fact the basis for defect detection in active thermography.
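The handling of the sequence of Figure A.9 can be sketched as follows in Python: subtraction of the cold image, rejection of the saturated frames, and retention of the thermograms from the ERT onwards. The saturation level, the number of cold frames and the synthetic data are assumptions for illustration only; they do not reproduce the acquisition settings used in this work.

import numpy as np

def useful_sequence(stack, n_cold=1, sat_level=None):
    # stack: (n_frames, H, W) raw thermograms in acquisition order
    cold = stack[:n_cold].mean(axis=0)                   # cold image (element 1 in Fig. A.9)
    seq = stack[n_cold:] - cold                          # remove spurious reflections / fixed pattern
    if sat_level is not None:
        saturated = (stack[n_cold:] >= sat_level).any(axis=(1, 2))
        first_ok = int(np.argmax(~saturated)) if (~saturated).any() else len(seq)
        seq = seq[first_ok:]                             # ERT and all later thermograms
    return seq

stack = np.random.rand(50, 32, 32) * 10.0                # synthetic sequence
stack[1:4] = 100.0                                       # frames saturated right after the pulse
print(useful_sequence(stack, n_cold=1, sat_level=99.0).shape)    # -> (46, 32, 32)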


Figure A.9 — complete thermogram sequence: cold image, saturated thermograms, ERT, thermogram sequence, and last recorded thermogram (source: [65]).

We end this part by summarizing some interesting characteristics of PT and LT in

Table A.6.


Comparative characteristics of pulsed thermography (PT) and lock-in thermography (LT):

Heat source.    PT: heat pulse.    LT: periodic thermal waves.
Regime.         PT: transitory.    LT: permanent.
Advantages.     PT: fast; a single experiment launches a series of thermal waves at several frequencies.
                LT: low power thermal waves; little impact of non-uniform heating, environmental reflections, emissivity variations and non-planar surfaces; depth inversion is straightforward.
Disadvantages.  PT: inversion techniques are complex; affected by non-uniform heating.
                LT: requires a test for every inspected depth; slow, since a permanent regime has to be reached.

Table A.6 — comparative characteristics of PT and LT (source: [65]).


ANNEXE

B Bernoulli-Gaussian model

Let the matrix A be the HSI data, whose I columns are the spectral pixels and whose L rows are the spectral band images. Let $\mu_A$ and $\Gamma_A$ be the data mean vector and covariance matrix, respectively. The SVD of $\Gamma_A$ is given by $\Gamma_A = U D U^T$, where U is an $L \times L$ matrix whose columns are the eigenvectors of $\Gamma_A$ and D is an $L \times L$ diagonal matrix whose diagonal elements hold the eigenvalues of $\Gamma_A$. The whitened data Z are given by:

$$ Z = D^{-1/2} U^T \left(A - \mu_A \mathbf{1}_{1,I}\right) \qquad (B.1) $$

where $\mathbf{1}_{1,I}$ is a $(1 \times I)$ row vector of ones, so that $\mu_A \mathbf{1}_{1,I}$ has the same size as A. The whitening operation is such that $\mu_Z = 0_{L,1}$ and $\Gamma_Z = \mathrm{Id}_L$ (the L-dimensional identity matrix).
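A minimal Python sketch of the whitening step of Eq. (B.1) is given below. The small regularisation of the eigenvalues and the synthetic data are implementation assumptions, not part of the model above.

import numpy as np

def whiten(A, eps=1e-10):
    # A: (L x I) matrix whose columns are the spectral pixels
    L, I = A.shape
    mu = A.mean(axis=1, keepdims=True)                 # data mean vector mu_A
    Ac = A - mu                                        # centred data
    Gamma = Ac @ Ac.T / I                              # covariance matrix Gamma_A
    d, U = np.linalg.eigh(Gamma)                       # Gamma_A = U D U^T
    Z = np.diag(1.0 / np.sqrt(d + eps)) @ U.T @ Ac     # Eq. (B.1)
    return Z

A = np.random.rand(10, 500)                            # synthetic cube: 10 bands, 500 pixels
Z = whiten(A)
print(np.allclose(Z @ Z.T / A.shape[1], np.eye(10), atol=1e-6))   # covariance ~ identity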

Assume that the value of an observed spectral pixel x is one realization of a random vector $x_i$ (the subscript i indicates the associated random event), which can be observed under $H_0$ as a background pixel and under $H_1$ as a target pixel. The RX anomaly detector is based on the following hypotheses:

$$
\begin{aligned}
H_0 &: \; x \sim \mathcal{N}(\mu_b, \Gamma_b), \quad \text{i.e. } x = b & \text{(target absent)}\\
H_1 &: \; x \sim \mathcal{N}(\mu_t, \Gamma_t), \quad \text{i.e. } x = b + (s - \mu_b) & \text{(target present)}
\end{aligned}
\qquad (B.2)
$$

where $\mu_b$ is the average spectrum of the background, s is the spectrum of the target, and $\Gamma_b$ is the covariance matrix, assumed common to both classes (target and background). $b \sim \mathcal{N}(\mu_b, \Gamma_b)$ is the background clutter random vector in the direct space coordinates. Let p represent the probability that a target s is present in the considered pixel x. If $N_T$ is the number of target pixels, then p can be estimated by the frequency $p = N_T / I$.
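For reference, the classical RX statistic associated with these hypotheses is the Mahalanobis distance of each pixel to the background statistics. The short Python sketch below is a generic illustration on synthetic data; it is not the detector implementation or the data of this thesis.

import numpy as np

def rx_scores(A):
    # A: (L x I) data matrix, columns are spectral pixels; returns one score per pixel
    mu = A.mean(axis=1, keepdims=True)
    Ac = A - mu
    Gamma_inv = np.linalg.inv(Ac @ Ac.T / A.shape[1])        # inverse background covariance
    return np.einsum('li,lk,ki->i', Ac, Gamma_inv, Ac)       # (x - mu)^T Gamma^-1 (x - mu)

A = np.random.randn(8, 1000)                                 # synthetic background
A[:, 10] += 5.0                                              # one anomalous pixel
print(int(np.argmax(rx_scores(A))))                          # -> 10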

Here the two hypotheses are combined into one equation by introducing a new random


variable, β, which is the indicator of the anomaly. In the whitened space:

$$
\begin{aligned}
H_0 &: \; z = b\\
H_1 &: \; z = b + t
\end{aligned}
\qquad (B.3)
$$

in which $b \sim \mathcal{N}(0_{L,1}, \mathrm{Id}_L)$.

These hypotheses suggest that z can be modeled as follows:

$$ z = b + \beta t \qquad (B.4) $$

where β follows a Bernoulli distribution of parameter p:

$$ \beta \sim \mathcal{B}(p) \qquad (B.5) $$

We perform the PCA; the principal components are obtained by projection onto unit-norm eigenvectors. Let w be one eigenvector, and $z_w$ the projection of the whitened data onto w:

$$ z_w = w^T Z \qquad (B.6) $$

where the elements of the vector $z_w$ are sums of the background and target contributions and can be modeled as:

$$ z_w = b_w + \beta t_w \qquad (B.7) $$

where

$$ b_w = w^T b \sim \mathcal{N}(0, 1), \qquad t_w = w^T t \qquad (B.8) $$

It can easily be shown that the kurtosis of the projection of the whitened data is given by:

$$ \mathrm{kurt}(z_w) = 3 + t_w^4 \, p \left(1 - 7p + 12p^2 - 6p^3\right) \qquad (B.9) $$

Then, if the considered direction w is the closest one to the anomaly direction, $t_w = w^T t$ is maximum, and the kurtosis is also maximum among the considered components.
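The following Python sketch illustrates this argument on synthetic whitened data following the model z = b + βt: the projection onto the direction aligned with the anomaly exhibits the largest empirical kurtosis. The data dimensions, the anomaly amplitude and the probability p are illustrative assumptions.

import numpy as np

def kurtosis(x):
    xc = x - x.mean()
    return np.mean(xc**4) / np.mean(xc**2)**2            # empirical kurtosis

rng = np.random.default_rng(0)
L, I, p = 5, 5000, 0.01
b = rng.standard_normal((L, I))                          # whitened background, b ~ N(0, Id_L)
t = np.zeros(L)
t[0] = 6.0                                               # anomaly direction and amplitude
beta = rng.random(I) < p                                 # Bernoulli indicator, beta ~ B(p)
Z = b + np.outer(t, beta)                                # z = b + beta * t  (Eq. B.4)

kurts = [kurtosis(w @ Z) for w in np.eye(L)]             # kurtosis of each candidate projection
print(int(np.argmax(kurts)))                             # -> 0, the anomaly direction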


ANNEXE

C Additional results

obtained with SVD

S1 - Pulse (1 s), SVD: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0.08     0        0         0
30 %              0.11     0        0         0
40 %              0.17     0        0         0
60 %              0.52     0.01     0         0
80 %              0.75     0.17     0         0
90 %              1.38     0.76     0.06      0.01

Table C.1 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S1 - Pulse (1 s) with SVD.
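For reference, the false alarm rates reported in these tables can be computed as sketched below in Python: the threshold is chosen so that a fixed fraction of the true defect pixels is detected, and the FAR is the fraction of sound pixels exceeding that threshold. The synthetic score map, the ground-truth mask and this exact thresholding protocol are assumptions for illustration and may differ from the evaluation procedure used in this thesis.

import numpy as np

def far_at_detection_rate(scores, truth_mask, rate):
    # scores: 2-D detection map; truth_mask: boolean defect mask; rate: e.g. 0.6 for a 60 % detection rate
    thr = np.quantile(scores[truth_mask], 1.0 - rate)    # threshold detecting `rate` of the defect pixels
    detected = scores >= thr
    return 100.0 * detected[~truth_mask].mean()          # false alarms among sound pixels, in %

rng = np.random.default_rng(1)
scores = rng.random((100, 100))                          # synthetic detection map
truth = np.zeros((100, 100), dtype=bool)
truth[40:60, 40:60] = True                               # synthetic defect region
scores[truth] += 0.2                                     # defect pixels score higher on average
print(f"FAR at a 60 % detection rate: {far_at_detection_rate(scores, truth, 0.6):.2f} %")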

S2 - Lockin (1 Hz), SVD, without the saturated pixels (WSPs): FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              9.95     1.65     1.45      2.41
30 %              15.61    5.12     4.86      5.47
40 %              17.23    9.03     9.61      7.80
60 %              21.21    17.23    19.76     13.90
80 %              28.03    27.82    35.61     30.90
90 %              48.87    45.47    50.03     54.57

Table C.2 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with SVD and without taking into account the saturated pixels (WSPs).


S2 - Pulse (1 s), SVD: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              12.52    3.26     3.10      3.33
30 %              20.61    4.53     4.62      4.46
40 %              22.15    7.30     6.58      7.03
60 %              27.75    33.87    36.51     30.98
80 %              37.85    65.22    72.07     71.58
90 %              44.13    76.91    86.94     82.72

Table C.3 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S2 - Pulse (1 s) with SVD.


S3 - Lockin (1 Hz), SVD: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0        0        0         0.02
30 %              0        0.05     0.04      0.06
40 %              0.06     0.27     0.06      0.19
60 %              0.56     2.49     1.97      2.76
80 %              1.84     9.75     15.22     14.42
90 %              3.28     12.71    20.58     26.74

Table C.4 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S3 - Lockin (1 Hz) with SVD.


S3 - Pulse (5 s), SVD: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0.04     0.11     0.07      0.08
30 %              0.07     0.56     0.10      0.20
40 %              0.18     0.94     0.38      0.39
60 %              3.58     4.05     4.67      5.16
80 %              5.25     6.89     7.73      9.29
90 %              6.77     7.29     10.19     12.33

Table C.5 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S3 - Pulse (5 s) with SVD.


ANNEXE

D Additional results

obtained with MNF

S1 - Lockin (1 Hz), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0        0        0         0
30 %              0        0        0         0
40 %              0        0        0         0
60 %              0.01     0.02     0         0
80 %              0.09     0.04     0.01      0
90 %              0.18     0.17     0.18      0.15

Table D.1 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S1 - Lockin (1 Hz) with MNF.

S1 - Pulse (1 s), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0.05     0.02     0         0
30 %              0.07     0.04     0         0
40 %              0.09     0.07     0         0
60 %              0.12     0.27     0         0
80 %              0.44     0.38     0.01      0.01
90 %              0.88     2.11     0.06      0.01

Table D.2 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S1 - Pulse (1 s) with MNF.


S2 - Lockin (1 Hz), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              13.19    11.18    10.71     8.99
30 %              20.61    21.04    22.92     22.08
40 %              27.69    28.78    31.17     33.31
60 %              46.38    49.07    51.64     54.48
80 %              68.33    75.98    73.13     75.14
90 %              81.09    86.56    87.04     85.04

Table D.3 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S2 - Lockin (1 Hz) with MNF.


S2 - Pulse (1 s), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              9.08     2.66     2.66      2.53
30 %              15.86    6.71     7.09      8.07
40 %              29.24    17.75    18.12     22.88
60 %              45.86    40.09    39.56     38.95
80 %              59.41    60.67    62.38     60.61
90 %              70.97    73.40    79.40     76.81

Table D.4 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S2 - Pulse (1 s) with MNF.


S3 - Lockin (1 Hz), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              0.91     0.00     0.04      0.08
30 %              1.08     0.06     0.07      0.17
40 %              1.55     0.85     0.33      0.45
60 %              2.28     2.19     1.17      1.17
80 %              3.72     8.08     9.89      9.37
90 %              4.01     12.28    12.82     13.28

Table D.5 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S3 - Lockin (1 Hz) with MNF.


S3 - Pulse (5 s), MNF: FAR (%) for each fixed detection rate (mean image, mask and detection-mask images not reproduced)

Detection rate    K = 2    K = 6    K = 10    K = 14
20 %              1.42     0.84     1.28      1.46
30 %              1.59     1.34     1.65      1.69
40 %              1.73     1.85     2.60      3.00
60 %              2.00     3.38     3.24      4.94
80 %              2.44     4.07     4.03      6.21
90 %              3.37     5.74     7.59      10.22

Table D.6 — detection masks and their corresponding FARs for different fixed detection rates after reducing the dimensionality of the cube S3 - Pulse (5 s) with MNF.


ANNEXE

E Additional results

obtained with ICA

[Image table: the first five independent components (IC = 1 to IC = 5) of the cubes S1 - Lockin (1 Hz), S2 - Lockin (1 Hz) and S3 - Lockin (1 Hz), followed by the corresponding detection maps from RX and from RARX; images not reproduced.]

Table E.1 — first 5 ICs obtained from ICA and the detection results from RX and RARX of the cubes S1 - Lockin (1 Hz), S2 - Lockin (1 Hz) and S3 - Lockin (1 Hz).

[Image table: the first five independent components (IC = 1 to IC = 5) of the cubes S1 - Pulse (1 s), S2 - Pulse (1 s) and S3 - Pulse (5 s), followed by the corresponding detection maps from RX and from RARX; images not reproduced.]

Table E.2 — first 5 ICs obtained from ICA and the detection results from RX and RARX of the cubes S1 - Pulse (1 s), S2 - Pulse (1 s) and S3 - Pulse (5 s).


ANNEXE

F Principal components

obtained with SVD

[Image table: principal components PC(1) to PC(4) of S1 - Pulse (1 s), SVD; images not reproduced.]

Table F.1 — first principal components selected from S1 - Pulse (1 s) after applying the proposed algorithm to estimate the VD based on the energy distribution calculated with SVD.

[Image table: principal components PC(1) to PC(3) of S2 - Pulse (1 s), SVD; images not reproduced.]

Table F.2 — first principal components selected from S2 - Pulse (1 s) after applying the proposed algorithm to estimate the VD based on the energy distribution calculated with SVD.


[Image table: principal components PC(1) to PC(6) of S3 - Pulse (5 s), SVD; images not reproduced.]

Table F.3 — first principal components selected from S3 - Pulse (5 s) after applying the proposed algorithm to estimate the VD based on the energy distribution calculated with SVD.


ANNEXE

G Principal components

obtained with MNF

[Image table: principal components PC(1) to PC(14) of S1 - Lockin (1 Hz), MNF; images not reproduced.]

Table G.1 — first principal components selected from S1 - Lockin (1 Hz) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


[Image table: principal components PC(1) to PC(7) of S1 - Pulse (1 s), MNF; images not reproduced.]

Table G.2 — first principal components selected from S1 - Pulse (1 s) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


[Image table: principal components PC(1) to PC(38) of S2 - Lockin (1 Hz), MNF; images not reproduced.]

Table G.3 — first principal components selected from S2 - Lockin (1 Hz) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


[Image table: principal components PC(1) to PC(24) of S2 - Pulse (1 s), MNF; images not reproduced.]

Table G.4 — first principal components selected from S2 - Pulse (1 s) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


[Image table: principal components PC(1) to PC(19) of S3 - Lockin (1 Hz), MNF; images not reproduced.]

Table G.5 — first principal components selected from S3 - Lockin (1 Hz) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


[Image table: principal components PC(1) to PC(22) of S3 - Pulse (5 s), MNF; images not reproduced.]

Table G.6 — first principal components selected from S3 - Pulse (5 s) after applying the proposed algorithm to estimate the VD based on the SNR distribution calculated with MNF.


References

[1] J. Benediktsson, J. Palmason and J. Sveinsson, Classification of hyperspectral

data from urban areas based on extended morphological profiles, Geoscience and Remote

Sensing, IEEE Transactions on, vol. 43, no. 3, pp. 480–491, 2005.

[2] J. Bioucas-Dias and J. Nascimento, Hyperspectral subspace identification, Geos-

cience and Remote Sensing, IEEE Transactions on, vol. 46, no. 8, pp. 2435–2445, 2008.

[3] C.-I. Chang, Hyperspectral Imaging : Techniques for Spectral Detection and Classifi-

cation, Kluwer Academic/Plenum Publishing Co., New York, 2003.

[4] R. N. Clark, G. A. Swayze, K. E. Livo, R. F. Kokaly, S. J. Sutley, J. B. Dal-

ton, R. R. McDougal and C. A. Gent, Imaging spectroscopy : Earth and planetary

remote sensing with the usgs tetracorder and expert systems , Journal of Geophysical

Research : Planets, vol. 108, no. E12, 2003.

[5] J. Chanussot, C. Collet and K. Chehdi, Multivariate Image Processing , Wiley-

ISTE, British Library Cataloguing-in-Publication Data, 2009.

[6] S. Schweizer and J. Moura, Efficient detection in hyperspectral imagery , Image

Processing, IEEE Transactions on, vol. 10, no. 4, pp. 584–597, 2001.

[7] R. Bellman, Dynamic programming and lagrange multipliers, Proceedings of the Na-

tional Academy of Sciences, vol. 42, no. 10, pp. 767–769, 1956.

[8] D. Donoho, High-dimensional data analysis : the curse and blessing of dimensionality ,

Math Challenges of the 21st Century. American Mathematical Society Edition, 2000.


[9] I. Atkinson, F. Kamalabadi, S. Mohan and D. Jones, Wavelet-based 2-d multi-

channel signal estimation, in Image Processing, ICIP 2003 Proceedings. International

Conference on, vol. 2, (pp. II–141–4 vol.3), 2003.

[10] C.-I. Chang and Q. Du, Estimation of number of spectrally distinct signal sources in

hyperspectral imagery , Geoscience and Remote Sensing, IEEE Transactions on, vol. 42,

no. 3, pp. 608–619, 2004.

[11] A. Green, M. Berman, P. Switzer and M. Craig, A transformation for orde-

ring multispectral data in terms of image quality with implications for noise removal ,

Geoscience and Remote Sensing, IEEE Transactions on, vol. 26, no. 1, pp. 65–74, 1988.

[12] L. Parra, C. Spence, P. Sajda, A. Ziehe and K.-R. Müller, Unmixing hyperspectral

data, Advances in Neural Information Processing Systems, vol. 12, pp. 942–948, 2000.

[13] H. Othman and S.-E. Qian, Noise reduction of hyperspectral imagery using hybrid

spatial-spectral derivative-domain wavelet shrinkage, Geoscience and Remote Sensing,

IEEE Transactions on, vol. 44, no. 2, pp. 397–408, 2006.

[14] R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, 2nd Ed , JOHN

WILEY & SONS, INC., New York, 2001.

[15] J. Rissanen, Modeling by shortest data description, Automatica, vol. 14, no. 5, pp. 465–

471, 1978.

[16] G. Schwarz, Estimating the dimension of a model , Ann. Statist., vol. 6, no. 2, pp. 461–

464, 1978.

[17] H. Akaike, A new look at the statistical model identification, Automatic Control, IEEE

Transactions on, vol. 19, no. 6, pp. 716–723, 1974.

[18] M. Wax and T. Kailath, Detection of signals by information theoretic criteria, Acous-

tics, Speech and Signal Processing, IEEE Transactions on, vol. 33, no. 2, pp. 387–392,

1985.

[19] D. Manolakis, Detection algorithms for hyperspectral imaging applications : a signal

processing perspective, in Advances in Techniques for Analysis of Remotely Sensed Data,

2003 IEEE Workshop on, (pp. 378–384), 2003.


[20] F. Robey, D. Fuhrmann, E. Kelly and R. Nitzberg, A CFAR adaptive matched

filter detector , Aerospace and Electronic Systems, IEEE Transactions on, vol. 28, no. 1,

pp. 208–216, 1992.

[21] S. Kraut and L. Scharf, The CFAR adaptive subspace detector is a scale-invariant

GLRT , Signal Processing, IEEE Transactions on, vol. 47, no. 9, pp. 2538–2541, 1999.

[22] S. Kraut, L. Scharf and L. McWhorter, Adaptive subspace detectors , Signal Pro-

cessing, IEEE Transactions on, vol. 49, no. 1, pp. 1–16, 2001.

[23] E. Conte, M. Lops and G. Ricci, Asymptotically optimum radar detection in

compound-gaussian clutter , Aerospace and Electronic Systems, IEEE Transactions on,

vol. 31, no. 2, pp. 617–625, 1995.

[24] L. Scharf and L. McWhorter, Adaptive matched subspace detectors and adaptive

coherence estimators, in Signals, Systems and Computers, 1996. Conference Record of

the Thirtieth Asilomar Conference on, vol. 2, (pp. 1114–1117), 1996.

[25] Y. Du, C.-I. Chang, H. Ren, C.-C. Chang, J. O. Jensen and F. M. D’Amico, New

hyperspectral discrimination measure for spectral characterization, Optical Engineering,

vol. 43, no. 8, pp. 1777–1786, 2004.

[26] A. Huck and M. Guillaume, Concordance measure employed for spectral object detec-

tion in hyperspectral and multispectral images, Astronomical Data Analysis conference

IV, Sep. 2006.

[27] I. S. Reed and X. Yu, Adaptive multiple-band CFAR detection of an optical pattern

with unknown spectral distribution, Acoustics, Speech and Signal Processing, IEEE Tran-

sactions on, vol. 38, no. 10, pp. 1760–1770, 1990.

[28] X. Yu, L. Hoff, I. S. Reed, A. M. Chen and L. Stotts, Automatic target detection

and recognition in multiband imagery : a unified ml detection and estimation approach,

Image Processing, IEEE Transactions on, vol. 6, no. 1, pp. 143–156, 1997.

[29] S. Schweizer and J. Moura, Hyperspectral imagery : clutter adaptation in anomaly

detection, Information Theory, IEEE Transactions on, vol. 46, no. 5, pp. 1855–1871,

2000.


[30] D. W. J. Stein, S. Beaven, L. Hoff, E. Winter, A. Schaum and A. Stocker,

Anomaly detection from hyperspectral imagery , Signal Processing Magazine, IEEE,

vol. 19, no. 1, pp. 58–69, 2002.

[31] D. Manolakis and G. Shaw, Detection algorithms for hyperspectral imaging applica-

tions, Signal Processing Magazine, IEEE, vol. 19, no. 1, pp. 29–43, 2002.

[32] D. Manolakis, D. Marden and G. A. Shaw, Hyperspectral image processing for au-

tomatic target detection applications, Lincoln Laboratory Journal, vol. 14, no. 1, pp. 79–

116, 2003.

[33] J.-M. Gaucel, Méthodes tridimensionnelles pour la compression, restauration et dé-

tection en imagerie hyperspectrale, Ph.D. Thesis, Faculté des Sciences et Techniques,

Université Paul Cézanne Aix-Marseille (France), Oct. 2007.

[34] Z. Xue-Wu, D. Yan-Qiong, L. Yan-Yun, S. Ai-Ye and L. Rui-Yu, A vision inspec-

tion system for the surface defects of strongly reflected metal based on multi-class svm,

Expert Syst. Appl., vol. 38, no. 5, pp. 5930–5939, 2011.

[35] A. Hornberg, Handbook of Machine Vision, Weinheim, Germany : VILEY-VCH, 2006.

[36] D. Slater and G. Healey, Material classification for 3d objects in aerial hyperspectral

images, in Computer Vision and Pattern Recognition, 1999. IEEE Computer Society

Conference on., vol. 2, (pp. –273 Vol. 2), 1999.

[37] W. Singer, M. Totzeck and H. Gross, Handbook of Optical Systems, Volume 2,

Physical Image Formation, VILEY-VCH, Weinheim, Germany, 2005.

[38] D. Manolakis, Hyperspectral signal models and implications to material detection al-

gorithms, in Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP ’04).

IEEE International Conference on, vol. 3, (pp. iii–117–20 vol.3), 2004.

[39] Allied Vision Technologies - Technical Manual V2.4.0 , Allied Vision Technologies, Stad-

troda, Germany, 2008.

[40] T. S. Newman and A. K. Jain, A survey of automated visual inspection, Computer

Vision and Image Understanding, vol. 61, no. 2, pp. 231–262, 1995.


[41] C. J. Hellier, Handbook of Nondestructive Evaluation, McGRAW-HILL, 2001.

[42] Non-destructive testing for plant life assessment , IAEA, INTERNATIONAL ATOMIC

ENERGY AGENCY, VIENNA, IAEA-TCS-26, 2005.

[43] J. Liu, W. Yang and J. Dai, Research on thermal wave processing of lock-in thermo-

graphy based on analyzing image sequences for NDT , Infrared Physics & Technology,

vol. 53, no. 5, pp. 348–357, 2010.

[44] B. Lahiri, M. Divya, S. Bagavathiappan, S. Thomas and J. Philip, Detection

of pathogenic gram negative bacteria using infrared thermography , Infrared Physics &

Technology, vol. 55, no. 6, pp. 485–490, 2012.

[45] J. P. de Brito Filho and J. R. Henriquez, Infrared thermography applied for high-

level current density identification over planar microwave circuit sectors, Infrared Phy-

sics & Technology, vol. 53, no. 2, pp. 84–88, 2010.

[46] N. Ludwig, R. Cabrini, F. Faoro, M. Gargano, S. Gomarasca, M. Iriti,

V. Picchi and C. Soave, Reduction of evaporative flux in bean leaves due to chitosan

treatment assessed by infrared thermography , Infrared Physics & Technology, vol. 53,

no. 1, pp. 65 – 70, 2010.

[47] A. A. Salaimeh, J. J. Campion, B. Y. Gharaibeh, M. E. Evans and K. Saito,

Real-time quantification of viable bacteria in liquid medium using infrared thermography ,

Infrared Physics & Technology, vol. 54, no. 6, pp. 517 – 524, 2011.

[48] J. H. Tan, E. Ng and U. R. Acharya, Evaluation of topographical variation in ocular

surface temperature by functional infrared thermography , Infrared Physics & Technology,

vol. 54, no. 6, pp. 469 – 477, 2011.

[49] F. Sham, Y. Huang, L. Liu, Y. Chen, Y. Hung and T. Lo, Computerized tomography

technique for reconstruction of obstructed temperature field in infrared thermography ,

Infrared Physics & Technology, vol. 53, no. 1, pp. 1 – 9, 2010.

[50] A. A. Salaimeh, J. J. Campion, B. Y. Gharaibeh, M. E. Evans and K. Saito,

Real-time quantification of staphylococcus aureus in liquid medium using infrared ther-

mography , Infrared Physics & Technology, vol. 55, no. 1, pp. 170 – 172, 2012.


[51] M. Picazo-Rodenas, R. Royo, J. Antonino-Daviu and J. Roger-Folch, Use of

infrared thermography for computation of heating curves and preliminary failure detec-

tion in induction motors, Electrical Machines (ICEM), 2012 XXth International Confe-

rence on, (pp. 525–531), 2012.

[52] Z. Hu, W. Du and X. He, Application of infrared thermography technology for irri-

gation scheduling of winter wheat , Multimedia Technology (ICMT), 2011 International

Conference on, (pp. 494–496), 2011.

[53] J. N. Zalameda, N. Rajic and W. P. Winfree, A comparison of image processing

algorithms for thermal nondestructive evaluation, SPIE Proceedings, vol. 5073, pp. 374–

385, 2003.

[54] C. Hellier, Theory and Practice of Infrared Technology for Nondestructive Testing, John Wiley-Interscience, 2001.

[55] X. Maldague and S. Marinetti, Pulse phase infrared thermography , Journal of Ap-

plied Physics, vol. 79, no. 5, pp. 2694–2698, 1996.

[56] C. Ibarra-Castanedo, J.-M. Piau, S. Guilbert, N. P. Avdelidis, M. Genest,

A. Bendada and X. P. V. Maldague, Comparative study of active thermography

techniques for the nondestructive evaluation of honeycomb structures, Research in Non-

destructive Evaluation, vol. 20, no. 1, pp. 1–31, 2009.

[57] M. Omar, R. Parvataneni and Y. Zhou, A combined approach of self-referencing and

principle component thermography for transient, steady, and selective heating scenarios ,

Infrared Physics & Technology, vol. 53, no. 5, pp. 358 – 362, 2010.

[58] N. Rajic, Principal component thermography for flaw contrast enhancement and flaw

depth characterisation in composite structures , Composite Structures, vol. 58, no. 4,

pp. 521 – 528, 2002.

[59] C. Ibarra-Castanedo, D. González, M. Klein, M. Pilla, S. Vallerand and

X. Maldague, Infrared image processing and data analysis, Infrared Physics & Tech-

nology, vol. 46, no. 1-2, pp. 75 – 83, 2004.


[60] C. Ibarra-Castanedo, A. Bendada and X. Maldague, Thermographic image processing for NDT, IV Conferencia Panamericana de END, vol. 79, 2007.

[61] M. Pilla, M. Klein, X. Maldague and A. Salerno, New absolute contrast for pulsed thermography, 7th Conference on Quantitative InfraRed Thermography (QIRT), 2002.

[62] F. J. Madruga, C. Ibarra-Castanedo, O. M. Conde, J. M. Lopez-Higuera

and X. Maldague, Infrared thermography processing based on higher-order statistics,

NDT & E International, vol. 43, no. 8, pp. 661 – 666, 2010.

[63] M. Benmoussat, K. Spinnler and M. Guillaume, Surface defect detection of metal

parts : Use of multimodal illuminations and hyperspectral imaging algorithms, in Imaging

Systems and Techniques (IST), 2012 IEEE International Conference on, (pp. 228–233),

2012.

[64] S. Marinetti, Y. A. Plotnikov, W. P. Winfree and A. Braggiotti, Pulse phase

thermography for defect detection and visualization, Proc. SPIE, Thermosense XXI,

vol. 3586, pp. 230–238, 1999.

[65] C. I. Castanedo, Quantitative subsurface defect evaluation by pulsed phase thermo-

graphy : depth retrieval with the phase, Ph.D. Thesis, Faculté des sciences et de génie,

université LAVAL Québec (Canada), Oct. 2005.

[66] G. Hughes, On the mean accuracy of statistical pattern recognizers, Information

Theory, IEEE Transactions on, vol. 14, no. 1, pp. 55–63, 1968.

[67] A. Huck and M. Guillaume, Asymptotically CFAR-Unsupervised Target Detec-

tion and Discrimination in Hyperspectral Images With Anomalous-Component Pursuit ,

Geoscience and Remote Sensing, IEEE Transactions on, vol. 48, no. 11, pp. 3980–3991,

2010.

[68] M. Benmoussat, M. Guillaume, Y. Caulier and K. Spinnler, Automatic me-

tal parts inspection : Use of thermographic images and anomaly detection algorithms,

Infrared Physics & Technology, vol. 61, pp. 68 – 80, 2013.

[69] T. Qian, R. Xu, C. Kwan and T. Griffin, Geoscience and remote sensing , IEEE

Transactions on, vol. 26, no. 1, pp. 65–74, 1988.


[70] P. Comon, Independent component analysis, a new concept ? , Signal Processing, vol. 36,

no. 3, pp. 287 – 314, 1994.

[71] C. Jutten and J. Herault, Blind separation of sources, part I : An adaptive algorithm

based on neuromimetic architecture, Signal Processing, vol. 24, no. 1, pp. 1 – 10, 1991.

[72] A. Hyvarinen and E. Oja, Independent component analysis : algorithms and applica-

tions, Neural Networks, vol. 13, no. 4-5, pp. 411 – 430, 2000.

[73] A. Back and T. Trappenberg, Input variable selection using independent component

analysis, in Neural Networks, 1999. IJCNN 99. International Joint Conference on,

vol. 2, (pp. 989–992 vol.2), 1999.

[74] H. Du, H. Qi, X. Wang, R. Ramanath and W. Snyder, Band selection using

independent component analysis for hyperspectral image processing , in Applied Imagery

Pattern Recognition Workshop, 2003. Proceedings. 32nd , (pp. 93–98), 2003.

[75] G. C. Loney and T. M. Ozsoy, NC machining of free form surfaces , Computer-Aided

Design, vol. 19, no. 2, pp. 85 – 90, 1987.

[76] Y. Li and P. Gu, Free-form surface inspection techniques state of the art review ,

Computer-Aided Design, vol. 36, no. 13, pp. 1395 – 1417, 2004.

[77] C. H. Bradley and G. W. Vickers, Free-form surface reconstruction for machine

vision rapid prototyping , Optical Engineering, vol. 32, no. 9, pp. 2191–2200, 1993.

[78] F. Chen, G. M. Brown and M. Song, Overview of three-dimensional shape measu-

rement using optical methods, Optical Engineering, vol. 39, no. 1, pp. 10–22, 2000.

[79] J. Geng, Structured-light 3D surface imaging : a tutorial , Adv. Opt. Photon., vol. 3,

no. 2, pp. 128–160, Jun 2011.

[80] X. Chen, J. Xi, Y. Jin and J. Sun, Accurate calibration for a camera-projector mea-

surement system based on structured light projection, Optics and Lasers in Engineering,

vol. 47, no. 3-4, pp. 310 – 319, 2009.

[81] M. I. A. Lourakis and A. A. Argyros, SBA : A software package for generic sparse

bundle adjustment , ACM Trans. Math. Softw., vol. 36, no. 1, pp. 2 :1–2 :30, March 2009.


[82] H. Opower, Multiple view geometry in computer vision, Optics and Lasers in Enginee-

ring, vol. 37, no. 1, pp. 85 – 86, 2002.

[83] F. Cuevas, M. Servin, O. Stavroudis and R. Rodriguez-Vera, Multi-layer neural

network applied to phase and depth recovery from fringe patterns, Optics Communica-

tions, vol. 181, no. 4-6, pp. 239 – 259, 2000.

[84] H. Liu, W.-H. Su, K. Reichard and S. Yin, Calibration-based phase-shifting pro-

jected fringe profilometry for accurate absolute 3d surface profile measurement , Optics

Communications, vol. 216, no. 1-3, pp. 65 – 80, 2003.

[85] S. Zhang and P. S. Huang, Novel method for structured light system calibration,

Optical Engineering, vol. 45, no. 8, pp. 083601–083601–8, 2006.

[86] J. Vargas, M. José Terrón-Lopez and J. Antonio Quiroga, Flexible calibra-

tion procedure for fringe projection profilometry , Optical Engineering, vol. 46, no. 2,

pp. 023601–023601–6, 2007.

[87] Z. Li, Y. Shi, C. Wang and Y. Wang, Accurate calibration method for a structured

light system, Optical Engineering, vol. 47, no. 5, pp. 053604–053604–9, 2008.

[88] J. Villa and et al., Transformation of phase to (x, y, z)−coordinates for the calibration

of a fringe projection profilometer , Optics and Lasers in Engineering, vol. 50, no. 2,

pp. 256–261, 2012.

[89] W. Li, S. Fang and S. Duan, 3D shape measurement based on structured light pro-

jection applying polynomial interpolation technique, Optik - International Journal for

Light and Electron Optics, vol. 124, no. 1, pp. 20 – 27, 2013.

[90] J. Batlle, E. Mouaddib and J. Salvi, Recent progress in coded structured light as a

technique to solve the correspondence problem : a survey , Pattern Recognition, vol. 31,

no. 7, pp. 963 – 982, 1998.

[91] W. Q. L. W. et al, 3D Reconstruction and Simulation for Raceway Groove of Bearings

Based on Structured Light , Proc. of SPIE, vol. 6150, 2006.


[92] C. Reich, R. Ritter and J. Thesing, 3-D shape measurement of complex objects by

combining photogrammetry and fringe projection, Optical Engineering, vol. 39, no. 1,

pp. 224–231, 2000.

[93] X. Han and P. Huang, Combined stereovision and phase shifting method : a new

approach for 3D shape measurement , Proc. SPIE, vol. 7389, pp. 73893C–73893C–8,

2009.

[94] H. Wang and J. Hu, Active stereo method for three-dimensional shape measurement ,

Optical Engineering, vol. 51, no. 6, pp. 063602–1–063602–8, 2012.

[95] Y. Caulier, K. Spinnler, S. Bourennane and T. Wittenberg, New structured

illumination technique for the inspection of high-reflective surfaces : Application for the

detection of structural defects without any calibration procedures , EURASIP Journal on

Image and Video Processing, vol. 2008, no. 1, 2007.

[96] Y. Caulier, S. Bourennane, K. Spinnler and T. Wittenberg, Specific features

for the analysis of fringe images, Optical Engineering, vol. 47, no. 5, pp. 057201–057201–

11, 2008.

[97] H. Zhi and R. B. Johansson, Interpretation and classification of fringe patterns ,

Optics and Lasers in Engineering, vol. 17, no. 1, pp. 9 – 25, 1992.

[98] A. Poesch, T. Vynnyka and E. Reithmeiera, Using inverse fringe projection to

speed up thedetection of local and global geometry defects on free form surfaces, Proc. of

SPIE, vol. 8500, 2012.

[99] Y. Caulier, Inspection of complex surfaces by means of structured light patterns , Opt.

Express, vol. 18, no. 7, pp. 6642–6660, Mar 2010.

[100] J. A. N. Buytaert and J. J. J. Dirckx, Fringe generation and phase shifting with

lcds in projection moiré topography , 2009.

[101] S. Ma, C. Quan, R. Zhu and C. Tay, Investigation of phase error correction for

digital sinusoidal phase-shifting fringe projection profilometry , Optics and Lasers in En-

gineering, vol. 50, no. 8, pp. 1107 – 1118, 2012.

Page 250: HYPERSPECTRAL IMAGERY ALGORITHMS FOR THE …

250 REFERENCES

[102] Y. Liu, J. Xi, Y. Yu and J. Chicharo, Phase error correction based on inverse

function shift estimation in phase shifting profilometry using a digital video projector ,

2010.

[103] S. Kammel and F. Leon, Deflectometric measurement of specular surfaces, Instru-

mentation and Measurement, IEEE Transactions on, vol. 57, no. 4, pp. 763–769, 2008.

[104] X. Maldague, American Society For Nondestructive Testing - ASNT, "Infrared and

Thermal Testing", Nondestructive Handbook on Infrared Technology , vol. 3, X. Maldague

technical ed., P. O. Moore ed., 3rd edition, Columbus, Ohio, 2001.

[105] A. Rogalski and K. Chrzanowski, Infrared devices and techniques, Opto-Electronics

Review, vol. 10, no. 2, pp. 111 – 136, 2002.

[106] S. Burnay, T. Williams and C. Jones, Applications of Thermal Imaging , Adam

Hilger, 1988.

[107] H. Gross, Handbook of Optical Systems, Surevy of optical instrumentations, Volume

4 , WILEY-VCH, 1988.

[108] M. J. Riedl, Optical Design Fundamentals for Infrared Systems, SPIE, Bellingham,

1995.

[109] L. Chaerle, L. I. and J. H. . van der Straeten D., Monitoring and screening

plant populations with combined thermal and chlorophyll fluorescence imaging , Journal

of Experimental Botany, vol. 58, pp. 773 – 784, 2007.

[110] S. O’Shaughnessy, S. Evett, P. Colaizzi and T. Howell, Using radiation ther-

mography and thermometry to evaluate crop water stress in soybean and cotton, Agri-

cultural Water Management, vol. 98, no. 10, pp. 1523 – 1535, 2011.

[111] H. Wiggenhauser, Active IR-applications in civil engineering , Infrared Physics &

Technology, vol. 43, no. 3-5, pp. 233 – 238, 2002.

[112] R. W. Arndt, Square pulse thermography in frequency domain as adaptation of pulsed

phase thermography for qualitative and quantitative applications in cultural heritage and

civil engineering , Infrared Physics & Technology, vol. 53, no. 4, pp. 246 – 253, 2010.

Page 251: HYPERSPECTRAL IMAGERY ALGORITHMS FOR THE …

REFERENCES 251

[113] C. Meola, R. D. Maio, N. Roberti and G. M. Carlomagno, Application of in-

frared thermography and geophysical methods for defect detection in architectural struc-

tures, Engineering Failure Analysis, vol. 12, no. 6, pp. 875 – 892, 2005.

[114] A. Irace, Infrared thermography application to functional and failure analysis of elec-

tron devices and circuits, Microelectronics Reliability, vol. 52, no. 9-10, pp. 2019 – 2023,

2012.

[115] O. Breitenstein, Lock-in IR thermography for functional testing of solar cells and

electronic devices, Quantitative InfraRed Thermography Journal, vol. 1, no. 2, pp. 151–

172, 2004.

[116] M. S. Jadin and S. Taib, Recent progress in diagnosing the reliability of electrical

equipment by using infrared thermography , Infrared Physics & Technology, vol. 55, no. 4,

pp. 236 – 245, 2012.

[117] M. Korukçu and M. Kilic, The usage of IR thermography for the temperature mea-

surements inside an automobile cabin, International Communications in Heat and Mass

Transfer, vol. 36, no. 8, pp. 872 – 877, 2009.

[118] B. Lahiri, S. Bagavathiappan, T. Jayakumar and J. Philip, Medical applications

of infrared thermography : A review , Infrared Physics & Technology, vol. 55, no. 4,

pp. 221 – 235, 2012.

[119] V. Hunt, G. Lock, S. Pickering and A. Charnley, Application of infrared thermo-

graphy to the study of behavioural fever in the desert locust , Journal of Thermal Biology,

vol. 36, no. 7, pp. 443 – 451, 2011.

[120] S. Dudić, I. Ignjatović, D. Šešlija, V. Blagojević and M. Stojiljković, Leakage

quantification of compressed air using ultrasound and infrared thermography , Measure-

ment, vol. 45, no. 7, pp. 1689 – 1694, 2012.

[121] C. Wallbrink, S. A. Wade and R. Jones, The effect of size on the quantitative

estimation of defect depth in steel structures using lock-in thermography , Journal of

Applied Physics, vol. 101, no. 10, pp. 104907–104907–8, 2007.

Page 252: HYPERSPECTRAL IMAGERY ALGORITHMS FOR THE …

252 REFERENCES

[122] A. Gowen, B. Tiwari, P. Cullen, K. McDonnell and C. O’Donnell, Applica-

tions of thermal imaging in food quality and safety assessment , Trends in Food Science

& Technology, vol. 21, no. 4, pp. 190 – 200, 2010.

[123] F. Mercuri, U. Zammit, N. Orazi, S. Paoloni, M. Marinelli and F. Scudieri,

Active infrared thermography applied to the investigation of art and historic artefacts ,

Journal of Thermal Analysis and Calorimetry, vol. 104, pp. 475–485, 2011.

[124] S. Bagavathiappan, J. Philip, T. Jayakumar and B. Raj, Correlation between

plantar foot temperature and diabetic neuropathy : A case study by using an infrared

thermal imaging technique, J Diabetes Sci Technol, vol. 4, no. 6, pp. 1386–1392, 2010.

[125] J. Vargas, M. Brioschi, F. Dias, M. Parolin, F. Mulinari-Brenner, J. Ordo-

nez and D. Colman, Normalized methodology for medical infrared imaging , Infrared

Physics & Technology, vol. 52, no. 1, pp. 42 – 47, 2009.

[126] S. Briz, A. de Castro, J. Aranda, J. Meléndez and F. Lopez, Reduction of

false alarm rate in automatic forest fire infrared surveillance systems, Remote Sensing

of Environment, vol. 86, no. 1, pp. 19 – 29, 2003.

[127] M. Marchetti, M. Moutton, S. Ludwig, L. Ibos, V. Feuillet and J. Dumoulin,

Implementation of an infrared camera for road thermal mapping , 10th International

Conference on Quantitative InfraRed Thermography, 2010.

[128] C. Meola, Infrared thermography of masonry structures, Infrared Physics & Techno-

logy, vol. 49, no. 3, pp. 228 – 233, 2007.

[129] E. Grinzato, C. Bressan, S. Marinetti, P. Bison and C. Bonacina, Monitoring

of the scrovegni chapel by IR thermography : Giotto at infrared , Infrared Physics &

Technology, vol. 43, no. 3-5, pp. 165 – 169, 2002.

[130] C. Laury, V. C. Wim, M. Eric, L. Hans, V. M. Marc and V. D. S. Dominique,

Presymptomatic visualization of plant-virus interactions by thermography , 1999.

[131] X. Maldague, Introduction to NDT by active infrared thermography , Materials Eva-

luation, vol. 60, no. 9, pp. 1060–1073, September 2002.

Page 253: HYPERSPECTRAL IMAGERY ALGORITHMS FOR THE …

REFERENCES 253

[132] J. W. Maclachlan Spicer, W. D. Kerns, L. C. Aamodt and J. C. Murphy,

Time-resolved infrared radiometry of multilayer organic coatings using surface and sub-

surface heating , Proc. SPIE, vol. 1467, pp. 311–321, 1991.

[133] R. Osiander, J. W. Maclachlan Spicer and J. M. Amos, Thermal inspection of

SiC/SiC ceramic matrix composites, Proc. SPIE, vol. 3361, pp. 339–349, 1998.

[134] B. Lahiri, S. Bagavathiappan, P. Reshmi, J. Philip, T. Jayakumar and B. Raj,

Quantification of defects in composites and rubber materials using active thermography ,

Infrared Physics & Technology, vol. 55, no. 2-3, pp. 191 – 199, 2012.

[135] M. Szwedo, L. Pieczonka and T. Uhl, Application of vibrothermography in nondes-

tructive testing of structures, 6th European Workshop on Structural Health Monitoring,

2012.

[136] J. Renshaw, J. C. Chen, S. D. Holland and R. B. Thompson, The sources of heat

generation in vibrothermography , NDT & E International, vol. 44, no. 8, pp. 736 – 739,

2011.

[137] D. Wu and G. Busse, Lock-in thermography for nondestructive evaluation of materials,

Revue Générale de Thermique, vol. 37, no. 8, pp. 693 – 703, 1998.