
Inspection of specular and painted surfaces with centralized fusion techniques




Measurement 39 (2006) 536–546

www.elsevier.com/locate/measurement


F. Puente León a,*, S. Kammel b

a Technische Universität München, Lehrstuhl für Messsystem- und Sensortechnik, D-80290 Munich, Germany
b Universität Karlsruhe (TH), Institut für Mess- und Regelungstechnik, Postfach 6980, D-76128 Karlsruhe, Germany

Received 8 January 2004; accepted 8 December 2005
Available online 30 January 2006

Abstract

This contribution presents new strategies for inspecting specular and painted surfaces. Structures on such surfaces are normally only visible if patterns of the environment are reflected in them. Thus, conventional approaches yield only a small information gain from a single measurement. In our approach, different intensity patterns are systematically generated in the environment of the surface such that they are reflected in the surface and captured by a camera. Subsequently, the recorded images are processed simultaneously by a centralized fusion technique. Since the fused information is closer to the source, a better exploitation of the raw data is achieved. The fusion problem is formulated with an energy function, whose minimization yields the desired surface defects. The methodology is illustrated with two case studies: the analysis of machined surfaces and the inspection of painted free-form surfaces. In both cases, a reliable yet cost-efficient inspection is attained that matches the needs of industry.
© 2005 Elsevier Ltd. All rights reserved.

Keywords: Automated visual inspection; Image fusion; Bayesian fusion theory; Specular surfaces; Illumination; Painted surfaces; Car body parts; Machined surfaces; Smooth surfaces; Structured surfaces

1. Introduction

Automated visual inspection tasks are often concerned with surfaces showing a partly or even perfectly specular behavior. Examples are lenses, car body parts, perfume flacons, machined surfaces, painted surfaces, dies and molds, and, of course, mirrors. However, methods enabling an automated inspection of such surfaces with sufficient accuracy, reliability, and speed are still lacking. Triangulation methods (projection, structured lighting, and shadow techniques) and shape-from-shading approaches are difficult to use because they presuppose a diffuse reflectance of the surface [1]. Additionally, these methods are insufficiently sensitive to the small variations of surface curvature relevant to the demands of the automotive industry. Stylus instruments and optical autofocus scanners provide highly accurate measurements that allow a quantitative characterization of defects, but they are too slow to be utilized in production lines [2]. Interferometric techniques are too sensitive for rough industrial environments, though such methods are successfully being used for the inspection of a variety of optical components [3]. For all of these reasons, the inspection of aesthetic surfaces has mostly been done manually up to now. The image fusion strategies described in this paper make it possible to automate common inspection tasks for a wide variety of surfaces in a robust manner.

0263-2241/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.measurement.2005.12.007

* Corresponding author. Tel.: +49 89289 23350; fax: +49 89289 23348.
E-mail addresses: [email protected] (F. Puente León), [email protected] (S. Kammel).

In the scope of this paper, we will assume that the surface s(ξ) to be inspected consists of two spatially varying components that contribute in different ways to the image intensities measured by the camera of the automated inspection system [4]:

$$s(\xi) := \begin{pmatrix} \rho(\theta_i, \varphi_i; \theta_o, \varphi_o; \xi) \\ f(\xi) \end{pmatrix}, \qquad (1)$$

where $\xi := (\xi, \eta)^T$ denotes the lateral world coordinates. The first component of s(ξ),

$$\rho(\theta_i, \varphi_i; \theta_o, \varphi_o; \xi) = \frac{\mathrm{d}L_o(\theta_o, \varphi_o; \xi; E_i)}{\mathrm{d}E_i(\theta_i, \varphi_i; \xi)}, \qquad (2)$$

is the bidirectional reflectance distribution function (BRDF) [5], where the indices "i" and "o" indicate "incident" and "observed," respectively. The BRDF describes the local optical properties of the surface material. In a nutshell, it tells how bright a surface element viewed from the (θo, φo) direction will appear if it is illuminated from the (θi, φi) direction. Mathematically, the BRDF is defined as the ratio of the radiance dLo observed in the (θo, φo) direction and caused by Ei to the irradiance dEi caused by an incident flux from the (θi, φi) direction. The elevation angle θ and the azimuth φ belong to a local spherical coordinate system whose polar axis is parallel to the normal vector ns(ξ) of the surface. Since we are dealing with (at least partly) specular surfaces, additional information is available on this component. Provided that the surface shows no local defects, we will assume that this component is spatially constant. Consequently, all deviations from this behavior will be assumed to originate from defects that are to be detected.

The second component of s(ξ) is defined as that part of the surface relief which can be resolved by the image acquisition system, that is, the macrostructure. Note that the 3-D microstructure, which cannot be spatially resolved, contributes to the BRDF. Here, f(ξ) represents the height of the macrostructure of the surface at the lateral location ξ.

The vector s(ξ) provides a comprehensive description of the optically relevant properties of the surface. However, in practice it is usually impossible to determine both components of s(ξ) sufficiently fast and at an acceptable expense to accomplish the inspection. Instead, images of the surface are mostly utilized, although some information contained in s(ξ) is lost in the imaging process.

A common problem in the context of imaging specular surfaces relates to illumination. Perfectly specular surfaces reflect light only along a specific path such that the angle of the incident light equals the angle of the reflected light. Consequently, if the camera cannot see an illuminated object (typically the light source itself) reflected in the surface, the corresponding image will appear dark, so that only a small information gain will be achieved from that measurement. A possible solution to this problem consists in recording several images with different lighting (or in a different environment). Typically, the information of interest is extracted from each image separately and is then combined at a decision level. As will be demonstrated in the following, a more powerful approach is to process all images simultaneously by means of a centralized fusion strategy.

The remainder of the paper is organized as follows. In the next section, a classification of specular surfaces into structured and smooth surfaces is introduced, and suitable strategies are presented to inspect both types of surfaces, no matter whether their specularity is perfect or only partial. After a short introduction to energy minimization methods, Section 3 describes specific algorithms to accomplish the fusion of the raw images. In Section 4, experimental results demonstrate the good performance of our strategy based on two case studies: the analysis of machined surfaces and the inspection of painted free-form surfaces. Finally, in Section 5 the fusion methods are applied to a system that enables one to inspect painted surfaces interactively.

2. Inspection principle

Depending on the geometry of the surfaces to be inspected, two cases must be distinguished, for they require completely different inspection approaches. Whenever the macrostructure f(ξ) of the surface under consideration is a spatially slowly varying signal with low-pass characteristics, we will consider it a smooth surface. Otherwise, a structured surface will be assumed.


2.1. Structured surfaces

We assume a structured specular surface to be composed of several facets such that a non-negligible part of the incident light is reflected according to the law of reflection, although diffuse spread may also occur. As shown in [6], this assumption holds for a wide range of technical and engineering surfaces.

To provide an accurate recording of the surface structure, in-focus imaging is required. The camera acquires an image of the environment reflected in the different facets of the surface. The structures configuring the environment, however, usually appear blurred due to the limited depth of field of the imaging optics and the optical properties of the surface. This fact enables one to detect defects of the surface topography by analyzing the intensities measured by the camera for each facet of the surface. Defects concerning the surface microstructure (i.e., the BRDF), such as an unwanted surface roughness, yield variations of the specularity and will normally produce a loss of contrast [7]. To inspect the second component of the surface s(ξ), i.e., the topography f(ξ), light-field methods are appropriate. To check whether the slope of a particular facet is correct, a bright pattern is to be located in the environment of the surface such that it can be reflected by an intact facet and viewed by the camera. Obviously, a comprehensive inspection based on this procedure presupposes that the patterns to be visualized be programmable via a computer.

Fig. 1(a) shows a possible configuration for inspecting structured surfaces. The patterns are displayed on a hemispherical screen. The surface under investigation is located in the center of the hemisphere, where it is viewed by a camera through an opening in the screen. The actual screen is opaque, although in this rendered scheme a transparency has been chosen to permit viewing of the inside. The binary periodic pattern of this example shows two periods in the azimuthal direction and is constant in the elevation direction, as depicted in the example shown at the lower right corner of Fig. 4(a). The patterns themselves may be varied mechanically or, as in our case, optically by means of projection techniques.

Fig. 1. Setups for inspection of specular surfaces: (a) imaging of structured surfaces; (b) imaging of smooth surfaces.

2.2. Smooth surfaces

Due to the smoothness of this class of surfaces, visual inference of their shape is not straightforward. An alternative to standard shape inspection approaches, such as fringe projection techniques, is based on the deflectometric measuring approach. Deflectometry exploits the fact that light is reflected at a specular surface according to the law of reflection. A camera observes a pattern displayed on a screen that is reflected by the surface, as shown in Fig. 1(b). In this configuration, the surface becomes a part of the optical system and therefore distorts the recorded pattern [8]. Because the surface is not observed directly, there is a trade-off between the lateral resolution and the sensitivity of the measurement system that is balanced by the position of the focal plane of the camera; see Fig. 2 (left). Focussing on the surface would maximize the lateral resolution of the surface being inspected at the expense of a reduced sensitivity of the method. In contrast, focussing on the screen would make it possible to exploit the highest resolution possible for the pattern, but would also imply a blurring of the surface. If sinusoidal patterns are projected onto the screen, the resolution of the method is limited only by the sensitivity of the image detector, because blurring of such patterns only reduces their contrast.

Fig. 2. Measuring principle of deflectometric techniques shown for a flat (left) and a tilted (right) surface. Even a slight change of the local surface slope (right) leads to mapping of a different area of the pattern displayed on the screen.

As shown in Fig. 2 (right), this method exhibits an extremely high sensitivity to the local surface slope. Even a slight change of the surface slope leads to the mapping of a completely different area of the pattern on the screen. Moreover, the sensitivity of the method can be enhanced simply by increasing the distance between the surface and the screen. Thus, this method enables one to detect tiny defects that may be considered aesthetically displeasing, even if they do not entail any functional disadvantage.
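The leverage behind this sensitivity follows directly from the law of reflection: a local slope change δα rotates the reflected ray by 2δα, so the point mapped onto a screen at distance d shifts laterally by roughly 2·d·tan(δα). The following back-of-the-envelope sketch illustrates the effect; the 750 mm distance is merely an assumed example, not a parameter of the setups described later:

```python
import math

def pattern_shift(slope_change_rad: float, screen_distance_mm: float) -> float:
    """Lateral shift of the observed screen point caused by a local
    slope change: the reflected ray rotates by twice the slope change."""
    return 2.0 * screen_distance_mm * math.tan(slope_change_rad)

# A slope deviation of only 0.1 mrad, viewed against a screen 750 mm
# away, already moves the mapped pattern point by about 0.15 mm --
# easily resolvable by a camera observing the screen pattern.
print(round(pattern_shift(1e-4, 750.0), 3))
```

Doubling the screen distance doubles the shift, which is exactly why increasing the surface-to-screen distance enhances the sensitivity.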

3. Fusion

In the scope of this paper, fusion is concerned with the combination of B images

$$D := \{d(\mathbf{x}, p_i(\mathbf{x})),\ i = 0, \ldots, B-1\} \qquad (3)$$

into the wanted result r(x), where P = {p_i(x), i = 0, …, B−1} represents the set of patterns displayed on the screen, and the magnitude p_i(x) describes the intensity of the pattern at the location indicated by the vector x. For both classes of surfaces introduced in the last section, the consideration of two-dimensional patterns has proven sufficient to perform the inspection. Consequently, the parameter vector x may be expressed in polar coordinates by means of the azimuth φ and the elevation angle θ:

$$p_i(\mathbf{x}) = p_i(\varphi, \theta) \quad \text{with} \quad \mathbf{x} = (\varphi, \theta)^T. \qquad (4)$$

If the location of a point on the hemispherical screen depicted in Fig. 1(a) is expressed in spherical coordinates (with a constant radius), this representation becomes immediately clear. Although in the examples presented in Section 4 the result r(x) is a scalar feature image, in other applications one could instead be interested in obtaining images, symbolic image descriptors, or even a vector containing several results; see [9].

The fusion of the sensor data shall take place complementarily. Though the information of interest is distributed over all the images of the series, for a certain location it remains concentrated in a few images. With regard to the fusion strategy to be employed, a centralized approach provides the most robust way to accomplish this task. Since the information fused is closer to the source, a better exploitation of the raw data is achieved [10]. A drawback of such methods is that the resulting algorithms are often ad hoc solutions which, in addition, tend to be computationally expensive. Thanks to the homogeneity of the sensor data, however, no preprocessing is needed in the present case. Thus, the computing time can be kept within reasonable limits.

3.1. Fusion approach

The theoretical origins of data fusion date back to the late sixties, although a broad application of these techniques did not take place until the early eighties. In the meantime, the bibliography on data fusion has become very extensive, and applications to several fields have been reported, such as robotics, pattern recognition, medicine, non-destructive testing, geosciences, defense, and finance [11]. However, most of the proposed approaches are not systematic, but rather represent ad hoc solutions to specific problems. Some of the few systematic frameworks documented in recent years for performing fusion of image data include statistical approaches (both classical and Bayesian ones), Dempster–Shafer logical reasoning, fuzzy logic, and neural networks.

A fairly general approach to centralized image fusion consists in expressing all knowledge available a priori on desirable properties and reasonable constraints regarding the raw data D, the nuisance parameters N [12], and the fusion results R, as well as their interconnections, in the shape of generalized energy terms E_k(D, R, N) [13]. These "energies" E_k have to be chosen such that the available knowledge and the stated requirements are reflected monotonically, in the sense that the result becomes more desirable the lower the energy is. The energy terms are then combined into an energy function E by means of weighted summation:

$$E = \sum_k \lambda_k E_k(D, R, N), \qquad \lambda_k > 0. \qquad (5)$$

E represents an implicit formulation of the fusion task. Due to the monotony of the energy function, the fusion can be accomplished by minimizing E with respect to the fusion results R and the nuisance parameters N:

$$\{R^*, N^*\} = \arg\min_{R, N} \{E\}, \qquad (6)$$

where R* and N* denote the results of the optimization.
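To make Eqs. (5) and (6) concrete, the toy sketch below fuses a noisy one-dimensional feature signal by combining a data term with a smoothness term and minimizing the weighted sum by gradient descent. The particular terms, weight, and step size are illustrative assumptions, not the configuration used in the case studies:

```python
import numpy as np

def fuse(m: np.ndarray, lam: float = 0.5, steps: int = 500, lr: float = 0.2) -> np.ndarray:
    """Minimize E(r) = sum (r - m)^2 + lam * sum (r[k+1] - r[k])^2
    by gradient descent: a data term tying the result r to the raw
    feature m, plus a smoothness term, summed as in Eq. (5)."""
    r = m.astype(float).copy()
    for _ in range(steps):
        grad = 2.0 * (r - m)              # gradient of the data term
        diff = np.diff(r)                 # r[k+1] - r[k]
        grad[:-1] -= 2.0 * lam * diff     # gradient of the smoothness term
        grad[1:] += 2.0 * lam * diff
        r -= lr * grad
    return r
```

Because this toy energy is convex, plain gradient descent suffices; for non-convex energy landscapes, stochastic methods such as simulated annealing are required instead.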

Some advantages of this approach are its generality as well as the possibility to incorporate additional information and constraints by simply adding further energy terms. Moreover, there is an interesting connection with Bayesian statistics. According to statistical physics, a Gibbs probability density function (PDF) can be defined for the energy function:

$$\mathrm{PDF} \propto e^{-E/T} = \prod_k e^{-\lambda_k E_k / T}. \qquad (7)$$

T can be thought of as a generalized temperature. Since the energy function is a sum, the PDF can be decomposed into factors. These factors can be interpreted either as a likelihood function or as a priori PDFs. By means of an appropriate normalization of Eq. (7), the a posteriori PDF of the fusion result given the image series is obtained. The monotony of the exponential function ensures that minimizing E is equivalent to maximizing Eq. (7). Therefore, the optimization delivers the maximum a posteriori estimate of the fusion result. It should be emphasized that, since the energy terms embody not only objective knowledge but also subjective wishes and requirements, the PDF of Eq. (7) is a subjective probability description. However, the advantage of describing fusion from a probabilistic point of view is that a powerful set of mathematical tools exists for treating Gibbs PDFs. An important example is the simulated annealing optimization method [14].

3.2. Case study 1: Machined surfaces of pressure sensors

The first case study is concerned with the detection of defects on machined surfaces acting as membranes of pressure sensors. The field of inspection is about 10 mm², whereas the defects themselves are on the order of a few hundredths of a square millimeter. Fig. 3(a) shows an example of such a non-defective surface illuminated with diffuse light, whereas the membrane of Fig. 3(b) features several defects, as can be perceived in the zoomed detail picture on the right. Obviously, these images hardly allow one to discern intact regions from defective areas.

Fig. 3. Machined surfaces of pressure sensors: (a) faultless membrane; (b) defective membrane and zoomed image showing defects.

To perform the inspection, an image series D of the surface is recorded using binary patterns generated according to the following equation:

$$p_i(\theta, \varphi) = p_i(\varphi) = \frac{1}{2}\left[\operatorname{sgn}\!\left(\cos\!\left(2\varphi - \frac{2\pi i}{B}\right)\right) + 1\right], \qquad (8)$$


Fig. 4. Images of a membrane of a pressure sensor and corresponding patterns used to record them (lower right corner): (a) i = 0; (b) i = B/4; (c) i = B/2; (d) i = 3B/4.

¹ Despite the simplicity of this classification approach, a more robust alternative will be employed in Section 4.


where sgn(·) denotes the signum function:

$$\operatorname{sgn}(x) := \begin{cases} 1 & \text{for } x \ge 0 \\ -1 & \text{otherwise} \end{cases}. \qquad (9)$$
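Eqs. (8) and (9) translate directly into code. The sketch below generates the azimuthal pattern series; B = 16 matches the series described next, while the angular sampling of 360 steps is an arbitrary choice of our own:

```python
import numpy as np

def sgn(x: np.ndarray) -> np.ndarray:
    """Signum function of Eq. (9): +1 for x >= 0, -1 otherwise."""
    return np.where(x >= 0.0, 1.0, -1.0)

def pattern(i: int, phi: np.ndarray, B: int = 16) -> np.ndarray:
    """Binary azimuthal pattern p_i(phi) of Eq. (8): two periods in
    the azimuthal direction, rotated by 2*pi*i/B per image index i."""
    return 0.5 * (sgn(np.cos(2.0 * phi - 2.0 * np.pi * i / B)) + 1.0)

phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
series = np.stack([pattern(i, phi) for i in range(16)])   # shape (B, 360)
```

pattern(0, ·) is bright wherever cos(2φ) ≥ 0, i.e., two bright sectors opposite each other, matching the icon shown in the lower right corner of Fig. 4(a).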

Fig. 4 shows four example images out of a series of 16 of a sensor membrane, as well as icons representing the corresponding intensity patterns used to record them. To illustrate the fusion principle, consider now a certain fixed point x of the surface. If the course of the image intensities at the location x is analyzed as a function of the image index i, a typical signal pattern can be recognized in non-defective regions. Each of the grooves acts as a mirror that reflects the pattern intensities located in both directions perpendicular to its own local direction. Similarly, dark areas should be expected for an intact texture if the patterns perpendicular to its course are dark. In other words, the local intensity of an ideal groove texture reveals whether the patterns at both sides of it are bright or not. Bright regions emerging from a dark groove texture indicate local defects, as can be seen clearly in Fig. 4(a) and (d). Though this fact would already enable one to formulate detection algorithms based on the analysis of the neighborhood of each pixel, fusion methods provide an enhanced way to detect even tiny surface defects more reliably. No information regarding spatial neighborhoods will be considered; instead, "neighborhoods" in pattern space will be exploited.

In this particular case, harmonic analysis of the signals d(x, p_i) enables us to define the following feature image that robustly measures local surface defects:

$$m(\mathbf{x}) := \frac{|D(\mathbf{x}, f_p = 1)|}{|D(\mathbf{x}, f_p = 1)| + |D(\mathbf{x}, f_p = 0)|}, \qquad (10)$$

where

$$D(\mathbf{x}, f_p) := \mathcal{F}_p\{d(\mathbf{x}, p_i)\} = \sum_{i=0}^{B-1} d(\mathbf{x}, p_i)\,\exp\!\left(-\mathrm{j}\,\frac{2\pi i f_p}{B}\right) \qquad (11)$$

denotes the one-dimensional discrete Fourier transform (DFT) of the series with respect to the pattern dimension, that is, the index i. Eq. (10) computes a feature based on the comparison of two frequency components of the image intensities at the location x: the fundamental oscillation and the DC component. It is easy to see that the resulting values all lie within the range [0, 1] and that the ratio 0.5 is obtained when the energy of both components is the same. It is thus straightforward to segment the image into defective and intact regions by choosing a threshold t = 0.5.¹ A value higher than t means that the fundamental oscillation dominates, i.e., the texture contains no defects at this location. Otherwise, x is assigned to the set of defective regions.
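Assuming the B registered images are stacked into an array of shape (B, H, W), Eqs. (10) and (11) reduce to a one-dimensional FFT along the pattern index. A minimal sketch (the small epsilon guarding against division by zero is our own addition):

```python
import numpy as np

def defect_feature(series: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Feature image m(x) of Eq. (10) from a (B, H, W) image stack:
    the DFT of Eq. (11) is taken along the pattern index i (axis 0),
    and the fundamental oscillation is compared with the DC component."""
    D = np.fft.fft(series, axis=0)     # same sign convention as Eq. (11)
    dc = np.abs(D[0])                  # |D(x, f_p = 0)|
    fund = np.abs(D[1])                # |D(x, f_p = 1)|
    return fund / (fund + dc + eps)

def defect_mask(m: np.ndarray, t: float = 0.5) -> np.ndarray:
    """m(x) <= t marks defect candidates (fundamental does not dominate)."""
    return m <= t
```

A pixel whose intensity oscillates with the fundamental period over the pattern index scores clearly higher than a pixel whose intensity stays constant across the series.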

The resulting energy function is trivial and consists of a single term:

$$E = \sum_{\mathbf{x}} [r(\mathbf{x}) - m(\mathbf{x})]^2 = E_{\text{feature}}(m(\mathbf{x}), r(\mathbf{x})). \qquad (12)$$

Since r*(x) = m(x) holds for the optimal solution, a costly minimization is not necessary in this case.

3.3. Case study 2: Painted car body parts

In the second case study, defects on painted car doors are to be detected. Similar to the examination of the membranes of pressure sensors, an image series D of the surface is recorded using a set of two-dimensional binary patterns

$$p_i(u, v) = \frac{1}{2}\left[\operatorname{sgn}\!\left(\cos\!\left(n \cdot \frac{2\pi u}{u_{\max}} - \frac{2\pi i}{B}\right)\right) + 1\right], \qquad (13)$$

where u_max denotes the maximum coordinate value in the u direction displayable by the device generating the patterns (e.g., the projector), and n is the number of periods to display. For the sake of simplicity, the resulting set of fringe patterns p_i(u, v) has been described in a Cartesian coordinate system (u, v), but it is actually projected onto the inside of a hemispherical screen to perform real measurements. According to the law of reflection, a flat screen would restrict the deflectometric measurement system to the inspection of surfaces with a small spread of normal vector directions. The hemispherical shape of the screen, however, allows the patterns to be viewed after they have been reflected at the surface for most free-form surfaces [15]. Again, the course of the image intensities at a fixed location x describes a typical signal pattern that enables one to recognize defective regions by means of appropriate analysis techniques.
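The fringe patterns of Eq. (13) can be rendered for a projector as follows; the image size, period count n, and number of phase shifts B are assumed example values, not the parameters of the authors' setup:

```python
import numpy as np

def fringe_pattern(i: int, shape=(480, 640), n: int = 8, B: int = 4) -> np.ndarray:
    """Binary phase-shifted fringe pattern p_i(u, v) of Eq. (13):
    n periods across the u axis, shifted by 2*pi*i/B per image."""
    u = np.arange(shape[1], dtype=float)
    u_max = shape[1] - 1                # largest displayable u coordinate
    arg = n * 2.0 * np.pi * u / u_max - 2.0 * np.pi * i / B
    row = 0.5 * (np.where(np.cos(arg) >= 0.0, 1.0, -1.0) + 1.0)
    return np.tile(row, (shape[0], 1))  # fringes are constant along v

series = [fringe_pattern(i) for i in range(4)]   # one phase-shifted set
```

Each image in the set is the previous one shifted by a quarter period, producing the distinct intensity changes over the pattern index that the measures below exploit.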

In addition to defects like scratches, which cause local changes of the BRDF, bubbles and inclusions in the paint or coating should be detected as well. The latter defects are characterized by a much higher curvature compared with intact surface parts. Therefore, from defective surface areas the camera receives a compressed view of a large region of the screen. Details of the screen cannot be discriminated from pixels of defective areas because of the discrete nature of the image formation as well as out-of-focus imaging. Thus, defects are typically imaged with the mean brightness of the visible screen area.

In contrast, image areas that correspond to faultless regions ideally show a sharp image of the screen pattern. To ensure sharp images with high contrast in non-defective areas, it is advisable to adapt the displayed pattern to the curvature of the surface being examined [16]. Displaying consecutive phase-shifted patterns p_i(u, v) results in distinct intensity changes within the areas of the camera image that correspond to faultless surface regions. Defective regions, however, show a nearly constant grey level. This means that the contrast, and thus also the spread of any point x in the dimension spanned by the pattern index i, is high for faultless surface areas and low for defective regions. A suitable measure of the contrast is defined as

$$m_c(\mathbf{x}) := \max_i \{d(\mathbf{x}, p_i)\} - \min_i \{d(\mathbf{x}, p_i)\} \quad \text{with} \quad i = 0, \ldots, B-1. \qquad (14)$$

The measure m_c(x) performs well if the surface being inspected shows a high degree of specularity and the generated patterns feature a high contrast. On the other hand, this measure is rather susceptible to noise due to its dependence on the extremal values. Thus, for noisy images it is preferable to use other measures, such as the variance

$$m_v(\mathbf{x}) := \frac{1}{B-1} \sum_{i=0}^{B-1} \left(d(\mathbf{x}, p_i) - \bar{d}(\mathbf{x})\right)^2 \quad \text{with} \quad \bar{d}(\mathbf{x}) = \frac{1}{B} \sum_{i=0}^{B-1} d(\mathbf{x}, p_i). \qquad (15)$$

The resulting energy function as well as its optimalsolution can be calculated according to Eq. (12).

4. Results

4.1. Structured surfaces

To record the image series of Fig. 4, a commercially available image acquisition system has been used [17]; see Fig. 5(a). It features 768 dimmable light-emitting diodes (LEDs) that allow a flexible, computer-controlled generation of a wide variety of patterns, as shown in the example of Fig. 5(b). The light is incident on a parabolic screen, in the center of which the surface under investigation is located. A CCD camera and a microscope are used to capture the images through an opening in the screen.

Fig. 6 shows the fusion results obtained for both membranes presented in Fig. 3 with the method proposed in Section 3.2. The feature image on the left does not contain any noticeable defects, whereas the image of Fig. 6(b) clearly features several defective areas. The zoomed image on the right of this figure shows an overlay with the results of a further defect detection step. To this end, an edge detection method based on a Laplacian-of-Gaussian (LoG) filter has been used [18].
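The exact filter of [18] is not reproduced here; the following generic LoG blob-response sketch (kernel size, σ, and the FFT-based circular convolution are our own assumptions) indicates how such a detection step can be realized on a fused feature image:

```python
import numpy as np

def log_kernel(sigma: float, size: int) -> np.ndarray:
    """Sampled Laplacian-of-Gaussian kernel, forced to zero mean so
    that flat image regions produce no response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2.0 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2.0 * sigma ** 2))
    return k - k.mean()

def log_response(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Convolve the image with a LoG kernel via the FFT (circular
    boundary handling, adequate away from the image border)."""
    size = int(6 * sigma) | 1           # odd size covering about +/-3 sigma
    kern = log_kernel(sigma, size)
    pad = np.zeros_like(image, dtype=float)
    pad[:size, :size] = kern
    resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
    return np.roll(resp, (-(size // 2), -(size // 2)), axis=(0, 1))
```

Defect candidates are then the pixels whose response magnitude exceeds a threshold; a full edge detector would additionally localize the zero crossings of the response.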

4.2. Smooth surfaces

For the inspection of car body parts, the setup shown in Fig. 5(c) has been used. This setup consists of an arched screen with a diameter of 1.5 m. A projector with a special fisheye lens is placed at the focal point of the screen to project arbitrary patterns. The car body part, in this case a car door, is placed above the projector.

Fig. 5. Image acquisition systems: (a) commercial system GE/2; (b) detail of GE/2 showing a structured test surface and a pattern generated by means of LEDs; (c) setup to inspect car body parts.

Fig. 6. Fusion results for the membrane of a pressure sensor: (a) resulting feature image for the intact membrane shown in Fig. 3(a); (b) resulting feature image for the defective membrane shown in Fig. 3(b); (c) corresponding detection results.

First, three different regions on six car doors have been inspected. The car doors constitute a standard set used by a car manufacturer for the evaluation of paint inspection systems. In Fig. 7(a1–a3), three examples of the fusion results for flat regions of these car doors coated with different paints are depicted. The images have been inverted to highlight the faulty regions caused by bubbles and inclusions. The size of the region acquired by the camera is about 25 cm². Based on a LoG filtering technique, the fusion results have been segmented and classified into three classes, as demanded by industry standards: tolerable defects (0), removable defects (1), and defects that lead to the rejection of the inspected car body part (3). The lateral dimensions of the found defects are in the range of a few millimeters, and the depth of the defects ranges from 10 to 200 µm. All defects had been previously classified by the car manufacturer, and with our methods a 100% classification reliability could be achieved (i.e., in all cases the different defects could not only be reliably detected but also assigned to the same class as given by the car manufacturer).

In a second approach, a larger region (about 0.2 m × 0.8 m) of a car door has been inspected using the described fusion technique. Fig. 8 shows the region of the painted car door together with two fusion results. In Fig. 8(a), an image of a region of the inspected car door taken under diffuse lighting is shown. It is almost impossible to perceive any defect with this illumination. In Fig. 8(b), however, the fusion strategy described in Section 3.3 has been applied to images taken while focussing on the screen. This focus setting maximizes the sensitivity of the system at the expense of a low spatial resolution. The high sensitivity makes it possible to visualize deformations caused by a misadjustment of the forming machine that are nearly invisible to a human observer. Note that in this case the depth of the visualized defects is on the order of magnitude of only a few microns!

Fig. 7. Paint defects on flat regions of a car body part: (a1–a3) fusion results; (b1–b3) classification results.

Fig. 8. Images of an area on a car door recorded under diffuse illumination (a) and fusion results for images recorded while focussing on the screen (b) and the door (c).

Fig. 9. Head-mounted inspection system: (a) HMD device; (b) the interactive inspection procedure demonstrated with a car door.

In contrast, Fig. 8(c) shows the result of fusing images in which the camera was focussed on the door region directly. With this setting, the resolution is maximized at the expense of a reduced sensitivity. The increased resolution permits a reliable detection of paint defects such as bubbles, inclusions or scratches with a magnitude hardly measurable with alternative methods.

Obviously, since the different focus settings allow imaging of different spatial frequency components of the surface being inspected, both results complement each other. Together, they provide a much more accurate insight into surface defects compared with conventional techniques for inspecting specular surfaces.
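Since the two fusion results cover complementary spatial frequency bands, a single combined defect map can be formed from them. The combination rule below (per-pixel maximum of the range-normalized maps) is a simple illustrative choice, not the method used in the paper; it merely ensures that a defect remains visible regardless of which band it appears in.

```python
import numpy as np

def combine_focus_results(low_freq_map, high_freq_map):
    """Combine the two complementary fusion results per pixel.
    Each map is normalized to unit range first so that neither
    focus setting dominates; taking the per-pixel maximum keeps a
    defect visible no matter in which frequency band it shows up.
    Illustrative combination rule only."""
    def norm(x):
        x = x.astype(float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return np.maximum(norm(low_freq_map), norm(high_freq_map))
```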

5. Interactive inspection system

A major requirement for a robust in-line inspection of painted surfaces is to combine the reliability of automated systems with the flexibility of human-based techniques, enabling an efficient manual refinishing of defective areas. To meet this challenge, we have developed a system that consists of the setup shown in Fig. 5(c) and a mobile inspection device equipped with a head-mounted display (HMD) and a video camera, as shown in Fig. 9(a). The HMD comprises two LC displays to visualize the feature image, as well as a camera used to record the raw data. In Fig. 9(b), the interactive inspection procedure is demonstrated with a car door.

During operation, the camera continuously captures images of different fringe patterns reflected in the painted surface, thus emulating the behavior of human examiners. However, before the images are displayed in the HMD, they are combined by means of the signal analysis techniques described in Section 3.3 to form an image in which defects appear strongly enhanced. This way, the information gathered from multiple lighting constellations is concentrated in a single image. Additionally, quantitative detection results can be overlaid on the visualized image to provide an objective assessment.
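The condensation of an illumination series into one display image can be illustrated by a per-pixel statistic across the series. The sketch below uses the temporal standard deviation as a stand-in for the fringe modulation: on an intact specular surface the shifted fringes sweep through every pixel, giving a strong modulation, while a defect scatters the light and damps it. This is a simplified sketch of the idea, not the exact fusion of Section 3.3.

```python
import numpy as np

def enhance_defects(image_stack):
    """Condense an illumination series (N images of shifted fringe
    patterns reflected in the paint) into one display image for the
    HMD.  Low fringe modulation across the series indicates a
    defect, so low variance is mapped to high brightness.
    Simplified illustration of the fusion principle."""
    stack = np.asarray(image_stack, dtype=float)   # shape (N, H, W)
    modulation = stack.std(axis=0)                 # fringe amplitude per pixel
    m = modulation.max()
    return 1.0 - modulation / m if m > 0 else np.ones_like(modulation)
```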

6. Conclusions

New strategies to inspect specular and painted surfaces have been presented. They are based on image fusion techniques and feature a very robust behavior in the case of structured as well as smooth surfaces. To record the different images to be fused, complementary imaging conditions are systematically generated by projecting light patterns onto a screen. To accomplish the fusion, a powerful centralized approach based on energy minimization has been selected. The performance of the presented strategy has been demonstrated by means of two demanding case studies: the analysis of machined surfaces, and the inspection of painted free-form surfaces.

Based on the example of membranes of pressure sensors, a method has been introduced to assess the texture of structured specular surfaces. Conventional methods often yield poor results when applied to such surfaces [19]. The proposed method, however, allows a point-by-point classification of the discrete image locations, and thus performs very accurately even in the case of defects of small extent. The second example refers to the inspection of painted surfaces. With the proposed technique, not only can scratches, bubbles and inclusions be detected reliably; with the very same setup, bumps and dents can also be visualized simply by shifting the focal plane of the camera used. Furthermore, the range of applications of the described methods can easily be extended to diffusely reflecting surfaces, such as unpainted car body parts, by using light of a longer wavelength and recording the corresponding images with an infrared camera.

In both cases, the results show that a reliable yet cost-efficient inspection of specular and painted surfaces is attained, matching the needs of industry. The increased expense of acquiring the image data is more than compensated for by the simplified signal processing.

References

[1] B. Denkena, H. Ahlers, F. Berg, T. Wolf, Fast inspection of larger sized curved surfaces by stripe projection, Annals of the CIRP 51 (1) (2002) 499–502.

[2] D.J. Whitehouse, Handbook of Surface Metrology, Institute of Physics Publishing, Philadelphia, 1992.

[3] T. Dresel, G. Häusler, H. Venzke, 3D-sensing with a confocal microscope, Applied Optics 31 (1992) 919–925.

[4] J. Beyerer, F. Puente León, Suppression of inhomogeneities in images of textured surfaces, Optical Engineering 36 (1) (1997) 85–93.

[5] F.E. Nicodemus, J.C. Richmond, J.J. Hsia, I.W. Ginsberg, T. Limperis, Geometrical Considerations and Nomenclature for Reflectance, NBS Monograph 160, National Bureau of Standards, U.S. Department of Commerce, Washington, DC, 1977.

[6] P. Kierkegaard, Reflection properties of machined metal surfaces, Optical Engineering 35 (3) (1996) 845–857.

[7] F. Puente León, Enhanced imaging by fusion of illumination series, in: O. Loffeld (Ed.), Sensors, Sensor Systems, and Sensor Data Processing, Proc. SPIE 3100, 1997, pp. 297–308.

[8] D. Perard, Automated visual inspection of specular surfaces with structured-lighting reflection techniques, Fortschritt-Berichte VDI, Reihe 8, No. 869, VDI Verlag, Düsseldorf, 2001.

[9] F. Puente León, Complementary image fusion for inspection of technical surfaces, Technisches Messen 69 (4) (2002) 161–168 (in German).

[10] D.L. Hall, Mathematical Techniques in Multisensor Data Fusion, Artech House Inc., Norwood, MA, 1992.

[11] X.E. Gros, NDT Data Fusion, Arnold, London, 1997.

[12] J. Beyerer, Is it useful to know a nuisance parameter? Signal Processing 68 (1) (1998) 107–111.

[13] J.J. Clark, A.L. Yuille, Data Fusion for Sensory Information Processing Systems, Kluwer Academic Publishers, Boston, 1990.

[14] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.

[15] J. Beyerer, D. Perard, Automatische Inspektion spiegelnder Freiformflächen anhand von Rasterreflexionen, Technisches Messen 64 (10) (1997) 394–400 (in German).

[16] S. Kammel, Automated optimization of measurement setups for the inspection of specular surfaces, in: K.G. Harding, J.W.V. Miller (Eds.), Machine Vision and Three-Dimensional Imaging Systems for Inspection and Metrology II, Proc. SPIE 4567, 2002, pp. 199–206.

[17] M. Heizmann, Automated comparison of striation marks with the system GE/2, in: Z.J. Geradts, L.I. Rudin (Eds.), Investigative Image Processing II, Proc. SPIE 4709, 2002, pp. 80–91.

[18] J. Beyerer, F. Puente León, Detection of defects in groove textures of honed surfaces, International Journal of Machine Tools and Manufacture 37 (3) (1997) 371–389.

[19] F. Puente León, Model-based inspection of shot peened surfaces using fusion techniques, in: K.G. Harding, J.W.V. Miller, B.G. Batchelor (Eds.), Machine Vision and Three-Dimensional Imaging Systems for Inspection and Metrology, Proc. SPIE 4189, 2001, pp. 41–52.