

ISA Transactions 41 (2002) 3–11

Ball stud inspection system using machine vision

Dongik Shin,a,* Changsoo Han,a Young Shik Moonb

aDepartment of Mechanical Engineering, Hanyang University, Sa-dong, Ansan-si, Kyonggi-do, South Korea

bDepartment of Computer Science Engineering, Hanyang University, Sa-dong, Ansan-si, Kyonggi-do, South Korea

(Received 2 August 2000; accepted 15 January 2001)

Abstract

In this paper, a vision-based inspection system that measures the dimensions of a ball stud is designed and implemented. The system acquires silhouetted images by backlighting and extracts the outlines of the nearly dichotomized images with subpixel accuracy. The sets of boundary data are modeled with reasonable geometric primitives and the parameters of the models are estimated in a manner that minimizes error. Jig-fixtures and servo systems for the inspection are also contrived. The system rotates an inspected object to recognize the object in space, not on a plane. The system moves the object vertically so that it may take several pictures of different parts of the object, resulting in improvement of the measuring resolution. The performance of the system is evaluated by measurement of the dimensions of a standard ball, a standard cylinder, and a ball stud. © 2002 ISA—The Instrumentation, Systems, and Automation Society.

Keywords: Ball stud; Inspection; Machine vision


1. Introduction

A ball stud is a three-dimensional mechanical part of the ball joint that is used in the steering and suspension systems of an automobile. The Society of Automotive Engineers (SAE) defines some key features of a ball stud and recommends standard dimensions for them [1]. Fig. 1 shows a typical shape of the ball stud. In industry, some of its dimensions, such as ball diameter and shank diameter, are gauged by micrometer or Vernier calipers. To measure more complicated dimensions, such as the distance from the center of the ball to the gauge line, the part must be sent to a measurement laboratory where it is measured by a three-dimensional coordinate measuring machine (CMM) or by special gauging machinery.

*Corresponding author. Tel.: +82-31-400-5247; fax: +82-31-406-6242; E-mail address: [email protected]

0019-0578/2002/$ - see front matter © 2002 ISA—The Instrumentation, Systems, and Automation Society.

These conventional inspection processes lessen the productivity and quality of the ball stud. Automation of the inspection process enables both a reduction in the failure rate of products and the provision of important feedback information to design. In other words, development of an automated inspection system is essential for high quality control, immediate analysis of the production line, and prediction of failure.

Many methodologies for automated dimensional inspection have been proposed and some of them have been commercialized. Among them, machine vision is one that is based on a biological vision system. Since it provides abundant and intuitive information on objects and the environment, it is an active research area for both academia and industry. The application of machine vision to inspection has proved successful to some extent, especially for connectivity testing of printed circuit boards (PCB) and for dimensional gauging of planar bodies [2,3].



Although applications to three-dimensional gauging of mechanical parts are not as successful as the above cases, the potential of machine vision has fueled research in these areas. There are various methods for three-dimensional vision: stereovision, shape from shading, photometric stereo, shape from texture, and structured lighting methods [4,5]. However, applications of these methods to precise dimensional measurements are uncommon.

In this paper, we propose a dimensional measuring system for a ball stud. Automated visual measurement can be seen as composed of two steps: three-dimensional imaging and measurement execution. We describe the imaging system in Section 2 and the measurement algorithms in Section 3. In Section 4, we execute measurements with the implemented system and compare the results with those of a CMM to verify its feasibility. In Section 5, we summarize our study and propose future work.

Fig. 1. Ball stud.


2. Imaging system

In the design of the imaging system, primary emphasis was placed on measuring resolution. The objective measuring resolution of the system is 0.01 mm, which is determined based on the general tolerances of cold-forged mechanical parts. Simplicity in the system structure and measuring process is also taken into consideration. Simplicity is an important factor in automated measurement because it is related to user-friendliness and processing time.

Fig. 2 shows the schematic diagram of the imaging system. The system acquires images through backlighting. Backlighting consists of a light source placed behind an object so that a silhouette of the object is formed. It is used where the outline information of an object and its features are more important than the surface features, which is the case with the present system. The image acquired by this technique is stable with respect to interior and ambient illumination conditions. Ideally, the image acquired by backlighting consists of only two high-contrast intensity levels: one for the object and the other for the background. In the actual world, however, there exists an intensity transition around the boundary and noisy variation inside each region. Locating the boundary under this situation will be dealt with in Section 3. A 7×7 in.² plate illuminator is used to generate an almost-planar light source, and a black-coated housing with a door is equipped as shown in Fig. 3 to eliminate unintended light sources.

Fig. 2. Imaging mechanism.


To align the illuminator with the image plane of the camera, we computed the total sensed intensity as the plate pans or tilts, without any object between the illuminator and the camera, so that we could select the configuration that maximizes the intensity.

The system rotates a ball stud to recognize it in space, not on a plane. In other words, it captures vertical cross sections of the object and synthesizes them to reconstruct the object. The camera used in the system is a high-resolution (2029×2044) CCD camera, which is adopted in order to provide sufficient measuring resolution. One alternative to the high-resolution camera is to magnify the fields of view (FOV) of images, but this makes the system more complex. In addition, large magnification may result in an ill-conditioned situation when the parameters of features are estimated from their boundary points. Another strategy to improve the measuring resolution is to move the object vertically if necessary. The shape of a ball stud favors this scheme: it is an elongated shape. The system takes several pictures of different latitudes of the object and synthesizes them.

Fig. 3. Implemented system.


To optimize the FOV of the image, the camera is moved back and forth and the focus of the lens is adjusted accordingly. The configurable FOV is from 30 to 60 mm or, equivalently, the resolution range is approximately from 0.015 to 0.030 mm/pixel. Fig. 3 shows the implemented system and Table 1 lists the elements of the imaging system and their specifications.
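As a quick check of the quoted range, assuming the 2029-pixel sensor dimension spans the FOV, 30 mm / 2029 pixels ≈ 0.0148 mm/pixel and 60 mm / 2029 pixels ≈ 0.0296 mm/pixel, which is consistent with the stated values.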

3. Algorithms

The goal of the present system is to recover a ball stud from its images and then to measure its dimensions quantitatively. The algorithms accordingly consist of two phases: recognition of a ball stud and quantitative measurement of its dimensions.

The first stage in the recovery of the ball stud is boundary extraction. The set of boundary points is fitted to geometric primitives such as spheres, cylinders, cones, and lines. The primitives themselves and their relationships may generate secondary primitives such as a center axis or corner points. The algebraic representations of the primitives in the pixel coordinate system should be transformed into the world coordinate system to normalize their dimensions. The imaging geometry, or camera model, provides the relationship between the two coordinate systems. The parameters of the camera model have to be determined by camera calibration. The dimensions are scalar quantities, neither equations nor points; extraction of these quantities can be conducted by simple formulas. The measured values can be used to make decisions by referring to the drawing information. Fig. 4 shows these procedures.

3.1. Boundary extraction

The boundary of the object exists where pixel values change abruptly in an image.

Table 1
Specification of elements of the system.

Elements             Specifications
Camera               2029×2044 pixels
Lens                 60 mm focal length
Frame grabber        1 frame/s
Illuminator          7×7 in.² plate
Camera jig/servo     300 mm stroke
Vertical jig/servo   100 mm stroke
Rotation jig/servo   0.01 deg/pulse


When an object has one intensity value and the background has a different intensity value, boundary extraction is a simple process. One of the purposes of backlighting is to produce this simple process. In the real world, however, there are intensity-transient regions around the boundary and readout noise interferes. Fig. 5 presents this real-world situation. Knowing where the ideal boundary is within the transient region is a prerequisite for precise measurement. The boundary extraction used in this study is composed of two phases: extraction of the initial set of boundary points in pixel units and refinement of the set to subpixel accuracy along the normal direction to the boundary.

Fig. 4. Flow of inspection algorithms.


To locate the initial boundary (the digital boundary), the gray level of the boundary is estimated. Based on this value, the boundary can be extracted by a contour-following algorithm such as Papert's Turtle [6]. To determine the optimal threshold (the gray level of the boundary), the histogram of the pixel values has to be examined. Fig. 6 shows examples of an ideal gray-level histogram of a silhouette and an actual one, respectively. The ideal histogram has only two intensity values, but the actual case is contaminated with readout noise and the gradual transition around the boundary, resulting in a distribution of gray levels grouped into two dominant modes. The optimal threshold $T^*$ is determined by minimizing the sum $\sigma^2(T)$ of the variances of each mode [7]:

$$\sigma^2(T) = \sum_{i=0}^{T} \left[\,i - m_1(T)\,\right]^2 h(i) + \sum_{i=T+1}^{255} \left[\,i - m_2(T)\,\right]^2 h(i), \qquad (1)$$

where $h(i)$ is the number of pixels of gray level $i$, and $m_1(T)$ and $m_2(T)$ are the average gray levels of the object mode and the background mode, respectively, which are computed as follows:

$$m_1(T) = \frac{\displaystyle\sum_{i=0}^{T} i\,h(i)}{\displaystyle\sum_{i=0}^{T} h(i)}, \qquad
m_2(T) = \frac{\displaystyle\sum_{i=T+1}^{255} i\,h(i)}{\displaystyle\sum_{i=T+1}^{255} h(i)}. \qquad (2)$$

Fig. 5. Phenomenon around boundary of a silhouette image.


Fig. 6. Typical histogram of an image.


Computing $\sigma^2(T)$ for every gray level, we can choose the optimum $T^*$. Note that for the ideal image whose histogram is shown in Fig. 6(a), the variance $\sigma^2(T)$ is equal to 0 whenever $T$ is between the two intensity levels.
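For illustration, the threshold selection of Eqs. (1) and (2) can be written in a few lines. The following minimal NumPy sketch (illustrative only, not the code used in the implemented system) scans every candidate $T$ and keeps the one that minimizes the summed within-mode variance:

```python
import numpy as np

def optimal_threshold(hist):
    """Return T* minimizing sigma^2(T) of Eq. (1); hist[i] = number of pixels with gray level i."""
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(256)
    best_T, best_s2 = 0, np.inf
    for T in range(255):
        lo, hi = hist[:T + 1], hist[T + 1:]
        if lo.sum() == 0 or hi.sum() == 0:
            continue                                   # skip splits with an empty mode
        m1 = (levels[:T + 1] * lo).sum() / lo.sum()    # object-mode mean, Eq. (2)
        m2 = (levels[T + 1:] * hi).sum() / hi.sum()    # background-mode mean, Eq. (2)
        s2 = ((levels[:T + 1] - m1) ** 2 * lo).sum() \
           + ((levels[T + 1:] - m2) ** 2 * hi).sum()   # Eq. (1)
        if s2 < best_s2:
            best_T, best_s2 = T, s2
    return best_T
```

For the ideal histogram of Fig. 6(a), this loop returns a $T$ lying between the two populated levels, since $\sigma^2(T) = 0$ there.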

Based on the threshold, we can generate a set of boundary points by Papert's Turtle. This algorithm is initiated by putting the turtle at a boundary point. If the current pixel is on the object, the turtle turns left and advances one pixel; if the current pixel is on the background, it turns right and advances one pixel. The algorithm terminates when the turtle returns to the start point or goes beyond the region of interest (ROI). The procedure is illustrated in Fig. 7.
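A minimal sketch of this contour-following rule is given below (illustrative Python; `obj` is assumed to be a boolean array that is True on object pixels inside the ROI, and `start` a known boundary pixel):

```python
def papert_turtle(obj, start, max_steps=100000):
    """Trace a digital boundary with the left/right turning rule described above.

    obj   -- 2D boolean array, True on the object (within the ROI).
    start -- (row, col) of a pixel on the boundary.
    Returns the list of visited object pixels along the boundary.
    """
    # Headings as (row, col) steps: 0 = up, 1 = right, 2 = down, 3 = left.
    steps = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    pos, heading = start, 0
    boundary = []
    for _ in range(max_steps):
        r, c = pos
        if not (0 <= r < obj.shape[0] and 0 <= c < obj.shape[1]):
            break                            # turtle left the region of interest
        if obj[r, c]:
            boundary.append(pos)
            heading = (heading - 1) % 4      # on the object: turn left
        else:
            heading = (heading + 1) % 4      # on the background: turn right
        dr, dc = steps[heading]
        pos = (r + dr, c + dc)
        if pos == start:
            break                            # turtle returned to the start point
    return boundary
```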

The previous algorithm for digital boundary extraction is best suited to a binary image.

Fig. 7. Papert’s Turtle.


As mentioned before, however, our image is not binary and there is a transient region around the boundary. Each digital boundary point $(X,Y)$ is therefore refined into a subpixel point $(x,y)$ on a line segment $L$ that is normal to the digital boundary at $(X,Y)$ and passes through it [8,9]. The sampled gray-level profile $h_k$ ($k = -N,\ldots,N$) along the line segment $L$ of length $2N+1$ is used to estimate the parameters of the cross-boundary intensity model. The model of the profile may be a step, a saturated ramp, or a sigmoid; the step model, as in Fig. 5(b), is adopted in this study. The direction of $L$ can be determined by the approximated gradient vector at the digital boundary point:

$$\begin{aligned}
g_x(X,Y) &= h(X+1,Y-1) + 2h(X+1,Y) + h(X+1,Y+1) \\
&\quad - h(X-1,Y-1) - 2h(X-1,Y) - h(X-1,Y+1), \\
g_y(X,Y) &= h(X-1,Y+1) + 2h(X,Y+1) + h(X+1,Y+1) \\
&\quad - h(X-1,Y-1) - 2h(X,Y-1) - h(X+1,Y-1).
\end{aligned} \qquad (3)$$
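As a direct transcription of Eq. (3), the unit direction of the search segment $L$ at a digital boundary point can be computed as in the following sketch (illustrative Python; the image array is indexed as `h[X, Y]` to mirror the notation above):

```python
import numpy as np

def boundary_normal(img, X, Y):
    """Approximate gradient of Eq. (3) at the digital boundary point (X, Y).

    img -- 2D array of gray levels; (X, Y) must not lie on the image border.
    Returns the unit vector giving the direction of the search segment L.
    """
    h = img.astype(float)
    gx = (h[X + 1, Y - 1] + 2 * h[X + 1, Y] + h[X + 1, Y + 1]
          - h[X - 1, Y - 1] - 2 * h[X - 1, Y] - h[X - 1, Y + 1])
    gy = (h[X - 1, Y + 1] + 2 * h[X, Y + 1] + h[X + 1, Y + 1]
          - h[X - 1, Y - 1] - 2 * h[X, Y - 1] - h[X + 1, Y - 1])
    g = np.array([gx, gy])
    norm = np.hypot(gx, gy)
    return g / norm if norm > 0 else g       # undefined where the gradient vanishes
```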

The ideal boundary model along the search direction $L$ can be represented as

$$h(u) = \begin{cases} h_O, & u < \bar{u}, \\ h_B, & u > \bar{u}, \end{cases} \qquad (4)$$

where


$u$: signed distance from the digital boundary point to the sampled point on $L$,
$\bar{u}$: signed distance from the digital boundary point to the subpixel boundary point,
$h_O$: the gray level of the object,
$h_B$: the gray level of the background.

Estimation of the parameters of the model is carried out such that the first, second, and third moments of the boundary model and of the boundary data are equal:

$$\int_L h(u)\,du = \sum_i h_i \equiv m_1, \qquad
\int_L h^2(u)\,du = \sum_i h_i^2 \equiv m_2, \qquad
\int_L h^3(u)\,du = \sum_i h_i^3 \equiv m_3. \qquad (5a)$$

The integrals on the left sides of the equations are easily solvable:

$$\begin{aligned}
\int_L h(u)\,du &= \int_{-N-1/2}^{\bar{u}} h_O\,du + \int_{\bar{u}}^{N+1/2} h_B\,du
= (h_B + h_O)\!\left(N + \tfrac{1}{2}\right) - (h_B - h_O)\,\bar{u} = m_1, \\
\int_L h^2(u)\,du &= (h_B^2 + h_O^2)\!\left(N + \tfrac{1}{2}\right) - (h_B^2 - h_O^2)\,\bar{u} = m_2, \\
\int_L h^3(u)\,du &= (h_B^3 + h_O^3)\!\left(N + \tfrac{1}{2}\right) - (h_B^3 - h_O^3)\,\bar{u} = m_3.
\end{aligned} \qquad (5b)$$

There are two solutions of the simultaneous equations, and $\bar{u}$ of the feasible ($h_B > h_O$) solution is given as follows:

$$\bar{u} = \frac{2m_1^3 - 3(1+2N)\,m_1 m_2 + (1+2N)^2 m_3}
{2\sqrt{4m_1^3 m_3 - 3m_1^2 m_2^2 + (1+2N)\left(4m_2^3 - 6m_1 m_2 m_3\right) + (1+2N)^2 m_3^2}}. \qquad (6)$$
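Given a profile $h_k$ sampled along $L$, Eq. (6) yields the subpixel offset directly from the three moments. A minimal sketch (illustrative Python, not the authors' code) is:

```python
import numpy as np

def subpixel_offset(profile):
    """Signed offset of the step edge along L, from the moment match of Eqs. (5) and (6).

    profile -- gray levels h_k sampled at k = -N, ..., N along the boundary normal
               (length 2N + 1).
    """
    h = np.asarray(profile, dtype=float)
    n = len(h)                                 # n = 2N + 1
    m1, m2, m3 = h.sum(), (h ** 2).sum(), (h ** 3).sum()
    num = 2 * m1 ** 3 - 3 * n * m1 * m2 + n ** 2 * m3
    rad = (4 * m1 ** 3 * m3 - 3 * m1 ** 2 * m2 ** 2
           + n * (4 * m2 ** 3 - 6 * m1 * m2 * m3) + n ** 2 * m3 ** 2)
    return num / (2.0 * np.sqrt(rad))          # Eq. (6), feasible solution with h_B > h_O
```

The refined subpixel boundary point is then the digital point $(X, Y)$ displaced by this offset along the unit normal obtained from Eq. (3).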


3.2. Feature extraction

The boundary as a simple collection of points is insufficient to process; instead, it should be represented through a geometric model. To accomplish this, the boundary points are fitted to proper geometric primitives. The features that may appear in the cross section of the ball stud are the line and the circle. In this subsection, the process of estimating the parameters of these features is explained.

The equation of a line is

$$ax + by + c = 0, \qquad a^2 + b^2 = 1. \qquad (7)$$

The constraint $a^2 + b^2 = 1$ is adopted rather than $b = 1$, with which a vertical line cannot be represented and an ill-conditioned situation may arise for an almost vertical line. With $n$ boundary points, the data-fitting problem can be transformed into a constrained optimization problem:

$$\min \sum_{i=1}^{n} (a x_i + b y_i + c)^2 \qquad (8)$$

subject to

$$1 - a^2 - b^2 = 0 \qquad (9)$$

or, introducing a Lagrange multiplier $\lambda$, Eqs. (8) and (9) are combined into

$$\min R = \sum_{i=1}^{n} (a x_i + b y_i + c)^2 + \lambda(1 - a^2 - b^2). \qquad (10)$$

The minimum $R^*$ of $R$ is the minimal eigenvalue of

$$\begin{bmatrix} \sum x_i^2 & \sum x_i y_i \\ \sum x_i y_i & \sum y_i^2 \end{bmatrix}
- \frac{1}{n}
\begin{bmatrix} \left(\sum x_i\right)^2 & \sum x_i \sum y_i \\ \sum x_i \sum y_i & \left(\sum y_i\right)^2 \end{bmatrix} \qquad (11)$$

and the best estimate $[\hat{a}\ \hat{b}]^T$ for $a$ and $b$ is the corresponding normalized eigenvector [10]. The estimate $\hat{c}$ for $c$ can be computed as follows [10]:

$$\hat{c} = -\frac{1}{n} \sum_{i=1}^{n} \left(\hat{a} x_i + \hat{b} y_i\right). \qquad (12)$$
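The eigenvector solution of Eqs. (10)-(12) amounts to a small eigenproblem, since the matrix of Eq. (11) is simply the scatter matrix of the mean-centered points. A minimal sketch (illustrative Python, not the authors' implementation) is:

```python
import numpy as np

def fit_line(points):
    """Fit ax + by + c = 0 with a^2 + b^2 = 1 to boundary points, Eqs. (7)-(12).

    points -- (n, 2) array of (x, y) boundary coordinates.
    Returns (a, b, c).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    M = centered.T @ centered                  # matrix of Eq. (11)
    eigvals, eigvecs = np.linalg.eigh(M)
    a, b = eigvecs[:, 0]                       # eigenvector of the smallest eigenvalue
    c = -np.mean(pts @ np.array([a, b]))       # Eq. (12)
    return a, b, c
```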

The equation of a circle is

$$x^2 + y^2 + ax + by + c = 0. \qquad (13)$$


If we have $n$ boundary points, we can establish a linear system of equations

$$\begin{bmatrix} x_i & y_i & 1 \end{bmatrix}
\begin{bmatrix} a \\ b \\ c \end{bmatrix}
= \begin{bmatrix} -x_i^2 - y_i^2 \end{bmatrix}, \qquad (14)$$

or

$$A\mathbf{u} = \mathbf{b}. \qquad (15)$$

The least-squares solution $\mathbf{u}$ of the equation is

$$\mathbf{u} = (A^T A)^{-1} A^T \mathbf{b}. \qquad (16)$$
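The circle fit of Eqs. (13)-(16) is an ordinary least-squares problem. The sketch below (illustrative Python) also converts the algebraic parameters $a$, $b$, $c$ into a center and radius by completing the square:

```python
import numpy as np

def fit_circle(points):
    """Least-squares fit of x^2 + y^2 + ax + by + c = 0 to boundary points, Eqs. (13)-(16).

    points -- (n, 2) array of (x, y) coordinates.
    Returns (center_x, center_y, radius).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])        # rows [x_i, y_i, 1], Eq. (14)
    b = -(x ** 2 + y ** 2)
    a_, b_, c_ = np.linalg.lstsq(A, b, rcond=None)[0]   # u = (A^T A)^{-1} A^T b, Eq. (16)
    cx, cy = -a_ / 2.0, -b_ / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c_)
    return cx, cy, radius
```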

Fig. 8. Imaging geometry with radial lens distortion.

3.3. Transformation to a real coordinate system

The relationship between the pixel coordinate system and the world coordinate system is based on the imaging geometry, or camera model. At first, we tried to model our imaging system based on Tsai [11]; that is, the first-order radial lens distortion coefficient was considered. Fig. 8 shows the imaging geometry with radial lens distortion. A point $P(x,y,z)$ in the world coordinate system $O\text{-}xyz$ is imaged into $P_u(X_u, Y_u)$ in the image plane if there is no lens distortion. Under radial lens distortion, the image is formed at $P_d(X_d, Y_d)$, and the relationship can be represented as follows:

$$X_d + k_1 X_d (X_d^2 + Y_d^2) = X_u, \qquad
Y_d + k_1 Y_d (X_d^2 + Y_d^2) = Y_u. \qquad (17)$$

When we calibrated our imaging system, we found that $k_1$ was of the order of $10^{-6}$. Since the pixel-to-pixel distance of the CCD array is 9 μm, the distortion ratio for a corner (the worst case) of the FOV is about $10^{-10}$. With this observation, we have treated our imaging system as a pinhole camera. Fig. 9 depicts the coordinate systems of the system. The object plane, where the outline of the object is formed, is assumed to be parallel to the image plane, and the object plane is invariant to rotation and vertical movement of the object. We also assume a perfectly planar light source.

Fig. 9. Coordinate systems of the system.


The relationship between a point $(X, Y)$ in the pixel coordinate system and the corresponding point $(x, y)$ in the real world is

$$\begin{bmatrix} x \\ y \end{bmatrix}
= \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix}
\begin{bmatrix} X \\ Y \end{bmatrix}
+ \begin{bmatrix} 0 \\ t \end{bmatrix}, \qquad (18)$$

where $s$ is the scale of the transformation, which is determined by camera calibration. Calibration is achieved by measuring the diameter of a standard ball. $t$ is the vertical translation of the image; the present system obtains this value from the servo without any calibration, because the general tolerance for the length of an object is relatively larger than for the other dimensions.
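Under the pinhole assumption above, Eq. (18) reduces calibration to a single scale factor. A minimal sketch (illustrative Python; the pixel diameter and servo offset used below are hypothetical values, not measurements from the paper) is:

```python
def calibrate_scale(measured_diameter_px, standard_diameter_mm=20.0):
    """Scale s of Eq. (18), from the known diameter of the standard calibration ball."""
    return standard_diameter_mm / measured_diameter_px

def pixel_to_world(X, Y, s, t):
    """Map a pixel-coordinate point (X, Y) to real-world (x, y) via Eq. (18)."""
    return s * X, s * Y + t

s = calibrate_scale(1350.0)                          # hypothetical: ball imaged 1350 pixels across
x, y = pixel_to_world(1014.5, 1022.0, s, t=25.0)     # t = 25.0 mm: hypothetical servo offset
```

The resulting $s \approx 0.0148$ mm/pixel falls in the resolution range quoted in Section 2.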

4. Experiments

The performance of the system is evaluated by the measurement of a standard ball, a standard cylinder, and a ball stud. Even though the standard ball is used for calibration of the camera, it is valuable for evaluating the precision of the system. A standard cylinder, which is of a different shape from the calibration standard, provides verification of consistency across object shapes. Table 2 shows the measurements of the standard ball and the standard cylinder.

We then execute the measurement for a ball stud. The dimensions of the ball stud that we have measured are

(A) the diameter of the ball,
(B) the diameter of the cylindrical shank,
(C) the distance from the gauge line to the center of the ball,
(D) the distance from the center of the ball to the top end, and
(E) the eccentricity of the center of the ball relative to the taper center axis,

as shown in Fig. 10.

Table 2
Measurements of standard objects (values in mm).

                              Nominal value  Tolerance  Measured value  Standard deviation
Standard ball (diameter)      20             ±0.001     20.000          0.0034
Standard cylinder (diameter)  20             ±0.001     20.007          0.0028


The measurements for the ball stud by the system are compared to those by the CMM. Table 3 shows the measurements of the ball stud. Blanks in the last column of the table mean that the dimension cannot be measured by the CMM. All the measurements were executed in a real industrial environment.

5. Conclusion

In this paper, an automated visual inspection system for a ball stud is designed and implemented.

Fig. 10. Measured dimensions of ball stud.

Table 3
Measurements of a ball stud (values in mm).

     Nominal value  Tolerance       Measured value  CMM
A    35             -0.03 / +0.05   34.998          34.9706
B    23.15           0 / +0.01      23.195
C    29.14          +0.1 / +0.3     29.280
D    16.3            0 / +0.3       16.382          15.9766
E    0               0 / +0.04      0.028           0.0335


This system acquires the outline information of the ball stud using backlighting, rotates the object to achieve three-dimensional inspection, and moves the object vertically to acquire sufficient resolution. The system has a precision of 0.01 mm. The elapsed time to inspect a ball stud with two images and 36 cross sections is 100 s.

References

[1] SAE Handbook, Vol. 2: Parts & Components. SAE Inc., 1988, pp. 14.01–14.18.

[2] Moganti, M., Ercal, F., Dagli, C., and Tsunekawa, S., Automatic PCB inspection algorithms: a survey. Comput. Vis. Image Underst. 63 (2), 287–313 (1996).

[3] Davies, E., Machine Vision: Theory, Algorithms, Practicalities. Academic Press, New York, 1997, pp. 471–502.

[4] Davies, E., Machine Vision: Theory, Algorithms, Practicalities. Academic Press, New York, 1997, pp. 373–416.

[5] Sansoni, G., Lazzari, S., Peli, S., and Docchio, F., 3D imager for dimensional gauging of industrial workpieces: state of the art of the development of a robust and versatile system. In: Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, IEEE, 1997, pp. 19–26.

[6] Papert, S., Uses of Technology to Enhance Education. Technical Report 298, AI Lab, MIT, 1973.

[7] Otsu, N., A threshold selection method from gray level histograms. IEEE Trans. Syst. Man Cybern. 9 (1), 62–66 (1979).

[8] Tabatabai, A. and Mitchell, O., Edge location to subpixel values in digital imagery. IEEE Trans. Pattern Anal. Mach. Intell. 6 (2), 188–201 (1984).

[9] Huertas, A. and Medioni, G., Detection of intensity changes with subpixel accuracy using Laplacian-Gaussian masks. IEEE Trans. Pattern Anal. Mach. Intell. 8 (5), 651–664 (1986).

[10] Faugeras, O., Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, 1993, pp. 623–624.

[11] Tsai, R., A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Rob. Autom. RA-3 (4), 323–344 (1987).