
IIIS Proc. of SCI2000, Vol.V, pp.317–322 (2000.7)

An Automatic Camera Calibration Method

with Image Registration Technique

Toru TAMAKI†‡∗   Tsuyoshi YAMAMURA♭   Noboru OHNISHI†‡

†Dept. of Info. Eng., Nagoya Univ., Nagoya 464-8603 Japan
♭Faculty of Info. Sci. Tec., Aichi Prefectural Univ., Aichi 480-1198 Japan
‡Bio-Mimetic Control Research Center, RIKEN, Nagoya 463-0003 Japan

ABSTRACT

We propose a novel automated camera calibration method that obtains the internal camera parameters, enabling us to compensate for the distortion of an image taken by a camera with a zoom lens. The proposed method is based on image registration. First, a calibration pattern is transformed onto the distorted image of the pattern using an affine transformation. Then the registration of the two images is performed using nonlinear optimization with the Gauss-Newton method, which minimizes the residuals of pixel values between the two images. Finally, the distortion parameters are estimated to minimize the residuals that remain after the second step. The experimental results show the usefulness of the proposed method.

Keywords: calibration, lens distortion, nonlinear optimization, Gauss-Newton method, image registration

1. INTRODUCTION

Calibrating a camera and compensating for lens distortion are important processes in computer vision. Although self-calibration has been studied recently, many works (for example, [1, 2]) formulate their problems without considering distortion for the sake of simplicity, so pre-calibration of the internal camera parameters is required.

Some calibration codes are available via the internet (e.g., Tsai's method [3] is available from [4]). However, such conventional techniques require many correspondences between points on an image and known three-dimensional coordinates (on a plane, or on a structure such as a cube or a house), from which the transformation of the points is estimated.

In such methods, the correspondences must be established by a human operator, which is unreliable because of manual correspondence errors. Moreover, it takes much time and patience, and it is too hard to measure the change of the distortion parameters as the camera zooms in and out.

∗e-mail: [email protected]

An alternative procedure is to detect markers. This can be done by template matching, possibly to sub-pixel accuracy. However, there is another correspondence problem: which marker on the image corresponds to which point known in advance in space. This problem cannot be neglected as the number of markers increases to improve the accuracy of the estimation, and if it is to be avoided, the number of points used for correspondence is limited.

In this paper, we propose a new calibration method that compensates for the distortion of an image due to lens zooming. The proposed method establishes the correspondences automatically, and more precise estimation than marker detection is expected because the method uses not just a few marker points but all points of the image. Our method is based on an image registration technique used in the area of motion analysis, and consists of the following three stages: affine transformation, plane projective transformation, and lens distortion recovery.

2. THREE ESTIMATION STEPS WITH REGISTRATION

The basic idea is that calibration needs point-to-point correspondences and registration can supply them. The proposed method establishes a correspondence between an ideal calibration pattern and a distorted image of the printed pattern taken by a camera. Since the size of the pattern printed on paper is measured easily, the three-dimensional coordinates of each pixel in the pattern can also be determined easily. Other features of our method are that any image can be used as the calibration pattern, and that once the parameters are estimated they can be reused as long as the lens zoom does not change.

Our method consists of the following three procedures. The first step is to roughly transform the pattern into the image, represented by affine and translation parameters. Then the precise parameters of the transformation of a plane under perspective projection are estimated with nonlinear optimization. Finally, the distortion parameters are estimated to minimize the residuals that remain after the second step due to the lens distortion.

3. AFFINE TRANSFORM WITH DETECTED MARKERS

First, a pattern image I_1 is created. It must have three color (r, g, b) markers with coordinates m_r, m_g, m_b at its corners, and it is printed in colors chosen to make the markers easy to detect.

Then the camera to be calibrated takes an image I_2 of the printed pattern. The coordinates of the markers m′_r, m′_g, m′_b in the image I_2 are detected by thresholding or template matching.

What is calculated here is the set of parameters of the transformation from the pattern I_1 to the pattern as it appears in the image I_2. This transformation is represented by six parameters θ^a = (θ^a_1, …, θ^a_6)^T; the first four are affine parameters and the last two are a translation.

Let p = (x, y)^T be a point on I_1, and p + a(p; θ^a) the point on I_2 corresponding to p, where

a(p; θ^a) = \begin{pmatrix} x & y & 0 & 0 & 1 & 0 \\ 0 & 0 & x & y & 0 & 1 \end{pmatrix} θ^a    (1)

We solve the following system of linear equations

m′_i = m_i + a(m_i; θ^a),  i = r, g, b    (2)

to obtain the parameters θ^a.
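For concreteness, the six parameters can be recovered by stacking Eq. (2) for the three markers into a 6×6 linear system. The sketch below is our illustration (numpy; the function names are ours, not the paper's), assuming the markers are given as (x, y) coordinate pairs:

```python
import numpy as np

def affine_row(p):
    # Build the 2x6 coefficient matrix of Eq. (1) for a point p = (x, y).
    x, y = p
    return np.array([[x, y, 0, 0, 1, 0],
                     [0, 0, x, y, 0, 1]], dtype=float)

def solve_affine(markers, markers_img):
    # Stack Eq. (2) for the three markers: A @ theta_a = m' - m.
    A = np.vstack([affine_row(m) for m in markers])
    b = np.concatenate([mi_img - mi for mi, mi_img in zip(markers, markers_img)])
    return np.linalg.solve(A, b)
```

Three non-collinear markers (e.g., three corners of the pattern) make the stacked system uniquely solvable.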

4. IMAGE REGISTRATION; PERSPECTIVE FITTING

The first step described above uses only three corresponding points for the affine transformation. Next we make precise correspondences for every point, that is, image registration, using a plane projective transformation.

Modeling and formulation

Now the task is to minimize the residuals of intensities between p_i in I_1 and the corresponding point p_i + u(p_i; θ^u) in I_2:

r_i = I_1(p_i) − I_2(p_i + u(p_i; θ^u))    (3)

where

u(p; θ^u) = \begin{pmatrix} x & y & 0 & 0 & 1 & 0 & x^2 & xy \\ 0 & 0 & x & y & 0 & 1 & xy & y^2 \end{pmatrix} θ^u = M_u θ^u    (4)

and the function to be minimized is

min_{θ^u} Σ_i ρ(r_i),  ρ(r_i) = r_i^2    (5)

Here u(·) is the displacement of a point on a plane when the view changes from one to the other under perspective projection. It is represented by eight parameters θ^u = (θ^u_1, …, θ^u_8)^T; this motion model of a plane under perspective projection is often used in motion analysis [5].
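As a concrete illustration, the motion model of Eq. (4) is just a matrix-vector product per point. A minimal numpy sketch (our naming, not the paper's):

```python
import numpy as np

def Mu(p):
    # The 2x8 matrix of Eq. (4) for a point p = (x, y).
    x, y = p
    return np.array([[x, y, 0, 0, 1, 0, x * x, x * y],
                     [0, 0, x, y, 0, 1, x * y, y * y]], dtype=float)

def u(p, theta_u):
    # Displacement u(p; theta_u) = M_u theta_u of Eq. (4).
    return Mu(p) @ theta_u
```

With only the two translation entries (θ^u_5, θ^u_6) set, u reduces to a constant shift; the last two entries add the quadratic terms that make the model projective rather than affine.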

Minimization method

To estimate the parameters θ^u, the function (5) is minimized by the Gauss-Newton method, a well-known nonlinear optimization technique. The parameters are updated by the following rule:

θ^u ← θ^u + δθ^u    (6)

We use the affine parameters θ^a obtained in the first stage as the initial values of the first six elements of θ^u; the last two elements of θ^u are initialized to 0.

According to [6], the descent direction δθ^u is calculated as follows:¹

δθ^u = −(J D J^T)^{−1} J D r    (7)

J = J(θ^u) = ∂r/∂θ^u = [∂r_i/∂θ^u_j]    (8)

D = diag[ρ̇(r_i)/r_i]    (9)

ρ̇(r_i) = ∂ρ(r)/∂r |_{r=r_i}    (10)

This is the same as the least-squares formulation, that is, the system of linear equations [5] written as

Σ_{l,i} (ρ̇(r_i)/r_i)(∂r_i/∂θ^u_k)(∂r_i/∂θ^u_l) δθ^u_l = −Σ_i (ρ̇(r_i)/r_i) r_i (∂r_i/∂θ^u_k)    (11)

for k = 1, …, 8. The partial derivatives are transformed as below using the chain rule:

∂r/∂θ^u = (∂u/∂θ^u)(∂r/∂u) = −M_u^T ∇I_2(p + u(p))    (12)

The calculation of δθ^u in (6) is repeated until it converges. At each iteration, the parameters estimated at the previous iteration are used for the calculation of u(p). When the iteration stops, we write the estimated parameters as θ̂^u.
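To make the update rule concrete, the sketch below applies Eqs. (6)–(10) to a toy one-parameter least-squares problem (fitting y = exp(θx) to exact data). This is our illustration of the Gauss-Newton machinery, not the paper's implementation:

```python
import numpy as np

# Synthetic data from the model y = exp(theta * x).
x = np.linspace(0.0, 2.0, 20)
theta_true = 0.5
y = np.exp(theta_true * x)

theta = np.array([0.1])                       # initial guess
for _ in range(30):
    r = y - np.exp(theta[0] * x)              # residuals (cf. Eq. (3))
    J = (-x * np.exp(theta[0] * x))[None, :]  # J = dr/dtheta, Eq. (8)
    D = 2.0 * np.eye(x.size)                  # D = diag(rho'(r_i)/r_i) = 2I for rho(r) = r^2, Eq. (9)
    delta = -np.linalg.solve(J @ D @ J.T, J @ D @ r)  # Eq. (7)
    theta = theta + delta                     # Eq. (6)
```

For ρ(r) = r^2 the factor 2 in D cancels out of Eq. (7), which then reduces to the normal equations of Eq. (11); D only matters when a robust ρ is substituted.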

5. IMAGE REGISTRATION; DISTORTION FITTING

At the end of the previous step, the image registration between the pattern image and the pattern in the distorted image is finished except for the effect of the lens distortion.

¹The reason for introducing D is to make it easy to use a robust function as ρ, instead of least squares, when the pattern in the image is partially occluded or out of the scene.


Modeling of distortion

The relationship between undistorted and distorted coordinates in an image is usually modeled by the following five internal camera parameters [7]: the distortion parameters κ_1 and κ_2, the coordinates of the image center (c_x, c_y)^T, and the scale s_x, which is the ratio of pixel width to height. We write these parameters as θ^d = (κ_1, κ_2, c_x, c_y, s_x)^T.

The distortion is represented in a coordinate system with its origin at (c_x, c_y)^T, while the system used in the previous section has its origin at the top-left corner. Therefore, we introduce additional notation below.

Let p_u = (x_u, y_u)^T and p_d = (x_d, y_d)^T be points in the undistorted and distorted image respectively, both with origins at the top-left corner of their images. And let (ξ_u, η_u)^T and (ξ_d, η_d)^T be points in the undistorted and distorted image respectively, with the origin at the image center (c_x, c_y)^T. These are related as follows:

(ξ_u, η_u)^T = (x_u, y_u)^T − (c_x, c_y)^T    (13)

(ξ_d, η_d)^T = (ξ_u, η_u)^T − (κ_1 R^2 + κ_2 R^4)(ξ_d, η_d)^T    (14)

(x_d, y_d)^T = (s_x ξ_d, η_d)^T + (c_x, c_y)^T    (15)

where R = \sqrt{ξ_d^2 + η_d^2}. As shown above, ξ_u is explicitly written as a function of ξ_d, while ξ_d is not. In order to obtain ξ_d from ξ_u, we solve the following equation iteratively [7]:

(ξ_{d_k}, η_{d_k})^T = (ξ_u, η_u)^T / (1 + κ_1 R_{k−1}^2 + κ_2 R_{k−1}^4)    (16)

where R_k = \sqrt{ξ_{d_k}^2 + η_{d_k}^2}, starting with R_0 = \sqrt{ξ_u^2 + η_u^2}. The iteration stops at k = 8, which is a sufficient approximation [7].
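The fixed-point iteration of Eq. (16) is short to implement. A sketch (numpy; our naming), working in the centered coordinates of Eqs. (13)–(15):

```python
import numpy as np

def distort_centered(xi_u, eta_u, k1, k2, iters=8):
    # Solve Eq. (14) for (xi_d, eta_d) by the iteration of Eq. (16),
    # starting from R_0 = sqrt(xi_u^2 + eta_u^2).
    xi_d, eta_d = xi_u, eta_u
    for _ in range(iters):
        R2 = xi_d**2 + eta_d**2
        denom = 1.0 + k1 * R2 + k2 * R2**2
        xi_d, eta_d = xi_u / denom, eta_u / denom
    return xi_d, eta_d
```

For realistic κ values the correction term is small, so the iteration contracts quickly and 8 steps give essentially the exact fixed point.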

Using the relations above, we obtain two functions between p_u and p_d in the top-left-origin system:

p_d = d(p_u; θ^d)    (17)

p_u = f(p_d; θ^d) = \begin{pmatrix} \frac{x_d − c_x}{s_x}(1 + κ_1 R'^2 + κ_2 R'^4) + c_x \\ (y_d − c_y)(1 + κ_1 R'^2 + κ_2 R'^4) + c_y \end{pmatrix}    (18)

where

R' = \sqrt{\left(\frac{x_d − c_x}{s_x}\right)^2 + (y_d − c_y)^2}    (19)

f and d are inverses of each other, but d is not a closed-form function of p_u because d corresponds to the iterative procedure (16).
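Unlike d, the undistortion f of Eq. (18) is closed-form and direct to implement. A sketch (numpy; our naming), with θ^d packed as (κ_1, κ_2, c_x, c_y, s_x):

```python
import numpy as np

def f(pd, theta_d):
    # Undistortion f(p_d; theta_d) of Eq. (18), top-left-origin coordinates.
    k1, k2, cx, cy, sx = theta_d
    xd, yd = pd
    xn = (xd - cx) / sx            # (x_d - c_x)/s_x
    yn = yd - cy                   # y_d - c_y
    R2 = xn**2 + yn**2             # R'^2 of Eq. (19)
    scale = 1.0 + k1 * R2 + k2 * R2**2
    return np.array([xn * scale + cx, yn * scale + cy])
```

With κ_1 = κ_2 = 0 and s_x = 1, f is the identity; a positive κ_1 pushes points away from the image center.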

We can now write the transformations between images using Eqs. (17) and (18). Let I^u_1 be the image of the pattern transformed by applying θ̂^u to I_1, and I^{ud}_1 be the image of the pattern transformed by applying θ^d to I^u_1. That is,

I_1(p) = I^u_1(p + u(p; θ̂^u))    (20)

I^u_1(p) = I^{ud}_1(d(p))    (21)

I^u_1(f(p)) = I^{ud}_1(p)    (22)

Figure 1: Relations between each transformation. [Diagram: I_1 maps to I^u_1 by p + u(p; θ̂^u); I^u_1 maps to I^{ud}_1 by d(p), and back by f(p); I^{ud}_1 is registered against I_2.]

Fig. 1 shows these relationships.

Minimization with inverse registration

If the transformation from I_1 to I^{ud}_1 were closed-form, the same strategy could be used to estimate θ^d. However, since d is not an explicit function, the registration between I_1 and I^{ud}_1 cannot estimate the parameters directly.

Now consider an inverse registration. As seen in Fig. 1, we intend to match I_2 with I^{ud}_1, that is, to minimize the residuals of intensities between the two images:

r_i = I^{ud}_1(p_i) − I_2(p_i)    (23)

This can be rewritten as follows using Eq. (22):

r_i = I_2(p_i) − I^u_1(f(p_i; θ^d))    (24)

Hence the estimation method becomes the same as in the previous step. The minimization of the following function is done with the Gauss-Newton method:

min_{θ^d} Σ_{i∈Ω} ρ(r_i)    (25)

where Ω = {i ; p_i ∈ I_2, ∃p ∈ I_1, f(p_i) = p + u(p)}, which means that the minimization should use points in I_2 within the region corresponding to the pattern I_1.

The system of equations to be solved is of the same form as Eq. (11):

Σ_{l, i∈Ω} (ρ̇(r_i)/r_i)(∂r_i/∂θ^d_k)(∂r_i/∂θ^d_l) δθ^d_l = −Σ_{i∈Ω} (ρ̇(r_i)/r_i) r_i (∂r_i/∂θ^d_k)    (26)

and the derivatives in Eq. (26) are as follows:

∂r/∂θ^d = (∂f/∂θ^d)(∂r/∂f) = (∂f/∂θ^d)(−∇I^u_1(f(p)))    (27)

According to Eq. (18), the Jacobian is

∂f(p_d)/∂θ^d = \begin{pmatrix} R'^2 \frac{x_d − c_x}{s_x} & R'^2 (y_d − c_y) \\ R'^4 \frac{x_d − c_x}{s_x} & R'^4 (y_d − c_y) \\ ∂x_u/∂c_x & ∂y_u/∂c_x \\ ∂x_u/∂c_y & ∂y_u/∂c_y \\ ∂x_u/∂s_x & ∂y_u/∂s_x \end{pmatrix}    (28)


where

∂x_u/∂c_x = 1 − \frac{1}{s_x}(1 + κ_1 R'^2 + κ_2 R'^4) − 2(κ_1 + 2κ_2 R'^2)\frac{(x_d − c_x)^2}{s_x^3}    (29)

∂y_u/∂c_x = −2(κ_1 + 2κ_2 R'^2)\frac{x_d − c_x}{s_x^2}(y_d − c_y)    (30)

∂x_u/∂c_y = −2(κ_1 + 2κ_2 R'^2)\frac{x_d − c_x}{s_x}(y_d − c_y)    (31)

∂y_u/∂c_y = 1 − (1 + κ_1 R'^2 + κ_2 R'^4) − 2(y_d − c_y)^2(κ_1 + 2κ_2 R'^2)    (32)

∂x_u/∂s_x = −\frac{x_d − c_x}{s_x^2}(1 + κ_1 R'^2 + κ_2 R'^4) − 2(κ_1 + 2κ_2 R'^2)\frac{(x_d − c_x)^3}{s_x^4}    (33)

∂y_u/∂s_x = −2(y_d − c_y)(κ_1 + 2κ_2 R'^2)\frac{(x_d − c_x)^2}{s_x^3}    (34)

The initial values of c_x, c_y, κ_2 and s_x for solving Eq. (26) are set to half the width and height of I_2, 0, and 1, respectively. On the other hand, κ_1 is randomly initialized to prevent all of Eqs. (29)–(32) from becoming 0, which happens when κ_1 = κ_2 = 0. We empirically choose κ_1 ∈ [−10⁻⁷, 10⁻⁷].
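The initialization above can be sketched as follows (numpy; a 640×480 image is assumed here, matching the pattern size of Fig. 2):

```python
import numpy as np

rng = np.random.default_rng(0)
w, h = 640, 480                      # width and height of I2 (assumed)
theta_d0 = np.array([
    rng.uniform(-1e-7, 1e-7),        # kappa_1: random, so Eqs. (29)-(32) are not all zero
    0.0,                             # kappa_2
    w / 2.0,                         # c_x
    h / 2.0,                         # c_y
    1.0,                             # s_x
])
```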

6. SOME STRATEGIES

Interpolation of pixel value

When we need the intensity at a pixel whose coordinates are not on the integer grid, we use bilinear interpolation among the neighboring pixel values.
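A minimal sketch of the bilinear look-up (numpy; our naming, valid for interior points only):

```python
import numpy as np

def bilinear(img, x, y):
    # Bilinear interpolation of img (H x W array) at real-valued (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])
```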

Histogram matching

Since the image taken by the camera often changes the intensities of the pattern, the histogram of the image is transformed so that it becomes the same as that of the pattern.
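The paper does not detail its histogram transform; a standard CDF-based histogram-matching sketch (numpy; our naming) is one plausible realization:

```python
import numpy as np

def match_histogram(src, ref):
    # Map the gray levels of src so its histogram matches that of ref,
    # via the usual cumulative-distribution look-up.
    src_vals, src_counts = np.unique(src.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)  # src level -> ref level
    return np.interp(src.ravel(), src_vals, mapped).reshape(src.shape)
```

A uniform brightness shift between the two images is undone exactly by this mapping.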

Coarse-to-fine

To reduce computation time, and to obtain an accurate estimation even when there is a relatively large residual in the initial state, a coarse-to-fine strategy is employed. The procedures mentioned above are applied first to a heavily blurred, filtered image, which then gradually becomes finer. The second and third steps are therefore repeated in turn while changing the resolution of the images I_1, I^u_1 and I_2.
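The paper does not specify its low-pass filter, so the sketch below uses a repeated separable box blur as a stand-in; the outer loop and the helper names `estimate_perspective` / `estimate_distortion` are hypothetical placeholders for steps 2 and 3:

```python
import numpy as np

def blur(img, passes):
    # Simple separable 3x3 box blur applied `passes` times
    # (a stand-in for the paper's unspecified low-pass filter).
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode='edge')
        out = (p[:-2, 1:-1] + p[1:-1, 1:-1] + p[2:, 1:-1]) / 3.0   # vertical
        p = np.pad(out, 1, mode='edge')
        out = (p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]) / 3.0   # horizontal
    return out

# Coarse-to-fine loop: estimate on the most blurred images first, then refine.
# for passes in (8, 4, 2, 0):
#     I1b, I2b = blur(I1, passes), blur(I2, passes)
#     theta_u = estimate_perspective(I1b, I2b, theta_u)   # step 2 (hypothetical)
#     theta_d = estimate_distortion(I1b, I2b, theta_d)    # step 3 (hypothetical)
```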

7. EXPERIMENTS

We applied the proposed method to real images taken by a camera while changing its zoom setting. We use a photograph as the calibration pattern (see Fig. 2), printed by a monochrome laser printer (EPSON LP-9200PS2), and take images of it with a CCD camera (Sony EVI-D30) fixed on a prop, using capturing software (on an SGI O2). Changing the camera zoom from the widest view angle, we took two images of the pattern along with a grid pattern, shown in the left column of Fig. 3. As can be seen, the wider the view angle (Fig. 3(a)(c)), the larger the effect of the distortion becomes (the grid lines curve).

Figure 2: The calibration pattern. 640×480

Table 1: Estimation results as the zoom changes
        Fig.3(a)      Fig.3(e)
κ1      2.804e-07     -6.7631e-08
κ2      2.992e-13     5.219e-13
cx      327.8         326.6
cy      214.3         184.2
sx      0.9954        0.9997

The estimation results are shown in Table 1, and Fig. 3(b)(d)(f)(h) shows the images compensated by Eq. (18) with the estimated parameters. The curved lines of the grid pattern should be transformed to straight lines by the compensation.

The results have some error: as seen in the right column of Fig. 3, the grid lines still curve slightly, especially around the corners of the image, because the gradation of illumination in the image cannot be removed by the simple histogram transformation. It should be replaced with an estimation of illumination change by some method, such as a linear brightness constraint [8].

Note that in simulation experiments using a transformed pattern with additional noise at each pixel as the distorted image, the proposed method worked very well even when the amplitude of the added uniform noise was greater than ±50.

8. CONCLUSIONS

We have proposed a new automated camera calibration technique that obtains the internal camera parameters in order to compensate for the distortion of an image. The proposed method is based on image registration and consists of two nonlinear optimization steps: perspective fitting with a geometric transformation, and distortion fitting. Experimental results demonstrate the efficiency of the proposed method, which reduces the human operator's labor. The nonlinear optimization takes some time, but it is sufficient to run it as a batch process.

9. REFERENCES

[1] T. Mukai and N. Ohnishi, "The recovery of object shape and camera motion using a sensing system with a video camera and a gyro sensor," in Proc. of ICCV'99, pp. 411–417, 1999.

[2] J. B. Shim, T. Mukai, and N. Ohnishi, "Improving the accuracy of 3D shape by fusing shapes obtained from optical flow," in Proc. of CISST'99, pp. 196–202, 1999.

[3] R. Y. Tsai, "An efficient and accurate camera calibration technique for 3D machine vision," in Proc. of CVPR'86, pp. 364–374, 1986.

[4] R. Willson, "Camera calibration using Tsai's method," 1995. ftp://ftp.vislist.com/SHAREWARE/CODE/CALIBRATION/Tsai-method-v3.0b3/.

[5] H. S. Sawhney and S. Ayer, "Compact representations of videos through dominant and multiple motion estimation," T-PAMI, vol. 18, no. 8, pp. 814–830, 1996.

[6] G. A. F. Seber and C. J. Wild, Nonlinear Regression. New York: Wiley, 1989.

[7] R. Klette, K. Schluns, and A. Koschan, Computer Vision: Three-Dimensional Data from Images. Singapore: Springer-Verlag, 1998.

[8] M. J. Black, D. J. Fleet, and Y. Yacoob, "Robustly estimating changes in image appearance," CVIU, vol. 78, no. 1, pp. 8–31, 2000.


[Image panels (a)–(h): four rows of image pairs.]

Figure 3: The images of the calibration pattern taken by a camera. (left column) original images. (right column) compensated images. (upper two rows) images of the calibration pattern and grid at the widest view angle. (lower two rows) images taken with the lens zoomed.