

Accurate linear technique for camera calibration considering lens distortion by solving an eigenvalue problem

Sheng-Wen Shih, Academia Sinica, Institute of Information Science, Nankang, Taipei, Taiwan, and National Taiwan University, Institute of Electrical Engineering, Taipei, Taiwan

Yi-Ping Hung, MEMBER SPIE, Academia Sinica, Institute of Information Science, Nankang, Taipei, Taiwan

Wei-Song Lin, National Taiwan University, Institute of Electrical Engineering, Taipei, Taiwan

1 Introduction

1.1 Importance of Camera Calibration

Camera calibration in the context of 3-D machine vision, as defined by Tsai,1 is the process of determining the internal camera geometric and optical characteristics (intrinsic parameters) and/or the 3-D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). To infer 3-D objects using two or more images, it is essential to know the relationship between the 2-D image coordinate system and the 3-D object coordinate system. This relationship can be described by the following two transformations:

1. Perspective projection of a 3-D object point onto a 2-D image point: Given an estimate of a 3-D object point and its error covariance, we can predict its projection (mean and covariance) on the 2-D image. This is useful for reducing the search space in matching features between two images or for hypothesis verification in scene analysis.

2. Back projection of a 2-D image point to a 3-D ray: Given a 2-D image point, there is a ray in 3-D space on which the corresponding 3-D object point must lie. If we have two (or more) views available,

Paper AO-023 received Jan. 31, 1992; revised manuscript received June 7, 1992; accepted for publication June 28, 1992. This paper is a revision of a paper presented at the SPIE conference on Optics, Illumination, and Image Sensing for Machine Vision VI, November 1991, Boston, Mass. The paper presented there appears (unrefereed) in SPIE Proceedings Vol. 1614.

© 1993 Society of Photo-Optical Instrumentation Engineers. 0091-3286/93/$2.00.

138 / OPTICAL ENGINEERING / January 1993 / Vol. 32 No. 1

Abstract. A new technique for calibrating a camera with high accuracy and low computational cost is proposed. The geometric camera parameters considered include camera position, orientation, focal length, radial lens distortion, pixel size, and optical axis piercing point. With this method, the camera parameters to be estimated are divided into two parts: the radial lens distortion coefficient κ and a composite parameter vector c composed of all the above-mentioned geometric camera parameters other than κ. Instead of using nonlinear optimization techniques, the estimation of κ is transformed into an eigenvalue problem of an 8 × 8 matrix. The method is fast because it requires only linear computation; it is accurate because the effect of the lens distortion is considered and because all the information contained in the calibration points is used. Computer simulations and real experiments show that the calibration method can achieve an accuracy of 1 part in 10,000 in 3-D measurement, which is better than that of the well-known method proposed by R. Y. Tsai.

Subject terms: camera calibration; lens distortion; camera model; stereo vision; eigenvalue problem; three-dimensional metrology.

Optical Engineering 32(1), 138-149 (January 1993).

an estimate of the 3-D point location can be obtained by using triangulation. This is useful for inferring 3-D information from 2-D image features.
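This back projection and triangulation step can be sketched numerically. The midpoint method below is a common textbook triangulation, not the specific estimator used in this paper, and the camera positions and ray directions are illustrative:

```python
def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3-D rays
    o1 + s*d1 and o2 + t*d2 (a common textbook triangulation)."""
    # Solve for s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|^2.
    a = sum(x * x for x in d1)
    b = sum(x * y for x, y in zip(d1, d2))
    c = sum(x * x for x in d2)
    w = [p - q for p, q in zip(o1, o2)]
    d = sum(x * y for x, y in zip(d1, w))
    e = sum(x * y for x, y in zip(d2, w))
    den = a * c - b * b            # nonzero unless the rays are parallel
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    p1 = [o + s * x for o, x in zip(o1, d1)]
    p2 = [o + t * x for o, x in zip(o2, d2)]
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]

# Two cameras at x = -100 and x = +100 mm, both seeing the point (0, 0, 1000).
P = closest_point_between_rays([-100.0, 0.0, 0.0], [100.0, 0.0, 1000.0],
                               [100.0, 0.0, 0.0], [-100.0, 0.0, 1000.0])
```

With noisy rays the two closest points no longer coincide, and the midpoint is a simple compromise; covariance-weighted triangulation is the natural refinement when the 2-D error covariances are available.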

For applications that require these two transformations, e.g., automatic assembly, 3-D metrology, robot calibration, tracking, trajectory analysis, and vehicle guidance, it is essential to calibrate the camera either on-line or off-line. In many cases, the overall system performance strongly depends on the accuracy of camera calibration. In general, a calibration method that achieves higher accuracy requires more computation, and the trade-off between accuracy and computational cost depends on the requirements of the application. Our objective is to develop a camera calibration method that is accurate enough to meet most application requirements while maintaining a computational cost low enough to fulfill the real-time requirement.

1.2 Existing Techniques for Camera Calibration

Many techniques have been developed for camera calibration because of the strong demand from applications. These techniques can be classified into two categories: those that consider lens distortion1-5 and those that neglect lens distortion.6-9

A typical linear technique that does not consider lens distortion is the one that estimates the perspective transformation matrix8 H. The estimated H can be used directly for forward and backward 3-D to 2-D projection. If necessary, given the estimated H, the geometric (or physical) camera parameters β can be easily determined.7,10,11 The precise definition of β is given in Sec. 2. For those techniques that

estimate the perspective transformation matrix, note that the homogeneous projection matrix H is not unique. The projection matrix is determined only up to a scale factor; thus one constraint must be imposed to ensure a unique solution. Usually, the constraint is to set one of the entries of H to a constant. Faugeras and Toscani6 showed that by imposing a physical constraint, the estimation problem remains linear.

Grosky and Tamburino5 regard camera calibration as having two independent phases: the first is to remove geometric camera distortion so that rectangular calibration grids are straightened in the image plane, and the second is to use a calibration method similar to the linear one described above. Grosky and Tamburino have found some physical constraints and try to unify the approach to the linear camera calibration problem; both coplanar and noncoplanar calibration points can be used, although the former case requires three user-supplied constraints. Although the lens distortion is assumed to be compensated in phase 1, the phase-2 calibration method can also be applied directly to an uncompensated image.

Martins et al.12 developed a two-plane method that does not use an explicit camera model. The back-projection problem is solved by computing the vector that passes through the interpolated points on each calibration plane. However, the projection problem is not solved in their paper.

Faig's method2 is a good example of those that consider lens distortion. For methods of this type, equations are established that relate the camera parameters to the 3-D object coordinates and 2-D image coordinates of the calibration points. Nonlinear optimization techniques are then used to search for the camera parameters with the objective of minimizing the residual errors of these equations. Disadvantages of this kind of method include the fact that a good initial guess is required to start the nonlinear search and that it is computationally expensive.

A few years ago, Tsai1 proposed an efficient two-stage technique using the "radial alignment constraint." His method involves a direct solution for most of the calibration parameters and some iterative solutions for the remaining parameters. Some drawbacks of Tsai's method have been mentioned by Weng et al.3 Our experience has also shown13 that Tsai's method can be worse than the simple linear method8 if the lens distortion is relatively small.

Recently, Weng et al.3 presented some experimental results using a two-step method. The first step involves a closed-form solution based on a distortion-free camera model, and the second step improves the camera parameters estimated in the first step by taking the lens distortion into account. This method solves the initial-guess problem in the nonlinear optimization and is more accurate than Tsai's method, according to our experiments.

In this paper, we develop a camera calibration method that is fast and accurate. With this method, the camera parameters to be estimated are divided into two parts: the radial lens distortion coefficient κ and a composite parameter vector c composed of all the geometric camera parameters other than κ. Instead of using nonlinear optimization techniques, the estimation of κ is transformed into an eigenvalue problem of an 8 × 8 matrix. Our method is fast because it requires only linear computation. It is accurate because the effect of the lens distortion is considered and because all the information contained in the calibration points is used. Section 2 introduces the camera model adopted in this paper. Section 3 describes the details of the proposed camera calibration method. Results of computer simulations and real experiments shown in Sec. 4 demonstrate the performance of our method compared to others.

2 Camera Model

Fig. 1 Pinhole camera model with lens distortion, where P is a 3-D object point and Q and Q' are its undistorted and distorted image points, respectively. (OCS: object coordinate system, 3-D; CCS: camera coordinate system, 3-D; ICS: computer image coordinate system, 2-D.)

Consider the pinhole camera model with lens distortion, as shown in Fig. 1. Let P be an object point in the 3-D space, and ro = [xo yo zo]t be its coordinates, in millimeters, with respect to a fixed object coordinate system (OCS). Let the camera coordinate system (CCS), also in millimeters, have its x-y plane parallel to the image plane (such that the x axis is parallel with the horizontal direction of the image and the y axis is parallel with the vertical one), with its origin located at the optical center and its z axis aligned with the optical axis of the lens (see Fig. 1). Let rc = [xc yc zc]t be the coordinates of the 3-D point P with respect to the CCS. If there were no lens distortion, the corresponding image point of P on the image plane would be Q (see Fig. 1). However, because of the effect of lens distortion, the actual image point is Q'. Let sI = [uI vI]t denote the 2-D coordinates (in pixels), with respect to the computer image coordinate system (ICS), of the actual image point Q', where the origin of the ICS is located at the center of the frame memory [e.g., the origin of the ICS is right at (256,256) for a 512 by 512 image].

As shown in Fig. 2, the 3-D to 2-D transformation from ro to sI can be divided into the four steps described next.

Fig. 2 Relation between different transformation matrices.

2.1 Rotation and Translation from the OCS to the CCS

The transformation from ro to rc can be expressed as

  xc = r1 xo + r2 yo + r3 zo + t1 ,
  yc = r4 xo + r5 yo + r6 zo + t2 ,   (1)
  zc = r7 xo + r8 yo + r9 zo + t3 ,

or r̃c = T r̃o with

      [ R t ]   [ r1 r2 r3 t1 ]
  T = [     ] = [ r4 r5 r6 t2 ] ,   (2)
      [ 0 1 ]   [ r7 r8 r9 t3 ]
                [ 0  0  0  1  ]

where the tilde (~) denotes homogeneous14 coordinates, t = (t1 t2 t3)t is a translation vector, and R is a 3 × 3 rotation matrix determined by the three Euler angles φ, θ, and ψ, rotating about the z, y, and z axes sequentially.

2.2 Perspective Projection from a 3-D Object Point in the CCS to a 2-D Image Point on the Image Plane

Let f be the effective focal length, and let sF = [uF vF]t be the 2-D coordinates (in millimeters) of the undistorted image point Q lying on the image plane. Then, we have

  uF = f xc/zc ,  vF = f yc/zc .   (3)

Also, we can express this perspective projection in homogeneous coordinates as

                      [ 1 0 0   0 ]
  s̃F = H r̃c with H = [ 0 1 0   0 ] .   (4)
                      [ 0 0 1/f 0 ]

2.3 Lens Distortion from Q to Q'

For practical reasons, we consider only the first term of the radial lens distortion, i.e.,

  uF = (1 − κρ²) u'F ,  vF = (1 − κρ²) v'F ,  where ρ² = u'F² + v'F² ,   (5)

or

  sF = (1 − κ ||s'F||²) s'F ,   (6)

where s'F = (u'F v'F)t are the coordinates of the distorted 2-D image point (in millimeters). In this paper, κ is given in inverse square millimeters.

2.4 Scaling and Translation of 2-D Image Coordinates

The transformation from s'F (in millimeters) to sI (in pixels) involves (1) scaling from millimeters to pixels and (2) translation due to the misalignment of the sensor array with the optical axis of the lens. Hence,

  u'F = (uI − u0) δu ,  v'F = (vI − v0) δv ,   (7)

or

                        [ 1/δu 0    u0 ]
  s̃I = Tc s̃'F with Tc = [ 0    1/δv v0 ] ,   (8)
                        [ 0    0    1  ]

where δu and δv are the horizontal and vertical pixel spacings (millimeters/pixel) and u0 and v0 are the coordinates (in pixels) of the origin of the CCS in the computer image coordinate system.

Using this notation, the geometric camera parameters are β = [t1 t2 t3 φ θ ψ f κ u0 v0]t. The vertical pixel spacing δv is not included here because it is a known parameter when we use a solid-state camera — otherwise, only the ratios f/δu and f/δv can be determined. Combining Eqs. (1), (3), (5), and (7), we have

  (1 − κρ²)(uI − u0) = (f/δu)(xo r1 + yo r2 + zo r3 + t1)/(xo r7 + yo r8 + zo r9 + t3) ,   (9)

  (1 − κρ²)(vI − v0) = (f/δv)(xo r4 + yo r5 + zo r6 + t2)/(xo r7 + yo r8 + zo r9 + t3) ,   (10)

where ρ = [(uI − u0)² + (vI − v0)²]^(1/2).

3 New Camera Calibration Technique

Given a set of 3-D calibration points and their corresponding 2-D image coordinates, the problem is to estimate β, the parameters of the camera model. Instead of estimating β directly, we first estimate the coefficient κ and the composite parameters c [as described following Eq. (16)], then decompose them into β. Two camera calibration algorithms are described in this paper; one requires a set of noncoplanar calibration points, and the other needs only coplanar calibration points. Although the former requires noncoplanar calibration points, it is more accurate than the latter.

Our method requires initial guesses for u0, v0, and δu, which can be easily set as follows. Let fcamera denote the pixel scanning rate of the camera (e.g., fcamera = 14.31818 MHz for the PULNiX TM-745E), and let fdigitizer denote the pixel scanning rate of the digitizer or frame grabber (e.g., fdigitizer = 10 MHz for the ITI Series 151). Let δ'u denote the horizontal pixel spacing of the solid-state imager (e.g., δ'u = 11 μm for the PULNiX TM-745E). Then a good estimate of δu can be obtained by the following equation:

  δu = δ'u fcamera/fdigitizer .   (11)

The other two parameters (u0, v0) can be temporarily set1 to (0,0) if no other reliable information about them is available. More accurate estimates of the three parameters can be obtained with our calibration procedure, which can then be used iteratively as a better initial guess. If the amount of radial lens distortion is not unreasonably large, say |κ| ≤ 0.0008 mm⁻², and the computational speed is not the main concern, then another two or three iterations can give even higher accuracy, according to our experience (see Fig. 9 in Sec. 4.1).

Hereafter, for simplicity, we will use u, v, x, y, and z to denote uI, vI, xo, yo, and zo, respectively.
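Before turning to the estimation algorithms, the forward model composed as in Eqs. (1), (9), and (10) can be sketched as follows. All parameter values are illustrative; ρ is measured in pixels (so κ here is in pixel⁻² rather than the mm⁻² convention of Sec. 2), and the fixed-point inversion of the (1 − κρ²) factor is our addition for running the model in the forward (3-D to 2-D) direction, not part of the paper's algorithm:

```python
def project(point, R, t, f, kappa, du, dv, u0, v0, iters=10):
    """Project a 3-D object point to distorted pixel coordinates
    following Eqs. (1), (9), and (10); rho is measured in pixels here."""
    x, y, z = point
    # Eq. (1): rotate and translate into the camera frame.
    xc = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0]
    yc = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1]
    zc = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2]
    # Right-hand sides of Eqs. (9) and (10).
    rhs_u = (f / du) * xc / zc
    rhs_v = (f / dv) * yc / zc
    # Fixed-point iteration for (1 - kappa*rho^2)(u - u0) = rhs_u, etc.
    pu, pv = rhs_u, rhs_v
    for _ in range(iters):
        rho2 = pu * pu + pv * pv
        pu = rhs_u / (1.0 - kappa * rho2)
        pv = rhs_v / (1.0 - kappa * rho2)
    return pu + u0, pv + v0

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# No distortion: a point 1 m away, f = 25 mm, 10-um square pixels.
u, v = project((10.0, 20.0, 0.0), I3, (0.0, 0.0, 1000.0),
               25.0, 0.0, 0.01, 0.01, 0.0, 0.0)
# With positive kappa, the distorted point moves outward from the center.
u2, v2 = project((10.0, 20.0, 0.0), I3, (0.0, 0.0, 1000.0),
                 25.0, 1e-6, 0.01, 0.01, 0.0, 0.0)
```

With κ = 0 the undistorted pixel coordinates come out directly; the iteration only matters when κ ≠ 0, and it converges quickly because κρ² is small in practice.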

3.1 New Camera Calibration Technique Using Noncoplanar Calibration Points

Rewrite Eqs. (9) and (10) as

  (1 − κρ²)(u − u0) [x y z 1] [r7 r8 r9 t3]t = [x y z 1] [r1 f/δu  r2 f/δu  r3 f/δu  t1 f/δu]t ,   (12)

  (1 − κρ²)(v − v0) [x y z 1] [r7 r8 r9 t3]t = [x y z 1] [r4 f/δv  r5 f/δv  r6 f/δv  t2 f/δv]t .   (13)

From Eq. (12) we have

  (u − u0) [x y z 1] [r7 r8 r9 t3]t − κρ²(u − u0) [x y z 1] [r7 r8 r9 t3]t
    = [x y z 1] [r1 f/δu  r2 f/δu  r3 f/δu  t1 f/δu]t ,   (14)

which leads to

  [x y z 1] [r1 f/δu + r7 u0,  r2 f/δu + r8 u0,  r3 f/δu + r9 u0,  t1 f/δu + t3 u0]t
    + [−ux  −uy  −uz  −u] [r7 r8 r9 t3]t
    + κρ²(u − u0) [x y z 1] [r7 r8 r9 t3]t = 0 .   (15)

Similarly, from Eq. (13), we have

  [x y z 1] [r4 f/δv + r7 v0,  r5 f/δv + r8 v0,  r6 f/δv + r9 v0,  t2 f/δv + t3 v0]t
    + [−vx  −vy  −vz  −v] [r7 r8 r9 t3]t
    + κρ²(v − v0) [x y z 1] [r7 r8 r9 t3]t = 0 .   (16)

Define

  P1 = [r1 f/δu + r7 u0,  r2 f/δu + r8 u0,  r3 f/δu + r9 u0,  t1 f/δu + t3 u0]t ,
  P2 = [r4 f/δv + r7 v0,  r5 f/δv + r8 v0,  r6 f/δv + r9 v0,  t2 f/δv + t3 v0]t ,
  P3 = [r7  r8  r9  t3]t ,

and let c = [P1t P2t P3t]t be the composite parameter vector. Then, using Eqs. (15) and (16) for all the 2-D to 3-D pairs (indexed by j), we have

  [xj yj zj 1 0  0  0  0] [P1]   [−uj xj  −uj yj  −uj zj  −uj]
  [0  0  0  0 xj yj zj 1] [P2] + [−vj xj  −vj yj  −vj zj  −vj] P3

        [(uj − u0)ρj² xj  (uj − u0)ρj² yj  (uj − u0)ρj² zj  (uj − u0)ρj²]
    + κ [(vj − v0)ρj² xj  (vj − v0)ρj² yj  (vj − v0)ρj² zj  (vj − v0)ρj²] P3 = 0 ,   (17)

with ρj² = (uj − u0)² + (vj − v0)². Define A, B, and C to be the matrices formed by stacking, over all j, the 2 × 8, 2 × 4, and 2 × 4 blocks appearing in Eq. (17), respectively, and let

  p = [P1t P2t]t/||P3||  and  q = P3/||P3|| .

Because the 2-D observation noise always exists, Eq. (17) will not be exactly zero, i.e.,

  Ap + Bq + κCq = e ≠ 0 .   (18)

In practice, the estimate of δu obtained by Eq. (11) is usually quite accurate, and the image center (u0,v0) is relatively small in comparison with almost all the image feature points (uj,vj) — when estimating κ, we can use only those calibration points that are relatively far away from the image center. Therefore, we can use the initial estimates of u0, v0, and δu as if they were the true values of u0, v0, and δu in the matrix C. The deviation of the matrix C caused by the estimation error in u0, v0, and δu is negligible in the following estimation of p and q, because κ is usually very small and the deviation of the matrix C times κ in Eq. (17) is even more trivial. In other words, larger lens distortion should result in more calibration inaccuracy when using the proposed algorithm, because of this approximation of u0, v0, and δu. This is indeed the case, as shown in Fig. 8 in Sec. 4.1. However, a reasonable amount of lens distortion, as in most practical cases, should not cause any difficulties in using this approximation. Using this approximation, the matrix C is treated as a known matrix in the following computation.

Hence, the parameters to be estimated, κ and c (or equivalently, κ, p, and q), can be computed by minimizing the following criterion E with respect to κ, p, and q:

  E = ||Ap + Bq + κCq||²   (19)

subject to the constraint ||q||² = 1. To minimize E, we form the Lagrangian

  L = ||Ap + Bq + κCq||² + λ(1 − ||q||²)   (20)

and set ∂L/∂p = 0, ∂L/∂q = 0, ∂L/∂λ = 0, and ∂L/∂κ = 0, which yields

  (1/2) ∂L/∂p = AtAp + AtBq + κAtCq = 0 ,   (21)

  (1/2) ∂L/∂q = BtBq + κ²CtCq + BtAp + κCtAp + κCtBq + κBtCq − λq = 0 ,   (22)

  (1/2) ∂L/∂κ = qtCtAp + qtCtBq + κqtCtCq = 0 .   (23)

From Eq. (21), we have

  p = −[(AtA)⁻¹AtB + κ(AtA)⁻¹AtC] q .   (24)

Substituting Eq. (24) into Eqs. (20), (22), and (23), we have

  L = qt[D − λI]q + λ ,   (25)

  (1/2) ∂L/∂q = [D − λI]q = 0 ,   (26)

  (1/2) ∂L/∂κ = qt[κT + S/2]q = 0 ,   (27)

where

  D ≡ κ²T + κS + R ,   (28)

  R = BtB − BtA(AtA)⁻¹AtB ,  T = CtC − CtA(AtA)⁻¹AtC ,   (29)

and

  S = CtB − CtA(AtA)⁻¹AtB + BtC − BtA(AtA)⁻¹AtC .   (30)

Notice that λ is an eigenvalue of D and that E, evaluated where ∂L/∂p = ∂L/∂q = ∂L/∂λ = 0, equals λ subject to ||q||² = 1; i.e., the minimal residual error is λmin = min{eigenvalues of D}. Given a κ, D is uniquely determined, and so is the minimal residual error λmin(κ). When there is no 2-D observation error and the initial estimates of u0, v0, and δu are accurate, the minimal residual error λmin is equal to zero. Then, from Eq. (26), we have

  T⁻¹[D − 0·I]q = [κ²I + κT⁻¹S + T⁻¹R]q = 0 .   (31)

It follows that

  κ(κq) = [−T⁻¹R  −T⁻¹S] [qt  (κq)t]t ,   (32)

and

  κ(q) = [0  I] [qt  (κq)t]t .   (33)

It is now obvious that κ is a real eigenvalue of the 8 × 8 matrix

      [ 0      I     ]
  K = [              ] ,
      [ −T⁻¹R  −T⁻¹S ]

since

  K [qt  (κq)t]t = κ [qt  (κq)t]t .   (34)

But in practice, no real κ can be found by solving the eigenvalues of K; i.e., we will usually obtain a complex (impractical) κ that makes the residual error equal to zero. In our experience, we can choose to use the real part of the eigenvalue with the smallest absolute imaginary part as an estimate of κ, denoted as κ̂. Of course, we could use κ̂ as the initial guess and perform a one-dimensional nonlinear search to find the optimal κ. However, because κ̂ is accurate enough, further nonlinear search gains little, according to our experience. Therefore, we omit the nonlinear search in the calibration procedure. Once κ̂ is determined, the vector q can be obtained by selecting the eigenvector corresponding to the smallest eigenvalue of the matrix [κ̂²T + κ̂S + R]. Substituting q into Eq. (24), we then have p. Using the fact that the nine ri's, in the definitions of P1, P2, and P3, are components of a rotation matrix, we can decompose the vectors p and q and obtain7,11 the parameters R, t1, t2, t3, f, u0, v0, and δu.
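Equations (31) through (34) convert the quadratic eigenvalue problem in κ into an ordinary eigenvalue problem for the companion matrix K. A scalar toy version, with q one-dimensional so that K is 2 × 2 and its eigenvalues can be written in closed form, shows the mechanics; the values of T, S, and κ below are fabricated for illustration rather than derived from calibration data:

```python
import math

# Scalar analog of Eq. (31): kappa^2 + (S/T)*kappa + (R/T) = 0, with q
# one-dimensional. T, S, and kappa_true are fabricated for illustration.
kappa_true = 3e-4
T, S = 1.0, 0.05
R = -(kappa_true ** 2) - kappa_true * S    # makes kappa_true an exact root

b, c = S / T, R / T
# Companion matrix of Eq. (34): K = [[0, 1], [-R/T, -S/T]].
K = [[0.0, 1.0], [-c, -b]]

# Check the eigen-relation K [q, kappa*q]^t = kappa [q, kappa*q]^t with q = 1.
v = [1.0, kappa_true]
Kv = [K[0][0] * v[0] + K[0][1] * v[1],
      K[1][0] * v[0] + K[1][1] * v[1]]

# The eigenvalues of K are the roots of the quadratic above; here we simply
# take the smaller-magnitude root (the paper selects the eigenvalue with
# the smallest absolute imaginary part when no real root exists).
disc = math.sqrt(b * b - 4.0 * c)
roots = ((-b + disc) / 2.0, (-b - disc) / 2.0)
kappa_hat = min(roots, key=abs)
```

In the real algorithm q is 4-dimensional, K is 8 × 8, and its eigenvalues must be found numerically; the construction, however, is exactly this companion-matrix linearization.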

3.2 New Camera Calibration Technique Using Coplanar Calibration Points

When only coplanar calibration points are available, the derivation of the algorithm is quite similar to the one that uses noncoplanar calibration points, but the decomposition of the parameters should be done carefully because the estimated results are more sensitive to noise. Without loss of generality, we can choose the x-y plane of the OCS to be the plane in which the calibration points lie, and all z's in Eqs. (12) and (13) vanish; thus

  (1 − κρ²)(u − u0) [x y 1] [r7 r8 t3]t = [x y 1] [r1 f/δu  r2 f/δu  t1 f/δu]t ,   (35)

  (1 − κρ²)(v − v0) [x y 1] [r7 r8 t3]t = [x y 1] [r4 f/δv  r5 f/δv  t2 f/δv]t .   (36)

Similar to Sec. 3.1, we define

  P'1 = [r1 f/δu + r7 u0,  r2 f/δu + r8 u0,  t1 f/δu + t3 u0]t ,
  P'2 = [r4 f/δv + r7 v0,  r5 f/δv + r8 v0,  t2 f/δv + t3 v0]t ,
  P'3 = [r7  r8  t3]t ,   (37)

and then have, for all the 2-D to 3-D pairs,

  [xj yj 1 0  0  0] [P'1]   [−uj xj  −uj yj  −uj]
  [0  0  0 xj yj 1] [P'2] + [−vj xj  −vj yj  −vj] P'3

        [(uj − u0)ρj² xj  (uj − u0)ρj² yj  (uj − u0)ρj²]
    + κ [(vj − v0)ρj² xj  (vj − v0)ρj² yj  (vj − v0)ρj²] P'3 = 0 .   (38)

To estimate the composite camera parameters, we repeat the derivation of Sec. 3.1, except that the definitions of the matrices A, B, and C are replaced by the corresponding blocks of Eq. (38), and the decomposition procedure is modified as follows. Suppose that the estimated composite parameters are P'1 = γ[a1 a2 a3]t, P'2 = γ[a4 a5 a6]t, and P'3 = γ[a7 a8 1]t, where ai, i = 1, ..., 8, are real numbers (note that γ = t3). Substituting these values into the definitions of P'1, P'2, and P'3, i.e., Eq. (37), we have the following eleven equations in the eleven unknowns r1, r2, r4, r5, r7, r8, t1, t2, t3, δu, and f:

  r1 = (a1 − a7 u0) δu t3/f ,
  r2 = (a2 − a8 u0) δu t3/f ,
  t1 = (a3 − u0) δu t3/f ,
  r4 = (a4 − a7 v0) δv t3/f ,
  r5 = (a5 − a8 v0) δv t3/f ,
  t2 = (a6 − v0) δv t3/f ,   (39)
  r7 = a7 t3 ,
  r8 = a8 t3 ,
  r1² + r4² + r7² = 1 ,
  r2² + r5² + r8² = 1 ,
  r1 r2 + r4 r5 + r7 r8 = 0 ,

where the last three equations are the constraints of a rotation matrix. Substituting the first eight equations into the last three, we have

  [ (a1 − a7 u0)²            (a4 − a7 v0)²            a7²   ] [ (δu t3/f)² ]   [ 1 ]
  [ (a2 − a8 u0)²            (a5 − a8 v0)²            a8²   ] [ (δv t3/f)² ] = [ 1 ] .   (40)
  [ (a1 − a7 u0)(a2 − a8 u0) (a4 − a7 v0)(a5 − a8 v0) a7 a8 ] [ t3²        ]   [ 0 ]

By solving Eq. (40), t3, δu, and f can be found, because they are known to be positive numbers. Knowing the values of t3, δu, and f, we can easily obtain all the extrinsic parameters in Eq. (39). Finally, the last column of the rotation matrix can be obtained by calculating the cross product of the first two columns, i.e.,

  [r3 r6 r9]t = [r1 r4 r7]t × [r2 r5 r8]t .   (41)

Fig. 3 Definition of 3-D angular error.
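The decomposition in Eqs. (37), (39), (40), and (41) can be checked numerically. The sketch below builds the composite parameters ai from a synthetic camera (all parameter values are illustrative), then verifies that the true unknowns satisfy the linear system of Eq. (40) and that the cross product of Eq. (41) recovers the third column of R:

```python
import math

def matmul(A, B):
    """3 x 3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# A generic rotation: about z, then about x (angles are illustrative).
cx, sx = math.cos(0.2), math.sin(0.2)
cz, sz = math.cos(0.3), math.sin(0.3)
Rz = [[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]
Rx = [[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]]
R = matmul(Rx, Rz)
(r1, r2, r3), (r4, r5, r6), (r7, r8, r9) = R

# Illustrative camera parameters.
f, du, dv, u0, v0 = 25.0, 0.01, 0.012, 5.0, -3.0
t1, t2, t3 = 40.0, -30.0, 1000.0

# Composite parameters of Eq. (37) with P3' = gamma*[a7, a8, 1]^t, gamma = t3.
a1 = (r1 * f / du + r7 * u0) / t3
a2 = (r2 * f / du + r8 * u0) / t3
a3 = (t1 * f / du + t3 * u0) / t3
a4 = (r4 * f / dv + r7 * v0) / t3
a5 = (r5 * f / dv + r8 * v0) / t3
a6 = (t2 * f / dv + t3 * v0) / t3
a7, a8 = r7 / t3, r8 / t3

# Eq. (40): the true unknowns must satisfy M x = (1, 1, 0)^t.
M = [[(a1 - a7 * u0) ** 2, (a4 - a7 * v0) ** 2, a7 ** 2],
     [(a2 - a8 * u0) ** 2, (a5 - a8 * v0) ** 2, a8 ** 2],
     [(a1 - a7 * u0) * (a2 - a8 * u0),
      (a4 - a7 * v0) * (a5 - a8 * v0), a7 * a8]]
x = [(du * t3 / f) ** 2, (dv * t3 / f) ** 2, t3 ** 2]
rhs = [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

# Eq. (41): third column of R from the cross product of the first two.
c1, c2 = (r1, r4, r7), (r2, r5, r8)
c3 = (c1[1] * c2[2] - c1[2] * c2[1],
      c1[2] * c2[0] - c1[0] * c2[2],
      c1[0] * c2[1] - c1[1] * c2[0])
```

Solving Eq. (40) for x from noisy ai's is a small 3 × 3 linear system; the forward check above only confirms that the system is consistent by construction.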

4 Experimental Results


To evaluate the accuracy of camera calibration for 3-D vision applications, it is necessary to define certain types of error measures. The measure used in this paper is the 3-D angular error, i.e., the angle ∠POP' shown in Fig. 3, where P is the 3-D test point, O is the estimated lens center, and OP' is the 3-D ray back projected from the observed 2-D image of P using the estimated camera parameters. A 3-D angular error of 0.005 deg is roughly equivalent to 1 part in 10,000, because tan(0.005 deg) ≈ 1/10,000.
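The 3-D angular error can be computed directly from the estimated lens center and the back-projected ray direction; the function below is a straightforward reading of the definition, with illustrative inputs:

```python
import math

def angular_error_deg(P, O, ray_dir):
    """Angle (in degrees) between OP and the back-projected ray direction."""
    v1 = [p - o for p, o in zip(P, O)]
    dot = sum(a * b for a, b in zip(v1, ray_dir))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in ray_dir))
    # Clamp to guard against floating-point overshoot before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# A perfectly back-projected ray gives zero error ...
err0 = angular_error_deg([10.0, 20.0, 1000.0], [0.0, 0.0, 0.0],
                         [10.0, 20.0, 1000.0])
# ... and tan(0.005 deg) is on the order of 1 part in 10,000.
ratio = math.tan(math.radians(0.005))
```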

For convenience, let βLK, βT, and βN denote the estimate of β obtained, respectively, by our algorithm, Tsai's1 two-stage algorithm, and the Weng et al.3 two-step nonlinear algorithm (only radial lens distortion is considered). To compute βLK and βT, we require that u0 and v0 be known a priori, or at least, we must have some initial guesses. In the real experiments, the true parameter values are unknown, and we set the initial values for u0 and v0 to zero and the initial value for δu to be δ'u(fcamera/fdigitizer), for the reasons explained in Sec. 3. In the simulations, the parameters of the synthetic camera are set (and known) to be f = 25.85 mm, δu = 15.66 μm, and δv = 13 μm, whereas the initial value for δu is chosen to be (fcamera/fdigitizer) × 11 μm = (14.31818 MHz/10 MHz) × 11 μm = 15.75 μm, which introduces about 0.58% error for δu. The images used in the experiments are 480 × 512 pixels in size. The data shown later in Figs. 5 to 16 and Figs. 18 to 22 are averages of at least 10 random trials.

Fig. 4 A typical image of the calibration plate containing 25 calibration points used in the real experiment.

Finally, in Sec. 4.3, one of the most interesting applications, stereo vision, is used to test the performance of different camera calibration techniques. The accuracy of the estimate of a 3-D point position is strongly related to the 3-D angular error defined above. This experiment demonstrates that our calibration method can be used to achieve highly accurate 3-D measurements, as expected.

4.1 Performance of Different Techniques UsingNoncoplanar Calibration Points

The first experiment is to determine a proper number of calibration points to be used for camera calibration. The image acquisition system we use includes a PULNiX TM-745E CCD camera and an ITI Series 151 frame grabber. The calibration object is a 300 by 300-mm white plate having 25 black calibration circles on it. This calibration plate was mounted rigidly on a translation stage and was initially about 1800 mm away from the camera. For this experiment, one image was taken each time the calibration plate was moved toward the camera by 25 mm, and altogether 21 images of the calibration plate were taken as the calibration plate was moved from the range of 1800 mm to the range of 1300 mm. A typical image is shown in Fig. 4. The centers of the black circles (and the black doughnuts) are used as the calibration points. The 2-D image coordinates of those calibration points can be estimated with subpixel accuracy. Because the calibration plate and the translation stage are manufactured with very high accuracy, the 3-D positions of those calibration points can be treated as known values with respect to a fixed object coordinate system. Thus, we have 25 × 21 = 525 pairs of 2-D, 3-D coordinates of control points. Here, we randomly choose Ncalib points (Ncalib = 10, 20, ..., 200) from the 525 2-D, 3-D pairs to calibrate the camera and use all remaining points to test the calibrated parameters. The averages of the results of 10 random trials are plotted in Fig. 5, which shows that the 3-D angular error decreases as the number of calibration points increases, no matter which of the three methods is used. When the number of calibration points is greater than 60, increasing Ncalib results in little gain in accuracy for all three

144 / OPTICAL ENGINEERING / January 1993 / Vol. 32 No. 1



ACCURATE LINEAR TECHNIQUE FOR CAMERA CALIBRATION


calibration methods. Therefore, we simply choose Ncalib = 60 in the simulations shown in Figs. 6 through 16. Because the positions of the 3-D calibration points can be controlled with high accuracy, we assume that the 3-D coordinates of the calibration points are known exactly, and that the only source of measurement error is the error in estimating the 2-D image coordinates of the 3-D calibration points, i.e., the 2-D observation error. The standard deviation of the 2-D observation error is denoted by σ.
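The random calibration/test split used throughout these experiments can be sketched as follows; the integer stand-ins for the 2-D, 3-D pairs and the fixed seed are illustrative assumptions, not part of the original procedure:

```python
import random

def split_calibration_points(pairs, n_calib, seed=0):
    """Randomly choose n_calib of the 2-D, 3-D pairs for calibration;
    all remaining pairs are kept for testing the calibrated parameters."""
    rng = random.Random(seed)      # fixed seed only for repeatability
    shuffled = list(pairs)         # copy, so the input list is untouched
    rng.shuffle(shuffled)
    return shuffled[:n_calib], shuffled[n_calib:]

# 25 circles x 21 plate positions = 525 pairs, as in the experiment
pairs = list(range(25 * 21))       # integer stand-ins for the real pairs
calib, test = split_calibration_points(pairs, 60)
```

Averaging over 10 such random trials (with different seeds) gives one data point of the curves in Fig. 5.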

The purpose of the first simulation is to check the results of these real experiments. In this simulation, the radial lens distortion is chosen to be κ = +0.0003 mm⁻² (which is the estimate of κ in these real experiments, and corresponds to roughly 2 to 3 pixels of distortion near the four corners of a 480 × 512 image) and σ is assumed to be 0.1 pixel (which is roughly the subpixel accuracy we can achieve using the circular calibration patterns shown in Fig. 4). Figure 6 shows a similar decrease of the 3-D angular error as Ncalib increases, except that the 3-D angular error can decrease further as Ncalib increases when using the nonlinear method. This is mainly because the 2-D and 3-D coordinates of the test points used in the simulations are noiseless, whereas in the real experiment, ground truth is never known and the measurement error dominates.
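As a rough sanity check of the quoted 2 to 3 pixel corner distortion, a one-term radial model shifts a point at radius r by about κr³. The sensor dimensions below are assumptions for illustration (the paper does not state them), chosen to be typical of the era's CCD cameras:

```python
def radial_shift_mm(kappa, r_mm):
    """Radial displacement dr ~= kappa * r**3 implied by the one-term
    model r_undistorted = r_distorted * (1 + kappa * r_distorted**2)."""
    return kappa * r_mm ** 3

kappa = 3e-4                             # mm^-2, as estimated in Sec. 4.1
# Assumed active area of 8.8 x 6.6 mm behind the 480 x 512 image.
r_corner = (4.4 ** 2 + 3.3 ** 2) ** 0.5  # half-diagonal = 5.5 mm
shift_mm = radial_shift_mm(kappa, r_corner)
shift_px = shift_mm / (8.8 / 512)        # assumed horizontal pixel pitch
```

With these assumed dimensions the corner shift comes out near 2.9 pixels, consistent with the 2 to 3 pixels quoted above.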

The next simulation shows the deterioration of the three calibration techniques as the 2-D observation errors increase

Fig. 7 3-D angular error versus the 2-D observation noise.


Fig. 8 3-D angular error versus the radial lens distortion coefficient.

(see Fig. 7). The 3-D angular error of Tsai's1 technique is always greater than that of the other two. This is partially because each 2-D, 3-D pair of calibration points contributes only one equation when using the radial alignment constraint,1 whereas the other two techniques use both the horizontal and vertical components of the perspective projection relation, so that each 2-D, 3-D pair contributes two equations in the minimization. Another reason is that Tsai's1 algorithm does not estimate the image center. Although the image center could be estimated via a separate procedure,3 we did not implement that image center estimation algorithm. Instead, we showed that, given the same guess for the image center, our method performs better than Tsai's method. When given a more accurate image center, both Tsai's and our techniques obtain better results, but our method still performs better than Tsai's does. Sometimes, our method even performs better than the Weng et al.3 method, because their method can be trapped in a local minimum.

In the third simulation, we set the 2-D observation error σ = 0.1 pixel and the number of calibration points Ncalib = 60, and observe how the 3-D angular error varies versus the radial lens distortion coefficient (see Fig. 8). Notice that both Tsai's1 method (θT) and our method (θLK) degrade when κ becomes larger. This is partially caused by being given a wrong image center. Figure 8 also displays a phenomenon that we predicted in Sec. 3, i.e., our technique should have



Fig. 5 3-D angular error versus the number of calibration points.


Fig. 6 3-D angular error versus the number of calibration points.



SHIH, HUNG, and LIN


Fig. 9 3-D angular error versus the radial lens distortion coefficient.


Fig. 10 3-D angular error versus the radial lens distortion coefficient.

better performance when the radial lens distortion is smaller. When the given image center is not accurate and κ > 0, our technique tends to overestimate κ. Thus, for a large positive κ, our technique may give a worse estimate than those given by others if the initial estimate for the image center is not good enough. However, in applications, we often choose to use a standard lens (|κ| ≈ 0) or a wide-angle lens (κ < 0), rather than a lens with κ ≫ 0. Therefore, the proposed technique is suitable for most 3-D computer vision applications requiring high accuracy. If for some reason we need to choose a lens with a relatively large positive κ, our calibration procedure can be executed iteratively by substituting the originally given image center (u0, v0) and κ with the newly estimated ones. Results of the first three iterations are shown in Fig. 9. Notice that the performance of the nonlinear method, which typically requires 40 to 100 iterations, is sometimes even worse than that obtained after the second (or the third) iteration of our method.

The fourth simulation shows how Tsai's1 method and our method degrade as the error of the given image center becomes large. In this simulation, the true image center is [Rcenter cos(α), Rcenter sin(α)], Rcenter = 0, 5, 10, 15, and the a priori given image center is set to (0,0). Each data point shown in Figs. 10 and 11 is the average of the results of 40 random trials, 10 for each angle α = ±45, ±135



Fig. 11 3-D angular error versus the radial lens distortion coefficient.

deg. Notice that when the given image center is less accurate, both Tsai's method (θT) and our method (θLK) degrade more as κ becomes larger. If we use the true image center in the simulation, then the curves corresponding to θT and θLK are both quite flat.
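The placement of the true image center in this simulation can be written out directly. The sketch below assumes the four angles are ±45 and ±135 deg, consistent with the 40 = 4 × 10 trials; the function name is illustrative:

```python
import math

def true_image_centers(r_center, angles_deg=(45, 135, -135, -45)):
    """True image centers [r*cos(a), r*sin(a)] placed on a circle of
    radius r_center (pixels) around the a priori guess (0, 0)."""
    return [(r_center * math.cos(math.radians(a)),
             r_center * math.sin(math.radians(a)))
            for a in angles_deg]

centers = true_image_centers(10)   # one of Rcenter = 0, 5, 10, 15
```

Ten random trials at each of the four angles then give the 40 trials averaged for each data point.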

4.2 Performance of Different Techniques Using Coplanar Calibration Points

In Sec. 3.2 we proposed a method for calibrating a camera using coplanar calibration points. Because the calibration technique using coplanar calibration points (coplanar technique, for short) behaves somewhat differently from the noncoplanar one (noncoplanar technique, for short), we discuss the experiments using coplanar techniques in this section. We set the 2-D observation error σ = 0.1 pixel and the number of calibration points Ncalib = 60 for the following experiments. Note that neither Tsai's1 coplanar technique nor our coplanar technique estimates the image center, but ours gives an estimate of δu, whereas Tsai's does not.

We first show that Tsai's1 coplanar technique is less sensitive to the error of the a priori given image center. Similar to the fourth simulation described in Sec. 4.1, the true image center is [Rcenter cos(α), Rcenter sin(α)], whereas the image center given to the algorithm is (0,0). Figures 12 and 13 are the results obtained by using the true δu, whereas Figs. 14 and 15 are obtained by using Eq. (11) as an estimate of δu, which has a relative error of about 0.58%. Notice that our coplanar technique is more robust when the given estimate of the horizontal pixel spacing δu is not exactly the true value, whereas Tsai's method is more robust to the deviation of the given estimate of the image center (u0, v0). The former is because the parameter δu is estimated in our coplanar technique, and the latter is because what we obtain from our linear calibration procedure are composite parameters. The composite parameters are a redundant combination of the real parameters, which means an erroneous combination of these parameters can still make a good fit between experimental observations and model predictions.1

Also, it is interesting to note that Tsai's1 coplanar technique performs almost as well as Tsai's noncoplanar technique if the given δu is accurate (see Fig. 13). However, the performance of Tsai's coplanar technique is sensitive to




Fig. 13 3-D angular error versus the radial lens distortion coefficient using Tsai's1 method with true δu.

the accuracy of the given δu (e.g., referring to Fig. 15, a 0.58% error in δu induced a large error in the calibration result). For comparison, each curve in Fig. 16 is an average of the four curves in each of the four graphs in Figs. 12 to 15. Therefore, unless a very precise estimate of δu is available,4 our calibration technique is a better choice whether the calibration points are coplanar or noncoplanar.

4.3 Performance of Different Techniques Tested Using Stereo Vision

The following experiment demonstrates that our calibration method can be used to achieve highly accurate 3-D measurements, as expected, and that the accuracy of the estimate of a 3-D point position is strongly related to the 3-D angular error used in Secs. 4.1 and 4.2. In stereo vision, the 3-D position of an object point seen in two images is estimated by back projecting the two straight lines corresponding to the two image points and finding the midpoint of their common normal line segment. The stereo vision setup is shown in Fig. 17. The distance between the two optical centers of the cameras, i.e., the length of the baseline, is approximately 300 mm.
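The midpoint construction described above can be sketched as follows. The representation of each back-projected line as an optical center c plus a direction d is generic; this is a minimal sketch of the standard construction, not code from the paper:

```python
def midpoint_triangulation(c1, d1, c2, d2):
    """Intersect two back-projected rays p_i(t) = c_i + t * d_i by taking
    the midpoint of the common normal (shortest) segment between them."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n1 = dot(d1, d1) ** 0.5
    n2 = dot(d2, d2) ** 0.5
    d1 = [x / n1 for x in d1]                    # unit direction, ray 1
    d2 = [x / n2 for x in d2]                    # unit direction, ray 2
    w = [q - p for p, q in zip(c1, c2)]          # c2 - c1
    d12 = dot(d1, d2)
    denom = 1.0 - d12 ** 2                       # rays must not be parallel
    t1 = (dot(d1, w) - d12 * dot(d2, w)) / denom
    t2 = (d12 * dot(d1, w) - dot(d2, w)) / denom
    p1 = [c + t1 * d for c, d in zip(c1, d1)]    # closest point on ray 1
    p2 = [c + t2 * d for c, d in zip(c2, d2)]    # closest point on ray 2
    return [(a + b) / 2 for a, b in zip(p1, p2)]
```

For rays that truly intersect, e.g. one from (0,0,0) along +z and one from (1,0,0) along (−1,0,1), the common normal segment degenerates to the intersection point (0,0,1), which the construction recovers exactly.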


Fig. 14 3-D angular error versus the radial lens distortion coefficient using our method with an estimate of δu having 0.58% error.


Fig. 15 3-D angular error versus the radial lens distortion coefficient using Tsai's1 method with an estimate of δu having 0.58% error.

First, 525 points distributed in the depth range of 1300 to 1800 mm are obtained for each camera by means of the procedures described in Sec. 4.1. We randomly choose Ncalib points (Ncalib = 10, 20, …, 200) from the 525 2-D, 3-D pairs to calibrate the camera and use all remaining points to test the calibrated parameters. Correspondences between the two images of the test points can be determined easily because of the specific arrangement of the 23 black circles and two doughnuts. Figures 18 through 20 show the estimation error of the 3-D position in the x, y, and z directions, respectively, using a stereo vision system with κleft = −2.1 × 10⁻⁴ mm⁻² and κright = −1.3 × 10⁻⁴ mm⁻². Combining the results in Figs. 18 to 20, the 3-D distance errors for the three different techniques are shown in Fig. 21. Here, the 3-D distance error is defined as the Euclidean norm of the estimation error of a 3-D point position. As shown earlier in the simulation, the accuracy of a single camera will be 1 part in 10,000 for our setup when more than 60 calibration points are used. With the test points at a distance of approximately 1500 mm, the estimation error in the x and y directions should be approximately 0.15 mm in each direction. However, the stereo accuracy may be slightly worse than what we have predicted, because the



Fig. 12 3-D angular error versus the radial lens distortion coefficient using our method with true δu.



estimation error may come from both of the stereo cameras. Furthermore, because the length of the baseline, 300 mm, is much smaller than the depth, 1500 mm, the estimation error in the z direction (0.6 mm) is much larger than that in the x (0.15 mm) or y (0.11 mm) directions. Notice that, if the lens distortion is relatively small, our method is much better than Tsai's, and its accuracy is close to that of the nonlinear method (as in the above experiment). However, when the lens distortion coefficients are positive and relatively large, our method is only slightly better than Tsai's method, as shown in Fig. 22.
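The roughly fivefold amplification of the depth error can be checked with a standard back-of-the-envelope rule (not a formula from the paper): when the baseline b is much shorter than the depth z, a lateral error of e in locating a point grows to roughly (z / b) · e in depth.

```python
b_mm, z_mm = 300.0, 1500.0       # baseline and depth in the experiment
lateral_error_mm = 0.12          # between the observed 0.11 and 0.15 mm
depth_error_mm = (z_mm / b_mm) * lateral_error_mm
```

This gives 0.6 mm, which agrees with the measured error in the z direction.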

5 Conclusions

In this paper, we describe a new camera calibration technique that is computationally fast and can achieve very high calibration accuracy for 3-D computer vision applications. Our method is fast because it requires only linear computation and does not require any iterations. With a SPARCstation, the computation for the calibration can be done within a fraction of a second. Our method can achieve very high accuracy for 3-D estimation (i.e., small 3-D angular error) because the effect of the lens distortion is considered and all the information contained in the calibration points is used. We have shown that our new noncoplanar calibration method can achieve a 3-D angular error of 0.005

Fig. 18 3-D error in the x direction versus the number of calibration points with the distortion coefficient of the order of 10⁻⁴ mm⁻² (barrel distortion).


Fig. 19 3-D error in the y direction versus the number of calibration points with the distortion coefficient of the order of 10⁻⁴ mm⁻² (barrel distortion).

deg, or an accuracy of 1 part in 10,000 in 3-D measurement. This means that we will make only 0.1 mm of error in 3-D position estimation along the plane parallel to the image plane when the object is 1 m away from the camera. However, when the length of the baseline is much smaller than the depth, the estimation error in the z direction (the depth) is much larger than that in the x or y directions. When using good quality off-the-shelf lenses, our linear method is good enough for almost all 3-D applications, and further nonlinear iteration is not necessary. Note also that it may not be worth the effort to increase the calibration accuracy using nonlinear minimization techniques unless we can increase the accuracy of 2-D feature extraction accordingly.
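The equivalence between the two accuracy figures quoted above can be verified directly: a 3-D angular error of θ displaces a back-projected ray laterally by about tan(θ) per unit distance.

```python
import math

theta_deg = 0.005                              # 3-D angular error, deg
fraction = math.tan(math.radians(theta_deg))   # lateral error per unit depth
error_at_1m_mm = fraction * 1000.0             # object 1 m from the camera
```

Here tan(0.005 deg) ≈ 8.7 × 10⁻⁵, i.e., about 1 part in 10,000, or roughly 0.1 mm of lateral error at a 1-m range.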

Acknowledgments

The authors would like to thank Dr. Roger Tsai for his help in the initial stages of this work and the reviewers for their helpful suggestions. This research is supported in part by the National Science Council, Taiwan, under grant NSC 81-0408-E-001-04.

References

1. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3-D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Automat. RA-3(4), 323–344 (1987).




Fig. 16 3-D angular error versus the radial lens distortion coefficient.



Fig. 17 Picture of the stereo vision setup.




Fig. 20 3-D error in the z direction versus the number of calibration points with the distortion coefficient of the order of 10⁻⁴ mm⁻² (barrel distortion).


Fig. 21 3-D distance error versus the number of calibration points with the distortion coefficient of the order of 10⁻⁴ mm⁻² (barrel distortion).

2. W. Faig, "Calibration of close-range photogrammetry systems: mathematical formulation," Photogram. Eng. Remote Sensing 41(12), 1479–1486 (1975).

3. J. Weng, P. Cohen, and M. Herniou, "Calibration of stereo cameras using a non-linear distortion model," in Proc. IAPR 10th Int. Conf. on Pattern Recognition, pp. 246–253 (1990).

4. R. K. Lenz and R. Y. Tsai, "Techniques for calibration of the scale factor and image center for high accuracy 3-D machine vision metrology," IEEE Trans. Pattern Anal. Mach. Intell. 10(5), 713–720 (1988).

5. W. I. Grosky and L. A. Tamburino, "A unified approach to the linear camera calibration problem," IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 663–671 (1990).

6. O. D. Faugeras and G. Toscani, "The calibration problem for stereo," in Proc. Conf. on Computer Vision and Pattern Recognition, pp. 15–20 (1986).

7. T. M. Strat, "Recovering the camera parameters from a transformation matrix," in Proc. DARPA Image Understanding Workshop, pp. 264–271 (1984).

8. I. Sutherland, "Three-dimensional data input by tablet," Proc. IEEE 62(4), 453–461 (1974).

9. K. Kanatani and Y. Onodera, "Noise robust camera calibration using vanishing points," IEICE Trans. Inform. Syst. E74(10) (1991).

10. S. Ganapathy, "Decomposition of transformation matrices for robot vision," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 130–139 (1984).

11. Y. P. Hung, "Three dimensional surface reconstruction using a moving camera: a model-based probabilistic approach," PhD dissertation, Division of Engineering, Brown University, Providence, RI (1990); also Technical Report LEMS-63 (1989).

12. H. A. Martins, J. R. Birk, and R. B. Kelley, "Camera models based on data from two calibration planes," Comput. Graph. Image Process. 17, 173–180 (1981).


Fig. 22 3-D distance error versus the number of calibration points with the distortion coefficient of the order of 10⁻⁴ mm⁻² (pincushion distortion).

13. Y. P. Hung and S. W. Shih, "When should we consider lens distortion in camera calibration," in Proc. IAPR Workshop on Machine Vision Applications, Tokyo, pp. 367–370 (1990).

14. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York (1973).

Sheng-Wen Shih received the MS degree from the National Taiwan University in electrical engineering in 1990. Since then he has joined the Institute of Information Science, Academia Sinica. He is now also a PhD student in the Institute of Electrical Engineering at the National Taiwan University. His current research interests are in active vision, image sequence analysis, and robotics.

Yi-Ping Hung received the BSEE degree from the National Taiwan University in 1982, the ScM degree in electrical engineering, the ScM degree in applied mathematics, and the PhD degree in electrical engineering, all from Brown University, in 1987, 1988, and 1990, respectively. He is currently an associate research fellow in the Institute of Information Science, Academia Sinica, and an adjunct associate professor at the National Taiwan University. His research interests include computer vision and robotics.

Wei-Song Lin received the BS degree in engineering science and the MS degree in electrical engineering from National Cheng Kung University, Taiwan, in 1973 and 1975, respectively, and attained a distinguished paper award from the Association of Chinese Electrical Engineers. He received the PhD degree from the Institute of Electrical Engineering of National Taiwan University in 1982 and was an associate professor there from 1983 to 1987. From 1982 to 1984 he was also the head of the Electronic Instrument Division of Ching Ling Industrial Research Center, where he worked mainly on design and implementation of electronic instruments. Since 1987 he has been a professor at National Taiwan University. He is now a member of the International Association of Science and Technology for Development and the National Committee of the International Union of Radio Science. He received an Excellent Research Award from the National Science Council in 1991. His current research interests are in the field of computer control, computer-based sensing and instrumentation, analysis and design of dynamic control systems, system engineering, and automation.


[Figure legends for Figs. 18 to 22: Figs. 18 to 21 use κleft = −2.1 × 10⁻⁴ mm⁻² and κright = −1.3 × 10⁻⁴ mm⁻² (barrel distortion); Fig. 22 uses κleft = +3 × 10⁻⁴ mm⁻² and κright = +4 × 10⁻⁴ mm⁻² (pincushion distortion). Each plot compares θT (Tsai's method), θLK (our method), and θN (the nonlinear method) versus the number of calibration points.]