
Lens distortion models evaluation

Carlos Ricolfe-Viala* and Antonio-Jose Sanchez-Salmeron

Department of Systems Engineering and Automatic Control, Polytechnic University of Valencia, Camino de Vera s/n, Valencia, D.C. 46022, Spain

*Corresponding author: [email protected]

Received 7 July 2010; revised 12 August 2010; accepted 27 August 2010; posted 30 August 2010 (Doc. ID 131273); published 19 October 2010

Many lens distortion models exist with several variations, and each distortion model is calibrated by using a different technique. If someone wants to correct lens distortion, choosing the right model can be a very difficult task. Calibration depends on the chosen model, and some methods have unstable results. Normally, the distortion model containing radial, tangential, and prism distortion is used, but it does not represent high distortion accurately. The aim of this paper is to compare different lens distortion models to define the one that obtains better results under some conditions and to explore whether some model can represent high and low distortion adequately. Also, we propose a calibration technique to calibrate several models under stable conditions. Since performance is strongly conditioned by the calibration technique, the metric lens distortion calibration method is used to calibrate all the evaluated models. © 2010 Optical Society of America

OCIS codes: 150.0155, 150.1135, 150.1488.

1. Introduction

Lens distortion calibration is mandatory for systems where rigorous accuracy is demanded. In laser galvanometric scanning systems, where accuracy of better than 1 part per 10,000 is required, distortion calibration is necessary to correct system positioning [1]. This system is commonly used in various image display fields, such as medical imaging, laser display, and material processing. Fringe projection profilometry has been exhaustively studied and is widely used in many fields, such as product inspection, reverse engineering, and computer animation [2]. The calibration procedure greatly influences the accuracy and precision of a fringe projection profilometry system, as is demonstrated in [3,4].

In 1919 Conrady introduced camera decentering lens distortion. Fifty years later, Brown [5,6] proposed the radial, decentering, and prism distortion model, which has been extensively accepted to represent low distortion of the image [7–10]. Some modifications to this model that focus on the mathematical treatment of the initial model have been reported [11–16], or approach it from a conceptual point of view, without any quantitative evaluation [17]. A nonparametric model, which considers only radial distortion, was proposed by Hartley and Kang in [18].

When fish-eye or other high-distortion lenses are used, nonlinear distortion is built in to obtain a wide-angle image of the scene. In these cases, the higher-order terms of a radial, decentering, and prism distortion series and the distortion center have to be considered [13,19], although they do not fully satisfy the camera lens distortion [10]. On the other hand, since higher-order terms of radial, decentering, and prism distortion are not able to represent this effect in the image, it is better to use a distortion model that tries to mimic this high-distortion effect in the image [13]. Following this idea, alternative nonlinear distortion models arise. For example, [12] used a logarithmic or fish-eye transform distortion model to represent fish-eye lens distortion. The fish-eye transformation model is based on the observation that fish eyes have high resolution at the fovea and show a nonlinear decrease toward the periphery. Devernay and Faugeras [10] introduced what they call the field-of-view distortion model of the corresponding ideal fish-eye lens. In this case, the distance of an image point from the principal

0003-6935/10/305914-15$15.00/0 © 2010 Optical Society of America

5914 APPLIED OPTICS / Vol. 49, No. 30 / 20 October 2010


point is usually roughly proportional to the angle made by the corresponding three-dimensional (3D) point, the optical center, and the optical axis. Fitzgibbon [20] modified the radial distortion model and suggested the use of the division model, which can express high distortion at a much lower order than the radial model. In particular, for many cameras, one parameter suffices. Svoboda [21] was first to generalize epipolar geometry from pinhole cameras to catadioptric systems. Micusik and Pajdla in [22] extended the division model by using trigonometric functions to cope with a variety of wide-angle lenses and catadioptric cameras. Geyer and Daniilidis [23] lifted image points to a four-dimensional “circle space” to obtain a 4 × 4 matrix similar to the fundamental matrix, which can be fit to image points. Also, Sturm [24] uses another lifting strategy to present models for backprojection of rays into images from affine, perspective, and para-catadioptric cameras. Claus and Fitzgibbon [25] proposed an extension of the lifting strategies [24,25] to build a powerful general-purpose model for a range of highly distorted cameras. It is called the rational function lens distortion model. This model is a derivative of the rational cubic camera described by Hartley and Saxena [26]. This model follows the idea presented by Grossberg and Nayar [27] of a general imaging model that performs a mapping from incoming scene rays to photosensitive elements on the image detector. Many other authors have designed several additional models based on the idea of a unique model to represent a nontraditional camera using a curved mirror [28–30].

Several methods exist to compute lens distortion model parameters. Some of them are called nonmetric or self-calibration methods, since no knowledge of the scene points, the calibration objects, or any known structure is needed. These methods use geometric invariants of some image features, such as straight lines [6,10,31–33], vanishing points [34], or the image of a sphere [35]. It has been reported in [6,32] that including both the distortion center and the decentering coefficients in nonlinear optimization may lead to instabilities of the nonmetric lens distortion estimation algorithm. Using epipolar geometry, other methods utilize correspondences between points in different images from multiple views to compute camera distortion parameters [20,36,37]. Kang [38] showed how the system parameters and motion could be simultaneously extracted by nonlinear estimation, in a manner similar to Zhang’s technique [36] for estimation of radial lens distortion. Geyer and Daniilidis described in [23] an algorithm that permits linear estimation of egomotion and lens geometry. These approaches are not easy to solve and are likely to produce some false data in the distortion algorithm. On the other hand, using calibration templates, the pinhole model is computed together with distortion parameters [7–9,13]. Because both models are coupled, the iterative searching step of the calibration process could end on a local

minimum. To resolve this problem, Hartley and Kang [18] proposed a method for simultaneously calibrating the radial distortion function of the camera, along with the other internal calibration parameters. Finally, in the absence of any calibration information or explicit knowledge of the imaging device, distortions can be removed by means of specific higher-order correlations that lens distortion introduces in the frequency domain. These correlations can be detected by using tools from polyspectral analysis, and the amount of distortion is then estimated by minimizing these correlations [39]. Taking into account the experience with the calibration of the pinhole camera model, metric calibration methods are always more stable and give better results than nonmetric methods, called self-calibration.

At this point, if camera lens distortion must be removed, selecting the model and the calibration technique could be a very difficult task. On the evaluation of lens distortion models, Schneider et al. [40] published research that examined the accuracy of only fish-eye projection functions using spatial resection and bundle adjustment. Also, Hughes et al. [41] extended the Schneider et al. study, including some distortion models apart from projection functions but excluding the rational function model and assuming that the distortion center is equal to the previously computed principal point, and that the tangential (decentering) distortion is negligible. In our experience, the rational function model obtains better results and the computed model fits the data accurately if the distortion center is computed together with the distortion model. We have computed a particular distortion center for each model, and they have been compared with the principal point computed using the method described in [42]. The distortion center for each model is different from the principal point. Similar results were obtained by Hartley and Kang in [18]. On the other hand, if low distortion is modeled with the radial, tangential, and prism distortion model, tangential (decentering) distortion is negligible and can be compensated for just by using distortion center estimation [37]. If high distortion or fish-eye lenses are modeled, the tangential distortion component helps get better results. We have improved the polynomial distortion model performance by including a tangential distortion component.

In this paper we compare lens distortion models by establishing a common criterion to define the validity of each model. We propose using a metric method for calibrating all lens distortion models and establishing which model represents distortion accurately. This common criterion will help us to compute all models easily under equal circumstances and under stable conditions. With the metric method, all evaluated lens distortion models can be fully defined without risk of instabilities and without any additional work, since metric information from the scene is used to compute camera pinhole model parameters [43]. Metric calibration consists of correcting distorted points in the image by following ratios and


constraints between them, which hold for the points in the scene used for lens distortion calibration. Ideal points represent the correct positions of detected points in the image. With both sets of points, the distorted points and the corrected ones, distortion models are computed easily.

Because all of the distortion models in the paper deal with functions that convert rectilinear distances to distorted distances, or vice versa, the proposed method can only deal with cameras of less than 180° field of view. This is because, for a ray of greater than 90° incident angle on the camera, the projection of the point onto the rectilinear image makes no sense.

This paper is organized as follows. First, the models to be compared are explained in a few words. Second, point correction is briefly described. Third, the models are computed using both sets of points, those extracted from the image and the ideal corrected ones. The paper ends with experimental results and conclusions.

2. Distortion Models

The distortion model is usually given as a mapping from the distorted image coordinates qd = (ud, vd), which are observable in the images, to the undistorted image coordinates, which are not physically measurable, qp = (up, vp) [11]. This mapping has different models according to the type of lens that produces the distortion. Some models are elementary algebraic functions, and others are more transcendental. In the following subsections, brief descriptions of the several models that are going to be compared are given. They have been selected according to their relevance in the state of the art. According to the chosen model, nonlinear distortion is represented by a different set of parameters, which are computed from detected and corrected points in the image by using the metric lens distortion calibration method.

A. Radial and Tangential Distortion Model

This is the traditional model. In this case the mapping between the observed point coordinates in the image qd = (ud, vd) and the undistorted ones qp = (up, vp) is given by

up = ud − δu,   vp = vd − δv,   (1)

such that

δu = Δud·(k1·rd² + k2·rd⁴ + …) + p1·(3Δud² + Δvd²) + 2p2·Δud·Δvd + s1·rd²,

δv = Δvd·(k1·rd² + k2·rd⁴ + …) + 2p1·Δud·Δvd + p2·(Δud² + 3Δvd²) + s2·rd².   (2)

This represents a nonlinear camera polynomial distortion model, where rd is the distance from the point qd = (ud, vd) to the distortion center, defined as c = (u0, v0), with Δud = ud − u0 and Δvd = vd − v0; rd is computed as rd² = Δud² + Δvd². Radial distortion is modeled using coefficients k1, k2, k3, …, and produces a displacement of the point position along the line connecting it with the distortion center, caused by defects in the lens curve. Negative radial displacement is also called barrel distortion; positive radial displacement is known as pincushion distortion. Another kind of distortion, called image decentering or tangential distortion, arises when the optical lens centers are not on the same line. This is modeled by p1, p2, p3, …. Finally, if the camera lenses are not correctly aligned and are not perpendicular to the camera optical axis, prism distortion occurs, and it is modeled by s1, s2, s3, …. The sum of the three distortions previously characterized is the effective distortion. In view of the physical meaning of each distortion parameter and the possible coupling between them, a simplified nonlinear distortion model can be obtained [44]. Normally, terms higher than 2 are comparatively insignificant and are discarded [7,8].
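As a concrete illustration of Eqs. (1) and (2), the second-order model can be sketched in a few lines of Python; the function name and any coefficient values used below are ours, not from the paper:

```python
import math

def distort_rtp(ud, vd, center, k1, k2, p1, p2, s1, s2):
    """Second-order radial + tangential + prism model of Eqs. (1)-(2).

    Returns the corrected point (up, vp) = (ud - delta_u, vd - delta_v).
    """
    u0, v0 = center
    du, dv = ud - u0, vd - v0            # Delta u_d, Delta v_d
    r2 = du * du + dv * dv               # rd^2
    radial = k1 * r2 + k2 * r2 * r2      # k1*rd^2 + k2*rd^4
    delta_u = du * radial + p1 * (3 * du * du + dv * dv) + 2 * p2 * du * dv + s1 * r2
    delta_v = dv * radial + 2 * p1 * du * dv + p2 * (du * du + 3 * dv * dv) + s2 * r2
    return ud - delta_u, vd - delta_v
```

With all coefficients zero the point is unchanged; with only k1 nonzero, the correction moves the point along the line through the distortion center, as expected for purely radial distortion.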

B. Logarithmic Fish-Eye Lenses Distortion Model

Basu and Licardie [12] based the logarithmic fish-eye transformation model and the polynomial fish-eye distortion model on the observation that fish eyes have high resolution at the fovea and show a nonlinear decrease toward the periphery. Let (rp, α) denote the polar coordinates of a point qp = (up, vp) in the image, where rp² = Δup² + Δvp² and α = arctan(Δvp/Δup); the distorted polar coordinates (rd, α*) can be obtained as

rd = s·log(1 + λ·rp),   α* = α.   (3)

The corresponding distorted Cartesian coordinates qd = (ud, vd) are given by

ud = rd·cos α*,   vd = rd·sin α*,   (4)

where s is a scaling factor and λ controls the amountof distortion over the entire distorted image. The in-verse mapping is given by

rd = √(Δud² + Δvd²),   α* = arctan(Δvd/Δud),

rp = (e^{rd/s} − 1)/λ,   α = α*,

up = rp·cos α,   vp = rp·sin α.   (5)
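A minimal Python sketch of the forward mapping of Eqs. (3) and (4) and the inverse of Eq. (5), with coordinates taken relative to the distortion center (which we add back explicitly, a detail Eqs. (4) and (5) leave implicit); the function names are ours:

```python
import math

def log_fisheye_distort(up, vp, center, s, lam):
    # Eqs. (3)-(4): rd = s*log(1 + lambda*rp), angle preserved
    u0, v0 = center
    dx, dy = up - u0, vp - v0
    rp = math.hypot(dx, dy)
    a = math.atan2(dy, dx)
    rd = s * math.log(1.0 + lam * rp)
    return u0 + rd * math.cos(a), v0 + rd * math.sin(a)

def log_fisheye_undistort(ud, vd, center, s, lam):
    # Eq. (5): rp = (e^{rd/s} - 1)/lambda, angle preserved
    u0, v0 = center
    dx, dy = ud - u0, vd - v0
    rd = math.hypot(dx, dy)
    a = math.atan2(dy, dx)
    rp = (math.exp(rd / s) - 1.0) / lam
    return u0 + rp * math.cos(a), v0 + rp * math.sin(a)
```

Applying the two functions in sequence recovers the original point up to floating-point error.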

C. Polynomial Fish-Eye Distortion Model

In this case, the polynomial model is similar to the logarithmic fish-eye lens distortion model, except that rd = G(rp), where rp is the radial distance in the undistorted image, rd is the distorted measure, and G(rp) has the form

G(rp) = a0 + a1·rp + a2·rp² + … + an·rp^n = Σ_{i=0}^{k} ai·rp^i,   (6)

where k represents the degree of the polynomial fish-eye distortion model. This is the polynomial fish-eye


transform and differs from the logarithmic model in that G(rp) is a polynomial in rp. Basu and Licardie [12] found that the fourth or fifth order for the polynomial fish-eye transformation model is a reasonably good approximation of the average distortion.

To improve the modeling power of the polynomial distortion model, we have added the tangential (decentering) distortion component, which is not negligible if high-distortion or fish-eye lenses are modeled. In general, logarithmic and polynomial models work like the radial distortion model when just the radial distortion is considered. With these distortion models, image correction is considered radial from the distortion center. Undistortion is done by following the line through the pixel location and the distortion center. Normally, if high distortion is present, when pixel qd is undistorted to qp, a correction of the angle should be done. It has been demonstrated that the vast majority of tangential distortion can be compensated for just by using distortion center estimation [18,37], but this is true only when low distortion is modeled with the radial, tangential, and prism distortion model. This angle correction is included in the tangential component of the radial and tangential model. Using the radial and tangential model, the location of the undistorted pixel will be off the straight line through the distortion center and the pixel qd. Therefore, to improve the performance of the polynomial model and to be able to represent high-distortion or fish-eye lenses, an angle correction is added to the original model. This correction can be based on the angle of the pixel in the image referred to the distortion center, or on the distance of the pixel to the distortion center. Having tried both options, we propose a correction based on the angle, since it yields better results. The improved polynomial distortion model obtains the distorted polar coordinates (rd, αd) as follows:

rd = a0 + a1·rp + a2·rp² + … + an·rp^n = Σ_{i=0}^{ka} ai·rp^i,

αd = b0 + b1·αp + b2·αp² + … + bn·αp^n = Σ_{j=0}^{kb} bj·αp^j,   (7)

where rp² = Δup² + Δvp², αp = arctan(Δvp/Δup), Δup = up − u0, Δvp = vp − v0, and the distortion center is (u0, v0). ka and kb are the degrees of the polynomial fish-eye distortion model. They can be different.
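A sketch of the improved polynomial model of Eq. (7); the function name is ours, the coefficient lists are purely illustrative, and we use `atan2` rather than arctan(Δvp/Δup) for quadrant safety:

```python
import math

def improved_poly_distort(up, vp, center, a, b):
    """Improved polynomial model, Eq. (7): a polynomial in the radius plus a
    polynomial correction of the angle.

    `a` = [a0, ..., a_ka], `b` = [b0, ..., b_kb]; the lists may have
    different degrees, as the text allows.
    """
    u0, v0 = center
    dx, dy = up - u0, vp - v0
    rp = math.hypot(dx, dy)                      # undistorted radius
    ap = math.atan2(dy, dx)                      # undistorted angle
    rd = sum(ai * rp**i for i, ai in enumerate(a))   # Eq. (7), radius
    ad = sum(bj * ap**j for j, bj in enumerate(b))   # Eq. (7), angle
    return u0 + rd * math.cos(ad), v0 + rd * math.sin(ad)
```

With identity coefficients a = b = [0, 1], the mapping leaves the point unchanged, which is a convenient sanity check.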

D. Field-of-View Distortion Model

Devernay and Faugeras [10] introduced the field-of-view distortion model of the corresponding ideal fish-eye lens, in which the distance of an image point fromthe principal point is usually roughly proportional tothe angle formed by the corresponding 3D point, theoptical center, and the optical axis. Thus, the angularresolution is roughly proportional to the image reso-lution along an image radius. The correspondingnonlinear distortion model and its inverse are

rd = (1/ω)·arctan(2·rp·tan(ω/2)),   rp = tan(rd·ω) / (2·tan(ω/2)).   (8)

This model has only one parameter, which is the field of view ω of the corresponding ideal fish-eye lens. If this one-parameter model is not sufficient to model the complex distortion of fish-eye lenses, the polynomial distortion model represented by Eq. (2) can be applied before Eq. (8), with k1 = 0 (ω as a first-order distortion parameter would be redundant with k1). A second-order field-of-view distortion model will have k2 ≠ 0, and a third-order one will have k3 ≠ 0.
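Because the model has a single parameter ω, both directions of Eq. (8) reduce to one line each; a sketch on radial distances only (function names ours):

```python
import math

def fov_distort(rp, omega):
    # Eq. (8), forward: rd = (1/omega) * arctan(2 * rp * tan(omega/2))
    return math.atan(2.0 * rp * math.tan(omega / 2.0)) / omega

def fov_undistort(rd, omega):
    # Eq. (8), inverse: rp = tan(rd * omega) / (2 * tan(omega/2))
    return math.tan(rd * omega) / (2.0 * math.tan(omega / 2.0))
```

The two expressions are exact inverses of each other, so a round trip recovers the input radius.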

E. Division Model

Fitzgibbon [20] introduced the division model to approximate the radial distortion model expressed as the polynomial model without tangential correction:

rd = rp / (1 + β1·rp² + β2·rp⁴ + …).   (9)

The most remarkable advantage of the division model over the polynomial model is that it is able to express high distortion at a much lower order. In particular, for many cameras, one parameter suffices [20,25]. In what follows, we use a single-parameter division model.
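A sketch of the single-parameter division model; the inverse (our addition, not in the text) solves the quadratic β·rd·rp² − rp + rd = 0 for rp, taking the root that reduces to rp = rd as β → 0:

```python
import math

def division_distort(rp, beta):
    # Eq. (9) with a single parameter: rd = rp / (1 + beta * rp^2)
    return rp / (1.0 + beta * rp * rp)

def division_undistort(rd, beta):
    # Invert rd = rp/(1 + beta*rp^2): beta*rd*rp^2 - rp + rd = 0,
    # keeping the root continuous with the beta -> 0 limit rp = rd.
    if rd == 0.0 or beta == 0.0:
        return rd
    disc = 1.0 - 4.0 * beta * rd * rd
    return (1.0 - math.sqrt(disc)) / (2.0 * beta * rd)
```

Note that the closed-form inverse only exists while the discriminant stays nonnegative, i.e., within the radial range the single-parameter model can represent.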

F. Rational Function Distortion Model

In this case, the mapping between the distorted image coordinates qd = (ud, vd) and the undistorted ones qp = (up, vp) is done by including ud and vd in higher-order polynomials, in particular, quadratic:

d(ud, vd) = [ a11·ud² + a12·ud·vd + a13·vd² + a14·ud + a15·vd + a16
              a21·ud² + a22·ud·vd + a23·vd² + a24·ud + a25·vd + a26
              a31·ud² + a32·ud·vd + a33·vd² + a34·ud + a35·vd + a36 ].   (10)


This model may be written as a linear combination of the distortion parameters, in a 3 × 6 matrix A, and a six-vector x of monomials in ud and vd. x is defined as the lifting of the image point (ud, vd) to a six-dimensional space:

x(ud, vd) = [ud², ud·vd, vd², ud, vd, 1]^T.   (11)

The rational function model is given by

d(ud, vd) = A·x(ud, vd),   (12)

where d is a vector in the camera coordinates that represents the ray direction along which pixel (ud, vd) samples. Undistorted image coordinates (up, vp) are computed by the perspective projection of d:

qp = (up, vp) = ( a1^T·x(ud, vd) / a3^T·x(ud, vd),   a2^T·x(ud, vd) / a3^T·x(ud, vd) ),   (13)

where the rows of A are denoted by a1^T, a2^T, a3^T.
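A minimal sketch of the lifting of Eq. (11) and the projection of Eqs. (12) and (13), with A given as a 3 × 6 list of rows (function names ours):

```python
def lift(ud, vd):
    # Eq. (11): lift the image point to six monomials
    return [ud * ud, ud * vd, vd * vd, ud, vd, 1.0]

def rational_undistort(ud, vd, A):
    # Eqs. (12)-(13): d = A*x, then perspective projection of d
    x = lift(ud, vd)
    d = [sum(a * m for a, m in zip(row, x)) for row in A]
    return d[0] / d[2], d[1] / d[2]
```

A matrix whose three rows simply pick out ud, vd, and 1 leaves the point unchanged, which illustrates that the pinhole case is a special instance of the model.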

3. Metric Lens Distortion Calibration

The metric point correction of detected points in the image is based on the idea described in [43] for lens distortion calibration. The aim is to maintain the perspective projection in the image. This means that an image of a chessboard should preserve its proportions according to a perspective projection. Therefore, cross ratios and straight lines in the calibration template will remain true in the image. With these restrictions, distorted points detected in the image are corrected to satisfy the restrictions of the template. Distortion correction is done with images of a chessboard, since the points of a chessboard are arranged in straight lines that are parallel and perpendicular to each other. With both sets of points, the distorted points detected in the image and the true corrected points, several distortion models can be calibrated.

If the image coordinates of template points p1, p2, p3, and p4 are q_{1d} = (u_{1d}, v_{1d}), q_{2d} = (u_{2d}, v_{2d}), q_{3d} = (u_{3d}, v_{3d}), and q_{4d} = (u_{4d}, v_{4d}), the following equation based on cross-ratio invariability arises:

CR(q_{1d}, q_{2d}, q_{3d}, q_{4d}) = (s13·s24) / (s14·s23) = CR(p1, p2, p3, p4),   (14)

where sij represents the distance between the points qi and qj, defined as sij² = (ui − uj)² + (vi − vj)², and p1, p2, p3, p4 are four points of the planar calibration template lying on the same line. CR(p1, p2, p3, p4) is equal for all sets of four points in the planar template, which are equally distributed, and, in consequence, any image of these sets of four points should satisfy CR(p1, p2, p3, p4) also. Distorted coordinates q_{i,d} will satisfy the cross ratio CR(p1, p2, p3, p4) if they are corrected to their ideal positions q_{i,p}. Points in the image are separated into n sets of m points that form

straight lines, where n is the number of straight lines in the calibration template and m is the number of points in each line. So, q_{k,l} is point k of the straight line l in the image, l = 1…n, k = 1…m. To find the ideal position q_{k,l,p} of each distorted point in the image q_{k,l,d}, a nonlinear search, starting from q_{k,l,d}, must be done to minimize the following error function:

J_CR = Σ_{l=1}^{n} Σ_{k=1}^{m−3} ‖ CR(q_{k,l}, q_{k+1,l}, q_{k+2,l}, q_{k+3,l}) − CR(p1, p2, p3, p4) ‖.   (15)

CR(p1, p2, p3, p4) is computed previously, when the planar template is designed.
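The cross ratio of Eq. (14) is straightforward to evaluate; a minimal sketch (function name ours). For four equally spaced collinear points it equals 4/3, and it is unchanged under a uniform scaling of the points:

```python
import math

def cross_ratio(q1, q2, q3, q4):
    # Eq. (14): CR = (s13 * s24) / (s14 * s23), sij = Euclidean distance
    def s(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (s(q1, q3) * s(q2, q4)) / (s(q1, q4) * s(q2, q3))
```

This is the quantity compared against the template value inside the error function of Eq. (15).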

On the other hand, point correction should also make each point fit its line perfectly. So, if a point qi = (ui, vi) fits in the straight line l, the following expression is true:

al·ui + bl·vi + cl = 0,   (16)

where al, bl, cl represent the set of parameters that defines the straight line l. Consequently, the following residual must be zero if all points fit the straight lines perfectly:

J_ST = Σ_{l=1}^{n} Σ_{i=1}^{m} ‖ al·ui + bl·vi + cl ‖.   (17)

To correct image point positions, an error functionis minimized; it includes cross-ratio invariability andstraight line restriction:

J_CP = Σ_{l=1}^{n} [ Σ_{i=1}^{m} ‖ al·ui + bl·vi + cl ‖ + Σ_{k=1}^{m−3} ‖ CR(q_{k,l}, q_{k+1,l}, q_{k+2,l}, q_{k+3,l}) − CR(p1, p2, p3, p4) ‖ ].   (18)

The result is a correction of the distorted image points q_{i,d} to their true positions q_{i,p}. For the proposed method, the number of images required to resolve the model that maps from q_{i,d} to q_{i,p} is only one, although several images can be used by arranging all image points together.

The next step consists of computing the parameters of the lens distortion models that map from the distorted points q_{i,d} to the ideal ones q_{i,p}. According to the estimated model, different steps should be followed. The following subsections describe how to compute the models described in Section 2.

A. Radial and Tangential Distortion Model

First, radial, tangential, and prism distortion iscalibrated. We consider only a second-order radial,tangential, and prism distortion model, but it isstraightforward to extend the result if higher-order


terms are used. Assuming second-order terms for radial, tangential, and prism distortions, the lens distortion model is parameterized with k1, k2, p1, p2, s1, s2, and u0, v0, where ki models radial distortion, pi represents tangential distortion, si is the prism distortion, and u0, v0 represent the distortion center. By transforming Eq. (2) into matrix form, the mapping between the observed distorted points q_{i,d} and the true corrected points q_{i,p} is given by the following expression:

[ Δu_{i,d}·r_{i,d}²   Δu_{i,d}·r_{i,d}⁴   3Δu_{i,d}² + Δv_{i,d}²   2·Δu_{i,d}·Δv_{i,d}   r_{i,d}²   0
  Δv_{i,d}·r_{i,d}²   Δv_{i,d}·r_{i,d}⁴   2·Δu_{i,d}·Δv_{i,d}   Δu_{i,d}² + 3Δv_{i,d}²   0   r_{i,d}² ] · [k1, k2, p1, p2, s1, s2]^T = [δ_{u,i}, δ_{v,i}]^T.   (19)

To solve the radial, tangential, and prism nonlinear camera lens distortion model, we first give a closed-form solution, which is improved with a nonlinear optimization stage. For the closed-form solution step, the center of distortion c = (u0, v0) is taken to be the same as the principal point, considered to be at the center of the image. Given n·m points, we can stack all equations together to obtain a total of 2·n·m equations, or, in matrix form, W·x = w, where x = [k1, k2, p1, p2, s1, s2]^T. The linear least squares solution is given by x = (Wᵀ·W)⁻¹·Wᵀ·w. The above solution is obtained by minimizing an algebraic distance that is not physically meaningful. We can refine it through maximum likelihood inference, obtained by minimizing the following error function:

J_NLPD = Σ_{i=1}^{n·m} [ ‖ δ_{u,i} − Δu_{i,d}·(k1·r_{i,d}² + k2·r_{i,d}⁴) − p1·(3Δu_{i,d}² + Δv_{i,d}²) − 2p2·Δu_{i,d}·Δv_{i,d} − s1·r_{i,d}² ‖ + ‖ δ_{v,i} − Δv_{i,d}·(k1·r_{i,d}² + k2·r_{i,d}⁴) − 2p1·Δu_{i,d}·Δv_{i,d} − p2·(Δu_{i,d}² + 3Δv_{i,d}²) − s2·r_{i,d}² ‖ ].   (20)

Equation (20) is a nonlinear minimization problem that is solved with the Levenberg–Marquardt algorithm. An initial guess of the nonlinear camera lens distortion parameters k1, k2, p1, p2, s1, s2 is obtained by using the closed-form solution, and (u0, v0) is initialized with the principal point. The nonlinear search always converges to a solution, also solving for the distortion center. In some cases, the distortion center can create instabilities because the distortion parameters are coupled with the distortion center. Instabilities mean that the nonlinear search does not converge to a correct solution. In this case, no instability occurs, even when computing an extended set of distortion parameters.
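The closed-form step of Eq. (19) can be sketched with numpy's least-squares solver; the function name and the synthetic, normalized test data are ours:

```python
import numpy as np

def solve_rtp_closed_form(pts_d, pts_p, center):
    """Closed-form solution of Eq. (19): two rows per point, solved as
    W*x = w in the least-squares sense, with x = [k1, k2, p1, p2, s1, s2]."""
    u0, v0 = center
    rows, rhs = [], []
    for (ud, vd), (up, vp) in zip(pts_d, pts_p):
        du, dv = ud - u0, vd - v0
        r2 = du * du + dv * dv
        rows.append([du * r2, du * r2 * r2, 3 * du * du + dv * dv,
                     2 * du * dv, r2, 0.0])
        rows.append([dv * r2, dv * r2 * r2, 2 * du * dv,
                     du * du + 3 * dv * dv, 0.0, r2])
        rhs += [ud - up, vd - vp]        # delta_u,i and delta_v,i
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

On synthetic data generated from known parameters, the stacked system is consistent and the solver recovers the parameters essentially exactly.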

B. Logarithmic Fish-Eye Lenses Distortion Model

In this case, the logarithmic distortion model tries to mimic the effects produced by fish-eye lenses by using Eq. (3), where r_{i,d}² = Δu_{i,d}² + Δv_{i,d}², r_{i,p}² = Δu_{i,p}² + Δv_{i,p}², and Δu_{i,d} = u_{i,d} − u0, Δv_{i,d} = v_{i,d} − v0, Δu_{i,p} = u_{i,p} − u0, and Δv_{i,p} = v_{i,p} − v0, considering the distortion center c = (u0, v0). The parameters to be adjusted are the scaling factor s and λ, which controls the amount of distortion over the entire distorted image. To adjust them, the following error function is minimized:

J_FET = Σ_{i=1}^{n·m} ( r_{i,d} − s·log(1 + λ·r_{i,p}) )².   (21)

Unfortunately, the distortion parameters do not appear linearly in the equation and, therefore, a closed-form solution cannot be defined. Instead, successive evaluation techniques and Newton’s method can be used to minimize Eq. (21) and resolve the camera distortion parameters.

C. Polynomial Fish-Eye Distortion Model

Using the polynomial distortion model, a set of n·m points per image gives n·m pairs (r_{i,d}, r_{i,p}). To resolve the angle correction αd, the same set of n·m points gives n·m pairs (α_{i,d}, α_{i,p}). The objective of the least squares method is to minimize the following error function:

J_PFET = Σ_{i=1}^{n·m} ( r_{i,d} − Σ_{t=0}^{ka} at·r_{i,p}^t )² + Σ_{i=1}^{n·m} ( α_{i,d} − Σ_{t=0}^{kb} bt·α_{i,p}^t )²,   (22)

where ka and kb represent the degrees of the polynomial fish-eye distortion model for the radius and the angle. In this case, a fifth order has been considered for ka and kb. To minimize this set of linear equations, the

20 October 2010 / Vol. 49, No. 30 / APPLIED OPTICS 5919

Page 7: Lens distortion models evaluation

Levenberg–Marquart algorithm has been used. Thedistortion center is initialized with the principalpoint, and initial values for a1…5 and b1…5 are ob-tained with a closed-form solution by using the n ·m pairs ðri;d; ri;pÞ for a1…5 and ðαi;d; αi;pÞ for b1…5.For a1…5, the following expression arises:

[ 1  r_{1,p}    r_{1,p}²    r_{1,p}³    r_{1,p}⁴    r_{1,p}⁵   ]   [ a_0 ]   [ r_{1,d}   ]
[ …  …          …           …           …           …          ] · [  ⋮  ] = [  ⋮        ]   (23)
[ 1  r_{n·m,p}  r_{n·m,p}²  r_{n·m,p}³  r_{n·m,p}⁴  r_{n·m,p}⁵ ]   [ a_5 ]   [ r_{n·m,d} ]
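A sketch of this closed-form initialization, with NumPy's least-squares solver standing in for the normal-equation solution (fit_radial_polynomial is a hypothetical helper name):

```python
# Sketch: closed-form initialization of the radial coefficients a0..a5 of
# Eq. (23) by linear least squares on the Vandermonde system W @ a = r_d.
import numpy as np

def fit_radial_polynomial(r_d, r_p, degree=5):
    # W[i] = [1, r_p[i], r_p[i]^2, ..., r_p[i]^degree]
    W = np.vander(r_p, degree + 1, increasing=True)
    a, *_ = np.linalg.lstsq(W, r_d, rcond=None)
    return a  # coefficients a0..a5; b0..b5 follow identically from the angles
```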

If Eq. (23) is expressed as W · x = w, where x = [a_0, a_1, a_2, a_3, a_4, a_5]^T, the linear least squares solution is given by x = (W^T · W)^{−1} · W^T · w. Initialization of b_{0…5} is similar to that of a_{0…5}, but uses the pairs (α_{i,d}, α_{i,p}).

D. Field-of-View Distortion Model

Adjusting the field-of-view distortion model is quite similar to adjusting the logarithmic fish-eye distortion model. In this case, the function to be minimized is

J_FOV = Σ_{i=1}^{n·m} [r_{i,d} − (1/ω) · arctan(2 · r_{i,p} · tan(ω/2))]².   (24)

Similar to Eq. (21), Eq. (24) can be minimized by using successive evaluation techniques.
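Since Eq. (24) has the single parameter ω, a bounded one-dimensional minimizer can stand in for the successive evaluation technique; a sketch, assuming normalized radii and ω given in radians:

```python
# Sketch: fitting the single field-of-view parameter omega of Eq. (24) by
# bounded one-dimensional minimization over (0, pi).
import numpy as np
from scipy.optimize import minimize_scalar

def fov_model(r_p, omega):
    return np.arctan(2.0 * r_p * np.tan(omega / 2.0)) / omega

def fit_fov(r_d, r_p):
    cost = lambda omega: np.sum((r_d - fov_model(r_p, omega)) ** 2)
    return minimize_scalar(cost, bounds=(1e-3, np.pi - 1e-3), method='bounded').x
```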

E. Division Model

In this case, a first-order division model is going to be adjusted:

r_d = r_p / (1 + β · r_p²).   (25)

Given a set of n·m points per image, we have n·m pairs (r_{i,d}, r_{i,p}), which are used to initialize β as follows:

[ r_{1,p}² · r_{1,d}     ]       [ r_{1,p} − r_{1,d}     ]
[ ⋮                      ] · β = [ ⋮                     ]   (26)
[ r_{n·m,p}² · r_{n·m,d} ]       [ r_{n·m,p} − r_{n·m,d} ]

If Eq. (26) is expressed as W · β = w, the initial value for β is β = (W^T · W)^{−1} · W^T · w. To refine the linear solution and the distortion center, the following error function is minimized with the Levenberg–Marquardt algorithm:

J_DM = Σ_{i=1}^{n·m} [r_{i,d} − r_{i,p} / (1 + β · r_{i,p}²)]².   (27)

The distortion center (u_0, v_0) is initialized with the principal point.
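Both stages of the division-model fit, the linear initialization of β from Eq. (26) and the refinement of Eq. (27), can be sketched as follows (SciPy's Levenberg–Marquardt least_squares is an assumed substitute for the paper's implementation, and the center is held fixed for brevity):

```python
# Sketch: division-model fit. The linear step solves
# beta * (r_p^2 * r_d) = r_p - r_d in the least-squares sense (Eq. (26));
# the nonlinear step refines beta by minimizing Eq. (27).
import numpy as np
from scipy.optimize import least_squares

def fit_division(r_d, r_p):
    W = (r_p ** 2) * r_d
    w = r_p - r_d
    beta0 = np.dot(W, w) / np.dot(W, W)  # closed-form initial value
    res = lambda b: r_d - r_p / (1.0 + b[0] * r_p ** 2)
    return least_squares(res, x0=[beta0], method='lm').x[0]
```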

F. Rational Function Distortion Model

In this case, the parameters of the rational function model are the elements of matrix A:

d(u_{i,d}, v_{i,d}) = A · x(u_{i,d}, v_{i,d}).   (28)

If the rows of A are denoted by a_{1..3}^T, the ideal image coordinates (u_{i,0}, v_{i,0}) are computed by the perspective projection of d:

q_{i,0} = (u_{i,0}, v_{i,0}) = ( [a_1^T · x(u_{i,d}, v_{i,d})] / [a_3^T · x(u_{i,d}, v_{i,d})], [a_2^T · x(u_{i,d}, v_{i,d})] / [a_3^T · x(u_{i,d}, v_{i,d})] ).   (29)

By rearranging this expression, we have

a_3^T · x(u_{i,d}, v_{i,d}) · u_{i,0} = a_1^T · x(u_{i,d}, v_{i,d}),
a_3^T · x(u_{i,d}, v_{i,d}) · v_{i,0} = a_2^T · x(u_{i,d}, v_{i,d}).   (30)

If Eq. (30) is expressed in matrix form, the following expression arises:

[ −x(u_{i,d}, v_{i,d})^T   0                         u_{i,0} · x(u_{i,d}, v_{i,d})^T ]   [ a_1 ]
[ 0                        −x(u_{i,d}, v_{i,d})^T    v_{i,0} · x(u_{i,d}, v_{i,d})^T ] · [ a_2 ] = 0.   (31)
                                                                                         [ a_3 ]

Given n·m points, we can stack all the equations together to obtain a total of 2·n·m equations, written in matrix form as W · a = 0, where a = [a_11, a_12, a_13, a_14, a_15, a_16, a_21, a_22, a_23, a_24, a_25, a_26, a_31, a_32, a_33, a_34, a_35, a_36]^T. The solution is given by the eigenvector of W^T · W associated with its smallest eigenvalue. To refine this solution through maximum likelihood inference, the following error function is minimized:

J_RT = Σ_{i=1}^{n·m} ( ‖u_{i,0} − [a_1^T · x(u_{i,d}, v_{i,d})] / [a_3^T · x(u_{i,d}, v_{i,d})]‖² + ‖v_{i,0} − [a_2^T · x(u_{i,d}, v_{i,d})] / [a_3^T · x(u_{i,d}, v_{i,d})]‖² ).   (32)

Equation (32) is a nonlinear minimization problem solved with the Levenberg–Marquardt algorithm. An initial guess of matrix A is obtained by using the closed-form solution, and (u_0, v_0) is initialized with the principal point. The nonlinear search always converges to a solution, also solving for the distortion center.
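The closed-form stage of Eqs. (28)–(31) can be sketched with the SVD, which yields the null vector of W; the quadratic lifting x(u, v) = (u², uv, v², u, v, 1)^T is an assumption about the definition of x, which the paper introduces earlier:

```python
# Sketch: closed-form estimate of the 3x6 matrix A from the stacked system
# W a = 0 of Eq. (31). The lifting x(u, v) = (u^2, uv, v^2, u, v, 1)^T is an
# assumption; a is recovered (up to scale) as the right singular vector of W
# belonging to its smallest singular value.
import numpy as np

def lift(u, v):
    return np.array([u * u, u * v, v * v, u, v, 1.0])

def estimate_A(points_d, points_0):
    rows = []
    for (ud, vd), (u0, v0) in zip(points_d, points_0):
        x = lift(ud, vd)
        z = np.zeros(6)
        rows.append(np.concatenate([-x, z, u0 * x]))  # row for u_{i,0}
        rows.append(np.concatenate([z, -x, v0 * x]))  # row for v_{i,0}
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 6)  # rows are a1, a2, a3
```

The scale ambiguity of a is harmless because it cancels in the ratios of Eq. (29).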

4. Experimental Results

The described lens distortion calibration process is solved by taking one or several images of a


chessboard pattern. Detected control points in captured images are corrected, and the described models are adjusted with both sets of points. The aims are to compare the accuracy of the described nonlinear lens distortion models and to test the robustness of the metric lens distortion calibration process. Real data from images captured by an 8 mm lens and a 2.7 mm lens have been used to test the models' performance. The 8 mm lens generates low distortion, and the 2.7 mm lens represents a high-distortion lens. The lenses are from Tamron, and they are mounted on an Ethernet Axis 211W camera with a CCD of 640 × 480 pixels. The 8 mm (1/3 in.) lens gives a 34° field of view and the 2.7 mm lens gives 85°. Figures 1(a) and 1(b) show one image from each lens. Lens distortion model calibration is done by using one image, and the calibrated models are evaluated with the remaining images. The training model plane is a 210 mm × 297 mm checkerboard with 165 corner points (15 × 11). Corner detection has been done with the function cvFindChessBoardCorners() from the OpenCV library [45]. Figures 1(c) and 1(d) show

the distorted detected points in both images and the true points computed by using the metric method described in Section 3. In both cases, the error function in Eq. (18), evaluated with the true corrected points, is always zero. This means that, for the calibration step, the ideal corrected points satisfy both restrictions of cross-ratio invariability and straight lines perfectly. The models are adjusted to transform the distorted detected points into the true corrected ones by using the methods described in Section 3. A point's correction performance depends on the number of points and the pixel coordinate noise. These experiments were done in [43]. In this case, model performance using corrected points is going to be tested. To evaluate model efficiency, the following steps have been taken.

1. First, models are computed with the detected q_d and the ideal corrected q_p points of one image.

2. Second, the remaining images are corrected with the computed models.

3. Third, the corners of the undistorted images are detected by using cvFindChessBoardCorners().

Fig. 1. (Color online) Images captured with the 2.7 mm and 8 mm lenses. (a) Image with the 8 mm lens generates low distortion. (b) Image with the 2.7 mm lens generates high distortion. (c) Detected distorted points in the image and undistorted corrected points for low distortion. (d) Same for high distortion.


4. Fourth, performance evaluation is obtained by computing the error function (18) with the detected points q_p from the corrected images.

In this case, the computed model accuracy depends on the number of parameters of the model. For the nonlinear radial and tangential model, a second order is considered. For the polynomial fish-eye distortion model, a fifth order for angle and radial correction is adjusted. A first order is considered for the division model. For the logarithmic and the rational function models, the numbers of parameters do not change.

A. Number of Points

Figures 2(a) and 2(b) show the effect of the number of points on the model calibration process. In this case, the distribution of points in the image is directly reflected in the result. Several images of the template where points are clustered in one part of the image will give worse results than just one image where points are equally distributed over the image. This equal distribution is represented by the image in Fig. 1(a), where points are distributed over the entire image. Points represent lens distortion in all image areas, and the model will adjust to them. If points lie only in the center of the image, the distortion model will not be correctly adjusted, since the distortion effect is higher at the border of the image. As a general rule, we propose using one image where points are equally distributed over the entire image to resolve the mapping from the distorted to the undistorted points. However, to increase the number of samples and to obtain a more accurate model, several images where points are distributed over the entire image can be used. In our experiments, we have changed the number of points, taking into account that sample data must represent distortion over

the entire image. Figures 2(a) and 2(b) show the evaluation of the error function in Eq. (18) by using points q_p extracted from undistorted images with the computed models when the number of points changes. Similar performance is obtained with the 8 mm low-distortion lens if more than 100 points are used and they are equally distributed over the image. With the 2.7 mm high-distortion lens, 140 points are necessary to resolve the calibration process efficiently. In both cases, the rational function model has obtained better performance.

Time consumption is related to the number of points. Figure 3 shows how time consumption changes with the number of points. Ninety-nine

Fig. 2. (Color online) Calibration error of each model depending on the number of points. Calibration error is measured by evaluating the error function in Eq. (18) with undistorted points. The error function in Eq. (18) is zero for the corrected points that are used in the calibration process.

Fig. 3. (Color online) Time consumption depends on the number of points. Ninety-nine percent of the time is used for searching for the true positions of distorted points in the corrected image. Calibration has been computed on an AMD Athlon Dual Core 5600+ at 2.81 GHz with 2 Gbytes of RAM, implemented in MATLAB.


Table 1. Calibration Results^a

                     Low Distortion                                High Distortion
                   Calibration   Distances                       Calibration   Distances
                   Error         Mean    Std.    Max.    Min.    Error         Mean    Std.    Max.     Min.
Radial tangential  1.3432        2.1823  1.0549  4.7248  0.1664  4.3809        6.9610  4.0147  18.5446  1.3413
Field of view      0.9365        1.6511  0.6467  3.5926  0.1652  4.0454        6.2156  3.8457  17.0958  1.3158
Division model     0.8657        1.4685  0.5947  3.0685  0.1618  3.7589        5.7456  3.3247  16.0847  1.2896
Logarithmic        0.7635        1.2265  0.4676  2.9092  0.1584  3.3504        5.2915  2.3231  14.9609  1.2065
Polynomial         0.6139        0.9477  0.3776  2.2854  0.0456  2.6149        4.0667  1.9870   9.6165  0.4662
Rational function  0.3236        0.5247  0.2216  1.0937  0.0370  1.1062        1.7189  0.9244   5.8150  0.0617

^a Calibration error is the evaluation of the error function in Eq. (18) with undistorted points using the calibrated model. Distances are differences in pixels between the corrected ideal points q_p used to calibrate the model and the undistorted points computed with the calibrated model.

Fig. 4. (Color online) Low-distortion correction with different models. Images captured with the 8 mm low-distortion lens. (a) Radial tangential model. (b) Logarithmic model. (c) Field-of-view model. (d) Division model. (e) Polynomial model. (f) Rational function model.


percent of the time is used for searching for the true positions of the distorted points in the corrected image. When both sets of points are ready, model computation is faster. Calibration has been computed on an AMD Athlon Dual Core 5600+ at 2.81 GHz with 2 Gbytes of RAM, implemented in MATLAB.

B. Image Rectification

Table 1 gives the results for the error function in Eq. (18) if 165 corner points have been used. This error gives an approximation of which model better represents lens distortion in a general framework.

The error with the rational function model is smaller. If rectification in each part of the image is considered, Fig. 4, for low distortion, and Fig. 5, for high distortion, show the general aspect of how each model removes the distortion in each area of the image. The black lines show how the calibrated model undistorts the image and the similarity with the ideal corrected image. Differences in pixels for each part of the image are shown in Figs. 6 and 7 for low and high distortion, respectively. Differences are measured as the distance between the true computed positions of the points q_p and the undistorted points computed with

Fig. 5. (Color online) High-distortion correction with different models. Images captured with the 2.7 mm high-distortion lens. (a) Radial tangential model. (b) Logarithmic model. (c) Field-of-view model. (d) Division model. (e) Polynomial model. (f) Rational function model.


each calibrated model. In general, if low distortion is present, the differences between models are not significant. With low distortion, if the radial, tangential, and prism distortion model is used, differences vary between 0.2 and 4.7 pixels. Accurate distortion correction is done with the rational function model, with differences from 0.03 to 1 pixel. The polynomial, logarithmic, field-of-view, and division distortion models have similar performance. The differences are shown in Table 1. With high distortion, the performance of the radial, tangential, and prism model is poor. The differences are between 1.3 and 18.5 pixels, depending on the area of the image. The rational function model has better performance again, with differences between 0.06 and 5.8 pixels. According to the results, the rational function model describes the distortion most accurately, followed by the polynomial and division models. The polynomial model has better performance than the logarithmic, division, and field-of-view models, since an angle correction component has been included in the model. This fact demonstrates that, if high distortion is modeled, the

Fig. 6. (Color online) Low-distortion correction error in different image areas with different models. Images captured with the 8 mm low-distortion lens. (a) Radial tangential model. (b) Logarithmic model. (c) Field-of-view model. (d) Division model. (e) Polynomial model. (f) Rational function model.


tangential component helps improve the results. With soft distortion, the radial, tangential, and prism distortion model can be used, even omitting the tangential component. The results presented in Table 1 are quite similar to those presented by Hughes et al. [41] for the polynomial, logarithmic, field-of-view, and division distortion models.

C. Distortion Center

A comparison of model parameters is not possible because each model has a different set of parameters.

However, all of them have computed the distortion center. Because the distortion center can be equal to the principal point, the principal point has also been computed, with the method described in [42]. Table 2 shows the distortion center computed with each model compared with the principal point. This table shows that the distortion center should be computed together with the model, because the results are different for each model and also differ from the principal point. With low distortion, differences are significant only with the rational function model. With high distortion,

Fig. 7. (Color online) High-distortion correction error in different image areas with different models. Images captured with the 2.7 mm high-distortion lens. (a) Radial tangential model. (b) Logarithmic model. (c) Field-of-view model. (d) Division model. (e) Polynomial model. (f) Rational function model.


models that base the distortion correction on radial displacement have computed similar distortion centers. These models are the logarithmic, division, and field-of-view models. Again, a significant difference is computed with the rational function model.

Source code used for the experimental results can be downloaded from http://personales.upv.es/cricolfe/calibration/.

5. Conclusion

A general method for calibrating several lens distortion models under stable conditions has been proposed. A comparison of all the calibrated distortion models has also been done. The proposed calibration method computes the lens distortion model using two sets of points. These two sets are the distorted points detected from the image and the true corrected ones, obtained according to ratios and constraints between them that hold in the calibration template. This general calibration method allows us to compare all the calibrated models under a common criterion and to decide which model represents distortion better under different conditions.

Experimental results show that the rational function lens distortion model can represent both low distortion and high distortion accurately. The logarithmic, division, and field-of-view distortion models have similar performance. The tangential distortion component improves the results if high distortion is present. The polynomial model has obtained better results because an angle correction (tangential component) has been included in the model. With soft distortion, the traditional radial, tangential, and prism distortion model can be used. Concerning the number of points necessary for calibration, if the metric method is used, a calibration template with more than 100 points is necessary to obtain reliable results. Finally, if the distortion center is computed together with the model, accurate results are computed.

References

1. S. Cui, X. Zhu, W. Wang, and Y. Xie, "Calibration of a laser galvanometric scanning system by adapting a camera model," Appl. Opt. 48, 2632–2637 (2009).
2. Z. Wang, H. Du, S. Park, and H. Xie, "Three-dimensional shape measurement with a fast and accurate approach," Appl. Opt. 48, 1052–1061 (2009).
3. A. Asundi and Z. Wensen, "Unified calibration technique and its applications in optical triangular profilometry," Appl. Opt. 38, 3556–3561 (1999).
4. L. Huang, P. S. K. Chua, and A. Asundi, "Least-squares calibration method for fringe projection profilometry considering camera lens distortion," Appl. Opt. 49, 1539–1548 (2010).
5. D. C. Brown, "Decentering distortion of lenses," Photogram. Eng. 32, 444–462 (1966).
6. D. C. Brown, "Close-range camera calibration," Photogram. Eng. 37, 855–866 (1971).

7. R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robotics Autom. RA-3, 323–344 (1987).
8. J. Weng, P. Cohen, and M. Herniou, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Anal. Machine Intell. 14, 965–980 (1992).
9. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Machine Intell. 22, 1330–1334 (2000).
10. F. Devernay and O. Faugeras, "Straight lines have to be straight," Machine Vis. Apps. 13, 14–24 (2001).
11. C. McGlone, E. Mikhail, and J. Bethel, Manual of Photogrammetry, 5th ed. (American Society of Photogrammetry and Remote Sensing, 2004).
12. A. Basu and S. Licardie, "Alternative models for fish-eye lenses," Pattern Recogn. Lett. 16, 433–441 (1995).
13. S. Shah and J. K. Aggarwal, "Intrinsic parameter calibration procedure for a (high distortion) fish-eye lens camera with distortion model and accuracy estimation," Pattern Recogn. 29, 1775–1778 (1996).
14. S. S. Beauchemin and R. Bajcsy, "Modelling and removing radial and tangential distortions in spherical lenses, multi image analysis," Lect. Notes Comput. Sci. 2032, 1–21 (2001).
15. L. Ma, Y. Q. Chen, and K. L. Moore, "Flexible camera calibration using a new analytical radial undistortion formula with application to mobile robot localization," in Proceedings of IEEE International Symposium on Intelligent Control (IEEE, 2003).

Table 2. Distortion Center Computed with the Model Parameters and Compared with the Principal Point Computed Alone^a

      Low Distortion                                                        High Distortion
      Principal  Radial      Logarith.  Field  Division  Polyn.  Rational   Principal  Radial      Logarith.  Field  Division  Polyn.  Rational
      Point      Tangential             View   Model             Function   Point      Tangential             View   Model             Function
u0    323.4      322.0       321.1      321.5  322.6     321.2   282.8      312.5      298.8       297.8      297.1  297.3     298.3   285.5
v0    242.6      240.4       239.9      240.2  239.7     240.1   241.7      225.4      218.8       218.4      217.9  217.4     216.9   240.0

^a Each model needs a different distortion center to minimize errors, and it differs from the principal point. Differences increase if high distortion is modeled or the rational function model is used.

16. J. Mallon and P. F. Whelan, "Precise radial un-distortion of images," in Proceedings of the 17th International Conference on Pattern Recognition (IEEE Computer Society, 2004), pp. 18–21.
17. P. Sturm and S. Ramalingam, "A generic concept for camera calibration," in Proceedings of the 5th European Conference on Computer Vision (Springer, 2004).
18. R. Hartley and S. Kang, "Parameter-free radial distortion correction with center of distortion estimation," IEEE Trans. Pattern Anal. Machine Intell. 29, 1309–1321 (2007).
19. J. Lavest, M. Viala, and M. Dhome, "Do we really need an accurate calibration pattern to achieve a reliable camera calibration," in Proceedings of the 2nd European Conference on Computer Vision (Springer, 1998).
20. A. Fitzgibbon, "Simultaneous linear estimation of multiple view geometry and lens distortion," in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2001), pp. 125–132.
21. T. Svoboda, T. Pajdla, and H. Hlavac, "Epipolar geometry for panoramic cameras," in Proceedings of the 2nd European Conference on Computer Vision (Springer, 1998), pp. 218–232.
22. B. Micusik and T. Pajdla, "Estimation of omnidirectional camera model from epipolar geometry," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2003), Vol. 1, pp. 485–490.
23. C. Geyer and K. Daniilidis, "Structure and motion from uncalibrated catadioptric views," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2001), pp. 279–286.
24. P. Sturm, "Mixing catadioptric and perspective cameras," in Proceedings of the Workshop on Omnidirectional Vision (IEEE Computer Society, 2002), pp. 37–44.
25. D. Claus and A. Fitzgibbon, "A rational function lens distortion model for general cameras," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2005), pp. 213–219.
26. R. I. Hartley and T. Saxena, "The cubic rational polynomial camera model," in Proceedings DARPA Image Understanding Workshop (IEEE Computer Society, 1997), pp. 649–653.
27. M. D. Grossberg and S. K. Nayar, "A general imaging model and a method for finding its parameters," in Proceedings of the International Conference on Computer Vision (IEEE Computer Society, 2001), pp. 108–115.
28. X. Ying and Z. Hu, "Can we consider central catadioptric cameras and fisheye cameras within a unified imaging model?," in Proceedings of the 5th European Conference on Computer Vision (Springer, 2004), pp. 442–455.
29. C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical applications," in Proceedings of the 3rd European Conference on Computer Vision (Springer, 2000), pp. 445–461.

30. J. Barreto and K. Daniilidis, "Wide area multiple camera calibration and estimation of radial distortion," in Proceedings of the 5th European Conference on Computer Vision (Springer, 2004).
31. B. Prescott and G. McLean, "Line-based correction of radial lens distortion," Graph. Models Image Process. 59, 39–47 (1997).
32. R. Swaminathan and S. Nayar, "Non-metric calibration of wide-angle lenses and polycameras," IEEE Trans. Pattern Anal. Machine Intell. 22, 1172–1178 (2000).
33. M. Ahmed and A. Farag, "Non-metric calibration of camera lens distortion: differential methods and robust estimation," IEEE Trans. Image Process. 14, 1215–1230 (2005).
34. S. Becker and V. Bove, "Semi-automatic 3D model extraction from uncalibrated 2D camera views," in Proceedings of Visual Data Exploration and Analysis (1995), Vol. 2, pp. 447–461.
35. M. Penna, "Camera calibration: a quick and easy way to detection of scale factor," IEEE Trans. Pattern Anal. Machine Intell. 13, 1240–1245 (1991).
36. Z. Zhang, "On the epipolar geometry between two images with lens distortion," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 1996), pp. 407–411.
37. G. Stein, "Lens distortion calibration using point correspondences," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 1997), pp. 602–608.
38. S. B. Kang, "Catadioptric self-calibration," in Proceedings of the Conference on Computer Vision and Pattern Recognition (IEEE Computer Society, 2000), pp. 201–207.
39. H. Farid and A. C. Popescu, "Blind removal of lens distortion," J. Opt. Soc. Am. A 18, 2072–2078 (2001).

40. D. Schneider, E. Schwalbe, and H.-G. Maas, "Validation of geometric models for fisheye lenses," ISPRS J. Photogram. Remote Sens. 64, 259–266 (2009).
41. C. Hughes, P. Denny, E. Jones, and M. Glavin, "Accuracy of fish-eye lens models," Appl. Opt. 49, 3338–3347 (2010).
42. C. Hughes, R. McFeely, P. Denny, M. Glavin, and E. Jones, "Equidistant (f θ) fish-eye perspective with application in distortion centre estimation," Image Vis. Comput. 28, 538–551 (2010).
43. C. Ricolfe-Viala and A. J. Sanchez-Salmeron, "Robust metric calibration of non-linear camera lens distortion," Pattern Recogn. 43, 1688–1699 (2010).
44. J. Wang, F. Shi, J. Zhang, and Y. Liu, "A new calibration model of camera lens distortion," Pattern Recogn. 41, 607–615 (2008).
45. G. Bradski and A. Kaehler, Learning OpenCV (O'Reilly Media, 2008).
