
Camera Calibration


Parameter Estimation


LS and SVD

1) AX = B

X = A⁻¹B

2) AX = 0

X belongs to A's null space and is sometimes called a null vector of A. X can be characterized as the right singular vector corresponding to a singular value of A that is zero (in the noisy, over-determined case, the smallest singular value); see the sketch below.

SVD (Singular Value Decomposition)

http://en.wikipedia.org/wiki/Singular_value_decomposition
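As a concrete illustration (not part of the original slides), the following NumPy sketch solves both cases: the pseudo-inverse for AX = B and the right singular vector of the smallest singular value for AX = 0. The variable names and toy data are illustrative only.

```python
# Minimal sketch of the two estimation problems with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# 1) AX = B : over-determined, solved with the pseudo-inverse (least squares).
A = rng.normal(size=(10, 3))
x_true = np.array([1.0, -2.0, 0.5])
B = A @ x_true + 0.01 * rng.normal(size=10)      # noisy observations
x_ls = np.linalg.pinv(A) @ B                     # same as np.linalg.lstsq(A, B)[0]

# 2) AX = 0 : the solution (up to scale) is the right singular vector
#    belonging to the smallest singular value, with ||X|| = 1.
n = np.array([0.6, 0.8, -1.0])                   # line a*x + b*y + c = 0
pts = rng.uniform(-1, 1, size=(20, 2))
pts[:, 1] = -(n[0] * pts[:, 0] + n[2]) / n[1]    # points exactly on the line
A0 = np.hstack([pts, np.ones((20, 1))])          # rows [x y 1]
_, _, Vt = np.linalg.svd(A0)
x_null = Vt[-1]                                  # null vector, up to scale

print(x_ls, x_null / x_null[-1] * n[-1])         # both recover the true vectors
```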


Meaning of Eq. in Homo. Coord. Sys.

• An equation in a homogeneous coordinate system is not an identity but a calculation formula.

1) If we have the transform matrix and input vectors, we can calculate the output vectors.

2) However, when we have input vectors and output vectors, we only know that the LHS (Left Hand Side) and RHS (Right Hand Side) are related by scalar multiplication, that is, equal up to scale.

Given A and X:  Y = A·X  (the output can be computed directly).

Given X_j and Y_j:  Y_j = k_j·A·X_j for some unknown scale k_j ≠ 0, i.e. Y_j and A·X_j are equal only up to scale.

The unknown scale can be eliminated by taking ratios of components: for each component i and the last component N,

  Y_{i,j}·(A_{N,:}·X_j) = Y_{N,j}·(A_{i,:}·X_j)

where A_{i,:} denotes the i-th row of A.

It's an identity.
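A small numeric check of this point (illustrative only, with an arbitrary 3x4 matrix): Y = kAX is not equal to AX, but the ratio-based relation obtained by eliminating k holds exactly.

```python
# Illustrative check: equality up to scale vs. the scale-free identity.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))                 # e.g. a 3x4 projection matrix
X = rng.normal(size=4)
k = 2.5                                     # arbitrary unknown scale
Y = k * (A @ X)                             # observed output, scaled

print(np.allclose(Y, A @ X))                # False: not an identity
i, N = 0, 2
print(np.isclose(Y[i] * (A[N, :] @ X), Y[N] * (A[i, :] @ X)))   # True: identity
```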


SVD vs. Pseudo Inverse

• Disadvantages of SVD
1) The computation of SVD is heavy.
2) If the unknown should be determined uniquely, one more equation (a normalization constraint) is needed.

• Therefore, when the error is negligible, the pseudo-inverse is preferable.
• It is notable that SVD gives the LS (Least Squares) estimate w.r.t. the algebraic error, while the pseudo-inverse gives the LS estimate w.r.t. the output variable.

ex) ax + by + c = 0  →  [x y 1][a b c]' = 0  →  SVD, minimizing the error orthogonal to the line.
    y = mx + b  →  y = [x 1][m b]'  →  pseudo-inverse, minimizing the error of the output variable y (see the sketch below).
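A hedged sketch of this example: the same noisy line is fitted both ways. Note that the SVD formulation literally minimizes the algebraic error ||[x y 1][a b c]'|| with ||[a b c]|| = 1; the slide interprets this geometrically as the error orthogonal to the line.

```python
# Line fitting two ways: pseudo-inverse (LS in y) vs. SVD (algebraic error).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 50)
y = 0.7 * x + 0.2 + 0.05 * rng.normal(size=x.size)

# Pseudo-inverse on y = [x 1][m b]': least squares w.r.t. the output variable y.
D = np.column_stack([x, np.ones_like(x)])
m, b = np.linalg.pinv(D) @ y

# SVD on [x y 1][a b c]' = 0: least squares w.r.t. the algebraic error,
# with the constraint ||[a b c]|| = 1.
H = np.column_stack([x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(H)
a, b2, c = Vt[-1]

print("pseudo-inverse:", m, b)
print("SVD (as slope/intercept):", -a / b2, -c / b2)
```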


Camera Calibration


References

1. Joaquim Salvi, Xavier Armangué, Joan Batlle, “A comparative review of camera calibrating methods with accuracy evaluation,” Pattern Recognition 35 (2002) pp. 1617-1635.


Camera Calibration [1]

1. Camera modeling: mathematical approximation of the physical and optical behavior of the sensor by using a set of parameters.

2. Estimation of the parameters
• Intrinsic parameters: the internal geometry and optical characteristics of the image sensor. How is the light projected through the lens onto the image plane of the sensor?
• Extrinsic parameters: the position and orientation of the camera with respect to a world coordinate system.


Camera Calibration Methods

1. E.L. Hall, et al., “Measuring curved surfaces for robot vision,” Comput. J. 15 (1982) 42-54.

2. O.D. Faugeras, et al., “The calibration problem for stereo,” CVPR 1986, pp. 15-20.

3. Faugeras non-linear: J. Salvi, “An approach to coded structured light to obtain three dimensional information,” Ph.D. Thesis, 1997; J. Salvi, et al., “A robust-coded pattern projection for dynamic 3D scene measurement,” Pattern Recognition Lett. 19 (1998) 1055-1065.

4. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Automat. RA-3 (1987) 323-344.

5. J. Weng, et al., “Camera calibration with distortion models and accuracy evaluation,” PAMI 14 (1992) 965-980.


Notations

[Figure: notation. A leading superscript indicates the reference coordinate system, e.g. RP denotes point P with respect to the reference coordinate system {R}.]


Camera Modeling

• Camera modeling is usually broken down into 4 steps (a code sketch of the whole pipeline follows this list):
1. Translation & rotation:  WPw → CPw
2. Projection:  CPw → CPu
3. Lens distortion:  CPu → CPd
4. Image coordinates:  CPd → IPd
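The following minimal sketch (assumed functional forms and made-up parameter values, not the slides' exact equations; step 3 uses only a single radial coefficient k1) strings the four steps together for one world point.

```python
# Four-step camera model sketch: world point -> pixel coordinates.
import numpy as np

def project(Pw, R, T, f, k1, ku, kv, u0, v0):
    # 1. Translation & rotation: world -> camera coordinates
    Pc = R @ Pw + T
    # 2. Projection onto the image plane at distance f (undistorted, metric)
    xu, yu = f * Pc[0] / Pc[2], f * Pc[1] / Pc[2]
    # 3. Lens distortion (here: radial only, illustrative)
    r2 = xu**2 + yu**2
    xd, yd = xu * (1 + k1 * r2), yu * (1 + k1 * r2)
    # 4. Camera image (metric) -> computer image (pixel) coordinates
    u, v = ku * xd + u0, kv * yd + v0
    return u, v

# Example call with made-up parameters.
R = np.eye(3)
print(project(np.array([0.1, 0.2, 2.0]), R, np.array([0.0, 0.0, 0.5]),
              0.008, -0.2, 80000, 80000, 512, 384))
```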


Camera Modeling: Step 1

• Changing from the world to the camera coordinate system:  WPw → CPw

Rotation: the orientation of the world coordinate system {W} with respect to the axes of the camera coordinate system {C}.

Translation: the position of the origin of the world coordinate system measured with respect to {C}.


Camera Modeling: Step 2

• The optical sensor is modeled as a pinhole camera:  CPw → CPu

The image plane is located at a distance f from the optical center OC, and is parallel to the plane defined by the coordinate axes XC and YC, so the projection is

  CXu = f·CXw / CZw,   CYu = f·CYw / CZw


Camera Modeling: Step 3

• Modeling the distortion of the lens:  CPu → CPd
• Faugeras-Toscani model
• Tsai model
• Weng model
• Distortion components: the radial distortion, the decentering distortion, the thin prism distortion


Camera Modeling: Step 4

• Changing from the camera image to the computer image coordinate system:  CPd → IPd

(ku, kv): the transformation from metric measures with respect to the camera coordinate system to pixels with respect to the computer image coordinate system.

(u0, v0): the projection of the focal point onto the image plane in pixels, i.e. the principal point.


Image Center Estimation

Orthocenter Theorem: Image Center from Vanishing Points
B. Caprile and V. Torre, “Using Vanishing Points for Camera Calibration,” IJCV 4, pp. 127-140 (1990).

PROPERTY 3. Let Q, R, S be three mutually orthogonal straight lines in space, and let VQ = (xQ, yQ, f), VR = (xR, yR, f), VS = (xS, yS, f) be the three vanishing points associated with them. The orthocenter of the triangle with vertices at the three vanishing points is the intersection of the optical axis and the image plane.
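A minimal sketch of using this property: given three vanishing points in pixel coordinates, the orthocenter of their triangle estimates the principal point. The function name and the altitude-intersection formulation below are mine, not the paper's.

```python
# Principal point estimate as the orthocenter of the vanishing-point triangle.
import numpy as np

def orthocenter(vq, vr, vs):
    vq, vr, vs = map(np.asarray, (vq, vr, vs))
    # Two altitude constraints: (vs - vr).(p - vq) = 0 and (vs - vq).(p - vr) = 0,
    # solved as a 2x2 linear system for the orthocenter p.
    A = np.array([vs - vr, vs - vq], dtype=float)
    b = np.array([(vs - vr) @ vq, (vs - vq) @ vr], dtype=float)
    return np.linalg.solve(A, b)

# Example: the triangle (0,0), (4,0), (1,3) has orthocenter (1,1).
print(orthocenter([0, 0], [4, 0], [1, 3]))
```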


The Method of Hall (1/2)

An image point (IXd_i, IYd_i) and its world point (WX_i, WY_i, WZ_i) are related by a 3x4 transformation matrix A:

  IXd_i = (a11·WX_i + a12·WY_i + a13·WZ_i + a14) / (a31·WX_i + a32·WY_i + a33·WZ_i + a34)
  IYd_i = (a21·WX_i + a22·WY_i + a23·WZ_i + a24) / (a31·WX_i + a32·WY_i + a33·WZ_i + a34)


The Method of Hall (2/2)

Consider without loss of generality that a34 = 1. Each point then gives two equations that are linear in the remaining eleven unknowns a11, ..., a33:

  IXd_i = a11·WX_i + a12·WY_i + a13·WZ_i + a14 - IXd_i·(a31·WX_i + a32·WY_i + a33·WZ_i)
  IYd_i = a21·WX_i + a22·WY_i + a23·WZ_i + a24 - IYd_i·(a31·WX_i + a32·WY_i + a33·WZ_i)

Stacking the 2n equations for n points, the unknowns are obtained by applying the pseudo-inverse.
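A hedged sketch of this linear estimation (a DLT-style formulation consistent with the description above; the function name and exact row layout are mine):

```python
# Hall-style linear estimation of the 3x4 matrix A with a34 fixed to 1.
import numpy as np

def estimate_A(world_pts, image_pts):
    """world_pts: (n,3) points, image_pts: (n,2) pixels; returns the 3x4 matrix A."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        # Two equations per point, linear in the 11 unknowns a11..a33.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    Q = np.asarray(rows, dtype=float)
    a = np.linalg.pinv(Q) @ np.asarray(rhs, dtype=float)   # pseudo-inverse LS
    return np.append(a, 1.0).reshape(3, 4)                 # append a34 = 1
```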


The Method of Faugeras (1/2)

A can be estimated by Hall's method; the estimated A is then decomposed into the intrinsic and extrinsic parameters.


The Method of Faugeras (2/2)

The vectors ri must be mutually orthogonal and each ri must be a unit vector:

  r1·r2 = r2·r3 = r3·r1 = 0,   r1·r1 = r2·r2 = r3·r3 = 1

These constraints allow the intrinsic and extrinsic parameters to be extracted from A (see the sketch below).
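A sketch of how the orthonormality of r1, r2, r3 can be used to decompose an estimated A, assuming a zero-skew pinhole model; the formulas below are a standard decomposition of this kind, not copied from the slides.

```python
# Decompose a 3x4 matrix A (e.g. from Hall's method) into intrinsic and
# extrinsic parameters, using orthonormality of the rotation rows.
import numpy as np

def decompose(A):
    A = np.asarray(A, dtype=float)
    A = A / np.linalg.norm(A[2, :3])          # enforce ||r3|| = 1
    q1, q2, q3 = A[0, :3], A[1, :3], A[2, :3]
    u0, v0 = q1 @ q3, q2 @ q3                 # principal point
    au = np.linalg.norm(q1 - u0 * q3)         # alpha_u
    av = np.linalg.norm(q2 - v0 * q3)         # alpha_v
    r1, r2, r3 = (q1 - u0 * q3) / au, (q2 - v0 * q3) / av, q3
    tz = A[2, 3]
    tx, ty = (A[0, 3] - u0 * tz) / au, (A[1, 3] - v0 * tz) / av
    return (au, av, u0, v0), np.vstack([r1, r2, r3]), np.array([tx, ty, tz])
```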


Orthogonal and Parallel

• For unit vectors A and B:

  A·B = 0  ⟺  A ⊥ B
  A×B = 0  ⟺  A // B


The Method of Tsai (1/8)

The method of Tsai models the radial lens distortion but assumes that some parameters of the camera are provided by the manufacturer: u0, v0, dx′, dy.

CXd′ and CYd′ are obtained in metric coordinates from the pixel coordinates IXd and IYd.


The Method of Tsai (2/8)

Considering the radial distortion of the lens, the relationship between the image point Pd (in metric coordinates) and the object point Pw is modeled.


The Method of Tsai (3/8)

Even with the radial distortion, the direction from the image center to the image point is preserved (radial distortion changes only the radial distance), which gives Tsai's radial alignment constraint.


The Method of Tsai (4/8)


The Method of Tsai (5/8)

After expanding (60), divide it by ty:


The Method of Tsai (6/8)

For n points, combine (61) and (55) into an over-determined linear system in the 7-element unknown vector a = [a1 … a7]', whose entries include the terms ty⁻¹r21, ty⁻¹r22, ty⁻¹r23.

The unknown vector a can be estimated by LS.


The Method of Tsai (7/8)

Considering the case where ty is definitely positive:

r3 can be calculated as the cross product of r1 and r2.

  tx = a4·ty / sx


The Method of Tsai (8/8)

Parameters still unknown: the focal length f, the radial distortion coefficient k1, and the translation of the camera along the Z axis, tz.

Assume k1 = 0 to get an initial guess of f and tz.

Then iterate the non-linear optimization routine using (45).
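A hedged sketch of this final refinement using scipy.optimize.least_squares; the residual assumes Tsai's radial model xd·(1 + k1·rd²) = f·Xc/Zc, and the function and argument names are illustrative rather than the slides' equation (45).

```python
# Non-linear refinement of f, tz, k1 once R, tx, ty are known (sketch).
import numpy as np
from scipy.optimize import least_squares

def refine_f_tz_k1(R, tx, ty, world_pts, xd, yd, f0, tz0):
    """world_pts: (n,3); xd, yd: distorted image coordinates in metric units."""
    rd2 = xd**2 + yd**2

    def residual(p):
        f, tz, k1 = p
        Pc = world_pts @ R.T + np.array([tx, ty, tz])   # camera coordinates
        res_x = xd * (1 + k1 * rd2) - f * Pc[:, 0] / Pc[:, 2]
        res_y = yd * (1 + k1 * rd2) - f * Pc[:, 1] / Pc[:, 2]
        return np.concatenate([res_x, res_y])

    sol = least_squares(residual, x0=[f0, tz0, 0.0])    # k1 = 0 as initial guess
    return sol.x                                        # refined f, tz, k1
```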


Fisheye Lens Calibration


Generic Camera Model [1]

The perspective projection of a pinhole camera can be described by the following formula:

  r = f·tan θ

where θ is the angle between the principal axis and the incoming ray, r is the distance between the image point and the principal point, and f is the focal length.


Generic Camera Model [1]

Fish-eye lenses instead are usually designed to obey one of the following projections:

  stereographic:    r = 2f·tan(θ/2)
  equidistance:     r = f·θ
  equisolid angle:  r = 2f·sin(θ/2)
  orthogonal:       r = f·sin θ


Generic Camera Model [1]

Real lenses, however, do not exactly follow the designed projection model.

From the viewpoint of automatic calibration, it would also be useful if we had only one model suitable for different types of lenses. Therefore, we consider projections in the general form

  r(θ) = k1·θ + k2·θ^3 + k3·θ^5 + k4·θ^7 + k5·θ^9 + …

where, without any loss of generality, even powers have been dropped. This is due to the fact that we may extend r onto the negative side as an odd function, while the odd powers span the set of continuous odd functions.

We found that the first five terms, up to the ninth power of θ, give enough degrees of freedom for a good approximation of different projection curves.
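A small sketch of this generic model: the odd polynomial r(θ) with five coefficients, fitted here by linear least squares to one designed projection (stereographic, f = 1) purely as an illustration.

```python
# Generic radially symmetric model: r(theta) as an odd polynomial up to theta^9.
import numpy as np

def generic_r(theta, k):
    """k = (k1..k5); returns the radial distance on the image plane."""
    return (k[0] * theta + k[1] * theta**3 + k[2] * theta**5
            + k[3] * theta**7 + k[4] * theta**9)

# Fit the coefficients to a designed projection (or to calibration data)
# by linear least squares in k.
theta = np.linspace(0, np.deg2rad(90), 100)
target = 2 * 1.0 * np.tan(theta / 2)                    # stereographic, f = 1
T = np.column_stack([theta**p for p in (1, 3, 5, 7, 9)])
k = np.linalg.lstsq(T, target, rcond=None)[0]
print(np.max(np.abs(generic_r(theta, k) - target)))     # small fitting error
```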


Caltech Calibration Toolbox [3]

http://www.vision.caltech.edu/bouguetj/calib_doc/index.html


Problem of Caltech Calibration Toolbox [2]

Figure 1. Input image captured with a 150° FOV lens (size: 1024x768).
Figure 2. Undistorted image by the Caltech Calibration Toolbox (size: 4096x3072).


Problem of Caltech Calibration Toolbox [2]

Assume that the Caltech Calibration Toolbox can successfully estimate the effective focal lengths Fx, Fy and the optical center (Ox, Oy). Only radial terms are considered herein.

Therefore, the Caltech Calibration Toolbox is thought to estimate only a1, a3 as in Eq. (2), where kc is the estimate of the distortion parameters.


Problem of Caltech Calibration Toolbox [2]

Because there are three inflection points, the total radial distortion curve has four portions with different shapes (case of positive a1).

It is found that the incorrect estimation of radial distortion parameters is an inevitable result of least-squares-based curve fitting with limited data. As mentioned previously, in practical situations, ru-rd data covering the whole lens FOV is inaccessible, because a planar calibration pattern cannot be extended without limit.


Refinement by Inverse Mapping-Based Extrapolation [2]

Because the limited ru-rd range used for least-squares-based curve fitting is the primary cause of the incorrect estimation, the distortion parameters can be refined by securing a wide range of ru-rd data. Compared with the ru-rd graph, the rd-ru graph, i.e. the inverse mapping, has characteristics suitable for least-squares-based curve fitting.
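A hedged sketch of the extrapolation idea described above (polynomial degrees, sampling, and function names are assumptions, not the paper's exact procedure):

```python
# Refinement sketch: fit the inverse mapping rd -> ru on the limited measured
# range, extrapolate over the full sensor radius, then refit the forward model.
import numpy as np

def refit_distortion(rd_meas, ru_meas, rd_full_max, deg_inv=5, deg_fwd=5):
    # 1) fit the inverse mapping ru = g(rd) on the measured (limited) range
    g = np.polyfit(rd_meas, ru_meas, deg_inv)
    # 2) extrapolate: sample rd over the full radius and predict ru
    rd_ext = np.linspace(0, rd_full_max, 200)
    ru_ext = np.polyval(g, rd_ext)
    # 3) refit the forward distortion model rd = h(ru) on the extended data
    return np.polyfit(ru_ext, rd_ext, deg_fwd)
```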


Refinement by Inverse Mapping-Based Extrapolation [2]


References

1. Juho Kannala, Sami S. Brandt, “A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses,” IEEE PAMI, Vol. 28, No. 8, Aug. 2006, pp. 1335-1340.

2. Ho Gi Jung, Yun Hee Lee, Pal Joo Yoon, Jaihie Kim, “Radial Distortion Refinement by Inverse Mapping-Based Extrapolation,” ICPR’06.

3. Caltech calibration toolbox, http://www.vision.caltech.edu/bouguetj/calib_doc/index.html
