

J Math Imaging Vis (2009) 33: 281–295. DOI 10.1007/s10851-008-0113-2

Error Analysis in Homography Estimation by First Order Approximation Tools: A General Technique

Pei Chen · David Suter

Published online: 5 September 2008. © Springer Science+Business Media, LLC 2008

Abstract This paper shows how to analytically calculate the statistical properties of the errors in estimated parameters. The basic tools to achieve this aim include first order approximation/perturbation techniques, such as matrix perturbation theory and Taylor series. This analysis applies to a general class of parameter estimation problems that can be abstracted as a linear (or linearized) homogeneous equation.

Of course, there may be many reasons why one might wish to have such estimates. Here, we concentrate on the situation where one might use the estimated parameters to carry out some further statistical fitting or (optimal) refinement. In order to make the problem concrete, we take homography estimation as a specific problem. In particular, we show how the derived statistical errors in the homography coefficients allow improved approaches to refining these coefficients through subspace constrained homography estimation (Chen and Suter in Int. J. Comput. Vis. 2008).

Indeed, having derived the statistical properties of the errors in the homography coefficients, before subspace constrained refinement, we do two things: we verify the correctness through statistical simulations, and we also show how to use the knowledge of the errors to improve the subspace based refinement stage. Comparison with the straightforward subspace refinement approach (without taking into account the statistical properties of the homography coefficients) shows that our statistical characterization of these errors is both correct and useful.

P. Chen (✉) School of Information Science and Technology, Sun Yat-sen University, Guangzhou, China. e-mail: [email protected]

P. Chen, Shenzhen Institute of Advanced Integration Technology, CAS/CUHK, Shenzhen, China

D. Suter, ARC Centre for Perceptive and Intelligent Machines in Complex Environments, Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia

Keywords Error analysis · Matrix perturbation theory · Singular value decomposition · Low rank matrix approximation · Homography · First order approximation · Mahalanobis distance

1 Introduction

Parameter estimation is a common problem in engineering. In some applications, parameter estimation is the ultimate goal, while in others, the estimated parameters are fed to follow-up procedures. In the latter case, knowledge of the statistical properties of the errors in the estimated parameters (such as knowing the covariance matrix) helps one "intelligently" design algorithms that use these parameters for further calculations.

Suppose the parameter estimation problem for θ (given data x) can be abstracted as:

F(θ,x) = 0 (1)

We are specifically interested in the special case

Aθ = 0 (2)

where A is a linear operator (matrix) formed by the data values.

If we had the simpler (to analyze) case

θ = f(x) (3)


then conceptually, the computation of the covariance matrix of the errors in θ can be expressed as:

Σθ = (∂θ/∂x) Cx (∂θ/∂x)ᵀ   (4)

where Cx is the covariance matrix of Δx.

Although (4) looks direct and simple, it is not always the case that there are direct ways available to compute the partial differentiation in (4). In particular, we do not have such an explicit expression when calculating the singular vector of a matrix associated with its least singular value.

In related work [13, 14], Haralick proposed to calculate the propagation of perturbations in the observed variables x in the minimization problem

min F(θ, x)   (5)

The perturbation Δx results in an error Δθ in θ, as:

Δθ = −(∂²F/∂θ²)⁻¹ (∂²F/∂θ∂x) Δx   (6)

Then, the covariance matrix of Δθ can be calculated as

Cθ = (∂²F/∂θ²)⁻¹ (∂²F/∂θ∂x) Cx (∂²F/∂θ∂x)ᵀ (∂²F/∂θ²)⁻¹   (7)

However, in many cases, two difficulties prevent us from directly employing (6) and (7) to calculate Cθ. First, as mentioned above, there is no explicit formula for the partials in (6), for example, when calculating the singular vectors. Second, ∂²F/∂θ² is not always invertible, as in homography estimation and other parameter estimation problems where the degrees of freedom of the parameters are fewer than the number of parameters.

We note that there is much work that concentrates on how to estimate the optimal parameters, including the Taubin method [31], the renormalization method [17–19], the HEIV method [22], the FNS method and its variants [7–9], and the equilibration method [24–27]. In [21], it is reported that a more accurate estimate can be obtained by taking into account higher-order error. In [6, 20, 21], a rigorous KCR (Kanatani-Cramer-Rao) bound for the uncertainty in the estimated parameters is given. For comprehensive reviews on parameter estimation and its applications in computer vision, see [5, 10, 11, 16, 20, 21, 28, 29, 35–37]. However, though many of these target the same general forms as above (e.g. (2)), they do not generally characterize the error in those estimated parameters.

In contrast, in this paper our primary focus is to analytically characterize the uncertainty of the estimated parameters. As said earlier, this is useful when those estimated parameters are further fed to follow-up statistical fitting/refinements. We consider the class of parameter estimation problems that can be abstracted as solving a (homogeneous) linear equation (2), where the singular vector associated with the least singular value is the estimate of the parameters of interest (essentially the Direct Linear Transform or DLT algorithm; see below). In particular, we focus on homography estimation. This is because, if one derives a series of such homographies from the same pair of images, one can in principle improve the accuracy of the estimated homographies by further statistical refinement (by exploiting a rank constraint). However, such a refinement stage requires estimates of the error correlations between the homography coefficients, which makes it a useful example to illustrate our analysis.

In Sect. 2, we first review the normalized Direct Linear Transformation (DLT) algorithm [15] and the subspace constrained homography estimation [4]. In Sect. 3, we present how to analytically compute the statistical properties of the errors in estimated homography parameters and, more generally, in other linearized parameter estimation problems. In Sect. 4, we present simulations that fit the analytically calculated statistics very well. In Sect. 5, the usefulness of this statistical analysis is demonstrated in the subspace constrained approach to homography estimation.

2 Normalized DLT Algorithm and Subspace Constrained Homography Estimation

2.1 Normalized DLT

In this section, we review the normalized direct linear transformation (DLT) algorithm [15] in general (and for homography estimation in particular).

The DLT approach is essentially to take the singular subspace as the solution to (2) (and this is invariably done by SVD). In what follows, we concentrate on homography estimation; but, as far as the DLT goes, the methodology of this paper applies to any setting (e.g. estimation of fundamental matrices) that leads to the same form (a homogeneous linear equation as in (2)); it is just that the particular instantiation of A will be different.

It is well known that the direct approach (via SVD) does not always produce good results and that a "normalizing" step generally improves the results; see the end of this subsection (which is explained in the case of homography estimation but easily generalizes to other cases).

For a homography

H = [ h1 h2 h3
      h4 h5 h6
      h7 h8 h9 ]


which maps x = [x1 x2 1]ᵀ on the first view to x′ = [x′1 x′2 1]ᵀ on the second view: x′ = λHx.

From x′ × Hx = 0, each pair of matches {xi, x′i} produces a 3 × 9 matrix:

Ai = [  0ᵀ           −xiᵀ          x′2,i xiᵀ
        xiᵀ           0ᵀ          −x′1,i xiᵀ
       −x′2,i xiᵀ     x′1,i xiᵀ    0ᵀ        ]   (8)

which satisfies Ai h = 0, with h = [h1 h2 … h9]ᵀ. Stacking the {Ai} as

A = [A1ᵀ … Anᵀ]ᵀ   (9)

Ah = 0 holds. The solution for h is the right singular vector of A associated with the least singular value. This is the DLT algorithm [15] for homography estimation.
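As a concrete illustration, the DLT just described can be sketched in a few lines of NumPy (a minimal sketch, not the authors' implementation; the function name `dlt_homography` and the choice of stacking all three rows of (8) per match are our own):

```python
import numpy as np

def dlt_homography(x, xp):
    """DLT estimate of a homography from n >= 4 matches.

    x, xp: (n, 2) arrays of matched points in the first and second view.
    Returns the unit-norm 9-vector h; h.reshape(3, 3) maps x to xp.
    """
    n = x.shape[0]
    A = np.zeros((3 * n, 9))
    for i in range(n):
        xi = np.array([x[i, 0], x[i, 1], 1.0])        # homogeneous point
        x1p, x2p = xp[i]
        # Three rows of A_i from x' x (Hx) = 0, as in (8); only two are
        # linearly independent, but stacking all three keeps the form of (8).
        A[3 * i + 0] = np.concatenate([np.zeros(3), -xi, x2p * xi])
        A[3 * i + 1] = np.concatenate([xi, np.zeros(3), -x1p * xi])
        A[3 * i + 2] = np.concatenate([-x2p * xi, x1p * xi, np.zeros(3)])
    # The right singular vector for the least singular value solves Ah = 0.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]
```

With noise-free matches, the recovered matrix equals the true homography up to scale.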

In [15], a normalization step is recommended. It consists of a translation and a scaling, so that the centroid of the transformed points is the origin (0, 0) and the average distance from the origin is √2. Suppose the centroid of the original points is (c1, c2) and the average distance to this centroid is l. The normalization transform T is

T = [ 1/l   0    −c1/l
      0    1/l   −c2/l
      0     0      1   ]   (10)

Similarly, there exists a normalization transform T′ for the second view.

The normalized DLT algorithm takes the DLT algorithm as its core. First, calculate the transformed points for each view and their associated normalization transforms T and T′. Second, using the DLT, calculate the homography H̃ from the normalized matches. Last, in the denormalization step, set

H = T′⁻¹ H̃ T   (11)

as the homography in the original views.
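A minimal sketch of the normalized DLT, assuming a core DLT routine is passed in as the callable `dlt` (the helper names are our own, and the scale `l` follows the definition used for (10) in Sect. 3.2.1, i.e. the root-mean-square distance divided by √2):

```python
import numpy as np

def normalization_transform(pts):
    """Normalization transform T of (10): centroid to the origin, scaled by
    1/l with l = sqrt(sum of squared distances / 2n), as in Sect. 3.2.1."""
    n = len(pts)
    c = pts.mean(axis=0)
    l = np.sqrt(np.sum((pts - c) ** 2) / (2 * n))
    return np.array([[1 / l, 0, -c[0] / l],
                     [0, 1 / l, -c[1] / l],
                     [0, 0, 1]])

def normalized_dlt(x, xp, dlt):
    """Normalized DLT: normalize both views, run the core DLT (passed in as
    the callable `dlt`), then denormalize by H = T'^{-1} H~ T as in (11)."""
    T, Tp = normalization_transform(x), normalization_transform(xp)
    xn = (T @ np.c_[x, np.ones(len(x))].T).T[:, :2]
    xpn = (Tp @ np.c_[xp, np.ones(len(xp))].T).T[:, :2]
    Ht = dlt(xn, xpn).reshape(3, 3)          # homography of normalized matches
    H = np.linalg.inv(Tp) @ Ht @ T           # denormalization step (11)
    return H / np.linalg.norm(H)             # unit Frobenius norm
```

Passing the DLT in as a parameter mirrors the paper's description of the normalized DLT as a wrapper around a core algorithm.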

2.2 Homography Estimation Embedded in a Dimension Four Subspace

Thus far, we are not saying anything new. The above procedure can now be considered routine in the computer vision community. However, the settings we are really concerned with are ones where the output of the DLT is to be used in further estimations/refinements. Here, one does generally need to take account of the correlations in the outputs of the DLT (hence our focus on calculating those correlations). We illustrate with homography refinement.

It is well known that one can collect the coefficients of the homographies between two views into a large, rank four matrix H. A brief review of this can be found in Appendix A. In this section, we review how to calculate a homography embedded in a dimension four subspace [4]. Suppose (just for now) that the dimension four subspace basis U ∈ R9,4 is known and that the linearization matrix is A as in (9). The subspace constrained DLT solution is as follows: First, calculate the solution x of AUx = 0 (in the standard smallest-singular-value way). Second, take Ux as the solution for the homography, which is then embedded in the subspace U by construction.
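The two-step subspace constrained solution can be sketched as follows (a hedged outline; `subspace_constrained_h` is a hypothetical name):

```python
import numpy as np

def subspace_constrained_h(A, U):
    """Subspace constrained DLT of Sect. 2.2: solve AUx = 0 for x by SVD,
    then return h = Ux, which lies in the column space of U by construction.

    A: (3n, 9) linearization matrix as in (9); U: (9, 4) subspace basis.
    """
    _, _, Vt = np.linalg.svd(A @ U)
    x = Vt[-1]                    # right singular vector, least singular value
    h = U @ x
    return h / np.linalg.norm(h)
```

Because h = Ux, the estimate automatically satisfies the dimension-four subspace constraint, whatever x the reduced problem yields.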

As in the normalized DLT, we also use a normalization step in this dimension-four constrained homography estimation. Suppose n (n ≥ 4) planes are available. The subspace constrained algorithm is:

1. Taking all the feature points in the n planes as a whole set, calculate the normalization transforms T and T′ for the first view and the second view, respectively.
2. For each normalized plane, calculate its homography.
3. Calculate the dimension four subspace U of these homographies.
4. For each normalized plane, calculate its subspace-U constrained homography.
5. Calculate the denormalized homographies for all the planes, as in the denormalization step of the normalized DLT.

There are two approaches¹ to calculating the dimension-four subspace U in step 3 above. One obvious approach is to employ the SVD [12] to calculate the dimension four subspace² U of the rank-four matrix H [33, 34]. We refer to this approach as the SVD-based subspace constrained approach, or SVD-Sub-Cnstr, if the SVD is employed in step 3. However, the errors in the estimated parameters produced by the DLT (step 2) cannot be modeled as independent (much less as i.i.d. Gaussian). Thus, although the subspace estimated by the SVD method is the "best" in terms of the Frobenius-norm distance, it is generally not optimal.

In the other approach, the statistical properties of the errors in the estimated parameters (the homographies from step 2) are utilized to calculate the subspace U more optimally. We refer to this approach as the statistical subspace constrained approach, or Sta-Sub-Cnstr. More formally, with the covariance matrix of the error, we first employ the bilinear approach [1, 3] or the alternating projection approach [23] to calculate a weighted rank-four approximation matrix H4 of H (see Appendix B). Then, the subspace of H4 can reasonably be taken as a solution for U.

¹For a practical approach other than Sta-Sub-Cnstr, refer to the algorithm in our companion paper [4], where more constraints are utilized to produce a more accurate estimate. The algorithm in [4] can be applied even to the case of as few as three planes.

²We remind the reader that in such a scheme there are now two stages of SVD calculation: first in the DLT for individual homographies (step 2), and here for step 3.

3 A Statistical Analysis of the Errors in Estimated Homography

In this section, we present a statistical analysis of the errors in estimated homography parameters. The covariance matrix of the errors in the nine parameters is analytically computed. First, we show how to calculate the covariance matrix of the errors for the DLT algorithm. Then, we extend this to the normalized DLT algorithm. Finally, we estimate the noise level. We assume a small noise level so that first order expansion/approximation techniques can be used.

3.1 The Case of the DLT Algorithm

Suppose the matrix A in (9) is obtained from n noise free feature matches, and that the ith noise free feature match xi, x′i is corrupted with noise (εi,1, εi,2) and (ε′i,1, ε′i,2), respectively.³

The essence of the analysis below is to represent the errors in the estimated homography in terms of the random variables {εi,1, εi,2, ε′i,1, ε′i,2} for 1 ≤ i ≤ n. Here, we use the second subscript in εi,• (or ε′i,•) to denote the x or y coordinate in the 2d images.

Using the SVD [12], A is decomposed as:

A = USVᵀ   (12)

where U ∈ R3n,3n, V ∈ R9,9, UUᵀ = I3n, VVᵀ = I9, and S = diag{s1, s2, …, s8, 0} ∈ R3n,9. The noise-free homography vector is the 9th column of V: v9. Due to the noise {εi,1, εi,2} and {ε′i,1, ε′i,2} in xi and x′i respectively, the error Ei in the ith block of A, Ai, is:

Ei = [ 0        0        0        −εi,1     −εi,2     0        Ei,{1,7}  Ei,{1,8}   ε′i,2
       εi,1     εi,2     0         0         0        0        Ei,{2,7}  Ei,{2,8}  −ε′i,1
       Ei,{3,1} Ei,{3,2} −ε′i,2    Ei,{3,4}  Ei,{3,5}  ε′i,1    0         0          0    ]   (13)

where Ei,{1,7} = x′i,2 εi,1 + xi,1 ε′i,2 = −Ei,{3,1}, Ei,{1,8} = x′i,2 εi,2 + xi,2 ε′i,2 = −Ei,{3,2}, Ei,{2,7} = −x′i,1 εi,1 − xi,1 ε′i,1 = −Ei,{3,4}, and Ei,{2,8} = −x′i,1 εi,2 − xi,2 ε′i,1 = −Ei,{3,5}. Quadratic terms εi,• ε′i,◦ in Ei have been dropped.

Define C as the transformed error matrix:

C = Uᵀ E V   (14)

From matrix perturbation theory [30, 32], the first order perturbed solution for the DLT algorithm is

v̂9 = v9 − ∑_{i=1}^{8} (ci,9 / si) vi   (15)

The second term in (15) is the error in the estimated parameters.

The entries in Ei are random variables, and so are the ci,j. Consequently, each entry ci,j of C is a linear combination of the 4n random variables {εi,1, εi,2, ε′i,1, ε′i,2} for 1 ≤ i ≤ n; and the second term in (15) is also a random vector, each entry of which is a linear combination of the 4n random variables. Thus, we can express the errors in the 9 parameters using a 9 × 4n matrix Δh ∈ R9,4n, which is the starting point of this analysis.

³Note that x in Sect. 2.1 is used for the homogeneous representation of a feature point. By a slight abuse of notation, we will also use x to represent the feature points in non-homogeneous form: with x and y coordinates as its two entries.

In order to do this analytically, we represent the error matrix E as a linear combination of 4n matrices: a stack of 4n 3n × 9 matrices, each of which represents the error component in one of the 4n "directions" {εi,1, εi,2, ε′i,1, ε′i,2} for 1 ≤ i ≤ n:

E = ∑_{i=1}^{n} ( εi,1 E4i−3 + εi,2 E4i−2 + ε′i,1 E4i−1 + ε′i,2 E4i )   (16)

For example, the (4(i − 1) + j)th matrix E4(i−1)+j, for 1 ≤ i ≤ n and 1 ≤ j ≤ 4, is

E4(i−1)+j = [ 0ᵀ … 0ᵀ [Ĕ4(i−1)+j]ᵀ 0ᵀ … 0ᵀ ]ᵀ   (17)

where 0 is a 3 × 9 zero matrix and only the ith block Ĕ4(i−1)+j is nonzero. From (13), the 3 × 9 matrix Ĕ4(i−1)+j can be calculated: see Appendix C.


Then, for each 3n × 9 matrix Ei, calculate Ci as

Ci = Uᵀ Ei V   (18)

Substituting (18) and (16) into (14), the transformed error matrix C is represented as a 3n × 9 random matrix:⁴

C = ∑_{i=1}^{n} ( εi,1 C4i−3 + εi,2 C4i−2 + ε′i,1 C4i−1 + ε′i,2 C4i )   (19)

Consequently, the second term in (15) can be computed. Specifically, take the 9 × 1 vector

ξj = −∑_{i=1}^{8} (c^j_{i,9} / si) vi   (20)

(c^j_{i,9} being the (i, 9) entry of Cj) as the jth column of Δh: Δh = [ξ1 ξ2 … ξ4n].

The error in the estimated homography is thus represented in terms of the random variables {εi,1, εi,2, ε′i,1, ε′i,2} for 1 ≤ i ≤ n:

Δ(h) = Δh e   (21)

where e = [ε1,1 ε1,2 ε′1,1 ε′1,2 … εn,1 εn,2 ε′n,1 ε′n,2]ᵀ is 4n × 1. Δ(•) here is used to denote a random vector: the errors in the vector •. Similar usages will appear in the following, not only for a vector but also for a matrix or a scalar. The general rule is: the symbol Δ(•) denotes a general random variable, which can be characterized by a stack of 4n quantities (scalars, vectors or matrices).

From (21), the error covariance matrix is

Ch = Δh Cx Δhᵀ   (22)

where Cx is the 4n × 4n covariance matrix of the noise e in the image points. In the special case where i.i.d. zero-mean, σ²-variance Gaussian noise (in the feature points) is assumed, the error covariance matrix of the homography is

Ch = σ² Δh Δhᵀ   (23)

Note that, although the analysis above looks complicated, the computation can be greatly reduced by taking into account two facts: from (20), only the first 8 entries of the 9th column of Cj are needed; and each Ej has only one nonzero 3 × 9 block in (17).

⁴In (14), E is a random matrix, which can be represented by the series of matrices {Ei} in (16). An operation on this random matrix E, such as the matrix computation in (14), is decomposed into the same operation on the series of matrices {Ei}, as in (18). The transformed random matrix C in (14) is then a linear combination of the transformed Ci, as in (19). This and similar operations apply to the random variables (vectors, scalars, or matrices) in the remainder of this paper.

3.1.1 Replacing Noise Free Data

When presenting the statistical analysis of the errors in the estimated homography above, we assumed that noise free feature points are available. We now examine this assumption.

From (15) and (20), each column ξ is a linear combination of {vi | i < 9}. This means that the matrix Δh lies in the subspace spanned by these 8 vectors. Consequently, the covariance matrix Ch in (23) and (22) has a zero singular value, and the associated singular vector is the ground truth homography.

In practice, we do not have this knowledge of the ground truth data. A practical solution, as adopted in this paper, is to use the noisy data (actually observed) instead. In assessing the impact of this approximation, we use the following measure to describe the difference:

|(h − h̄)ᵀ ui − (h − h̄)ᵀ ūi| / |(h − h̄)ᵀ ūi|   (24)

where h and h̄ denote the homographies calculated from noisy data and noise free data, respectively, and ui and ūi are the singular vectors of the covariance matrices Ch and C̄h, also from noisy data and noise free data, respectively. Equation (24) measures the difference of the errors' projections upon the directions ui and ūi, i.e., the effect of replacing the noise free data with noisy data. Experiments in Sect. 4 show that, for i < 9, the above measure is less than 0.01. This means that the difference introduced by this replacement can be overlooked.

3.1.2 Out of First-order Perturbation: Second-order Effect on the 9th Vector u9

It is quite another matter when one considers the 9th direction u9 of Ch. From the calculations and analysis above, it can be seen that, even with noisy data, Ch still has rank 8, and its null vector u9 is the calculated homography. This means that, in the direction of u9, there is no error, even in the noisy data case. This is obviously not true. The reason can be ascribed to the first-order perturbation technique employed above. Such an effect needs to be characterized.

Supposing ground truth feature points are available, u9 can be expressed, in terms of first order perturbation, as:

ū9 + ∑_{i=1}^{8} ςi ūi   (25)

where ςi ≈ 0 for 1 ≤ i ≤ 8. In practice, because ‖u9‖F = 1,

u9 = (ū9 + ∑_{i=1}^{8} ςi ūi) / ‖ū9 + ∑_{i=1}^{8} ςi ūi‖F
   = (ū9 + ∑_{i=1}^{8} ςi ūi) / √(1 + ∑_{i=1}^{8} ςi²)
   ≈ (1 − (1/2) ∑_{i=1}^{8} ςi²) (ū9 + ∑_{i=1}^{8} ςi ūi)   (26)

Thus, the projection of the error upon the u9 direction is

(u9 − ū9)ᵀ u9 = γ − 3γ² + 2γ³   (27)

where

γ = (1/2) ∑_{i=1}^{8} ςi²   (28)

The projection of the error upon ūi, for i < 9, is:

(u9 − ū9)ᵀ ūi = ςi (1 − γ) ≈ ςi   (29)

It is clear from (29) that ςi is the error's projection upon ūi, i.e., approximately the error's projection upon ui, because the difference between these two directions, measured by (24), can be ignored. (Note that h and h̄ are u9 and ū9, respectively.)

From (29), the first order perturbation of (25) suffices to characterize the errors in the directions of ūi, for i < 9. From (27), the error in the direction of u9 is zero in terms of first order perturbation; in actuality, however, it is not zero, but a second order error, γ. (Since ςi ≪ 1, the terms γ² and γ³ in (27) can also be ignored.)

To first-order perturbation, ςi is indeed a 0-mean Gaussian random variable, with its variance the ith largest singular value λi of Ch. Thus, γ in (28) is a chi-square-like random variable: Its expectation is

E(γ) = (1/2) ∑_{i=1}^{8} λi   (30)

and its variance is

var(γ) = (1/2) ∑_{i=1}^{8} λi²   (31)

In order to account for the error γ in the u9 direction, we scale the normalized homography by a factor of 1 − E(γ) and set the 9th singular value of Ch to var(γ).
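The moments (30) and (31) follow from γ being half a sum of squared independent zero-mean Gaussians; a quick Monte Carlo check (with hypothetical singular values λi, not values from the paper) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical singular values lambda_i of Ch (illustrative only)
lam = np.array([0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.001])
# varsigma_i ~ N(0, lambda_i), so gamma = (1/2) sum varsigma_i^2 as in (28)
varsigma = rng.standard_normal((200000, 8)) * np.sqrt(lam)
gamma = 0.5 * np.sum(varsigma ** 2, axis=1)
print(gamma.mean(), 0.5 * lam.sum())          # empirical vs (30)
print(gamma.var(), 0.5 * np.sum(lam ** 2))    # empirical vs (31)
```

The two printed pairs agree to within Monte Carlo error.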

3.1.3 Generalization to Other Linearized Systems

The technique above can be generalized to other parameter estimation problems that can be abstracted as solving a linear or linearized system. Suppose the data matrix A ∈ Rm,r and the parameter θ ∈ Rr satisfy the constraint Aθ = 0, which holds only approximately in practice because the data matrix A is generally corrupted with an error E. By analyzing the linearization process, E is generally represented as E = ∑i εi Ei, where εi is a random variable representing an error in the "raw" data. As was done in the homography example, the covariance matrix of the error in the estimated parameter θ is analytically calculated in the following steps:

1. Calculate the SVD factors of A: A = USVᵀ, as in (12);
2. Represent the error E = ∑ εi Ei, as in (16);
3. Calculate Ci = Uᵀ Ei V, as in (18);
4. Calculate ξj = −∑_{i=1}^{r−1} (c^j_{i,r} / si) vi, as in (20), and form Δθ as Δθ = [ξ1 ξ2 … ξn];
5. Calculate the error covariance matrix by (22) or by (23).

3.2 Extension to the Normalized DLT Algorithm

The above analysis ignored the normalization step that is almost always recommended. We now address this version of the DLT. In the normalized DLT algorithm (11), two factors have to be considered: First, T and T′ depend on the measurements and are random matrices; second, the error in the normalized matches will not be i.i.d. Gaussian noise.

From (11),

Δ(H) = Δ(T′⁻¹) H̃ T + T′⁻¹ Δ(H̃) T + T′⁻¹ H̃ Δ(T) = ΔH e   (32)

In (32), Δ(•) denotes a 3 × 3 random matrix, which can be represented by a stack of 4n 3 × 3 matrices. The operations on the random matrices in (32) are similar to those of (14), which are related by (16), (18) and (19).

3.2.1 Calculation of Δ(H̃)

The critical step in calculating Δ(H̃) in (32) is to analyze the random variable given by the inverse of the scale, 1/l in (10), where

l = √( ∑_{i=1}^{n} [(xi,1 − c1)² + (xi,2 − c2)²] / (2n) ),   c1 = (∑_{i=1}^{n} xi,1)/n,   c2 = (∑_{i=1}^{n} xi,2)/n

Define x̄i,• as the centered coordinates: x̄i,1 = xi,1 − c1 and x̄i,2 = xi,2 − c2. The error in the centered coordinates is

Δ(x̄i,1) = [−1/n … −1/n (n−1)/n −1/n … −1/n] [ε1,1 … εn,1]ᵀ
Δ(x̄i,2) = [−1/n … −1/n (n−1)/n −1/n … −1/n] [ε1,2 … εn,2]ᵀ

where (n−1)/n is the ith component. Thus, the error in the inverse of l is:

Δ(1/l) = −(1/(2n l³)) ∑_{i=1}^{n} [ x̄i,1 Δ(x̄i,1) + x̄i,2 Δ(x̄i,2) ]   (33)

The normalized image feature is x̄i,•/l. The error in it is Δ(x̄i,•/l) = x̄i,• Δ(1/l) + Δ(x̄i,•)/l, which can be expressed as pᵀi,• e. Similarly, the error in the second normalized view is p′ᵀi,• e. We stack the vectors pᵀi,• and p′ᵀi,• as

P = [p1,1 p1,2 p′1,1 p′1,2 … pn,1 pn,2 p′n,1 p′n,2]ᵀ

Pe gives the errors in the normalized coordinates. According to (21),

Δ(H̃) = Δ̃ Pe = ΔH̃ e   (34)

where Δ̃ is calculated as Δh in (21); however, ΔH̃ = Δ̃P is arranged as a stack of 4n 3 × 3 matrices.

3.2.2 Calculation of Δ(T)

Another quantity that will be used in calculating Δ(T) and Δ(T′⁻¹) is the error in the centroid of the original feature points, Δ(c•):

Δ(c•) = (1/n) ∑_{i=1}^{n} εi,•   (35)

From (10),

Δ(T) = [ Δ(1/l)   0        −c1 Δ(1/l) − Δ(c1)/l
         0        Δ(1/l)   −c2 Δ(1/l) − Δ(c2)/l
         0        0         0                    ]   (36)

Substituting (33) and (35) into (36), we calculate Δ(T). From first order approximation, (T + ΔT)⁻¹ = T⁻¹ − T⁻¹ ΔT T⁻¹, so

Δ(T′⁻¹) = −T′⁻¹ Δ(T′) T′⁻¹   (37)

where Δ(T′) can be calculated as in (36).

Substituting (34), (36) and (37) into (32), we obtain Δ(H) and rearrange it as Δ(h).

3.2.3 The Effect of the Normalization Step

In this subsection, we analyze the effect of the normalization step on the calculated homography. Here, differently from the normalized DLT algorithm, the normalization step is to scale the homography so that its Frobenius norm is 1.

Δ(h/‖h‖F) = Δ(h)/‖h‖F + h Δ(1/‖h‖F)   (38)

where Δ(h) is defined in (21), Δ(1/‖h‖F) = −(1/‖h‖F³) ∑_{i=1}^{9} hi Δ(hi), hi is the ith component of h, and Δ(hi) is the ith row of Δ(h).

In matrix terms, (38) can be expressed as:

Δ(h/‖h‖F) = Δ(h)/‖h‖F − h hᵀ Δ(h)/‖h‖F³ = (I9 − h hᵀ/‖h‖F²) Δ(h)/‖h‖F   (39)

where (I9 − h hᵀ/‖h‖F²) is a projection matrix onto the subspace orthogonal to h.

It should be emphasized that the errors in the NON-normalized h from (32) are not orthogonal to h, in contrast to the analysis in Sect. 3.1.1. However, from (39), the error in the normalized h is still orthogonal to h, due to the effect of the projection matrix in the normalization step (39). This can be interpreted from the meaning of "normalization": because the normalization essentially fixes the Frobenius norm of the homography at 1, there is no randomness in this direction.

In practice, there is also error in the direction of the estimated homography, because we calculate the homography from noisy data. This effect of the noisy data is the same as that in Sect. 3.1.1. The analysis concerning the second order effect, in Sect. 3.1.2, also applies to the normalized DLT homography estimation.

3.3 Noise Level Estimation

Now, we have represented the errors in the homography parameters in terms of random variables, i.e. the noise in the image feature points. For i.i.d. Gaussian noise or general noise in the feature points, the covariance matrix of the errors in the parameters can be obtained by (23) or (22). To do so, we need to know some statistical properties of the noise in the image points. Here, we consider the simplest case, where the noise in the image points is i.i.d. zero-mean, σ²-variance Gaussian noise. In this case, the noise level σ needs to be estimated. The major tool, as in the sections above, is again first order approximation.

Due to the noise in the image points, there exists a difference between the projection Hx and x′. In the following, we will show that the projection error can also be represented as a random variable, which depends on the noise in the images: {εi,1, εi,2, ε′i,1, ε′i,2} for 1 ≤ i ≤ n.

Suppose that the noise free homography is

H = [ h1 h2 h3
      h4 h5 h6
      h7 h8 h9 ]


H projects each point {xi,1, xi,2} of the first view onto the second view; taking x′i,1 as an example:

x′i,1 = (h1 xi,1 + h2 xi,2 + h3) / (h7 xi,1 + h8 xi,2 + h9)   (40)
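For concreteness, (40) (together with the analogous expression for the second coordinate) is the usual dehomogenization of Hx; a minimal sketch with an illustrative helper:

```python
import numpy as np

def project_point(H, x):
    """Map an inhomogeneous 2D point x = (x1, x2) through a 3x3 homography H,
    dividing by the third homogeneous coordinate as in (40)."""
    p = H @ np.array([x[0], x[1], 1.0])
    return p[:2] / p[2]

# A pure translation by (2, -1): the point (2, 3) maps to (4, 2).
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])
print(project_point(H, (2.0, 3.0)))        # -> [4. 2.]
print(project_point(3.0 * H, (2.0, 3.0)))  # -> [4. 2.]  (H is defined up to scale)
```

The second call illustrates why only the direction of h matters, which is why the analysis works with the normalized (unit-norm) homography vector.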

Due to the noise, the projection onto the second view is

([h1 + Δ(h1)](xi,1 + εi,1) + [h2 + Δ(h2)](xi,2 + εi,2) + h3 + Δ(h3)) / ([h7 + Δ(h7)](xi,1 + εi,1) + [h8 + Δ(h8)](xi,2 + εi,2) + h9 + Δ(h9))   (41)

To first order, (a + Δa)/(b + Δb) ≈ a/b + Δa/b − aΔb/b² approximately holds. From this, (41) equals

x′i,1 + A/E − BD/E²   (42)

where A = h1εi,1 + h2εi,2 + xi,1Δ(h1) + xi,2Δ(h2) + Δ(h3), B = h7εi,1 + h8εi,2 + xi,1Δ(h7) + xi,2Δ(h8) + Δ(h9), D = h1xi,1 + h2xi,2 + h3, and E = h7xi,1 + h8xi,2 + h9. Note that second order terms, like Δ(h•)εi,◦, have been dropped. Including the noise in the observed x′i,1, the projection error is actually A/E − BD/E² − ε′i,1. It can be represented as qᵀi,1e. Similarly, from the projection of the second coordinate, qᵀi,2e can be obtained. Stacking the qᵀi,• gives

Q = [q1,1 q1,2 . . . qn,1 qn,2]ᵀ   (43)
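The first order identity for a perturbed ratio, used to pass from (41) to (42), is easy to verify numerically with arbitrary values:

```python
# Numerical check of the first order identity behind (42):
# (a + da) / (b + db) ≈ a/b + da/b - a*db/b**2
a, b = 3.0, 5.0
da, db = 1e-4, -2e-4

exact = (a + da) / (b + db)
first_order = a / b + da / b - a * db / b**2

# The residual of the first order formula is second order in the
# perturbations (~1e-9 here), while ignoring the perturbations entirely
# leaves a first order error (~4e-5).
print(abs(exact - first_order))
print(abs(exact - a / b))
```

This is exactly why the dropped terms Δ(h•)εi,◦ are negligible: they are products of two small perturbations.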

In practice, the projection error is actually available, as μ. Then, Qe = μ approximately holds. Because i.i.d. Gaussian noise is assumed here, ‖qi,•‖²F σ² = μ²2(i−1)+•. Then, the noise level is estimated as:

σ = ‖μ‖F / ‖Q‖F   (44)

Note that the noise levels in the two views can be assumed to be different, up to a known scale. Suppose σ1 and σ2 are the noise levels in the first and second views, respectively, both unknown, and suppose further that σ1 = κσ2 with κ known. Then, by multiplying the (4•+1)th and (4•+2)th columns of Q by the factor κ, we can calculate σ2 according to (44), and consequently σ1 = κσ2.
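A small synthetic check of (44); here Q is a random stand-in for the matrix stacked in (43), which is enough to see that the ratio of Frobenius norms recovers the noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the setup of (43)-(44): Q stacks the linearized coefficient
# rows q_i, e is the (unobserved) i.i.d. Gaussian noise vector, and
# mu = Q e plays the role of the observed projection errors.
n = 100                                    # number of correspondences (assumed)
Q = rng.standard_normal((2 * n, 4 * n))    # random stand-in for the true Q
sigma_true = 0.5
e = sigma_true * rng.standard_normal(4 * n)
mu = Q @ e

# Noise level estimate from (44): sigma = ||mu||_F / ||Q||_F
sigma_hat = np.linalg.norm(mu) / np.linalg.norm(Q)
print(sigma_hat)  # concentrates around 0.5 for large n
```

The estimate works because E‖Qe‖² = σ²‖Q‖²F for i.i.d. zero-mean noise; with many correspondences the ratio concentrates around the true σ.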

4 Simulations of the Errors in the Homography Coefficients

The above concludes our methodology. Our purpose now is to confirm the validity of the correlation information the above methodology provides, and then to demonstrate the effectiveness of that information.

In this section, we carry out simulations to confirm the statistical analysis in Sect. 3. From the theory in Sect. 3, the statistical properties of the errors in the estimated homography can be analytically calculated while the homography is computed by the normalized DLT algorithm.

We compare this theoretical result with simulations. Of course, we know the "ground truth" data in simulations. Thus, first, from the noise-free data, we calculate its homography and the covariance matrix of the errors in the estimated homography from (23). After adding i.i.d. Gaussian noise to the feature points, we similarly calculate the estimated homography and the covariance matrix from the noisy data. This process with noisy data is repeated 20,000 times to obtain enough data for statistical properties. Note that the ground truth data is the same for all 20,000 trials; each time, fresh random noise is added to the feature points.
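The trial protocol above can be sketched as follows; for brevity the sketch uses a plain (unnormalized) DLT and 200 trials instead of 20,000, and every name is illustrative rather than the paper's implementation:

```python
import numpy as np

def dlt(x, xp):
    """Plain (unnormalized) DLT: estimate the unit 9-vector h from point
    correspondences x -> xp as the right singular vector of A associated
    with the smallest singular value."""
    rows = []
    for (x1, x2), (y1, y2) in zip(x, xp):
        p = [x1, x2, 1.0]
        rows.append([0.0, 0.0, 0.0] + [-c for c in p] + [y2 * c for c in p])
        rows.append(p + [0.0, 0.0, 0.0] + [-y1 * c for c in p])
    h = np.linalg.svd(np.asarray(rows))[2][-1]
    return h if h[8] >= 0 else -h          # fix the overall sign of h

rng = np.random.default_rng(1)
H_true = np.array([[1.0, 0.1, 5.0], [0.05, 1.1, -3.0], [1e-3, -2e-3, 1.0]])
x = rng.uniform(0.0, 100.0, size=(50, 2))
proj = np.column_stack([x, np.ones(len(x))]) @ H_true.T
xp = proj[:, :2] / proj[:, 2:]

h_true = H_true.flatten() / np.linalg.norm(H_true)   # ground truth unit h

# Monte Carlo trials over fresh noise (the paper uses 20,000; 200 here).
errors = []
for _ in range(200):
    e1 = 0.1 * rng.standard_normal(x.shape)    # noise in the first view
    e2 = 0.1 * rng.standard_normal(xp.shape)   # noise in the second view
    errors.append(dlt(x + e1, xp + e2) - h_true)

C_emp = np.cov(np.array(errors).T)   # empirical 9x9 error covariance
```

In the paper, this empirical statistic is compared against the covariance predicted analytically by (23), via the normalized indexes defined below.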

Suppose, by (23), the covariance matrices calculated from the noise-free feature points and from the noisy ones are C and C̄, respectively (C̄ is different in every trial):

C = U diag{λ1 λ2 . . . λ8 0} Uᵀ and

C̄ = Ū diag{λ̄1 λ̄2 . . . λ̄8 0} Ūᵀ

We calculate three types of indexes, for 1 ≤ i ≤ 8:

ρi = (ū9 − u9)ᵀ ui / √λi   (45)

ρ̄i = (ū9 − u9)ᵀ ūi / √λ̄i   (46)

τi = [(ū9 − u9)ᵀ ui − (ū9 − u9)ᵀ ūi] / |(ū9 − u9)ᵀ ui|   (47)

Note that ū9 and u9 are the estimated homography from the noisy data and the ground truth homography, respectively, from the analysis in Sect. 3.1.1. The numerator parts of ρi and ρ̄i are the errors projected onto the directions of ui and ūi, respectively. Because C and C̄ are the covariance matrices of the errors ū9 − u9, both ρi and ρ̄i should follow a zero-mean, unit-variance Gaussian distribution. τi quantifies the difference caused by replacing the noise-free data with noisy data.

Fig. 1 Simulations of ρi for 1 ≤ i ≤ 8

The simulations in Figs. 1 and 2 show that both ρi and ρ̄i obey the zero-mean, unit-variance Gaussian distribution. From the fact that ρi in Fig. 1 is a zero-mean, unit-variance Gaussian variable, we can conclude that the replacement of the noise-free data by noisy data can be overlooked. This is also confirmed by the simulations in Fig. 3, where we can see that the magnitude of τi exceeds 0.01 in very few cases.

Consider now the 9th direction. As discussed in Sect. 3.1.2, we scale the normalized homography by a factor of 1 − E(γ) and set the 9th singular value of C to var(γ), as in (30) and (31). Here, we use simulations to validate this approach. We only simulate the projection of the errors onto the direction u9 (because (ū9 − u9)ᵀ u9 = −(ū9 − u9)ᵀ ū9). We know its expectation E(γ), and in simulations we can furthermore calculate γ from (29) and (28). Thus, we calculate the following indexes:

ε = (ū9 − u9)ᵀ u9 − E(γ)   (48)

ε̄ = (ū9 − u9)ᵀ u9 − γ   (49)

Note that in practice, we can only calculate E(γ); we calculate ε̄ only for the purpose of validating (28).

From the analysis in Sect. 3.1.2, ε̄ should be almost 0 compared with ε, because from (27) ε̄ only contains the 2nd and 3rd order terms of γ, whereas ε is a Chi-square-like random variable. From the simulations of ε and ε̄ in Fig. 4, the error ε̄ can be totally overlooked compared with ε (note the scale of up to 10⁻⁵). This means that the error (ū9 − u9)ᵀ u9 is almost entirely modeled by γ in (28). The simulation of ε shows that ε is a Chi-square-like random variable, also confirming the rationality of (30) and (31).


Fig. 2 Simulations of ρi for 1 ≤ i ≤ 8

5 Simulation Result of Subspace Constrained Homography Estimation

It has been shown [2] that, for the case of more than 4 planes over 2 views, the accuracy of the homographies can be improved by utilizing the rank-4 constraint. However, the experimental setting in [2] was impractical, so that we could avoid the complications that the current paper now addresses (the SVD being sub-optimal in the presence of non-i.i.d. Gaussian noise). In this section, we will show that the mapping accuracy of the homographies, in a more practical setting, can be improved by employing the statistical properties of the homography coefficients. Because we need the ground truth data in the comparison, we again resort to simulations.

First, we compare the subspace constrained homography estimation in two cases. One is to use the SVD [12] to calculate the rank-4 subspace from more than 4 homographies, and then use the subspace constrained method to refine each homography. We refer to this method as SVD-Sub-Cnstr. The other is the same, except that we use the correlation information derived in this paper in calculating the rank-4 subspace. More formally, we use the Bilinear approach⁵ in [1, 3] to calculate the rank-4 weighted approximation matrix H4. Then the subspace spanned by H4 is taken as the solution for the subspace. We refer to this second method as Sta-Sub-Cnstr. From Fig. 5, we can see that the general SVD based method SVD-Sub-Cnstr even increases the mapping error, compared with the normalized DLT algorithm. The superiority of Sta-Sub-Cnstr is easily seen in Fig. 5.

⁵ The alternate projection (AP) approach in [23] achieves the same aim. However, the Bilinear approach in [1, 3] is preferred here because the errors in each homography can reasonably be assumed to be independent of those in another homography.
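For reference, the core step of the unweighted SVD-Sub-Cnstr (rank-4 truncation of the stacked homography vectors) can be sketched as follows; the ground-truth columns are built from the model of Appendix A, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns h_i = vec of (R - t v_i^T) satisfy the rank-4 model (Appendix A).
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.sign(np.linalg.det(q))           # a random rotation
t = rng.standard_normal(3)
H = np.column_stack([(R - np.outer(t, rng.standard_normal(3))).flatten()
                     for _ in range(8)])    # 9 x 8, rank 4
H_noisy = H + 0.01 * rng.standard_normal(H.shape)

# SVD-Sub-Cnstr core step: best (unweighted) rank-4 approximation.
U, s, Vt = np.linalg.svd(H_noisy, full_matrices=False)
H4 = U[:, :4] * s[:4] @ Vt[:4]

print(np.linalg.norm(H_noisy - H), np.linalg.norm(H4 - H))
```

Sta-Sub-Cnstr replaces this plain truncation with a weighted low-rank fit (Appendix B) that accounts for the derived error covariances, which is precisely what the unweighted SVD ignores when the noise is not i.i.d. Gaussian across coefficients.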

Fig. 4 Simulations of ε and ε̄

The next experiment shows how the statistical properties can be "intelligently" employed in calculating the rank-4 subspace. In the simulations above, we add the same level of noise to the feature points in all planes. In the following experiment, we add equal-level noise to the first n − 1 planes, and much stronger noise to the last plane. Intuitively, in the SVD based method SVD-Sub-Cnstr, the other planes with weak noise will be affected by the plane that is severely polluted by noise. What will happen with Sta-Sub-Cnstr? Note that the plane with the stronger noise is treated the same as the others when using the subspace constrained methods, whether SVD-Sub-Cnstr or Sta-Sub-Cnstr; i.e., we do not use any knowledge of how the last plane differs from the other planes in calculating the subspace.

Here we consider the cases of n ≥ 6 planes, where the nth plane is severely corrupted (in this example, five times the noise level of the other n − 1 planes). We calculate the homographies in two rounds. First, we calculate all n homographies, by the normalized DLT algorithm, SVD-Sub-Cnstr and Sta-Sub-Cnstr. In this round, we have to consider two different mapping errors: one for the first n − 1 planes (denoted "SVD-Sub-Cnstr, 1st round", "Sta-Sub-Cnstr, 1st round" and "Normalized DLT" in Fig. 6a) and the other for the nth plane (in Fig. 6b). In the second round, we calculate only the first n − 1 homographies, by the three methods (discarding the nth plane). These are also shown in Fig. 6a, denoted "SVD-Sub-Cnstr, 2nd round" and "Sta-Sub-Cnstr, 2nd round". For the normalized DLT algorithm, the results are the same in the two rounds. In total, we list 8 curves in Fig. 6.

It should be noted that in Fig. 6a, for example, although the abscissa is 6 (i.e., n), only the first 5 (i.e., n − 1) planes are used to calculate the subspace constrained homographies in the second round. Though in the first round we jointly calculate all 6 (i.e., n) homographies, we only evaluate the performance of the first 5 (i.e., n − 1). Thus, we can compare the mapping accuracy of the same 5 (i.e., n − 1) homographies across the two rounds.

Fig. 5 Simulations of mapping errors, compared with the normalized DLT algorithm

The advantage of using the statistical properties in Sta-Sub-Cnstr can be seen in two respects. First, the first n − 1 planes are NOT affected by the nth, severely corrupted plane; instead, this badly corrupted plane helps to improve the accuracy of the other n − 1 planes: on the first n − 1 planes, the mapping errors in the first round are smaller than those in the second round. For the SVD based method SVD-Sub-Cnstr, it is quite another matter: the badly corrupted plane deteriorates the accuracy of the other planes. On the first n − 1 planes, the mapping errors in the first round are larger than those in the second round (let alone their accuracy compared with the normalized DLT or Sta-Sub-Cnstr).

Second, the badly corrupted plane itself is also improved by utilizing the statistical properties in Sta-Sub-Cnstr, as can be seen from Fig. 6b.


Fig. 6 Simulations of mapping errors, compared with the normalized DLT algorithm, when one plane is severely corrupted with noise

6 Conclusion

In this paper, we show how to analytically compute the statistical properties (specifically, correlations) of the errors in parameters estimated by the (normalized) DLT. We specialize this to homography parameters. Simulations confirm the results. To illustrate the usefulness of being able to derive such information, we consider the subspace constrained method for estimating a collection of homographies. The results of these simulations not only confirm the usefulness but also provide interesting illustrations of precisely how excessive noise in part of the data influences the outcomes in terms of the individual refined parameters.

Though we have focused on homography estimation, our work is potentially useful in many problems where the estimated parameters are used as the input for further analysis. A direct application may be to employ the same techniques in the calculation of the induced dimension-4 homology subspace in the cases of two planes over multiple views or multiple planes over multiple views [33, 34].

Appendix A: Rank-4 Constraint

First, we cite Result 12.1 on p. 312 of [15], which describes the relationship between a homography and the projection matrices. Given the projection matrices for 2 views

P = [I|0] P′ = [R|t] (50)

and the ith plane defined by πiᵀX = 0 with πi = [viᵀ 1]ᵀ, the homography induced by the plane is x′ = Hix, with the matrix representation:

Hi = R − tviᵀ   (51)

Note that this is a particular representation (we call it the canonical representation): all matrices related to this matrix by a scale are also representations of the same homography.

In some applications, we need to relate the matrix homography Hi to its vector form hi. Suppose

Hi =
⎡ h1,i h2,i h3,i ⎤
⎢ h4,i h5,i h6,i ⎥
⎣ h7,i h8,i h9,i ⎦

and hi is defined to be hi = [h1,i h2,i . . . h9,i]ᵀ. The matrix H = [h1 h2 . . . hn]9,n, whose columns are homographies in canonical form, can be expressed as follows, in terms of R, t, and {vi}:

H = vec(Rᵀ)[1 1 . . . 1]1,n − Ut[v1 v2 . . . vn]3,n   (52)

where

Ut =
⎡ t1I3 ⎤
⎢ t2I3 ⎥
⎣ t3I3 ⎦ 9,3

From (52), the homography matrix H = [h1 h2 . . . hn]9,n has rank 4. In addition, the homography matrix H in (52) has a special structure that can be employed to produce more constraints, so that even 2 homographies suffice to calculate the dimension-4 subspace [4]. However, in this paper, we are only interested in the rank-4 constraint implied by (52).
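The rank-4 claim can be checked numerically. The sketch below builds H via (52), using the column-vec convention (so vec(Rᵀ) corresponds to the row-major flattening of R), and compares one column against the canonical form (51); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.sign(np.linalg.det(q))          # a random rotation
t = rng.standard_normal(3)
V = rng.standard_normal((3, 6))            # v_1 ... v_n as columns, n = 6

# U_t stacks t_k * I_3 into a 9 x 3 matrix, as in the appendix.
Ut = np.vstack([t[0] * np.eye(3), t[1] * np.eye(3), t[2] * np.eye(3)])

# (52): H = vec(R^T) 1^T - U_t [v_1 ... v_n]; under column-vec,
# vec(R^T) is R.flatten() (row-major) here.
H = np.outer(R.flatten(), np.ones(V.shape[1])) - Ut @ V

# Each column must agree with the canonical homography R - t v_i^T of (51),
# and the stacked matrix has rank 4 despite having 6 columns of length 9.
col0 = (R - np.outer(t, V[:, 0])).flatten()
print(np.allclose(H[:, 0], col0), np.linalg.matrix_rank(H))  # -> True 4
```

Every column lies in the 4-dimensional span of vec(Rᵀ) and the three columns of Ut, which is exactly the rank-4 constraint exploited in Sect. 5.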


Appendix B: Definition of the Weighted Rank-r Approximation Matrix

We suppose zero-mean noise in the entries of the matrix M ∈ Rm,n, but we do not assume row or column independence. In order to characterize the noise in M, we first rearrange M as a vector vec(M) ∈ Rmn,1. Suppose the covariance matrix of the noise in vec(M) is C. The weighted rank-r approximation matrix of M is defined to be the Mr with the properties: rank(Mr) = r and Mr minimizes the objective function (vec(M − X))ᵀC⁻vec(M − X). Methods for finding the rank-r approximation matrix include the Bilinear approach in [1, 3] and the alternate projection (AP) approach in [23].
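When C is (a multiple of) the identity, the weighted objective reduces to the plain Frobenius norm, and by the Eckart–Young theorem the minimizer is simply the truncated SVD; a small numerical check with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 5))

# With C = I the weighted objective is the squared Frobenius norm, and
# the rank-r minimizer is the truncated SVD (Eckart-Young).
r = 2
U, s, Vt = np.linalg.svd(M, full_matrices=False)
Mr = U[:, :r] * s[:r] @ Vt[:r]
best = np.linalg.norm(M - Mr)

# No random rank-r candidate beats the truncated SVD.
for _ in range(100):
    X = rng.standard_normal((6, r)) @ rng.standard_normal((r, 5))
    assert np.linalg.norm(M - X) >= best

print(best**2, np.sum(s[r:] ** 2))  # both equal the discarded singular-value energy
```

For a general (non-identity) C this simple truncation is no longer optimal, which is why the iterative Bilinear and AP approaches are needed.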

Appendix C: Definition of E4(i−1)+k in (17) in Sect. 3.1

E4(i−1)+1 =
⎡  0       0   0  −1      0   0   x′i,2   0   0 ⎤
⎢  1       0   0   0      0   0  −x′i,1   0   0 ⎥
⎣ −x′i,2   0   0   x′i,1  0   0   0       0   0 ⎦

E4(i−1)+2 =
⎡ 0   0       0   0  −1      0   0   x′i,2   0 ⎤
⎢ 0   1       0   0   0      0   0  −x′i,1   0 ⎥
⎣ 0  −x′i,2   0   0   x′i,1  0   0   0       0 ⎦

E4(i−1)+3 =
⎡ 0   0   0   0     0     0   0      0      0 ⎤
⎢ 0   0   0   0     0     0  −xi,1  −xi,2  −1 ⎥
⎣ 0   0   0   xi,1  xi,2  1   0      0      0 ⎦

E4(i−1)+4 =
⎡  0      0     0   0   0   0   xi,1   xi,2   1 ⎤
⎢  0      0     0   0   0   0   0      0      0 ⎥
⎣ −xi,1  −xi,2 −1   0   0   0   0      0      0 ⎦

References

1. Chen, P.: An investigation of statistical aspects of linear subspace analysis for computer vision applications. Ph.D. Thesis, Monash University (2004)

2. Chen, P., Suter, D.: An analysis of linear subspace approaches for computer vision and pattern recognition. Int. J. Comput. Vis. 68(1), 83–106 (2006)

3. Chen, P., Suter, D.: A bilinear approach to the parameter estimation of a general heteroscedastic linear system, with application to conic fitting. J. Math. Imaging Vis. 28(3), 191–208 (2007)

4. Chen, P., Suter, D.: Rank constraints for homographies over two views: Revisiting the rank four constraint. Int. J. Comput. Vis. (2008, to appear)

5. Chernov, N.: On the convergence of fitting algorithms in computer vision. J. Math. Imaging Vis. 27(3), 231–239 (2007)

6. Chernov, N., Lesort, C.: Statistical efficiency of curve fitting algorithms. Comput. Stat. Data Anal. 47(4), 713–728 (2004)

7. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: On the fitting of surfaces to data with covariances. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1294–1303 (2000)

8. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: A new approach to constrained parameter estimation applicable to some computer vision problems. Image Vis. Comput. 22(2), 85–91 (2004)

9. Chojnacki, W., Brooks, M.J., van den Hengel, A., Gawley, D.: From FNS to HEIV: a link between two vision parameter estimation methods. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 264–268 (2004)

10. Chum, O., Pajdla, T., Sturm, P.: The geometric error for homographies. Comput. Vis. Image Underst. 97(1), 86–102 (2005)

11. Chum, O., Werner, T., Matas, J.: Two-view geometry estimation unaffected by a dominant plane. In: Proc. Conf. Computer Vision and Pattern Recognition (1), pp. 772–779 (2005)

12. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins Press, Baltimore (1996)

13. Haralick, R.M.: Propagating covariance in computer vision. In: Proc. of 12th ICPR, pp. 493–498 (1994)

14. Haralick, R.M.: Propagating covariance in computer vision. Int. J. Pattern Recogn. Artif. Intell. 10(5), 561–572 (1996)

15. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge Univ. Press, Cambridge (2003)

16. Jain, A.K., Mao, J., Duin, R.: Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 22(1), 4–37 (2000)

17. Kanatani, K.: Unbiased estimation and statistical analysis of 3-D rigid motion from two views. IEEE Trans. Pattern Anal. Mach. Intell. 15(1), 37–50 (1993)

18. Kanatani, K.: Statistical bias of conic fitting and renormalization. IEEE Trans. Pattern Anal. Mach. Intell. 16(3), 320–326 (1994)

19. Kanatani, K.: Statistical Optimization for Geometric Computation: Theory and Practice. Elsevier, Amsterdam (1996)

20. Kanatani, K.: Uncertainty modeling and model selection for geometric inference. IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1307–1319 (2004)

21. Kanatani, K.: Statistical optimization for geometric fitting: Theoretical accuracy bound and high order error analysis. Int. J. Comput. Vis. (2008, in print)

22. Leedan, Y., Meer, P.: Heteroscedastic regression in computer vision: Problems with bilinear constraint. Int. J. Comput. Vis. 37(2), 127–150 (2000)

23. Manton, J.H., Mahony, R., Hua, Y.: The geometry of weighted low-rank approximations. IEEE Trans. Signal Process. 51(2), 500–514 (2003)

24. Mühlich, M., Mester, R.: A considerable improvement in non-iterative homography estimation using TLS and equilibration. Pattern Recogn. Lett. 22(11), 1181–1189 (2001)

25. Mühlich, M., Mester, R.: The role of total least squares in motion analysis. In: ECCV, pp. 305–321 (1998)

26. Mühlich, M., Mester, R.: Subspace methods and equilibration in computer vision. In: Scandinavian Conference on Image Analysis (2001)

27. Mühlich, M., Mester, R.: Unbiased errors-in-variables estimation using generalized eigensystem analysis. In: ECCV Workshop SMVP, pp. 38–49 (2004)

28. Nadabar, S.G., Jain, A.K.: Parameter estimation in Markov random field contextual models using geometric models of objects. IEEE Trans. Pattern Anal. Mach. Intell. 18(3), 326–329 (1996)

29. Nayak, A., Trucco, E., Thacker, N.A.: When are simple LS estimators enough? An empirical study of LS, TLS, and GTLS. Int. J. Comput. Vis. 68(2), 203–216 (2006)

30. Stewart, G.W., Sun, J.G.: Matrix Perturbation Theory. Academic Press, San Diego (1990)

31. Taubin, G.: Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 13(11), 1115–1138 (1991)

32. Wilkinson, J.H.: The Algebraic Eigenvalue Problem. Clarendon, Oxford (1965)

33. Zelnik-Manor, L., Irani, M.: Multi-view subspace constraints on homographies. In: Proc. Int'l Conf. Computer Vision, pp. 710–715 (1999)

34. Zelnik-Manor, L., Irani, M.: Multi-view subspace constraints on homographies. IEEE Trans. Pattern Anal. Mach. Intell. 24(2), 214–223 (2002)

35. Zhang, Z.: Parameter estimation techniques: A tutorial with application to conic fitting. Image Vis. Comput. 15, 59–76 (1997)

36. Zhang, Z.: Determining the epipolar geometry and its uncertainty: A review. Int. J. Comput. Vis. 27(2), 161–195 (1998)

37. Zhang, Z.: On the optimization criteria used in two-view motion analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 717–729 (1998)

Pei Chen received two Ph.D. degrees, on wavelets and computer vision, respectively, from Shanghai Jiaotong University in 2001 and from Monash University in 2004. He worked as a postdoctoral researcher with Monash University; as a Senior Research Engineer with Motorola Labs; and then as a Research Professor with the Shenzhen Institute of Advanced Integration Technology, CAS/CUHK, China. He is currently a Professor with the School of Information Science and Technology, Sun Yat-sen University, China. His main research interests include subspace analysis in computer vision, structure from motion, and wavelet applications in image processing.

David Suter holds the position of Professor of Computer Science in the School of Computer Science at The University of Adelaide, South Australia. He was previously Professor of Computer Systems in the Department of Electrical and Computer Systems Engineering at Monash University. During 2008–2010, Professor Suter will also serve as a member of the Australian Research Council College of Experts. He is a Senior Member of the IEEE. His main research interests are Image Processing, Computer Vision, Video Compression, Computer Graphics and Visualization, Data Mining and Artificial Intelligence. He currently serves on the editorial boards of four international journals: the Journal of Mathematical Imaging and Vision; Machine Vision and Applications; IPSJ Transactions on Computer Vision and Applications; and the International Journal of Computer Vision. He was previously a member of the editorial board of the International Journal of Image and Graphics.