8/2/2019 procams2006_okatani_t
1/8
Autocalibration of an Ad Hoc Construction of Multi-Projector Displays
Takayuki Okatani and Koichiro Deguchi
Graduate School of Information Sciences, Tohoku University, 6-6-01 Aramaki Aza Aoba, 980-8579 Sendai, Japan
{okatani,kodeg}@fractal.is.tohoku.ac.jp
Abstract
In this paper, we present a method for geometric calibration of a multi-projector display system. It enables easy calibration of the system: the user needs only to take a picture of the images projected on the planar screen with a hand-held camera to accomplish the entire calibration. Using this calibration method, one can realize a large, high-resolution image display by placing multiple projectors on a desk etc. in an arbitrary manner. The calibration requires four or more projectors and includes not only the alignment of the images but also overall rectification of the stitched image. The problem to be solved is to recover the Euclidean structure of the system (at least partially); its difficulty arises from the fact that the intrinsics of the projectors, especially the focal lengths, need to be estimated along with the other parameters. For this problem, we show uniqueness of solutions and several critical configurations, which were unclear in previous studies, and then present an algorithm. We present several experimental results that demonstrate the feasibility of the proposed method.
1 Introduction
Systems using multiple projectors to display a seamless, large, high-resolution image on a planar surface (e.g. a screen) have been studied and already put into use [2, 8, 10, 7, 11]. An extensive survey is given in [1].
One of the most fundamental problems in realizing high-quality images with such systems is geometric calibration. It can be divided into two subproblems. One is to make the projectors share a common coordinate system so that the projected images are precisely stitched into a seamless image. The other is to rectify the stitched image so that it has a rectangular shape with the correct aspect ratio. For the former calibration, cameras are used to take images of the projected images on the screen and to obtain the necessary information. This calibration needs to be highly accurate, e.g. with subpixel accuracy. For the latter calibration, fiducial markers on the screen [13] or physical sensors measuring the world coordinates, such as tilt sensors [9], are often used. Although it need not be as accurate as the former, this calibration is necessary, too.
In this paper, we present an easy calibration method for multi-projector display systems. With it, even when projectors are placed arbitrarily on a desk etc. toward a planar screen, the above two calibrations can be performed by just taking a picture of the projected images on the screen with a still camera. Multi-projector display systems are usually designed and constructed as dedicated systems; the proposed method instead enables flexible, ad hoc construction of a multi-projector display. As long as a sufficient number (at least four) of projectors and a device for distributing video signals to those projectors are available, the system can be constructed anywhere. From an application point of view, the study [9] by Raskar and Beardsley and the one [12] by Steele et al. are the closest to this. However, the method in [9] uses tilt sensors, and the method in [12] assumes a calibrated stereo pair of a camera and a projector. We use only ordinary data projectors and a camera, and the geometry among the camera and projectors is assumed to be unknown.

The problem to be solved here is formulated as autocalibration of a projector-camera system. More specifically,
it is to reconstruct (at least partially) the Euclidean struc-
ture of the system from only an image taken by the camera,
which requires determining the intrinsics of the projectors. The intrinsics other than the focal lengths are usually constant and can be determined in advance, while the
focal lengths will vary whenever the projectors are (re)set,
and therefore, they need to be estimated along with other
parameters such as the extrinsics.
In [6], Raij and Pollefeys deal with this very problem. Using a dedicated multi-projector display system, they demonstrate that the autocalibration is possible. They assume a fully calibrated camera and partially calibrated projectors; to be specific, for the projectors, the focal lengths are unknown, and the vertical component of the principal points is also unknown but assumed the same for all the projectors. They then present an algorithm based on nonlinear minimization to estimate those parameters. However, uniqueness of solutions and critical configurations are not shown, although uniqueness is implied by a plot of the objective
function that they minimize. In [5], we deal with a similar autocalibration problem for a projector-camera system, where calibrated projectors are assumed.

In this paper, we consider a setting different from [6], in which the camera is fully uncalibrated and the projectors are partially calibrated, to conform to several requirements of the flexible system mentioned above. For this setting, we discuss uniqueness of solutions and then present a few critical configurations, which can actually happen and need to be carefully considered when implementing numerical algorithms. These are shown in Section 3. In Section 2, we formulate the problem of calibrating the multi-projector display system. A numerical algorithm is presented in Section 4, and several experimental results are shown in Section 5.
Throughout this paper, we use the vector-matrix notation
used in the textbook [3]; vectors are in bold face (e.g. v)
and matrices in the Courier font (e.g. M).
2 Representation of autocalibration constraints
2.1 Raij-Pollefeys method
The Raij-Pollefeys method [6] is based on the formulation given by Triggs [14] for the problem of autocalibration of cameras from planar scenes. It can be summarized as follows. Let $K_c$ and $K_p$ ($p = 1, \ldots, n$) be the intrinsics matrices of the camera and the projectors, respectively. Denoting the homography between the camera and a projector by $H_{cp}$ ($p = 1, \ldots, n$), the autocalibration constraints can be represented as

$$\mathbf{x}_c^\top (K_c K_c^\top)^{-1} \mathbf{x}_c = 0, \qquad (1a)$$
$$(H_{cp} \mathbf{x}_c)^\top (K_p K_p^\top)^{-1} (H_{cp} \mathbf{x}_c) = 0, \quad p = 1, \ldots, n, \qquad (1b)$$

where $\mathbf{x}_c = \frac{1}{\sqrt{2}}(\mathbf{x} \pm i\mathbf{y})$ is the image of the circular points, a complex vector with four degrees of freedom. These
equations are nonlinear with respect to the unknowns to be
determined. Thus, they minimize the algebraic errors of the
above equations to compute the intrinsics of the projectors
and the circular points. They then calculate calibrated versions of $H_{cp}$, from which the extrinsics of the projectors etc. are determined and the Euclidean structure is recovered.
Since each of the above equations gives a constraint of two degrees of freedom, there are 2n + 2 constraints. It then follows from a counting argument that up to 2n + 2 parameters could be estimated. However, merely representing the available constraints as above does not prove uniqueness of solutions or yield details of the critical configurations.
As in [14], there are multiple representations of the con-
straints and Eq.(1) is merely one of them. In order to discuss
uniqueness or critical configurations, we need to use a direct
representation of the constraints in terms of the raw, physi-
cal parameters, and then examine it analytically, as will be
done in the next section.
2.2 A critical configuration
There are several critical configurations for the autocalibration problem considered here, as will be shown in detail later. One of them can pose a problem for solutions based on Eq.(1): the case where a projector has its optical axis perpendicular to the screen plane. In this case, the focal
length of the projector and the distance from the projector to
the screen plane are mutually coupled and can never be de-
coupled. This is intuitively reasonable. When the projector
has the optical axis perpendicular to the screen, it follows
that the projector projects an already correctly rectified im-
age. If so, even when the projector arbitrarily moves along
its optical axis, there should always exist a focal length
value such that the projected image remains unchanged.
Algebraically, this is explained as follows. Suppose that only the focal length f is unknown among the intrinsics. Then, Kp can be represented as a diagonal matrix by applying appropriate normalization to the image coordinates. Thus, the homography between the screen and the projector is given as

$$H_{sp} \propto K_p \begin{pmatrix} r_{11} & r_{12} & t_1 \\ r_{21} & r_{22} & t_2 \\ r_{31} & r_{32} & t_3 \end{pmatrix} = \begin{pmatrix} f r_{11} & f r_{12} & f t_1 \\ f r_{21} & f r_{22} & f t_2 \\ r_{31} & r_{32} & t_3 \end{pmatrix}, \qquad (2)$$
where $r_{ij}$ and $t_i$ are the components of the rotation matrix and the translation vector representing the projector pose relative to a coordinate frame aligned to the screen plane. When the projector's optical axis is perpendicular to the screen plane, $r_{31} = r_{32} = 0$, and the homography becomes
$$H_{sp} \propto \begin{pmatrix} (f/t_3) r_{11} & (f/t_3) r_{12} & (f/t_3) t_1 \\ (f/t_3) r_{21} & (f/t_3) r_{22} & (f/t_3) t_2 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (3)$$
Thus, f and t3 are coupled and cannot be decoupled from the homography. Suppose an algorithm in which f is first determined and then the other parameters are computed using the estimate of f. (This is the case with solutions based on Eqs.(1).) Although it might work in generic cases, such an algorithm will encounter numerical problems in this critical case, due to the degeneracy.

Note that, considering our original purpose, which is to obtain correctly rectified images, the above degeneracy is only formal. That is, when a projector's optical axis is perpendicular to the screen, as far as that projector is concerned, image rectification is not necessary. If our goal were to determine the physical parameters including the focal lengths and the projector positions, the configuration would need to be avoided. However, our goal is the rectification of the
overall image. As will be discussed later, by using a differ-
ent representation of constraints and employing appropriate
numerical algorithms, this critical configuration need not be
avoided.
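The coupling of f and t3 in Eq.(3) can be checked numerically. The sketch below is our own illustration (the translation values are hypothetical): it builds Hsp for a fronto-parallel pose (R = I, so r31 = r32 = 0) and shows that scaling f and t3 together leaves the homography unchanged.

```python
import numpy as np

def h_sp(f, t3, t1=0.2, t2=0.1):
    # Fronto-parallel pose: R = I, so r31 = r32 = 0 as in Eq.(3).
    K = np.diag([f, f, 1.0])
    P = np.array([[1.0, 0.0, t1],
                  [0.0, 1.0, t2],
                  [0.0, 0.0, t3]])
    H = K @ P
    return H / H[2, 2]   # homographies are defined only up to scale

# Doubling both f and t3 yields the same homography:
H1 = h_sp(f=2.0, t3=1.0)
H2 = h_sp(f=4.0, t3=2.0)
print(np.allclose(H1, H2))  # → True: the two parameter sets are indistinguishable
```

Any observer of the projected image therefore cannot tell the two parameter sets apart, which is exactly the degeneracy described above.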
2.3 A fundamental representation of the autocalibration constraints
In order to thoroughly investigate uniqueness of solutions
and critical configurations, we represent the autocalibra-
tion constraints in terms of raw, physical parameters. In
what follows, we assume the camera to be fully uncali-
brated and projectors to be partially calibrated; only their
focal lengths are unknown. These assumptions reflect the
requirement of the ad hoc construction of multi-projector
displays. We suppose that the projectors comprising the
system can be different products with different specifications. (Note that, in [6], the y component of the principal point is also estimated along with the focal lengths, assuming that the projectors are the same product and that their principal points are the same.) For each projector, parameters other than the focal length are expected to be constant across different zoom/focus settings, and thus we may use their specification values from a database of projector products. As for the camera, there is a much wider choice than for projectors, and thus we assume it to be uncalibrated. (In Eqs.(1), this is equivalent to ignoring the constraint (1a).)
Assuming a coordinate frame aligned to the screen plane
(such that its xy plane coincides with the screen plane), the
homography Hps between the projector and the screen can
be represented as:
$$H_{ps} \propto T_p R_p C_p, \qquad (4)$$

where

$$T_p = \begin{pmatrix} z_p & 0 & x_p \\ 0 & z_p & y_p \\ 0 & 0 & 1 \end{pmatrix}, \qquad (5)$$

where $[x_p, y_p, z_p]^\top$ and $R_p$ are the translational and rotational components of the projector pose, respectively, and $C_p \propto K_p^{-1}$. The only unknown intrinsic is the focal length, and thus we may write $C_p$ as a diagonal matrix by applying an appropriate coordinate normalization:

$$C_p = \begin{pmatrix} 1/f_p & 0 & 0 \\ 0 & 1/f_p & 0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (6)$$
Let $H_{sc}$ be the homography between the screen and the camera. Among $H_{sc}$, $H_{ps}$, and the homography $H_{pc}$ between a projector and the camera, there is the relation $H_{pc} \propto H_{sc} H_{ps}$. From this relation and Eq.(4), we have

$$H_{pc} \propto H_{sc} T_p R_p C_p. \qquad (7)$$

This equation gives a representation of the same autocalibration constraint different from Eqs.(1): the homography $H_{pc}$ should be factorizable as above, where $T_p$ and $C_p$ have the above forms and $R_p$ is a rotation matrix.
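Eq.(7) can be sketched numerically. All parameter values below are hypothetical; the point is that Hpc is composed from a pose and a focal length, and that once Hsc is known, Hps is recovered (up to scale) as Hsc^{-1} Hpc.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Hypothetical parameters for one projector p and the screen-to-camera map.
f_p = 2.0
C_p = np.diag([1 / f_p, 1 / f_p, 1.0])                     # Eq.(6)
x_p, y_p, z_p = 0.3, -0.2, 1.5
T_p = np.array([[z_p, 0, x_p],
                [0, z_p, y_p],
                [0,   0,   1]])                            # Eq.(5)
R_p = Rotation.from_rotvec([0.1, -0.2, 0.05]).as_matrix()  # projector rotation
H_sc = rng.standard_normal((3, 3))                         # screen-to-camera homography

H_ps = T_p @ R_p @ C_p   # Eq.(4): projector image -> screen plane
H_pc = H_sc @ H_ps       # Eq.(7): projector image -> camera image

# Once H_sc is determined, H_ps is recovered as H_sc^{-1} H_pc:
H_ps_rec = np.linalg.solve(H_sc, H_pc)
print(np.allclose(H_ps_rec, H_ps))  # → True
```

This recovery step is exactly what makes Hsc the central unknown in Section 3.1.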
3 Analysis of uniqueness and critical configurations
3.1 Problem formulation
Among the homographies, Hpc (p = 1, . . . , n) can be com-
puted from image correspondences between the projector
image and the camera image. Thus, from Hpc (p = 1, . . . , n),
we determine unknowns using the constraint that Hpc should
be factorized as above. Among the many unknowns, the most important one is the homography $H_{sc}$ between the screen and the camera: once it is determined, $H_{ps}$ can be recovered as $H_{ps} \propto H_{sc}^{-1} H_{pc}$, using which the projected images can be arbitrarily manipulated. Furthermore, even if the factorization of the remaining part $T_p R_p C_p$ is not unique, as in the case of the critical configuration described above, our goal of correcting the projected images is achieved. Thus, we need only consider uniqueness of $H_{sc}$, and the problem is stated as follows.

Problem 3.1. Under the above assumptions, given $H_{pc}$ ($p = 1, \ldots, n$), find $H_{sc}$ such that the factorization of Eq.(7) is possible for every p.
In what follows, we will show the following.

Proposition 3.1. Except for a few critical configurations, $H_{sc}$ can be uniquely determined up to the transformation $s(X; S_0) = X S_0^{-1}$ with any matrix $S_0$ given by

$$S_0 = \begin{pmatrix} \cos\theta & -\epsilon\sin\theta & s_{13} \\ \sin\theta & \epsilon\cos\theta & s_{23} \\ 0 & 0 & s_{33} \end{pmatrix}, \qquad (8)$$

where $\theta$, $s_{13}$, $s_{23}$, and $s_{33}$ are arbitrary numbers and $\epsilon$ is either $+1$ or $-1$.

The arbitrariness of $S_0$ corresponds to that of defining the coordinate frame on the screen plane. This means that after the alignment and the rectification of the projected images are performed, the overall image still has the freedom of scaling, two-dimensional translation, and rotation.
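The residual ambiguity S0 of Eq.(8) can be checked to act as a similarity of the screen plane, so a correctly rectified rectangle stays a rectangle. The parameter values below are arbitrary:

```python
import numpy as np

def apply_h(H, pts):
    # Apply a plane homography to inhomogeneous 2-D points.
    q = (H @ np.vstack([pts.T, np.ones(len(pts))])).T
    return q[:, :2] / q[:, 2:3]

theta, eps, s13, s23, s33 = 0.7, -1.0, 0.4, -0.1, 2.5   # arbitrary values
S0 = np.array([[np.cos(theta), -eps * np.sin(theta), s13],
               [np.sin(theta),  eps * np.cos(theta), s23],
               [0.0,            0.0,                 s33]])

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
out = apply_h(S0, square)
# A similarity scales all side lengths by the same factor, here 1/|s33|:
sides = np.linalg.norm(np.roll(out, -1, axis=0) - out, axis=1)
print(np.allclose(sides, 1 / abs(s33)))  # → True
```

The linear part of S0 is an orthogonal matrix scaled by 1/s33, which is why only scale, translation, and rotation (and possibly a reflection, via epsilon) remain undetermined.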
3.2 Uniqueness of solutions
Whether $H_{sc}$ can be uniquely determined or not depends on the existence of a $3 \times 3$ matrix $S$ that enables the following refactorization:

$$H T R C \propto (H S^{-1})(S T R C) \propto H' T' R' C'.$$

Thus, what needs to be checked is whether there exists $S$ such that, in an appropriate setting, $STRC$ is factorizable as

$$S T R C \propto T' R' C'. \qquad (9)$$

In what follows, we call a matrix $V$ TR-factorizable when $V$ can be factorized as $V \propto TR$, and TRC-factorizable when $V$ can be factorized as $V \propto TRC$, where $T$ is any matrix of the form of Eq.(5), $R$ is any orthogonal matrix, and $C$ is any matrix of the form of Eq.(6).
Lemma 3.2. Let $V$ be a real $3 \times 3$ matrix with row vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$. Define $\mathbf{w}_1 \equiv \mathbf{v}_2 \times \mathbf{v}_3$ and $\mathbf{w}_2 \equiv \mathbf{v}_1 \times \mathbf{v}_3$. Then, $V$ is TRC-factorizable if and only if

$$(\|\mathbf{w}_1\|^2 - \|\mathbf{w}_2\|^2)\, w_{13} w_{23} = (w_{13}^2 - w_{23}^2)\, \mathbf{w}_1 \cdot \mathbf{w}_2, \qquad (10)$$

where $w_{13}$ and $w_{23}$ are the third components of $\mathbf{w}_1 = [w_{11}, w_{12}, w_{13}]^\top$ and $\mathbf{w}_2 = [w_{21}, w_{22}, w_{23}]^\top$.
Proof. If $V$ is TRC-factorizable, there should exist a matrix $C$ of the form (6) such that $V' \equiv V C^{-1}$ is TR-factorizable, and vice versa. It can be assumed without loss of generality that

$$C^{-1} \propto C_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & d \end{pmatrix},$$

where $d$ is a real positive number. Thus, $V$ is TRC-factorizable if and only if there exists $d$ such that $V' \equiv V C_1$ is TR-factorizable. Let $\mathbf{v}'_1, \mathbf{v}'_2, \mathbf{v}'_3$ be the row vectors of $V'$. It can be shown [4] that $V'$ is TR-factorizable if and only if

$$(\mathbf{v}'_2 \times \mathbf{v}'_3) \cdot (\mathbf{v}'_1 \times \mathbf{v}'_3) = 0 \quad \text{and} \quad |\mathbf{v}'_2 \times \mathbf{v}'_3| = |\mathbf{v}'_1 \times \mathbf{v}'_3|.$$

Using $\mathbf{w}_1$ and $\mathbf{w}_2$ defined above, the vector products can be rewritten as $\mathbf{v}'_2 \times \mathbf{v}'_3 = [d w_{11}, d w_{12}, w_{13}]^\top$ and $\mathbf{v}'_1 \times \mathbf{v}'_3 = [d w_{21}, d w_{22}, w_{23}]^\top$. Then the above two equations become

$$d^2 (w_{11} w_{21} + w_{12} w_{22}) + w_{13} w_{23} = 0, \qquad (11a)$$
$$d^2 (w_{11}^2 + w_{12}^2) + w_{13}^2 = d^2 (w_{21}^2 + w_{22}^2) + w_{23}^2, \qquad (11b)$$

respectively. A necessary and sufficient condition that such a $d$ exists satisfying Eqs.(11) is

$$(w_{11}^2 + w_{12}^2 - w_{21}^2 - w_{22}^2)\, w_{13} w_{23} = (w_{11} w_{21} + w_{12} w_{22})(w_{13}^2 - w_{23}^2).$$

By adding $(w_{13}^2 - w_{23}^2) w_{13} w_{23}$ to both sides, Eq.(10) is derived.
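Lemma 3.2 can be sanity-checked numerically: a matrix actually built as TRC satisfies Eq.(10) to machine precision. The pose and focal-length values below are hypothetical, and the lemma's vectors are the rows of V:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def trc_residual(V):
    # Residual of Eq.(10); zero iff V is TRC-factorizable (Lemma 3.2).
    v1, v2, v3 = V                       # row vectors of V
    w1, w2 = np.cross(v2, v3), np.cross(v1, v3)
    return (w1 @ w1 - w2 @ w2) * w1[2] * w2[2] \
         - (w1[2] ** 2 - w2[2] ** 2) * (w1 @ w2)

# Build V = T R C from a hypothetical pose and focal length.
T = np.array([[1.4, 0, 0.3],
              [0, 1.4, -0.2],
              [0,   0,    1]])                       # form of Eq.(5)
R = Rotation.from_rotvec([0.3, -0.5, 0.2]).as_matrix()
C = np.diag([1 / 2.0, 1 / 2.0, 1.0])                 # form of Eq.(6), f = 2
V = T @ R @ C

print(abs(trc_residual(V)) < 1e-9)   # → True: Eq.(10) holds for a TRC product
```

In a numerical implementation this residual is also a convenient diagnostic: a candidate refactorization should drive it to (near) zero.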
Lemma 3.3. Let $S$ be a $3 \times 3$ matrix. For any matrix $T$ of the form (5), any orthogonal matrix $R$, and any diagonal matrix $C$ of the form (6), the product $STRC$ is TRC-factorizable if and only if $S$ is represented as $S_0$ of Eq.(8).

Proof. Since $STRC \propto T'R'C'$ is equivalent to $STR \propto T'R'C'C^{-1}$ and $C'C^{-1}$ has the form (6), $STRC$ is TRC-factorizable if and only if $STR$ is TRC-factorizable. Therefore, from Lemma 3.2, a necessary and sufficient condition for $STRC$ to be TRC-factorizable is that $V \equiv STR$ satisfies Eq.(10).
We want to rewrite Eq.(10) into an equation expressed in the components of $S$ by substituting $V = STR$ into Eq.(10). To do this, we write

$$S = \begin{pmatrix} s_{11} & s_{12} & s_{13} \\ s_{21} & s_{22} & s_{23} \\ s_{31} & s_{32} & s_{33} \end{pmatrix} \quad \text{and} \quad T = \begin{pmatrix} a & 0 & b \\ 0 & a & c \\ 0 & 0 & 1 \end{pmatrix}.$$

Because of the orthogonality of $R$, only the third row vector of $R$ appears in the resulting equation. We denote this vector by $\mathbf{r}_3 \equiv [r_1, r_2, r_3]^\top$. Then, Eq.(10) can be expressed in the form
$$g(a, b, c, \mathbf{r}_3, S) = 0. \qquad (12)$$
From the assumption of this lemma, this equation should hold for arbitrary $a$, $b$, $c$, and $\mathbf{r}_3$, which gives constraints on $S$. We first consider the identity relations obtained from the arbitrariness of $a$, $b$, and $c$, which are related to the assumption that the projector positions are generically different from each other.

The function $g$ is a polynomial in $a$, $b$, and $c$, and each polynomial term $a^i b^j c^k$ should have a zero coefficient. Among the many terms available, we choose the terms $a^3$, $b^3$, $c^3$, $bc^2$, and $b^2 c$.¹ At least their coefficients should always be 0:

$$6 r_3 (r_2 s_{31} - r_1 s_{32})\, g_1(S) = 0, \qquad (13a)$$
$$6 r_1 r_2 s_{31}\, g_1(S) = 0, \qquad (13b)$$
$$6 r_1 r_2 s_{32}\, g_1(S) = 0, \qquad (13c)$$
$$2 \{ r_1 r_2 s_{31} + (r_1^2 - r_2^2) s_{32} \}\, g_1(S) = 0, \qquad (13d)$$
$$2 \{ (r_2^2 - r_1^2) s_{31} + r_1 r_2 s_{32} \}\, g_1(S) = 0, \qquad (13e)$$
where

$$g_1(S) = \{ (\mathbf{s}_2 \times \mathbf{s}_1) \cdot \mathbf{s}_3 \} \{ (s_{12} s_{31} - s_{11} s_{32})^2 + (s_{22} s_{31} - s_{21} s_{32})^2 \},$$

where $\mathbf{s}_i$ ($i = 1, 2, 3$) is the $i$th row vector of $S$. From these, we derive explicit expressions for the constraints on $S$. We first examine the case of $g_1(S) = 0$. There are two possible cases, $(\mathbf{s}_2 \times \mathbf{s}_1) \cdot \mathbf{s}_3 = 0$ or $(s_{12} s_{31} - s_{11} s_{32})^2 + (s_{22} s_{31} - s_{21} s_{32})^2 = 0$. The first case is impossible, since it directly means $\det S = 0$. The second case is also impossible, since it means $\det S = 0$ unless $s_{31} = s_{32} = 0$. Next we examine the possibility that the other factors in the above coefficients are 0. It is obvious that there are two possible cases, $s_{31} = s_{32} = 0$ or $r_1 = r_2 = 0$.
¹We used Mathematica to calculate these.
Substituting $s_{31} = s_{32} = 0$ into $g = 0$, we have

$$s_{33}^4 (s_{12} s_{21} - s_{11} s_{22}) \{ r_1^2 (s_{11} s_{12} + s_{21} s_{22}) - r_2^2 (s_{11} s_{12} + s_{21} s_{22}) + r_1 r_2 (s_{12}^2 + s_{22}^2 - s_{11}^2 - s_{21}^2) \} = 0.$$

Neither $s_{33} = 0$ nor $s_{12} s_{21} - s_{11} s_{22} = 0$ is possible, since each results in $\det S = 0$. Thus, the following should hold:

$$(r_1^2 - r_2^2)(s_{11} s_{12} + s_{21} s_{22}) + r_1 r_2 (s_{12}^2 + s_{22}^2 - s_{11}^2 - s_{21}^2) = 0. \qquad (14)$$

Assuming that $r_1$ and $r_2$ differ for each $p$, the following equations can be derived:

$$s_{11} s_{12} + s_{21} s_{22} = s_{12}^2 + s_{22}^2 - s_{11}^2 - s_{21}^2 = 0.$$

These mean that $S$ is represented as $S_0$ of Eq.(8).
Lemma 3.3 is merely a different expression of Proposition 3.1, and thus we have shown Proposition 3.1.
3.3 Critical configurations
In the above proof, we first used the arbitrariness of the projector position $[b, c, a]^\top$, or $T$, and then the arbitrariness of the projector orientation $R$ (specifically, $r_1$ and $r_2$, the first two components of the third row vector $\mathbf{r}_3$ of $R$). Critical configurations are found in the cases where this arbitrariness is not available. As for the projector positions, there might be cases where the projectors (rigorously, their projection centers) lie exactly on some curve, for which some of the terms of $g(a, b, c, \mathbf{r}_3, S)$ are coupled, so that $H_{sc}$ is not unique. However, since $g$ contains many terms of several orders, it does not seem necessary to consider such cases seriously. It is more important to consider critical configurations due to degenerate orientations of the projectors; they are more likely to occur, considering possible constructions of the system. It can be seen from the above derivation that there are two cases:
1. The case where $r_1 = r_2 = 0$ for every projector (i.e., every projector has its optical axis perpendicular to the screen). Then $s_{31} = s_{32} = 0$ is the only available constraint on $S$. When this holds for only some of the projectors, the information available from those projectors is correspondingly limited.

2. The case where $r_1$ and $r_2$ are identical for all or some of the projectors. This means that those projectors share an identical optical axis direction. In this case, only the upper-left $2 \times 2$ submatrix of $S$ is constrained, through Eq.(14).
Case (1) is the configuration we discussed in Section 2.2. If one of the projectors is oriented perpendicularly to the screen, the full constraints on $H_{sc}$ are not available from that projector. When the number of remaining projectors is not sufficient, we might not be able to uniquely determine $H_{sc}$. However, in that case, we can use the projector causing the degeneracy to perform the rectification of the overall image: simply use its projected image as a key frame and manipulate the other projectors so that their projected images are aligned to it. As for case (2), there is no effective solution. Thus, when placing the projectors, we need to make sure that they have orientations different from each other. Considering real-world construction of the system, this requirement does not seem difficult to satisfy, since the projectors are unlikely to have an identical orientation by accident, except for the orientation perpendicular to the screen plane.
4 Algorithm
Although Proposition 3.1 guarantees that $H_{sc}$ can actually be determined from $H_{pc}$ ($p = 1, \ldots, n$), it does not show how to compute $H_{sc}$. Because of the nonlinear nature of the computation, we use direct minimization to find a solution, assuming that good estimates of the focal lengths of the projectors are somehow available. Temporarily assuming these estimates to be correct (which means the projectors are calibrated), we apply the closed-form algorithm of [5] designed for the case of calibrated projectors. To do this, we have the projectors project calibration patterns such as a checkerboard pattern and take an image (or images) of them. From this image, the $H_{pc}$'s are computed, and then the other parameters. Using these as initial values, we minimize the sum of the reprojection errors to compute the parameters. The overall algorithm is summarized in Fig.1, where $\hat{f}_p$ is the focal length estimate for the $p$-th projector.
A natural question is to what extent the focal length es-
timate fp needs to be accurate to make the minimization
converge to a global minimum. As will be shown in the
next section, they need not be so accurate. The range of the
initial values of the focal lengths that makes the algorithm
converge to the global minimum is indeed comparable to a
typical zooming range of data projectors.
The minimum number of projectors for performing the computation is four ($n \geq 4$). This number is derived from a counting argument as follows. In Eq.(7), one projector pose gives a constraint of one degree of freedom on $H_{sc}$: $H_{pc}$ has eight degrees of freedom, whereas the unknowns $T_p$, $R_p$, and $C_p$ have three, three, and one degrees of freedom, respectively, so the degrees of freedom available for determining $H_{sc}$ are $8 - 7 = 1$. $H_{sc}$ has eight degrees of freedom, of which we need to determine only four, owing to the four-degree-of-freedom ambiguity associated with $S_0$. It can be seen from this that the minimum number of projectors is four. This number four is
convenient in practice, since an ideal image with the same aspect ratio as a single projector image can be generated from exactly four projector images.

1. Set $C_p$ to

$$C^{(0)}_p = \begin{pmatrix} \hat{f}_p & s & u_p \\ 0 & \hat{f}_p & v_p \\ 0 & 0 & 1 \end{pmatrix}^{-1},$$

where $s$ and $(u_p, v_p)$ are the known skew and principal point.

2. Compute $H_{pc}$ from the correspondences between the camera image and the projector image.

3. Apply the closed-form algorithm of [5] and compute $H^{(0)}_{sc}$ enabling the following factorization:

$$H_{pc} C^{(0)\,-1}_p \propto H^{(0)}_{sc} T^{(0)}_p R^{(0)}_p.$$

4. Using the estimated $H^{(0)}_{sc}$, $T^{(0)}_p$, $R^{(0)}_p$, and also $C^{(0)}_p$ as initial values, minimize the sum of the reprojection errors with respect to $H_{sc}$, $T_p$, $R_p$, and $C_p$:

$$J = \sum_{p,i} |\hat{x}_{pi} - x_{pi}|^2 + |\hat{y}_{pi} - y_{pi}|^2,$$

where $x_{pi}$ and $y_{pi}$ are the measured coordinates, in the camera image, of the $i$th feature point of the $p$th projector's pattern, and $\hat{x}_{pi}$ and $\hat{y}_{pi}$ are their estimates given as

$$[\hat{x}_{pi}, \hat{y}_{pi}, 1]^\top \propto H_{sc} T_p R_p C_p \mathbf{m}_{pi},$$

where $\mathbf{m}_{pi}$ is the homogeneous coordinate of the feature point on the original projector image. The minimization is performed by a standard iterative algorithm such as the Levenberg-Marquardt method.

Figure 1: The proposed algorithm.
The freedom of $S_0$ of Eq.(8) that is still left undetermined represents the scaling, two-dimensional translation, and rotation of the overall image. Among these, the scaling and translation can be uniquely determined so that, for example, the final image is maximized within the possible display area, depending on the layout and configuration of the projectors. The rotation of the image is still left undetermined; it, too, could be resolved by introducing further assumptions (e.g. some information about the geometry among the camera and the projectors), but we prefer to leave it to the user's manual adjustment.
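Step 4 of Fig.1 can be sketched with SciPy's Levenberg-Marquardt solver on synthetic, noise-free data. The parameterization below (h33 of Hsc fixed to 1, rotation vectors for Rp) and all numeric values are our own illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
n_proj = 4

# Hypothetical ground truth: screen-to-camera homography and projector poses.
H_sc = np.array([[1.0, 0.1, 0.2], [-0.05, 1.1, 0.1], [0.02, 0.01, 1.0]])
poses = [(rng.uniform(-.3, .3), rng.uniform(-.3, .3), rng.uniform(1., 2.),
          rng.uniform(-.2, .2, 3), rng.uniform(1.8, 2.2)) for _ in range(n_proj)]
grid = np.array([[x, y, 1.0] for x in np.linspace(-.4, .4, 4)
                             for y in np.linspace(-.4, .4, 4)])

def project(params):
    # params: 8 for H_sc (h33 fixed to 1), then 7 per projector (x, y, z, rotvec, f).
    H = np.append(params[:8], 1.0).reshape(3, 3)
    out = []
    for p in range(n_proj):
        x, y, z, rx, ry, rz, f = params[8 + 7 * p : 15 + 7 * p]
        T = np.array([[z, 0, x], [0, z, y], [0, 0, 1]])        # Eq.(5)
        R = Rotation.from_rotvec([rx, ry, rz]).as_matrix()
        C = np.diag([1 / f, 1 / f, 1.0])                       # Eq.(6)
        q = (H @ T @ R @ C @ grid.T).T                         # Eq.(7) applied to points
        out.append(q[:, :2] / q[:, 2:3])
    return np.concatenate(out).ravel()

true = np.concatenate([H_sc.ravel()[:8]]
                      + [np.r_[x, y, z, r, f] for x, y, z, r, f in poses])
measured = project(true)

# Start from perturbed initial values and refine by Levenberg-Marquardt.
init = true + rng.normal(0, 0.02, true.shape)
res = least_squares(lambda p: project(p) - measured, init, method='lm')
print(res.cost < 1e-6)
```

Because of the S0 ambiguity of Eq.(8), the minimum is a manifold rather than an isolated point; the Levenberg-Marquardt damping copes with the resulting rank-deficient Jacobian.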
5 Experimental results
5.1 Synthetic data
We conducted experiments using synthetic data to examine
convergence performance of the algorithm and accuracy of
the estimation. The data are synthesized in the following
(Two histograms; x-axis: mean reprojection error (×0.0001), 4–20; y-axis: number of trials, 0–25.)

Figure 2: Histograms of minimization residues (mean of reprojection errors) over 100 trials in the case of n = 10 and σ = 0.001, for two ways of choosing the initial values of the focal lengths: randomly selected from [0.5 f0, f0/0.5] (left) and from [0.3 f0, f0/0.3] (right).
Table 1: The percentage of trials in which the algorithm converged to the global minimum.

(a) σ = 0.001
         α = 0.8   0.65   0.5   0.3
n = 4        100    100   100    93
n = 10       100    100   100    98
n = 20       100    100   100    99

(b) σ = 0.005
         α = 0.8   0.65   0.5   0.3
n = 4        100    100    94    91
n = 10       100    100   100    97
n = 20       100    100    99    97
way. Firstly, the poses of n projectors are generated randomly. To be specific, their positions are randomly chosen within a unit cube. Selecting a particular side of the cube as the screen, the orientations of the projectors are chosen so that they are oriented toward a random point on the screen. For every projector, the image size and the focal length are set to 1.0 and 2.0, respectively. Generating 10 regularly-spaced feature points in the image of each projector, the correspondences between these points and their projections on the camera image are used for computing the homography Hpc between the projector and the camera. When computing the projections of the points on the camera image, Gaussian noise with mean 0 and variance σ² is added to their x and y coordinates.
Convergence performance As described, the proposed algorithm requires initial values for the focal lengths of the projectors. In order to examine the dependency of convergence performance on these initial values, we ran the algorithm with random initial values. For this purpose, an interval [α f0, f0/α] is used, where f0 is the true focal length and α is a parameter controlling the width of the interval. A uniform random value is chosen from this interval and used as the initial value f̂p. We checked whether the algorithm converged to the correct solution for different values of α. Figure 2 shows examples for α = 0.5 and 0.3; the other parameters are set as n = 10 and σ = 0.001. It can be seen that, in the case of α = 0.3, there were a few trials in which the algorithm
(Two plots; x-axis: number of projectors, 4–20; y-axis: MSE of the orientation estimates.)

Figure 3: MSE (mean squared error) of the estimates of the projector orientation over 100 trials. Left: σ = 0.001 (1% of the image width), MSE ×1e-5. Right: σ = 0.005 (5%), MSE ×1e-3.
did not converge to the correct solution. More detailed results are shown in Table 1. In the experiment, the algorithm converged to the correct solution whenever α is not lower than 0.65. The true focal length f0 is set to 2.0, while the image size is 1.0, which is equivalent to a projector with a diagonal projecting angle of about 20 degrees. When converted into a diagonal projecting angle, the interval given by α = 0.65 corresponds to a range as wide as 26 to 57 degrees. This assures that the range of initial values for which the algorithm successfully converges to a global minimum is fairly wide, and therefore very rough estimates of the focal lengths can be used.
Performance w.r.t. the number of projectors As mentioned earlier, the calibration requires four or more projectors. It is easily anticipated that the number of projectors will affect the accuracy of the calibration. In order to examine this, we conducted experiments varying the number of projectors. The results are shown in Fig.3. Although Hsc is the most important among the parameters, it is not straightforward to measure its estimation accuracy. Therefore, the orientations of the projectors are recovered using the estimate of Hsc, and then their accuracy is measured. In the figure, the MSE over 100 trials with σ = 0.001 and 0.005 is shown. It can be seen from the results that the estimation is indeed possible from four projectors and that the accuracy of the estimates gradually increases with the number of projectors, as expected.
5.2 Real data
We also conducted experiments using real data to examine
the applicability of the proposed method to real systems.
Figure 4 shows the experimental setup. The system consists of four data projectors (three Epson and one NEC) with 1024 × 768 pixels, a digital camera (NIKON D1) with 2000 × 1312 pixels, and several notebook PCs supplying the projectors with images. A white wall of the room is used as the screen. The projectors are placed in a random manner, with the only requirement that their projection areas on the wall form a single closed area of approximately rectangular shape.
Figure 4: Two views of the experimental system integrating four projector images to generate a single seamless high-resolution image. The four projectors are placed in an arbitrary manner on a desk. A white wall of the room is used as the screen.
The proposed method requires the intrinsics other than the focal length of each projector to be known, and they were specified in the following way. Firstly, the skew and the aspect ratio are assumed to be 0 and 1, respectively, for every projector. Then the principal point of each projector is roughly estimated by visual inspection using its zoom function: varying the zoom while the projector projects a test pattern, the magnification center of the images is identified, which is expected to coincide with the principal point. Although the resulting estimates of the principal points can be somewhat inaccurate, we may assume that this will not cause fatal errors in the final estimation, as in the problem of structure from motion. (Errors in the principal point are expected to be absorbed in the estimation of the projector orientation, since the principal point and the orientation are highly correlated and difficult to separate.) Furthermore, these errors affect only the accuracy of the overall image rectification, which need not be as high as that of the image alignment.
The calibration is performed using checkerboard patterns
projected by the projectors. On the image taken by the cam-
era, each pattern belonging to each projector is identified
and then the corner points are detected with subpixel accu-
racy by calculating a crossing point of the two lines locally
fitted to the corners. As for the initial values of the focal
lengths of the projectors, a rough estimate, 2000[pixels], is
used for every projector.
The proposed algorithm was applied to these data and converged in about 30 iterations. The residual sum of the reprojection errors after convergence was about 0.6 pixels per point. Although this number is larger than expected from the accurate corner detection, it is probably due to the lens distortion of the projector lenses and the imperfect flatness of the wall used as the screen. Then, using the homography Hsc thus computed,
a single image is generated. Several results are shown in Fig. 5. It can be seen that the images are accurately stitched and that the synthesized image is of the correct rectangular shape.

Figure 5: A few results. Top-left: Overview of a stitched image. Top-right: Geometry of the images. Bottom-left and right: Stitched images taken from a direction perpendicular to the wall. They are of exact rectangular shape, showing that they are correctly rectified.
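Generating the single rectified image from Hsc amounts to an inverse warp of the camera image into screen coordinates. A minimal numpy sketch with nearest-neighbour sampling (our own illustration, for a grayscale image, assuming H_sc maps screen points to camera points; a real implementation would use bilinear interpolation):

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) points through a 3x3 homography, with the
    homogeneous divide."""
    pts = np.asarray(pts, float)
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def rectify(camera_img, H_sc, out_w, out_h):
    """Inverse-warp the camera image into screen coordinates.
    For each output (screen) pixel, look up the corresponding
    camera pixel through H_sc; pixels mapping outside stay black."""
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    screen_pts = np.column_stack([xs.ravel(), ys.ravel()])
    cam = np.rint(apply_homography(H_sc, screen_pts)).astype(int)
    h, w = camera_img.shape[:2]
    ok = ((0 <= cam[:, 0]) & (cam[:, 0] < w)
          & (0 <= cam[:, 1]) & (cam[:, 1] < h))
    out = np.zeros(out_h * out_w, camera_img.dtype)
    out[ok] = camera_img[cam[ok, 1], cam[ok, 0]]
    return out.reshape(out_h, out_w)
```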
6 Summary
We have shown a method for calibration of an ad hoc construction of multi-projector displays. Assuming that the camera is uncalibrated and the projectors are partially calibrated (only the focal lengths are unknown), we have examined the uniqueness of solutions and the critical configurations of the corresponding calibration problem. Four or more projectors are required for the calibration to be performed. The critical configurations shown depend on the orientations of the optical axes of the projectors. First, when there is a projector whose optical axis is perpendicular to the screen plane, the focal length and position of that projector cannot be uniquely determined. Second, when there is a pair of projectors whose optical axes coincide, no information can be derived from the pair. The first critical configuration need not be avoided, since it means that the image of that projector is already correctly rectified, and the overall image can therefore be rectified by aligning the images of the other projectors to it. The second critical configuration, however, has no solution, and we need to make sure that it does not occur. Since the minimum number of projectors is four, it is necessary that at least four of the projectors do not share an identical orientation.
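Both critical configurations can be screened for numerically before calibration. The sketch below is our own illustration, using axis parallelism as a proxy for coinciding axes: it flags projectors whose optical axis is parallel to the screen normal (first configuration) and pairs of projectors with parallel axes (second configuration).

```python
import numpy as np

def critical_configurations(axes, screen_normal, tol=1e-6):
    """Flag the two critical configurations.

    axes: list of optical-axis direction vectors, one per projector.
    Returns (perp, coincident): indices of projectors whose axis is
    perpendicular to the screen plane (parallel to its normal), and
    index pairs of projectors whose axes are parallel to each other."""
    axes = [np.asarray(a, float) / np.linalg.norm(a) for a in axes]
    n = np.asarray(screen_normal, float)
    n = n / np.linalg.norm(n)
    # A zero cross product means the two directions are parallel.
    perp = [i for i, a in enumerate(axes)
            if np.linalg.norm(np.cross(a, n)) < tol]
    coincident = [(i, j)
                  for i in range(len(axes))
                  for j in range(i + 1, len(axes))
                  if np.linalg.norm(np.cross(axes[i], axes[j])) < tol]
    return perp, coincident
```

With estimated (rather than exact) orientations, tol would be set to an angular tolerance matching the expected estimation error.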
We have also presented an algorithm based on nonlinear minimization, in which initial values of the focal lengths need to be specified. The experimental results using synthetic data show that the initial values need not be very accurate for the minimization to converge to a global minimum; they need only lie within the typical range of focal lengths corresponding to the zoom ranges of usual projectors. We have also confirmed, through experiments using real data, that the method works for a real system.
References

[1] M. Brown, A. Majumder, and R. Yang. Camera-based calibration techniques for seamless multiprojector displays. IEEE Transactions on Visualization and Computer Graphics, 11(2):193-206, 2005.

[2] Y. Chen, D. W. Clark, A. Finkelstein, T. Housel, and K. Li. Automatic alignment of high-resolution multi-projector displays using an un-calibrated camera. In Proceedings of IEEE Visualization, pages 125-130, 2000.

[3] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.

[4] A. Heyden and K. Astrom. Minimal conditions on intrinsic parameters for Euclidean reconstruction. In Proceedings of Asian Conference on Computer Vision, pages 169-176, 1998.

[5] T. Okatani and K. Deguchi. Autocalibration of a projector-camera system. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1845-1855, 2005.

[6] A. Raij and M. Pollefeys. Auto-calibration of multi-projector display walls. In Proceedings of International Conference on Pattern Recognition, pages 14-17, 2004.

[7] R. Raskar, J. van Baar, P. Beardsley, T. Willwacher, S. Rao, and C. Forlines. iLamps: Geometrically aware and self-configuring projectors. In Proceedings of ACM SIGGRAPH, pages 809-818, 2003.

[8] R. Raskar. Immersive planar display using roughly aligned projectors. In Proceedings of IEEE Virtual Reality, pages 109-116, 2000.

[9] R. Raskar and P. Beardsley. A self correcting projector. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 626-631, 2001.

[10] R. Raskar, M. S. Brown, R. Yang, W.-C. Chen, H. Towles, B. Seales, and H. Fuchs. Multi-projector displays using camera-based registration. In Proceedings of IEEE Visualization, pages 161-168, 1999.

[11] J. M. Rehg, M. Flagg, T.-J. Cham, R. Sukthankar, and G. Sukthankar. Projected light displays using visual feedback. In Proceedings of International Conference on Control, Automation, Robotics and Vision, 2002.

[12] R. M. Steele, S. Webb, and C. Jaynes. Monitoring and correction of geometric distortion in projected displays. Journal of the Winter School of Computer Graphics, 10(2):429-440, 2002.

[13] R. Sukthankar, R. G. Stockton, and M. D. Mullin. Smarter presentations: Exploiting homography in camera-projector systems. In Proceedings of IEEE International Conference on Computer Vision, pages 247-253, 2001.

[14] B. Triggs. Autocalibration from planar scenes. In Proceedings of European Conference on Computer Vision, pages 89-105, 1998.