Uncalibrated View Synthesis with Homography Interpolation
Pasqualina Fragneto(∗), STMicroelectronics, Via Olivetti 2, Agrate B.za, [email protected]
Andrea Fusiello, Universita degli Studi di Udine - DIEGM, Via delle Scienze, 208 - 33100 Udine, [email protected]
Beatrice Rossi(∗), Luca Magri(∗), Matteo Ruffini(∗)
Abstract

This paper presents a novel approach to uncalibrated view synthesis that overcomes the sensitivity to the epipole of existing methods. The approach follows an interpolate-then-derectify scheme, as opposed to the previous derectify-then-interpolate strategy. Both approaches generate a trajectory in an uncalibrated framework that is related to a specific Euclidean counterpart, but our method yields a warping map that is more resilient to errors in the estimate of the epipole, as confirmed by synthetic experiments.
1. Introduction

View synthesis (VS) is the rendering of virtual views of a
scene starting from a few reference pictures of the scene it-
self. Uncalibrated view synthesis (UVS) is a particular case
of view synthesis performed without a calibrated camera system. The key advantage of UVS is the possibility of performing VS without any knowledge of the imaging device and without any kind of user interaction, including manual camera calibration. In recent years the applications of VS techniques have grown considerably. In free-viewpoint TV, for instance, such techniques are used to create videos in which the user can observe the scene from a viewpoint inside the scene itself, moving freely within it. Since VS can infer scene geometry from nothing more than a few photographs, it is also used in many augmented reality systems, such as digital archives or augmented panoramas. Finally, VS is widely exploited in sports production, allowing, for example, virtual camera views on the pitch. All these application fields benefit greatly from fully automated procedures, and thus from UVS techniques.
Related work VS is based on geometric relationships
that are found between the positions of pixels representing
the same point in the scene observed from different view-
points [4]. For example, given the internal and external
parameters of the camera, and the depth of a scene point
(with respect to the camera), it is easy to obtain the position
of the point in any synthetic view [13]. When no knowledge of the imaging device can be assumed, uncalibrated
point transfer techniques utilize image-to-image constraints
such as the fundamental matrices [11], homographies [3],
trifocal tensors [2], plane+parallax [10], or relative affine
structure [15, 16] to re-project pixels from a small number
of reference images to a given view. All these techniques require some user interaction in order to specify a position for the virtual camera. In [5, 6] the authors focus on an automatic method for positioning virtual views based
on the relative affine structure as an implicit geometry de-
scriptor. Given two (or more) reference images, the tech-
nique yields a 1-parameter family of virtual camera poses
along the geodesic between reference views and allows interpolation as well as extrapolation. In [8], this procedure is extended from a 1-parameter family to additional virtual camera poses in 3-D space, along and orthogonal to the line of sight.
Contribution Starting from state-of-the-art UVS algorithms, we describe an alternative UVS procedure that generates trajectories corresponding to a well-defined rigid motion (similar to the one in [6]) and that is also naturally robust to the error introduced by epipolar geometry estimation. This last property holds because the epipole, usually computed from image correspondences and numerically unstable, is not involved in the generation of rectified virtual views. In our procedure epipolar geometry intervenes only implicitly, during the rectification step. On the whole this results in greater numerical stability with respect to the procedures in [6] and [8] in cases where the epipole computation is affected by errors arising from inaccurate image correspondences.
The outline of the paper is as follows. In Sec. 2 we review the baseline UVS algorithm in more detail. Sec. 3
2012 Second Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Processing, Visualization & Transmission
978-0-7695-4873-9/12 $26.00 © 2012 IEEE
DOI 10.1109/3DIMPVT.2012.39
describes our alternative solution to perform UVS. Experiments are presented in Sec. 4. Finally, some discussion and concluding remarks are reported in Sec. 5.
2. Baseline approach

The approach described in [6] can be taken as our baseline. In this section we briefly summarize it (see Fig. 1). Given two reference images I_1 and I_2, this pipeline consists
Given two reference images I1 and I2, this pipeline consists
of the following steps.
Keypoint matching. SIFT feature matches are computed on the two reference images as in [12].
Robust epipolar geometry. RANSAC estimation of the
fundamental matrix is performed in order to discard bad
matches (outliers). The surviving correspondences (inliers)
are used as input to the rectification and the epipole e2 of
the second image is computed as the left null vector of the
fundamental matrix.
Rectification. Since camera calibration parameters are
unknown, an uncalibrated rectification procedure based on
sparse correspondences is employed, see [7] for details.
This procedure provides an approximation of the rectifying
homographies H1 and H2 which rectify respectively I1 and
I2, and, as a by-product, the infinite plane homography
H_{\infty 12} = H_2^{-1} H_1   (1)
between the reference images.
Stereo matching. Once the images are rectified, dense
correspondences can be obtained using any stereo match-
ing algorithm, for example [18, 17].
De-rectification. Dense correspondences are transferred
back to the original reference images by applying the in-
verse of the rectifying homographies (de-rectification).
Relative affine structure computation. The relative affine structure is an implicit geometry descriptor of the scene and one of the key ingredients of this approach. Let (m^k_1, m^k_2) with k = 1, ..., m be the dense set of correspondences on the two reference images. They are related to each other by the following equation:

m^k_2 \simeq H_{\infty 12} m^k_1 + e_2 \gamma^k_1,   (2)

where \gamma^k_1 is the relative affine structure of m^k_1 and the sign \simeq means equality up to a scale factor. The relative affine structure \gamma^k_1 is obtained by solving for \gamma in Eq. 2:

\gamma^k_1 = \frac{(m^k_2 \times e_2)^T (H_{\infty 12} m^k_1 \times m^k_2)}{\| m^k_2 \times e_2 \|^2}.   (3)
It is possible to demonstrate that the relative affine struc-
ture depends only on the source image I1, i.e. does not
depend on the second reference image. More details on rel-
ative affine structure can be found in [16].
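As a sketch, Eq. (3) can be implemented directly with cross products. The homography, epipole and point below are illustrative assumptions, not values from the paper; the check only verifies that the formula recovers the \gamma used to build the correspondence via Eq. (2).

```python
import numpy as np

# Illustrative stand-ins (NOT from the paper): an infinite plane homography,
# an epipole and a point of I1 in homogeneous coordinates.
rng = np.random.default_rng(0)
H = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # stand-in for H_inf12
e2 = np.array([1.0, 0.2, 0.05])                     # stand-in epipole e_2
m1 = np.array([120.0, 80.0, 1.0])                   # point in I1
gamma_true = 0.003                                  # assumed relative affine structure

# Build the matching point via Eq. (2); the factor 2.5 simulates the
# arbitrary scale of homogeneous coordinates.
m2 = 2.5 * (H @ m1 + gamma_true * e2)

def relative_affine_structure(m1, m2, H, e2):
    """Solve Eq. (2) for gamma as in Eq. (3)."""
    c = np.cross(m2, e2)
    return c @ np.cross(H @ m1, m2) / (c @ c)

gamma = relative_affine_structure(m1, m2, H, e2)
```

Note that the scale factor cancels in the ratio, which is why Eq. (3) is insensitive to the arbitrary scaling of m^k_2.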
Trajectory generation. This phase provides a 1-
parameter family of uncalibrated rigid transformation
matrices each of which describes a points transfer map
from the source image to a virtual image.
According to Eq. 2, the points transfer map from the source image I_1 to the second one I_2 is:

m^k_2 \simeq [I \; 0] \, D_{12} \begin{bmatrix} m^k_1 \\ \gamma^k_1 \end{bmatrix},   (4)

where D_{12} is the so-called uncalibrated rigid transformation matrix between I_1 and I_2, which can be expressed as a combination of the infinite plane homography H_{\infty 12} and the epipole e_2 of the second image (please note the resemblance to a rigid transformation matrix):

D_{12} = \begin{bmatrix} H_{\infty 12} & e_2 \\ 0 & 1 \end{bmatrix}.   (5)
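A minimal sketch of Eqs. (4)-(5): build D_{12} from H_{\infty 12} and e_2, then transfer a point given its relative affine structure. The numerical values are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-ins (NOT from the paper).
H = np.diag([1.05, 0.97, 1.0])          # stand-in for H_inf12
e2 = np.array([1.0, 0.2, 0.05])         # stand-in epipole e_2

# Eq. (5): the uncalibrated rigid transformation matrix.
D12 = np.block([[H, e2[:, None]],
                [np.zeros((1, 3)), np.ones((1, 1))]])

m1 = np.array([120.0, 80.0, 1.0])       # point in I1 (homogeneous)
gamma1 = 0.003                          # its relative affine structure

# Eq. (4): m2 ~ [I 0] D12 [m1; gamma1], then normalize the homogeneous scale.
m2 = (D12 @ np.append(m1, gamma1))[:3]
m2 = m2 / m2[2]
```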
Since the relative affine structure does not depend on I_2, arbitrary new views I_t, for t \in [0, 1], can be synthesized by substituting D_{12} with the matrix D_{1t} that represents the transformation mapping the source image I_1 to the virtual one:

m^k_t \simeq [I \; 0] \, D_{1t} \begin{bmatrix} m^k_1 \\ \gamma^k_1 \end{bmatrix}.   (6)

This equation allows us to transfer points from the source image to the synthetic image I_t. We now see how to compute D_{1t}.
As explained in [5], the set of uncalibrated rigid transformation matrices is a group isomorphic to the group of Euclidean rigid transformations SE(3, R). Let

G_{12} := \begin{bmatrix} R_{12} & t_{12} \\ 0 & 1 \end{bmatrix} \in SE(3, R),   (7)

where R_{12} is a rotation matrix and t_{12} is a vector representing a translation. The isomorphism is given by the conjugacy:

D_{12} = \begin{bmatrix} K R_{12} K^{-1} & K t_{12} \\ 0 & 1 \end{bmatrix} = \bar{K} G_{12} \bar{K}^{-1}   (8)

with K being the (upper triangular) matrix of the internal parameters of the camera, and \bar{K} = \begin{bmatrix} K & 0 \\ 0 & 1 \end{bmatrix}.
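The conjugacy of Eq. (8) is easy to check numerically. K, R_{12} and t_{12} below are assumed values chosen only to exercise the block structure.

```python
import numpy as np

# Assumed internal parameters and Euclidean motion (illustrative only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.1])

# Eq. (7): the Euclidean rigid transformation G12.
G12 = np.block([[R, t[:, None]], [np.zeros((1, 3)), np.ones((1, 1))]])

# Kbar = [[K, 0], [0, 1]]; Eq. (8): D12 = Kbar G12 Kbar^-1.
Kbar = np.block([[K, np.zeros((3, 1))], [np.zeros((1, 3)), np.ones((1, 1))]])
D12 = Kbar @ G12 @ np.linalg.inv(Kbar)

# The blocks of D12 should be K R K^-1 (infinite plane homography)
# and K t (epipole e2), matching Eq. (8).
H_inf = D12[:3, :3]
e2 = D12[:3, 3]
```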
Figure 1: The baseline UVS approach.

Since SE(3, R) is a Lie group, it is possible to continuously parameterize virtual camera positions. In particular, Alexa in [1] defines some operators that allow one to interpolate, extrapolate and combine rigid transformations in SE(3, R), which, thanks to the isomorphism, can be mapped to the group of uncalibrated rigid transformations. These operators are used to compute the uncalibrated rigid transformation D_{1t}. In particular, given the uncalibrated transformation matrix D_{12} between two reference cameras, D_{1t} is defined as its scalar multiple:

D_{1t} := D_{12}^t := \exp(t \log(D_{12})), \quad t \in [0, 1].   (9)

Varying the value of t we obtain a 1-parameter family of virtual cameras moving along a geodesic path in SE(3, R) that interpolates the position and orientation of the two reference cameras.
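The scalar multiple of Eq. (9) can be sketched with SciPy's matrix exponential and logarithm. The K, R, t used to build D_{12} are assumed values; in the actual pipeline D_{12} comes from the rectification step.

```python
import numpy as np
from scipy.linalg import expm, logm

# Build an uncalibrated rigid transformation D12 from assumed K, R, t
# via the conjugacy of Eq. (8) (values are illustrative only).
K = np.diag([800.0, 800.0, 1.0])
theta = 0.2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.2])
Kbar = np.block([[K, np.zeros((3, 1))], [np.zeros((1, 3)), np.ones((1, 1))]])
G12 = np.block([[R, t[:, None]], [np.zeros((1, 3)), np.ones((1, 1))]])
D12 = Kbar @ G12 @ np.linalg.inv(Kbar)

def interpolate(D, t):
    """Eq. (9): D^t = exp(t log D); logm returns the principal logarithm."""
    return np.real(expm(t * logm(D)))

# Halfway virtual camera along the geodesic.
D_half = interpolate(D12, 0.5)
```

Since all the fractional powers share the same logarithm, they commute: composing the half-way transformation with itself reproduces D_{12}, which is a convenient sanity check on an implementation.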
Warping. Warping is the last phase of the pipeline and implements the pixel transfer from the source image to the virtual image I_t. Once D_{1t} has been computed, the points transfer map is given by Eq. 6. To preserve the coherence of surfaces and their visibility, pixel splatting with back-to-front rendering can be used as in [6]. Observe that, in order to better handle occlusions, it is possible to exploit information from both reference images, as explained in [8].
3. The interpolate-then-derectify approach
Whereas the baseline approach follows a derectify-then-
interpolate (DTI) scheme, we propose a novel interpolate-
then-derectify (ITD) approach to UVS. More specifically,
while the DTI pipeline interpolates the actual (uncalibrated) motion between reference images, in the ITD approach the interpolation is (trivially) performed between rectified images, and the complexity is moved onto the de-rectification step, which brings the interpolated rectified images back into the original (i.e. unrectified) domain using the inverse rectifying homography. The problem is that this homography is known only for the reference images, while for all the intermediate synthetic images it must be interpolated.
The advantage, however, is that the ITD approach turns
out to be intrinsically robust to the error committed in
epipolar geometry estimation, since the generation of rec-
tified virtual views involves only the theoretical value of the
epipole in the rectified configuration, i.e. the direction of
the x-axis.
In [14] a similar ITD approach was taken: the rectification and de-rectification were called pre-warp and post-warp respectively. However, the framework was calibrated, so the post-warping was easily defined in a Euclidean reference frame, or manually specified via control points. Here
we propose a more elegant solution which exploits the ex-
ponential map to directly interpolate the rectifying homo-
graphies relative to the reference images. We will see that
the trajectory generated in this way corresponds to a well
defined rigid motion although it is not a geodesic path in
SE(3, R) as in [6].
The ITD pipeline outline is shown in Fig. 2. Please ob-
serve that only the second part of the chain is modified. The
main difference with the baseline one is that UVS is now
performed by taking as source image the rectified reference
image, whereas the baseline approach (DTI) started from
the original reference image. This produces rectified vir-
tual images which can be de-rectified by using interpolated
de-rectifying homographies.
Let I^R_1 and I^R_2 be the two reference images after rectification; the steps that characterize the ITD pipeline are described in the following.
Rectified trajectory generation. This phase provides a 1-parameter family of uncalibrated rigid transformation matrices in the rectified configuration. Each matrix describes a points transfer map from the source image I^R_1 to a rectified virtual image I^R_t.
Similarly to Eq. (4), the points transfer map between I^R_1 and
Figure 2: The proposed ITD approach.
I^R_2 has the following form:

m^{kR}_2 \simeq [I \; 0] \, D^R_{12} \begin{bmatrix} m^{kR}_1 \\ d^k_1 \end{bmatrix},   (10)

where (m^{kR}_1, m^{kR}_2) with k = 1, ..., m is the dense set of correspondences on the two rectified reference images, d^k_1 is the disparity value of m^{kR}_1 and

D^R_{12} = \begin{bmatrix} H^R_{\infty 12} & e^R_2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (11)

is the uncalibrated transformation matrix between I^R_1 and I^R_2. Please observe how D^R_{12} has a very simple form. This holds since in the rectified configuration the infinite plane homography is trivial and the epipole has the direction of the x-axis, i.e. H^R_{\infty 12} = I and e^R_2 = [1 \; 0 \; 0]^T. Furthermore, observe that in this case the relative affine structure and the disparity computed by the stereo matching coincide (up to a scale factor).
The 1-parameter family of virtual rectified cameras moving along a geodesic path in SE(3, R) and passing through the position and orientation of I^R_1 and I^R_2 thus has the following very simple form:

D^R_{1t} := \exp(t \log(D^R_{12})) = \begin{bmatrix} 1 & 0 & 0 & t \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad t \in [0, 1].   (12)
Warping. In the ITD pipeline, the warping phase is performed in the rectified configuration. Once D^R_{1t} has been computed, the points transfer map takes the very simple form:

m^{kR}_t \simeq m^{kR}_1 + \begin{bmatrix} t d^k_1 \\ 0 \\ 0 \end{bmatrix}.   (13)
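Eq. (13) says that in the rectified configuration a point simply slides along the x-axis by t times its disparity. A minimal sketch, with illustrative coordinates and disparity:

```python
import numpy as np

# Illustrative values (NOT from the paper): a point of the rectified
# source image and its disparity from stereo matching.
m1R = np.array([120.0, 80.0, 1.0])   # homogeneous coordinates in I1^R
d1 = 12.0                            # disparity of the point
t = 0.5                              # interpolation parameter

# Eq. (13): shift along the x-axis by t * d1.
mtR = m1R + np.array([t * d1, 0.0, 0.0])
```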
Interpolated de-rectification. To obtain a virtual image
on a path interpolating the position and orientation of the
two original reference cameras, we must de-rectify the vir-
tual rectified image IRt . The de-rectifying homography,
though, is known only for the reference images, whilst for
intermediate virtual images it must be interpolated.
Let H_1 and H_2 be the homographies that rectify I_1 and I_2 respectively. As pointed out in App. A, the geodesic interpolation between the two is given by:

H_t := H_1 (H_1^{-1} H_2)^t = H_1 \exp(t \log(H_1^{-1} H_2))   (14)

with t \in [0, 1]. To de-rectify I^R_t we can simply use H_t^{-1}. Please note in Eq. (14) that H_1^{-1} H_2 = H_{\infty 12}^{-1}, and that H_{\infty 12} is conjugated (or similar) to a rotation matrix, hence its eigenvalues are \{1, e^{\pm i\theta}\}. This guarantees that H_{\infty 12} has a real logarithm, as discussed in App. A (with reference to that analysis, H_{\infty 12} falls in case b). Let us now consider the following 4 × 4 matrices:
\bar{H}_1 = \begin{bmatrix} H_1 & 0 \\ 0 & 1 \end{bmatrix},   (15)

\bar{H}_t = \begin{bmatrix} H_t & 0 \\ 0 & 1 \end{bmatrix}.   (16)
With reference to the following diagram:

\begin{array}{ccc} I_1 & \xrightarrow{\;D_{1t}\;} & I_t \\ \bar{H}_1 \big\downarrow & & \big\uparrow \bar{H}_t^{-1} \\ I^R_1 & \xrightarrow{\;D^R_{1t}\;} & I^R_t \end{array}   (17)

it is easy to see that:

D_{1t} = \bar{H}_t^{-1} D^R_{1t} \bar{H}_1
  = \begin{bmatrix} H_t^{-1} H_1 & H_t^{-1} [t \; 0 \; 0]^T \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} (H_2^{-1} H_1)^t & (H_2^{-1} H_1)^t H_1^{-1} [t \; 0 \; 0]^T \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} H_{\infty 12}^t & t H_{\infty 12}^t e_1 \\ 0 & 1 \end{bmatrix}.   (18)
So D_{1t} is conjugated (cf. Eq. 8) with the rigid motion:

\begin{bmatrix} R_{12}^t & -t R_{12}^t R_{12}^T t_{12} \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} R_{12}^t & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} I & -t R_{12}^T t_{12} \\ 0 & 1 \end{bmatrix}
  = \begin{bmatrix} R_{12} & 0 \\ 0 & 1 \end{bmatrix}^t \begin{bmatrix} I & -R_{12}^T t_{12} \\ 0 & 1 \end{bmatrix}^t   (19)

because e_1 = -K R_{12}^T t_{12} (whereas e_2 = K t_{12}).

The geodesic path in SE(3, R) would have been:

\begin{bmatrix} R_{12} & t_{12} \\ 0 & 1 \end{bmatrix}^t   (20)

but since (AB)^t \neq A^t B^t in general, Eq. 19 is not the geodesic. As a matter of fact, at time t our trajectory corresponds to the rigid motion obtained by composing a rotation R_{12}^t and a translation of t\tilde{C}, where \tilde{C} = -R_{12}^T t_{12} is the center of the second reference camera.
4. Results

In this section we show and discuss some results obtained with a MATLAB implementation of the two
pipelines. Experiments show that the ITD is more robust
than the DTI pipeline with respect to the error introduced
by automatic sparse correspondence extraction and epipo-
lar geometry estimation.
In the first experiment, we synthetically generated 1000 reference image pairs by projecting a cloud of 50 3D points with randomly chosen camera matrices lying on a sphere enclosing the points. We ran the two pipelines for increasing values of t and considered the final point positions as a reference ground truth (different for DTI and ITD). We
Figure 3: Average difference. On the x-axis the values for
the t parameter, on the y-axis the average absolute differ-
ence (in pixels) between the point positions obtained with
ITD/DTI and the reference position. Green dashed lines
correspond to DTI, whereas red solid lines correspond to
ITD. Different markers correspond to different values of the
variance.
Figure 4: Percentage of Corrupted Points. On the x-axis the values of the t parameter, on the y-axis the percentage of points that are mapped by ITD/DTI more than 5 pixels away from the reference position. Green dashed lines correspond to DTI, whereas red solid lines correspond to ITD. Different markers correspond to different values of the variance. The red lines corresponding to ITD are not visible as they are horizontal and set to 0%.
repeated the experiment by progressively adding Gaussian random noise to the reference images (zero mean, with variance respectively 0.1, 0.4 and 0.7) and evaluating, for each pipeline, the absolute value of the difference between the final point positions and the reference positions. This simulates the typical precision of keypoint localization in real cases.
Results are shown in Fig. 3 and 4. It can be seen that as the noise variance increases, ITD keeps outperforming the DTI pipeline, resulting in a more robust behavior. Remarkably, no point is mapped more than 5 pixels away from its reference position by ITD, whereas DTI maps about 15% of points outside the 5-pixel radius. This is due to the independence of ITD from the lack of precision in the epipolar geometry computation, despite the non-linear refinement with minimization of the Sampson error.

(a) Synthetic frames generated with DTI.
(b) Synthetic frames generated with ITD.

Figure 5: Synthetic frames generated with the two pipelines. This figure is best viewed in colour. In red the ground truth, in black the virtual views computed after perturbing the epipolar geometry. Clearly ITD (bottom) is closer to the ground truth than DTI (top).
In the second experiment, whose results are shown in Fig. 5, we generated two synthetic sequences using respectively the ITD and DTI pipelines, with point correspondences corrupted by Gaussian random noise with zero mean and variance \sigma^2 = 0.7. In this particular instance, the epipolar geometry computation is particularly unstable: an error of order 10^3 is committed on e_2. As the parameter t increases the DTI output (in black) departs visibly from the ground truth (in red), whereas the ITD output is indistinguishable from the noise-free one.
5. Conclusions
We described a modified view synthesis pipeline that renders a synthetic sequence of virtual images from uncalibrated reference images. We demonstrated that our method
generates trajectories which correspond to a well defined
rigid motion and which are also naturally robust to the error
introduced by epipolar geometry estimation. This alternative approach can thus be particularly useful when reference images present poor precision in keypoint correspondences, or when the epipolar geometry estimation is degenerate. In these situations the gain in view synthesis quality is demonstrated by our experiments. The visual quality of the output now depends almost exclusively on the stereo matching procedure. Future work will aim at improving this stage.
Acknowledgments
A.F. is grateful to Giandomenico Orlandi for hints and
discussions on homography interpolation.
A. On interpolating homographies

In this appendix we will set out and discuss the problem of interpolating homographies of the projective plane. This is similar to the problem of interpolating rigid motions that arises in the fields of robotics and computer graphics, but unfortunately, to the best of our knowledge, it does not have an equivalent closed form solution.
First, let us state some known results in linear group theory.
Definition A.1. The linear group GL(n, R) is the group of the invertible n × n real matrices:

GL(n, R) := \{ A \in M(n, R) : \det(A) \neq 0 \},

where M(n, R) denotes the space of all n × n real matrices.
The group of homographies (or projectivities) of P^2 is isomorphic to GL(3, R)/\sim, where \sim represents the equivalence relation "up to a scale factor".
Definition A.2. For each matrix A \in M(n, R) let us define:

\exp(A) := I + \sum_{k=1}^{\infty} \frac{A^k}{k!}
Proposition A.3. There exists an open neighborhood U \ni 0 in M(n, R) which is mapped homeomorphically by the map X \mapsto \exp(X) onto an open neighborhood V \ni I in GL(n, R).
The surjectivity of exp restricted to U implies the existence of the inverse map \log : V \to U such that \exp(\log(X)) = X. Indeed it turns out that the series

\log(A) := -\sum_{k=1}^{\infty} \frac{(I - A)^k}{k}

is convergent for all matrices in the ball \|A - I\|_F < 1, and \exp(\log(A)) = A.
The logarithm map projects a neighborhood V of I in
GL(n, R) into a neighborhood U of 0 in M(n, R), which
can be identified with the tangent space to GL(n, R) at I.
This picture is reminiscent of Riemann’s projection of the
sphere onto a plane. The exponential map projects this tan-
gent space back into GL(n, R). A straight path in the tan-
gent space emanating from 0 is mapped onto a geodesic in
GL(n, R) emanating from I. This is formalized by the fol-
lowing:
Proposition A.4. Given a matrix A \in GL(n, R), the matrix A^t \in GL(n, R),

A^t := \exp(t \log(A)),

where t \in [0, 1], describes a geodesic path in GL(n, R) from I to A.
Every matrix in GL(n, R) has a logarithm, albeit not
necessarily real. For obvious reasons, though, we are in-
terested in real matrices, hence the following result is particularly useful [9]:
Theorem A.5. Given A \in GL(n, R), a real logarithm exists iff each of its Jordan blocks associated to a negative eigenvalue occurs an even number of times.
Corollary A.6. A sufficient condition for A \in GL(n, R) to have a real logarithm is that A has no eigenvalues on the closed negative real axis.
When it exists, the real logarithm is not unique. However, among all real logarithms there is a unique one whose eigenvalues have imaginary part lying in ]-\pi, \pi[. This unique logarithm is called the principal logarithm of A. It will be denoted by \log A.
We can now state the problem as follows:
Problem A.7. Given two homographies H_1 and H_2 of P^2, find a "natural" interpolating homography H(t), where t \in [0, 1].
As homography matrices are defined up to a scale factor, we can assume det(H_1) > 0 and det(H_2) > 0 (as these matrices are 3 × 3, one can change the sign of the determinant by multiplying by -1). The problem can therefore be re-formulated, with no loss of generality, as follows:

Problem A.8. Given a homography of P^2 whose matrix H is real and with det(H) > 0, find an interpolating path H(t) \in GL(3, R) from I to H, where t \in [0, 1].
The original problem can be reduced to the latter by taking H = H_1^{-1} H_2. Given such an interpolating path H(t), the solution to the original problem is H_1 H(t).

Based on Proposition A.4, the natural solution is to take

H(t) := H^t = \exp(t \log H).   (21)

As a matter of fact, the formula above solves our problem as long as \log H is real, hence we shall study the spectral properties of H in order to apply Theorem A.5.
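Eq. (21) together with the spectral condition of Corollary A.6 can be sketched as follows. The guard on the eigenvalues is a sufficient check only, and the example homography is an assumed instance of the favorable case b) below (similar to a rotation, eigenvalues \{1, e^{\pm i\theta}\}).

```python
import numpy as np
from scipy.linalg import expm, logm

def interpolate_homography(H, t):
    """Eq. (21): H(t) = exp(t log H), guarded by Corollary A.6.

    Rejects matrices with an eigenvalue on the closed negative real
    axis, for which no real principal logarithm exists.
    """
    eig = np.linalg.eigvals(H)
    if np.any((np.abs(eig.imag) < 1e-12) & (eig.real <= 0)):
        raise ValueError("no real principal logarithm (case a2)")
    return np.real(expm(t * logm(H)))

# Assumed example of case b): a homography similar to a rotation,
# eigenvalues {1, e^{+-i theta}} (illustrative, not from the paper).
theta = 0.3
H = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

H_half = interpolate_homography(H, 0.5)
```

Composing the half-way homography with itself recovers H, which is the expected geodesic behavior of Proposition A.4.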
The eigenvalues of any matrix in GL(3, R) are either all real, or one real and two complex conjugate. In our case, as det(H) > 0, the eigenvalues of H are either a1) all real and positive, a2) two negative and one positive, or b) one real positive and two complex conjugate.
In cases a1) and b) the real logarithm of H exists thanks to Corollary A.6. Let us now concentrate on case a2). Let

H = V \begin{bmatrix} A & 0 \\ 0 & \lambda_3 \end{bmatrix} V^{-1}

be the eigendecomposition of H in this case. It turns out that either a2.1) A is diagonal, A = \begin{bmatrix} -\lambda_1 & 0 \\ 0 & -\lambda_2 \end{bmatrix}, if H can be diagonalized, or a2.2) A is a 2 × 2 Jordan block, A = \begin{bmatrix} -\lambda & 1 \\ 0 & -\lambda \end{bmatrix}, if H cannot be diagonalized. Unfortunately, according to Theorem A.5, A does not have a real logarithm in these two cases, unless \lambda_1 = \lambda_2 in case a2.1.
Hence, a solution to the problem of interpolating general homographies is still missing. However, as far as infinite plane homographies are concerned, the favorable case b) applies.
References

[1] M. Alexa. Linear combination of transformations. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, pages 380-387. ACM Press, 2002.
[2] S. Avidan and A. Shashua. Novel view synthesis by cascading trilinear tensors. IEEE Transactions on Visualization and Computer Graphics, 4(4):293-306, Oct-Dec 1998.
[3] B. S. Boufama. The use of homographies for view synthesis. In Proceedings of the International Conference on Pattern Recognition, pages 563-566, 2000.
[4] O. D. Faugeras and L. Robert. What can two images tell us about a third one? In Proceedings of the European Conference on Computer Vision, pages 485-492, Stockholm, 1994.
[5] A. Fusiello. Specifying virtual cameras in uncalibrated view synthesis. IEEE Transactions on Circuits and Systems for Video Technology, 17(5):604-611, May 2007.
[6] A. Fusiello and L. Irsara. An uncalibrated view-synthesis pipeline. In Proceedings of the International Conference on Image Analysis and Processing, pages 609-614, 10-14 September 2007.
[7] A. Fusiello and L. Irsara. Quasi-Euclidean epipolar rectification of uncalibrated images. Machine Vision and Applications, 22(4):663-670, 2011.
[8] F. Gigengack and X. Jiang. Improved uncalibrated view synthesis by extended positioning of virtual cameras and image quality optimization. In Asian Conference on Computer Vision, pages 438-447, 2009.
[9] R. Horn and C. Johnson. Topics in Matrix Analysis. Cambridge University Press, London, 1991.
[10] M. Irani and P. Anandan. Parallax geometry of pairs of points for 3D scene analysis. In Proceedings of the European Conference on Computer Vision, pages 17-30, 1996.
[11] S. Laveau and O. Faugeras. 3-D scene representation as a collection of images and fundamental matrices. Technical Report 2205, INRIA, Institut National de Recherche en Informatique et en Automatique, February 1994.
[12] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[13] L. McMillan and G. Bishop. Head-tracked stereo display using image warping. In Stereoscopic Displays and Virtual Reality Systems II, number 2409 in SPIE Proceedings, pages 21-30, San Jose, CA, 1995.
[14] S. Seitz and C. Dyer. View morphing. In SIGGRAPH 96, pages 21-30, 1996.
[15] A. Shashua and N. Navab. Relative affine structure: Theory and application to 3-D reconstruction from perspective views. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 483-489, 1994.
[16] A. Shashua and N. Navab. Relative affine structure: Canonical model for 3D from 2D geometry and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(9):873-883, September 1996.
[17] B. M. Smith, L. Zhang, and H. Jin. Stereo matching with nonparametric smoothness priors in feature space. In IEEE Conference on Computer Vision and Pattern Recognition, pages 485-492, 2009.
[18] Z.-F. Wang and Z.-G. Zheng. A region based stereo matching algorithm using cooperative optimization. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.