
A panoramic 3D reconstruction system based on the projection of patterns


Abstract This work presents the implementation of a 3D reconstruction system capable of reconstructing a 360-degree scene with a single acquisition using a projection of patterns. The system is formed by two modules: the first module is a CCD camera with a parabolic mirror that allows the acquisition of catadioptric images. The second module consists of a light projector and a parabolic mirror that is used to generate the pattern projections over the object that will be reconstructed. The projection system has a 360-degree field of view, and both modules were calibrated to obtain the extrinsic parameters. To validate the functionality of the system, we performed 3D reconstructions of three objects, and show the reconstruction error analysis.

Keywords Panoramic 3D Reconstruction, Calibration, Catadioptric Camera, Catadioptric Projector

1. Introduction

In recent times, 3D reconstruction systems have become fundamental for object modelling applications such as virtual environment development for archaeology [1, 2], video surveillance [3, 4], quality control processes [5, 6], disease diagnosis [7, 8], and measurement [9, 10].

The 3D reconstruction systems that incorporate lenses and mirrors into acquired images with an omnidirectional field of view are called catadioptric systems. A common approach for 3D reconstruction is the use of structured light projection with a 360-degree laser line combined with omnidirectional stereo cameras that take images with multiple views [11, 12]. The disadvantage of these approaches is that we can only recover 3D information in the area of the laser incidence, limiting the reconstruction area. Another approach is the use of an omnidirectional camera constructed by a traditional perspective camera and a pyramid reflector [13].

In [14], the authors propose a panoramic sensor using a convex mirror capable of producing panoramic depth maps by feature matching from two panoramic images. However, this technique fails with textureless surfaces, being unable to match the texture features. A stereo system with a texture projector can eliminate or reduce matching errors. For this reason we propose a new 3D reconstruction system based on a projection system and a single catadioptric camera.

In order to obtain a good reconstruction, it is necessary to calibrate the catadioptric cameras, estimating the geometric model of the camera. In recent decades, several algorithms have been proposed for the calibration of omnidirectional cameras [14–18].

The 3D reconstruction systems based on conventional cameras have been well-studied [19–22]. However, for applications where we need to process a 360-degree field of view (for example, inside hollow objects), conventional systems implement a piecemeal reconstruction. In this paper, we describe the 360-degree 3D reconstruction of a scene from a single acquisition using a projection of patterns.

A Panoramic 3D Reconstruction System Based on the Projection of Patterns (Regular Paper)

Diana-Margarita Córdova-Esparza 1,*, José-Joel Gonzalez-Barbosa 1, Juan-Bautista Hurtado-Ramos 1 and Francisco-Javier Ornelas-Rodriguez 1

1 Instituto Politécnico Nacional, CICATA, Querétaro, Mexico
* Corresponding author. E-mail: [email protected]

International Journal of Advanced Robotic Systems
Int J Adv Robot Syst, 2014, 11:55 | DOI: 10.5772/58227
Received 30 Aug 2013; Accepted 13 Jan 2014

© 2014 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

2. Geometrical model of the catadioptric vision system

2.1 Experimental setup

Figure 1 shows a schematic overview of the experimental setup for the proposed panoramic 3D reconstruction system, which consists of two parabolic mirrors, a Marlin F-080 CCD camera, a composite lens (L), and a Sony VPL-ES5 3LCD light projector.

Figure 1. Schematic drawing of the experimental setup for the proposed panoramic 3D reconstruction system.

First, a catadioptric camera was assembled, aligning the mirror (PM1) with the CCD camera in order to capture the full environment that is projected onto the mirror. Using a chessboard calibration pattern, we calibrated the catadioptric camera, obtaining the intrinsic and extrinsic parameters. Once the camera is calibrated, we design the optical setup of the pattern projection system, placing the mirrors (PM1, PM2) back-to-back. In front of the projector, a composite lens (L) was inserted in order to reduce the projected image towards the mirror (PM2), as seen in Figure 2.

Figure 2. Experimental setup for the panoramic 3D reconstruction system. The system consists of a Marlin F-080 CCD camera, two parabolic mirrors (PM1 and PM2) in a back-to-back configuration, and a Sony VPL-ES5 3LCD light projector with a composite lens (L).

We focused our work on two areas: calibration and 3D reconstruction. For calibration, we used a plane with a printed calibration pattern and a second calibration pattern projected by the PM2-projector system. The printed pattern was used to calibrate the camera-PM1 system, while the projected pattern was used to calibrate the PM2-projector and to calculate the rotation and translation matrix between the PM1 and PM2 mirrors. For reconstruction, we performed matching between the projected image and the acquired image, using the epipolar curve constraints. Finally, we used three objects to evaluate the reconstruction system: two perpendicular planes, a cylinder, and a semi-sphere.

2.2 Geometrical model description for the catadioptric camera calibration

In order to calibrate the catadioptric camera, we use the geometric model proposed in [18]. To solve this model, we use a chessboard pattern to obtain the intrinsic parameters, mirror curvature and image centre, as well as the extrinsic parameters, rotation and translation between the camera reference system and the pattern.

The general model for the catadioptric camera has two reference systems: the camera image plane, represented by (u′, v′), and the sensor plane, represented by (u″, v″). Let X be a point in the scene; then we assume that u″ = [u″, v″]ᵀ is the projection of X onto the sensor plane and u′ = [u′, v′]ᵀ is the image in the camera plane (see Figure 3). As described in [23], both systems are related by an affine transformation u″ = Au′ + t, where A ∈ ℝ^(2×2) and t ∈ ℝ^(2×1).

An imaging function g relates a point u″ in the sensor plane with the vector p = [u″, v″, w″]ᵀ coming from the viewpoint O toward a scene point X (see Figure 3). The relation between a point in pixels u′ and a point in the scene X is given by eq. (1).

λ · p = λ · g(u′′) = λ · g(Au′ + t) = PX (1)

Figure 3. General model for the catadioptric camera. A point X in the scene is projected in the sensor plane (u″), in metric coordinates, and in the image plane (u′), in pixel coordinates. p is the point X seen from the viewpoint O. R and T are the extrinsic parameters (rotation matrix and translation vector) that relate the camera's coordinate system with the world coordinate system.


where X ∈ ℝ⁴ is represented in homogeneous coordinates, P ∈ ℝ^(3×4) is the perspective projection matrix, and λ is a scaling factor. Through camera calibration, we estimate the matrix A, the vector t, and the non-linear function g, such that all the vectors g(Au′ + t) satisfy eq. (1). Eq. (2) is proposed for g.

g(u′′, v′′) = (u′′, v′′, f (u′′, v′′))T (2)

The function f depends on u″ and v″ only and can be described in polynomial form, as shown in eq. (3).

f(u″, v″) = a₀ + a₁ρ″ + a₂ρ″² + … + a_N ρ″^N   (3)

where ρ″ = √(u″² + v″²). The coefficients a_i, i = 0, 1, 2, ..., N and the polynomial degree N are determined by the calibration process.

Thus, eq. (1) can be rewritten as eq. (4).

λ · [u″, v″, w″]ᵀ = λ · g(Au′ + t) = λ · [(Au′ + t) ; f(u″, v″)] = P · X,   λ > 0   (4)

For cameras with parabolic mirrors, we use eq. (5), as proposed in [25]. This simplification allows us to assume a₁ = 0, and rewriting eq. (3) we obtain eq. (6).

df/dρ |_(ρ=0) = 0   (5)

f(u″, v″) = a₀ + a₂ρ″² + … + a_N ρ″^N   (6)

From the camera's calibration process, the parameters [A, t, a₀, a₂, ..., a_N] are estimated with an iterative procedure. Parameters A and t describe an affine transformation that relates the sensor plane with the image plane of the camera. A is the rotation matrix and t is a translation vector between I_c and O_c in the image plane of the camera (see Figure 3).

According to this, we assume that A = I and t = 0; therefore u″ = u′. Replacing this relation in eq. (4) and using eq. (6), we get the projection shown in eq. (7).

λ · [u″, v″, w″]ᵀ = λ · g(u′) = λ · [u′, v′, f(ρ′)]ᵀ = λ · [u′, v′, a₀ + a₂ρ′² + ... + a_N ρ′^N]ᵀ = P · X,   λ > 0   (7)

Figure 4. Calibration pattern flatness (in mm, ranging from -0.1924 to 0.1924) obtained using the Mitutoyo BJ1015 coordinate measuring machine. This calibration pattern was used to calibrate the catadioptric camera (CCD-PM1) and the projection system (PM2).

where u′ and v′ are the pixel coordinates of an image, λ is the scale factor, and ρ′ is the Euclidean distance from the pixel (u′, v′) to the image centre. Accordingly, we only need to estimate the parameters a₀, a₂, ..., a_N.
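As a concrete illustration of eq. (7), the sketch below lifts a pixel to its viewing direction through the polynomial model. The function name `lift_to_ray` is a hypothetical helper, and the example coefficients are the camera-PM1 values reported later in Table 1.

```python
import numpy as np

def lift_to_ray(u, v, coeffs, center):
    """Map a pixel (u, v) to its viewing direction g(u') = [u', v', f(rho')] (eq. 7).

    coeffs: polynomial coefficients [a0, a1, ..., aN] of f
    center: image centre (xc, yc), so that u' = u - xc, v' = v - yc
    """
    up, vp = u - center[0], v - center[1]
    rho = np.hypot(up, vp)                    # distance of the pixel to the centre
    f = sum(a * rho**i for i, a in enumerate(coeffs))
    p = np.array([up, vp, f])
    return p / np.linalg.norm(p)              # unit ray through the viewpoint O

# Example with the camera-PM1 parameters reported in Table 1
coeffs = [-147.8362, 0.0, 0.0033, 0.0, 0.0]
center = (542.3271, 702.6857)
ray = lift_to_ray(600.0, 750.0, coeffs, center)
```

At the image centre, ρ′ = 0 and the ray reduces to the mirror axis direction, which is a quick sanity check on the coefficients.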

To perform the calibration, we used a printed chessboard pattern in different positions, which relates the camera's coordinate system with the world W through a rotation matrix R_W→PM1 and a translation vector T_W→PM1, called extrinsic parameters. Let I_i be the calibration pattern image, X_W = [x, y, z] the 3D coordinates of a point in the calibration plane reference system, and m_ij = [u_ij, v_ij] the pixel coordinates in the image plane. Considering a calibration pattern with the flatness shown in Figure 4, we assume that z_ij = 0. Consequently, we obtain eq. (8).

λ_ij · p_ij = λ_ij · [u_ij, v_ij, a₀ + ... + a_N ρ_ij^N]ᵀ = P^i · X_W
= [r^i₁  r^i₂  r^i₃  T^i_W→PM1] · [x_ij, y_ij, 0, 1]ᵀ
= [r^i₁  r^i₂  T^i_W→PM1] · [x_ij, y_ij, 1]ᵀ   (8)

For the calibration process, we obtain the extrinsic parameters for each image, which indicate the position of the calibration pattern with respect to the camera reference system.

To eliminate the depth scale dependency λ_ij, both sides of eq. (8) are multiplied vectorially by p_ij, resulting in eq. (9).

λ_ij · p_ij × p_ij = p_ij × [r^i₁  r^i₂  T^i_W→PM1] · [x_ij, y_ij, 1]ᵀ = 0 ⇒

[u_ij, v_ij, a₀ + ... + a_N ρ_ij^N]ᵀ × [r^i₁  r^i₂  T^i_W→PM1] · [x_ij, y_ij, 1]ᵀ = 0   (9)

From eq. (9), for each point p_ij of the calibration pattern, we obtain the three homogeneous equations:

v_j · q₃ − f(ρ_j) · q₂ = 0   (10.1)

f(ρ_j) · q₁ − u_j · q₃ = 0   (10.2)

u_j · q₂ − v_j · q₁ = 0   (10.3)

where x_j, y_j, z_j and u_j, v_j are known, and correspond to the mirror points and image points, respectively. q₁, q₂ and q₃ are given by:

q1 = (r11xj + r12yj + t1)

q2 = (r21xj + r22yj + t2)

q3 = (r31xj + r32yj + t3)

Eq. (10.3) is linear and has the unknown parameters r11, r12, r21, r22, t1, t2. Rewriting eq. (10.3) for L points of the calibration pattern as a system of linear equations, we get eq. (11).

M · H = 0 (11)

where:

H = [r11, r12, r21, r22, t1, t2]ᵀ

M = [ −v₁x₁    −v₁y₁    u₁x₁    u₁y₁    −v₁   u₁
        ⋮        ⋮       ⋮       ⋮       ⋮     ⋮
      −v_L x_L  −v_L y_L  u_L x_L  u_L y_L  −v_L  u_L ]   (12)

Using singular value decomposition (SVD), we can factor M = UΣVᵀ; the solution to the system is the right singular vector v₆ associated with the smallest singular value σ₆.
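A minimal sketch of this SVD step on synthetic data (all names and the ground-truth extrinsics are invented for the illustration; the image coordinates are constructed so that eq. (10.3) holds exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth extrinsics H = (r11, r12, r21, r22, t1, t2), invented for this demo
H_true = np.array([0.8, -0.6, 0.6, 0.8, 5.0, -3.0])

# Synthetic calibration points whose image coordinates satisfy eq. (10.3):
# taking u proportional to q1 and v proportional to q2 makes u*q2 - v*q1 = 0.
x, y = rng.uniform(-1.0, 1.0, 20), rng.uniform(-1.0, 1.0, 20)
u = H_true[0] * x + H_true[1] * y + H_true[4]   # q1
v = H_true[2] * x + H_true[3] * y + H_true[5]   # q2

# Each row of M (eq. 12) is [-v*x, -v*y, u*x, u*y, -v, u]
M = np.column_stack([-v * x, -v * y, u * x, u * y, -v, u])

# The solution is the right singular vector of the smallest singular value
_, _, Vt = np.linalg.svd(M)
H_est = Vt[-1]
# The SVD solution is defined only up to scale and sign; here we fix both
# against the known ground truth (possible only in a simulation)
H_est *= np.sign(H_est[0] * H_true[0]) * np.linalg.norm(H_true)
```

The scale ambiguity is inherent to the homogeneous system M·H = 0; in the real calibration it is resolved by the remaining equations (10.1) and (10.2).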

Once eq. (10.3) is solved to find the camera extrinsic parameters, we substitute the estimated values into equations (10.1) and (10.2), and solve for the camera intrinsic parameters a₀, a₂, ..., a_N that describe the shape of the imaging function g.

Equations (10.1) and (10.2) are rewritten again as a linear system of equations. But now, we incorporate all the k calibration pattern observations to obtain eq. (13).

[ A₁   A₁ρ₁²    ⋯  A₁ρ₁^N    −v₁   0   ⋯   0
  C₁   C₁ρ₁²    ⋯  C₁ρ₁^N    −u₁   0   ⋯   0
  ⋮      ⋮      ⋯     ⋮        ⋮    ⋮   ⋱    ⋮
  A_k  A_kρ_k²  ⋯  A_kρ_k^N    0    0   ⋯  −v_k
  C_k  C_kρ_k²  ⋯  C_kρ_k^N    0    0   ⋯  −u_k ] · r = B   (13)

Figure 5. a) Projected calibration pattern. b) Image captured by the catadioptric camera showing two calibration patterns on a white plane: the printed pattern is used to calibrate the catadioptric camera (CCD camera-PM1 mirror), and the projected pattern is used to calibrate the projection system (PM2 mirror-projector).

where: A_i = r^i₂₁x_i + r^i₂₂y_i + t^i₂,  B_i = v_i · (r^i₃₁x_i + r^i₃₂y_i),  C_i = r^i₁₁x_i + r^i₁₂y_i + t^i₁,  D_i = u_i · (r^i₃₁x_i + r^i₃₂y_i).

r = [a₀, a₂, ⋯, a_N, t₃¹, t₃², ⋯, t₃ᵏ]ᵀ

B = [B₁, D₁, ⋯, B_k, D_k]ᵀ

The solution of the system is obtained using the pseudoinverse matrix. Thus, the intrinsic parameters a₀, a₂, ..., a_N which describe the model are now known. To compute the polynomial degree N, we start from N = 2, then increase N in unit steps and compute the average reprojection error over all calibration points. The procedure stops when the minimum error is found.
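The degree-selection loop can be sketched as follows. This is a simplified stand-in: it fits a mirror-like curve directly and tracks the mean residual, instead of running the full calibration and reprojection pipeline, and the data are synthetic (the curve coefficients merely echo Table 1).

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a mirror-like curve (coefficients echo Table 1)
rho = np.linspace(0.0, 200.0, 100)
z_obs = -147.8 + 0.0033 * rho**2 + rng.normal(0.0, 0.05, rho.size)

def mean_error(N):
    """Mean absolute residual of a degree-N fit with a1 forced to zero (eq. 6)."""
    powers = [0] + list(range(2, N + 1))
    # Radius normalized to [0, 1] to keep the Vandermonde matrix well conditioned
    A = np.column_stack([(rho / rho.max())**p for p in powers])
    coef, *_ = np.linalg.lstsq(A, z_obs, rcond=None)
    return np.mean(np.abs(A @ coef - z_obs))

# Start at N = 2 and grow until the error stops improving (a small tolerance
# and a cap keep the loop finite on noisy data)
N, err = 2, mean_error(2)
while N < 8:
    nxt = mean_error(N + 1)
    if nxt >= err - 1e-4:
        break
    N, err = N + 1, nxt
```

In the real procedure the quantity driving the loop is the average reprojection error of the calibration points, not a direct curve-fit residual.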

2.3 Calibration of the catadioptric camera with the light projection system

In order to calibrate the catadioptric camera with the projection system (see Figure 6), we develop the following procedure:

A calibration pattern image was projected (see Figure 5a) in different positions over the mirror PM2. The mirror PM2 reflects the image until the calibration plane W is found. The calibration plane position is known with respect to PM1 (as seen in Figure 5b), because the calibration plane contains the printed pattern used to calibrate the camera-PM1 system. Once a catadioptric image is acquired by the camera-PM1, a subpixel algorithm provides the points of the projected calibration pattern. These points are represented as u^i_PM1.

The plane where the projection pattern is located is defined by Π^i_w and is calculated with eq. (14) [24].

Π^i_w = [n^i_e ; D^i_e] = [ R⁻¹_W→PM1, 0 ; −(R⁻¹_W→PM1 · T_W→PM1)ᵀ, 1 ] · [n^i ; D^i]   (14)

where R_W→PM1 and T_W→PM1 represent the rotation matrix and translation vector, respectively, between the world and the mirror PM1, 0 denotes the null vector, n^i and n^i_e are unit normal vectors to the plane Π^i_w, and finally D^i and D^i_e denote the distance from the mirror PM1 points to the plane Π^i_w.

The points u^i_PM1 are mapped to the plane Π^i_w, and are denoted as X^i_PM1. These points are defined on the mirror PM1. A rigid transformation is required to represent the points X^i_PM1 in the mirror PM2 coordinate system, as described in eq. (15).

X^i_PM2 = R_PM1→PM2 X^i_PM1 + T_PM1→PM2   (15)

When the points X^i_PM2 are projected onto the mirror PM2, they are represented by x^i_PM2. Eq. (7) is used to define the points x^i_PM2 in the projector image as v^i_PM2. However, as the rotation matrix R_PM1→PM2 and the translation vector T_PM1→PM2 are unknown parameters, a function that optimizes the value of these parameters was developed.

This function, represented by g₂, has known input parameters, the points X^i_PM1 and the extracted points u^i_PM2 from the projected pattern (see Figure 5a), and unknown parameters, the rotation R_PM1→PM2 and translation T_PM1→PM2.

The v^i_PM2 points are used to calculate the error with eq. (16). This error is minimized over g₂(θ_x, θ_y, θ_z, t_x, t_y, t_z) using the Levenberg-Marquardt optimization method in MATLAB®, resulting in the rotation and translation between the camera-mirror (PM1) and the mirror (PM2)-projector.

e = (u^i_PM2 − v^i_PM2)²   (16)

2.4 Epipolar geometry for panoramic cameras

Epipolar geometry describes the relationship between positions of corresponding points in a pair of images acquired by central catadioptric cameras [14, 25], as seen in Figure 7. In this section, the alignment parameters (rotation and translation) between the camera-mirror (PM1) and the mirror (PM2)-projector are described.

Figure 6. Schematic diagram of the panoramic 3D reconstruction system. The projections of 3D points X_W on the mirrors (PM1, PM2) are denoted by x^i_PM1 and x^i_PM2. u^i_PM1 are the image points acquired by the catadioptric camera, and u^i_PM2 are the points of the projected image.

Figure 7. Epipolar geometry between two catadioptric cameras with a parabolic mirror. R_PM1→PM2 is the rotation matrix and T_PM1→PM2 the translation vector between both mirrors (PM1-PM2). Π is the epipolar plane.

The projections of 3D points X_W on the mirrors (PM1, PM2) are denoted by x^i_PM1 and x^i_PM2. The coplanarity of the vectors x^i_PM1, x^i_PM2 and T_PM1→PM2 = [t_x, t_y, t_z]ᵀ can be expressed with eq. (17):

(x^i_PM2)ᵀ R_PM1→PM2 (T_PM1→PM2 × x^i_PM1) = 0   (17)

where × denotes the cross product. Simplifying the coplanarity constraint described in eq. (17), we obtain eq. (18):

(x^i_PM2)ᵀ E x^i_PM1 = 0   (18)

where: E = R_PM1→PM2 [T]ₓ,PM1→PM2   (19)

E is the essential matrix and [T]ₓ,PM1→PM2 is the antisymmetric matrix given by eq. (20):

[T]ₓ,PM1→PM2 = [   0     −t_z    t_y
                  t_z     0     −t_x
                 −t_y    t_x     0  ]   (20)

To each point u₁ in one image, an epipolar conic is uniquely assigned in the other image. To find the corresponding points between both images, we use eq. (21):

u₂ᵀ A₂(E, u₁) u₂ = 0   (21)

In a general case, the matrix A₂(E, u₁) is a nonlinear function of the essential matrix E, the point u₁, and the calibration parameters of the catadioptric camera.

The vector T_PM1→PM2 defines the epipolar plane Π. Hence, the normal vector of the plane Π expressed in the camera's coordinate system is given by eq. (22):

n₁ = T_PM1→PM2 × x^i_PM1   (22)

The normal vector n₁ can be expressed in the projector's coordinate system using the essential matrix, resulting in eq. (23):

n₂ = R_PM1→PM2 n₁ = R_PM1→PM2 (T_PM1→PM2 × x^i_PM1) = R_PM1→PM2 [T]ₓ,PM1→PM2 x^i_PM1 = E x^i_PM1   (23)


Thus, eq. (23) is denoted by eq. (24) as:

n₂ = [p, q, s]ᵀ   (24)

We can write the equation of the plane Π in the projector's coordinate system, resulting in eq. (25):

px + qy + sz = 0   (25)

To derive A₂(E, u₁) in eq. (21) for a parabolic mirror, z is substituted into eq. (25), resulting in eq. (26), which allows the calculation of the epipolar conics for this type of mirror:

sx² + 2b₂px + sy² + 2b₂qy − sb₂ = 0   (26)

where b₂ is the mirror parameter and p, q, s are defined by eq. (24).

2.5 3D reconstruction

Knowing the rotation R_PM1→PM2 and translation T_PM1→PM2 between both mirrors, we can find the relationship between the pixel coordinates in the image and the angles defined in the parabolic mirror reference system (see Figure 8). Having this information, 3D point coordinates are calculated from the correspondence of points in the projected image and the image acquired by the catadioptric camera [14].

Given two points x^i_PM1 and x^i_PM2 on the surfaces of the mirrors PM1 and PM2, and the coordinates [x, y, z] of a point x^i_PM2, we have eq. (27):

φ₁ = arctan(y / x)

θ₁ = arctan(√(x² + y²) / z)   (27)

ω₁ = arccos( (x^i_PM1)ᵀ (−R_PM1→PM2 T_PM1→PM2) / (|x^i_PM1| |T_PM1→PM2|) )

ω₂ = arccos( (x^i_PM2)ᵀ T_PM1→PM2 / (|x^i_PM2| |T_PM1→PM2|) )

Figure 8. ω₁ and ω₂ are the angles between the points x^i_PM1 and x^i_PM2, |D| is the distance between these points. X are the reconstructed 3D points.

From the distance |D| between the two points x^i_PM1 and x^i_PM2, the coordinates of a point X are determined with eq. (28).

|D| = |T_PM1→PM2| sin ω₂ / sin(ω₁ + ω₂)

X = [ |D| sin θ₁ cos φ₁ ; |D| sin θ₁ sin φ₁ ; |D| cos θ₁ ]   (28)

3. Results

3.1 Calibration

We used 26 images to calibrate the catadioptric camera; some of these are shown in Figure 9. As we can see, the white plane (Π^i_w) contains two calibration patterns: one printed and the other projected.

The printed pattern is used to calibrate the catadioptric camera formed by the CCD camera and the PM1 mirror, and to compute the plane Π^i_w. Once the camera-PM1 setup is calibrated, the projected pattern is mapped to the mirror and then to the plane Π^i_w. This pattern is used to compute the rotation and translation between the mirrors PM1 and PM2, as explained in section 2.3, and to calibrate the PM2-projector setup.

Figure 9. Calibration images. Twenty-six images were used to calibrate the catadioptric camera. Each image contains a white plane with the two calibration patterns in different positions and orientations.

The intrinsic parameters from the catadioptric camera are presented in Table 1. The values of the mirror's curvature allow us to confirm that it is a parabola.

Polynomial coefficients f(ρ):  a₀ = −147.8362,  a₁ = 0.0000,  a₂ = 0.0033,  a₃ = 0.0000,  a₄ = 0.0000
Image centre:  x_c = 542.3271,  y_c = 702.6857

Table 1. Intrinsic parameters of the camera-PM1

The intrinsic parameters obtained for the PM2-projector system are presented in Table 2.

Polynomial coefficients f(ρ):  a′₀ = 55.1000,  a′₁ = 0,  a′₂ = −0.0056,  a′₃ = 0.0000,  a′₄ = 0.0000
Image centre:  x_c = 575.0802,  y_c = 499.7993

Table 2. Intrinsic parameters of the PM2-projector


The rotation and translation matrices between the mirror PM1 and the mirror PM2, as described in section 2.4, correspond to the values shown in eq. (29).

R = [ −0.3924   0.9184   0.0505
      −0.8897  −0.3929   0.2324
       0.2333   0.0463   0.9713 ]   (29)

[T]ₓ = [    0       −108.2657   −0.4395
         108.2657       0      −44.9961
           0.4395    44.9961       0    ]

When building the optical setup for the panoramic 3D reconstruction system, we carefully aligned the PM1 and PM2 mirrors. However, as can be seen in the rotation matrix R, a small misalignment exists.

To calculate the calibration error in the camera-mirror PM1 system, we considered the position of the 3D points of the printed calibration pattern in the catadioptric image. With the rotation and translation parameters, these points are projected towards the PM1 mirror and, using the intrinsic parameters (shown in Table 1), are mapped to the image. This way, we compared the points from the printed pattern projected to the catadioptric image with the points extracted from the image acquired by the camera.

The calibration error in the catadioptric projection system (PM2-projector) is obtained as follows: we extract the positions of the 3D points of the projected calibration pattern in the catadioptric image, and then these points are projected towards the mirror PM1 and represented in the plane Π^i_w. With the 3D points in the plane, we use the rotation and translation matrix between the PM1 and PM2 mirrors to represent the points in the PM2 mirror reference system and, using the intrinsic parameters (shown in Table 2), map them towards the projector. Finally, we compared these points with the projected pattern to obtain the error.

The calibration error from the catadioptric camera (camera-PM1) is about 0.4 pixels, and the calibration error from the catadioptric projection system (PM2-projector) is 1.56 pixels.

3.2 Angle of view

Figure 10 shows the direction of T₁ as a function of the camera focus (F₁) and the sensor size (L₁). T₁ is defined by eq. (30):

T₁ = λ [F₁ ; L₁] = [Z₁ ; ρ₁ᵐ]   (30)

where Z₁ = a₀ + a₂(ρ₁ᵐ)², ρ₁ᵐ corresponds to the largest radius of the mirror that can be seen by the camera, λ is a scale factor, and a₀, a₂ are the polynomial coefficients that describe the PM1 mirror curvature.

There are two solutions for ρ₁ᵐ and λ; the solution associated with our experimental setup is eq. (31):

λ = (F₁ − √(F₁² − 4a₀a₂L₁²)) / (2a₂L₁²),   ρ₁ᵐ = (F₁ − √(F₁² − 4a₀a₂L₁²)) / (2a₂L₁)   (31)

Figure 10. Real dimensions of the experimental setup (Z and ρ axes in mm). L1, F1, L2 and F2 are shown at a larger size for visualization purposes. L1, F1, L2 and F2 are the sensor size, camera focus, display size and projector focus, respectively.

The normal unit vector of the mirror at (Z₁, ρ₁ᵐ) is given by:

n₁ = N₁ / √(N₁ᵀN₁)

where:

N₁ = [2a₂ρ₁ᵐ ; −1]

Considering that the angle formed by T₁ and N₁ is the same as the one formed by N₁ and S₁, we obtain eq. (32):

S₁ = 2(T₁ᵀn₁)n₁ − T₁   (32)

Approximating R_PM1→PM2 = I, we have eq. (33):

S₂ = 2(T₂ᵀn₂)n₂ − T₂   (33)

where:

n₂ = N₂ / √(N₂ᵀN₂),   N₂ = [−2a′₂ρ₂ᵐ ; −1]   and   T₂ = [−a′₀ − a′₂(ρ₂ᵐ)² ; ρ₂ᵐ]

ρ₂ᵐ = (F₂ − √(F₂² − 4a′₀a′₂L₂²)) / (2a′₂L₂), where L₂ and F₂ are the display size and focus of the projector, while a′₀ and a′₂ are the polynomial coefficients that model the curvature of the mirror PM2.

The angle of view θ of the reconstruction system is given by:

θ = cos⁻¹( S₁ᵀS₂ / (√(S₁ᵀS₁) √(S₂ᵀS₂)) )

Substituting the values of the camera, the projector and the mirrors PM1 and PM2, we obtain θ = 125.4125 degrees.
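Eqs. (30)-(33) chain into a small numerical sketch. The mirror coefficients below are the ones reported in Tables 1 and 2, but the camera focus F₁, sensor size L₁, projector focus F₂ and display size L₂ are placeholder values (the paper does not list them), so the resulting θ will not reproduce the reported 125.4125 degrees.

```python
import numpy as np

# Mirror coefficients from Tables 1 and 2; F1, L1, F2, L2 are ASSUMED values
a0, a2 = -147.8362, 0.0033       # camera-PM1 mirror
b0, b2 = 55.1000, -0.0056        # PM2-projector mirror (a'0, a'2)
F1, L1 = 8.0, 4.8                # camera focus and sensor size (placeholder, mm)
F2, L2 = 20.0, 15.0              # projector focus and display size (placeholder, mm)

def largest_radius(F, L, c0, c2):
    """Eq. (31): the minus root of c2*L^2*lam^2 - F*lam + c0 = 0, and rho_m."""
    lam = (F - np.sqrt(F**2 - 4.0 * c0 * c2 * L**2)) / (2.0 * c2 * L**2)
    return lam, lam * L

def reflect(T, N):
    """Eqs. (32)-(33): reflect direction T about the unit mirror normal."""
    n = N / np.linalg.norm(N)
    return 2.0 * (T @ n) * n - T

lam1, rho1 = largest_radius(F1, L1, a0, a2)
lam2, rho2 = largest_radius(F2, L2, b0, b2)
S1 = reflect(np.array([a0 + a2 * rho1**2, rho1]), np.array([2.0 * a2 * rho1, -1.0]))
S2 = reflect(np.array([-b0 - b2 * rho2**2, rho2]), np.array([-2.0 * b2 * rho2, -1.0]))
c = np.clip(S1 @ S2 / (np.linalg.norm(S1) * np.linalg.norm(S2)), -1.0, 1.0)
theta = np.degrees(np.arccos(c))   # angle of view under the assumed F and L values
```

Independently of the assumed hardware values, the returned λ must satisfy the quadratic implied by eq. (30), and the reflections must preserve vector norms, which gives a cheap consistency check on the derivation.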


3.3 Reconstruction

In order to test the functionality of the system, we reconstruct the object shown in Figure 11, containing two perpendicular planes.

Figure 11. Test object 1: Perpendicular planes. A set of two perpendicular planes was used to test the functionality of the system.

The perpendicularity of the two planes was measured using the Mitutoyo BJ1015 coordinate-measuring machine and Metrolog XG® software (see Figure 12), obtaining an angle of 89.192 degrees between them.

To perform the reconstruction, we projected a pattern of points onto the perpendicular planes (see Figure 13), and then we acquired an image of this projection using the catadioptric camera (see Figure 14). The projected pattern points have random colours. We perform the correspondence between the acquired image and the projected image using the colour of the projected points and the epipolar curve constraints (described in section 2.4).

Epipolar geometry from the 3D reconstruction system is shown in Figure 15. Here we can see that the conic lines obtained with eq. (26) intersect at the epipole and pass through the corresponding points. Once the correspondence is found, we build the 3D reconstruction of the object. The reconstructed planes can be seen in Figure 16.

Figure 12. Measuring the perpendicularity of the two planes using the Mitutoyo BJ1015 coordinate-measuring machine.

Figure 13. Projected pattern. The pattern projected by the PM2-projector system over the perpendicular planes consists of a set of points with random colours.

Figure 14. Catadioptric image with the points projected on the perpendicular planes, captured by the CCD camera-mirror PM1.

Figure 15. Epipolar geometry. Coloured projected points with their corresponding conics (epipolar lines).

Having previous knowledge about the object formed by the two planes, we segmented the reconstructed points into two sets, each corresponding to one of the two perpendicular planes Π₁ and Π₂. Then, we calculate the error, considering the perpendicular distance d_i between the points and the corresponding plane, using eq. (34).


Figure 16. 3D reconstruction of the perpendicular planes.

Figure 17. Reconstruction error in the perpendicular planes. Perpendicular distance error in millimetres of the points in plane one (red) and plane two (blue).

e = (1/n) · Σ_(i=1, p_i∈Π_j)^n d_i(p_i, Π_j)   (34)

In Figure 17, the perpendicular distance in millimetres from each point to the corresponding plane is shown. The graph shows a mean error of 2.4 mm on plane 1 and 3.6 mm on plane 2. The angle between the two planes is 89.8488 degrees.

We performed two additional reconstruction tests using a cylindrical object (see Figure 18) and a semi-sphere (see Figure 19).

A pattern of points with random colours was projected inside the cylinder to obtain the results shown in Figure 20.

Figure 18. Test object 2: Cylindrical object. A section of aplastic tube was used to test the functionality of the system withcurved-shape objects.

Figure 19. Test object 3: a semi-sphere was used to test the functionality of the system.

Figure 20. 3D reconstruction of the plastic tube.

In order to know the real values of the cylinder, we performed a curve fit on the 3D reconstructed points using Metrolog XG® software. The results indicate an error of 4.7657 mm (see Table 3) with respect to the real object radius.

Real radius [mm]    Fitted radius [mm]    Error [mm]
115.79              111.02                4.7657

Table 3. Curve-fitting results of the reconstructed 3D points using Metrolog XG® software.
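Metrolog XG's fitting routine is proprietary; as an illustration of the kind of radius estimate reported in Table 3, a minimal algebraic (Kasa) least-squares circle fit on a cross-section of the reconstructed points could look like this (synthetic data at the tube's nominal radius of 115.79 mm):

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to 2D points.
    Solves x^2 + y^2 = a*x + b*y + c, then recovers centre and radius."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# synthetic cross-section: points on a circle of radius 115.79 mm
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x = 115.79 * np.cos(t)
y = 115.79 * np.sin(t)
cx, cy, r = fit_circle(x, y)
print(round(r, 2))  # 115.79
```

The fitted radius of the actual reconstructed points would then be compared against the nominal value, as in Table 3.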


Figure 21. 3D reconstruction of the semi-sphere.

The reconstruction of the semi-sphere is shown in Figure 21.

4. Conclusions and future work

The panoramic 3D reconstruction systems found in the literature are based on two catadioptric cameras; thus, the 3D reconstruction relies on matching corresponding points between two views. However, these methods fail in applications with textureless objects. In this work, we proposed a new reconstruction system based on a single catadioptric camera and a catadioptric projection system. The reconstruction can be done with a single acquisition or with multiple acquisitions, moving the position of the projected points. For this, we first calibrate the catadioptric vision system, and then we calibrate the catadioptric projector. The reconstruction error for an object composed of two perpendicular planes was 2.4 mm on the first plane and 3.6 mm on the second plane, with an angle of 89.8488 degrees between the planes. The reconstruction error for the cylindrical object was 4.7 mm. Our future work will focus on obtaining denser 3D reconstructions and on characterizing the reconstruction uncertainty, as well as on increasing the reconstruction area by moving the system inside the object.

5. Acknowledgements

The authors wish to acknowledge the financial support for this work by the Consejo Nacional de Ciencia y Tecnología (CONACYT) through project SEP-2005-O1-51004/25293 and the financial support scholarship number 256319, and by the Instituto Politécnico Nacional through project SIP-20130165.

6. References

[1] Pollefeys M, Van Gool L, Vergauwen M, Cornelis K, Verbiest F, Tops J (2001) Image-based 3D Acquisition of Archaeological Heritage and Applications. Proceedings of VAST, 255-262.

[2] Earl G (2013) Modeling in Archaeology: Computer Graphic and Other Digital Pasts. Perspectives on Science, vol.21, no.2, 226-244.

[3] Muller K, Smolic A, Drose M, Voigt P, Wiegand T (2005) 3-D reconstruction of a dynamic environment with a fully calibrated background for traffic scenes. IEEE Trans. Circuits Syst. Video Technol., vol.15, no.4, 538-549.

[4] Ko JH, Kim ES (2006) Stereoscopic video surveillance system for detection of target's 3D location coordinates and moving trajectories. Optics Communications, vol.266, no.1, 67-79.

[5] Bascle B, Hampel K, Navab N, Baxter B, Zhang X (2001) An industrial 3D graphical interface for process control. IEEE Symposium on Emerging Technologies and Factory Automation, ETFA, 559-562.

[6] Dogar A (2010) Integrating 3D quality control function into an automated visual inspection system for manufacturing industry. University Politehnica of Bucharest Scientific Bulletin, Series C: Electrical Engineering, 72(2), 63-76.

[7] Zachow S, Zilske M, Hege HC (2007) 3D reconstruction of individual anatomy from medical image data: Segmentation and geometry processing. In 25. ANSYS Conference, CADFEM Users' Meeting, Techn. Report ZR 07-41, Zuse Institute Berlin.

[8] Morgado PM, Silveira M, Marques JS (2013) Extending local binary patterns to 3D for the diagnosis of Alzheimer's Disease. Proceedings - International Symposium on Biomedical Imaging, 17-120.

[9] Shang J, Lee C, Hwang-bo S, Lee J, Kim J (2012) 3D shape reconstruction system for dielectric material specimens surface measurement. Condition Monitoring and Diagnosis (CMD), International Conference on Digital Object, 332-335.

[10] Furferi R, Governi L, Volpe Y, Carfagni M (2013) Design and assessment of a machine vision system for automatic vehicle wheel alignment. International Journal of Advanced Robotic Systems, vol.10, no.242.

[11] Gledhill D, Yun G, Taylor D, Clarke D (2004) 3D Reconstruction of a Region of Interest Using Structured Light and Stereo Panoramic Images. IEEE Proceedings of the Eighth International Conference on Information Visualisation, 1007-1012.

[12] Sang D (2008) Simultaneous calibration of a structured light-based catadioptric stereo sensor. Ph.D. Thesis, Universitat de Girona.

[13] Zhou F, Peng B, Cui Y, Wang Y, Tan H (2013) A novel laser vision sensor for omnidirectional 3D measurement. Optics & Laser Technology, 1-12.

[14] Gonzalez-Barbosa JJ, Lacroix S (2005) Fast dense panoramic stereovision. IEEE International Conference on Robotics and Automation, 1210-1215.

[15] Mei C, Rives P (2007) Single View Point Omnidirectional Camera Calibration from Planar Grids. IEEE International Conference on Robotics and Automation, 3945-3950.

[16] Puig L, Bastanlar Y, Sturm P, Guerrero JJ, Barreto J (2011) Calibration of Central Catadioptric Cameras Using a DLT-like Approach. International Journal of Computer Vision, 1-14.

[17] Barreto J, Araujo H (2005) Geometric Properties of Central Catadioptric Line Images and Their Application in Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.27, 1327-1333.

[18] Scaramuzza D, Martinelli A, Siegwart R (2006) A toolbox for easily calibrating omnidirectional cameras. IEEE International Conference on Intelligent Robots and Systems, 1327-1333.


[19] Furukawa R, Kawasaki H (2005) Dense 3D reconstruction with an uncalibrated stereo system using coded structured light. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 107-107.

[20] Zhang S, Huang P (2006) High-resolution, real-time three-dimensional shape measurement. Optical Engineering, vol.45.

[21] Chen S (2008) Vision Processing for Real-time 3-D Data Acquisition Based on Coded Structured Light. IEEE Transactions on Image Processing, vol.17.

[22] Zhang S (2010) Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Optics and Lasers in Engineering, vol.48, 149-158.

[23] Micusik B, Pajdla T (2004) Para-catadioptric Camera Auto-calibration from Epipolar Geometry. Proceedings of the Asian Conference on Computer Vision.

[24] Garcia-Moreno AI, Gonzalez-Barbosa JJ (2012) GPS Precision Time Stamping for the HDL-64E Lidar Sensor and Data Fusion. Electronics, Robotics and Automotive Mechanics Conference (CERMA), IEEE, 48-53.

[25] Svoboda T, Pajdla T (2002) Epipolar Geometry for Central Catadioptric Cameras. International Journal of Computer Vision, vol.45, no.1, 23-37.
