
A New Solution for Camera Calibration and Real-Time Image Distortion Correction in Medical Endoscopy–Initial Technical Evaluation

Rui Melo∗, Joao P. Barreto, and Gabriel Falcao, Member, IEEE

Abstract—Medical endoscopy is used in a wide variety of diagnostic and surgical procedures. These procedures are renowned for the difficulty of orienting the camera and instruments inside the human body cavities. The small size of the lens causes radial distortion of the image, which hinders the navigation process and leads to errors in depth perception and object morphology. This article presents a complete software-based system to calibrate and correct the radial distortion in clinical endoscopy in real time. Our system can be used with any type of medical endoscopic technology, including oblique-viewing endoscopes and HD image acquisition. The initial camera calibration is performed in an unsupervised manner from a single checkerboard pattern image. For oblique-viewing endoscopes, the changes in calibration during operation are handled by a new adaptive camera projection model and an algorithm that infers the rotation of the probe lens using only image information. The workload is distributed across the CPU and GPU through an optimized CPU+GPU hybrid solution. This enables real-time performance, even for HD video inputs. The system is evaluated for different technical aspects, including accuracy of modeling and calibration, overall robustness, and runtime profile. The contributions are highly relevant for applications in computer-aided surgery and image-guided intervention such as improved visualization by image warping, 3-D modeling, and visual SLAM.

Index Terms—Camera calibration, camera model, exchangeable optics, GPU, medical endoscopy, radial distortion.

I. INTRODUCTION

Rigid medical endoscopes typically combine an endoscopic lens with a charge-coupled device (CCD) camera, as shown in Fig. 1(a). The lens covers the CCD sensor that

Manuscript received April 13, 2011; revised June 9, 2011, October 7, 2011, and November 9, 2011; accepted November 9, 2011. Date of publication November 23, 2011; date of current version February 17, 2012. This work was supported by the Portuguese Agency for Innovation (ADI) through the QREN co-promotion project n. 2009/003512 sponsored by the Operational Program "Factores de Competitividade" and the European Fund for Regional Development. Asterisk indicates corresponding author.

∗R. Melo is with the Institute for Systems and Robotics and the Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Coimbra, 3000-214 Coimbra, Portugal (e-mail: [email protected]).

J. P. Barreto is with the Institute for Systems and Robotics and the Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Coimbra, 3000-214 Coimbra, Portugal (e-mail: [email protected]).

G. Falcao is with the Instituto de Telecomunicacoes and the Institute for Systems and Robotics, Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Coimbra, 3000-214 Coimbra, Portugal (e-mail: [email protected]).

This paper has supplementary downloadable material available at http://ieeexplore.ieee.org.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TBME.2011.2177268

Fig. 1. The rigid endoscope combines a lens with a CCD camera (a). The system enables the visualization of human body cavities whose access is difficult or limited, and is widely used for both surgery and diagnosis. (b) Image of the interior of a knee joint, acquired by the 30° oblique-viewing arthroscope that is used in the experiments throughout the paper.

acquires images with the dark aperture (circular region) being visible due to the optics converter used for mounting the lens on the camera [Fig. 1(b)]. Two types of rigid endoscopes are commonly used in medicine: the forward-viewing endoscopes, where the optical axis is aligned with the cylindrical probe, and the oblique-viewing endoscopes, where the optical axis makes an angle of 30° or 70° with respect to the symmetry axis of the probe. The viewing direction of the latter can be changed without moving the camera head by simply rotating the endoscope around its symmetry axis [1]–[3]. This rotation is typically inferred by observing the position of a triangular mark on the periphery of the circular region [Fig. 1(b)]. Oblique-viewing endoscopes are especially useful for inspecting narrow cavities, such as the articulations (arthroscopy) or the sinuses (rhinoscopy), where the space to maneuver the probe is very limited.

Endoscopic video is often the only guidance available to the medical practitioner during minimally invasive procedures. Navigation inside the human body is achieved by recognizing local anatomical landmarks in close-range images, and the execution of tasks requires unnatural hand-eye coordination that is only mastered after a long training period. The manufacturers of endoscopic equipment try to mitigate these difficulties by working to improve the imaging conditions. In recent years, the introduction of better camera technologies [e.g., high-definition (HD) and 3-CCD sensors for improved image and color resolution] and the development of new optics and lenses (e.g., increased depth of field, antifog effect) have considerably improved visual perception for the medical practitioner.

Despite all these developments, strong radial distortion (RD) is a problem that persists because of the small size of the lenses. The distortion causes a nonlinear geometric deformation of the image, with the points being moved radially toward


Fig. 2. Calibration of the endoscopic camera from a single image of a planar checkerboard pattern. The calibration image of (a) is acquired in an unconstrained manner using the setup shown in (b). The setup consists of a 10 × 10 cm acrylic box with a checkerboard pattern inside that is backlit uniformly using a NERLITE back-light of 350 mA. (a) Calibration image. (b) Calibration setup.

the center [4]. Fig. 2(a) shows an endoscopic image of a planar checkerboard pattern. RD can be easily recognized because straight lines are projected into curves in the image periphery. It can also be observed that strong distortion severely affects the perception of relative size and depth, and the user is hardly able to tell that the imaged scene is planar [5], [6]. This issue has recently begun to merit the attention of manufacturers, and last year Stryker announced the first laparoscopy system with reduced RD. The distortion is diminished by combining a novel optical design with cropping of the image periphery where RD is more noticeable. Unfortunately, the images are far from being geometrically correct perspectives, the cropping decreases the effective field of view (FOV), and the solution can neither be extended to small-diameter lenses (e.g., arthroscopy, rhinoscopy, etc.) nor to legacy systems using standard-definition (SD) image acquisition and display.

Given the small size of the lenses, it is unlikely that improvements in optical design will ever definitively solve the RD issue. An alternative is to model the lens distortion and correct the acquired frames using image warping. This is a software-based solution that has important advantages: 1) it renders geometrically correct perspective images, provided that the RD is correctly modeled and quantified; 2) it is flexible, in that it can be applied to any type of endoscopic equipment, regardless of the lens diameter or the image acquisition technology; and 3) it is a very cost-effective solution, as long as the computation uses commercial off-the-shelf (COTS) hardware. Despite these potential advantages, RD correction by image warping is not yet a reality because of the following issues:

1) Camera calibration in the operating room (OR): Distortion compensation requires modeling the camera projection and estimating the corresponding parameters. An endoscope cannot be calibrated in advance by the manufacturer because it has exchangeable optics that are usually assembled in the OR before the procedure. Therefore, and since the calibration must be done by the medical practitioner, the procedure must be robust and automatic so that a nonexpert user can execute it quickly.

2) Changes in the calibration due to lens rotation: In oblique-viewing endoscopes, the motion between the optics and the camera sensor changes the projection parameters and the position and shape of the circular boundary separating the two image regions [7]. Since it is not feasible to calibrate the endoscope for every possible lens position, the calibration parameters must be updated according to a camera model that accounts for this relative motion.

3) Execution in real time: All the computations, including the rendering of the corrected images, must be done in real time. This is especially problematic in the case of HD systems, which provide high-resolution frames.

This article addresses the above issues and presents a fully functional software-based system for calibrating and improving visualization in medical endoscopy by correcting the image radial distortion. The solution runs in real time on a standard computer with a COTS graphics processing unit (GPU), and no further equipment or instrumentation is required. Moreover, it can potentially be used in any type of medical endoscopy, regardless of the type of scope (forward-viewing or oblique-viewing) or the image acquisition technology (SD or HD). User intervention is limited to the acquisition of a single calibration frame at the beginning of the clinical procedure.

Throughout the paper, the experimental evaluations are carried out using a 4-mm arthroscope with a lens cut of 30° that is mounted on a CCD camera with a resolution of 1280 × 960. We do not consider the situation where the endoscopic probe suffers mechanical torsion. This causes transitory changes in the projection model [7] that do not affect the practical usefulness of the proposed system.

A. Notation

Vectors and vector functions are represented by bold symbols, e.g., x, F(x); scalars and scalar functions are indicated by plain letters, e.g., r, f(x), g(r); matrices and image signals are, respectively, denoted by capital letters in sans serif and typewriter fonts, e.g., the matrix M and the image I. Points in the plane are typically represented using homogeneous coordinates, with the symbol ∼ denoting equality up to scale, e.g., x ∼ (x1 x2 1)ᵀ. Im indicates the m × m identity matrix, while 0m×n denotes a rectangular matrix of zeros. Conic curves in the projective plane are represented by a 3 × 3 symmetric matrix, e.g., Ω. If a matrix R is a function of a scalar α and/or a vector q, then the input parameters are indicated in subscript, e.g., Rα,q.

II. SYSTEM OVERVIEW AND RELATED WORK

We propose a complete solution for RD correction in medical endoscopy that comprises the modules and blocks shown in the scheme of Fig. 3. The medical practitioner starts by acquiring a single image of a checkerboard pattern using the setup in Fig. 2. The corners in the calibration frame are detected automatically, and both the matrix of intrinsic parameters K0 and the radial distortion ξ are estimated without further user intervention. After this brief initialization step, the processing pipeline on the right of Fig. 3 is executed for each acquired image. At each frame time instant i, we detect the boundary contour Ωi, as well as the position of the triangular mark pi. The detection results are used as input in an extended Kalman filter (EKF) which, given the boundary Ω0 and marker position p0 in the calibration


Fig. 3. Scheme showing the different modules of the system that is proposed for correcting the radial distortion of the image. The left-hand side concerns the Initialization Procedure that is performed only once, after assembling the endoscopic lens with the CCD camera. The right-hand side shows the Processing Pipeline that is executed at each frame time instant i.

frame, then estimates the relative image rotation due to the rotation of the lens probe with respect to the camera head. This 2-D rotation is parametrized by the angle αi and the fixed point qi that serve as input for updating the camera calibration based on a new adaptive projection model. Finally, the current geometric calibration Ki, ξ is used for warping the input frame and correcting the radial distortion. This processing pipeline runs in real time, with computationally intensive tasks, like the image warping and the boundary detection, being efficiently implemented using the parallel execution capabilities of the GPU. This section briefly introduces the system building blocks, discusses related work, and emphasizes the new contributions.

A. Boundary Detection

Finding the image contour that separates the circular and frame regions is important not only to delimit meaningful visual contents, but also to infer the rotation of the lens probe with respect to the camera head. Section III presents a new algorithm for tracking the boundary contour and the triangular mark across successive frames. The proposed approach is related to work by Fukuda et al. [3] and Stehle et al. [8]. The first infers the lens rotation in oblique-viewing endoscopes by extracting the triangular mark using conventional image processing techniques. The method assumes that the position of the circular region does not change during operation, which is in general not true, and it is unclear whether it can run in real time. Stehle et al. propose tracking the boundary contour across frames by fitting a conic

curve to edge points detected along radial directions. The main difference from our algorithm is that we use a hybrid CPU+GPU implementation for rendering a polar image and carry out the search along horizontal lines. As discussed in Section III, this strategy makes it possible to reconcile robustness and accuracy with low computational cost.

B. Intrinsic Camera Calibration

The Initialization Procedure aims at determining the intrinsic calibration matrix K0 and the radial distortion ξ when the lens probe is at a reference position Ω0, p0. Planar regular patterns are widely used as calibration objects because they are readily available and simplify the problem of establishing point correspondences. Bouguet's toolbox [9] is a popular software package that implements Zhang's method [10] for calibrating a generic camera from a minimum of three images of a checkerboard. Unfortunately, the Bouguet toolbox does not meet the usability requirements of our application. In practice, the number of input images needed to achieve accurate results is well above three, and the detection of grid corners in images with RD requires substantial user intervention. Several authors have addressed the specific problem of intrinsic calibration or RD correction in medical endoscopes [5], [6], [11]–[14]. However, these methods are either impractical for use in the OR, or they employ circular dot patterns to enable the automatic detection of calibration points, undermining the accuracy of the results [15].

Our system reconciles estimation accuracy with usability requirements by using the recent algorithm by Barreto et al. that fully calibrates a camera with lens distortion from a single image of a known checkerboard pattern [16]. The intervention of the medical practitioner is limited to the acquisition of a calibration frame from an arbitrary position using the setup of Fig. 2(b). The purpose of the calibration box is to control the light conditions so as to assure robust automatic detection of the grid image corners. Section IV introduces the camera projection model that will be assumed throughout the article, with the interested reader being referred to our previous publication [16] for details about the intrinsic calibration method.

C. Estimation of Relative Rotation and Calibration Update

Rigid endoscopes with exchangeable optics allow rotation of the lens scope with respect to the camera head, which enables observing the walls of narrow cavities without having to displace the camera. The problem is that the relative motion between lens and camera head causes changes in the calibration parameters that prevent the use of a constant model for correcting the distortion [7]. There is a handful of works proposing solutions for this problem [1]–[3], [8], [17], but most of them have the drawback of requiring additional instrumentation for determining the lens rotation [1], [2], [17]. The few methods relying only on image information for inferring the relative motion either lack robustness [3] or are unable to update the full set of camera parameters [8].

Section V proposes a new intrinsic camera model for endoscopes with exchangeable optics that is driven simultaneously by experiment and by the conceptual understanding of the lens


Fig. 4. Boundary detection in extreme situations. The dashed and solid overlays are, respectively, the initialization and the final contour. The algorithm converges in three to five iterations depending on the initial estimate. (a) Light spreading. (b) Low lighting. (c) Saturation.

arrangement. The parameters of the lens rotation required to update the camera model are estimated by a robust EKF that receives image information about the boundary contour and triangular mark as input. Our dynamic calibration scheme has two important advantages with respect to [8]: 1) the entire projection model is updated as a function of the lens rotation, and not only the RD profile curve; and 2) the rotation of the lens can still be estimated in the absence of the triangular mark [see Fig. 1(b)]. The approach is validated by convincing experimental results in reprojecting 3-D points in the scene.

D. RD Correction Using a Hybrid CPU+GPU Solution

Section VI concerns the rendering of the correct perspective frames by warping the endoscopic images. The required mapping functions are derived and several visualization aspects are discussed (e.g., resolution, image size). We also present examples that illustrate the benefits of RD correction for scene perception, and the importance of taking the lens rotation into account. Finally, Section VII shows that a CPU-only solution for carrying out the computation fails to deliver real-time performance. Thus, we propose a hybrid CPU+GPU solution with coalesced memory accesses that enables speedups between 50 and 200, depending on the image resolution. We perform a detailed time profiling of the processing pipeline, showing that the system can correct HD images at an acquisition rate above 25 fps (please see the supplementary video).

III. ROBUST REAL-TIME DETECTION OF THE BOUNDARY CONTOUR AND TRIANGULAR MARK

This section discusses the localization of the image contour that separates the circular and frame regions [see Fig. 1(b)]. Since the lens probe moves with respect to the camera head, the contour position changes across frames, which prevents using an initial off-line estimation. The boundary detection must be performed at each frame time instant, which imposes constraints on the computational complexity of the chosen algorithm. Several issues preclude the use of naive approaches for segmenting the circular region [18]: the light often spreads to the frame region [Fig. 4(a)]; the circular region can have dark areas, depending on the imaged scene and lighting conditions [Fig. 4(b)]; and there are often highlights, specularities, and saturation that affect the segmentation performance [Fig. 4(c)].

Fig. 5. Tracking of the boundary contour. The schemes from left to right relate to the original frame acquired at time instant i, the warped image using the affine transformation S, and the polar image obtained by changing from Cartesian to spherical coordinates. The red dashed overlay relates to the previous boundary estimate, and the crosses are the detected points in the current contour. S maps Ωi−1 into a unit circle. The purple shaded area is the image region that suffers the polar warp and is where the search is performed.

A. Tracking the Boundary Contour Across Frames

It is reasonable to assume that the curve to be determined is always an ellipse Ω with 5 DOF [4]. Thus, we propose to track the boundary across frames using this shape prior to achieve robustness and a quasi-deterministic runtime. Let Ωi−1 be the curve estimate at the frame time instant i − 1, as shown in Fig. 5(a). The boundary contour for the current frame i is updated as follows:

1) Consider a set of equally spaced lines rj, with j = 1, 2, . . . , N, that intersect in the conic center wi−1 (this center can be easily computed by selecting the third column of the adjoint of the 3 × 3 matrix Ωi−1 [4]).

2) For each rj, interpolate the image signal and compute the 1-D directional derivative at every point location.

3) For each rj, choose the first maximum of the 1-D derivative when moving from the periphery toward the center. The detected point sj is the probable location where the scan line rj intersects the boundary contour.

4) Compute Ωi by fitting an ellipse to the N points sj using a standard RANdom SAmple Consensus (RANSAC) procedure [19], [20].

This tracking approach is robust provided that the number of radial scan directions is high enough to handle possible outlier detections. Unfortunately, the computational effort is proportional to the value of N, making it difficult to reconcile robustness with strict time requirements.¹ We tackle this by using a hybrid CPU+GPU implementation where the image interpolation is carried out in the parallel pipeline of the GPU. As shown in Fig. 5, instead of working along the radial directions, we propose to unfold the image according to a polar map and carry out the edge detection along horizontal directions, as illustrated by the sketch below. The warping operation can be performed quickly on the GPU, and the topological change in the image domain significantly simplifies both the computation of the 1-D derivatives and the subsequent detection of the contour points sj.
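To make the horizontal-scan step concrete, the following numpy sketch detects one boundary candidate per line of a polar image by looking for the first strong intensity transition when moving from the periphery toward the center. It is only an illustrative stand-in for the GPU implementation described here: the function name, the threshold rule, and the synthetic test are our own assumptions.

    import numpy as np

    def detect_contour_points(polar, thresh=None):
        """One boundary candidate per horizontal line of the polar image.

        polar: 2-D float array; rows are angular samples, columns are radial
        samples rho, with the periphery at the rightmost column.
        Returns an (N, 2) array of (rho, row) detections.
        """
        # Derivative along decreasing rho: positive where the intensity rises
        # when moving from the dark periphery into the bright circular region.
        deriv = polar[:, :-1] - polar[:, 1:]
        if thresh is None:
            thresh = 0.5 * deriv.max()
        points = []
        for j, row in enumerate(deriv):
            above = np.nonzero(row > thresh)[0]
            if above.size:                      # first maximum from the periphery
                points.append((above[-1] + 0.5, j))
        return np.asarray(points, dtype=float)

    # Synthetic check: a bright disc of radius 80 on a 180 x 120 polar grid.
    rho = np.tile(np.arange(120), (180, 1))
    polar = (rho < 80).astype(float)
    pts = detect_contour_points(polar)
    assert np.allclose(pts[:, 0], 79.5)         # the edge is found on every line

The detected (ρ, θ) candidates would then be mapped back to the original frame with the function FS of (2) and fed to the RANSAC ellipse fitting of step 4.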

To implement this strategy, we must derive the mapping function that transforms the current frame i, shown in Fig. 5(a), into the polar image of Fig. 5(c). Let Ωi−1 be the estimate of the

¹In our experiments, we use N = 180 for input image resolutions of 640 × 480 and N = 1000 for resolutions of 2448 × 2048.


Fig. 6. Convergence of the boundary contour estimation after a poor initialization. We show from left to right a sequence of three successive frames (15 fps) of the oral cavity, with the rendered polar images in between. The green dots in the polar images are the point detections that are mapped back into the original image using FS. The RANSAC procedure divides the points into inliers (yellow dots) and outliers (red dots), and estimates the boundary contour at the current frame time instant. Note that after convergence the boundary is mapped into the middle vertical line of the polar image (cyan dashed overlay) and the triangular mark can be easily detected (white triangle in the last polar image) by scanning the vertical dashed yellow line for the maximum pixel value.

boundary contour in the frame i − 1. It is well known that there is always an affine transformation that maps an arbitrary ellipse into a unitary circle whose center is at the origin [4]. Such a transformation S is given by

$$
\mathsf{S} \sim \begin{pmatrix}
r\cos\phi & r\sin\phi & -r\,(w_{i-1,x}\cos\phi + w_{i-1,y}\sin\phi)\\
-\sin\phi & \cos\phi & w_{i-1,x}\sin\phi - w_{i-1,y}\cos\phi\\
0 & 0 & 1
\end{pmatrix}
\tag{1}
$$

where r is the ratio between the minor and major axes of Ωi−1, φ is the angle between the major axis and the horizontal direction, and (wi−1,x, wi−1,y) are the nonhomogeneous coordinates of the conic center wi−1. The transformation S is used to generate the intermediate result of Fig. 5(b), and the polar image [Fig. 5(c)] is obtained by applying a change from Cartesian to spherical coordinates.

The edge points are detected by scanning the horizontal lines of Fig. 5(c) from right to left. These points, which are expressed in spherical coordinates χ = (ρ, θ), are mapped back into the original image points by the function FS of (2). The current conic boundary Ωi is finally estimated using the robust conic fitting, which avoids the pernicious effects of possible outliers:

$$
\mathbf{x} \sim \mathbf{F}_{\mathsf{S}}(\boldsymbol{\chi}) \sim \mathsf{S}^{-1}
\begin{pmatrix} \rho\cos\theta \\ \rho\sin\theta \\ 1 \end{pmatrix}.
\tag{2}
$$
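A direct numpy transcription of (1) and (2) is given below for reference; the function names and the example values are our own, and the maps are written exactly as printed (any global scale factor is irrelevant since points and conics are defined up to scale).

    import numpy as np

    def affine_to_unit_circle(r, phi, wx, wy):
        """Affine map S of (1): built from the axis ratio r, orientation phi,
        and center (wx, wy) of the previous boundary estimate Omega_{i-1}."""
        c, s = np.cos(phi), np.sin(phi)
        return np.array([[r * c, r * s, -r * (wx * c + wy * s)],
                         [-s,        c,       wx * s - wy * c],
                         [0.0,     0.0,                   1.0]])

    def F_S(S, rho, theta):
        """Inverse polar map of (2): sends spherical coordinates (rho, theta)
        of the unfolded image back to a point in the original frame."""
        chi = np.array([rho * np.cos(theta), rho * np.sin(theta), 1.0])
        x = np.linalg.solve(S, chi)     # S^{-1} chi without forming the inverse
        return x / x[2]                 # normalize the homogeneous coordinates

    # Example: a detection at rho = 1 maps back to the previous boundary region.
    S = affine_to_unit_circle(r=0.5, phi=0.3, wx=320.0, wy=240.0)
    x = F_S(S, rho=1.0, theta=0.7)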

Fig. 6 shows the tracking behavior in a sequence of three frames when the initial boundary estimate is significantly off. After correct convergence, the boundary contour is mapped into the central vertical line of the polar image, enabling the robust detection of the triangular lens mark. As shown on the right-hand side of Fig. 6, we scan an auxiliary vertical line slightly deviated to the right, and select the pixel location that has maximum intensity. It is also important to note that the search for contour points is limited to a ring region around the previous boundary estimate [Fig. 5(b)], saving computational time both in the polar warping and in the horizontal line scanning [Fig. 5(c)]. The CPU+GPU hybrid implementation of the boundary tracking allows the computationally efficient reconciliation of robustness and accuracy. As we shall see in Section VII, the detection of the

boundary and lens mark takes between 4 and 13 ms, depending on the image size.

IV. INITIAL INTRINSIC CALIBRATION

This section discusses the initial intrinsic calibration of the endoscopic camera. This is a fully automatic process that uses the single image calibration (SIC) algorithm proposed in [16].

A. Projection Model

A camera equipped with a rigid endoscope is a compound system with a complex optical arrangement. The projection is central [7] and the image distortion is described well by the so-called division model [16]. Let X be the vector of homogeneous coordinates of a 3-D point represented in a world reference frame. Point X is projected into point x in the endoscopic image such that

$$
\mathbf{x} \sim \mathsf{K}_0\,\boldsymbol{\Gamma}_{\xi}(\mathsf{P}\,\mathbf{X}).
\tag{3}
$$

P denotes the standard 3 × 4 projection matrix [4], Γξ is a nonlinear projective function that accounts for the image radial distortion, and K0 is the matrix of intrinsic parameters with the following structure:

$$
\mathsf{K}_0 \sim \begin{pmatrix}
a f & s f & c_x\\
0 & a^{-1} f & c_y\\
0 & 0 & 1
\end{pmatrix}
\tag{4}
$$

where f, a, and s stand, respectively, for the focal length, aspect ratio, and skew, and c = (cx, cy)ᵀ are the nonhomogeneous coordinates of the image principal point. The image radial distortion is described by the division model [16], with undistorted projected points xu ∼ (xu yu zu)ᵀ being mapped into distorted points xd according to²

$$
\boldsymbol{\Gamma}_{\xi}(\mathbf{x}_u) \sim \begin{pmatrix}
2x_u & 2y_u & z_u + \sqrt{z_u^2 - 4\xi\,(x_u^2 + y_u^2)}
\end{pmatrix}^{\mathsf{T}}.
\tag{5}
$$

The amount of nonlinear distortion is quantified by a single scalar parameter ξ that always takes a nonpositive value (ξ ≤ 0). The mapping in the projective plane P² induced by Γξ is bijective, and the inverse function Γξ⁻¹ transforms distorted points xd ∼ (xd yd zd)ᵀ back into undistorted points xu [16]:

$$
\boldsymbol{\Gamma}_{\xi}^{-1}(\mathbf{x}_d) \sim \begin{pmatrix}
x_d & y_d & z_d^2 + \xi\,(x_d^2 + y_d^2)
\end{pmatrix}^{\mathsf{T}}.
\tag{6}
$$

²Please note that xd ∼ K0⁻¹ x and that the points xd and xu are expressed in a system of coordinates with origin at the distortion center.


TABLE I. COMPARISON BETWEEN SIC AND BOUGUET CALIBRATION

If the distance between xd and the origin O ∼ (0 0 1)ᵀ is

$$
r_d = \sqrt{\frac{x_d^2}{z_d^2} + \frac{y_d^2}{z_d^2}}
\tag{7}
$$

then it follows from the inverse mapping of (6) that the distance between xu and O is

$$
r_u = \frac{r_d}{1 + \xi\, r_d^2}.
\tag{8}
$$

Henceforth, and without loss of generality, it will be assumed that the 3-D point X is expressed in the camera reference frame, which means that P ∼ (I3 0).
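Since the division model is central to everything that follows, we transcribe (4)–(6) and (8) in a short numpy sketch. The normalization before the inverse mapping reflects footnote 2 (the formulas act on points expressed up to scale, with origin at the distortion center); the function names and the numeric check are our own.

    import numpy as np

    def build_K0(f, a, s, cx, cy):
        """Intrinsic matrix of (4)."""
        return np.array([[a * f, s * f, cx],
                         [0.0, f / a, cy],
                         [0.0, 0.0, 1.0]])

    def distort(xu, xi):
        """Division model of (5): homogeneous undistorted -> distorted point."""
        x, y, z = xu
        return np.array([2 * x, 2 * y,
                         z + np.sqrt(z**2 - 4 * xi * (x**2 + y**2))])

    def undistort(xd, xi):
        """Inverse mapping of (6), for points normalized so that z_d = 1."""
        x, y, z = xd
        return np.array([x, y, z**2 + xi * (x**2 + y**2)])

    def undistorted_radius(rd, xi):
        """Radial profile of (8): r_u = r_d / (1 + xi * r_d^2)."""
        return rd / (1 + xi * rd**2)

    # Round-trip check with a nonpositive distortion parameter (xi <= 0).
    xi = -0.2
    xu = np.array([0.3, 0.1, 1.0])
    xd = distort(xu, xi)
    xd = xd / xd[2]                          # normalize before inverting
    back = undistort(xd, xi)
    assert np.allclose(back / back[2], xu)   # equal up to scale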

B. Single Image Calibration (SIC) in the OR

We showed in [16] that a camera that follows the projection model described above can be calibrated from a single checkerboard frame taken from an arbitrary position. Thanks to this result, it is possible to limit the user intervention in the OR to the acquisition of an image of a planar grid that is backlit to enable automatic corner detection in a robust and accurate manner.

Table I compares the calibration results obtained with our SIC algorithm, the nonparametric approach proposed by Hartley and Kang [21], a robust variant of [21] that applies RANSAC for estimating the fundamental matrix that encodes the distortion center, and the Bouguet toolbox [9]. The experiment considers ten images of the checkerboard pattern with a resolution of 1280 × 960. The first row shows the mean and standard deviation of the ten independent calibration results achieved by SIC. The second and third rows refer to the nonparametric approach that, unlike SIC, only recovers the center and discrete points of the distortion profile curve. Finally, the last row shows the results obtained with [9] using the ten images simultaneously. Since Bouguet assumes a polynomial model, the estimated distortion profile is fitted with the division model for comparison purposes. It can be observed that SIC exhibits very good repeatability, and provides mean values for the calibration parameters that are similar to the ones achieved with [9] using all the images simultaneously. It is also interesting to verify that the distortion center estimation is consistent with the parameter-free approach, which shows that the assumed lens model is appropriate.

V. MODELING THE EFFECT OF THE LENS ROTATION IN THE INTRINSIC CALIBRATION

The initialization procedure determines the camera calibration K0, ξ when the lens probe is in a particular angular position, henceforth referred to as the reference position. As discussed in Section II, the relative rotation between the lens and the camera head changes the intrinsic calibration in a manner that affects the RD correction by image warping. In order to assess this effect, we acquired ten calibration images while rotating the lens probe through a full turn. The calibration was estimated for each angular position using the methodology of Section IV, and both the boundary Ω and the triangular mark were located as described in Section III. Fig. 7(a) plots the results for the principal point c, the boundary center w, and the lens mark p. Since the three points describe almost perfect concentric trajectories, it seems reasonable to model the effect of the lens rotation on the camera intrinsics by means of a rotation around an axis orthogonal to the image plane. This idea has already been advanced by Wu et al. [2], but they consider that the axis always goes through the principal point, an assumption that in general does not hold, as shown by our experiment.

A. Projection Model that Accounts for the Lens Rotation

The scheme in Fig. 7(b) and (c) aims to give an idea of the proposed model for describing the effect of the lens rotation on the intrinsics. Let us assume that the endoscopic lens projects a virtual image onto a plane I′ placed at the far end. We can think of I′ as the image that would be seen by looking directly through the endoscope eye-piece. Kc is the intrinsic matrix of this virtual projection, and c′ is the point location where the optical axis meets the plane. Now assume that a camera head is connected to the eye-piece, such that the CCD plane I is perfectly parallel to I′ and orthogonal to the optical axis. The projection onto I has intrinsics Kh, with the principal point c being the image of c′. So, if the camera is skewless (s = 0) with square pixels (a = 1), and K0 is the intrinsic matrix estimate

$$
\mathsf{K}_0 \sim \begin{pmatrix}
f & 0 & c_x\\
0 & f & c_y\\
0 & 0 & 1
\end{pmatrix}
\tag{9}
$$

then it can be factorized as

$$
\mathsf{K}_0 \sim \mathsf{K}_h\,\mathsf{K}_c \sim
\begin{pmatrix}
f_h & 0 & c_x\\
0 & f_h & c_y\\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
f_c & 0 & 0\\
0 & f_c & 0\\
0 & 0 & 1
\end{pmatrix}
\tag{10}
$$

with fc being the focal length of the endoscopic lens, and fh being the focal length of the camera head, which converts metric units into pixels.

Let us now consider that the lens probe is rotated around an axis l by an angle α [Fig. 7(c)]. l is assumed to be orthogonal to the virtual plane I′, but not necessarily coincident with the lens axis. In this case, the point c′ describes an arc of a circle with amplitude α and, since I and I′ are parallel, the same happens with its image c. The intrinsic matrix of the compound optical system formed by the camera head and the rotated endoscope becomes

$$
\mathsf{K} \sim \mathsf{K}_h\,\mathsf{R}_{\alpha,\mathbf{q}'}\,\mathsf{K}_c
\tag{11}
$$


Fig. 7. (a) Plot of the principal point c, triangular mark p, and boundary center w when the lens undergoes a full rotation. (b) and (c) illustrate the model and assumptions that are considered for updating the intrinsic matrix of the endoscopic camera. The lens projects a virtual image onto a plane I′ that is imaged into I by a CCD camera. The relative rotation of the lens is around an axis l that is parallel to the optical axis and orthogonal to the planes I and I′. In (c), c represents the principal point and q is the center of the induced image rotation.

with Rα,q′ being a plane rotation by α around the point q′ where the axis l intersects I′:

$$
\mathsf{R}_{\alpha,\mathbf{q}'} = \begin{pmatrix}
\cos\alpha & \sin\alpha & (1-\cos\alpha)\,q'_x - \sin\alpha\, q'_y\\
-\sin\alpha & \cos\alpha & \sin\alpha\, q'_x + (1-\cos\alpha)\, q'_y\\
0 & 0 & 1
\end{pmatrix}.
\tag{12}
$$

The position of q′ is obviously unchanged by the rotation, and the same is true of its image q ∼ Kh q′. Taking into account the particular structure of Kh, we can rewrite (11) in the following manner³

$$
\mathsf{K} \sim \mathsf{R}_{\alpha,\mathbf{q}}\,\mathsf{K}_h\,\mathsf{K}_c \sim \mathsf{R}_{\alpha,\mathbf{q}}\,\mathsf{K}_0.
\tag{13}
$$

We have just derived a projection model for the endoscopic camera that accommodates the rotation of the lens probe and is consistent with the observations of Fig. 7(a). The initialization procedure estimates the camera calibration K0, ξ at an arbitrary reference position (α = 0). At a certain frame time instant i, the matrix of intrinsic parameters becomes

$$
\mathsf{K}_i \sim \mathsf{R}_{\alpha_i,\mathbf{q}_i}\,\mathsf{K}_0
\tag{14}
$$

where αi is the relative angular displacement of the lens, and qi is the image point that remains fixed during the rotation. Since the radial distortion is a characteristic of the lens, the parameter ξ is unaffected by the relative motion with respect to the camera head. Thus, from (3), it follows that a generic 3-D point X represented in the camera coordinate frame is imaged at

$$
\mathbf{x} \sim \mathsf{K}_i\,\boldsymbol{\Gamma}_{\xi}\big(\,(\mathsf{I}_3\;\;\mathbf{0})\,\mathbf{X}\,\big).
\tag{15}
$$
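In code, the adaptive model is a one-line update of the intrinsics. The numpy sketch below transcribes (12) and (14); the function names and the numeric checks are ours, not part of the paper.

    import numpy as np

    def plane_rotation(alpha, q):
        """R_{alpha,q} of (12): 2-D rotation by alpha about the fixed image
        point q = (qx, qy), as a 3 x 3 homogeneous matrix."""
        c, s = np.cos(alpha), np.sin(alpha)
        qx, qy = q
        return np.array([[c,  s, (1 - c) * qx - s * qy],
                         [-s, c, s * qx + (1 - c) * qy],
                         [0., 0., 1.]])

    def update_intrinsics(K0, alpha_i, q_i):
        """Adaptive projection model of (14): K_i ~ R_{alpha_i, q_i} K_0."""
        return plane_rotation(alpha_i, q_i) @ K0

    # q is indeed the fixed point of the rotation, and alpha = 0 changes nothing.
    q = (310.0, 255.0)
    R = plane_rotation(0.4, q)
    assert np.allclose(R @ [q[0], q[1], 1.0], [q[0], q[1], 1.0])
    assert np.allclose(update_intrinsics(np.eye(3), 0.0, q), np.eye(3))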

B. Rotation Estimation

The update of the intrinsic parameter matrix at each frame time instant requires knowing the relative angular displacement αi and the image rotation center qi. We now describe how these parameters can be inferred from the position of the boundary contour Ω and the triangular mark p.

³The assumption of square pixels and zero skew is valid for most CCD cameras. If a ≠ 1 or s ≠ 0, then (11) and (13) are no longer strictly equivalent. However, the latter is typically a good approximation of the former, and the consequences in terms of modeling accuracy are negligible.

Let wi and w0 be, respectively, the centers of the boundary contours Ωi and Ω0 in the current and reference frames. Likewise, pi and p0 are the positions of the triangular markers in the two images. We assume that both wi, w0 and pi, p0 are related by the plane rotation Rαi,qi whose parameters we aim to estimate. This situation is illustrated in Fig. 8(a), where it can easily be seen that the rotation center qi must be the intersection of the perpendicular bisectors of the line segments defined by wi, w0 and pi, p0. Once qi is known, the estimation of αi is trivial. Whenever the triangular mark is unknown (if it does not exist or cannot be detected), the estimation of qi requires a minimum of three distinct boundary contours [Fig. 8(b)].
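This construction is easy to verify numerically. The following numpy snippet (our own illustration) intersects the two perpendicular bisectors using homogeneous line coordinates, where the cross product of two lines gives their intersection point; the point values in the check are arbitrary.

    import numpy as np

    def perpendicular_bisector(a, b):
        """Homogeneous line of points equidistant from the 2-D points a and b."""
        n = b - a                      # line normal
        m = 0.5 * (a + b)              # midpoint
        return np.array([n[0], n[1], -n @ m])

    def rotation_center(w0, wi, p0, pi):
        """Fig. 8(a): the center q of the plane rotation mapping (w0, p0) to
        (wi, pi) is the intersection of the two perpendicular bisectors."""
        q = np.cross(perpendicular_bisector(w0, wi),
                     perpendicular_bisector(p0, pi))
        return q[:2] / q[2]            # back to nonhomogeneous coordinates

    # Consistency check with a known rotation of 30 degrees about (5, 2).
    ang, q_true = np.pi / 6, np.array([5.0, 2.0])
    R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    w0, p0 = np.array([9.0, 3.0]), np.array([4.0, -6.0])
    wi, pi = R @ (w0 - q_true) + q_true, R @ (p0 - q_true) + q_true
    assert np.allclose(rotation_center(w0, wi, p0, pi), q_true)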

In order to avoid under-constrained situations and to increase the robustness to errors in measuring w and p, we decided to use a stochastic EKF [22] for estimating the rotation parameters. The state transition assumes a constant velocity model for the motion and a stationary rotation center. The equation is linear in the state variables, with T depending on the frame acquisition interval δt:

$$
\begin{pmatrix} \alpha_{i+1} \\ \dot\alpha_{i+1} \\ \mathbf{q}_{i+1} \end{pmatrix} =
\begin{pmatrix} \mathsf{T} & 0_{2\times3} \\ 0_{3\times2} & \mathsf{I}_3 \end{pmatrix}
\begin{pmatrix} \alpha_i \\ \dot\alpha_i \\ \mathbf{q}_i \end{pmatrix},
\qquad
\mathsf{T} = \begin{pmatrix} 1 & \delta t \\ 0 & 1 \end{pmatrix}.
\tag{16}
$$

The measurement equation is nonlinear in αi and qi:

$$
\begin{pmatrix} \mathbf{w}_i \\ \mathbf{p}_i \end{pmatrix} =
\begin{pmatrix} \mathsf{R}_{\alpha_i,\mathbf{q}_i} & 0_{3\times3} \\ 0_{3\times3} & \mathsf{R}_{\alpha_i,\mathbf{q}_i} \end{pmatrix}
\begin{pmatrix} \mathbf{w}_0 \\ \mathbf{p}_0 \end{pmatrix}
\tag{17}
$$

with the last two equations being discarded whenever the detection of the triangular mark fails.
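For illustration, a compact EKF iteration for this filter is sketched below in numpy. It simplifies the formulation above in ways that are our own choices, not the paper's: q is kept as a nonhomogeneous 2-vector (so the state has four entries instead of the five of (16)), the measurement Jacobian is computed numerically, and the noise levels are arbitrary.

    import numpy as np

    def rot_about(alpha, q, x):
        """Rotate the 2-D point x by alpha about q (the action of R_{alpha,q})."""
        c, s = np.cos(alpha), np.sin(alpha)
        return np.array([[c, s], [-s, c]]) @ (x - q) + q

    def h(state, w0, p0):
        """Measurement model of (17): predicted (w_i, p_i) given the state."""
        alpha, q = state[0], state[2:4]
        return np.hstack([rot_about(alpha, q, w0), rot_about(alpha, q, p0)])

    def ekf_step(state, P, z, w0, p0, dt=1/25., q_var=1e-4, r_var=1.0):
        """One predict/update cycle for the state (alpha, alpha_dot, qx, qy).

        z holds the measured boundary center and mark position; when the mark
        is not detected, drop the last two rows of z, h, and the Jacobian."""
        F = np.eye(4); F[0, 1] = dt                  # constant velocity, Eq. (16)
        state, P = F @ state, F @ P @ F.T + q_var * np.eye(4)
        H, eps = np.empty((4, 4)), 1e-6              # numerical Jacobian of h
        for k in range(4):
            d = np.zeros(4); d[k] = eps
            H[:, k] = (h(state + d, w0, p0) - h(state - d, w0, p0)) / (2 * eps)
        S = H @ P @ H.T + r_var * np.eye(4)
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        state = state + K @ (z - h(state, w0, p0))
        return state, (np.eye(4) - K @ H) @ P

    # One step with a synthetic measurement generated from a "true" state.
    w0, p0 = np.array([320., 240.]), np.array([500., 240.])
    state, P = np.array([0., 0., 310., 250.]), np.eye(4)
    z = h(np.array([0.1, 0., 315., 245.]), w0, p0)
    state, P = ekf_step(state, P, z, w0, p0)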

C. Experimental Validation

The proposed model was validated by reprojecting grid corners onto images of the checkerboard pattern acquired at different angles α (Fig. 9). The SIC was performed for the reference position (α = 0), enabling the determination of the matrix K0, the RD parameter ξ, and the 3-D coordinates of the grid points. Then, the camera head was carefully rotated without moving the lens probe, in order to keep the relative pose with respect to the calibration pattern. The rotation center qi and the angular displacement αi were estimated for each frame using the geometry of Fig. 8. Finally, the 3-D grid points were projected


Fig. 8. Computing the image rotation center qi when the triangular mark is correctly detected (a), and when there is no mark information (b).

Fig. 9. Experimental validation of the model for updating the camera calibration. The left-hand side shows the image of the planar grid for a lens rotation angle of α = 127°. The red crosses are the actual positions of the image corners; the green crosses refer to the reprojected grid points using the derived model. There is a total of 130 evaluation points, many of which are located in the image periphery where the distortion is more pronounced. The red curve in the graph of (b) shows the RMS value of the reprojection error for different angular displacements α of the lens probe. The two other curves refer to the error of fitting a line to a set of collinear points [the yellow circles in Fig. 9(a)] after correcting the image distortion with (15) and without (3) taking into account the lens rotation.

onto the frame using (15), and the error distance to the actual image corner locations was measured. The red curve of Fig. 9(b) shows the RMS of the reprojection error for different angular displacements α. The values vary between 2 and 5.6 pixels, but no systematic behavior can be observed. We believe that the reason for the errors lies more in the experimental conditions than in the camera modeling. Since the images are very close range (≈ 10 mm), a slight touch on the lens while rotating the camera head is enough to cause a reprojection error of several pixels. The results of Fig. 9(a) not only validate the accuracy of the calibration, but also show that having a model that accounts for the lens rotation is of key importance for achieving a correct compensation of the lens distortion.

The presented experiment focused only on the intrinsic parameters, while [1] and [2] consider both intrinsic and extrinsic calibration and employ additional instrumentation. Although no direct comparison can be made, it is worth mentioning that our reprojection error is smaller than that of [2] and equivalent to that of [1], where only points close to the image center were considered. From the above, and despite these caveats, the experimental results clearly validate the proposed model.

VI. RADIAL DISTORTION CORRECTION

This section discusses the rendering of the correct perspective images that are the final output of the visualization system. As pointed out in [23], the efficient warping of an image by a particular transformation should be performed using the inverse mapping method. Thus, we must derive the function F that maps points y in the desired undistorted image into points x in the original distorted frame. From (15), it follows that

$$
\mathbf{F}(\mathbf{y}) \sim \mathsf{K}_i\,\boldsymbol{\Gamma}_{\xi}\big(\mathsf{R}_{-\alpha_i,\mathbf{q}''_i}\,\mathsf{K}_y^{-1}\,\mathbf{y}\big).
\tag{18}
$$

Ky specifies certain characteristics of the undistorted image (e.g., center, resolution), R−αi,q″i rotates the warping result back to the original orientation, and q″i is the back-projection of the rotation center qi:

$$
\mathbf{q}''_i \sim \begin{pmatrix} q''_{i,x} & q''_{i,y} & 1 \end{pmatrix}^{\mathsf{T}}
\sim \boldsymbol{\Gamma}_{\xi}^{-1}\big(\mathsf{K}_i^{-1}\,\mathbf{q}_i\big).
\tag{19}
$$

According to feedback from our medical partners, it is extremely important to preserve object scale in the center region; otherwise, the practitioner may be reluctant to adopt the proposed visualization solution. So, instead of correcting the RD and rescaling the result to the resolution of the original frame, we decided to expand the image periphery and keep the size of the undistorted center region, avoiding loss of information in the image center. This was done by computing the size u of the warped image from the radius of the boundary contour. Let rd be the distance between the origin and the point Ki⁻¹ p0 (the distorted radius). The desired image size u is given by u = f ru, where f is the camera focal length and ru is the undistorted radius determined using (8). Accordingly, the matrix Ky must be

$$
\mathsf{K}_y \sim \begin{pmatrix}
f & 0 & -f\,q''_{i,x}\\
0 & f & -f\,q''_{i,y}\\
0 & 0 & 1
\end{pmatrix}
\tag{20}
$$

with the center of the warped image being the locus where the image rotation center qi is mapped. Note that the conjunction of extreme RD with a small original image resolution can produce pixelation in the periphery of the image. This effect is not noticeable when using high-resolution input images. In older systems, this problem can be tackled either by improving the interpolation technique (currently we use bilinear interpolation) or by reducing the overall resolution of the target image. Fig. 10 shows the RD correction results for some frames of a video sequence. The examples clearly show the improvement in the perception of the scene, and the importance of taking the lens rotation into account during the correction of the image [Fig. 10(b)].
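The following numpy sketch shows the inverse-mapping warp end to end for a single-channel image. It is a simplified illustration rather than the paper's implementation: the lens-rotation term of (18) is omitted (αi = 0), Ky simply centers the output, and the bilinear interpolation used by the authors is replaced by nearest-neighbor sampling.

    import numpy as np

    def undistort_image(img, K, xi, out_size):
        """Warp a grayscale image by the inverse mapping of (18) with alpha_i = 0:
        for every output pixel y, go to normalized coordinates with Ky^{-1},
        re-apply the division-model distortion of (5), and sample the input."""
        h, w = out_size
        f = K[0, 0]
        # K_y in the spirit of (20), mapping the distortion center to the
        # center of the output image.
        Ky = np.array([[f, 0., w / 2.], [0., f, h / 2.], [0., 0., 1.]])
        ys, xs = np.mgrid[0:h, 0:w]
        y_pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        xu = np.linalg.inv(Ky) @ y_pts                  # undistorted, normalized
        # Forward distortion, Eq. (5), written on normalized coordinates.
        z = xu[2] + np.sqrt(xu[2]**2 - 4 * xi * (xu[0]**2 + xu[1]**2))
        xd = np.stack([2 * xu[0] / z, 2 * xu[1] / z, np.ones(h * w)])
        src = np.rint((K @ xd)[:2]).astype(int)         # nearest-neighbor sample
        ok = ((src[0] >= 0) & (src[0] < img.shape[1]) &
              (src[1] >= 0) & (src[1] < img.shape[0]))
        out = np.zeros((h, w), dtype=img.dtype)
        out.ravel()[ok] = img[src[1, ok], src[0, ok]]
        return out

    # Example call with a synthetic image and plausible parameters.
    img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
    corrected = undistort_image(img, K, xi=-0.15, out_size=(600, 800))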

VII. GPU IMPLEMENTATION AND TIME PROFILING

The rendering of warped images described in Sections III and VI requires high-performance computational resources to process data in real time. In this section, we describe the parallelization of our algorithms for correcting the RD on the GPU using the compute unified device architecture (CUDA) [24] interface. The natural parallelism underlying warping operations, where all data elements are computed by interpolating a common mapping function, and the efficient memory access procedure developed herein allow significant speedups. These turn out to be decisive in accomplishing the real-time requirements of the visualization application, especially when using HD input frames. We evaluate the impact of our hybrid


Fig. 10. Radial distortion correction of endoscopic video sequences with lens probe rotation. The original and warped frames are presented in the top and bottom rows, respectively. (a) shows the reference position (α = 0) for which the initial calibration is performed. (b) compares the distortion correction results on the GPU without (left) and with (right) compensation of the lens rotation. (c), (d), and (e) present distortion correction results in various environments. All the results were obtained from a video sequence of 36 min with no system recalibration or reinitialization. (a) Calibration image. (b) Rotation of the lens cylinder. (c) Oral cavity. (d) Nasal cavity. (e) Artificial scene.

Fig. 11. Execution time per frame for various output image resolutions. The input images have a resolution of 1600 × 1200 pixels and the output images have variable square sizes. The table on the right shows the speedup achieved with the optimized hybrid CPU+GPU approach relative to the CPU solution.

CPU+GPU solution by comparing three approaches: 1) a purely CPU-based solution running on an Intel Core2 Quad CPU; 2) an unoptimized hybrid CPU+GPU version; and 3) an optimized hybrid CPU+GPU version, the latter two using an 8800 GTX GPU.

Fig. 11 compares the execution times (required to process a single frame) achieved with the solutions mentioned above. The speedup table, a measure of how much faster the GPU is than the CPU, presents the ratio between the CPU and the optimized hybrid CPU+GPU execution times. The comparison given in Fig. 11 shows that the CPU is not able to handle large warping operations (the most time-consuming task of the algorithms) as efficiently as the GPU.

A. Hybrid CPU+GPU Solution

Fig. 12 presents the optimized hybrid CPU+GPU implementation steps of the radial distortion correction algorithm and the details of the implementation strategy adopted in the development of the hybrid program that executes on the CPU and GPU. The parallelization of the algorithms on the GPU is divided into three main steps:

1) Image conversions: The image is split into its RGB channels and a grayscale conversion is performed.

2) Boundary estimation: The grayscale image is bound to a texture [24], and the mapping to spherical coordinates (2) along with the contour enhancement kernels are launched.

3) Radial distortion correction: The three image channels from the Image Conversions stage are bound to textures and the RD correction kernel is launched.

Fig. 12. Radial distortion correction algorithm steps. The red blocks represent CUDA kernel invocations. The green dotted blocks represent allocated device memory. Except for the purple block, which is computed on the CPU, all processing is performed on the GPU.

There is an intermediate step (purple block in Fig. 12) that is computed on the CPU. Each line of the polar image is scanned for the contour points to which the ellipse is fitted. The ellipse parameters are fed to the EKF discussed in Section V-B, and the calibration parameters are updated and passed as arguments to the distortion correction kernel. This step is implemented on the CPU rather than the GPU because of the sequential nature of the processes involved.

The optimized hybrid CPU+GPU solution relies on a data prealignment procedure that allows a single memory access per group of threads (see Fig. 13), which is known as coalescence [25], [26]. If the data to be processed by the GPU is misaligned instead, no coalesced accesses are performed and several memory transactions occur for each group of threads, significantly reducing the performance of the kernel under execution.

Although the alpha channel of the image is not used, it is necessary in order to fulfill the memory layout requirement for


Fig. 13. Coalesced memory access by a group of threads (t0 to t15). Each channel (RGBA) of a pixel is represented using 8-bit precision.

TABLE II. EXECUTION TIMES, IN MILLISECONDS, AND FRAMES PER SECOND (fps)

performing fully coalesced accesses. The increase in the amount of data to be transferred introduces a penalty of 10.6% in the transfer time, while the achieved coalescence reduces the kernels' execution time by 66.1%. In sum, the coalesced implementation saves 38.7% of the computational time relative to the unoptimized hybrid CPU+GPU implementation.
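The layout argument of Fig. 13 can be made concrete with a small numpy analogy (ours, not the authors' CUDA code): padding RGB pixels with an unused alpha byte turns every pixel into one naturally aligned 32-bit word, which is what allows a thread group to fetch its pixels in a single coalesced transaction.

    import numpy as np

    h, w = 480, 640
    rgb = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)

    # With 3-byte pixels, consecutive threads straddle alignment boundaries;
    # padding to RGBA makes each pixel one aligned 32-bit word.
    rgba = np.empty((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = rgb     # 33% more bytes per pixel to transfer...
    rgba[..., 3] = 0        # ...but the unused alpha buys aligned accesses
    words = rgba.view(np.uint32)    # thread k reads pixel k as words[y, x, 0]
    assert words.shape == (h, w, 1)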

B. Time Profiling

Table II presents the time taken by each step in Fig. 12 while processing a 150-frame video sequence (the times represent the mean value over all frames processed for each resolution). By implementing coalesced accesses to global memory and exploiting the texture memory space of the GPU (optimized hybrid CPU+GPU solution), we were able to significantly shorten the execution time required to process a frame, compared with the other solutions presented in Fig. 11. Regarding the optimized hybrid CPU+GPU solution, whose processing times are given in Table II, 4.8% to 15.0% of the total time is spent in CPU calculations (contour extraction, ellipse fitting, and EKF for rotation center estimation), 41.4% to 61.1% is spent in data transfers to/from the host/device, and 32.7% to 41.3% of the time is spent in the kernels' execution.

VIII. CONCLUSION

This article proposes a versatile and low-cost system for calibrating and correcting the RD in clinical endoscopy. The solution runs in real time on COTS hardware and takes into account usability constraints specific to medical procedures. Moreover, it can be used with any type of endoscopy technology, including oblique-viewing endoscopes and HD acquisition.

The development of the system led to new methods and models for the following problems: 1) intrinsic camera calibration in the OR with minimal user intervention; 2) robust segmentation of the circular region; 3) inference of the relative rotation between the lens probe and the camera head using image information alone; and 4) on-line updating of the camera calibration during the clinical procedure. In addition, we proposed an optimized hybrid CPU+GPU solution for the computation and this, along with carefully designed algorithms, makes it possible to correct HD video inputs in real time. Furthermore, since the approach is scalable, it will be suitable for execution on future GPU generations that are likely to have more cores. It will, therefore, easily support more complex processing or the processing of larger HD images.

The generation of correct perspectives of the scene in real time is likely to improve the depth perception of the surgeon. As future work, we intend to use this solution to quantify the influence of RD on the success rate of surgeries. Moreover, the contributions of this work to endoscopic camera modeling and calibration are of major importance for applications in computer-aided surgery and image-guided intervention. Examples include improved visualization by image warping, 3-D modeling and registration from endoscopic video, augmented reality and overlay of preoperative information, and visual SLAM for surgical navigation.

REFERENCES

[1] T. Yamaguchi, T. Nakamoto, Y. Sato, Y. Nakajima, K. Konishi, M. Hashizume, T. Nishii, N. Sugano, H. Yoshikawa, K. Yonenobu, and S. Tamura, "Camera model and calibration procedure for oblique-viewing endoscope," in Proc. MICCAI, 2003, pp. 373–381.

[2] C. Wu, B. Jaramaz, and S. G. Narasimhan, "A full geometric and photometric calibration method for oblique-viewing endoscope," Int. J. Comput. Aided Surg., vol. 15, pp. 19–31, 2010.

[3] N. Fukuda, Y. Chen, M. Nakamoto, T. Okada, and Y. Sato, "A scope cylinder rotation tracking method for oblique-viewing endoscopes without attached sensing device," in Proc. 2nd Int. Conf. Softw. Eng. Data Mining, Chengdu, China, Jun. 2010, pp. 684–687.

[4] R. I. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 2004.

[5] K. Asari, S. Kumar, and D. Radhakrishnan, "A new approach for nonlinear distortion correction in endoscopic images based on least squares estimation," IEEE Trans. Med. Imag., vol. 18, no. 4, pp. 345–354, Apr. 1999.

[6] J. Helferty, C. Zhang, G. McLennan, and W. Higgins, "Videoendoscopic distortion correction and its application to virtual guidance of endoscopy," IEEE Trans. Med. Imag., vol. 20, no. 7, pp. 605–617, Jul. 2001.

[7] J. Barreto, J. Santos, P. Menezes, and F. Fonseca, "Ray-based calibration of rigid medical endoscopes," in Proc. 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS), Marseille, France, Sep. 2008. Available: http://hal.inria.fr/view_by_stamp.php?&halsid=oq08v1s59vbflhntl29e4nmeg6&label=OMNIVIS2008&langue=en&action_todo=view&id=inria-00325388&version=1

[8] T. Stehle, M. Hennes, S. Gross, A. Behrens, J. Wulff, and T. Aach, "Dynamic distortion correction for endoscopy systems with exchangeable optics," in Proc. Bildverarbeitung fur die Medizin, Berlin, Germany, 2009, pp. 142–146.

[9] J.-Y. Bouguet. (Jul. 2010). Camera Calibration Toolbox for Matlab. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/index.html#ref

[10] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proc. ICCV, 1999, pp. 666–673.

[11] R. Shahidi, M. R. Bax, C. R. Maurer, Jr., J. A. Johnson, E. P. Wilkinson, B. Wang, J. B. West, M. J. Citardi, K. H. Manwaring, and R. Khadem, "Implementation, calibration and accuracy testing of an image-enhanced endoscopy system," IEEE Trans. Med. Imag., vol. 21, no. 12, pp. 1524–1535, Jan. 2002.

[12] C. Wengert, M. Reeff, P. Cattin, and G. Szekely, "Fully automatic endoscope calibration for intraoperative use," in Proc. Bildverarbeitung fur die Medizin, Hamburg, Germany, Mar. 2006, pp. 419–423.

[13] F. Vogt, S. Kruger, H. Niemann, and C. H. Schick, "A system for real-time endoscopic image enhancement," in Proc. MICCAI, Montreal, Canada, Nov. 15–18, 2003, pp. 356–363.

[14] S. Kruger, F. Vogt, W. Hohenberger, D. Paulus, H. Niemann, and C. H. Schick, "Evaluation of computer-assisted image enhancement in minimal invasive endoscopic surgery," Methods Inf. Med., vol. 43, pp. 362–366, 2004.

[15] J. Mallon and P. F. Whelan, "Which pattern? Biasing aspects of planar calibration patterns and detection methods," Pattern Recognit. Lett., vol. 28, no. 8, pp. 921–930, Jan. 2007.

[16] J. Barreto, J. Roquette, P. Sturm, and F. Fonseca, "Automatic camera calibration applied to medical endoscopy," in Proc. British Machine Vision Conf. (BMVC), London, U.K., Sep. 7–10, 2009. Available: http://www.bmva.org/bmvc/2009/BMVC_2009_Authors.htm#18

[17] S. D. Buck, F. Maes, A. D'Hoore, and P. Suetens, "Evaluation of a novel calibration technique for optically tracked oblique laparoscopes," in Proc. MICCAI, Brisbane, Australia, 2007, pp. 467–474.

[18] R. Melo, J. P. Barreto, and G. Falcao. (2011, Mar.). "Superior visualization in endoscopy," Tech. Rep. [Online]. Available: http://arthronav.isr.uc.pt/reports/visualizationreport.zip

[19] A. Fitzgibbon, M. Pilu, and R. Fisher, "Direct least square fitting of ellipses," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 5, pp. 476–480, May 1999.

[20] M. Fischler and R. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.

[21] R. Hartley and S. B. Kang, "Parameter-free radial distortion correction with centre of distortion estimation," in Proc. 10th IEEE Int. Conf. Computer Vision (ICCV), Beijing, China, Oct. 17–21, 2005, pp. 1834–1841.

[22] S. M. Bozic, Digital and Kalman Filtering. London, U.K.: Edward Arnold, 1979.

[23] L. Velho, A. C. Frery, and J. Gomes, Image Processing for Computer Graphics and Vision. New York: Springer-Verlag, 2008.

[24] NVIDIA, CUDA Programming Guide 3.0. Santa Clara, CA: NVIDIA, Feb. 2010.

[25] D. Kirk and W.-m. Hwu, Programming Massively Parallel Processors: A Hands-on Approach. San Mateo, CA: Morgan Kaufmann, 2010.

[26] G. Falcao, L. Sousa, and V. Silva, "Massively LDPC decoding on multicore architectures," IEEE Trans. Parallel Distrib. Syst., vol. 22, no. 2, pp. 309–322, Feb. 2011.

Rui Melo received the Integrated Master's degree (B.Sc. and M.Sc.) in electrical and computer engineering from the University of Coimbra, Coimbra, Portugal, in 2009, where he is currently working toward the Ph.D. degree.

Since 2009, he has been a Computer Vision Researcher at the Institute for Systems and Robotics, Coimbra. His research interests include computer vision and parallel programming.

Joao P. Barreto received the "Licenciatura" and Ph.D. degrees from the University of Coimbra, Coimbra, Portugal, in 1997 and 2004, respectively.

From 2003 to 2004, he was a Postdoctoral Researcher with the University of Pennsylvania, Philadelphia. Since 2004, he has been an Assistant Professor with the University of Coimbra, where he is also a Senior Researcher with the Institute for Systems and Robotics. He is the author of more than 30 peer-reviewed publications. His current research interests include different topics in computer vision, with a special emphasis on geometry problems and applications in robotics and medicine.

Dr. Barreto is a regular Reviewer for several conferences and journals, having received four Outstanding Reviewer Awards in the last few years.

Gabriel Falcao (S'07–M'10) received the Graduate degree in 1997 and the M.Sc. degree in the area of digital signal processing in 2002, both from the University of Porto, Porto, Portugal, and the Ph.D. degree in electrical and computer engineering from the Faculty of Science and Technology, University of Coimbra (FCTUC), Coimbra, Portugal.

He is currently an Assistant Professor with the Faculty of Science and Technology, FCTUC. In 2011, he became a Visiting Professor at the Ecole Polytechnique Federale de Lausanne, Switzerland. He is also a Researcher at the Instituto de Telecomunicacoes. His research interests include high-performance and parallel computing, hybrid computation on heterogeneous systems, and digital signal processing algorithms, namely those related to biomedical engineering.

Dr. Falcao is a member of the IEEE Signal Processing Society.