
Robotica (2013) volume 31, pp. 1085–1101. © Cambridge University Press 2013
doi:10.1017/S0263574713000349

Mind the gap: detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation

Arnab Sinha and Panagiotis Papadakis∗

ALCOR, Vision, Perception and Cognitive Robotics Laboratory, Department of Computer, Control and Management Engineering, University of Rome "La Sapienza," Italy

(Accepted April 10, 2013. First published online: May 14, 2013)

SUMMARY
Safe navigation of robotic vehicles is considered a key prerequisite of successful mission operations within highly adverse and unconstrained environments. While there has been extensive research in the perception of positive obstacles, little progress can be accredited to the field of negative obstacles. This paper presents an elaborate attempt to address the problem of negative obstacle detection and traversability analysis in the form of gaps by processing 3-dimensional range data. The domain of application concerns Urban Search and Rescue scenarios that reflect environments of increased complexity in terms of diverse terrain irregularities. To allow real-time performance and, in turn, timely prevention of unrecoverable robotic states, the proposed approach is based on the application of efficient image morphological operations for noise reduction and border following for the detection and grouping of gaps. Furthermore, we reason about gap traversability, a concept that is novel within the field. Traversability assessments are based on features extracted through Principal Component Analysis by exploring the spatial distribution of the interior of the individual gaps or the orientation distribution of the corresponding contour. The proposed approach is evaluated within a realistic scenario of a tunnel car accident site and a challenging outdoor scenario. Using a contemporary Search and Rescue robot, we have performed extensive experiments under various parameter settings that allowed the robot to always detect the real gaps, and either optimally cross over those that were traversable or otherwise avoid them.

KEYWORDS: Mobile robots; Navigation; Motion planning; Computer vision; Man–machine systems.

1. Introduction
In parallel to common applications where mobile robots operate in indoor, structured environments, there has been an evident interest in advancing robot technology to increase the degrees of freedom in their operation so that they can be deployed within outdoor, off-road, natural as well as unnatural environments. Typical scenarios concern planetary exploration, military, forestry, agriculture and mining, together with Urban Search and Rescue (USAR) robotics, which is the focus of this paper.

* Corresponding author. E-mail: [email protected]


Robots are able to operate in challenging environments by using reconfigurable components that (passively or actively) adapt to rough terrain. In such applications, several issues need to be addressed, such as: (i) assessment of terrain traversability, (ii) planning optimal paths with respect to given criteria and (iii) automatically adapting the articulating parts of the robot.

The aim of the Natural Human-Robot Cooperation in Dynamic Environments (NIFTi) project, to which the present work contributes, is to develop a robotic system that teams with human operators and firefighters for first-response missions in USAR. Our first extensive experience with the consortium's robotic platform [2] was at a joint exercise in July 2011 [15] and, more recently, within an end-user evaluation taking place in a tunnel at the VVF training site in Montelibretti in December 2011 [9] (see Fig. 1). The scenario spanned a broad area into the tunnel filled with debris, pallets, barrels, crashed vehicles and smoke. The overall setting comprised various hazards for the mobility of the robot, among which were several gaps, some of which were traversable and others were not.

A collective observation was that due to the low point of view from on-board sensors, constrained lighting and presence of smoke, it was very difficult for the users to manually navigate the robot, let alone perceive and avoid or traverse gaps in an optimal way. This highlighted the need to improve the autonomous navigation capabilities of the robot, but in a way that would be transparent to the user so that it could be trusted.

In this work we address the problem of negative obstacle perception and traversability analysis in the form of gaps, hence dealing with the first highlighted issue, namely, assessing terrain traversability. This is certainly one of the most challenging perception problems, since the presence of a negative obstacle can only be inferred through the absence of data, which can have various interpretations.

We propose a methodology that is applicable in real time and provides accurate traversability assessments under various challenging conditions. Our approach for gap perception and analysis is based on 3-dimensional (3D) point cloud processing, which is considered to be more reliable in general compared to other methods that are based either on vision or other sensor modalities.


Fig. 1. (Colour online) Tunnel car accident at the Italian firefighters' school in Montelibretti, used for USAR evaluation.

From this perspective, and in comparison to previous methods that rely on 3D scene analysis, the main contributions of the present work are summarized as follows:

• Organization/grouping of regions that correspond to gaps.
• Traversability analysis of gaps allowing for non-binary and higher-level classification.
• Extensive experiments in USAR scenarios.

In detail, we employ 2D image morphological and contour detection (through border following) algorithms for perceiving gaps spread around the ground in the vicinity of the robot, without the need of processing the complete 3D scene, unless gap detection is required for more than one plane. For each detected gap that is captured by the corresponding contour, we perform a traversability analysis based on its eigen-decomposition, where we consider the spatial distribution of the interior of individual gaps or the orientation distribution of the corresponding contour to assess the optimal traversal path, accounting for vehicle mobility constraints.

The remainder of the paper is organized as follows: In Section 2 we first review the previous works in negative obstacle detection, and in particular gap perception, to motivate the directions that we followed in the proposed methodology. In Section 3 we proceed to the detailed description of the proposed gap perception and traversability analysis approach, and finally in Section 4 we present the results of an extensive set of experiments within a tunnel car accident scenario and an outdoor environment, and elaborate on the performance of our approach.

2. Previous Works
In addressing the gap detection problem, various directions have been explored; nonetheless, the problem remains challenging, especially for Search and Rescue environments that are highly diverse and unconstrained in terms of terrain irregularities. With respect to the sensor modalities that have been employed for sensing the environment, we may tabulate the previous works as shown in Table I.

Probably the first elaborate attempt to deal with this problem can be attributed to the work of Matthies et al. [13], where the presence of negative obstacles is inferred by performing ray tracing for every pixel within the range image, and comparing the actual range values along the ray with the expected range values according to the position of the ground plane. If the distance between the ranges of consecutive pixels was greater than a threshold, then this indicated the presence of a negative slope or ravine.
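To make the cue concrete, the following sketch (ours, not from ref. [13]; it assumes a flat-ground model, a known sensor height and hypothetical parameter names) illustrates the comparison of measured against expected ranges along one image column:

```python
import numpy as np

def negative_obstacle_cue(ranges, beam_angles, sensor_height, jump_thresh):
    """Sketch of the range-jump cue for negative obstacles (after ref. [13]).

    ranges:      measured range per pixel along one column, ordered near to far.
    beam_angles: corresponding beam depression angles in radians.
    Assumes flat ground lying sensor_height below the sensor origin.
    """
    # Range each beam would have if it hit the assumed flat ground plane.
    expected = sensor_height / np.sin(beam_angles)
    # Beams returning from beyond the expected ground range overshot the
    # terrain, hinting at missing ground.
    overshoot = ranges > expected
    # A large jump between consecutive returns marks the near edge of a
    # potential negative slope or ravine.
    jump = np.diff(ranges) > jump_thresh
    return np.flatnonzero(jump & overshoot[1:])
```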

Table I. Sensors used in earlier approaches.

Earlier work           | Stereo camera | Laser | Other sensors
Matthies et al. [13]   | Yes           | No    | No
Bellutta et al. [1]    | Yes           | No    | No
Matthies et al. [14]   | Yes           | No    | Thermal
Dima et al. [5]        | Yes           | Yes   | Infrared camera
Kelly et al. [8]       | Two           | Four  | Omnidirectional camera, monochrome digital camera
Crane et al. [4]       | No            | Yes   | No
Dubbelman et al. [6]   | Yes           | No    | No
Heckman et al. [7]     | No            | Yes   | No
Larson et al. [11, 12] | No            | Yes   | No


The approach of Bellutta et al. [1] for terrain perception was based on the combination of geometric and visual features through a rule-based system. Terrain was geometrically classified into negative or positive obstacles by inspection of the height profile of elevation data, while the terrain support was statistically learned through expectation maximization in colour space.

An alternative approach in terms of perception for detection of negative obstacles during night was later proposed in ref. [14], wherein range data were combined with thermal features of the terrain that highlight cavities as potential negative obstacles. The method was based on the observation that negative obstacles retain more heat during night than planar surfaces.

Dima et al. [5] used feature and classifier fusion for obstacle detection and terrain traversability, where the basis features that are computed for various perceptual modalities correspond to the mean and variance of pixel values along a set of image patches that span the whole image. Combining features that incorporate domain knowledge [10], different classifier fusion strategies are evaluated that show improved classification scores for road, human, and negative obstacle detection in comparison with single feature-based classifiers.

Kelly et al. [8] describe the design and operation of a human–robot team for off-road navigation, wherein terrain classification is based on geometry-based features combined with multi-spectral image-based features. The robot-support surface is extracted by ray-tracing of laser beams and training a neural network to derive the load-bearing surface when traversing over vegetated areas.


Fig. 2. The modules of the proposed methodology. The Gap perception block perceives the gaps with the parameters written below each sub-block; traversability analysis can be performed in two ways, namely, with contour information or using the space within the gap.

Negative obstacles are found by the absence of laser hits in the direction perpendicular to the support surface.

Crane et al. [4] describe the traversability grid data structure that served as a common interface between components that either produced or consumed perception data. A collection of smart sensors complied with a fixed underlying traversability assessment protocol which assigned traversability scores to each grid cell. Three LIDAR sensors were employed together with a camera sensor, with distinct traversability analysis roles. A LIDAR sensor directed toward the ground was dedicated to detecting negative obstacles based on the assumption that the unmanned ground vehicle (UGV) follows a level path; hence, cells lying below the prescribed plane were assigned traversability values mainly based on their range distance.

In ref. [6], obstacles were detected using dense 3D terrain data reconstructed from stereo disparities in the direction of image columns. First, a disparity validity measure was employed together with an image pyramid to produce reliable disparity estimates. In the following, the traversability was computed for each pixel of the disparity image by estimating the maximum vertical (positive or negative) slope and using hysteresis thresholding that was driven by morphological opening and region filling.

Heckman et al. [7] performed detection of potential negative obstacles by initially performing ray-tracing for occlusion labeling and finally for context-based labeling. Given a 3D voxel grid where cells were classified into linear, surface and scatter, ray-tracing was used to propagate the class of occupied voxels to the corresponding occluded voxels, whereas context-based labeling was used to differentiate between four cases that could be the cause of data absence and hence reason about the presence of negative obstacles.

In the work of Larson et al. [12], terrain traversability was determined by the presence of positive–negative obstacles, step edge obstacles, slope steepness and terrain roughness. Patches of missing range data that exceeded some size were considered as potential negative obstacles, and a consecutive filtering process determined whether these could be the result of shadowing from positive obstacles. Larson and Trivedi [11] in their work explored a two-stage (long and short-range) negative obstacle detection framework. Initially, potential negative obstacles were detected at a distance using the NODR classification approach and then further refined and filtered using support vector machines (SVM) once the UGV had sufficiently approached the surrounding area. The Negative Obstacle DetectoR (NODR) comprises a multi-pass detection process that first looked for steps and next for gaps whose characteristics could either be directly measured from the available range data, or inferred by using contextual cues, such as sudden negative or positive elevation drops. Eventually, using an SVM model trained on ground truth data, true and false positives of negative obstacles were distinguished once the UGV had sufficiently approached.

As can be derived from Table I, there are only three works that correspond to the same sensor allotment. The basis of the corresponding works, as described in refs. [4, 7, 11, 12], is laser ray tracing. Unfortunately, for lack of a generative framework and due to their dependence on the underlying laser device, a straightforward comparison seems infeasible. In contrast to earlier works, we propose a gap detection method that can be used with any laser device. Moreover, all previous works discussed so far concern applications where robotic vehicles operate within natural environments. To the authors' knowledge, there is no previous work on gap perception regarding USAR scenarios, wherein the complexity of the terrain is by far more increased, and no previous work on gap traversability analysis regardless of the environment of operation. From this perspective, the present work constitutes an important step toward a better comprehension of the problem and proposes a solution that is robust under diverse conditions.

3. Proposed Gap Detection and Traversability Analysis Methodology
The problem that we are addressing is decomposed into two sub-problems, namely, gap detection and traversability analysis. In Fig. 2, we provide a schematic overview of the individual steps followed in the proposed methodology.

Gap detection: Given a 3D point cloud $P = \{p_i \mid p_i = (x_i, y_i, z_i),\ i = 1, 2, \ldots, N_P\}$, we seek to detect sets of point clouds $G_j,\ j = 1, 2, \ldots, N_g$ that correspond to the gaps in the vicinity of the robot, where $N_P$ denotes the total number of points, and $N_g$ is the total number of different gaps detected.


Fig. 3. (Colour online) Constrained sensory box in front of a robot.

Gap traversability: We assess the traversability–mobility of the gaps, that is, we reason about whether it is safe for a robot to traverse a detected gap, as well as derive the optimal traversal mode. This information is stored for each gap within a vector $t_j = \{t, p_{center}, p_{start}, p_{end}\}$, where $t \in \{0, 1\}$ designates whether a gap is traversable or not, and in the former case, $p_{center}$ provides the center of the gap contour together with the optimal START and END poses, $p_{start}$ and $p_{end}$, respectively, that the robot should follow to traverse over the gap.
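For illustration only, the per-gap record $t_j$ could be represented by a small structure such as the following sketch (the field names are ours):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GapTraversability:
    """Mirrors the vector t_j = {t, p_center, p_start, p_end}."""
    traversable: bool                    # t in {0, 1}
    p_center: np.ndarray | None = None   # centre of the gap contour
    p_start: np.ndarray | None = None    # optimal START pose for the traversal
    p_end: np.ndarray | None = None      # optimal END pose for the traversal
```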

An implicit assumption that we make here, in accordance with ref. [7], which is most closely related to the present work, is that the set of points belonging to a given gap contour satisfies a plane equation up to a certain error constant in terms of their distance from that plane. In other words, we restrict the problem of gap detection from the complete 3D scene to a ground slice of fixed thickness in front of the robot, which is eventually represented by the projections of the 3D points within the slice onto the 2D ground plane.

3.1. Gap detection
The gap detection stage has been split into the following three parts to assist in the comprehension of the overall methodology:

• Binary image formation (Section 3.1.1)
• Image processing (Section 3.1.2)
• Gap point cloud generation for further processing (Section 3.1.3)

The pseudo-code of the overall methodology is given in Algorithm 1, and finally in Section 3.1.4 we describe the computational complexity of the proposed approach.

3.1.1. Binary image formation. We begin by constraining the space of gap detection from the complete $\mathbb{R}^3$ to the space within a virtual sensory box defined as $[x_{min}, R] \times [-R, R] \times [-z_{min}, z_{max}] \subset \mathbb{R}^3$ (as shown in Fig. 3 (left)), according to the specifications and dimensions of the robotic platform used in our experiments [2] (we elaborate more on this in Section 4.2). In general, $z_{min}$ can be thought of as infinity, or the maximum range distance that a particular laser sensor can support.

Our gap detection approach is applied to the set of points that reside close to a 2D plane that could support the robotic vehicle. We detect these points and project them onto a 2D binary image $I$ that will be used in the next steps (Sections 3.1.2 and 3.1.3).

One could attempt to estimate the robot supporting plane by fitting a plane to the point cloud within the sensory box, using, for example, regression, maximum likelihood or RANSAC. However, this approach is inherently problematic: due to the presence of gaps, the supporting plane would not be easily distinguishable for lack of sufficient data (inliers), which could result in an erroneous plane estimation. Instead, a more suitable approach is to consider a fixed planar area in front of the robot. In detail, we take a slice of terrain mainly for two reasons, namely, (i) to compensate for a small variance in the real 3D position of coplanar points that could be due to error or noise, and (ii) to allow the perception of gaps not only within perfectly planar terrain of zero inclination in front of the robot, but also within slightly inclined but highly planar terrains in the foreground. The plane that corresponds to this slice is estimated by taking into account the 3D pose estimation of the robot, which is computed by fusing sensory information from an Inertial Measurement Unit (IMU) and the registration of 3D point clouds as acquired by the LIDAR sensor. A particular decision on how to regress the plane depends on the density of laser sensing, the dimensions of the sensing box and the nature of the terrain that is expected to be encountered. Without loss of generality, we continue the description of our approach on the basis of an underlying plane estimation process.

Using the standard notation $ax + by + cz + d = 0$ for the representation of a 3D plane and employing a threshold $Pl_{th}$, a point $p_i$ can be checked for whether it resides on the robot's supporting plane by means of the following equation:

$$t_i = \frac{a x_i + b y_i + c z_i + d}{\sqrt{a^2 + b^2 + c^2}}. \tag{1}$$


Fig. 4. (Colour online) Consecutive stages of the gap detection process. Left: Binary image of the plane point cloud within the polar coordinate space; the rectangle shows the robot position and the X, Y axes with respect to the robot. Middle: After morphological operations. Right: Detected gap contours.

If $|t_i| \le Pl_{th}$, then the $i$th point belongs to the robot supporting plane; otherwise, if $t_i > Pl_{th}$, the point represents a positive obstacle. It could be argued that it is not appropriate to fix a threshold for positive obstacles or that a single point cannot represent an obstacle. This issue is alleviated as described in the next section, where by means of image processing we can filter out noisy points and outliers. Moreover, as shown in Algorithm 1, once $p_i$ is classified as a positive obstacle point, we further mark those pixels of the binary image that correspond to the points behind $p_i$ along the ray that connects the laser's origin and $p_i$ as NON-gap pixels, in order not to misinterpret as gaps the absence of data that is due to positive obstacle occlusion.
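A minimal sketch of this test in NumPy, assuming the plane coefficients $(a, b, c, d)$ are already available (our illustration, not the authors' implementation):

```python
import numpy as np

def classify_against_plane(points, plane, pl_th):
    """Split points into supporting-plane inliers and positive obstacles, Eq. (1).

    points: (N, 3) array of 3D points; plane: coefficients (a, b, c, d).
    """
    a, b, c, d = plane
    normal = np.array([a, b, c], dtype=float)
    # Signed point-to-plane distance t_i of Eq. (1).
    t = (points @ normal + d) / np.linalg.norm(normal)
    on_plane = np.abs(t) <= pl_th   # belongs to the robot supporting plane
    positive = t > pl_th            # lies above the slice: positive obstacle
    return on_plane, positive
```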

Finally, given a point $p_i$, the computation of its corresponding image pixel coordinates $(k_i, l_i)$ is performed by the following:

$$k_i = \left\lfloor \frac{\tan^{-1}(y_i/x_i)}{\theta_{int}} \right\rfloor, \tag{2}$$

$$l_i = \left\lfloor \log_{\delta_r}\left(\frac{r_i}{r_d} + 1\right) \right\rfloor, \tag{3}$$

$$r_i = \sqrt{x_i^2 + y_i^2}, \tag{4}$$

$$r_d = \frac{R}{\delta_r^{I_{height}-1} - 1}. \tag{5}$$

In the above equations, the parameter $\theta_{int} = \pi/I_{width}$ denotes the step between two consecutive $\theta$ values, as shown in Fig. 3 (right). Let us assume that the radial values are $r_1, r_2, \ldots, r_n$. If we set $r_2 - r_1 = r_d$ as the initial interval, each consecutive interval grows with the distance from the sensor at a rate governed by $\delta_r$. Since the maximum radial value is fixed to $R$ and the resolution (number of radial values) is fixed by the parameter $I_{height}$, the two parameters $r_d$ and $\delta_r$ are interdependent. We fix the parameter $\delta_r$ (discussed in Section 4.2) and evaluate $r_d$ as shown in Eq. (5).
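The mapping of Eqs. (2)–(5) translates directly into code; a sketch follows (note that, as an assumption on our part, the column index may need an offset of $I_{width}/2$ so that negative angles index valid columns):

```python
import numpy as np

def point_to_pixel(x, y, R, delta_r, i_width, i_height):
    """Map a planar point (x, y) to polar image coordinates (k, l), Eqs. (2)-(5)."""
    theta_int = np.pi / i_width                    # angular step (Section 3.1.1)
    r_d = R / (delta_r ** (i_height - 1) - 1.0)    # Eq. (5)
    r = np.hypot(x, y)                             # Eq. (4)
    k = int(np.floor(np.arctan2(y, x) / theta_int))             # Eq. (2)
    l = int(np.floor(np.log(r / r_d + 1.0) / np.log(delta_r)))  # Eq. (3)
    return k, l
```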

3.1.2. Image processing. We proceed by applying a noise filtering operation to the 2D binary image that was computed in the previous step. Due to the fact that this image is formed by employing a hard threshold $Pl_{th}$, the image will not be smooth, in the sense that there may be some regions with points that belong to the robot supporting terrain but have been excluded. Moreover, there could be some regions where spurious obstacle points have been detected, resulting in a noisy image. At this point, we make a local spatial coherence assumption, that is, in the presence of a "gap" or an "obstacle," the corresponding feature should be present in the neighbourhood as well. Following this idea, we smooth the binary image with binary image morphological filters: first we apply an erosion filter and then a similar dilation filter. The filtered image is denoted $I_D$.

Finally, we detect the gaps in the filtered image $I_D(i, j)$ by a state-of-the-art contour detection algorithm proposed by Suzuki et al. [20], which is based on the idea of border following. This method provides an accurate estimate of outer contours and holes, and a very efficient implementation can be found within the OpenCV library [3]. In Fig. 4 we demonstrate the consecutive stages of gap contour extraction in a representative example.
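Using OpenCV, the noise filtering and the border following reduce to a few calls; a sketch assuming a uint8 mask with 1 at candidate gap pixels and the OpenCV 4 return signature of findContours:

```python
import cv2
import numpy as np

def extract_gap_contours(binary_image, n_o):
    """Denoise the gap mask and group the gap pixels into contours.

    binary_image: uint8 array, 1 = candidate gap pixel, 0 = not a gap.
    n_o:          number of erosion (and matching dilation) iterations.
    """
    kernel = np.ones((3, 3), np.uint8)
    # Erosion removes isolated spurious gap pixels...
    eroded = cv2.erode(binary_image, kernel, iterations=n_o)
    # ...and a similar dilation restores the extent of the surviving regions.
    i_d = cv2.dilate(eroded, kernel, iterations=n_o)
    # Border following (Suzuki et al. [20]) as implemented in OpenCV.
    contours, _ = cv2.findContours(i_d, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return i_d, contours
```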

3.1.3. Gap point cloud extraction. At the final step, we assemble the gap point clouds $G_j$ that are going to be used as input to the "gap traversability analysis" stage. In that stage (detailed in Section 3.2), we may require the interior of the gaps or only the gap contour. In Algorithm 1, this condition is controlled by the boolean variable $K$. The conversion from an image pixel $(i, j)$ to the corresponding 3D point coordinates $(x, y, z)$ is easily obtained by inversion of Eqs. (2)–(4).

3.1.4. Computational complexity. The computational complexity of the binary image formation algorithm is linear in the total number of points $N_P$, hence $O(N_P)$. Erosion and dilation operations are performed on a binary image, hence their computational complexity is $O(I_{width} \times I_{height})$, where $I_{width}$ and $I_{height}$ correspond to the width and height dimensions of the image, respectively. The complexity of the border following algorithm by Suzuki et al. [20] for contour detection is linear in the number as well as the length of contours. We may therefore implicitly disregard the latter cost, as it is dominated by the computational complexity of binary image formation, which is linear in the total number of 3D points, whose number greatly exceeds the total number of points belonging to the contours. The total worst-case complexity of the proposed algorithm is $O(N_P + I_{width} \times I_{height})$. Since we eventually fix the resolution of the image, namely, $I_{width}$ and $I_{height}$, the overall complexity of the proposed approach is linear in the number of points within the scene.


Algorithm 1: Gap point cloud extraction

Input:
  P = {p_i}: point cloud
  n_o: number of morphological operations
  {a, b, c, d}: robot supporting plane
  K: Boolean (TRUE: interior, FALSE: contour)
Output:
  G = {G_j}: set of point clouds corresponding to each gap G_j

begin
  Transform point cloud P to the robot local coordinate system: P → Q
  Filter point cloud Q according to the sensory box: Q → S
  I = ones: initialize the binary image
  for each point s_i within point cloud S do
    if s_i is on the robot support plane then
      Find (θ_i, r_i) corresponding to s_i
      I(θ_i, r_i) = 0: mark the pixel as "not a GAP"
    else if s_i is from a positive obstacle then
      Find (θ_i, r_i) corresponding to s_i
      for all pixels behind (occluded by) the positive obstacle: r = r_i to I_h do
        I(θ_i, r) = 0: mark the pixel as "not a GAP"
      end
    end
  end
  I_D = MorphologicalOperation(I, n_o)
  C = {C_j; j = 1, ..., N_g} = FindContour(I_D)
  if K = TRUE then
    for each contour C_j ∈ C do
      Generate interior point cloud G_j according to the interior of contour C_j
    end
  else
    for each contour C_j ∈ C do
      Generate contour point cloud G_j according to the contour C_j
    end
  end
end


3.2. Gap traversability
Following the stage of gap detection, we analyse the shape of gap contours in order to address the following issues:

• Gap traversability: Determining whether the robot can cross over the gap considering its dimensions.
• Gap traversal path: If the gap is traversable, what are the START and END poses that the robot should reach for traversing the gap.

Algorithm 2: Gap traversability analysis

Input: G_j: point cloud representing the gap
Output: t_j: traversability analysis data for the gap

begin
  if Contour then
    Con_j = convex(G_j): extract convex point cloud
    Ct_j = Contour(Con_j): generate uniform contour point cloud
    m_j = centroid(Ct_j): required for START and END pose evaluation
    D_j = direction(Ct_j): extract direction vectors
    Cov_j = covariance(D_j)
    [e, λ] = PCA(Cov_j): eigenvectors e and eigenvalues λ of the gap contour orientation
    e_opt ← 1st eigenvector
  else if Interior then
    m_j = centroid(G_j)
    Cov_j = covariance(G_j − m_j)
    [e, λ] = PCA(Cov_j): eigenvectors e and eigenvalues λ of the gap
    e_opt ← 2nd eigenvector
  end
  G′_j = {g′_{j,k} = g_{j,k} e_opt^T}: project the points onto the optimal principal direction e_opt
  len_j = max_{k,l} ||g′_{j,k} − g′_{j,l}||
  if len_j > d_r then
    t_j(1) = 0
  else
    t_j(1) = 1, t_j(2:4) = m_j
    START = m_j + η e_opt
    END = m_j − η e_opt
    if ||START|| > ||END|| then
      t_j(5:7) = END, t_j(8:10) = START
    else
      t_j(5:7) = START, t_j(8:10) = END
    end
  end
end

Our approach to address the above issues resides in using two alternative feature spaces. On either of these two feature spaces, we apply Principal Component Analysis (PCA) and elaborate on the traversability of individual gaps. The two feature spaces are constructed either from the contour orientation, using the Normal Principal Component Analysis (NPCA) method [17, 18], or from the spatial distribution of the interior of the detected gap point cloud. In the sequel, we discuss both of these feature space extraction methodologies and compare their performance in Section 4.7. The overall traversability analysis is given in Algorithm 2.

3.2.1. Contour-based feature extraction. Given an individual gap point cloud $G_j$, we first extract the convex polygon $Con_j$ of this planar point cloud.


Fig. 5. (Colour online) Gap traversability analysis.

Afterwards, we uniformly sample the convex contour $Con_j$ and obtain $Ct_j$, from which we extract the directional feature space by estimating the normal vector at each point along the contour $Ct_j$. The main idea is explained in detail in our previous work [18], where NPCA was formulated and proposed for the rotation normalization of 3D objects. Here, instead, NPCA is applied in the 2D domain for the purpose of feature extraction, as exemplified in Fig. 5(a).
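A sketch of the 2D variant, estimating normals along the sampled contour and diagonalizing their covariance (our illustration of the idea; the NPCA formulation itself is given in ref. [18]):

```python
import numpy as np

def contour_orientation_pca(contour):
    """PCA of the orientation (normal) distribution of a closed 2D contour.

    contour: (N, 2) array of uniformly sampled, ordered contour points.
    Returns eigenvectors (as columns) sorted by decreasing eigenvalue.
    """
    # Tangents via central differences on the closed polyline.
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Rotate tangents by 90 degrees to obtain unit normals.
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    # Covariance of the direction vectors and its eigen-decomposition.
    cov = normals.T @ normals / len(normals)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    # For the contour-based feature space, e_opt is the 1st eigenvector
    # (Algorithm 2).
    return eigvecs[:, order], eigvals[order]
```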

3.2.2. Interior-based feature extraction. Alternatively, we may perform feature extraction in the original space, taking into account the non-uniform distribution of acquired 3D points due to the laser mode of sensing. To alleviate this limitation of non-uniformity, we down-sample the point cloud according to a neighbourhood size. First, we build a Kd-tree [19] for each gap point cloud. We terminate the branching of the tree when we reach a leaf size below a distance threshold $\varepsilon$. The leaf size is defined as the maximum distance of a point from the centroid of its cluster. With this algorithm, we would like to make the point cloud $G_j$ uniform. Indeed, if the maximum of the minimum distances between any two points within $G_j$ is $\tau$, then the leaf size of the tree should be $\varepsilon = \tau$ to make $G_j$ uniform. Since $G_j$ is generated from the image domain according to contour $C_j$ (see Algorithm 1), this value of $\tau$ can be evaluated from the maximum radial value within an individual gap point cloud, as described by Eqs. (2)–(5). Let us denote the maximum value of the radial index $l_i$ (see Eq. (3)) for the $j$th gap within the binary image as $\max(l_i) = L$. Then the value of $\tau$ should be derived according to the following formula:

$$\tau = \left(\delta_r^{L-1} - \delta_r^{L-2}\right) r_d. \tag{6}$$
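A hedged sketch of the uniformization step, substituting a simple grid clustering for the Kd-tree of ref. [19] (the leaf-size criterion $\varepsilon = \tau$ is the same; the cell size is our approximation):

```python
import numpy as np

def tau_from_radial_index(L, delta_r, r_d):
    """Leaf size epsilon = tau from the maximum radial index L, Eq. (6)."""
    return (delta_r ** (L - 1) - delta_r ** (L - 2)) * r_d

def downsample_uniform(points, eps):
    """Replace each eps-sized cluster of points by its centroid.

    A uniform grid with cell size 2*eps approximates clusters whose points
    lie within eps of their centroid, mimicking the Kd-tree leaf criterion.
    """
    cells = np.floor(points / (2.0 * eps)).astype(np.int64)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    return np.array([points[inverse == i].mean(axis=0)
                     for i in range(inverse.max() + 1)])
```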

Table II. Parameters.

Robot and environment-specific parameters | Laser-specific parameter | Algorithm-specific parameters
Plane threshold: $Pl_{th}$ | Increment of intervals between two consecutive radius values: $\delta_r$ | Number of morphological operations: $n_o$
$x_{min}$ | | Resolution of r: $I_{height}$
$z_{max}$ | | Resolution of θ: $I_{width}$
$R$ | |

3.2.3. Traversability analysis. After extracting the feature space (either contour or interior), we employ PCA to derive the shape characteristics of the detected gap. In order to assess traversability, we assign as the optimal eigenvector $e_{opt}$ the first principal direction for the contour-based feature space and the second principal eigenvector for the interior-based feature space. This can be better apprehended from Fig. 5 and Algorithm 2. Afterwards, we project all the original gap points $G_j$ onto the optimal principal direction $e_{opt}$ for the evaluation of the START and END poses. Let the set of projected points be denoted as $G'_j = \{g'_{j,k} = g_{j,k} e_{opt}^T\}$, where $k = 0, 1, \ldots, N_j$ for the $j$th gap and $N_j$ is the number of points in the uniformly sampled point cloud $G_j$. We define $\max_{k,l} \lVert g'_{j,k} - g'_{j,l} \rVert$ as the maximum distance between any two points $g'_{j,k}$ and $g'_{j,l}$ from the set $G'_j$. The value that is finally assigned to $len_j$ reflects the length of the gap if it were traversed from the narrowest side. Essentially, this value is used to condition the traversability of the corresponding gap, considering the mobility capabilities of the robotic vehicle. In particular, the gap $G_j$ is deemed untraversable in the following cases:

1. If $len_j$ exceeds a threshold $d_r$, which is based upon the length of the robot footprint.
2. If the estimated START or END poses reside very close to another gap $G_k$ or close to a positive obstacle.
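Putting the projection, the length check and the pose derivation together, a minimal sketch follows (with the offset $\eta$ of Algorithm 2 treated as a tuning parameter, and only the first of the two conditions checked):

```python
import numpy as np

def assess_traversability(gap_points, e_opt, d_r, eta):
    """Project the gap onto e_opt, measure its extent and derive the poses.

    gap_points: (N, 2) planar gap points G_j; e_opt: unit optimal direction.
    d_r:        traversable-length threshold from the robot footprint.
    """
    m = gap_points.mean(axis=0)            # contour centroid m_j
    proj = (gap_points - m) @ e_opt        # scalar projections g'_{j,k}
    len_j = proj.max() - proj.min()        # max pairwise projected distance
    if len_j > d_r:
        return None                        # gap deemed untraversable
    start, end = m + eta * e_opt, m - eta * e_opt
    # Keep the pose closer to the robot (the origin) as START, per Algorithm 2.
    if np.linalg.norm(start) > np.linalg.norm(end):
        start, end = end, start
    return m, start, end
```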

3.3. Parameters
In this section we highlight those parameters of the proposed approach that are most relevant to the overall performance. We may roughly classify the set of parameters into three groups, as described in the following and summarized in Table II.

1. Robot and environment-specific parameters:
• Plane threshold ($Pl_{th}$): The value of this parameter depends upon two factors, namely, the expected terrain roughness, and the degree of roughness that the vehicle can tolerate without adapting its articulating components.
• $x_{min}$: The minimum range along the x-direction from where we are bounding the sensing box.



• $z_{max}$: The maximum range along the z-direction that could be due to a positive obstacle. This parameter is fixed by the vehicle's height. The proposed methodology does not concentrate on positive obstacle detection; any 3D point above the robot supporting plane is considered a positive obstacle. If we assume that a 3D point p is above the threshold $z_{max}$, and there is no other 3D point q at a nearby (x, y) coordinate with a smaller z value, then it can be easily understood that the point p will not affect the traversability of the robot.
• $R$: Maximum sensing radius. The sensing box boundaries along the x and y-axes are also fixed by this parameter.

2. Laser-specific parameters:
• $\delta_r$: Increment in the intervals of consecutive r values, as explained in Fig. 3.

3. Algorithm-specific parameters:
• $n_o$: Number of erosion (and dilation) operations. This parameter depends on the laser specifications and the corresponding $\delta_r$ value.
• Resolutions of r and θ, $I_{height}$ and $I_{width}$ respectively: These parameters define the number of different r and θ values within the image plane. In Section 3.1, we used the parameter $\theta_{int}$, which is equal to $\pi/I_{width}$. Given the values of $\delta_r$ and $I_{height}$, the individual r values can be calculated using Eqs. (3)–(5).

Fig. 6. Snapshots from the tunnel car accident scenario at Montelibretti's firefighters school in Italy.

4. Experiments
In this section we evaluate the proposed gap detection and traversability analysis approach within USAR and outdoor urban scenarios. Our aim is to evaluate the performance of the proposed methodology under varying parameter settings that assist in deriving its limitations and encountering its failure cases, which finally provides us with the optimal modes of operation. The experiments that we have conducted further demonstrate the performance of our method with respect to different viewing angles and locations of the vehicle within the scene.

Fig. 7. (Colour online) Top row: Schematic top-down views of two experiment scenarios. Red lines correspond to negative obstacles, and black (straight) lines signify occlusion due to a positive obstacle. Bottom row: Top-down views of acquired 3D point clouds within the corresponding scenarios.


Fig. 8. (Colour online) Scenario 3: (a) Schematic representation of the scene. The blue rectangle and arrow denote the robot position and orientation, gaps are signified through red lines, while the distance from the robot to the gap is shown in green. (b) Point cloud acquired from position c, shown in colour coding. Yellow signifies the minimum z-value, while blue signifies the maximum z-value.


4.1. Evaluation scenarios
4.1.1. Tunnel scenario. The first two experiments were performed within a realistic tunnel car accident site (depicted in Figs. 1 and 6). These experiments were conducted for the purpose of the end users' evaluation of the robotic platform used within the EU FP7 IP NIFTi project (www.nifti.eu), which concerns Natural Human Robot Cooperation in Dynamic Environments.

We have evaluated our approach within two variations of a scene, as depicted in Fig. 7. These two scenarios correspond to two different situations where the negative obstacle is too close to the robot. The 3D point clouds corresponding to each scenario are shown at the bottom of Fig. 7.

1. The first scenario corresponds to a constrained situation wherein the robot is unable to move in any direction.
2. The second scenario allows a single moving direction (corresponding to a small and traversable gap in the foreground). The added complexity of this scenario in comparison to the first scenario is the presence of a positive obstacle (barrel) that occludes a portion of the scene.

4.1.2. Outdoor scenario. In Fig. 8(a) we provide the schematic diagram of the outdoor scenario (scenario 3) and the robot positions (a, b, . . . , g, h) with the robot-pose direction (depicted as a blue arrow). In Fig. 8(b), we show the corresponding point cloud acquired from the robot at position c, to give a view of the surrounding area. This scenario is used to evaluate the performance of the proposed methodology under different viewing angles and robot positions.

4.2. Effect of parameters
The robot and environment-specific parameters may vary according to robot specifications. For the specific robot used in these experiments, we have fixed the values of the corresponding parameters (in metres) as $Pl_{th} = 0.1$, $x_{min} = 0.6$, $z_{max} = 2.0$, $R = 3.0$.

4.2.1. Parameter $\delta_r$. The value of $\delta_r$ should generally be greater than one and, according to Larson et al. [12], the optimal value is $\delta_r = 2$. In Fig. 9, we show two results with respect to two different $\delta_r$ values, namely, $\delta_r = 1.1$ and $\delta_r = 2$. The point cloud (shown in blue in Fig. 9(a) and yellow in Fig. 9(b)) corresponds to the grid of the image plane.

Fig. 9. (Colour online) Effect of the laser parameter $\delta_r$: The blue and yellow pixels in (a) and (b), respectively, show the generated point cloud from the image plane, while orange pixels show the original point cloud. It can be seen that for the laser used in these experiments, $\delta_r = 2.0$ is not the preferred setting, in contrast to the approach followed within ref. [12].


Fig. 10. (Colour online) Effect of the resolution parameter k while keeping parameter $n_o$ fixed. Resolution is defined as $I_{width} = I_{height} = 2^k + 1$.

Fig. 11. (Colour online) Effect of the morphological parameter $n_o$ while keeping the resolution parameter fixed at k = 6; with no morphological operation (a) all the gaps are detected but not grouped; with too many morphological operations (c) some gaps are not detected.

Fig. 12. (Colour online) Positive obstacle occlusion.


Fig. 13. (Colour online) Failure cases: occasionally gaps may be detected, although they correspond to false positives.

Here we show the result when the optimal $\delta_r$ value for our particular laser sensor is used, namely, $\delta_r = 1.1$, where the interval between two consecutive radial point acquisitions most reliably resembles the radial interval with respect to the original point cloud (shown in orange). Therefore, in the remaining set of experiments we have set $\delta_r = 1.1$. In general, $\delta_r$ can be fixed according to the specifications of the LIDAR sensor that is used.

4.2.2. Resolution parameters. Next, we examine the results with respect to the three parameters of the algorithm, namely, $n_o$, $I_{height}$ and $I_{width}$. Without loss of generality, we set $I_{height} = I_{width} = 2^k + 1$. Therefore, a small variation in k will result in a large variation in the resolution of both r and θ, which has a direct effect on the performance of the proposed methodology, as will be shown.

In examining the effects of the parameters $n_o$ and k on the gap detection result, we choose the first scenario, as shown on the left of Fig. 7. Initially, we fix the parameter $n_o = 1$ to examine the dependence of the result on the parameter k. It can be derived from Fig. 10 that after a certain threshold (k = 6) the performance increase is trivial, if present. We concluded that the benefit in terms of accuracy in gap perception for values of k higher than k = 6 was not significant enough to compensate for the increase in computational cost (see Section 3.1.4), which has been kept very low in order to allow for real-time performance (≤ 50 ms with k = 6). The overall time performance is discussed in Section 4.8.

4.2.3. Number of morphological operations $n_o$. In the next stage of our evaluation, we fix the resolution parameter to the optimal value (k = 6) and vary the $n_o$ parameter within the interval [0, 2] (since this range is sufficient to explain the effect). Figure 11 shows the effect of the morphological operations with respect to scenario 1.

As can be observed from Fig. 11(a), with no morphological operation only two gaps are detected (in light green and dark green), one just in front of the robot and the other in its surroundings. On the other hand, when the number of morphological operations is increased to two, as shown in Fig. 11(c), some gaps are not detected. Finally, when the number of morphological operations is one, as can be seen in Fig. 11(b), all the gaps are consistently detected. From these results, one can easily conclude that the morphological operations help to segment different gap regions, although the perceptual information may be distorted above a certain number of operations.

The grouping of gaps is advantageous, since it helps to study the traversability with respect to each gap and reason based upon these results, as described in Section 3.2.3. The number of operations should therefore be estimated according to the robot mobility capabilities. In our experiments, we finally consider a single morphological operation, as this option provided the most stable results.

4.3. Positive obstacle occlusion
As shown in Fig. 7 for the second scenario, in the presence of a positive obstacle in front of the robot, the absence of points behind this positive obstacle cannot be used as a clue for the presence of negative obstacles or gaps. We test this scenario using the optimal parameter setting described in Section 4.2.


Fig. 14. (Colour online) Gap detection results within the outdoor scenario: Different gaps are coded in different colours and the white point cloud corresponds to the original point cloud. Fixed parameter set: $\delta_r = 1.1$, $n_o = 1$, $Pl_{th} = 0.1$. Variable parameter set: $I_{height} = 2^6 + 1$, $I_{width} = 2^5 + 1$, $R = 3$ m. Occasionally the gaps are not detected due to the small radius R of the sensing area.

Figure 12(a) shows the result of not detecting the absence of points (occluded by the positive obstacle) as a gap. Moreover, within the third scenario, there is one instance where two human subjects move in front of the robot, as shown in Fig. 12(b). The absence of points beyond these positive (moving) obstacles should not be labelled as gaps. As shown in Fig. 12(c), gaps are detected only when the absence of points is not caused by positive obstacle occlusion.

4.4. Comparison with earlier approaches
As explained in Section 2 and Table I, there are four earlier works [4, 7, 11, 12] that correspond to the same sensor allotment, that is, gap detection using LIDAR data. The methodology that we propose differs from these works in two perspectives, namely, the gap detection approach and the consecutive traversability analysis that can support high-level reasoning during the path planning process. Our gap detection methodology incorporates the following two key features: First, the non-uniform sampling of the robot supporting plane, as expressed by Eqs. (2)–(5), assists in tuning the laser-dependent parameter as described in Section 4.2.1. Second, we employ image contour analysis in combination with mathematical morphology, which allows the grouping of individual gaps for traversability analysis. Last but not least, we perform traversability analysis of individual gaps that allows for high-level traversability assessment, a concept that is explored here for the first time. In this direction, we apply PCA in two different feature spaces for the extraction of the corresponding traversability information for each individual gap.


Fig. 15. (Colour online) Gap detection results with a radius of 10 m within the outdoor scenario. Different gaps are coded in different colours and the white point cloud corresponds to the original point cloud. Fixed parameter set: $\delta_r = 1.1$, $n_o = 1$, $Pl_{th} = 0.1$. Variable parameter set: $I_{height} = 2^6 + 1$, $I_{width} = 2^6 + 1$, $R = 10$ m. Gaps are detected in all cases, since the sensory radius R is sufficiently large.

Fig. 16. (Colour online) Snapshots of the outdoor scenario scene.


Fig. 17. (Colour online) Scenario 1: Gap traversability analysis results using conventional PCA (interior) and NPCA (contour).

Unfortunately, earlier approaches for gap detection cannot be easily reproduced due to the limited descriptions in refs. [4, 11, 12] in terms of learning or adjusting the internal and external parameters. In contrast, we extensively discuss the parameters of the proposed algorithm in order to render the proposed methodology eligible for testing on different platforms by suitably adjusting the corresponding parameters, as described in Section 4.2.

4.5. Failure cases
In Fig. 13, we give an example of false positive detections using our gap detection methodology. Two cases are shown (respectively in two rows). For each case, two point clouds are generated at the same location. Each row shows the result of the proposed algorithm. From these results, we observe that the proposed methodology occasionally asserts the presence of a gap although there is no real gap. We have observed that this behaviour is often attributed to the susceptibility of the 3D pose estimation of the robot to errors, which results in inconsistency in the underlying supporting plane computation. This is an expected result according to the formulation of the proposed gap detection approach, which relies on a robust 3D pose estimation process. However, the confidence of the 3D pose estimation can be easily quantified, which in turn allows for weighing the certainty of the gap detection result. As far as safe robot navigation is concerned, a high false positive ratio may result in a relatively intimidating robot behaviour; however, it is guaranteed that if a gap really exists, then the algorithm will detect the corresponding gap, so that we obtain a true positive rate of 100%.


Fig. 18. (Colour online) Scenario 3: Gap traversability analysis results using conventional PCA (interior) and NPCA (contour).

4.6. Outdoor scenario evaluation
In Figs. 14 and 15, we show the results of applying the proposed methodology within the outdoor scenario, as described in Section 4.1.2 and shown in Fig. 16. It can be seen that the proposed approach is robust to changes in the viewing angle and distances. Occasionally, as has already been discussed in Section 4.5, false positives may appear. This is almost always attributed to the sparsity of acquired 3D points due to increased distance from the robotic vehicle. Through a careful look at the acquired point clouds, it can be seen that the regions where there is no laser data (and which are not due to obstacle occlusion) should be detected as gap regions. This is an expected result according to what the proposed approach is designed to perform. Therefore, from a complete view of the scene it could be derived that these correspond to false positives, but this cannot be assessed solely from a single 3D scene acquisition.

4.7. Results of traversability analysis
In Figs. 17–19, we demonstrate the gap traversability analysis results with respect to different individual gaps as detected within different point clouds. These figures also show the comparison between the two alternative methodologies that we explored for traversability analysis, namely, using either the contour direction information or the interior points of the gap in the original 3D space. As can be seen, both methodologies perform quite similarly and produce good results even if the gaps are not of elliptical or rectangular shape. With these results we aim to show the effectiveness of our proposed traversability direction evaluation.


Fig. 19. (Colour online) Scenario 3: Gap traversability analysis results using conventional PCA (interior) and NPCA (contour).

We do not show whether the gaps are traversable or not since, given the START and END poses, it is straightforward to assess the traversability of any gap.

The similar performance that we observe for the two approaches is a way in which we can evaluate the stability of the proposed gap analysis methodology. If the two approaches gave significantly different results, then this would be a strong indication of instability of the approach. As discussed in previous work [18], NPCA and Continuous PCA (CPCA) give different results, which leads to an overall highly complementary behaviour when both direction information and spatial information are used for the purpose of normalizing the rotation of 3D objects. However, in the application that is considered in this paper, namely, 2D gap contour analysis, the inverse behaviour is observed; hence one can choose to perform the analysis of the shape of a contour in either of the two feature spaces and obtain a very similar result.

4.8. Time performance
The experiments reported within this work have been performed using a system equipped with a 64-bit Intel i7 CPU and 7.8 GB of memory. The overall time cost (gap detection and traversability analysis) ranges from 7 to 30 ms for an acquired 3D scene, with an average of 15 ms.

5. Conclusions
We have proposed a novel methodology for effective and efficient detection of negative obstacles in the form of gaps,


together with a framework to analyse gap traversability by considering not only the terrain but also the mobility capabilities of a robotic vehicle. Together with the proposed gap traversability analysis approach, which allows for non-binary traversability assessments for the first time, in the present work we have reported on the various difficulties that are encountered in the perception of negative obstacles and proposed solutions to address each individual problem through extensive experiments within USAR environments and conditions that had not so far been explored.

Through our experiments, we have shown that the problem of gap detection and traversability analysis can be alleviated by suitably employing state-of-the-art signal processing techniques that allow a robotic vehicle to navigate in a safe as well as optimal mode of operation in very adverse and cluttered environments, such as those encountered in USAR scenarios. Ultimately, the proposed framework can be used to enhance the performance of robot path planning locally, as it can signify the presence of gaps and suggest the most adequate path plan according to the given robotic vehicle. In particular, the proposed gap perception and traversability analysis could be seamlessly combined with the ability of the robotic vehicle to automatically assess the traversability–mobility of a given 3D terrain, as described by Papadakis and Pirri [16], through physics-based optimization. Such an integration of functionalities would allow the UGV to primarily filter out the regions that have been deemed untraversable gaps and subsequently evaluate the 3D traversability of continuous solid areas.

Acknowledgements
This paper describes research supported by the EU-FP7 ICT 247870 NIFTi project. We would further like to thank the reviewers for their constructive feedback, both in terms of improving the quality of the manuscript as well as for stimulating and inspiring our future work.

References
1. P. Bellutta, R. Manduchi, L. Matthies, K. Owens and A. Rankin, "Terrain Perception for Demo III," Proceedings of the IEEE Intelligent Vehicles Symposium (2000).
2. BlueBotics, Mobile robot. No. PCT/EP2011/060937 (BlueBotics SA, Switzerland, Jun. 2011).
3. Intel Corporation, Open source computer vision library. Available at: http://opencv.willowgarage.com/wiki/ (Aug. 2011). (Accessed April 2013)
4. C. D. Crane III, D. G. Armstrong II, R. Touchton, T. Galluzzo, S. Solanki, J. Lee, D. Kent, M. Ahmed, R. Montane, S. Ridgeway, S. Velat, G. Garcia, M. Griffis, S. Gray, J. Washburn and G. Routson, "Team CIMAR's NaviGATOR: An unmanned ground vehicle for the 2005 DARPA Grand Challenge," J. Field Robot. 23(8), 599–623 (2006).
5. C. Dima, N. Vandapel and M. Hebert, "Classifier Fusion for Outdoor Obstacle Detection," Proceedings of the International Conference on Robotics and Automation (2004).
6. G. Dubbelman, W. van der Mark, J. C. J. van den Heuvel and F. C. A. Groen, "Obstacle Detection During Day and Night Conditions Using Stereo Vision," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2007).
7. N. Heckman, J.-F. Lalonde, N. Vandapel and M. Hebert, "Potential Negative Obstacle Detection by Occlusion Labeling," In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2007) pp. 2168–2173.
8. A. Kelly, A. Stentz, O. Amidi, M. Bode, D. Bradley, A. Diaz-Calderon, M. Happold, H. Herman, R. Mandelbaum, T. Pilarski, P. Rander, S. Thayer, N. Vallidis and R. Warner, "Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments," Int. J. Robot. Res. 25(5–6), 449–483 (2006).
9. G.-J. Kruijff, M. Janicek, S. Keshavdas, B. Larochelle, H. Zender, N. Smets, T. Mioch, M. Neerincx, J. van Diggelen, F. Colas, M. Liu, F. Pomerleau, R. Siegwart, V. Hlavac, T. Svoboda, T. Petricek, M. Reinstein, K. Zimmerman, F. Pirri, M. Gianni, P. Papadakis, A. Sinha, B. Patrick, N. Tomatis, R. Worst, T. Linder, H. Surmann and V. Tretyakov, "Experience in System Design for Human-Robot Teaming in Urban Search & Rescue," Proceedings of the International Conference on Field and Service Robotics (2012).
10. J. F. Lalonde, N. Vandapel, D. Huber and M. Hebert, "Natural terrain classification using three-dimensional ladar data for ground robot mobility," J. Field Robot. 23(10), 839–861 (2006).
11. J. Larson and M. Trivedi, "Lidar Based Off-Road Negative Obstacle Detection and Analysis," Proceedings of the IEEE International Conference on Intelligent Transportation Systems (2011).
12. J. Larson, M. Trivedi and M. Bruch, "Off-Road Terrain Traversability Analysis and Hazard Avoidance for UGVs," Technical Report (2010). Department of Electrical Engineering, University of California San Diego.
13. L. Matthies, A. Kelly, T. Litwin and G. Tharp, "Obstacle Detection for Unmanned Ground Vehicles: A Progress Report," In: Proceedings of the IEEE Intelligent Vehicles Conference (1995) pp. 66–71.
14. L. Matthies and A. Rankin, "Negative Obstacle Detection by Thermal Signature," IEEE/RSJ International Conference on Intelligent Robots and Systems (2003).
15. T. Mioch, N. J. J. M. Smets and M. A. Neerincx, "Assessing Human-Robot Performances in Complex Situations with Unit Task Tests," Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France (2012) pp. 621–626.
16. P. Papadakis and F. Pirri, "3D Mobility Learning and Regression of Articulated, Tracked Robotic Vehicles by Physics-Based Optimization," In: Virtual Reality Interaction and Physical Simulation (2012) pp. 147–156.
17. P. Papadakis, Content-Based 3D Model Retrieval Considering the User's Relevance Feedback. PhD Thesis (University of Athens, Athens, Greece, 2009).
18. P. Papadakis, I. Pratikakis, S. Perantonis and T. Theoharis, "Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation," Pattern Recognit. 40(9), 2437–2452 (2007).
19. M. Shevtsov, A. Soupikov and A. Kapustin, "Highly parallel fast KD-tree construction for interactive ray tracing of dynamic scenes," Comput. Graph. Forum 26(3), 395–404 (2007).
20. S. Suzuki and K. Abe, "Topological structural analysis of digitized binary images by border following," Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985).