Depth Camera Based Indoor Mobile Robot Localization and Navigation
Hilal Tayara
ADVANCED INTELLIGENT ROBOTICS


Page 1: Hilal Tayara


Hilal Tayara

ADVANCED INTELLIGENT ROBOTICS

Depth Camera Based Indoor Mobile Robot Localization and Navigation

Page 2: Hilal Tayara

Content
- Introduction
- FAST SAMPLING PLANE FILTERING
- LOCALIZATION
  A. Vector Map Representation and Analytic Ray Casting
  B. Projected 3D Point Cloud Observation Model
  C. Corrective Gradient Refinement for Localization
- NAVIGATION
- EXPERIMENTAL RESULTS

Page 3: Hilal Tayara

Motivation

Goal: indoor mobile robot localization and mapping.

Challenges / constraints:
- Clutter
- Volume of data: 9.2 M points/sec from the depth camera, compared to 6,800 points/sec from a 2D laser rangefinder!
- Limited computational power on the robot

Page 4: Hilal Tayara

Possible approaches

The paper introduces the Fast Sampling Plane Filtering (FSPF) algorithm, which reduces the volume of the 3D point cloud by sampling points from the depth image and classifying locally grouped sets of points as either belonging to planes in 3D or not corresponding to any plane within a specified error margin.

The paper then introduces a localization algorithm based on an observation model that down-projects the plane-filtered points onto 2D and assigns correspondences from each point to lines in the 2D map.

Page 5: Hilal Tayara

Possible approaches

The full sampled point cloud is processed for obstacle avoidance for autonomous navigation.

All algorithms process only the depth information, and do not require additional RGB data.

The FSPF, localization, and obstacle avoidance algorithms run in real time at the full camera frame rate (30 Hz) with low CPU requirements (16%).

Page 6: Hilal Tayara

FAST SAMPLING PLANE FILTERING

The problem: efficiently estimate the points P and normals R belonging to planes, given a depth image.

Depth cameras provide, per pixel, color and depth values.

The depth information is interpreted using the camera intrinsics (fh, fv, w, h). I(i, j) is the depth of the pixel at location d = (i, j).

The 3D point p = (px, py, pz) is reconstructed using the depth value I(d).
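As a sketch of this reconstruction, assuming a pinhole-style model parameterized by the fields of view fh, fv and image size w, h (the paper's exact axis convention may differ):

```python
import math

def reconstruct_point(i, j, depth, fh, fv, w, h):
    """Back-project pixel d = (i, j) with depth I(d) into a 3D point.

    Assumes a pinhole-style model parameterized by the fields of view
    (fh, fv, in radians) and image size (w, h); illustrative only.
    """
    # Depth along the optical axis.
    pz = depth
    # Horizontal offset: pixel column mapped to [-0.5, 0.5], scaled by FOV.
    px = depth * math.tan(fh / 2.0) * 2.0 * (j / (w - 1) - 0.5)
    # Vertical offset: pixel row mapped likewise.
    py = depth * math.tan(fv / 2.0) * 2.0 * (i / (h - 1) - 0.5)
    return (px, py, pz)
```

For a 640×480 Kinect image, the center pixel maps to a point straight ahead on the optical axis, and offsets grow linearly with depth.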

Page 7: Hilal Tayara

FAST SAMPLING PLANE FILTERING ALGORITHM

Page 8: Hilal Tayara

Three pixel locations are sampled: d0 at random, then d1 and d2 within a neighborhood of size η around d0.

The 3D coordinates of the corresponding points p0, p1, p2 are computed.

A search window of width w' and height h' is computed based on the mean depth (z-coordinate) of the points p0, p1, p2 and the minimum expected size S of the planes in the world.
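The window sizing can be sketched as follows, under the same assumed pinhole-style geometry (the proportional-scaling rule here is illustrative):

```python
import math

def window_size(mean_depth, plane_size, fov_h, fov_v, w, h):
    """Search-window size (pixels) covering a world plane of size
    `plane_size` at `mean_depth`, under an assumed pinhole-style model.

    The full image width w spans 2 * z * tan(fov_h / 2) meters at depth z,
    so the window is the proportional pixel count, clamped to the image.
    """
    span_h = 2.0 * mean_depth * math.tan(fov_h / 2.0)  # meters across image
    span_v = 2.0 * mean_depth * math.tan(fov_v / 2.0)
    w_px = min(w, int(math.ceil(w * plane_size / span_h)))
    h_px = min(h, int(math.ceil(h * plane_size / span_v)))
    return w_px, h_px
```

Note the window shrinks with depth: a plane of fixed world size subtends fewer pixels when seen from farther away.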

Page 9: Hilal Tayara

RANSAC is then performed in the search window.

If more than αin·l inlier points are produced as a result of running RANSAC in the search window, all the inlier points are added to the list P, and the associated normals to the list R.

Page 10: Hilal Tayara

Fast Sampling Plane Filtering

Page 11: Hilal Tayara

Fast Sampling Plane Filtering
Sample point p1, then p2 and p3 within distance η of p1

Page 12: Hilal Tayara

Fast Sampling Plane Filtering
Estimate plane parameters (normal, offset)

Page 13: Hilal Tayara

Fast Sampling Plane Filtering
Compute window size ~ world plane size at mean depth

Page 14: Hilal Tayara

Fast Sampling Plane Filtering
Sample l − 3 additional points within the window

Page 15: Hilal Tayara

Fast Sampling Plane Filtering
If the fraction of inliers > f, store all inliers + normals

Page 16: Hilal Tayara

Fast Sampling Plane Filtering
Repeat n_max times, or until the number of inlier points > k_max
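Taken together, the steps on the preceding slides can be sketched as the loop below. Parameter names (η, l, f, n_max, k_max) follow the slides; using a fixed η-pixel search window (rather than one sized from the mean depth) and a precomputed point cloud are simplifications:

```python
import numpy as np

def fspf(cloud, n_max=100, k_max=500, l=30, eta=10, f=0.8, eps=0.02, rng=None):
    """Sketch of Fast Sampling Plane Filtering.

    `cloud` is an (H, W, 3) array of back-projected 3D points, one per
    pixel. Illustrative only: the window here is a fixed eta-pixel box.
    """
    rng = rng or np.random.default_rng(0)
    H, W, _ = cloud.shape
    P, R = [], []  # plane-filtered points and their normals

    def nbr(v, hi):
        # A pixel coordinate within eta of v, clamped to the image.
        return int(np.clip(v + rng.integers(-eta, eta + 1), 0, hi - 1))

    for _ in range(n_max):
        if len(P) > k_max:
            break  # enough plane-filtered points collected
        # Sample d0 anywhere; d1, d2 within an eta-pixel neighborhood.
        i0, j0 = int(rng.integers(0, H)), int(rng.integers(0, W))
        p0 = cloud[i0, j0]
        p1 = cloud[nbr(i0, H), nbr(j0, W)]
        p2 = cloud[nbr(i0, H), nbr(j0, W)]
        # Plane hypothesis (normal) from the three sampled points.
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear or repeated) sample
        n = n / np.linalg.norm(n)
        # Sample l - 3 additional points in the window around d0.
        ii = rng.integers(max(0, i0 - eta), min(H, i0 + eta + 1), size=l - 3)
        jj = rng.integers(max(0, j0 - eta), min(W, j0 + eta + 1), size=l - 3)
        pts = cloud[ii, jj]
        # Keep the window's points if the inlier fraction exceeds f.
        dist = np.abs((pts - p0) @ n)
        inliers = pts[dist < eps]
        if len(inliers) / (l - 3) > f:
            P.extend(inliers)
            R.extend([n] * len(inliers))
    return np.array(P), np.array(R)
```

On a perfectly planar cloud, every non-degenerate sample produces a full window of inliers with a common normal.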

Page 17: Hilal Tayara

LOCALIZATION

For the task of localization, the plane-filtered point cloud P and the corresponding plane normal estimates R need to be related to the 2D map.

Map: a list of geometric features (lines); available architectural plans can be used.

The observation model has to compute the line segments likely to be observed by the robot given its current pose and the map.

This is done by an analytic ray cast step.

Page 18: Hilal Tayara

A. Vector Map Representation and Analytic Ray Casting

To compute the observation likelihoods based on observed planes, the first step is to estimate which lines on the map are likely to be observed (the “scene lines”), given the pose estimate of the robot.

This ray casting step is analytically computed using the vector map representation.

The scene lines are computed by ray casting from the location x. The algorithm calls the helper procedure TrimOcclusion(x, l1, l2, L), which accepts a location x, two lines l1 and l2, and a list of lines L, and trims l1 based on occlusions due to l2.
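The geometric core of occlusion trimming is a ray-segment intersection test. A minimal sketch (a building block only, not the paper's exact TrimOcclusion procedure):

```python
def ray_segment_intersect(origin, direction, a, b, eps=1e-12):
    """Return the ray parameter t >= 0 where the ray origin + t*direction
    crosses segment ab, or None if it misses. Useful for checking whether
    one map line occludes another as seen from the robot location x.
    """
    ox, oy = origin
    dx, dy = direction
    ax, ay = a
    bx, by = b
    sx, sy = bx - ax, by - ay          # segment direction vector
    denom = dx * sy - dy * sx
    if abs(denom) < eps:
        return None                     # ray parallel to the segment
    # Solve origin + t*direction = a + u*s for t (ray) and u (segment).
    t = ((ax - ox) * sy - (ay - oy) * sx) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    if t >= 0 and 0 <= u <= 1:
        return t
    return None
```

A line endpoint is occluded from x exactly when some other map segment intersects the ray from x to that endpoint at a smaller parameter t.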

Page 19: Hilal Tayara

TrimOcclusion(x, l1, l2, L)

Page 20: Hilal Tayara


Page 21: Hilal Tayara


Thus, given the robot pose, the set of line segments likely to be observed by the robot is computed.

Based on this list of line segments, the actual observation of the plane filtered point cloud P is related to the map using the projected 3D point cloud model, which we introduce next.

Page 22: Hilal Tayara

B. Projected 3D Point Cloud Observation Model

The 3D sets {P, R} are projected into the 2D sets {P', R'}.

Points that correspond to ground plane detections are rejected at this step.

Let the pose of the robot be given by x = {x1, x2} (location x1 and orientation x2).

The observable scene lines list L is computed using an analytic ray cast.

The observation likelihood p(y|x) (where the observation y is the 2D projected point cloud P’) is computed as follows:

Page 23: Hilal Tayara


1) For every point pi in P', a line li (li ∈ L) is found such that the ray originating from x1 in the direction of pi − x1 intersects li.
2) Points for which no such line li can be found are discarded.
3) Points pi for which the corresponding normal estimate ri differs from the normal to the line li by more than a threshold θmax are discarded.
4) The perpendicular distance di of pi from the (extended) line li is computed.
5) The total observation likelihood p(y|x) is then computed as a product of per-point likelihoods of the distances di.
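The steps can be sketched as below. For brevity, the ray-cast correspondence of step 1 is replaced by a nearest-line match, and the per-point likelihood exp(−di²/2σ²) is an assumed Gaussian form, since the slide's equation is not reproduced here:

```python
import math

def observation_likelihood(points_2d, normals_2d, lines, theta_max=0.5, sigma=0.05):
    """Sketch of the projected 3D point cloud observation model.

    `lines` is a list of (point_on_line, unit_normal) pairs. Simplified:
    correspondence is the nearest line (not a ray cast), and the per-point
    likelihood is an assumed Gaussian in the perpendicular distance.
    """
    log_like = 0.0
    for p, r in zip(points_2d, normals_2d):
        # Steps 1-2 (simplified): correspondence = nearest line by distance.
        q, n = min(
            lines,
            key=lambda ln: abs((p[0] - ln[0][0]) * ln[1][0]
                               + (p[1] - ln[0][1]) * ln[1][1]),
        )
        # Step 3: discard points whose normal estimate disagrees with li.
        dot = max(-1.0, min(1.0, r[0] * n[0] + r[1] * n[1]))
        if math.acos(abs(dot)) > theta_max:
            continue
        # Step 4: perpendicular distance d_i from the extended line.
        d = abs((p[0] - q[0]) * n[0] + (p[1] - q[1]) * n[1])
        # Step 5 (assumed form): product of Gaussian point likelihoods.
        log_like += -d * d / (2.0 * sigma * sigma)
    return math.exp(log_like)
```

Points lying exactly on their matched line contribute a factor of 1; the likelihood decays as points drift off the map lines.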

Page 24: Hilal Tayara

Localization: MCL - CGR

Monte Carlo Localization with Corrective Gradient Refinement (CGR).

Key idea: use the observation likelihood to refine the proposal distribution (rotation + translation).

For CGR we need to compute both the observation likelihood and its gradients. The observation likelihood is computed using the previous equation, and the corresponding gradients follow from it.
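The refinement idea can be sketched as below, using numerical gradients in place of the analytic gradients the paper derives (illustrative only):

```python
def refine_pose(pose, log_likelihood, steps=10, lr=0.01, h=1e-4):
    """Gradient-refinement sketch in the spirit of CGR: nudge a pose
    (x, y, theta) uphill on the observation log-likelihood.

    Uses central-difference numerical gradients as a stand-in for the
    analytic gradients of the observation model.
    """
    pose = list(pose)
    for _ in range(steps):
        grad = []
        for k in range(3):
            hi = pose.copy()
            hi[k] += h
            lo = pose.copy()
            lo[k] -= h
            # Central difference along coordinate k.
            grad.append((log_likelihood(hi) - log_likelihood(lo)) / (2 * h))
        # Ascend the log-likelihood surface.
        pose = [p + lr * g for p, g in zip(pose, grad)]
    return tuple(pose)
```

Each particle's pose is pulled toward a local maximum of the observation likelihood before resampling, which is what lets CGR refine the proposal distribution.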

Page 25: Hilal Tayara

NAVIGATION

For the robot to navigate autonomously, it needs to be able to successfully avoid obstacles in its environment. This is done by computing the open path lengths available to the robot in different angular directions.

Obstacle checks are performed using the 3D points from the sets P and O. Let r be the robot radius, θd the desired direction of travel, and âθ the unit vector in the direction θ. The open path length d(θ) and the chosen obstacle avoidance direction θ* are calculated from these quantities.
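A sketch of the open-path computation under an assumed formulation: the open path length for a heading is the distance to the nearest obstacle point inside a corridor of half-width r, and the heading maximizing d(θ)·cos(θ − θd) is chosen. The corridor model and scoring rule are assumptions, not the slide's exact equations:

```python
import math

def choose_direction(obstacles, r, theta_d, d_max=4.0, n_candidates=64):
    """Pick an obstacle-avoidance heading from 2D obstacle points.

    Assumed formulation: d(theta) is the free distance along heading theta
    within a corridor of half-width r (the robot radius); the chosen
    heading maximizes d(theta) * cos(theta - theta_d).
    """
    def open_path(theta):
        ax, ay = math.cos(theta), math.sin(theta)  # unit vector a_theta
        d = d_max
        for ox, oy in obstacles:
            along = ox * ax + oy * ay              # distance along heading
            across = abs(oy * ax - ox * ay)        # offset from corridor axis
            if along > 0 and across < r:
                d = min(d, max(0.0, along - r))    # stop short of the point
        return d

    thetas = [2 * math.pi * k / n_candidates for k in range(n_candidates)]
    return max(thetas, key=lambda t: open_path(t) * math.cos(t - theta_d))
```

With no obstacles the desired direction wins outright; an obstacle straight ahead makes the robot swerve to the nearest heading whose corridor is clear.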

Page 26: Hilal Tayara

EXPERIMENTAL RESULTS

The omnidirectional indoor mobile robot CoBot, equipped with:
- Kinect (640×480, 30 Hz)
- Hokuyo URG-04LX 2D laser rangefinder as a reference

All trials were run single-threaded, on a single core of an Intel Core i7 950 processor.

Results are compared with:
- Kinect-Raster (KR) approach
- Kinect-Sampling (KS) approach
- Kinect-Complete (KC) approach

Page 27: Hilal Tayara


Page 28: Hilal Tayara


Page 29: Hilal Tayara


Thank You