
Perception IV: Place Recognition, Line Extraction

Autonomous Mobile Robots

Davide Scaramuzza – University of Zurich

Margarita Chli, Paul Furgale, Marco Hutter, Roland Siegwart


Outline of Today's lecture

Place recognition using Vocabulary Tree

Line extraction from images

Line extraction from laser data


K-means clustering - Review

Minimizes the sum of squared Euclidean distances between points 𝒙𝑖 and their nearest cluster centers 𝒎𝑘

Algorithm:

Randomly initialize K cluster centers

Iterate until convergence:

Assign each data point to the nearest center

Recompute each cluster center as the mean of all points assigned to it

𝐷(𝑋, 𝑀) = Σ𝑘 Σ(𝒙𝑖 ∈ cluster 𝑘) (𝒙𝑖 − 𝒎𝑘)²
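To make the loop concrete, here is a minimal NumPy sketch of k-means; the function name, initialization scheme, and convergence test are illustrative choices, not prescribed by the slides:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means: alternate assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    # Randomly pick k data points as the initial cluster centers
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: index of the nearest center for each point
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned points
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):   # centers stopped moving
            break
        centers = new_centers
    return centers, labels
```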


K-means clustering - Demo

Source: http://shabal.in/visuals/kmeans/1.html


Feature-based object recognition - Review

Q: Is this Book present in the Scene?

Look for corresponding matches

Most of the Book's keypoints are present in the Scene

A: The Book is present in the Scene


Taking this a step further…

Find an object in an image

Find an object in multiple images

Find multiple objects in multiple images

As the number of images increases, feature-based object recognition becomes computationally infeasible.


Fast visual search

“Scalable Recognition with a Vocabulary Tree”, Nistér and Stewénius, CVPR 2006.

“Video Google”, Sivic and Zisserman, ICCV 2003

Query in a database of 110 million images in 5.8 seconds


How much is 110 million images?


Bag of Words

Extension to scene/place recognition:

Is this image in my database?

Robot: Have I been to this place before?

Use analogies from text retrieval:

Visual Words

Vocabulary of Visual Words

“Bag of Words” (BOW) approach


Indexing local features

With potentially thousands of features per image, and hundreds to millions of images to search, how do we efficiently find those that are relevant to a new image?

Quantize/cluster the descriptors into 'visual words'

Inverted file indexing schemes


Indexing local features: inverted file text

For text documents, an efficient way to find all pages on which a word occurs is to use an index

We want to find all images in which a feature occurs

To use this idea, we'll need to map our features to 'visual words'


Building the Visual Vocabulary

[Figure: pipeline – Image Collection → Extract Features → Cluster Descriptors in descriptor space, with examples of the resulting visual words]


Video Google

“Video Google” [J. Sivic and A. Zisserman, ICCV 2003]

Demo: http://www.robots.ox.ac.uk/~vgg/research/vgoogle/

These features map to the same visual word


Video Google System

1. Collect all words within query region

2. Inverted file index to find relevant frames

3. Compare word counts

4. Spatial verification

Sivic & Zisserman, ICCV 2003

• Demo online at: http://www.robots.ox.ac.uk/~vgg/research/vgoogle

[Figure: query region and retrieved frames]


Efficient Place/Object Recognition

We can describe a scene as a collection of words and look up in the database for images with a similar collection of words

What if we need to find an object/scene in a database of millions of images?

Build Vocabulary Tree via hierarchical clustering

Use the Inverted File system for efficient indexing

[Nistér and Stewénius, CVPR 2006]
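As a sketch of how such a vocabulary tree can be built and queried, the following hypothetical code clusters descriptors recursively with k-means (reusing the kmeans sketch above) into a tree with branching factor k and depth L, and quantizes a descriptor by descending the tree; all names and defaults are illustrative:

```python
import numpy as np

class VocabNode:
    """One node of the vocabulary tree: a cluster center plus child nodes."""
    def __init__(self, center):
        self.center = center
        self.children = []      # empty for leaves (the visual words)

def build_vocab_tree(descriptors, k=10, depth=4, node=None):
    """Recursively k-means-cluster descriptors into a k-ary tree of given depth."""
    if node is None:
        node = VocabNode(descriptors.mean(axis=0))
    if depth == 0 or len(descriptors) < k:
        return node
    centers, labels = kmeans(descriptors, k)          # from the earlier sketch
    for j in range(k):
        child = VocabNode(centers[j])
        node.children.append(child)
        build_vocab_tree(descriptors[labels == j], k, depth - 1, child)
    return node

def quantize(node, d):
    """Descend the tree, following the nearest child center at each level."""
    while node.children:
        node = min(node.children,
                   key=lambda c: np.linalg.norm(d - c.center))
    return node   # the reached leaf is the visual word for descriptor d
```

With branching factor k and depth L the tree holds k^L leaves, but quantizing a descriptor costs only k·L distance computations instead of k^L, which is what makes very large vocabularies practical.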


Recognition with K-tree – Populate the descriptor space


Building the inverted file index


Inverted File index

[Figure: a query image Q, its visual words, the inverted file DB, and a voting array over database images accumulating +1 votes]

The Inverted File DB lists all possible visual words

Each word points to the list of images in which it occurs

Voting array: has as many cells as there are images in the DB – each word in the query image votes for the images it appears in
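A minimal sketch of this voting scheme, assuming each image has already been reduced to a set of visual-word IDs (e.g. by quantizing its descriptors with the tree above); function names are illustrative:

```python
from collections import defaultdict

def build_inverted_file(db_words):
    """db_words: dict mapping image_id -> iterable of visual-word ids.
    Returns: visual-word id -> list of image ids containing that word."""
    inverted = defaultdict(list)
    for image_id, words in db_words.items():
        for w in set(words):
            inverted[w].append(image_id)
    return inverted

def vote(inverted, query_words):
    """Every word in the query casts one vote per database image it appears in."""
    votes = defaultdict(int)                 # the "voting array"
    for w in query_words:
        for image_id in inverted.get(w, []):
            votes[image_id] += 1
    # Database images sorted by vote count, best match first
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
```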


FABMAP [Cummins and Newman IJRR 2011]

Place recognition for robot localization

Use training images to build the BoW database

Probabilistic model of the world

At a new frame, compute:

P(being at a known place)

P(being at a new place)

Captures the dependencies of words to distinguish the most characteristic structure of each scene (using the Chow-Liu tree)

Binaries available online


FABMAP example

robots.ox.ac.uk/~mjc/appearance_based_results.htm

p = probability of images coming from the same place


Robust object/scene recognition

Visual Vocabulary holds appearance information but discards the spatial relationships between features

Two images with the same features shuffled around in the image will be a 100% match when using only appearance information

If different arrangements of the same features are expected, then one might use geometric verification:

Test the k most similar images to the query image for geometric consistency (e.g. using RANSAC)

Further reading (out of the scope of this course):

[Cummins and Newman, IJRR 2011]

[Stewénius et al, ECCV 2012]


Outline of Today's lecture

Place recognition using Vocabulary Tree

Line extraction from images

Line extraction from laser data


Line extraction

Suppose that you have been commissioned to implement lane detection for a car driver-assistance system. How would you proceed?


Line extraction

How do we extract lines from edges?


Two popular line extraction algorithms

Hough transform (also used to detect circles, ellipses, and other parametric shapes)

RANSAC (Random Sample Consensus)


Hough-Transform

Finds lines from a binary edge image using a voting procedure

The voting space (or accumulator) is called Hough space

1. P. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy Accelerators and Instrumentation, 1959

2. R. O. Duda and P. E. Hart (April 1971). "Use of the Hough Transformation to Detect Lines and Curves in Pictures". Artificial Intelligence Center (SRI International)


Let 𝑥0, 𝑦0 be an image point

We can represent all the lines passing through it by 𝑦0 = 𝑚𝑥0 + 𝑏

The Hough transform works by parameterizing this expression in terms of 𝑚 and 𝑏: 𝑏 = −𝑥0𝑚 + 𝑦0

This is represented by a line in the Hough space

Every point (𝑥0, 𝑦0) thus votes for a line 𝑏 = −𝑥0𝑚 + 𝑦0 in the Hough space


A second point (𝑥1, 𝑦1) similarly votes for the line 𝑏 = −𝑥1𝑚 + 𝑦1 in the Hough space


How do we determine the line (𝑏∗, 𝑚∗) that contains both 𝑥0, 𝑦0 and 𝑥1, 𝑦1 ?

It is the intersection of the lines 𝑏 = −𝑥0𝑚 + 𝑦0 and 𝑏 = −𝑥1𝑚 + 𝑦1



Each point in image space votes for line-parameters in the Hough parameter space




Problems with the (𝑚, 𝑏) space:

Unbounded parameter domain

𝑚, 𝑏 can assume any value in [−∞, +∞]

Alternative line representation: polar representation

𝜌 = 𝑥 cos 𝜃 + 𝑦 sin 𝜃


Each point in image space maps to a sinusoid 𝜌 = 𝑥 cos 𝜃 + 𝑦 sin 𝜃 in the parameter space (𝜌, 𝜃)

[Figure: points in the image plane (𝑥, 𝑦) and the corresponding sinusoids in the Hough space, with 𝜃 from 0 to 180 and 𝜌 from 0 to 100]



Hough-Transform - Algorithm

1. Initialize: set all accumulator cells to zero

2. for each edge point (x,y) in the image

for all θ in [0 : step : 180]

Compute 𝜌 = 𝑥 cos 𝜃 + 𝑦 sin 𝜃

𝐻(𝜃, 𝜌) = 𝐻(𝜃, 𝜌) + 1

end

end

3. Find the values of (𝜃, 𝜌) where 𝐻(𝜃, 𝜌) is a local maximum

4. The detected line in the image is given by: 𝜌 = 𝑥 cos 𝜃 + 𝑦 sin 𝜃

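A direct, unoptimized NumPy sketch of this algorithm; the function name and the discretization of the accumulator are illustrative choices:

```python
import numpy as np

def hough_lines(edge_img, theta_step_deg=1, rho_res=1):
    """Accumulate votes in (theta, rho) space for a binary edge image."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(0, 180, theta_step_deg))
    rho_max = int(np.hypot(h, w))              # largest possible |rho|
    n_rho = int(2 * rho_max / rho_res) + 1     # rho ranges over [-rho_max, rho_max]
    H = np.zeros((len(thetas), n_rho), dtype=int)
    ys, xs = np.nonzero(edge_img)              # coordinates of edge pixels
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            r_idx = int(round((rho + rho_max) / rho_res))
            H[t_idx, r_idx] += 1               # one vote per (theta, rho) cell
    return H, thetas, rho_max

# Detected lines are the local maxima of H; each maximum (theta, rho) gives
# the image line rho = x*cos(theta) + y*sin(theta).
```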


Examples


[Figure: Hough accumulator of an example image, 𝜃 on the horizontal axis and 𝜌 bins on the vertical axis; bright peaks correspond to the detected lines]

Notice, however, that the Hough transform only finds the parameters of the line, not its endpoints. How would you find them?



Problems

Effects of noise: peaks get fuzzy and hard to locate

How to overcome this?

Increase the bin size (i.e. reduce the resolution of the Hough space); however, this reduces the accuracy of the line parameters

Convolve the output with a box filter; why? Smoothing pools nearby votes, so a fuzzy cluster of votes becomes a single, easier-to-locate peak


RANSAC (RAndom SAmple Consensus)

It has become the standard method for model fitting in the presence of outliers (very noisy points or wrong data)

It can be applied to line fitting, but also to thousands of different problems where the goal is to estimate the parameters of a model from noisy data (e.g., camera calibration, structure from motion, DLT, homography, etc.)

Let's now focus on line extraction

M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.


RANSAC


• Select sample of 2 points at random

• Calculate model parameters that fit the data in the sample

• Calculate error function for each data point

• Select data that supports current hypothesis

• Repeat sampling

The set with the maximum number of inliers obtained after k iterations is kept


RANSAC

How many iterations does RANSAC need?

• Ideally: check all possible combinations of 2 points in a dataset of N points.

• Number of all pairwise combinations: N(N−1)/2 → computationally infeasible if N is too large. Example: 10,000 edge points would require checking 10,000 · 9,999 / 2 ≈ 50 million combinations!

• Do we really need to check all combinations, or can we stop after some iterations? Checking a subset of combinations is enough if we have a rough estimate of the percentage of inliers in our dataset

• This can be done in a probabilistic way


How many iterations does RANSAC need?

• w := fraction of inliers = (number of inliers) / N, where N := total number of data points, so w = P(selecting an inlier point out of the dataset)

• Let p := P(selecting a set of points free of outliers)

• Assumption: the 2 points necessary to estimate a line are selected independently

w² = P(both selected points are inliers)

1 − w² = P(at least one of these two points is an outlier)

• Let k := number of RANSAC iterations executed so far

(1 − w²)ᵏ = P(RANSAC never selects two points that are both inliers)

• 1 − p = (1 − w²)ᵏ, and therefore:

𝑘 = log(1 − 𝑝) / log(1 − 𝑤²)


How many iterations does RANSAC need?

The number of iterations is 𝑘 = log(1 − 𝑝) / log(1 − 𝑤²)

Knowing the fraction of inliers w, after k RANSAC iterations we will have a probability p of finding a set of points free of outliers

Example: if we want a probability of success p = 99% and we know that w = 50%, then k = 16 iterations – dramatically fewer than the number of all possible combinations!

Notice that the number of points does not influence the estimated number of iterations; only w does!

In practice we need only a rough estimate of w.

More advanced variants of RANSAC estimate the fraction of inliers and adaptively update it on every iteration
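The formula is easy to evaluate; a small sketch reproducing the slide's example (function name and the sample-size parameter s are illustrative):

```python
import math

def ransac_iterations(p, w, s=2):
    """k = log(1-p) / log(1-w^s): iterations needed so that, with probability p,
    at least one sample of s points is outlier-free (inlier fraction w)."""
    return math.log(1 - p) / math.log(1 - w ** s)

# Example from the slide: success probability p = 99%, inlier fraction w = 50%
print(ransac_iterations(0.99, 0.5))   # ~16 iterations
```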


RANSAC - Algorithm

Let A be a set of N points

1. repeat

2. Randomly select a sample of 2 points from A

3. Fit a line through the 2 points

4. Compute the distances of all other points to this line

5. Construct the inlier set (i.e. count the number of points whose distance < d)

6. Store these inliers

7. until maximum number of iterations k reached

8. The set with the maximum number of inliers is chosen as a solution to the problem

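A compact sketch of this loop for 2D line fitting; the threshold d and iteration count k follow the pseudocode above, while the implicit line representation and all names are illustrative choices:

```python
import numpy as np

def ransac_line(points, k=100, d=1.0, seed=0):
    """Fit a line to an (N, 2) array of points, tolerating outliers.
    Returns the inlier mask and (a, b, c) with a*x + b*y + c = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = None, None
    for _ in range(k):                          # step 7: k iterations
        i, j = rng.choice(len(points), size=2, replace=False)
        p1, p2 = points[i], points[j]           # step 2: random sample of 2
        # Step 3: line through p1, p2 in implicit form a*x + b*y + c = 0
        a, b = p2[1] - p1[1], p1[0] - p2[0]
        norm = np.hypot(a, b)
        if norm == 0:                           # degenerate sample, skip
            continue
        a, b = a / norm, b / norm
        c = -(a * p1[0] + b * p1[1])
        # Step 4: distances of all points; step 5: inliers within d
        dist = np.abs(points @ np.array([a, b]) + c)
        inliers = dist < d
        # Steps 6/8: remember the hypothesis with the most inliers
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_line = inliers, (a, b, c)
    return best_inliers, best_line
```

In practice one usually refits the line to all inliers of the winning hypothesis with least squares as a final polish.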


RANSAC - Applications

RANSAC = RANdom SAmple Consensus.

A generic & robust algorithm for fitting models in the presence of outliers (i.e. points which do not satisfy a model)

Can be applied in general to any problem where the goal is to identify the inliers which satisfy a predefined model.

Typical applications in robotics are: line extraction from 2D range data, plane extraction from 3D data, feature matching, structure from motion, camera calibration, homography estimation, etc.

RANSAC is iterative and non-deterministic: the probability of finding a set free of outliers increases as more iterations are used

Drawback: being non-deterministic, the method produces different results between runs.


Outline of Today's lecture

Place recognition using Vocabulary Tree

Line extraction from images

Line extraction from laser data

• RANSAC (same as for line detection from images)

• Hough (same as for line detection from images)

• Split-and-Merge (only laser scans: uses sequential order of scan data)

• Line regression (only laser scans: uses sequential order of scan data)


Algorithm 1: Split-and-Merge (standard)

Popular algorithm, originating from Computer Vision.

A recursive procedure of fitting and splitting.

A slightly different version, called Iterative end-point-fit, simply connects the end points for line fitting.

Let S be the set of all data points

Split

• Fit a line to points in current set S

• Find the most distant point to the line

• If distance > threshold, split the set & repeat with the left and right point sets

Merge

• If two consecutive segments are collinear enough, obtain the common line and find the most distant point

• If distance <= threshold, merge both segments
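A recursive sketch of the split step using the iterative end-point-fit variant (the line is simply taken through the segment's end points, assumed distinct); the merge pass and all names are illustrative:

```python
import numpy as np

def split(points, threshold):
    """Recursively split an ordered set of scan points wherever the most
    distant point lies farther than `threshold` from the end-point line."""
    p1, p2 = points[0], points[-1]
    dx, dy = p2 - p1                    # end-point fit: line through the ends
    # Perpendicular distance of every point to the line through p1 and p2
    d = np.abs(dy * (points[:, 0] - p1[0])
               - dx * (points[:, 1] - p1[1])) / np.hypot(dx, dy)
    i = int(np.argmax(d))               # most distant point
    if d[i] > threshold and 0 < i < len(points) - 1:
        # Split at the most distant point and recurse on both halves
        return split(points[:i + 1], threshold) + split(points[i:], threshold)
    return [(p1, p2)]                   # accept this segment as one line

# Usage sketch: scan is an (N, 2) array of range points in scan order
# segments = split(scan, threshold=0.05)
# A merge pass would then re-join consecutive segments that are collinear enough.
```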


Algorithm 1: Split-and-Merge (iterative end-point-fit)

Iterative end-point-fit: simply connects the end points for line fitting

[Animation: repeated Split steps until no more splits are needed, then a Merge pass]



Algorithm 2: Line-Regression

“Sliding window” of size Nf points

Fit line-segment to all points in each window

Then adjacent segments are merged if their line parameters are close

[Figure: line regression with sliding window size Nf = 3]

• Initialize sliding window size Nf

• Fit a line to every Nf consecutive points (i.e. in each window)

• Merge overlapping line segments + recompute line parameters for each segment
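A sketch of this procedure, with a total-least-squares fit per window and a simple closeness test on the polar line parameters (𝜃, 𝜌); thresholds and names are illustrative, and angle wrap-around is ignored for brevity:

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares fit; returns polar line parameters (theta, rho)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                        # direction of least variance
    rho = centroid @ normal
    if rho < 0:                            # normalize so that rho >= 0
        normal, rho = -normal, -rho
    return np.arctan2(normal[1], normal[0]), rho

def line_regression(scan, nf=3, angle_tol=0.1, dist_tol=0.05):
    """Fit a line to every window of nf consecutive scan points, then merge
    adjacent windows whose line parameters are close."""
    windows = [fit_line(scan[i:i + nf]) for i in range(len(scan) - nf + 1)]
    segments = [[0]]                       # groups of merged window indices
    for i in range(1, len(windows)):
        t_prev, r_prev = windows[segments[-1][-1]]
        t_cur, r_cur = windows[i]
        if abs(t_cur - t_prev) < angle_tol and abs(r_cur - r_prev) < dist_tol:
            segments[-1].append(i)         # parameters close: same segment
        else:
            segments.append([i])           # otherwise start a new segment
    # Recompute each segment's line from all of the points it covers
    return [fit_line(scan[seg[0]:seg[-1] + nf]) for seg in segments]
```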
