
Page 1:

Particle Filtering

TAs: Matthew Rockett, Gilwoo Lee, Matt Schmittle

Sanjiban Choudhury

(Slides adapted from Drew Bagnell, Dieter Fox)

Page 2:

Question: What makes a good proposal distribution?

[Figure: a target distribution p(x) and a proposal distribution q(x)]

Page 3–5:

Applying importance sampling to Bayes filtering

Target distribution (the posterior):

$$bel(x_t) = \eta\, P(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$

Proposal distribution (the belief after applying the motion model):

$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$

Why is this easy to sample from?

Importance ratio:

$$r = \frac{bel(x_t)}{\overline{bel}(x_t)} = \eta\, P(z_t \mid x_t)$$
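To make the importance ratio concrete, here is a minimal self-contained sketch (not from the slides; the Gaussian target/proposal pair is an assumption for illustration). It estimates an expectation under p(x) using samples drawn from q(x); the self-normalization of the weights plays the role of the unknown normalizer η.

```python
import numpy as np

rng = np.random.default_rng(0)

def p(x):
    # Target density: standard normal (stands in for the posterior).
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def q(x):
    # Proposal density: a wider normal, easy to sample from.
    return np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 2.0, size=100_000)  # samples from the proposal q
w = p(x) / q(x)                         # importance ratio r = p/q
w /= w.sum()                            # self-normalize (absorbs eta)
print(np.sum(w * x**2))                 # E_p[x^2], approximately 1.0
```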

Page 6:

Question: What are our options for non-parametric belief representations?

1. Histogram filter

2. Normalized importance sampling

3. Particle filter

Page 7:

Approach 2: Normalized Importance Sampling

Start from the weighted particle set

$$bel(x_{t-1}) = \left\{ \langle x^1_{t-1}, w^1_{t-1} \rangle, \langle x^2_{t-1}, w^2_{t-1} \rangle, \cdots, \langle x^M_{t-1}, w^M_{t-1} \rangle \right\}$$

(example weights: 0.5, 0.25, 0.25)

1. For i = 1 to M: sample $x^i_t \sim P(x_t \mid u_t, x^i_{t-1})$ (weights unchanged: 0.5, 0.25, 0.25)

2. For i = 1 to M: $w^i_t = P(z_t \mid x^i_t)\, w^i_{t-1}$ (example: 0.5 × 0.02 = 0.01, 0.25 × 0.1 = 0.025, 0.25 × 0.05 = 0.0125)

3. For i = 1 to M: $w^i_t = w^i_t / \sum_i w^i_t$ (normalized: 0.21, 0.53, 0.26)

$$bel(x_t) = \left\{ \langle x^1_t, w^1_t \rangle, \cdots, \langle x^M_t, w^M_t \rangle \right\}$$
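A minimal numpy sketch of one such update, assuming hypothetical callables `motion_sample(x, u)` for sampling P(x_t | u_t, x_{t-1}) and `likelihood(z, x)` for evaluating P(z_t | x_t); the last two lines reproduce the slide's worked example.

```python
import numpy as np

def nis_step(particles, weights, u_t, z_t, motion_sample, likelihood):
    """One normalized-importance-sampling step (no resampling)."""
    # 1. Sample each particle through the motion model (the proposal).
    particles = np.array([motion_sample(x, u_t) for x in particles])
    # 2. Reweigh by the observation likelihood (the importance ratio).
    weights = weights * np.array([likelihood(z_t, x) for x in particles])
    # 3. Normalize so the weights sum to 1.
    return particles, weights / weights.sum()

# The slide's example: weights (0.5, 0.25, 0.25), likelihoods (0.02, 0.1, 0.05)
w = np.array([0.5, 0.25, 0.25]) * np.array([0.02, 0.1, 0.05])
print(w / w.sum())  # -> approximately [0.21, 0.53, 0.26]
```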

Page 8–9:

Problem: What happens after enough iterations?

Particles don't move: the fixed sample set can get stuck in regions of low probability.

This is the same complaint we had about histogram filters!

[Figure: three snapshots of the fixed particle set drifting away from the true posterior]

Page 10:

Key Idea: Resample!

Why? Get rid of bad particles

Page 11:

Approach 3: Particle Filtering

Start from the weighted particle set

$$bel(x_{t-1}) = \left\{ \langle x^1_{t-1}, w^1_{t-1} \rangle, \langle x^2_{t-1}, w^2_{t-1} \rangle, \cdots, \langle x^M_{t-1}, w^M_{t-1} \rangle \right\}$$

(example weights: 0.5, 0.25, 0.25)

1. For i = 1 to M: sample $x^i_t \sim P(x_t \mid u_t, x^i_{t-1})$

2. For i = 1 to M: $w^i_t = P(z_t \mid x^i_t)\, w^i_{t-1}$ (example: 0.5 × 0.02 = 0.01, 0.25 × 0.1 = 0.025, 0.25 × 0.05 = 0.0125)

3. For i = 1 to M: resample $x^i_t \sim w^i_t$, then set all weights to 1/M

$$bel(x_t) = \left\{ \langle x^1_t, 1/M \rangle, \cdots, \langle x^M_t, 1/M \rangle \right\}$$
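The same sketch extended with the resampling step; `motion_sample` and `likelihood` are the same hypothetical stand-ins as above. Because the previous resampling left all weights at 1/M, the factor w^i_{t-1} is constant and cancels in the normalization.

```python
import numpy as np

def pf_step(particles, u_t, z_t, motion_sample, likelihood,
            rng=np.random.default_rng()):
    """One particle-filter step: sample, weigh, resample."""
    M = len(particles)
    # 1. Sample from the motion model (the proposal).
    particles = np.array([motion_sample(x, u_t) for x in particles])
    # 2. Weigh by the observation likelihood (incoming weights are a
    #    uniform 1/M, so they cancel when we normalize).
    w = np.array([likelihood(z_t, x) for x in particles])
    w = w / w.sum()
    # 3. Resample in proportion to the weights; outgoing weights are
    #    implicitly reset to 1/M.
    return particles[rng.choice(M, size=M, p=w)]
```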

Page 12–17:

Virtues of resampling

[Figure, built up over six slides: a particle set is propagated through the motion update $u_t$, weighted by the measurement $z_t$, resampled, propagated through $u_{t+1}$, weighted by $z_{t+1}$, and resampled again]

Page 18:

Why use particle filters?

1. Can answer any query

2. Will work for any distribution, including multi-modal ones (unlike the Kalman filter)

3. Scale well with computational resources (embarrassingly parallel)

4. Easy to implement!


Page 27:

Non-parametric Filters (same fundamental Bayes rule again and again …)

Histogram Filter: grid up the state space
Normalized Importance Sampling: use a fixed set of samples
Particle Filter: resample

Page 28:

Are we done?

No! Lots of practical problems to deal with.

Page 29:

Problem 1: Two room challenge

Given: particles equally distributed between two rooms, no motion, no observation.

What happens?

[Figure 4: (a) two rooms with equal probability; (b) the same filter, after repeated resampling. A problem with assuming the independence of the observations and using naïve sampling: without gaining any new information, the particle filter converges on one of the rooms.]

Both rooms produce the same observations; however, over the course of 100 timesteps, the filter has become certain that it is in the right-hand room.

What has happened? The filter is non-deterministic, so when we pick among particles with equal weight, it is unlikely that we will select equal numbers from each room. As time progresses, this selection process only becomes more imbalanced: with observations equally likely, if a state has very few particles in its vicinity, then in expectation it will have very few particles after resampling. Further, once a region of the state space loses its particles, there is no way to regain them without motion.

[Figure 5: low-variance resampling; in practice, this approach is preferred over the naïve approach.]

Fixes: There are a number of ways to prevent situations like this from happening in practice.

1. One way is to detect situations in which you do not have enough new information to resample, and only run the resampling when your observation model has sufficiently new observation data to process. One statistic that might be useful for evaluating this is max({w_i}) / min({w_i}); another is the entropy of the weights.

2. One easy fix is to run a more sophisticated resampling method, the low-variance resampler. In this approach, which is depicted in Fig. 5, a single number r is drawn, and then particles are selected at every 1/N units on the dartboard in our example. Effectively, the same sampler is used, but the random numbers generated are r + i/N for all (positive and negative) i such that r + i/N ∈ [0, 1]. This has desirable properties: for instance, every particle with normalized weight over 1/N is guaranteed to be selected at least once. Due to these properties and its ease of implementation, it is always recommended that you implement this sampler rather than the naïve approach. Note that this is also known as the "Stochastic Universal Sampler" in the genetic algorithms domain.

3. One easy, but undesirable, approach is to just inject uniform samples. This should be treated as a last resort, as it is unprincipled.

Page 30:

Problem 1: Two room challenge

Given: particles equally distributed between two rooms, no motion, no observation.

What happens?

All particles migrate into one of the rooms!

Page 31:

Reason: Resampling increases variance

Resampling collapses particles and reduces diversity, increasing variance with respect to the true posterior.

[Figure: a particle set collapsing relative to the true posterior over successive resampling steps]

Page 32–36:

Fix 1: Choose when to resample

Key idea: if the variance of the weights is low, don't resample.

We can implement this condition in various ways (a sketch follows this list):

1. All weights are equal - don't resample

2. Entropy of weights high - don't resample

3. Ratio of max to min weights low - don't resample
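A hedged sketch of such a gate, implementing the three checks above; the threshold values are illustrative defaults, not numbers from the lecture.

```python
import numpy as np

def should_resample(weights, entropy_frac=0.9, max_min_ratio=20.0):
    """Return False when the weight variance is low (near-uniform
    weights), per the three conditions on the slide."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    M = len(w)
    if np.allclose(w, 1.0 / M):                  # 1. all weights equal
        return False
    entropy = -np.sum(w * np.log(w + 1e-300))    # 2. entropy still high
    if entropy > entropy_frac * np.log(M):       #    (max entropy = log M)
        return False
    if w.max() / max(w.min(), 1e-300) < max_min_ratio:  # 3. max/min low
        return False
    return True
```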

Page 37:

Fix 2: Low variance sampling

[Figure: a resampling wheel with sectors w_1, w_2, w_3, …, w_{n-1}, w_n; a single spinner for roulette-wheel sampling vs. n evenly spaced spinners for systematic resampling]

• Roulette wheel: one binary search per draw, O(n log n)
• Stochastic universal sampling / systematic resampling: linear time complexity, easy to implement, low variance

Resampling Algorithm

1.  Algorithm systematic_resampling(S, n):
2.      S' = ∅, c_1 = w_1
3.      For i = 2 … n              (generate CDF)
4.          c_i = c_{i-1} + w_i
5.      u_1 ~ U[0, 1/n], i = 1     (initialize threshold)
6.      For j = 1 … n              (draw samples …)
7.          While u_j > c_i        (skip until next threshold reached)
8.              i = i + 1
9.          S' = S' ∪ {⟨x_i, 1/n⟩} (insert)
10.         u_{j+1} = u_j + 1/n    (increment threshold)
11. Return S'

Also called stochastic universal sampling.
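A direct Python transcription of the pseudocode above (a sketch; the comments refer to the numbered lines of the algorithm listing):

```python
import numpy as np

def systematic_resampling(particles, weights, rng=np.random.default_rng()):
    """Low-variance / systematic resampling. Assumes weights sum to 1."""
    n = len(weights)
    c = np.cumsum(weights)           # lines 2-4: generate the CDF
    u = rng.uniform(0.0, 1.0 / n)    # line 5: the single random draw u_1
    i = 0
    out = []
    for _ in range(n):               # line 6: draw n samples
        while u > c[i]:              # lines 7-8: skip until threshold
            i += 1
        out.append(particles[i])     # line 9: insert with weight 1/n
        u += 1.0 / n                 # line 10: increment threshold
    return np.array(out), np.full(n, 1.0 / n)
```

This also previews the questions on Page 39: with all weights equal to 1/n, every particle is selected exactly once, and with one weight of 0.5 plus many tiny weights, that particle is selected roughly n/2 times, deterministically rather than with the high variance of independent roulette-wheel draws.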

Page 38:

Fix 2: Low variance sampling

(Same resampling wheel and systematic-resampling algorithm as the previous slide.)

Assumption: the weights sum to 1.

Page 39:

Why does this work?

(Same resampling wheel and algorithm as the previous slides.)

1. What happens when all weights are equal?

2. What happens if you have ONE large weight and many tiny weights?

w1 = 0.5, w2 = 0.5/1000, w3 = 0.5/1000, …, w1001 = 0.5/1000

Page 40:

Problem 2: Particle Starvation

No particles in the vicinity of the current state

Page 41:

Problem 2: Particle Starvation

No particles in the vicinity of the current state.

Why?

1. Unlucky set of samples

2. Committed to the wrong mode in a multi-modal scenario

3. Bad set of measurements

Page 42–44:

Fix: Add new particles

Which distribution should be used to add new particles?

1. Uniform distribution

2. Biased around the last good measurement

3. Directly from the sensor model

Page 45–47:

Fix: Add new particles

When should we add new samples?

Key idea: as soon as the importance weights become too small, add more samples. Two ways to detect this (a sketch follows this list):

1. Threshold the total sum of weights

2. A fancier estimator that checks the rate of change of the weights
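A hedged sketch of the weight-sum trigger; `sample_recovery(k)` is a hypothetical callable that draws k states from whichever recovery distribution you chose on the previous slide (uniform, biased around the last good measurement, or the sensor model), and the threshold, fraction, and newcomer-weight choice are all illustrative.

```python
import numpy as np

def maybe_inject(particles, raw_weights, sample_recovery,
                 weight_threshold=1e-6, frac=0.05,
                 rng=np.random.default_rng()):
    """Replace a small fraction of particles with fresh samples when
    the pre-normalization weight mass collapses."""
    if raw_weights.sum() < weight_threshold:    # weights too small
        k = max(1, int(frac * len(particles)))
        idx = rng.choice(len(particles), size=k, replace=False)
        particles[idx] = sample_recovery(k)
        raw_weights[idx] = raw_weights.max()    # give newcomers a chance
    return particles, raw_weights               # normalize afterwards
```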

Page 48–49:

Problem 3: Observation model too good!

The observation model is so peaky that all particles die!

Fixes (a sketch of fix 2 follows this list):

1. Sample from a better proposal distribution than the motion model!

2. Squash the observation model: apply a power of 1/m to all probabilities, so that m observations count as one.

3. Last resort: smooth your observation model with a Gaussian (you are pretending your observation model is worse than it is).
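A sketch of fix 2 (likelihood squashing), done in log space for numerical stability; the example likelihood values are made up to show the effect.

```python
import numpy as np

def squashed_weights(log_liks, m):
    """Temper a peaky observation model by raising each probability
    to the power 1/m, so m observations carry the evidence of one."""
    log_liks = np.asarray(log_liks) / m     # p^(1/m) in log space
    log_liks -= log_liks.max()              # stabilize the exp
    w = np.exp(log_liks)
    return w / w.sum()

# Made-up peaky likelihoods: the raw weights would be ~[0, 0, 1],
# killing the first two particles; squashing keeps them alive.
print(squashed_weights(np.log([1e-9, 1e-8, 1e-2]), m=10.0))
```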

Page 50:

Fix 1: Sample from a better proposal distribution

Key idea: sample and reweigh particles correctly.

A contact observation may kill ALL particles!

$$bel(x_t) = \eta\, \underbrace{P(z_t \mid x_t)}_{\text{(Reweigh)}} \int \underbrace{P(x_t \mid x_{t-1}, u_t)}_{\text{(Sample)}}\, bel(x_{t-1})\, dx_{t-1}$$

(Koval et al. 2017)

Page 51:

Problem 4: How many samples is enough?

Example: we typically need more particles at the beginning of a run.

Key idea: KLD Sampling (Fox et al. 2002)

1. Partition the state space into bins

2. When sampling, keep track of the number of occupied bins

3. Stop sampling when you reach a statistical threshold that depends on the number of bins

(If all samples fall in a small number of bins, the threshold is lower. A sketch of the threshold follows this list.)
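The statistical threshold in Fox et al. 2002 comes from a Wilson–Hilferty approximation to a chi-square quantile; a sketch of the bound (the ε and δ defaults here are illustrative, not prescribed by the lecture):

```python
import math
from statistics import NormalDist

def kld_bound(k, epsilon=0.05, delta=0.01):
    """Particles needed so that, with probability 1 - delta, the KL
    divergence between the particle estimate and the true posterior is
    at most epsilon, given k occupied bins (Fox et al. 2002)."""
    if k <= 1:
        return 1
    z = NormalDist().inv_cdf(1.0 - delta)   # upper (1 - delta) quantile
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z) ** 3
    return math.ceil(n)

# More occupied bins -> a larger bound, so sampling continues while
# the particles are still spread over many bins.
print(kld_bound(2), kld_bound(100))   # e.g. ~66 and ~1347
```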

Page 52:

KLD-sampling

[Figure: KLD-sampling illustration]

Page 53–56:

Closing: Myth-busting particle filters

1. "Particle filter = sample from motion model, weight by observation" (you can sample from any good proposal distribution)

2. "Particle filters are for localization" (they apply to any continuous-space estimation problem)

3. "Particle filters are to do with samples" (normalized importance sampling also uses samples but no resampling; resampling is the defining step)