Markov Localization & Bayes Filtering
with Kalman Filters, Discrete Filters, and Particle Filters
Slides adapted from Thrun et al.,
Probabilistic Robotics
Markov Localization
The robot does not know where it is. Thus, a reasonable initial belief about its position is a uniform distribution.
Markov Localization
A sensor reading is made (USE SENSOR MODEL) indicating a door at certain locations (USE MAP). This sensor reading is integrated with the prior belief to update our belief (USE BAYES).
Markov Localization
The robot is moving (USE MOTION MODEL), which adds noise.
Markov Localization
A new sensor reading (USE SENSOR MODEL) indicates a door at certain locations (USE MAP). It is integrated with the prior belief to update our belief (USE BAYES).
Markov Localization
The robot is moving (USE MOTION MODEL), which adds noise. …
Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⟹  P(x | y) = P(y | x) P(x) / P(y)  =  likelihood · prior / evidence
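As a quick numeric check of the formula, consider the door example with made-up numbers (the prior and sensor probabilities below are purely illustrative):

```python
# Hypothetical numbers, chosen only for illustration.
p_x = 0.3               # prior P(x): probability a door is here
p_z_given_x = 0.6       # likelihood P(z|x): sensor reports "door" given a door
p_z_given_not_x = 0.2   # false-positive rate P(z|not x)

# Evidence P(z) by total probability
p_z = p_z_given_x * p_x + p_z_given_not_x * (1 - p_x)

# Bayes: P(x|z) = P(z|x) P(x) / P(z)
p_x_given_z = p_z_given_x * p_x / p_z
print(p_x_given_z)  # one positive reading raises the belief from 0.3 to 0.5625
```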
Bayes Rule with Background Knowledge

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = 1 / P(y) = 1 / Σ_x P(y | x) P(x)

Algorithm:

∀x: aux_{x|y} = P(y | x) P(x)
η = 1 / Σ_x aux_{x|y}
∀x: P(x | y) = η aux_{x|y}
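The normalization algorithm above can be sketched directly over a finite state set (the two state names below are illustrative):

```python
def bayes_normalized(prior, likelihood):
    """For all x: aux[x] = P(y|x) P(x); eta = 1 / sum_x aux[x];
    for all x: P(x|y) = eta * aux[x]."""
    aux = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(aux.values())
    return {x: eta * a for x, a in aux.items()}

# Hypothetical two-state example: is there a door at this cell?
posterior = bayes_normalized(prior={"door": 0.3, "wall": 0.7},
                             likelihood={"door": 0.6, "wall": 0.2})
# posterior sums to 1 by construction
```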
Recursive Bayesian Updating

P(x | z_1, …, z_n) = P(z_n | x, z_1, …, z_{n-1}) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})

Markov assumption: z_n is independent of z_1, …, z_{n-1} if we know x.

P(x | z_1, …, z_n) = P(z_n | x) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})
                   = η P(z_n | x) P(x | z_1, …, z_{n-1})
                   = η_{1…n} [ ∏_{i=1…n} P(z_i | x) ] P(x)
Putting observations and actions together: Bayes Filters

• Given:
• Stream of observations z and action data u:  d_t = {u_1, z_1, …, u_t, z_t}
• Sensor model P(z | x).
• Action model P(x | u, x').
• Prior probability of the system state P(x).
• Wanted:
• Estimate of the state X of a dynamical system.
• The posterior of the state is also called the Belief:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
Graphical Representation and Markov Assumption

Underlying assumptions:
• Static world
• Independent noise
• Perfect model, no approximation errors

p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
p(x_t | x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
Bayes Filters

z = observation, u = action, x = state

             Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Bayes        = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)

Markov       = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)

Total prob.  = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}

Markov       = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, z_{t-1}) dx_{t-1}

             = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

• Prediction:  bel⁻(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
• Correction:  bel(x_t) = η p(z_t | x_t) bel⁻(x_t)
Bayes Filter Algorithm

1. Algorithm Bayes_filter( Bel(x), d ):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12. Return Bel'(x)

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
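A minimal sketch of this algorithm over a finite state space; the three-cell corridor, the shift motion model, and the sensor probabilities below are invented for illustration:

```python
def bayes_filter(bel, d, sensor_model=None, motion_model=None):
    """One Bayes filter update. bel: dict state -> probability.
    d: ("z", z) perceptual item or ("u", u) action item.
    sensor_model(z, x) = P(z|x); motion_model(x, u, xp) = P(x|u, x')."""
    kind, value = d
    if kind == "z":   # correction: multiply by likelihood, then normalize
        new = {x: sensor_model(value, x) * bel[x] for x in bel}
        eta = sum(new.values())
        return {x: p / eta for x, p in new.items()}
    else:             # prediction: sum over previous states x'
        return {x: sum(motion_model(x, value, xp) * bel[xp] for xp in bel)
                for x in bel}

# Toy 3-cell cyclic corridor: action u shifts the robot u cells to the right
motion = lambda x, u, xp: 1.0 if x == (xp + u) % 3 else 0.0
sensor = lambda z, x: 0.8 if x == z else 0.1   # noisy "I am at cell z"

bel = {0: 1/3, 1: 1/3, 2: 1/3}                 # uniform initial belief
bel = bayes_filter(bel, ("u", 1), motion_model=motion)
bel = bayes_filter(bel, ("z", 0), sensor_model=sensor)
```

After the deterministic move the belief is still uniform; the sensor reading then concentrates it on cell 0.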
Bayes Filters are Familiar!

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
Probabilistic Robotics
Bayes Filter Implementations
Gaussian filters
Univariate Gaussians

p(x) ~ N(μ, σ²):

p(x) = (1 / (√(2π) σ)) exp( −(x − μ)² / (2σ²) )

Linear transform of Gaussians:

X ~ N(μ, σ²), Y = aX + b  ⟹  Y ~ N(aμ + b, a²σ²)

• We stay in the “Gaussian world” as long as we start with Gaussians and perform only linear transformations.
Multivariate Gaussians

X ~ N(μ, Σ), Y = AX + B  ⟹  Y ~ N(Aμ + B, A Σ Aᵀ)

Product of two Gaussians:

X₁ ~ N(μ₁, σ₁²), X₂ ~ N(μ₂, σ₂²)  ⟹
p(X₁) · p(X₂) ~ N( (σ₂² / (σ₁² + σ₂²)) μ₁ + (σ₁² / (σ₁² + σ₂²)) μ₂ ,  (σ₁⁻² + σ₂⁻²)⁻¹ )
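The product formula is easy to sanity-check numerically (the two measurements below are made up):

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Product of two 1-D Gaussian densities: the mean is precision-weighted,
    and the precisions (inverse variances) add."""
    mu = (var2 / (var1 + var2)) * mu1 + (var1 / (var1 + var2)) * mu2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    return mu, var

# Hypothetical: an uncertain estimate (mean 0, var 4) fused with a sharper
# measurement (mean 10, var 1) lands much closer to the sharp one, and the
# fused variance is smaller than either input.
mu, var = fuse_gaussians(0.0, 4.0, 10.0, 1.0)
```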
Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation

x_t = A_t x_{t-1} + B_t u_t + ε_t

with a measurement

z_t = C_t x_t + δ_t
Linear Gaussian Systems: Initialization

• Initial belief is normally distributed:

bel(x₀) = N(x₀; μ₀, Σ₀)
Linear Gaussian Systems: Dynamics

• Dynamics are a linear function of state and control plus additive noise:

x_t = A_t x_{t-1} + B_t u_t + ε_t

p(x_t | u_t, x_{t-1}) = N(x_t; A_t x_{t-1} + B_t u_t, R_t)

bel⁻(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
            with p(x_t | u_t, x_{t-1}) ~ N(x_t; A_t x_{t-1} + B_t u_t, R_t)
            and  bel(x_{t-1}) ~ N(x_{t-1}; μ_{t-1}, Σ_{t-1})
Linear Gaussian Systems: Observations

• Observations are a linear function of state plus additive noise:

z_t = C_t x_t + δ_t

p(z_t | x_t) = N(z_t; C_t x_t, Q_t)

bel(x_t) = η p(z_t | x_t) bel⁻(x_t)
           with p(z_t | x_t) ~ N(z_t; C_t x_t, Q_t)
           and  bel⁻(x_t) ~ N(x_t; μ̄_t, Σ̄_t)
Kalman Filter Algorithm

1. Algorithm Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   μ̄_t = A_t μ_{t-1} + B_t u_t
4.   Σ̄_t = A_t Σ_{t-1} A_tᵀ + R_t
5. Correction:
6.   K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
7.   μ_t = μ̄_t + K_t (z_t − C_t μ̄_t)
8.   Σ_t = (I − K_t C_t) Σ̄_t
9. Return μ_t, Σ_t
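A direct NumPy transcription of the algorithm, run once on an invented 1-D constant-position system (all matrices below are illustrative):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction/correction cycle of the Kalman filter algorithm above."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Hypothetical 1-D system: x_t = x_{t-1} + u_t + noise, z_t = x_t + noise
A = np.array([[1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
R = np.array([[0.5]]); Q = np.array([[1.0]])
mu, Sigma = kalman_filter(np.array([0.0]), np.array([[1.0]]),
                          np.array([1.0]), np.array([2.0]), A, B, C, R, Q)
```

With these numbers the prediction gives μ̄ = 1, Σ̄ = 1.5; the gain is K = 1.5/2.5 = 0.6, so the corrected estimate moves 60% of the way toward the measurement z = 2.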
Kalman Filter Summary

• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2)
• Optimal for linear Gaussian systems!
• Most robotics systems are nonlinear!
Nonlinear Dynamic Systems

• Most realistic robotic problems involve nonlinear functions:

x_t = g(u_t, x_{t-1})
z_t = h(x_t)
Linearity Assumption Revisited
Non-linear Function

EKF Linearization (1)

EKF Linearization (2)

EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion

• Prediction:

g(u_t, x_{t-1}) ≈ g(u_t, μ_{t-1}) + (∂g(u_t, μ_{t-1}) / ∂x_{t-1}) (x_{t-1} − μ_{t-1})
               = g(u_t, μ_{t-1}) + G_t (x_{t-1} − μ_{t-1})

• Correction:

h(x_t) ≈ h(μ̄_t) + (∂h(μ̄_t) / ∂x_t) (x_t − μ̄_t)
       = h(μ̄_t) + H_t (x_t − μ̄_t)
EKF Algorithm

1. Algorithm Extended_Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):
2. Prediction:
3.   μ̄_t = g(u_t, μ_{t-1})                          (KF: μ̄_t = A_t μ_{t-1} + B_t u_t)
4.   Σ̄_t = G_t Σ_{t-1} G_tᵀ + R_t                   (KF: Σ̄_t = A_t Σ_{t-1} A_tᵀ + R_t)
5. Correction:
6.   K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + Q_t)⁻¹          (KF: K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹)
7.   μ_t = μ̄_t + K_t (z_t − h(μ̄_t))                 (KF: μ_t = μ̄_t + K_t (z_t − C_t μ̄_t))
8.   Σ_t = (I − K_t H_t) Σ̄_t                         (KF: Σ_t = (I − K_t C_t) Σ̄_t)
9. Return μ_t, Σ_t

with  G_t = ∂g(u_t, μ_{t-1}) / ∂x_{t-1}  and  H_t = ∂h(μ̄_t) / ∂x_t
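The same cycle in NumPy, with the models and Jacobians passed in as functions. This is only a sketch; as a sanity check, with linear g and h (and the hypothetical 1-D numbers below) it reduces exactly to the Kalman filter:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, h, G, H, R, Q):
    """One EKF prediction/correction cycle. g, h are the nonlinear models;
    G(u, mu) and H(mu) return their Jacobians (the linearization above)."""
    # Prediction
    mu_bar = g(u, mu)
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    # Correction
    Ht = H(mu_bar)
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new

# Sanity check with linear models (hypothetical 1-D system): for
# g(u, x) = x + u and h(x) = x the Jacobians are the constant matrix [[1]].
mu, Sigma = ekf_step(np.array([0.0]), np.array([[1.0]]),
                     np.array([1.0]), np.array([2.0]),
                     g=lambda u, x: x + u, h=lambda x: x,
                     G=lambda u, x: np.array([[1.0]]),
                     H=lambda x: np.array([[1.0]]),
                     R=np.array([[0.5]]), Q=np.array([[1.0]]))
```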
Localization

“Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities.” [Cox ’91]

• Given:
• Map of the environment.
• Sequence of sensor measurements.
• Wanted:
• Estimate of the robot’s position.
• Problem classes:
• Position tracking
• Global localization
• Kidnapped robot problem (recovery)
Landmark-based Localization
EKF Summary

• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2)
• Not optimal!
• Can diverge if nonlinearities are large!
• Works surprisingly well even when all assumptions are violated!
Kalman Filter-based System  [Arras et al. 98]

• Laser range-finder and vision
• High precision (<1 cm accuracy)

[Courtesy of Kai Arras]
Multi-Hypothesis Tracking
• Belief is represented by multiple hypotheses.
• Each hypothesis is tracked by a Kalman filter.
• Additional problems:
• Data association: which observation corresponds to which hypothesis?
• Hypothesis management: when to add / delete hypotheses?
• Huge body of literature on target tracking, motion correspondence, etc.

Localization With MHT
MHT: Implemented System (2)
Courtesy of P. Jensfelt and S. Kristensen
Probabilistic Robotics
Bayes Filter Implementations
Discrete filters
Piecewise Constant
Discrete Bayes Filter Algorithm

1. Algorithm Discrete_Bayes_filter( Bel(x), d ):
2.   η = 0
3.   If d is a perceptual data item z then
4.     For all x do
5.       Bel'(x) = P(z | x) Bel(x)
6.       η = η + Bel'(x)
7.     For all x do
8.       Bel'(x) = η⁻¹ Bel'(x)
9.   Else if d is an action data item u then
10.    For all x do
11.      Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
12. Return Bel'(x)
Grid-based Localization

Sonars and Occupancy Grid Map
Probabilistic Robotics
Bayes Filter Implementations
Particle filters
Sample-based Localization (sonar)
Represent belief by random samples
Estimation of non-Gaussian, nonlinear processes
Monte Carlo filter, Survival of the fittest, Condensation, Bootstrap filter, Particle filter

Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96]
Computer vision: [Isard and Blake 96, 98]
Dynamic Bayesian Networks: [Kanazawa et al., 95]
Particle Filters
Weight samples: w = f / g
Importance Sampling
Importance Sampling with Resampling: Landmark Detection Example
Particle Filters

Sensor Information: Importance Sampling

Bel(x) ← η p(z | x) Bel⁻(x)

w ← η p(z | x) Bel⁻(x) / Bel⁻(x) = η p(z | x)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'
Particle Filter Algorithm

1. Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ):
2.   S_t = ∅, η = 0
3.   For i = 1 … n                        Generate new samples
4.     Sample index j(i) from the discrete distribution given by w_{t-1}
5.     Sample x_t^i from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6.     Compute importance weight w_t^i = p(z_t | x_t^i)
7.     Update normalization factor η = η + w_t^i
8.     Insert ⟨x_t^i, w_t^i⟩ into S_t
9.   For i = 1 … n
10.    Normalize weights: w_t^i = w_t^i / η
11. Return S_t
draw x_{t-1}^i from Bel(x_{t-1})
draw x_t^i from p(x_t | x_{t-1}^i, u_{t-1})

Importance factor for x_t^i:

w_t^i = target distribution / proposal distribution
      = η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) / ( p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) )
      ∝ p(z_t | x_t)

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}
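A compact sketch of one particle-filter cycle; the motion and sensor models passed in below are made-up placeholders, where a real implementation would sample from the robot's actual models:

```python
import random

def particle_filter(particles, u, z, sample_motion, likelihood):
    """One cycle. particles: list of (x, w) pairs with weights summing to 1.
    sample_motion(x, u) draws from p(x'|x, u); likelihood(z, x) = p(z|x)."""
    n = len(particles)
    states = [x for x, w in particles]
    weights = [w for x, w in particles]
    # Step 4: sample indices j(i) from the discrete distribution given by w
    drawn = random.choices(states, weights=weights, k=n)
    # Step 5: propagate each resampled particle through the motion model
    moved = [sample_motion(x, u) for x in drawn]
    # Steps 6-10: importance weights = observation likelihood, normalized
    w = [likelihood(z, x) for x in moved]
    eta = sum(w)
    return [(x, wi / eta) for x, wi in zip(moved, w)]

# Toy 1-D example: deterministic motion, a sensor that prefers cell z
particles = [(0, 0.5), (1, 0.5)]
particles = particle_filter(particles, u=1, z=1,
                            sample_motion=lambda x, u: x + u,
                            likelihood=lambda z, x: 1.0 if x == z else 0.5)
```

Because resampling already draws particles in proportion to their old weights, the new importance weight reduces to the observation likelihood p(z_t | x_t^i), exactly as derived above.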
Particle Filter Algorithm
Start
Motion Model Reminder
Proximity Sensor Model Reminder
Laser sensor Sonar sensor
Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path

Localization for AIBO robots
Limitations
• The approach described so far is able to
• track the pose of a mobile robot, and to
• globally localize the robot.
• How can we deal with localization errors (i.e., the kidnapped robot problem)?
Approaches

• Randomly insert samples (the robot could have been teleported at any point in time).
• Insert random samples proportional to the average likelihood of the particles (the robot has been teleported with higher probability when the likelihood of its observations drops).

Global Localization

Kidnapping the Robot
Summary
• Particle filters are an implementation of recursive Bayesian filtering.
• They represent the posterior by a set of weighted samples.
• In the context of localization, the particles are propagated according to the motion model.
• They are then weighted according to the likelihood of the observations.
• In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.