Vlad Estivill-Castro, Robots for People
--- A project for intelligent integrated systems
Probabilistic Map-based Localization
(Kalman Filter)
Chapter 5 (textbook)
© V. Estivill-Castro 2
Based on
◗ Textbook
• Introduction to Autonomous Mobile Robots, second edition
• Roland Siegwart, Illah R. Nourbakhsh, and Davide Scaramuzza
◗ Michael Williams
• "Introduction to Kalman Filters"
Belief
◗ In one dimension:
Example 4: Gaussian distribution
◗ Famous (analytically tractable) bell-shaped curve defined by the formula
• N(µ,σ)(x) = e^(−(x−µ)²/(2σ²)) / √(2πσ²)
• where µ is the mean value and σ is the standard deviation
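The formula can be checked with a few lines of Python (a minimal sketch of the density above):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """N(mu, sigma) evaluated at x: the bell-shaped curve from the slide."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# The density peaks at the mean mu and is symmetric around it.
```

For the standard normal (µ = 0, σ = 1) the peak value is 1/√(2π) ≈ 0.399.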
Action and perception updates
◗ In robot localization, we distinguish two update steps
1. Action update
• the robot moves and estimates its position through its proprioceptive sensors (motion model). During this step, the robot's uncertainty grows
2. Perception update
• the robot makes an observation using exteroceptive sensors and corrects its position by combining its belief before the observation with the probability of making exactly that observation. During this step, the robot's uncertainty shrinks.
[Figure: robot's belief before the observation, the probability of making this observation, and the updated belief (fusion), over positions x1, x2, x3]
Kalman filter
◗ Continuous, recursive and very compact solution to probabilistic position estimation
◗ Assumes the normal distribution is a suitable sensor (and motion) model
In general
◗ The solution to the probabilistic localization problem
• a method to compute the probability distribution of the configuration during each ACTION step and PERCEPTION step
Solution to the probabilistic localization --- ingredients
1. The initial probability distribution Prob(x)t=0
2. A motion model: the statistical error model of the proprioceptive sensors (e.g. wheel encoders)
3. A sensor model: the statistical error model of the exteroceptive sensors (e.g. laser, sonar, camera)
4. A map of the environment
Markov versus Kalman localization
Kalman
◗ The probability distribution of both the robot configuration and the sensor model is assumed to be continuous and Gaussian
◗ Since a Gaussian distribution is described by its mean value and variance, we only update these parameters. Computational cost is low
Markov
◗ The configuration space is divided into many cells
◗ The probability distribution of the sensor model is also discrete
◗ During the action and perception updates, all the cells are updated
The Problem
◗ System state cannot be measured directly
◗ Need to estimate "optimally" from measurements
[Block diagram: external controls drive a system subject to system error sources; measuring devices, subject to measurement error sources, produce the observed measurements; a black-box estimator outputs the optimal estimate of the system state (desired but not known)]
Kalman Filter Localization
[Block diagram as before, with the Kalman filter as the estimator: the system (with its error sources) and the control feed the measuring devices (with their measurement error sources); the Kalman filter turns the observed measurements into the optimal estimate of the system state (desired but not known)]
What is a Kalman Filter?
◗ Recursive data-processing algorithm
◗ Generates the optimal estimate of the desired quantities given the set of measurements
◗ Optimal?
• For a linear system with white Gaussian errors, the Kalman filter is the "best" estimate based on all previous measurements
• For a non-linear system, optimality is 'qualified'
◗ Recursive?
• Does not need to store all previous measurements and reprocess all data at each time step
Conceptual Overview
◗ Simple example to motivate the workings of the Kalman Filter
◗ Theoretical Justification to come later – for now just focus on the concept
◗ Important: Prediction and Correction
Conceptual Overview (1D)
◗ Lost on the 1-dimensional line
◗ Position: y(t)
◗ Assume Gaussian distributed measurements
Conceptual Overview
• Sextant measurement at t1: mean = z1 and variance = σ²z1
• Optimal estimate of position: ŷ(t1) = z1
• Variance of error in estimate: σ²x(t1) = σ²z1
• Boat in same position at time t2 – predicted position is z1
LEARN from a sensor (our initial state known)
Conceptual Overview
• So we have the prediction ŷ⁻(t2)
• GPS measurement at t2: mean = z2 and variance = σ²z2
• Need to correct the prediction due to the measurement to get ŷ(t2)
• Closer to the more trusted measurement – linear interpolation?
[Figure: prediction ŷ⁻(t2) and measurement z(t2)]
NEXT: sensor reading
Conceptual Overview
• Corrected mean is the new optimal estimate of position
• New variance is smaller than either of the previous two variances
[Figure: measurement z(t2), prediction ŷ⁻(t2), and corrected optimal estimate ŷ(t2)]
Conceptual Overview
◗ Lessons so far (not moving):
• Make prediction based on previous data: ŷ⁻, σ⁻
• Take measurement: zk, σz
• Optimal estimate: ŷ = prediction + (Kalman gain) * (measurement − prediction)
• Variance of estimate = variance of prediction * (1 − Kalman gain)
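These two update rules can be written directly. A minimal sketch (the prediction, measurement, and variances in the example are illustrative):

```python
def static_update(y_pred, var_pred, z, var_z):
    """One static Kalman update: blend a prediction and a measurement.

    K is the Kalman gain; the resulting variance is never larger than
    either input variance.
    """
    K = var_pred / (var_pred + var_z)
    y = y_pred + K * (z - y_pred)   # optimal estimate
    var = var_pred * (1.0 - K)      # variance of estimate
    return y, var

# Fusing a prediction at 10 (variance 4) with a measurement at 12
# (variance 4) lands halfway, with half the variance.
est, var = static_update(10.0, 4.0, 12.0, 4.0)
```

If the measurement is more trusted (smaller variance), the estimate is pulled closer to it, as the slides describe.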
Conceptual Overview
• At time t3, the boat moves with velocity dy/dt = u
• Naïve approach: shift the probability to the right to predict
• This would work if we knew the velocity exactly (perfect model)
[Figure: ŷ(t2) and naïve prediction ŷ⁻(t3)]
Motion model: we move with constant velocity (at least between sensor readings)
Conceptual Overview
• Better to assume an imperfect model by adding Gaussian noise
• dy/dt = u + w
• The distribution for the prediction moves and spreads out
[Figure: ŷ(t2), naïve prediction ŷ⁻(t3), and prediction ŷ⁻(t3)]
Conceptual Overview
• Now we take a measurement at t3
• Need to once again correct the prediction
• Same as before
[Figure: prediction ŷ⁻(t3), measurement z(t3), and corrected optimal estimate ŷ(t3)]
Conceptual Overview
◗ Lessons learnt from the conceptual overview:
• Initial conditions (ŷk−1 and σk−1)
• Prediction (ŷ⁻k, σ⁻k)
– Use the initial conditions and a motion model (e.g. constant velocity) to make a prediction
• Measurement (zk)
– Take a measurement
• Correction (ŷk, σk)
– Use the measurement to correct the prediction by 'blending' prediction and residual – always a case of merging only two Gaussians
– Optimal estimate with smaller variance
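The whole predict/correct cycle above can be sketched in a few lines. The boat's velocity, noise level, and the GPS-style reading below are illustrative values, not from the slides:

```python
def predict(y, var, u, dt, var_w):
    """Action update for dy/dt = u + w: the mean shifts with the motion
    and the variance grows with the process noise."""
    return y + u * dt, var + var_w * dt

def correct(y_pred, var_pred, z, var_z):
    """Perception update: blend prediction and measurement."""
    K = var_pred / (var_pred + var_z)
    return y_pred + K * (z - y_pred), (1.0 - K) * var_pred

# Boat example: start from a first fix, move at u = 1.0 for 5 s
# (uncertainty grows), then correct with a position reading
# (uncertainty shrinks).
y, var = 20.0, 1.0                                    # initial conditions
y, var = predict(y, var, u=1.0, dt=5.0, var_w=0.2)    # y = 25, var = 2
y, var = correct(y, var, z=26.0, var_z=2.0)           # blend with reading
```

Note how the two phases mirror the action update (uncertainty grows) and perception update (uncertainty shrinks) from the earlier slide.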
Introduction to the Kalman filter
◗ What is the best estimate if I have two measurements?
◗ Two measurements
• q1' = q1 with variance σ1²
• q2' = q2 with variance σ2²
◗ Weighted least squares (with weights wi = 1/σi²)
• S(q*) = Σi=1..n wi (q* − qi)²
◗ Finding the minimum error
• δS/δq* = 2 Σi=1..n wi (q* − qi) = 0
◗ (Somewhat complex) algebraic manipulation shows
• q* = q1 + σ1² (q2 − q1) / (σ1² + σ2²)
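With weights wi = 1/σi², the minimizer is the inverse-variance weighted mean, and a quick numeric check confirms it coincides with the q* formula above (the sample values are illustrative):

```python
def fuse(q1, var1, q2, var2):
    """q* from the slide: q1 pulled toward q2 by a variance-weighted factor."""
    return q1 + var1 * (q2 - q1) / (var1 + var2)

def weighted_mean(q1, var1, q2, var2):
    """Minimizer of S(q*) with weights w_i = 1 / sigma_i^2."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * q1 + w2 * q2) / (w1 + w2)

# Both forms agree, and the fused value sits closer to the
# lower-variance measurement.
```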
In Kalman filter notation
◗ Static re-evaluation of a sensor value
• We are not moving!
◗ xk+1* = xk* + Kk+1 (zk+1 − xk*)
◗ Kk+1 = σk² / (σk² + σz²)
◗ σk² = σ1²
◗ σz² = σ2²
◗ σk+1² = (1 − Kk+1) σk²
Kalman filter: dynamic prediction
◗ u = velocity, w = noise, dx/dt = u + w
◗ Motion
• xk'* = xk* + u(tk+1 − tk)
• σk'² = σk² + σw² (tk+1 − tk)
◗ Combining fusion and dynamic prediction
• xk+1* = xk'* + Kk+1 (zk+1 − xk'*)
• = [xk* + u(tk+1 − tk)] + Kk+1 (zk+1 − xk* − u(tk+1 − tk))
◗ Kk+1 = σk'² / (σk'² + σz²)
• = [σk² + σw² (tk+1 − tk)] / [σk² + σw² (tk+1 − tk) + σz²]
Derived by Bayes Theorem
◗ Sketch of the 1D case in the textbook
◗ The new belief is the product of two beliefs
• The predicted belief
• The sensed belief
◗ Both are represented as Gaussians, and the product is a Gaussian
◗ With the update being the Kalman rule
The five steps of Map-Based Localization
1. Prediction based on previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on prediction and map
4. Matching of observation and map
5. Estimation -> position update (posterior position)
Extended Kalman Filter
◗ We know the motion model is not linear
◗ But we can take the first-order Taylor approximation (once more)
◗ It turns out the equations are almost the same
The (Extended) Kalman filter
◗ In the first step, the robot position xt* at time step t is predicted based on its old location (time step t−1) and its movement due to the control input ut:
◗ xt* = f(xt−1, ut)    f: odometry function
◗ Pt* = Fx Pt−1 Fx^T + Fu Qt Fu^T
• Jacobians of f: Fx, Fu
• Covariance of the previous state: Pt−1
• Covariance of the noise associated with the motion: Qt
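This prediction step can be sketched for the differential-drive odometry that the following slides develop. The Jacobians below follow the textbook's odometry model; the wheel increments and error constants in any concrete run are illustrative, with kr, kl playing the role of the experimentally determined constants:

```python
import numpy as np

def ekf_predict(x, P, ds_r, ds_l, b, kr, kl):
    """EKF prediction for a differential-drive robot.

    x = [x, y, theta]; ds_r, ds_l = wheel travel increments;
    b = separation between the wheels."""
    ds = (ds_r + ds_l) / 2.0          # travel of the robot's center
    dth = (ds_r - ds_l) / b           # change of heading
    th = x[2] + dth / 2.0

    # Odometry function f: predicted pose x_t*
    x_new = x + np.array([ds * np.cos(th), ds * np.sin(th), dth])

    # Jacobian Fx of f with respect to the state (x, y, theta)
    Fx = np.array([[1.0, 0.0, -ds * np.sin(th)],
                   [0.0, 1.0,  ds * np.cos(th)],
                   [0.0, 0.0,  1.0]])

    # Jacobian Fu of f with respect to the wheel increments (ds_r, ds_l)
    Fu = np.array([
        [0.5 * np.cos(th) - ds / (2 * b) * np.sin(th),
         0.5 * np.cos(th) + ds / (2 * b) * np.sin(th)],
        [0.5 * np.sin(th) + ds / (2 * b) * np.cos(th),
         0.5 * np.sin(th) - ds / (2 * b) * np.cos(th)],
        [1.0 / b, -1.0 / b]])

    # Independent wheel errors; variance proportional to |travel distance|
    Q = np.diag([kr * abs(ds_r), kl * abs(ds_l)])

    # P_t* = Fx P Fx^T + Fu Q Fu^T
    P_new = Fx @ P @ Fx.T + Fu @ Q @ Fu.T
    return x_new, P_new
```

Driving straight (equal wheel increments) shifts the pose along the heading while the covariance grows, matching "uncertainty grows during the action update".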
Odometry in the robot's frame of reference
◗ b is the separation between the wheels
◗ Kinematics:
We still have to make some assumptions
◗ For the position, Σp can initially be all zeros
◗ Estimate an (initial) covariance matrix:
• Qt = ΣΔ = [ kr |Δsr|      0
                 0       kl |Δsl| ]
• The two errors from the wheels are independent
• The variance of the errors in the wheels is proportional to the absolute value of the travel distance
• The constants kr, kl are determined experimentally
Odometry in the robot's frame of reference
◗ b is the separation between the wheels
◗ (Motion) Error Model:
The Generic Problem (several dimensions / state vector)
◗ System state cannot be measured directly
◗ Need to estimate "optimally" from measurements
Theoretical Basis
◗ Process to be estimated:
• yk = A yk−1 + B uk + wk−1   (motion model)
• zk = H yk + vk
• Process noise w with covariance Q
• Measurement noise v with covariance R
◗ Kalman Filter
• Prediction: ŷ⁻k is the estimate based on measurements at previous time steps
– ŷ⁻k = A ŷk−1 + B uk
– P⁻k = A Pk−1 A^T + Q
• Correction: ŷk has additional information – the measurement at time k
– K = P⁻k H^T (H P⁻k H^T + R)⁻¹
– ŷk = ŷ⁻k + K (zk − H ŷ⁻k)
– Pk = (I − K H) P⁻k
Blending Factor
• If we are sure about the measurements:
– Measurement error covariance R decreases towards zero
– K increases and weights the residual more heavily than the prediction
• If we are sure about the prediction:
– Prediction error covariance P⁻k decreases towards zero
– K decreases and weights the prediction more heavily than the residual
Theoretical Basis
Prediction (Time Update)
(1) Project the state ahead: ŷ⁻k = A ŷk−1 + B uk
(2) Project the error covariance ahead: P⁻k = A Pk−1 A^T + Q
Correction (Measurement Update)
(1) Compute the Kalman gain: K = P⁻k H^T (H P⁻k H^T + R)⁻¹
(2) Update the estimate with measurement zk: ŷk = ŷ⁻k + K (zk − H ŷ⁻k)
(3) Update the error covariance: Pk = (I − K H) P⁻k
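The five equations above can be sketched as one numpy step. The constant-velocity model used in the example is illustrative, not part of the slides:

```python
import numpy as np

def kf_step(y, P, u, z, A, B, H, Q, R):
    """One Kalman cycle: time update, then measurement update."""
    # Prediction (time update)
    y_pred = A @ y + B @ u                      # (1) project the state ahead
    P_pred = A @ P @ A.T + Q                    # (2) project the error covariance

    # Correction (measurement update)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # (1) Kalman gain
    y_new = y_pred + K @ (z - H @ y_pred)       # (2) blend in measurement z_k
    P_new = (np.eye(len(y)) - K @ H) @ P_pred   # (3) update error covariance
    return y_new, P_new

# Illustrative 1D position/velocity model: state [position, velocity],
# unit time step, position-only measurements.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
y, P = kf_step(np.array([0.0, 1.0]), np.eye(2),
               np.zeros(1), np.array([1.0]), A, B, H, Q, R)
```

The correction always leaves the position variance no larger than the predicted one, which is the "blending" of the earlier slides in matrix form.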
Quick Example – Constant Model
Quick Example – Constant Model
Prediction
• ŷ⁻k = ŷk−1
• P⁻k = Pk−1
Correction (H = 1)
• K = P⁻k (P⁻k + R)⁻¹
• ŷk = ŷ⁻k + K (zk − ŷ⁻k)
• Pk = (I − K) P⁻k
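A scalar version of this constant model is only a few lines; the measurement sequence below is illustrative:

```python
def constant_model_filter(measurements, R, y0=0.0, P0=1.0):
    """Scalar filter for the constant model (A = H = 1, B = 0,
    no process noise), using the prediction/correction equations above."""
    y, P, history = y0, P0, []
    for z in measurements:
        y_pred, P_pred = y, P            # prediction: the state is constant
        K = P_pred / (P_pred + R)        # Kalman gain
        y = y_pred + K * (z - y_pred)    # correction
        P = (1.0 - K) * P_pred
        history.append(P)
    return y, history

# With P0 = 1 and constant R, the error covariance after k steps is
# P_k = R / (R + k): it shrinks monotonically as measurements accumulate.
```

A larger R makes K smaller, so the filter is slower to 'believe' measurements; this is exactly the behaviour shown in the convergence plots that follow.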
Quick Example – Constant Model
[Figure: filter output for the constant model example]
Quick Example – Constant Model
[Figure: convergence of the error covariance Pk]
Quick Example – Constant Model
[Figure: filter output with a larger R]
Larger value of R – the measurement error covariance (indicates poorer quality of measurements)
Filter slower to 'believe' measurements – slower convergence
Kalman filter illustration
◗ Recall the motion model for a differential two-wheel robot
Prediction update
◗ Pt* = Fx Pt−1 Fx^T + Fu Qt Fu^T
Observation
◗ The second step is to obtain the observation Z (measurement) from the robot's sensors at the new location
◗ The observation usually consists of a set of n0 single observations zj extracted from the different sensor signals. It represents features like lines, doors, or any other kind of landmark
◗ The parameters of the targets are usually observed in the sensor/robot frame
• Therefore, the observations have to be transformed to the world frame {W}, or
• the measurement prediction has to be transformed to the sensor frame {R}
◗ This transformation is specified in the function hj (see later)
Observation
[Figure: raw data of laser scanner; extracted lines; extracted lines in model space]
Measurement Prediction
◗ In the next step, we use the predicted robot position xt* and the features mj in the map M to generate multiple predicted observations ztj*
◗ They have to be transformed into the sensor frame
• ztj* = hj(xt*, mj)
◗ We can now define the measurement prediction as a SET Z* containing all nj predicted observations
• Z* = { ztj* | 1 ≤ j ≤ nj }
◗ The function hj is mainly the coordinate transformation between the world frame {W} and the sensor/robot frame {R}
Measurement Prediction
◗ For prediction, only the walls that are in the field of view of the robot are selected
◗ This is done by linking the individual lines to the nodes of the path
An example:
◗ The generated measurement predictions have to be transformed to the robot frame {R}
• mj = {W}[αj, rj]  ->  ztj* = [αtj*, rtj*]
◗ According to the figure in the previous slide, the transformation is given by
• ztj* = [αtj*, rtj*] = hj(xt*, mj)
  = [ {W}αj − θt*,
      {W}rj − (xt* cos({W}αj) + yt* sin({W}αj)) ]
◗ and its Jacobian by
• Hj = [ δα/δx  δα/δy  δα/δθ
         δr/δx  δr/δy  δr/δθ ]
  = [ 0              0              −1
      −cos({W}αj)   −sin({W}αj)     0 ]
Matching
◗ Assignment of the observations (gained by the sensors) to the targets (stored in the map)
◗ For each measurement prediction for which a corresponding observation is found, we calculate the innovation vtij
◗ and the innovation covariance, found by applying the error propagation law:
• ΣINtij = Hj Pt* Hj^T + Rti   (Hj: Jacobian)
◗ The validity of the correspondence between measurement and prediction can be evaluated through the Mahalanobis distance:
• vtij^T (ΣINtij)⁻¹ vtij ≤ g²
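The gate can be sketched directly from these two formulas; the matrices and the threshold g in the example are illustrative:

```python
import numpy as np

def innovation_gate(v, H, P_pred, R, g=2.0):
    """Validate one observation/prediction pairing.

    v is the innovation (observation minus predicted observation).
    Sigma_IN = H P* H^T + R is the innovation covariance; the pairing
    is accepted when the squared Mahalanobis distance is at most g^2."""
    sigma_in = H @ P_pred @ H.T + R          # innovation covariance
    d2 = float(v @ np.linalg.inv(sigma_in) @ v)  # v^T Sigma_IN^-1 v
    return d2 <= g * g, d2
```

An innovation close to zero passes the gate, while a large one is rejected as an implausible correspondence.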
Matching
Estimation
◗ Applying the Kalman filter
• Kalman filter gain:
– Kt = Pt* Ht^T (ΣINt)⁻¹
• Update of the robot's position estimate:
– xt = xt* + Kt vt
• The associated variance:
– Pt = Pt* − Kt ΣINt Kt^T
Estimation
◗ Kalman filter estimation of the new robot position xt
• by fusing the prediction of the robot position (magenta) with the innovation gained by the measurement (green), we get the updated estimate of the robot position (red)
Summary
◗ Probabilistic-based localization is a general method that includes
• Kalman filter
• Markov-based filter
– The particle filter is a special case of the Markov filter
• The main difference is the representation of the belief
• As a result, the action update and the sensor update may be different
Recommended