
Page 1:

Chapter 5 Optimal Estimation

Part 2: 5.2 Covariance Propagation and Optimal Estimation

Mobile Robotics - Prof. Alonzo Kelly, CMU RI

Page 2:

Outline

• 5.2 Random Variables, Processes and Transformation
  – 5.2.1 Variance of Continuous Integration and Averaging Processes
  – 5.2.2 Stochastic Integration
  – 5.2.3 Optimal Estimation
  – Summary

Page 3:

Outline

• 5.2 Random Variables, Processes and Transformation
  – 5.2.1 Variance of Continuous Integration and Averaging Processes
  – 5.2.2 Stochastic Integration
  – 5.2.3 Optimal Estimation
  – Summary

Page 4:

State Estimation

• Henceforth, reinterpret our "transformations" of uncertainty to cover recursive relationships.
• Our goal is a set of recursive algorithms to track the state x, and its uncertainty P, of a dynamical system.
• Define:
  – $x_k$ : state estimate at time k
  – $P_k$ : (state) covariance estimate at time k
  – $z_k$ : measurement at time k
  – $R_k$ : (measurement) covariance estimate at time k

Page 5:

5.2.1.2 Recursive Integration

• Recall the result for a sum of iid RVs and reinterpret the "summing" as integration as it occurs in dead reckoning.
• In our new notation, the summing process can be written: $x_k = x_{k-1} + z_k$
• Its covariance can be written recursively as: $P_k = P_{k-1} + R_k$

Variance grows linearly wrt time.
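As an aside (not from the slides), the linear variance growth of such a summing process is easy to check numerically. A minimal Monte Carlo sketch, assuming numpy and made-up noise values:

```python
import numpy as np

# Monte Carlo check that the variance of a summed iid process
# grows linearly with the number of increments (hypothetical values).
rng = np.random.default_rng(0)
sigma_z = 0.1                      # assumed per-step standard deviation
steps, trials = 100, 10000

z = rng.normal(0.0, sigma_z, size=(trials, steps))
x = np.cumsum(z, axis=1)           # the "summing" (dead-reckoning) process

# Empirical variance at step k should be close to k * sigma_z**2.
for k in (10, 50, 100):
    print(k, x[:, k - 1].var(), k * sigma_z**2)
```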

Page 6:

5.2.1.3 Variance of a Continuous Summing Process

[Figure: standard deviation versus time in seconds, rising from 0 to about 4 over 15 seconds.]

Standard Deviation grows linearly wrt square root of time.

Page 7:

5.2.1.4 Recursive Averaging

• The averaging process can be written as: $\hat{x}_k = \frac{1}{k}\sum_{i=1}^{k} z_i$
• Isolate the last estimate: $\hat{x}_k = \frac{1}{k}\left[(k-1)\hat{x}_{k-1} + z_k\right]$
• Simplifies to the recursive form: $\hat{x}_k = \hat{x}_{k-1} + \frac{1}{k}\left(z_k - \hat{x}_{k-1}\right)$
• Define the "Kalman Gain" K = 1/k: $\hat{x}_k = \hat{x}_{k-1} + K\left(z_k - \hat{x}_{k-1}\right)$

The term $(z_k - \hat{x}_{k-1})$ is the "Innovation".
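A minimal sketch of this recursion with hypothetical measurements, confirming that the gain-times-innovation form reproduces the batch mean exactly:

```python
import numpy as np

# Recursive averaging: x_hat <- x_hat + K * (z - x_hat) with K = 1/k.
rng = np.random.default_rng(1)
z = rng.normal(5.0, 2.0, size=1000)   # hypothetical noisy measurements

x_hat = 0.0
for k, z_k in enumerate(z, start=1):
    K = 1.0 / k                       # the "Kalman gain" for pure averaging
    x_hat += K * (z_k - x_hat)        # gain times the innovation

print(x_hat, z.mean())                # identical up to rounding
```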

Page 8:

5.2.1.4 Recursive Averaging

• Recall the result for an average of iid RVs. For k measurements: $\sigma_{\hat{x}}^2 = \sigma_x^2 / k$
• Note that: $\dfrac{1}{\sigma_{\hat{x}}^2} = \dfrac{k}{\sigma_x^2}$
• Which means: $\dfrac{1}{\sigma_{\hat{x}}^2} = \dfrac{1}{\sigma_x^2} + \dfrac{1}{\sigma_x^2} + \dots + \dfrac{1}{\sigma_x^2}$ (k terms)
• So, variances add by reciprocals, just like conductances in electric circuits.

Page 9:

5.2.1.4 Recursive Averaging

• Now, because: $\sigma_k^2 = \sigma_x^2 / k$
• Substitute to get: $\sigma_{k+1}^2 = \dfrac{k}{k+1}\,\sigma_k^2$
• Substituting the Kalman gain $K = 1/(k+1)$ (and adding 1 to k): $\sigma_{k+1}^2 = (1 - K)\,\sigma_k^2$

(Note: the corresponding equations in the book contain typos.)

Page 10:

5.2.1.3 Variance of a Continuous Averaging Process

[Figure: standard deviation versus time in seconds, decaying from about 2 toward 0 over 15 seconds.]

Standard Deviation decreases linearly wrt square root of time.

Page 11:

5.2.1.5 Measuring "Stability"

• Refers to changes in effective (average) bias and scale errors.
  – Often quoted as change in bias or scale as a function of temperature or time.

Bias is unrelated to noise amplitude: this gyro has about 0.4 deg/sec peak-to-peak variation, yet its average bias is < 20 deg/hr. (From http://www.xbow.com)

Somehow, "bias instability" means the same as "bias stability" in this context.

Page 12:

Allan Variance (Measure of Bias Stability)

• Average all measurements over some time period ∆t.
• Ask how much the average (over ∆t) can change over a period of time ∆t.
• Take the difference in average between successive bins. Square it.
• Add up at least 9 of these and divide by 2(n-1):

$\sigma_A^2(\Delta t) = \dfrac{1}{2(n-1)} \sum_{i=1}^{n-1} \left( \bar{y}_{i+1} - \bar{y}_i \right)^2$

Intuitively, this is the variance in the bias for a given level of averaging. (What is the 2 for?)
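The recipe above translates almost line for line into code. A sketch with a hypothetical gyro rate signal (non-overlapping bins assumed; conventions vary):

```python
import numpy as np

def allan_variance(y, dt, bin_dt):
    """Allan variance of samples y (spacing dt) for averaging time bin_dt."""
    m = int(bin_dt / dt)               # samples per bin
    n = len(y) // m                    # number of complete successive bins
    means = y[:n * m].reshape(n, m).mean(axis=1)   # average over each bin
    d = np.diff(means)                 # difference of successive bin averages
    return np.sum(d**2) / (2 * (n - 1))

rng = np.random.default_rng(2)
rate = rng.normal(0.0, 0.4, size=100_000)   # hypothetical gyro rate, deg/sec
for tau in (0.1, 1.0, 10.0):
    print(tau, np.sqrt(allan_variance(rate, dt=0.01, bin_dt=tau)))
```

For white noise like this, the printed Allan deviation falls with increasing tau, matching the left side of the curve on the next pages.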

Page 13:

Allan Variance

• Compute the difference in average of successive bins.

[Figure: a signal y(t) partitioned into successive bins of width ∆t, with bin averages $\bar{y}_i$ and $\bar{y}_{i+1}$.]

Page 14:

Allan Variance

• Variance drops initially as ∆t increases (effect of averaging).
  – Sensor noise dominates for small ∆t (does not average out).
  – Rate random walk dominates for large ∆t (bias really is changing).

Inertial sensor manufacturers quote the minimum (equals the best achievable result with active bias estimation and a fully modeled sensor).

Page 15:

Allan Deviation Graph

[Figure: log-log plot of Allan deviation (deg/sec) versus Tau (sec), from 0.01 to 1000 sec. Angle random walk dominates at small Tau, rate random walk at large Tau, with the bias stability minimum between them.]

Page 16:

Outline

• 5.2 Random Variables, Processes and Transformation
  – 5.2.1 Variance of Continuous Integration and Averaging Processes
  – 5.2.2 Stochastic Integration
  – 5.2.3 Optimal Estimation
  – Summary

Page 17:

So What?: Recursive Form

• Recursive processes are also of the form y = f(x1, x2).
• Let y mean the new value $x_{i+1}$ of some state variables that we are trying to estimate.
• Let x1 mean the last estimate of the state $x_i$.
• Let x2 mean the inputs $u_i$ that are required to compute the state.

$y = f(x_1, x_2) \quad\longrightarrow\quad x_{i+1} = f(x_i, u_i)$

with the correspondences $y \to x_{i+1}$, $x_1 \to x_i$, $x_2 \to u_i$.

Page 18:

5.2.2.1 Discrete Stochastic Integration

• Recall our result for covariance of a partitioned state vector:

$\Sigma_y = J_1 \Sigma_{11} J_1^T + J_2 \Sigma_{22} J_2^T$

• In our new notation, this becomes:

$\Sigma_{i+1} = J_x \Sigma_i J_x^T + J_u \Sigma_u J_u^T$

• In a more standard notation:

$P_{i+1} = \Phi\, P_i\, \Phi^T + \Gamma\, Q_i\, \Gamma^T$

where $P$ is the state uncertainty, $\Phi$ the transition matrix, $\Gamma$ the input Jacobian, and $Q$ the input uncertainty. This is covariance propagation in any decorrelated estimation process, and it is one of the equations of the Kalman filter.
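The propagation equation transcribes directly into code. A minimal sketch with placeholder matrices (the 2-state example values are made up):

```python
import numpy as np

def propagate(P, Phi, Gamma, Q):
    """One step of P_{i+1} = Phi P Phi^T + Gamma Q Gamma^T."""
    return Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T

# Placeholder 2-state example: identity transition, one noisy scalar input.
P = np.diag([0.1, 0.1])
Phi = np.eye(2)
Gamma = np.array([[1.0], [0.5]])
Q = np.array([[0.01]])
print(propagate(P, Phi, Gamma, Q))
```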

Page 19:

5.2.2.2 Example: Dead Reckoning (With Odometer Error Only)

State: $\underline{x}_i = \begin{bmatrix} x_i & y_i \end{bmatrix}^T$

Measurements: $\underline{u}_i = \begin{bmatrix} l_i & \theta_i \end{bmatrix}^T$

Update: $\underline{x}_{i+1} = f(\underline{x}_i, \underline{u}_i) = \begin{bmatrix} x_i + l_i\cos\theta_i \\ y_i + l_i\sin\theta_i \end{bmatrix}$

Page 20:

5.2.2.2 Example: Dead Reckoning (Jacobian)

Linearize:

$\Phi_i = \dfrac{\partial \underline{x}_{i+1}}{\partial \underline{x}_i} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad \Gamma_i = \dfrac{\partial \underline{x}_{i+1}}{\partial \underline{u}_i} = \begin{bmatrix} c_i & -l_i s_i \\ s_i & l_i c_i \end{bmatrix}$

where $c_i = \cos\theta_i$ and $s_i = \sin\theta_i$. Jacobians are functions of the present estimate and the present measurements.

Page 21:

5.2.2.2 Example: Dead Reckoning (Input Uncertainty)

• The uncertainty in the current position and measurements is:

$P_i = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{bmatrix}_i \qquad Q_i = \begin{bmatrix} \sigma_l^2 & 0 \\ 0 & 0 \end{bmatrix}$

Assume a perfect compass. Assume decorrelated errors.

Page 22:

5.2.2.2 Example: Dead Reckoning (Answer)

$P_{i+1} = \Phi\, P_i\, \Phi^T + \Gamma_i\, Q_i\, \Gamma_i^T = P_i + \begin{bmatrix} c_i^2 \sigma_l^2 & c_i s_i \sigma_l^2 \\ c_i s_i \sigma_l^2 & s_i^2 \sigma_l^2 \end{bmatrix}$

Note: the trace of $P_{i+1}$ increases monotonically.
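As a check on the answer, a sketch that propagates P along a hypothetical path (assumed odometer noise, constant heading from the perfect compass) and prints the growing trace:

```python
import numpy as np

sigma_l = 0.02            # assumed odometer noise per step, meters
P = np.zeros((2, 2))      # start with perfect knowledge of (x, y)

for i in range(100):
    theta = 0.3           # hypothetical constant heading (perfect compass)
    c, s = np.cos(theta), np.sin(theta)
    # P_{i+1} = P_i + [[c^2, c s], [c s, s^2]] * sigma_l^2   (Phi = I here)
    P += sigma_l**2 * np.array([[c * c, c * s],
                                [c * s, s * s]])

print(np.trace(P))        # grows monotonically: 100 * sigma_l^2
```

Since $c_i^2 + s_i^2 = 1$, each step adds exactly $\sigma_l^2$ to the trace, which is the monotonic growth the slide notes.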

Page 23:

5.2.2.3.1 Variance of a Continuous Random Walk

• Recall that for n summed iid random variables: $\sigma_y^2 = n\sigma_x^2$
• Suppose the x'es were velocities at times 1, 2, ..., n. Then, with $n = t/\Delta t$:

$\sigma_y^2 = \dfrac{\sigma_x^2}{\Delta t}\, t$

• But this means that $\sigma_y^2 \to \infty$ as $\Delta t \to 0$!
• That cannot be right; it would require infinite power.
• What is more realistic is variance that grows linearly wrt time: treat the ratio $\sigma_x^2 / \Delta t$ as a constant as $\Delta t \to 0$.

Page 24:

5.2.2.3.2 Integrating Stochastic Differential Equations

• Let's reinterpret our perturbative differential equation to mean a DE driven by random noise: $\dot{x} = F x + G w$, where $w$ is the noise.
• Define the covariances of the noise and the state. White noise is uncorrelated in time.
• We might be tempted to solve this using the vector convolution integral:

$x(t) = \Phi(t, t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, G(\tau)\, w(\tau)\, d\tau$

but Riemann says this integral does not converge.

Page 25:

Motivation for Stochastic Calculus

• An integral is a limit of a sum of products.
• The limit exists when the wiggles go away as you zoom in on a function.
• For a white random signal, the autocorrelation is zero, and the wiggles never go away at any zoom level.
• The integral or derivative of a white signal is meaningless.
  – So what is "stochastic calculus"?

Page 26:

Deterministic Statistics

• The statistics of a distribution of a random variable are deterministic quantities.
• i.e., s has a time derivative because s is not random.
• We will write differential equations for the statistics, not the random signals.

[Figure: an individual random walk signal (left) versus the variance of a zillion random walk signals (right).]

Page 27:

5.2.2.3.2 Integrating Stochastic Differential Equations

• Recall: we cannot integrate the convolution integral above because it fails the Riemann condition.
• Trick: introduce a differential random walk process $\beta(t)$ whose increment is $d\beta = w\, dt$.
• Now, integrate the resulting differential form instead.

Page 28:

5.2.2.3.2 Integrating Stochastic Differential Equations

• The expectation of the square of the last result is:

$P(t) = \Phi(t, t_0)\, P(t_0)\, \Phi^T(t, t_0) + \int_{t_0}^{t} \Phi(t, \tau)\, G(\tau)\, Q(\tau)\, G^T(\tau)\, \Phi^T(t, \tau)\, d\tau$

where $\Phi$ is the transition matrix, $G$ the input transition matrix, $Q$ the input covariance, $P(t_0)$ the initial state covariance, and $P(t)$ the state covariance.

Page 29:

5.2.2.3.4 Linear Variance Equation

• We can differentiate the last result to find the differential equation that is satisfied by:
  – The covariance matrix of a dynamical system
  – Driven by white noise

$\dot{P}(t) = F\, P(t) + P(t)\, F^T + G\, Q\, G^T$

The $GQG^T$ term usually leads to unbounded growth.
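A sketch that integrates this equation with simple Euler steps; the system matrices here are placeholders (a position state driven by noisy velocity), not from the slides:

```python
import numpy as np

# Euler integration of dP/dt = F P + P F^T + G Q G^T (placeholder system).
F = np.array([[0.0, 1.0], [0.0, 0.0]])   # position is the integral of velocity
G = np.array([[0.0], [1.0]])             # white noise enters the velocity
Q = np.array([[0.1]])                    # assumed noise density

P = np.zeros((2, 2))
dt = 0.001
for _ in range(int(10.0 / dt)):          # integrate for 10 seconds
    P += (F @ P + P @ F.T + G @ Q @ G.T) * dt

print(P)  # the G Q G^T term drives the growth; it never stops adding
```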

Page 30:

Outline

• 5.2 Random Variables, Processes and Transformation
  – 5.2.1 Variance of Continuous Integration and Averaging Processes
  – 5.2.2 Stochastic Integration
  – 5.2.3 Optimal Estimation
  – Summary

Page 31:

5.2.3.1 Maximum Likelihood Estimation

• Consider the problem of optimally estimating state from a series of measurements:
  – Let $x \in \Re^n$ denote the state and $z \in \Re^m$ denote the measurements.
  – Measurements relate to the state by a measurement matrix: $z = Hx + v$, where $v \sim N(0, R)$.
  – The measurements are assumed to be corrupted by a noise vector of covariance: $R = \mathrm{Exp}(v\, v^T)$

Page 32:

5.2.3.1 Maximum Likelihood Estimation

• The innovation $z - Hx$ ($= v$) is Gaussian by assumption, so...
• The probability of getting a measurement $z$ when the true state is $x$ is:

$p(z|x) \propto \exp\left[-\tfrac{1}{2}(z - Hx)^T R^{-1} (z - Hx)\right]$

• This exponential will be maximized when the form in the exponent (without the negative sign) is minimized.
• If the system is overdetermined, the solution is simply the weighted left pseudoinverse:

$\hat{x}^* = (H^T R^{-1} H)^{-1} H^T R^{-1} z$
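A sketch of the weighted left pseudoinverse on a hypothetical overdetermined system (three noisy measurements of one scalar state; all numbers made up):

```python
import numpy as np

def mle_estimate(H, R, z):
    """Weighted left pseudoinverse: x* = (H^T R^-1 H)^-1 H^T R^-1 z."""
    Ri = np.linalg.inv(R)
    info = H.T @ Ri @ H                  # information matrix (inverse covariance)
    return np.linalg.solve(info, H.T @ Ri @ z), np.linalg.inv(info)

H = np.array([[1.0], [1.0], [1.0]])      # each sensor measures the state directly
R = np.diag([4.0, 2.25, 1.0])            # unequal measurement variances
z = np.array([4.0, 6.0, 5.0])
x_star, P = mle_estimate(H, R, z)
print(x_star, P)                         # low-variance sensors dominate the answer
```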

Page 33:

5.2.3.1.1 Covariance of the MLE Estimate

• The weighted left pseudoinverse is just a function that maps $z$ onto $\hat{x}^*$, so let's define its Jacobian: $J = (H^T R^{-1} H)^{-1} H^T R^{-1}$
• Therefore the covariance of the MLE result is: $P = J\, R\, J^T$
• Which simplifies to (Equation 5.80):

$P = (H^T R^{-1} H)^{-1}$

• Note that this expression appears in the pseudoinverse.

Page 34:

5.2.3.2 Recursive Estimation of a Random Scalar

• Suppose:
  – The present state estimate $x$ has variance $\sigma_x^2$.
  – The measurement $z$ has variance $\sigma_z^2$.
  – We want to get a new state estimate $x'$ and its variance $\sigma_{x'}^2$.
• The trick used to derive a Kalman filter is to pretend the present estimate comes in as a measurement with the same covariance.
• The measurement relationship for (both) measurements is:

$\begin{bmatrix} z \\ x \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} x_{\text{true}} + v$

Page 35:

5.2.3.2 Recursive Estimation of a Random Scalar

• That means the associated measurement and covariance matrices are:

$H' = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad R' = \begin{bmatrix} \sigma_z^2 & 0 \\ 0 & \sigma_x^2 \end{bmatrix}$

• So, the weighted least squares solution is:

$x' = (H'^T R'^{-1} H')^{-1} H'^T R'^{-1} \begin{bmatrix} z \\ x \end{bmatrix}$

Page 36:

5.2.3.2 Recursive Estimation of a Random Scalar

• This simplifies to (Equation 5.81):

$x' = \dfrac{\sigma_z^2\, x + \sigma_x^2\, z}{\sigma_x^2 + \sigma_z^2}$

• And, the uncertainty in the new estimate is, from Equation 5.80 (Equation 5.83):

$\sigma_{x'}^2 = \dfrac{\sigma_x^2\, \sigma_z^2}{\sigma_x^2 + \sigma_z^2}$

• Which is the same as saying the new information is the sum of that of the measurement and state:

$\dfrac{1}{\sigma_{x'}^2} = \dfrac{1}{\sigma_x^2} + \dfrac{1}{\sigma_z^2}$
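Equations 5.81 and 5.83 in a few lines of code; the numeric values are placeholders (chosen to match the two-sensor example that follows):

```python
def fuse_scalar(x, var_x, z, var_z):
    """Equations 5.81 / 5.83: fuse a scalar estimate with a measurement."""
    x_new = (var_z * x + var_x * z) / (var_x + var_z)     # Eq. 5.81
    var_new = (var_x * var_z) / (var_x + var_z)           # Eq. 5.83
    return x_new, var_new

# Information-form check: 1/var' = 1/var_x + 1/var_z.
x_new, var_new = fuse_scalar(x=4.0, var_x=4.0, z=6.0, var_z=2.25)
print(x_new, var_new, 1.0 / (1.0 / 4.0 + 1.0 / 2.25))
```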

Page 37:

5.2.3.3 Example: Estimating Temperature from Two Sensors

• An ocean-going robot has to measure water temperature using two sensors.
• One of the measurements is $z_1 = 4$ with variance $\sigma_{z_1}^2 = 2^2$. Therefore $p(x|z_1)$ is as shown: [Figure: Gaussian centered at 4 with standard deviation 2.]

Page 38:

5.2.3.3 Example: Estimating Temperature from Two Sensors

• Suppose the other measurement is $z_2 = 6$ with variance $\sigma_{z_2}^2 = (1.5)^2$. Therefore $p(x|z_2)$ is as shown: [Figure: Gaussian centered at 6 with standard deviation 1.5.]

Page 39:

5.2.3.3 Example: Estimating Temperature from Two Sensors

• Application of Equations 5.81 and 5.83 gives the fused estimate.
• The result is denoted graphically as follows: [Figure: the fused Gaussian lies between the two measurements and is narrower than either.]
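For concreteness, the arithmetic behind that figure (computed here from the two previous slides, with $x = z_1$, $\sigma_x^2 = \sigma_{z_1}^2$ in Equations 5.81 and 5.83):

$x' = \dfrac{\sigma_{z_2}^2\, z_1 + \sigma_{z_1}^2\, z_2}{\sigma_{z_1}^2 + \sigma_{z_2}^2} = \dfrac{2.25 \cdot 4 + 4 \cdot 6}{4 + 2.25} = \dfrac{33}{6.25} = 5.28$

$\sigma_{x'}^2 = \dfrac{\sigma_{z_1}^2\, \sigma_{z_2}^2}{\sigma_{z_1}^2 + \sigma_{z_2}^2} = \dfrac{4 \cdot 2.25}{6.25} = 1.44 = (1.2)^2$

The fused variance (1.44) is smaller than that of either sensor alone, as expected when merging redundant measurements.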

Page 40:

5.2.3.2 Recursive Estimation of a Random Vector

• Suppose:
  – The present state estimate $x$ has covariance $P$.
  – The measurement $z$ has covariance $R$.
  – We want to get a new state estimate $x'$ and its covariance $P'$.
• The measurement relationship for (both) measurements is:

$\begin{bmatrix} z \\ x \end{bmatrix} = \begin{bmatrix} H \\ I \end{bmatrix} x_{\text{true}} + v$

Page 41:

5.2.3.2 Recursive Estimation of a Random Vector

• That means the associated measurement and covariance matrices are:

$H' = \begin{bmatrix} H \\ I \end{bmatrix} \qquad R' = \mathrm{diag}([R \;\; P]) = \begin{bmatrix} R & 0 \\ 0 & P \end{bmatrix}$

• Having any measurement means the system is overdetermined.
• So, the weighted least squares solution is:

$x' = (H'^T R'^{-1} H')^{-1} H'^T R'^{-1} \begin{bmatrix} z \\ x \end{bmatrix}$

Page 42:

5.2.3.2 Recursive Estimation of a Random Vector

• Invert the covariance matrix on the right: $R'^{-1} = \begin{bmatrix} R^{-1} & 0 \\ 0 & P^{-1} \end{bmatrix}$
• Simplify the quadratic form on the left: $H'^T R'^{-1} H' = H^T R^{-1} H + P^{-1}$
• Multiply out the product on the right:

$x' = (H^T R^{-1} H + P^{-1})^{-1}\, (H^T R^{-1} z + P^{-1} x)$

• This looks like an inverse covariance weighted average.

Page 43:

5.2.3.4.1 Efficient State Update

• Apply the Matrix Inversion Lemma, which states:

$(P^{-1} + H^T R^{-1} H)^{-1} = P - P H^T (H P H^T + R)^{-1} H P$

• Substituting gives (Equation A):

$P' = P - P H^T (H P H^T + R)^{-1} H P$

• Where we define the innovation covariance: $S = H P H^T + R$
• Define the Kalman Gain: $K = P H^T S^{-1}$
• Which gives the famous result:

$x' = x + K(z - Hx)$
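The famous result, plus the covariance update derived two pages below, fits in one small function. A minimal sketch with placeholder matrices (observing the first of two states; values assumed):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Kalman measurement update: innovation covariance, gain, state, covariance."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ (z - H @ x)               # x' = x + K (z - H x)
    P_new = (np.eye(len(x)) - K @ H) @ P      # P' = (I - K H) P
    return x_new, P_new

x = np.array([0.0, 1.0])
P = np.diag([1.0, 1.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
print(kf_update(x, P, z=np.array([0.4]), H=H, R=R))
```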

Page 44:

Information Weighted Average

• Once again, the result is: $x' = x + K(z - Hx)$
• Multiply that by $PP^{-1}$ to expose the prior information $P^{-1}$ weighting the old estimate against the measurement information weighting the innovation.
• So, the Kalman Filter is computing an information weighted average of the prior state and the innovation.

Page 45:

5.2.3.4.2 Covariance Update

• Recall the MLE covariance (Equation 5.80): $P = (H^T R^{-1} H)^{-1}$
• Consider again (Equation 5.85): $P' = (H^T R^{-1} H + P^{-1})^{-1}$
• So, the first part in brackets is just the inverse of the MLE covariance $[H^T R^{-1} H]^{-1}$.
• Substitute the Kalman Gain into Equation A: $P' = P - K H P$
• To get the final form of the covariance update:

$P' = (I - KH)\, P$

Page 46:

5.2.3.4.3 Covariance Update for Direct Measurements

• When H = I, the sensor measures the state directly, so information adds directly:

$(P')^{-1} = P^{-1} + R^{-1}$

• Suppose a prior covariance $P_0$, and consider a sequence of such direct measurements with covariances $R_1, R_2, \dots$: each one adds its information $R_k^{-1}$ to the running total.

Page 47:

5.2.3.4.3 Covariance Update for Direct Measurements

• Regardless of the measurements themselves (linear case), the covariance evolves as: $(P_k)^{-1} = P_0^{-1} + \sum_{j=1}^{k} R_j^{-1}$.
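A scalar sketch of that covariance sequence (prior and measurement variances are assumed values):

```python
# With H = I, information adds: 1/P' = 1/P + 1/R (scalar case).
P = 1.0e6          # essentially no prior information (assumed)
R = 4.0            # each direct measurement has variance 4 (assumed)

for k in range(1, 6):
    P = 1.0 / (1.0 / P + 1.0 / R)   # covariance after the k-th measurement
    print(k, P)                      # approaches R / k, just like averaging
```

With a negligible prior, the covariance after k measurements is approximately R/k, recovering the recursive averaging result from earlier in the section.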

Page 48:

5.2.3.5 Nonlinear Optimal Estimation

• When the measurements are related to the state nonlinearly: $z = h(x) + v$
• We simply use nonlinear weighted least squares. That means we make one substitution: replace $H$ with the measurement Jacobian $H = \partial h / \partial x$, evaluated at the current estimate.
• Whereupon the Kalman Filter becomes the Extended Kalman Filter.
  – Which is no longer optimal, but is nonetheless super useful.
  – Easily the estimation equivalent of PID control.
  – The KF is just a special case of the EKF.
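The one substitution in code: a sketch where the Jacobian of a hypothetical range sensor, $h(x) = \sqrt{x^2 + y^2}$ (not from the slides), replaces $H$ in the same update:

```python
import numpy as np

def h(x):
    """Hypothetical nonlinear measurement: range to the origin."""
    return np.array([np.hypot(x[0], x[1])])

def H_jacobian(x):
    """dh/dx evaluated at the current estimate: the EKF substitution."""
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r]])

x = np.array([3.0, 4.0])
P = np.diag([0.5, 0.5])
R = np.array([[0.1]])
z = np.array([5.2])                      # hypothetical range measurement

H = H_jacobian(x)                        # relinearize at the current estimate
S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
x = x + K @ (z - h(x))                   # innovation uses h(x), not H x
P = (np.eye(2) - K @ H) @ P
print(x, P)
```

Note the innovation is computed with the nonlinear $h(x)$ itself; only the gain and covariance use the linearization, which is why the result is no longer optimal.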

Page 49:

Outline

• 5.2 Random Variables, Processes and Transformation
  – 5.2.1 Variance of Continuous Integration and Averaging Processes
  – 5.2.2 Stochastic Integration
  – 5.2.3 Optimal Estimation
  – Summary

Page 50:

Summary

• Compounding (adding) noisy measurements leads to a result with more noise.
• Merging (filtering) noisy redundant measurements leads to a result with less noise.
• Kalman Filters are just recursive weighted least squares estimators.
  – That and the matrix inversion lemma is all it takes to derive one.
  – We will shortly see that they are applicable to dynamical systems.