Lecture 21: Graph-SLAM

Dr. J.B. Hayet

CENTRO DE INVESTIGACIÓN EN MATEMÁTICAS

J.B. Hayet, Probabilistic robotics, April 2014 (41 slides)


Online SLAM and full SLAM

The first estimates the posterior on X_t = (R_t, M)^T,

p(X_t | Z_{1:t}, U_{1:t}),

while the second estimates a posterior over whole trajectories,

p(R_{1:t}, M | Z_{1:t}, U_{1:t}),

with different resolution techniques. The two are related by

p(X_t | Z_{1:t}, U_{1:t}) = \int_{R_{t-1}} \dots \int_{R_1} p(R_{1:t}, M | Z_{1:t}, U_{1:t}) \, dR_1 \dots dR_{t-1}.


Full SLAM

EKF-SLAM is an online SLAM algorithm: it gives an estimate at each iteration.

Today's algorithm is a full SLAM algorithm, which estimates

p(X_{0:t} | Z_{1:t}, U_{1:t}),

a joint estimation performed only after some time t: one gets the map and the trajectory only after the run (which is the main limitation).


Full SLAM

The state vector X_{0:t} stacks the positions along the trajectory and the positions of the map elements,

X_{0:t} = (R_0, R_1, \dots, R_t, M)^T \quad \text{and} \quad X_t = (R_t, M)^T.

The posterior we want to estimate is p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}).


Full SLAM: factorization

In a first step, we can expose the likelihood of the last observation with the Bayes rule:

p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) = \eta \, p(Z_t | X_{0:t}, Z_{1:t-1}, U_{1:t}, C_{1:t}) \, p(X_{0:t} | Z_{1:t-1}, U_{1:t}, C_{1:t})
= \eta \, p(Z_t | X_t, C_t) \, p(X_{0:t} | Z_{1:t-1}, U_{1:t}, C_{1:t}),

and, applying the Bayes rule again to the second term,

p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) = \eta \, p(Z_t | X_t, C_t) \, p(R_t | X_{0:t-1}, Z_{1:t-1}, U_{1:t}, C_{1:t}) \, p(X_{0:t-1} | Z_{1:t-1}, U_{1:t}, C_{1:t})
= \eta \, p(Z_t | X_t, C_t) \, p(R_t | R_{t-1}, U_t) \, p(X_{0:t-1} | Z_{1:t-1}, U_{1:t}, C_{1:t}).


Full SLAM: factorization

By recursion, applying the same factorization to each consecutive term,

p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) = \eta \, p(X_0) \prod_{\tau=1}^{t} p(R_\tau | R_{\tau-1}, U_\tau) \prod_{i=1}^{o_\tau} p(Z^i_\tau | X_\tau, C^i_\tau),

and since in general we have no a priori knowledge on the map, we can assume a uniform prior on it and absorb it into the normalization constant:

p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) = \eta \, p(R_0) \prod_{\tau=1}^{t} p(R_\tau | R_{\tau-1}, U_\tau) \prod_{i=1}^{o_\tau} p(Z^i_\tau | X_\tau, C^i_\tau).


Full SLAM: factorization

Now, using log-likelihoods, with l_{0:t} = -\log p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}):

l_{0:t} = \text{const.} - \log p(R_0) - \sum_{\tau=1}^{t} \left[ \log p(R_\tau | R_{\tau-1}, U_\tau) + \sum_{i=1}^{o_\tau} \log p(Z^i_\tau | X_\tau, C^i_\tau) \right].

Up to now, everything is in closed form (we used only the Markov assumption).


Full SLAM: factorization

The next step is possible when the observation and motion models have closed form with additive Gaussian noise, as in most cases we have seen:

R_\tau = g(R_{\tau-1}, U_\tau) + \nu_R, with \nu_R \sim N(0, \Sigma_R),
Z^i_\tau = h(X_\tau, C^i_\tau) + \nu_M, with \nu_M \sim N(0, \Sigma_M),

i.e.

p(R_\tau | R_{\tau-1}, U_\tau) = \eta \, e^{-\frac{1}{2}(R_\tau - g(R_{\tau-1}, U_\tau))^T \Sigma_R^{-1} (R_\tau - g(R_{\tau-1}, U_\tau))},

p(Z^i_\tau | X_\tau, C^i_\tau) = \eta \, e^{-\frac{1}{2}(Z^i_\tau - h(X_\tau, C^i_\tau))^T \Sigma_M^{-1} (Z^i_\tau - h(X_\tau, C^i_\tau))}.


Full SLAM: factorization

One deduces:

l_{0:t} = \text{const.} + R_0^T \Omega_0 R_0
+ \frac{1}{2} \sum_{\tau=1}^{t} (R_\tau - g(R_{\tau-1}, U_\tau))^T \Sigma_R^{-1} (R_\tau - g(R_{\tau-1}, U_\tau))
+ \frac{1}{2} \sum_{\tau=1}^{t} \sum_{i=1}^{o_\tau} (Z^i_\tau - h(X_\tau, C^i_\tau))^T \Sigma_M^{-1} (Z^i_\tau - h(X_\tau, C^i_\tau)),

considering p(R_0) = \eta \, e^{-\frac{1}{2} R_0^T \Omega_0 R_0}, where, in order to impose the constraint that the initial position is (0, 0, 0)^T,

\Omega_0 = \begin{pmatrix} \infty & 0 & 0 \\ 0 & \infty & 0 \\ 0 & 0 & \infty \end{pmatrix}

(\infty standing for some very large number).
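This sum-of-quadratic-terms structure can be checked on a toy 1D instance (all numbers and model choices below are hypothetical: a scalar state, motion g(R, u) = R + u, observation h(R, m) = m - R, and a large diagonal value standing for the \Omega_0 "infinity"):

```python
import numpy as np

# Toy 1D setup (illustrative): motion g(R, u) = R + u, observation h(R, m) = m - R.
sigma_R, sigma_M = 0.1, 0.2           # assumed motion / measurement std-devs
omega0 = 1e6                          # "infinity" anchoring R0 at the origin

def neg_log_likelihood(R, m, U, Z):
    """l_{0:t} up to a constant: anchor + motion + observation quadratic terms."""
    l = omega0 * R[0]**2                      # R0^T Omega0 R0
    for tau in range(1, len(R)):              # motion constraints
        r = R[tau] - (R[tau - 1] + U[tau - 1])
        l += 0.5 * r**2 / sigma_R**2
    for tau, z in enumerate(Z):               # one landmark observation per pose
        r = z - (m - R[tau])
        l += 0.5 * r**2 / sigma_M**2
    return l

U = [1.0]                             # one control
Z = [2.0, 1.0]                        # landmark at 2, seen from R0 = 0 and R1 = 1
l_true = neg_log_likelihood([0.0, 1.0], 2.0, U, Z)   # all residuals vanish
l_bad  = neg_log_likelihood([0.0, 1.3], 2.0, U, Z)   # perturbed trajectory
```

The noise-free trajectory zeroes every quadratic term, while any perturbation is penalized through the Mahalanobis weights.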


Full SLAM and Graph-SLAM

In this full SLAM posterior, one can note that:

- we have a sum of quadratic constraints;
- there are just two kinds of constraints:
  - between consecutive positions (controls),
  - between positions and map elements (observations);
- each constraint is a Mahalanobis distance (weighted by the corresponding covariance matrix);
- the whole set of constraints forms a sparse graph.


Graph-SLAM

[From Probabilistic Robotics, MIT Press]


Graph-SLAM

The idea of Graph-SLAM is to use the previous expression to

- first, get the marginal over trajectories R_{0:t};
- then, get the map M.

But to go further, we need to linearize the functions g and h around a mean \mu_{0:t} of X_{0:t}. We will also suppose that the correspondences are known.


Graph-SLAM: linearization

g(R_{\tau-1}, U_\tau) \approx g(\mu^R_{\tau-1}, U_\tau) + G_\tau (R_{\tau-1} - \mu^R_{\tau-1}),

h(X_\tau, C^i_\tau) \approx h(\mu_\tau, C^i_\tau) + H^i_\tau (X_\tau - \mu_\tau),

where

G_\tau = \frac{\partial g}{\partial R_{\tau-1}}(\mu^R_{\tau-1}, U_\tau)

and

H^i_\tau = \frac{\partial h}{\partial X_\tau}(\mu_\tau, C^i_\tau).


Graph-SLAM: linearization

The log-likelihood l_{0:t} can be rewritten as:

l_{0:t} = \text{const.} + R_0^T \Omega_0 R_0
+ \frac{1}{2} \sum_{\tau=1}^{t} (R_\tau - g(\mu^R_{\tau-1}, U_\tau) - G_\tau(R_{\tau-1} - \mu^R_{\tau-1}))^T \Sigma_R^{-1} (R_\tau - g(\mu^R_{\tau-1}, U_\tau) - G_\tau(R_{\tau-1} - \mu^R_{\tau-1}))
+ \frac{1}{2} \sum_{\tau=1}^{t} \sum_{i=1}^{o_\tau} (Z^i_\tau - h(\mu_\tau, C^i_\tau) - H^i_\tau(X_\tau - \mu_\tau))^T \Sigma_M^{-1} (Z^i_\tau - h(\mu_\tau, C^i_\tau) - H^i_\tau(X_\tau - \mu_\tau)),

which has several quadratic terms in the sub-elements of X_\tau.


Graph-SLAM: linearization

Developing l_{0:t}, there remains:

l_{0:t} = \text{const.} + R_0^T \Omega_0 R_0
+ \frac{1}{2} \sum_{\tau=1}^{t} (R_\tau - G_\tau R_{\tau-1})^T \Sigma_R^{-1} (R_\tau - G_\tau R_{\tau-1})
- \sum_{\tau=1}^{t} (R_\tau - G_\tau R_{\tau-1})^T \Sigma_R^{-1} (g(\mu^R_{\tau-1}, U_\tau) - G_\tau \mu^R_{\tau-1})
+ \frac{1}{2} \sum_{\tau=1}^{t} \sum_{i=1}^{o_\tau} X_\tau^T (H^i_\tau)^T \Sigma_M^{-1} H^i_\tau X_\tau
- \sum_{\tau=1}^{t} \sum_{i=1}^{o_\tau} X_\tau^T (H^i_\tau)^T \Sigma_M^{-1} (Z^i_\tau - h(\mu_\tau, C^i_\tau) + H^i_\tau \mu_\tau),

with quadratic and linear terms in the state.


Graph-SLAM: linearization

Finally, we can write l_{0:t} as

l_{0:t} = \text{const.} + X_{0:t}^T \Omega X_{0:t} + X_{0:t}^T \xi,

where \Omega and \xi (the information matrix and the information vector) are read off directly from the previous formula. Graph-SLAM takes advantage of the additive nature of these two quantities to update them.


Graph-SLAM: linearization

In the previous equation:

- the joint posterior is a Gaussian, written in its information form;
- its parameters can be easily identified;
- even better, these parameters have a convenient additive form: as t increases, we can update them incrementally, each control and each observation having its own separate contribution.


Graph-SLAM: linearization

While the robot is running:

1. Initialize \Omega = \Omega_0 (sized accordingly; \Omega_0 is relative to R_0) and \xi = 0.

2. At each \tau, integrate the controls:

\Omega \leftarrow \Omega + \begin{pmatrix} -G_\tau^T \\ I \end{pmatrix} \Sigma_R^{-1} \begin{pmatrix} -G_\tau & I \end{pmatrix},

\xi \leftarrow \xi + \begin{pmatrix} -G_\tau^T \\ I \end{pmatrix} \Sigma_R^{-1} \left[ g(\mu^R_{\tau-1}, U_\tau) - G_\tau \mu^R_{\tau-1} \right].

3. At each \tau, integrate the observations:

\Omega \leftarrow \Omega + (H^i_\tau)^T \Sigma_M^{-1} H^i_\tau,

\xi \leftarrow \xi + (H^i_\tau)^T \Sigma_M^{-1} (Z^i_\tau - h(\mu_\tau, C^i_\tau) + H^i_\tau \mu_\tau).
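The three steps above can be sketched on a minimal linear 1D instance (all models and numbers are hypothetical: state x = (R_0, R_1, m), motion g(R, u) = R + u so G = 1, observation h(R, m) = m - R so H = (-1, 1)). Because the models are linear, the accumulated (\Omega, \xi) already yield the exact mean.

```python
import numpy as np

SR, SM = 0.1**2, 0.2**2               # Sigma_R, Sigma_M (assumed scalar variances)
n = 3                                 # state: (R0, R1, m)
Omega, xi = np.zeros((n, n)), np.zeros(n)
Omega[0, 0] = 1e6                     # step 1: Omega0 anchors R0 at the origin

# Step 2: integrate the control U1 = 1.0 between R0 and R1.
u1, mu_R0 = 1.0, 0.0
A = np.array([[-1.0, 1.0]])           # (-G, I) on the (R0, R1) block
Omega[:2, :2] += A.T @ A / SR
xi[:2] += (A.T / SR).ravel() * ((mu_R0 + u1) - 1.0 * mu_R0)   # g(mu, U) - G mu

# Step 3: integrate observations z = m - R from both poses.
for pose, z in [(0, 2.0), (1, 1.0)]:
    H = np.zeros((1, n))
    H[0, pose], H[0, 2] = -1.0, 1.0
    Omega += H.T @ H / SM
    xi += (H.T / SM).ravel() * z      # linear model: z - h(mu) + H mu = z

mu = np.linalg.solve(Omega, xi)       # recover the mean (anticipating later slides)
```

With consistent measurements (landmark at 2, poses at 0 and 1), the recovered mean is (0, 1, 2).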


Graph-SLAM: linearization

In these three cases, the updates only touch local sub-blocks, since all additive terms are aggregated locally in the information matrix:

1. initialization: the sub-matrix relative to R_0;
2. controls: the sub-matrices relative to R_{\tau-1} and R_\tau;
3. observations: the sub-matrices relative to R_\tau and M_i (but no sub-matrix relative to a pair (M_i, M_j) jointly).

These steps are computationally simple.


Graph-SLAM: linearization

In the case of 2D landmarks:

- controls are taken into account by adding 4 sub-matrices of size 3×3;
- observations are taken into account by adding 4 sub-matrices of sizes 3×3, 3×2, 2×3, and 2×2.

The total time of this step is linear in t.


Graph-SLAM: mean µ0:t

Which mean \mu_{0:t} should we consider?

- for the trajectory (the \mu^R_\tau), integrate the controls;
- for the map, start with some \mu_i obtained, for example, from the \mu^R_\tau and the inverse measurement models;
- we will refine the \mu_i along the iterations,

\mu_{0:t} = (\mu^R_0, \mu^R_1, \dots, \mu^R_t, \mu_1, \dots, \mu_N)^T.


Graph-SLAM: mean µ0:t

To start with, integrate the controls without handling the observations:

- initialize at (0, 0, 0)^T;
- recursively, \mu^R_\tau = g(\mu^R_{\tau-1}, U_\tau).

This may drift a lot by the end of the run, in which case Graph-SLAM will require more time to converge.


Graph-SLAM: mean µ0:t

Another option is to run an extended Kalman filter while the robot explores the environment, and use its final result for the first series of linearizations.

More precise, but more costly.


Graph-SLAM: reduction

The problem now is to get the mean and covariance that correspond to the information matrix and vector,

\Sigma = \Omega^{-1}, \quad \mu = \Omega^{-1} \xi,

which is very costly (dimension 3t + 2N).

The second step therefore consists in reducing the problem to make it computationally simpler to handle: we will marginalize the posterior over all possible maps.


Graph-SLAM: reduction

The algorithm uses the following result: for a Gaussian distribution over a joint state vector (x, y), represented by its information matrix and vector

\Omega = \begin{pmatrix} \Omega_{xx} & \Omega_{xy} \\ \Omega_{yx} & \Omega_{yy} \end{pmatrix}, \quad \xi = \begin{pmatrix} \xi_x \\ \xi_y \end{pmatrix},

the marginal on x is given by

\tilde{\Omega}_{xx} = \Omega_{xx} - \Omega_{xy} \Omega_{yy}^{-1} \Omega_{yx},

\tilde{\xi}_x = \xi_x - \Omega_{xy} \Omega_{yy}^{-1} \xi_y

(see the proof on p. 359; it follows from the matrix inversion lemma).
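This Schur-complement marginalization can be verified numerically against the moments form: the marginal of a Gaussian in (\mu, \Sigma) form is simply the corresponding sub-block of \Sigma and sub-vector of \mu (the random matrix below is an arbitrary test case, not from the slides).

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 2, 3
A = rng.standard_normal((nx + ny, nx + ny))
Omega = A @ A.T + (nx + ny) * np.eye(nx + ny)   # random SPD information matrix
xi = rng.standard_normal(nx + ny)

Oxx, Oxy = Omega[:nx, :nx], Omega[:nx, nx:]
Oyx, Oyy = Omega[nx:, :nx], Omega[nx:, nx:]

# Marginal on x in information form (Schur complement).
Ot = Oxx - Oxy @ np.linalg.solve(Oyy, Oyx)
xit = xi[:nx] - Oxy @ np.linalg.solve(Oyy, xi[nx:])

# Reference: moments form. The marginal keeps the (x, x) block of Sigma
# and the x part of mu unchanged.
Sigma = np.linalg.inv(Omega)
mu = Sigma @ xi
```

The inverse of the Schur complement must equal \Sigma_{xx}, and \tilde{\Omega}_{xx}^{-1} \tilde{\xi}_x must equal \mu_x.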


Graph-SLAM: reduction

From this property, we deduce the parameters of the marginal with respect to the trajectory:

\tilde{\Omega} = \Omega_{R_{0:t}, R_{0:t}} - \Omega_{R_{0:t} M} \Omega_{MM}^{-1} \Omega_{M R_{0:t}},

\tilde{\xi} = \xi_{R_{0:t}} - \Omega_{R_{0:t} M} \Omega_{MM}^{-1} \xi_M.

The important point is that this formula is relatively simple to compute.


Graph-SLAM: reduction

Note that \Omega_{MM} is block-diagonal:

\Omega_{MM} = \begin{pmatrix} \Omega_{m_1 m_1} & 0 & \dots & 0 \\ 0 & \Omega_{m_2 m_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \Omega_{m_N m_N} \end{pmatrix},

and then

\Omega_{MM}^{-1} = \begin{pmatrix} \Omega_{m_1 m_1}^{-1} & 0 & \dots & 0 \\ 0 & \Omega_{m_2 m_2}^{-1} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \Omega_{m_N m_N}^{-1} \end{pmatrix},

or, equivalently, \Omega_{MM}^{-1} = \sum_{i=1}^{N} F_i^T \Omega_{m_i m_i}^{-1} F_i.
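The block-wise inverse can be checked directly: inverting each 2×2 landmark block and re-assembling them through the selection matrices F_i reproduces the full inverse (the block values below are arbitrary toy numbers).

```python
import numpy as np

# Three 2D landmarks: Omega_MM is block-diagonal with 2x2 blocks (toy numbers).
blocks = [np.array([[4.0, 1.0], [1.0, 3.0]]),
          np.array([[5.0, 0.5], [0.5, 2.0]]),
          np.array([[3.0, 1.0], [1.0, 4.0]])]
N = len(blocks)
Omega_MM = np.zeros((2 * N, 2 * N))
for i, B in enumerate(blocks):
    Omega_MM[2 * i:2 * i + 2, 2 * i:2 * i + 2] = B

# Inverse assembled block by block via the selection matrices F_i.
inv_blockwise = np.zeros_like(Omega_MM)
for i, B in enumerate(blocks):
    F = np.zeros((2, 2 * N))
    F[0, 2 * i], F[1, 2 * i + 1] = 1.0, 1.0
    inv_blockwise += F.T @ np.linalg.inv(B) @ F
```

Inverting N small blocks costs O(N) instead of the cubic cost of a dense inverse, which is what makes the reduction cheap.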


Graph-SLAM: reduction

Here F_i selects the i-th landmark:

F_i = \begin{pmatrix} 0 & 0 & \dots & 0 & 1 & 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 & 0 & 1 & 0 & \dots & 0 \end{pmatrix}.

By the structure of \Omega_{R_{0:t} M}, one deduces

\tilde{\Omega} = \Omega_{R_{0:t}, R_{0:t}} - \sum_{i=1}^{N} \Omega_{R_{0:t} m_i} \Omega_{m_i m_i}^{-1} \Omega_{m_i R_{0:t}}.

The matrix \Omega_{m_i R_{0:t}} depends on how many times, and from which positions, landmark i has been seen.


Graph-SLAM: reduction

Similarly, for the information vector,

\tilde{\xi} = \xi_{R_{0:t}} - \sum_{i=1}^{N} \Omega_{R_{0:t} m_i} \Omega_{m_i m_i}^{-1} \xi_{m_i},

with the same dependencies on N and on the number of times each landmark has been seen.


Graph-SLAM: reduction

We can simplify the reduction algorithm even more:

1. Set \tilde{\Omega} = \Omega and \tilde{\xi} = \xi.
2. For each feature i:
   - define the set \tau(i) of the times \tau at which feature i has been seen (by definition, it is non-empty);
   - update \tilde{\Omega} and \tilde{\xi}:

   \tilde{\Omega} = \tilde{\Omega} - \tilde{\Omega}_{R_{\tau(i)} m_i} \tilde{\Omega}_{m_i m_i}^{-1} \tilde{\Omega}_{m_i R_{\tau(i)}}

   and

   \tilde{\xi} = \tilde{\xi} - \tilde{\Omega}_{R_{\tau(i)} m_i} \tilde{\Omega}_{m_i m_i}^{-1} \tilde{\xi}_{m_i};

   - remove from \tilde{\Omega} and \tilde{\xi} all rows/columns corresponding to feature i.
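Eliminating the features one at a time gives the same result as marginalizing them all at once. A minimal sketch, assuming scalar "landmark" variables for simplicity and an arbitrary SPD test matrix (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
nR, nM = 4, 3                          # trajectory dims, landmark dims (toy sizes)
A = rng.standard_normal((nR + nM, nR + nM))
Omega = A @ A.T + (nR + nM) * np.eye(nR + nM)   # random SPD information matrix
xi = rng.standard_normal(nR + nM)

# Reference: marginalize all landmark variables at once (joint Schur complement).
Oyy = Omega[nR:, nR:]
Ot_ref = Omega[:nR, :nR] - Omega[:nR, nR:] @ np.linalg.solve(Oyy, Omega[nR:, :nR])
xit_ref = xi[:nR] - Omega[:nR, nR:] @ np.linalg.solve(Oyy, xi[nR:])

# Sequential: eliminate the last (scalar) variable, repeatedly.
O, x = Omega.copy(), xi.copy()
for _ in range(nM):
    a, b, c = O[:-1, :-1], O[:-1, -1:], O[-1, -1]
    x = x[:-1] - (b / c).ravel() * x[-1]   # xi update for one eliminated variable
    O = a - b @ b.T / c                    # Omega update, then drop the row/column
```

The equivalence holds in general; the block-diagonal structure of \Omega_{MM} is what makes each individual elimination cheap.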


Graph-SLAM: reduction

This way, we obtain a computation of (\tilde{\xi}, \tilde{\Omega}) which is not too complex. The complexity depends on how many times the landmarks have been seen; in the worst case it is Nt (again, linear in t).

This marginalization can be interpreted as a process of variable elimination.


Graph-SLAM: reduction

[From Probabilistic Robotics, MIT Press]


Graph-SLAM: trajectory estimation

This step is natural:

\tilde{\Sigma} = \tilde{\Omega}^{-1}, \quad \tilde{\mu} = \tilde{\Sigma} \tilde{\xi},

but it is the most costly one, because of the matrix inversion (3t × 3t), roughly in t^{2.6}.


Graph-SLAM: map estimation

For the map, we use:

p(X_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) = p(R_{0:t} | Z_{1:t}, U_{1:t}, C_{1:t}) \, p(M | Z_{1:t}, U_{1:t}, R_{0:t}, C_{1:t}).

The reduction estimates the first factor; it remains to estimate the conditional p(M | Z_{1:t}, U_{1:t}, R_{0:t}, C_{1:t}).


Graph-SLAM: map estimation

We use another known result: for a Gaussian over a joint state vector (x, y), represented by its information matrix and vector

\Omega = \begin{pmatrix} \Omega_{xx} & \Omega_{xy} \\ \Omega_{yx} & \Omega_{yy} \end{pmatrix}, \quad \xi = \begin{pmatrix} \xi_x \\ \xi_y \end{pmatrix},

the conditional of x given y can be written through

\Omega_{x|y} = \Omega_{xx},

\xi_{x|y} = \xi_x - \Omega_{xy} \, y

(see the proof on p. 360).
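This conditioning rule can also be verified numerically against the usual moments-form conditioning \mu_{x|y} = \mu_x + \Sigma_{xy} \Sigma_{yy}^{-1} (y - \mu_y) (the random test matrices are arbitrary, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ny = 2, 2
A = rng.standard_normal((nx + ny, nx + ny))
Omega = A @ A.T + (nx + ny) * np.eye(nx + ny)   # random SPD information matrix
xi = rng.standard_normal(nx + ny)
y = rng.standard_normal(ny)                     # value we condition on

# Conditional in information form: Omega_{x|y} = Omega_xx, xi_{x|y} = xi_x - Omega_xy y.
O_cond = Omega[:nx, :nx]
xi_cond = xi[:nx] - Omega[:nx, nx:] @ y
mu_cond_info = np.linalg.solve(O_cond, xi_cond)

# Reference: moments form.
Sigma = np.linalg.inv(Omega)
mu = Sigma @ xi
mu_cond_ref = mu[:nx] + Sigma[:nx, nx:] @ np.linalg.solve(Sigma[nx:, nx:], y - mu[nx:])
```

Note that the conditional information matrix is just a sub-block of \Omega, with no inversion needed; this is exactly why conditioning is cheap in information form.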


Graph-SLAM: map estimation

Hence, we can easily deduce the conditional on the map, given the trajectory (y being the trajectory):

\Omega_{M|R_{0:t}} = \Omega_{MM},

and

\xi_{M|R_{0:t}} = \xi_M - \Omega_{M R_{0:t}} R_{0:t},

where \xi_M, \Omega_{MM} and \Omega_{M R_{0:t}} come from the joint estimation.


Graph-SLAM: map estimation

But we do not know the true trajectory, only its distribution. Hence, to get an estimate of the mean map, we need a deterministic version of the trajectory, e.g. its mean:

\xi_{M|R_{0:t}} = \xi_M - \Omega_{M R_{0:t}} \tilde{\mu},

i.e. the trick is to take the conditional p(M | Z_{1:t}, U_{1:t}, \tilde{\mu}, C_{1:t}) instead of the marginal p(M | Z_{1:t}, U_{1:t}, C_{1:t}), which cannot be factorized so easily.


Graph-SLAM: map estimation

A consequence is that we do not estimate the covariance on M, nor on the joint vector X_{0:t}; we only estimate the mean of X_{0:t}.

We do have a covariance on R_{0:t}.

The three steps are repeated as many times as necessary to converge.


Graph-SLAM: map estimation

Finally,

\Sigma_{m_j m_j} = \Omega_{m_j m_j}^{-1},

and

\mu_j = \Sigma_{m_j m_j} (\xi_{m_j} - \Omega_{m_j R_{0:t}} \tilde{\mu}),

and we go for another iteration.
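The landmark-by-landmark recovery can be illustrated on a small linear joint system (hypothetical scalar constraints: two poses, two landmarks, so that \Omega_{MM} is diagonal). In the linear case the conditional means coincide with the joint mean \Omega^{-1} \xi:

```python
import numpy as np

# Joint information form for x = (R0, R1, m1, m2), built from scalar constraints
# so that Omega_MM is diagonal (no landmark-landmark links).
Omega, xi = np.zeros((4, 4)), np.zeros(4)
Omega[0, 0] += 1e6                                  # anchor R0 = 0
for i, j, d in [(0, 1, 1.0)]:                       # odometry: R1 - R0 = 1
    A = np.zeros(4)
    A[i], A[j] = -1.0, 1.0
    Omega += np.outer(A, A) * 100
    xi += A * 100 * d
for p, l, z in [(0, 2, 2.0), (1, 2, 1.0), (1, 3, 0.5)]:   # observations: z = m - R
    H = np.zeros(4)
    H[p], H[l] = -1.0, 1.0
    Omega += np.outer(H, H) * 25
    xi += H * 25 * z

mu_joint = np.linalg.solve(Omega, xi)               # full joint mean
mu_R = mu_joint[:2]                                 # trajectory estimate (mu-tilde)

# Landmark-by-landmark: mu_j = Omega_{mjmj}^{-1} (xi_{mj} - Omega_{mjR} mu-tilde).
mu_landmarks = [(xi[k] - Omega[k, :2] @ mu_R) / Omega[k, k] for k in (2, 3)]
```

Each landmark mean is recovered by a tiny independent solve, which is where the method regains the efficiency lost by not inverting the full joint \Omega.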


Graph-SLAM: general algorithm

1. Initialize \mu.
2. Repeat until convergence:
   - (\Omega, \xi) \leftarrow Linearize(\mu)
   - (\tilde{\Omega}, \tilde{\xi}) \leftarrow Reduce(\Omega, \xi)
   - (\mu, \Sigma_{0:t}) \leftarrow Solve(\tilde{\Omega}, \tilde{\xi})
3. Return \mu and \Sigma_{R_{0:t} R_{0:t}}.


Graph-SLAM: general algorithm

A "lazy" algorithm: it processes all the data at the end; while the robot is running, information is merely accumulated.

Consequence: it only gives results at the end.

In some sense, this sits at one extreme of the spectrum of SLAM algorithms (very different from Kalman filtering).

Strong dependency on t.
