
CSE-571 Robotics

Probabilistic Robotics

Probabilities, Bayes rule

Bayes filters


Probabilistic Robotics

Key idea: Explicit representation of uncertainty

(using the calculus of probability theory)

• Perception = state estimation
• Action = utility optimization


Discrete Random Variables

• X denotes a random variable.

• X can take on a countable number of values in {x1, x2, …, xn}.

• P(X = xi), or P(xi), is the probability that the random variable X takes on value xi.

• P(·) is called the probability mass function.

• E.g. P(Room) = ⟨0.7, 0.2, 0.08, 0.02⟩.
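As a quick illustration, a probability mass function like P(Room) can be written as a lookup table whose entries sum to 1; the room labels below are made-up stand-ins for the four values on the slide:

```python
# A discrete PMF as a lookup table; the room labels are hypothetical
# illustrations for the four probabilities on the slide.
pmf_room = {"office": 0.7, "corridor": 0.2, "kitchen": 0.08, "lab": 0.02}

# A valid PMF is non-negative and sums to 1.
assert all(p >= 0.0 for p in pmf_room.values())
assert abs(sum(pmf_room.values()) - 1.0) < 1e-9

print(pmf_room["office"])  # P(Room = office) = 0.7
```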


Joint and Conditional Probability

• P(X=x and Y=y) = P(x,y)

• If X and Y are independent then P(x,y) = P(x) P(y)

• P(x | y) is the probability of x given y:
  P(x | y) = P(x, y) / P(y)
  P(x, y) = P(x | y) P(y)

• If X and Y are independent then P(x | y) = P(x).


Law of Total Probability, Marginals

Discrete case:

  Σx P(x) = 1

  P(x) = Σy P(x, y)

  P(x) = Σy P(x | y) P(y)

Continuous case:

  ∫ p(x) dx = 1

  p(x) = ∫ p(x, y) dy

  p(x) = ∫ p(x | y) p(y) dy
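The discrete identities can be checked numerically on a toy model; the weather/sensor numbers below are invented for illustration:

```python
# Hypothetical model: hidden weather y, observed light level x.
P_y = {"sun": 0.7, "rain": 0.3}
P_x_given_y = {"sun": {"bright": 0.9, "dark": 0.1},
               "rain": {"bright": 0.2, "dark": 0.8}}

# Law of total probability: P(x) = sum_y P(x | y) P(y).
P_x = {x: sum(P_x_given_y[y][x] * P_y[y] for y in P_y)
       for x in ("bright", "dark")}

# The marginal is itself a valid distribution.
assert abs(sum(P_x.values()) - 1.0) < 1e-9
# P(bright) = 0.9 * 0.7 + 0.2 * 0.3 = 0.69
assert abs(P_x["bright"] - 0.69) < 1e-9
```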


Events

• P(+x, +y) ?

• P(+x) ?

• P(-y OR +x) ?

• Independent?

 X   Y   P
+x  +y  0.2
+x  -y  0.3
-x  +y  0.4
-x  -y  0.1


Marginal Distributions

 X   Y   P
+x  +y  0.2
+x  -y  0.3
-x  +y  0.4
-x  -y  0.1

 X   P
+x   ?
-x   ?

 Y   P
+y   ?
-y   ?


Conditional Probabilities

 X   Y   P
+x  +y  0.2
+x  -y  0.3
-x  +y  0.4
-x  -y  0.1

• P(+x | +y) ?

• P(-x | +y) ?

• P(-y | +x) ?
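These marginal and conditional queries can be answered mechanically from the joint table; a small Python sketch (variable names are mine):

```python
# Joint distribution from the table on the slide.
P = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3,
     ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}

# Marginals: sum out the other variable.
P_plus_x = sum(p for (x, y), p in P.items() if x == "+x")   # 0.5
P_plus_y = sum(p for (x, y), p in P.items() if y == "+y")   # 0.6

# P(-y OR +x): sum the cells satisfying either condition.
P_or = sum(p for (x, y), p in P.items() if y == "-y" or x == "+x")  # 0.6

# Conditionals: P(+x | +y) = P(+x, +y) / P(+y), etc.
P_px_given_py = P[("+x", "+y")] / P_plus_y   # 1/3
P_my_given_px = P[("+x", "-y")] / P_plus_x   # 0.6

# Independence would require P(+x, +y) = P(+x) P(+y), but 0.2 != 0.3.
assert abs(P[("+x", "+y")] - P_plus_x * P_plus_y) > 0.05
```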


Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y) = (likelihood · prior) / evidence

• Often causal knowledge is easier to obtain than diagnostic knowledge.

• Bayes rule allows us to use causal knowledge.


Simple Example of State Estimation

• Suppose a robot obtains measurement z.
• What is P(open | z)?


Example

P(z | open) = 0.6    P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [ P(z | open) P(open) + P(z | ¬open) P(¬open) ]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67

• z raises the probability that the door is open.
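The same update can be written in a few lines of Python, using the numbers from the slide:

```python
# Sensor model and prior from the slide.
p_z_open, p_z_not_open = 0.6, 0.3
p_open = 0.5

# Bayes rule: posterior = likelihood * prior / evidence.
evidence = p_z_open * p_open + p_z_not_open * (1.0 - p_open)
p_open_given_z = p_z_open * p_open / evidence

print(round(p_open_given_z, 2))  # 0.67
```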


Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = 1 / P(y) = 1 / Σx' P(y | x') P(x')

Algorithm:

  ∀x: aux(x|y) = P(y | x) P(x)

  η = 1 / Σx aux(x|y)

  ∀x: P(x | y) = η aux(x|y)
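The normalization algorithm translates directly into code; here it is applied to the door example's sensor model (a sketch, with state names chosen by me):

```python
# Prior and sensor likelihood P(y | x) for each state x.
P_x = {"open": 0.5, "closed": 0.5}
P_y_given_x = {"open": 0.6, "closed": 0.3}

# Step 1: aux(x|y) = P(y | x) P(x), for all x.
aux = {x: P_y_given_x[x] * P_x[x] for x in P_x}
# Step 2: eta = 1 / sum_x aux(x|y).
eta = 1.0 / sum(aux.values())
# Step 3: P(x | y) = eta * aux(x|y), for all x.
P_x_given_y = {x: eta * aux[x] for x in P_x}

assert abs(sum(P_x_given_y.values()) - 1.0) < 1e-9
print(P_x_given_y["open"])  # 2/3, matching the door example
```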


Conditioning

• Bayes rule and background knowledge:

  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

  P(x | y) = ∫ P(x | y, z) P(y | z) dz   ?

  P(x | y) = ∫ P(x | y, z) P(z | y) dz   ?

  P(x | y) = ∫ P(x | y, z) P(z) dz   ?


Conditioning

• Bayes rule and background knowledge:

  P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

  P(x | y) = ∫ P(x | y, z) P(z | y) dz


Conditional Independence

  P(x, y | z) = P(x | z) P(y | z)

• Equivalent to

  P(x | z) = P(x | z, y)

  and

  P(y | z) = P(y | z, x)


Simple Example of State Estimation

• Suppose our robot obtains another observation z2.
• What is P(open | z1, z2)?


Recursive Bayesian Updating

P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)

Markov assumption: zn is conditionally independent of z1, …, zn−1 given x.

P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
                 = η P(zn | x) P(x | z1, …, zn−1)
                 = η1…n [ ∏ i=1…n P(zi | x) ] P(x)


Example: Second Measurement

P(z2 | open) = 0.5    P(z2 | ¬open) = 0.6
P(open | z1) = 2/3    P(¬open | z1) = 1/3

P(open | z1, z2) = P(z2 | open) P(open | z1) / [ P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1) ]
                 = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = 5/8 = 0.625

• z2 lowers the probability that the door is open.
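Recursive updating is just the single-measurement update applied repeatedly; a sketch with the slide's numbers:

```python
def measurement_update(bel, likelihood):
    """Multiply in P(z | x) for each state x, then normalize."""
    unnorm = {x: likelihood[x] * bel[x] for x in bel}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}
bel = measurement_update(bel, {"open": 0.6, "closed": 0.3})  # z1 -> 2/3
bel = measurement_update(bel, {"open": 0.5, "closed": 0.6})  # z2 -> 5/8

print(round(bel["open"], 3))  # 0.625
```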


Bayes Filters: Framework

• Given:
  • Stream of observations z and action data u:

      dt = {u1, z1, …, ut, zt}

  • Sensor model P(z | x).
  • Action model P(x | u, x').
  • Prior probability of the system state P(x).

• Wanted:
  • Estimate of the state X of a dynamical system.
  • The posterior of the state is also called belief:

      Bel(xt) = P(xt | u1, z1, …, ut, zt)


Bayes Filters

z = observation, u = action, x = state

Bel(xt) = P(xt | u1, z1, …, ut, zt)

  (Bayes)        = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)

  (Markov)       = η P(zt | xt) P(xt | u1, z1, …, ut)

  (Total prob.)  = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1

  (Markov)       = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, zt−1) dxt−1

                 = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1


Bayes Filter Algorithm

1.  Algorithm Bayes_filter(Bel(x), d):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel'(x) = P(z | x) Bel(x)
6.        η = η + Bel'(x)
7.      For all x do
8.        Bel'(x) = η⁻¹ Bel'(x)
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.   Return Bel'(x)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
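A minimal discrete version of this algorithm, continuing the door example (the "push" action model is my own illustrative assumption, not from the slides):

```python
def perceptual_update(bel, p_z_given_x):
    """Perceptual branch: weight by the sensor model, then normalize."""
    unnorm = {x: p_z_given_x[x] * bel[x] for x in bel}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

def action_update(bel, p_x_given_u_xprev):
    """Action branch: Bel'(x) = sum_x' P(x | u, x') Bel(x')."""
    return {x: sum(p_x_given_u_xprev[x][xp] * bel[xp] for xp in bel)
            for x in bel}

bel = {"open": 0.5, "closed": 0.5}
bel = perceptual_update(bel, {"open": 0.6, "closed": 0.3})  # bel(open) = 2/3

# Hypothetical "push" action: opens a closed door with probability 0.8,
# never closes an open one.
push = {"open":   {"open": 1.0, "closed": 0.8},
        "closed": {"open": 0.0, "closed": 0.2}}
bel = action_update(bel, push)

assert abs(sum(bel.values()) - 1.0) < 1e-9
print(round(bel["open"], 3))  # 14/15 ~ 0.933
```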


Markov Assumption

Underlying assumptions:
• Static world
• Independent noise
• Perfect model, no approximation errors

  p(xt | x1:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
  p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)


Dynamic Environments

• Two possible locations x1 and x2
• P(x1) = 0.99
• P(z | x2) = 0.09,  P(z | x1) = 0.07

[Figure: p(x1 | d) and p(x2 | d) plotted against the number of integrations (5 to 50).]


Bayes Filters for Robot Localization


Representations for Bayesian Robot Localization

Discrete approaches (’95)
• Topological representation (’95)
  • uncertainty handling (POMDPs)
  • occasional global localization, recovery
• Grid-based, metric representation (’96)
  • global localization, recovery

Multi-hypothesis (’00)
• multiple Kalman filters
• global localization, recovery

Particle filters (’99)
• sample-based representation
• global localization, recovery

Kalman filters (late ’80s)
• Gaussians, unimodal
• approximately linear models
• position tracking


Bayes Filters are Familiar!

• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1


Summary

• Bayes rule allows us to compute probabilities that are hard to assess otherwise.

• Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.

• Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
