0902 regular meeting


Inferring High-Level Behavior from Low-Level Sensors

Donald J. Patterson, Lin Liao, Dieter Fox, and Henry Kautz (UBICOMP 2003)

Speaker: Cheng-Chang Hsieh (a.k.a. Don)

Database Lab, NCU CSIE, Taiwan

September 2, 2008

1/27 Don Inferring High-Level Behavior 1/27

Outline

1 Introduction
2 Tracking on a Graph
3 Parameter Learning
4 Experiments
5 Conclusions and Future Work


Part I

Introduction


The Problem

Transportation Routines
How can we predict a user's transportation mode?


Introduction

A method of learning a Bayesian model of a traveler moving through an urban environment. The model is

1 implemented using particle filters
2 learned using Expectation-Maximization (EM).

Particle filters are variants of Bayes filters for estimating the state of a dynamic system. EM is applied to learn typical motion patterns in a completely unsupervised manner.


Part II

Tracking on a Graph


Tracking on a Graph

The model is a graph G = (V, E). Edges correspond to straight sections of roads and footpaths. Vertices represent intersections; a curved road is modeled as a set of short straight edges.


Bayesian Filtering on a Graph

The key idea of Bayes filters is to recursively estimate the posterior probability density over the state space, conditioned on the data collected so far.

p(x_t \mid z_{1:t}) \propto p(z_t \mid x_t) \int p(x_t \mid x_{t-1}) \, p(x_{t-1} \mid z_{1:t-1}) \, dx_{t-1}

z_{1:t}: the sequence of observations collected so far.
x_t: the state (the position and velocity of the object).
p(z_t \mid x_t): the likelihood of making observation z_t given the location x_t.
m_t \in \{BUS, FOOT, CAR\}: the transportation mode.
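The recursion above can be sketched on a toy discrete state space. This is a minimal sketch, not the paper's model: the two-state setup and all numbers below are illustrative assumptions.

```python
# Hedged sketch: one step of the Bayes filter recursion
#   p(x_t|z_1:t) ∝ p(z_t|x_t) Σ_{x_{t-1}} p(x_t|x_{t-1}) p(x_{t-1}|z_1:t-1)
# on a two-state toy model (all numbers are illustrative assumptions).
def bayes_update(prior, transition, likelihood):
    n = len(prior)
    # Prediction: propagate the previous posterior through the motion model.
    predicted = [sum(transition[j][i] * prior[j] for j in range(n))
                 for i in range(n)]
    # Correction: weight by the observation likelihood and normalize.
    unnorm = [likelihood[i] * predicted[i] for i in range(n)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

prior = [0.5, 0.5]                      # p(x_{t-1} | z_{1:t-1})
transition = [[0.9, 0.1], [0.2, 0.8]]   # transition[j][i] = p(x_t=i | x_{t-1}=j)
likelihood = [0.7, 0.1]                 # p(z_t | x_t=i)
posterior = bayes_update(prior, transition, likelihood)
```

In the paper the integral runs over positions and velocities on the graph rather than a finite state set, which is why a particle-filter approximation is used instead of this exact sum.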


Fig.: Two-slice Dynamic Bayes Net Model


SISR: Sequential Importance Sampling with Re-sampling
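A minimal SISR sketch on a 1-D random-walk toy model. The Gaussian motion and observation models and all constants are illustrative assumptions; the paper's filter constrains particle motion to the street graph instead.

```python
import math
import random

# Hedged sketch of Sequential Importance Sampling with Re-sampling (SISR)
# on a 1-D toy model (motion/observation models are assumptions).
def sisr_step(particles, z, motion_std=1.0, obs_std=1.0):
    # 1. Importance sampling: propagate particles through the motion model.
    moved = [x + random.gauss(0.0, motion_std) for x in particles]
    # 2. Weight each particle by the observation likelihood p(z_t | x_t).
    weights = [math.exp(-0.5 * ((z - x) / obs_std) ** 2) for x in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Re-sampling: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for z in [2.0, 2.1, 1.9, 2.0]:               # observations near x = 2
    particles = sisr_step(particles, z)
estimate = sum(particles) / len(particles)   # posterior mean estimate
```

After a few observations near x = 2 the particle cloud concentrates there; the same propagate/weight/resample loop applies on the graph, with particles moving along edges.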


Part III

Parameter Learning


Parameter Learning

Learning means adjusting the model parameters to better fit the training data. The motion model is learned based solely on

1 a map and
2 a stream of non-continuous and noisy GPS sensor data.

Location and transportation mode are hidden variables: they cannot be observed directly and have to be inferred from the raw GPS measurements.

EM solves this problem by iterating between an Expectation step (E-step) and a Maximization step (M-step).
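The alternation can be sketched on a toy instance, not the paper's particle-filter version: EM estimating the two means of a 1-D Gaussian mixture with known unit variances and equal weights. The data and initial values are illustrative assumptions.

```python
import math

# Hedged EM sketch (toy assumption): fit the two means of a 1-D Gaussian
# mixture with unit variances and equal weights. The paper's E/M steps use
# particle filters over the graph; only the alternation is illustrated here.
def e_step(data, means):
    # Responsibility of component 0 for each point (soft assignment).
    resp = []
    for x in data:
        p0 = math.exp(-0.5 * (x - means[0]) ** 2)
        p1 = math.exp(-0.5 * (x - means[1]) ** 2)
        resp.append(p0 / (p0 + p1))
    return resp

def m_step(data, resp):
    # Re-estimate each mean as a responsibility-weighted average.
    n0 = sum(resp)
    n1 = len(data) - n0
    m0 = sum(r * x for r, x in zip(resp, data)) / n0
    m1 = sum((1 - r) * x for r, x in zip(resp, data)) / n1
    return (m0, m1)

data = [-2.1, -1.9, -2.0, 1.9, 2.1, 2.0]
means = (-1.0, 1.0)
for _ in range(20):
    means = m_step(data, e_step(data, means))
```

Each E-step fills in the hidden assignments under the current parameters; each M-step maximizes the expected log-likelihood given those assignments, exactly the structure used below for the edge and mode transition probabilities.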


E-step

The E-step estimates p(x_{1:t} \mid z_{1:t}, \Theta^{(i-1)}).
x_{1:t}: the state history.
z_{1:t}: the observations.
\Theta^{(i-1)}: the parameter estimates from the (i-1)-th iteration.


M-step

The goal of the M-step is to maximize the expectation of p(z_{1:t}, x_{1:t} \mid \Theta) over the distribution of x_{1:t} obtained in the E-step, by updating the parameter estimates.

\Theta^{(i)} = \arg\max_{\Theta} \sum_{j=1}^{n} \log p(z_{1:t}, x^{(j)}_{1:t} \mid \Theta)

= \arg\max_{\Theta} \sum_{j=1}^{n} \left( \log p(z_{1:t} \mid x^{(j)}_{1:t}) + \log p(x^{(j)}_{1:t} \mid \Theta) \right)

= \arg\max_{\Theta} \sum_{j=1}^{n} \log p(x^{(j)}_{1:t} \mid \Theta)

n: the number of particles.
x^{(j)}_{1:t}: the state history of the j-th particle.


M-step (Cont.)

p(z_{1:t}, x^{(j)}_{1:t} \mid \Theta) = \frac{p(z_{1:t}, x^{(j)}_{1:t}, \Theta)}{p(\Theta)} = \frac{p(z_{1:t}, x^{(j)}_{1:t}, \Theta)}{p(x^{(j)}_{1:t}, \Theta)} \cdot \frac{p(x^{(j)}_{1:t}, \Theta)}{p(\Theta)}

= p(z_{1:t} \mid x^{(j)}_{1:t}, \Theta) \cdot p(x^{(j)}_{1:t} \mid \Theta)

= p(z_{1:t} \mid x^{(j)}_{1:t}) \cdot p(x^{(j)}_{1:t} \mid \Theta)

where the last step drops \Theta because the observations are independent of the parameters given the state history. Taking logarithms:

\log p(z_{1:t}, x^{(j)}_{1:t} \mid \Theta) = \log p(z_{1:t} \mid x^{(j)}_{1:t}) + \log p(x^{(j)}_{1:t} \mid \Theta)


Implementation Details

p(e_t \mid e_{t-1}, m_{t-1}): the edge transition probability.
p(m_t \mid e_{t-1}, m_{t-1}): the mode transition probability.

Define:
1 \alpha_t(e_t, m_t): the number of particles on edge e_t in mode m_t at time t in the forward pass of particle filtering.
2 \beta_t(e_t, m_t): the number of particles on edge e_t in mode m_t at time t in the backward pass of particle filtering.
3 \xi_{t-1}(e_t, e_{t-1}, m_{t-1}): the probability of transiting from edge e_{t-1} to e_t in mode m_{t-1} at time t-1.
4 \psi_{t-1}(m_t, e_{t-1}, m_{t-1}): the probability of transiting from mode m_{t-1} to m_t on edge e_{t-1} at time t-1.


\xi_{t-1} and \psi_{t-1}

\xi_{t-1}(e_t, e_{t-1}, m_{t-1}) \propto \alpha_{t-1}(e_{t-1}, m_{t-1}) \, p(e_t \mid e_{t-1}, m_{t-1}) \, \beta_t(e_t, m_{t-1})

\psi_{t-1}(m_t, e_{t-1}, m_{t-1}) \propto \alpha_{t-1}(e_{t-1}, m_{t-1}) \, p(m_t \mid e_{t-1}, m_{t-1}) \, \beta_t(e_{t-1}, m_t)


Update The Parameters

After we have \xi_{t-1} and \psi_{t-1} for all t from 2 to T, we update the parameters as:

p(e_t \mid e_{t-1}, m_{t-1})
= \frac{\text{expected number of transitions from } e_{t-1} \text{ to } e_t \text{ in mode } m_{t-1}}{\text{expected number of transitions from } e_{t-1} \text{ in mode } m_{t-1}}
= \frac{\sum_{t=2}^{T} \xi_{t-1}(e_t, e_{t-1}, m_{t-1})}{\sum_{t=2}^{T} \sum_{e_t \in \mathrm{Neighbors}(e_{t-1})} \xi_{t-1}(e_t, e_{t-1}, m_{t-1})}

and

p(m_t \mid e_{t-1}, m_{t-1})
= \frac{\text{expected number of transitions from } m_{t-1} \text{ to } m_t \text{ on edge } e_{t-1}}{\text{expected number of transitions from } m_{t-1} \text{ on edge } e_{t-1}}
= \frac{\sum_{t=2}^{T} \psi_{t-1}(m_t, e_{t-1}, m_{t-1})}{\sum_{t=2}^{T} \sum_{m_t \in \{BUS, FOOT, CAR\}} \psi_{t-1}(m_t, e_{t-1}, m_{t-1})}
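In code this normalization is just dividing accumulated expected counts by their row sums. A hedged sketch, where the edge names, mode label, and counts are made-up illustrations:

```python
# M-step update sketch: turn accumulated expected transition counts
#   xi_counts[(e_prev, m_prev)][e_next] = Σ_{t=2}^{T} ξ_{t-1}(e_next, e_prev, m_prev)
# into normalized edge-transition probabilities p(e_t | e_{t-1}, m_{t-1}).
def update_edge_transitions(xi_counts):
    probs = {}
    for (e_prev, m_prev), next_counts in xi_counts.items():
        # Expected total number of transitions out of (e_prev, m_prev).
        total = sum(next_counts.values())
        probs[(e_prev, m_prev)] = {e: c / total for e, c in next_counts.items()}
    return probs

# Illustrative counts (assumptions): from edge "e1" in mode "BUS",
# 30 expected transitions went to "e2" and 10 to "e3".
xi = {("e1", "BUS"): {"e2": 30.0, "e3": 10.0}}
p = update_edge_transitions(xi)
```

The mode-transition update for p(m_t \mid e_{t-1}, m_{t-1}) from \psi has exactly the same shape, with the sum running over the three modes instead of neighboring edges.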


Algorithm: EM-based Parameter Learning


Algorithm: EM-based Parameter Learning (Cont.)


Part IV

Experiments


Experiments

The test data set consists of logs of GPS data. The data contains position and velocity information collected at 2-10 second intervals.


Prediction Accuracy of Mode Transition Changes

Model                                              Precision   Recall
Decision Tree with Speed and Variance              2%          83%
Prior Graph Model, w/o bus stops and bus routes    6%          63%
Prior Graph Model, w/ bus stops and bus routes     10%         80%
Learned Graph Model                                40%         80%


Fig.: Location Prediction


Part V

Conclusions and Future Work


Conclusions and Future Work

Good predictive user-specific models can be learned in an unsupervised fashion. The key idea is to apply a graph-based Bayes filter to track a person's location and transportation mode on a street map.

Future work:
1 Making positive use of negative information.
2 Learning daily and weekly patterns.
3 Modeling trip destination and purpose.
4 Using relational models to make predictions about novel events.


Q & A

Thanks for your attention!

