
Inferring High-Level Behavior from Low-Level Sensors

Don Patterson, Lin Liao, Dieter Fox, Henry Kautz

Published in UBICOMP 2003

ICS 280

Main References
- Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data (Liao L., Fox D., Hightower J., Kautz H., Schulz D.) – in International Conference on Intelligent Robots and Systems (2003)
- Inferring High-Level Behavior from Low-Level Sensors (Patterson D., Liao L., Fox D., Kautz H.) – in UBICOMP 2003
- Learning and Inferring Transportation Routines (Liao L., Fox D., Kautz H.) – in AAAI 2004

Outline: Motivation – Problem Definition – Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering) – Learning – Results – Conclusions

Motivation
ACTIVITY COMPASS – software that indirectly monitors your activity and offers proactive advice to aid in successfully accomplishing inferred plans.
Application areas: healthcare monitoring, automated planning, context-aware computing support.

Research Goal
To bridge the gap between sensor data and symbolic reasoning:
- to allow sensor data to help interpret symbolic knowledge;
- to allow symbolic knowledge to aid sensor interpretation.

Executive Summary
- GPS data collection: 3 months of one user's daily life
- Inference engine: infers location and transportation "mode" on-line, in real time
- Learning: transportation patterns
- Results: better predictions; conceptual understanding of routines

Outline: Motivation – Problem Definition – Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering) – Learning – Results – Conclusions

Tracking on a Graph
Tracking a person's location and mode of transportation using street maps and GPS sensor data.
Formally, the world is modeled as a graph G = (V, E), where:
- V is a set of vertices = intersections
- E is a set of directed edges = roads/footpaths
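As a concrete rendering (a hypothetical sketch, not from the slides), such a street graph could be represented in Python like this:

    from dataclasses import dataclass

    # A minimal sketch of the street graph G = (V, E): vertices are
    # intersections, directed edges are road/footpath segments.
    @dataclass(frozen=True)
    class Edge:
        start: int     # intersection the edge leaves
        end: int       # intersection the edge enters
        length: float  # segment length in meters

    # Adjacency list: for each vertex id, the edges leaving it.
    graph: dict[int, list[Edge]] = {
        0: [Edge(0, 1, 120.0)],
        1: [Edge(1, 0, 120.0), Edge(1, 2, 80.0)],
        2: [Edge(2, 1, 80.0)],
    }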

Example

Outline: Motivation – Problem Definition – Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering) – Learning – Results – Conclusions

State Space
- Location L = ‹Ls, Lp›: which street the user is on (Ls) and the position on that street (Lp)
- Velocity: V
- GPS offset error: O = ‹Ox, Oy›
- Transportation mode: M ∈ {BUS, CAR, FOOT}
Full state: X = ‹Ls, Lp, V, Ox, Oy, M›
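A hypothetical Python rendering of this state vector (a sketch, not the authors' code):

    from dataclasses import dataclass
    from enum import Enum

    class Mode(Enum):
        BUS = 0
        CAR = 1
        FOOT = 2

    @dataclass
    class State:
        edge: int        # Ls: id of the street/edge the user is on
        position: float  # Lp: offset along that street, in meters
        velocity: float  # V: current speed
        offset_x: float  # Ox: systematic GPS offset error, x component
        offset_y: float  # Oy: systematic GPS offset error, y component
        mode: Mode       # M ∈ {BUS, CAR, FOOT}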

Dynamic Bayesian Networks
- An extension of a Markov model
- A statistical model that handles sensor error and enormous but structured state spaces
- Probabilistic and temporal
- A single framework to manage all levels of abstraction

Model (I), Model (II), Model (III), Dependencies
(Figure slides showing the DBN model structure and its dependencies.)

Inference

We want to compute the posterior density:
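The density itself is not reproduced on the slide; it is presumably the standard Bayes-filter posterior over the state given all GPS observations up to time t:

    p(x_t \mid z_{1:t})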

Inference – Particle Filtering
A technique for solving DBNs: approximate, stochastic / Monte Carlo solutions.
In our case, a particle represents an instantiation of the random variables describing:
- the transportation mode mt
- the location lt (actually the edge et)
- the velocity vt

Particle Filtering
Step 1 (SAMPLING): draw n samples xt-1 from the previous sample set St-1 and generate n new samples xt according to the dynamics p(xt|xt-1) (i.e. the motion model).
Step 2 (IMPORTANCE SAMPLING): assign each sample xt an importance weight proportional to the likelihood of the observation zt: ωt ∝ p(zt|xt).
Step 3 (RE-SAMPLING): draw samples with replacement according to the distribution defined by the importance weights ωt.
One filter step is sketched in code below.
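A minimal sketch of these three steps; motion_model and observation_likelihood are hypothetical callables standing in for p(xt|xt-1) and p(zt|xt):

    import random

    def particle_filter_step(particles, z_t, motion_model, observation_likelihood):
        n = len(particles)
        # Step 1 (sampling): draw from the previous set and propagate each
        # particle through the dynamics p(x_t | x_{t-1}).
        predicted = [motion_model(random.choice(particles)) for _ in range(n)]
        # Step 2 (importance sampling): weight each sample by p(z_t | x_t).
        weights = [observation_likelihood(z_t, x) for x in predicted]
        # Step 3 (re-sampling): draw with replacement according to the weights.
        return random.choices(predicted, weights=weights, k=n)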

Motion Model – p(xt|xt-1)
Advancing particles along the graph G:
- Sample the transportation mode mt from the distribution p(mt|mt-1,et-1).
- Sample the velocity vt from the density p(vt|mt) (a mixture of Gaussian densities – see picture).
- Sample the location using the current velocity: draw the traveled distance d at random from a Gaussian density centered at vt. If the distance implies an edge transition, select the next edge et with probability p(et|et-1,mt-1); otherwise stay on the same edge (et = et-1).
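A sketch of one draw from this motion model, building on the State and Mode types above; mode_trans, edge_trans, mode_speed and edge_len are hypothetical stand-ins for the learned distributions and the map:

    import random
    from dataclasses import replace

    def sample_motion_model(state, mode_trans, edge_trans, mode_speed, edge_len, dt=1.0):
        # Sample mode mt from p(mt|mt-1,et-1).
        modes, mode_probs = mode_trans[(state.mode, state.edge)]
        mode = random.choices(modes, weights=mode_probs, k=1)[0]

        # Sample velocity vt from p(vt|mt); the slides use a mixture of
        # Gaussians per mode, a single Gaussian stands in for it here.
        mu, sigma = mode_speed[mode]
        velocity = max(0.0, random.gauss(mu, sigma))

        # Draw the traveled distance from a Gaussian centered at vt * dt.
        pos = state.position + random.gauss(velocity * dt, 1.0)
        edge = state.edge
        if pos > edge_len[edge]:
            # Edge transition implied: select the next edge et with
            # probability p(et|et-1,mt-1), conditioning on the previous mode.
            pos -= edge_len[edge]
            nxt_edges, nxt_probs = edge_trans[(edge, state.mode)]
            edge = random.choices(nxt_edges, weights=nxt_probs, k=1)[0]
        return replace(state, edge=edge, position=pos, velocity=velocity, mode=mode)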

Animation

Play short video clip

Outline: Motivation – Problem Definition – Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering) – Learning – Results – Conclusions

Learning
We want to learn from history the components of the motion model:
- p(et|et-1,mt-1): the transition probability on the graph, conditioned on the mode of transportation just prior to transitioning to the new edge.
- p(mt|mt-1,et-1): the transportation mode transition probability; it depends on the previous mode mt-1 and the person's location, described by the edge et-1.
We use a Monte Carlo version of the EM algorithm.

Learning
At each iteration the algorithm performs both a forward and a backward (in time) particle filtering pass.
During each pass it counts the number of particles transiting between the different edges and modes.
To obtain probabilities for the different transitions, the counts of the forward and backward passes are normalized and then multiplied at the corresponding time slices (see the identity below).
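Spelled out, this is the usual forward-backward smoothing identity (the slide shows no formula, so this is an inferred reconstruction): up to normalization,

    \gamma_t(e_t, m_t) \;\propto\; \alpha_t(e_t, m_t)\,\beta_t(e_t, m_t)

where γt is the smoothed probability of being on edge et in mode mt at time t.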

Implementation Details (I)
- αt(et,mt): the number of particles on edge et and in mode mt at time t in the forward pass of particle filtering.
- βt(et,mt): the number of particles on edge et and in mode mt at time t in the backward pass of particle filtering.
- ξt-1(et,et-1,mt-1): the probability of transiting from edge et-1 to et at time t-1 in mode mt-1.
- ψt-1(mt,mt-1,et-1): the probability of transiting from mode mt-1 to mt on edge et-1 at time t-1.

Implementation Details (II)

After we have ξt-1 and ψt-1 for all t from 2 to T, we can update the parameters as:
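Equations (7) and (8) are not reproduced on the slide. A plausible reconstruction, following the standard EM recipe of normalizing the expected transition counts summed over time, would be:

    p(e_t \mid e_{t-1}, m_{t-1}) \leftarrow
        \frac{\sum_{t=2}^{T} \xi_{t-1}(e_t, e_{t-1}, m_{t-1})}
             {\sum_{e'} \sum_{t=2}^{T} \xi_{t-1}(e', e_{t-1}, m_{t-1})}

    p(m_t \mid m_{t-1}, e_{t-1}) \leftarrow
        \frac{\sum_{t=2}^{T} \psi_{t-1}(m_t, m_{t-1}, e_{t-1})}
             {\sum_{m'} \sum_{t=2}^{T} \psi_{t-1}(m', m_{t-1}, e_{t-1})}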

Implementation Details (III)

E-step:
1. Generate n uniformly distributed samples.
2. Perform forward particle filtering:
   a) Sampling: generate n new samples from the existing ones using the current parameter estimates p(et|et-1,mt-1) and p(mt|mt-1,et-1).
   b) Re-weight each sample, re-sample, count and save αt(et,mt).
   c) Move to the next time slice (t = t + 1).
3. Perform backward particle filtering:
   a) Sampling: generate n new samples from the existing ones using the backward parameter estimates p(et-1|et,mt) and p(mt-1|mt,et).
   b) Re-weight each sample, re-sample, count and save βt(et,mt).
   c) Move to the previous time slice (t = t - 1).

M-step:
1. Compute ξt-1(et,et-1,mt-1) and ψt-1(mt,mt-1,et-1) using (5) and (6), then normalize.
2. Update p(et|et-1,mt-1) and p(mt|mt-1,et-1) using (7) and (8).

LOOP: repeat the E-step and M-step with the updated parameters until the model converges. A code skeleton of this loop follows.
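A compact skeleton of the loop; the four callables are hypothetical stand-ins for the operations described above, not the authors' implementation:

    def monte_carlo_em(forward_filter, backward_filter,
                       expected_transitions, normalize, params, max_iters=50):
        for _ in range(max_iters):
            alpha = forward_filter(params)    # E-step, forward pass: alpha_t(e, m)
            beta = backward_filter(params)    # E-step, backward pass: beta_t(e, m)
            xi, psi = expected_transitions(alpha, beta)  # eqs. (5) and (6)
            new_params = normalize(xi, psi)   # M-step: eqs. (7) and (8)
            if new_params == params:          # crude convergence check
                break
            params = new_params
        return params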

Outline: Motivation – Problem Definition – Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering) – Learning – Results – Conclusions

Dataset
- Single user, 3 months of daily life
- Collected GPS position and velocity data at 2- and 10-second sample intervals
- Evaluation data: 29 “trips” – 12 hours of logs, all continuous outdoor data, divided chronologically into 3 cross-validation groups

Goals
- Mode estimation and prediction: learn a motion model able to estimate and predict the mode of transportation at any given instant.
- Location prediction: learn a motion model able to predict the person's location into the future.

Results – Mode Estimation

Model                        Mode prediction accuracy
Decision Tree (supervised)   55%
Prior w/o bus info           60%
Prior w/ bus info            78%
Learned                      84%

Results – Mode Prediction
Evaluate the ability to predict transitions between transportation modes. The table below shows the accuracy in predicting a qualitative change in transportation mode within 60 seconds of the actual transition (e.g. correctly predicting that the person gets off the bus).
PRECISION: percentage of time when the algorithm predicts a transition that actually occurs.
RECALL: percentage of real transitions that were correctly predicted.
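In the usual notation, with TP the correctly predicted transitions, FP the predicted transitions that never occur, and FN the missed transitions:

    \text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}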

Results – Mode Prediction

Mode transition prediction accuracy:

Model                Precision   Recall
Decision Tree            2%        83%
Prior w/o bus info       6%        63%
Prior w/ bus info       10%        80%
Learned                 40%        80%

Results – Location Prediction
(Figure slides showing location prediction performance.)

Conclusions
- We developed a single integrated framework to reason about transportation plans; it is probabilistic and successfully manages systematic GPS error.
- We integrate sensor data with high-level concepts such as bus stops.
- We developed an unsupervised learning technique that greatly improves performance.
- Our results show high predictive accuracy and interesting conceptual conclusions.