Sensor Fusion for Context Understanding
Huadong Wu, Mel Siegel, The Robotics Institute, Carnegie Mellon University
Sevim Ablay, Applications Research Lab, Motorola Labs
IMTC'2002, Anchorage, AK, USA (paper IMTC-2002-1077)


Page 1: Sensor Fusion for Context Understanding


Sensor Fusion for Context Understanding

Huadong Wu, Mel Siegel, The Robotics Institute, Carnegie Mellon University

Sevim Ablay, Applications Research Lab, Motorola Labs

Page 2: Sensor Fusion for Context Understanding


sensor fusion

• how to combine outputs of multiple sensor perspectives on an observable?

• modalities may be “complementary”, “competitive”, or “cooperative”

• technologies may demand registration

• variety of historical approaches (a small sketch follows this list), e.g.:

– statistical (error and confidence measures)

– voting schemes (need at least three)

– Bayesian (probability inference)

– neural networks, fuzzy logic, etc.
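To make the first two entries concrete, here is a minimal sketch, not from the talk, of statistical fusion as an inverse-variance weighted average and of a simple majority vote; it assumes each sensor reports either a scalar estimate with a known error variance or a discrete label.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public final class FusionExamples {

    /** Statistical fusion: inverse-variance weighted average of scalar estimates. */
    static double weightedFuse(double[] estimates, double[] variances) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < estimates.length; i++) {
            double w = 1.0 / variances[i];  // lower variance => more weight
            num += w * estimates[i];
            den += w;
        }
        return num / den;
    }

    /** Voting scheme: majority decision over discrete labels (needs >= 3 voters). */
    static String vote(String[] labels) {
        Map<String, Integer> counts = new HashMap<>();
        for (String label : labels) counts.merge(label, 1, Integer::sum);
        return Collections.max(counts.entrySet(), Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        // Two thermometers: 21.8 (var 0.5) and 22.4 (var 2.0) => fused 21.92.
        System.out.println(weightedFuse(new double[]{21.8, 22.4},
                                        new double[]{0.5, 2.0}));
        // Three posture sensors vote => "sitting".
        System.out.println(vote(new String[]{"sitting", "sitting", "walking"}));
    }
}
```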

Page 3: Sensor Fusion for Context Understanding


context understanding

• best algorithm for human-computer interaction tasks depends on context

• context can be difficult to discern

• multiple sensors give complementary (and sometimes contradictory) clues

• sensor fusion techniques needed

• (but the best algorithm for sensor fusion tasks may depend on context!)

Page 4: Sensor Fusion for Context Understanding


agenda

• a generalizable sensor fusion architecture for “context-aware computing”

– or (my preference, but not the standard term) “context-aware human-computer interaction”

• a realistic test to demonstrate usability and performance enhancement

• improved sensor fusion approach (to be detailed in next paper)

Page 5: Sensor Fusion for Context Understanding


background

• current context-sensing architectures (e.g., Georgia Tech Context Toolkit) tightly couple sensors and contexts

• difficult to substitute or add sensors, thus difficult to extend scope of contexts

• we describe a modular hierarchical architecture to overcome these limitations

Page 6: Sensor Fusion for Context Understanding


toward context understanding

• humans understand context naturally & effortlessly

• a traditional system sees only sensing hardware (cameras, microphones, etc.) observing an environment situation (people in the meeting room, objects around a moving car, etc.)

• pipeline in the diagram: sensors => information separation + sensor fusion => identification, representation, and understanding of context => adapt behavior to context

Page 7: Sensor Fusion for Context Understanding


methodology

• top-down

• adapt/extend Georgia Tech Context Toolkit (Motorola helps support both groups)

• create realistic context and sensor prototypes

• implement a practical context architecture for a plausible test application scenario

• implement sensor fusion as a mapping of sensor data into the context database

• place heavy emphasis on real sensor device characterization and (where needed) simulation

Page 8: Sensor Fusion for Context Understanding


context-sensing methodology: sensor data-to-context mapping

context (observations & hypotheses) is obtained from the n sensors' output through a matrix of mapping functions:

$$
\underbrace{\begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{bmatrix}}_{\text{observations \& hypotheses}}
=
\begin{bmatrix}
f_{11}(\cdot) & f_{12}(\cdot) & \cdots & f_{1n}(\cdot) \\
f_{21}(\cdot) & f_{22}(\cdot) & \cdots & f_{2n}(\cdot) \\
\vdots & \vdots & \ddots & \vdots \\
f_{m1}(\cdot) & f_{m2}(\cdot) & \cdots & f_{mn}(\cdot)
\end{bmatrix}
\underbrace{\begin{bmatrix} \text{sensor}_1 \\ \text{sensor}_2 \\ \vdots \\ \text{sensor}_n \end{bmatrix}}_{\text{sensory output}}
$$

where the question marks in the original figure indicate mapping functions $f_{ij}(\cdot)$ that are not yet known.
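A minimal Java sketch of this mapping; it is our illustration, and summing the row entries f_i1(s_1) + … + f_in(s_n) into one hypothesis score is an assumption, since the slide leaves the f_ij unspecified.

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public final class ContextMapping {

    /** One matrix row: apply f_ij to sensor j and combine into hypothesis i's score. */
    static double hypothesis(List<DoubleUnaryOperator> row, double[] sensorOutputs) {
        double score = 0.0;
        for (int j = 0; j < sensorOutputs.length; j++) {
            score += row.get(j).applyAsDouble(sensorOutputs[j]);  // f_ij(s_j)
        }
        return score;
    }

    public static void main(String[] args) {
        // Hypothetical row for "someone is speaking": mic level plus a motion feature.
        List<DoubleUnaryOperator> row = List.of(
                level -> level > 50.0 ? 1.0 : 0.0,  // f_i1: mic level threshold
                motion -> 0.5 * motion);            // f_i2: scaled motion feature
        System.out.println(hypothesis(row, new double[]{62.0, 0.8}));  // 1.4
    }
}
```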

Page 9: Sensor Fusion for Context Understanding


dynamic database

• example: user identification and posture for discerning focus-of-attention in a meeting

• tables (next) list basic information about the environment (room) and parameters, e.g.,

– temperature, noise, lighting, available devices, number of people, segmentation of the area, etc.

– initially many details are entered manually

– eventually a fully “tagged” and instrumented environment can reasonably be anticipated

• weakest link: maintaining currency

Page 10: Sensor Fusion for Context Understanding


context classification and modeling

• top-level context dimensions: space (outside / inside), time (now / history), social (activity, schedule)

• outside (environment) context:

– location: city, altitude, weather (cloudiness, rain/snow, temperature, humidity, barometric pressure, forecast), location and orientation (absolute); change: traveling speed, heading

– proximity: close to building (name, structure, facilities, etc.), room, car, devices (function, states, etc.), …, vicinity temperature, humidity; change: walking/running speed, heading

– time: day, date; time of the day (office hour, lunch time, …), season of the year, etc.; history, schedule, expectation

– people: individuals or group (e.g., audience of a show, attendees at a cocktail party): people interaction, casual chatting, formal meeting, eye contact, attention; interruption source: incoming calls, encounters, etc.; social relationship

– audio: human talking (information collection), music, etc.; noise level

– visual: in-sight objects, surrounding scenery; brightness

– computing & connectivity: computing environment (processing, memory, I/O, etc., hardware/software resource & cost), network connectivity, communication bandwidth, communication …

• inside (personal information, feeling & thinking, emotional) context:

– social relationship: self, family, friends, colleagues, acquaintances, etc.

– mood: merry, sad, satisfied, …

– agitation/tiredness; stress: nervousness

– concentration: focus of attention

– preferences: habits, current

– physical information: name, address, height, weight, fitness, metabolism, etc.

• information => sensors:

– sound processing (speaker recognition, speech understanding) => microphones

– image processing (face recognition, object recognition, 3-D object measuring) => cameras, infrared sensors

– location, altitude, speed, orientation => GPS, DGPS, server IP, RFID, gyro, accelerometers, dead reckoning

– ambient environment => network resource, thermometer, barometer, humidity sensor, photodiode sensors, accelerometers, gas sensor

– personal physical state (heart rate, respiration rate, blood pressure, blink rate, galvanic skin resistance, body temperature, sweat) => biometric sensors (heart rate / blood pressure / GSR, temperature, respiration, etc.)

• activity:

– work: task ID; content: work, entertainment, living chores, etc.

– body: drive, walk, sit, …

– vision: read, watch TV, sight-seeing, …; people: eye contact

– aural (listen/talk): interruptible …

– hands: type, write, use mouse, etc.

Page 11: Sensor Fusion for Context Understanding


context information architecture: dynamic context information database

Example database snapshot:

Room-table: NSH A417 (current)

– Area: Area-table; Temperature: 72 ºF (σ = 3 ºF); Light condition: brightness grade; Noise level: 60 dB (σ = 6 dB); Devices: Device-table; Detected people #: 6 (κ > 0.5); Detected users: User-table; …

Entrance Area (of room NSH A417)

– Temperature: 72 ºF (σ = 3 ºF); Light condition: brightness grade; Noise level: 60 dB (σ = 6 dB); Devices: Device-table; Detected people #: 2 (κ > 0.5); Detected user: User-table; …

Inside Area (of room NSH A417)

– Temperature: 72 ºF (σ = 3 ºF); Light condition: brightness grade; Noise level: 60 dB (σ = 6 dB); Devices: Device-table; Detected people #: 4 (κ > 0.5); Detected user: User-table; …

User-table (name; confidence; place; first detected; each user links to Background-, History-, and Activity-tables):

– Hd: [0.5, 0.9]; Entrance; 10:32 AM, 06/06/2001

– Mel: [0.3, 0.7]; Entrance; 11:48 AM, 06/06/2001

– Alan: [0.9, 0.98]; Inside; 2:48 PM, 06/06/2001

– Chris: [0.4, 0.9]; Inside; 10:45 AM, 06/06/2001

Background-table-user[Hd]:

– Name: Huadong Wu (κ = 1.0); Height: 5’6” (σ = 0.5”); Weight: 144 lb (σ = 4 lb); Preference: Preference-table-user[Hd]; …

History-table-user[Hd]:

– Name: Huadong Wu; Time: 9:06AM-10:55AM; Place: NSH 4102; …

Schema sketch:

– user (PK: user ID): name, contact info., preference, conf. room usage, activity

– user activity (PK: user ID): in meeting, speaking, head pose, focus of attention

– conference room (PK: room ID): function & location, facilities, usage schedule, current users, brightness, noise
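A small sketch, ours, of how one cell of these tables might be typed: a value paired with either a measurement uncertainty σ or a confidence κ, as in “72 ºF (σ = 3 ºF)” and “Huadong Wu (κ = 1.0)”. It assumes a recent Java (records).

```java
/** One database cell: a value plus its uncertainty (sigma) or confidence (kappa). */
public record ContextValue<T>(T value, Double sigma, Double kappa) {

    /** A measured quantity with standard deviation, e.g. temperature. */
    static <T> ContextValue<T> measured(T value, double sigma) {
        return new ContextValue<>(value, sigma, null);
    }

    /** A detected or inferred fact with confidence, e.g. a user identity. */
    static <T> ContextValue<T> believed(T value, double kappa) {
        return new ContextValue<>(value, null, kappa);
    }
}
```

For example, ContextValue.measured(72.0, 3.0) would represent the temperature cell and ContextValue.believed("Huadong Wu", 1.0) the name cell.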

Page 12: Sensor Fusion for Context Understanding


implementation

• low-level sensor fusion done

• sensing and knowledge integration via LAN

• context maintained in a dynamic database

• each significant entity (user, room, house, ...) has its own dynamic context repository

• the dynamic repository maintains and serves context and data to all applications (a sketch follows this list)

• synchronization and cooperation among disparate sensor modalities achieved by “sensor fusion mediators” (agents)
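A minimal sketch, ours rather than the project's code, of the per-entity dynamic context repository described above; entity and attribute names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Each significant entity (user, room, house, ...) gets its own attribute map. */
public final class ContextRepository {

    // entityId -> (attribute name -> latest value)
    private final Map<String, Map<String, Object>> store = new ConcurrentHashMap<>();

    /** Sensor fusion mediators push updated context here. */
    public void update(String entityId, String attribute, Object value) {
        store.computeIfAbsent(entityId, k -> new ConcurrentHashMap<>())
             .put(attribute, value);
    }

    /** Applications query the latest value, or null if unknown. */
    public Object query(String entityId, String attribute) {
        Map<String, Object> attributes = store.get(entityId);
        return attributes == null ? null : attributes.get(attribute);
    }
}
```

For instance, a fusion mediator might call repository.update("room:NSH-A417", "temperature", 72.0), after which any application can call repository.query("room:NSH-A417", "temperature").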

Page 13: Sensor Fusion for Context Understanding


system architecture to support sensor fusion for context-aware computing

Diagram summary:

• sensors attach to smart sensor nodes; lower-level sensor fusion serves local applications

• appliances (embedded OS) and smart sensor nodes communicate over the Intranet, reaching the Internet through a gateway

• a context server performs higher-level sensor fusion and maintains the site context database on a database server

• a separate user context database holds each user's context

• applications draw on both the lower-level and higher-level sensor fusion results

Page 14: Sensor Fusion for Context Understanding


practical details ...

• context type => sensor fusion mediator

• the mediator integrates the corresponding sensors, e.g., by designating some primary and others secondary based on observed or specified performance (a sketch follows this list)

• Dempster-Shafer “theory of evidence” implementation in the accompanying paper

• (white-icon components in the following diagram are inherited from the Georgia Tech CT)
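A sketch, ours, of the primary/secondary designation: the mediator ranks currently-live sources by an observed or specified performance score and treats the best as primary; all names are illustrative.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public final class SensorFusionMediator {

    /** One candidate source with its observed or specified performance score. */
    record SensorSource(String name, double performance, boolean alive) {}

    private final List<SensorSource> sources;

    public SensorFusionMediator(List<SensorSource> sources) {
        this.sources = sources;
    }

    /** The best live source is primary; everything else is secondary. */
    public Optional<SensorSource> primary() {
        return sources.stream()
                      .filter(SensorSource::alive)
                      .max(Comparator.comparingDouble(SensorSource::performance));
    }
}
```

If the primary dies (alive = false), the next call to primary() falls back to the best remaining secondary, one simple way to degrade gracefully when the configuration changes.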

Page 15: Sensor Fusion for Context Understanding


system configuration diagram

Diagram summary:

• each sensor is wrapped by a Widget

• SF (sensor fusion) mediators combine Widget outputs and feed Aggregators

• Aggregators post context data into the Dynamic Context Database, hosted on the site context database server and site context server and serving the user's mobile computer

• Interpreters apply the Dempster-Shafer rule and other AI algorithms to context data for applications

• a Resource Registry and a Discoverer track the available components

• applications consume the resulting context

Page 16: Sensor Fusion for Context Understanding


details inherited from GT CT

• Java implementation

• the BaseObject class provides communication functionality: sending, receiving, initiating, and responding via a multithreaded server

• context widgets, interpreters, and discoverers subclass from BaseObject and inherit its functionality (see the skeleton below)

• a service is part of the widget object

• aggregators subclass from widgets, inheriting their functionality
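An illustrative skeleton, ours and not the actual Context Toolkit source, mirroring the subclass relationships just listed:

```java
/** Common communication machinery: sending, receiving, initiating, responding. */
class BaseObject {
    void send(String message) { /* multithreaded-server send/initiate */ }
    String receive()          { /* multithreaded-server receive/respond */ return null; }
}

/** Wraps one sensor; owns the Services it can perform. */
class Widget extends BaseObject { }

/** Translates raw widget data into higher-level context. */
class Interpreter extends BaseObject { }

/** Registers and looks up available components. */
class Discoverer extends BaseObject { }

/** Collects all context about one entity; reuses Widget functionality. */
class Aggregator extends Widget { }

/** An action a Widget can perform; part of the widget object. */
class Service { }
```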

Page 17: Sensor Fusion for Context Understanding


object hierarchy and subclass relationship in context toolkit

• BaseObject

– Widget (contains Service; Aggregator subclasses Widget)

– Interpreter

– Discoverer

Page 18: Sensor Fusion for Context Understanding


focus-of-attention application

• a neural network estimates head poses

• the focus-of-attention estimate is based on analysis of the head-pose probability distribution (a sketch follows this list)

• audio reports the speaker, who is assumed to be the focus of the other participants’ attention

• the situation is not easy to analyze due to, e.g., dependence of behavior on the discussion topic

• initial results suggest we need a more general fusion approach than Bayesian inference provides
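A toy sketch, ours, of the fusion step: choose the attention target with the highest head-pose probability, boosted when the audio channel reports that target as the current speaker; the boost factor is an arbitrary assumption.

```java
public final class FocusOfAttention {

    /** Returns the index of the most likely attention target. */
    static int estimate(double[] headPoseProb, int speakerIndex) {
        int best = 0;
        double bestScore = -1.0;
        for (int k = 0; k < headPoseProb.length; k++) {
            // Multiplicative boost for the target audio says is speaking.
            double score = headPoseProb[k] * (k == speakerIndex ? 2.0 : 1.0);
            if (score > bestScore) {
                bestScore = score;
                best = k;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Head pose slightly favors target 0, but audio says target 2 is speaking.
        System.out.println(estimate(new double[]{0.5, 0.2, 0.3}, 2));  // prints 2
    }
}
```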

Page 19: Sensor Fusion for Context Understanding


next paper ...

• Dempster-Shafer approach ...

• provides a mechanism for handling “belief” and “plausibility”

• cautiously stated, it generalizes the a priori probabilities of Bayes’ law to distributions (combination rule sketched below)

• (the difficulty, of course, is that usually neither the requisite probabilities nor the distributions are actually known)
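For reference, a compact implementation of Dempster's rule of combination, the standard algorithm behind the approach named above; the video/audio masses in main are made-up numbers. Subsets of a small frame of discernment are encoded as bitmasks, and the combined masses are renormalized by the conflict mass K.

```java
import java.util.HashMap;
import java.util.Map;

public final class DempsterShafer {

    /** m12(C) = (1/(1-K)) * sum over A∩B = C of m1(A) * m2(B), for nonempty C. */
    static Map<Integer, Double> combine(Map<Integer, Double> m1,
                                        Map<Integer, Double> m2) {
        Map<Integer, Double> combined = new HashMap<>();
        double conflict = 0.0;  // K: total mass landing on empty intersections
        for (var a : m1.entrySet()) {
            for (var b : m2.entrySet()) {
                int c = a.getKey() & b.getKey();  // bitwise AND = set intersection
                double mass = a.getValue() * b.getValue();
                if (c == 0) conflict += mass;
                else combined.merge(c, mass, Double::sum);
            }
        }
        double norm = 1.0 - conflict;  // assumes the sources are not in total conflict
        combined.replaceAll((set, mass) -> mass / norm);
        return combined;
    }

    public static void main(String[] args) {
        // Frame {A, B}: bitmask 1 = {A}, 2 = {B}, 3 = {A, B} (ignorance).
        Map<Integer, Double> video = Map.of(1, 0.6, 3, 0.4);
        Map<Integer, Double> audio = Map.of(2, 0.3, 3, 0.7);
        System.out.println(combine(video, audio));  // masses ≈ A:0.512, B:0.146, Θ:0.341
    }
}
```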

Page 20: Sensor Fusion for Context Understanding


focus-of-attention estimation from video and audio sensors

Page 21: Sensor Fusion for Context Understanding


expectations

Diagram summary: a sensor fusion mediator (e.g., Dempster-Shafer belief combination) combines Widget/sensor outputs into observations & hypotheses, guided by context AI rules and a dynamic configuration (time interval T, sensor list, updating flag, …).

Expected performance boost:

1. uncertainty & ambiguity representation to user applications

2. information consolidation & conflict resolution for users

3. adaptive sensor fusion support: switch to suitable algorithms

4. robustness to configuration change, and for some components to die gracefully

5. situational description support, using more & complex context

Page 22: Sensor Fusion for Context Understanding


conclusions

• preliminary experiments demonstrate the feasibility of the context-sensing architecture and methodology

• we expect further improvements via

– better uncertainty and ambiguity handling

– fusion of overlapping sensors

– context-adaptive widget capability

– sensor fusion mediators that coordinate resources

– a context information server that supports applications