
Page 1: Hidden Process Models

Hidden Process Models

Rebecca Hutchinson, Tom M. Mitchell, Indrayana Rustandi

June 28, 2006, ICML

Carnegie Mellon University Computer Science Department

Page 2: Hidden Process Models


Introduction

• Hidden Process Models (HPMs):
  – A new probabilistic model for time series data.
  – Designed for data generated by a collection of latent processes.

• Potential domains:
  – Biological processes (e.g. synthesizing a protein) in gene expression time series.
  – Human processes (e.g. walking through a room) in distributed sensor network time series.
  – Cognitive processes (e.g. making a decision) in functional Magnetic Resonance Imaging (fMRI) time series.

Page 3: Hidden Process Models


fMRI Data

[Figure: neural activity and the resulting hemodynamic response; x-axis: time (seconds), y-axis: signal amplitude.]

Features: 10,000 voxels, imaged every second.
Training examples: 10-40 trials (task repetitions).

Page 4: Hidden Process Models


Study: Pictures and Sentences

• Task: Decide whether the sentence describes the picture correctly; indicate with a button press.
• 13 normal subjects, 40 trials per subject.
• Sentences and pictures describe 3 symbols: *, +, and $, using 'above', 'below', 'not above', 'not below'.
• Images are acquired every 0.5 seconds.

[Trial timeline: at t=0, read sentence (or view picture); at 4 sec., view picture (or read sentence); press button; fixation/rest from 8 sec.]

Page 5: Hidden Process Models


Goals for fMRI

• To track cognitive processes over time.
  – Estimate process hemodynamic responses.
  – Estimate process timings.
• Allowing processes that do not directly correspond to the stimulus timing is a key contribution of HPMs!
• To compare hypotheses of cognitive behavior.

Page 6: Hidden Process Models


HPM Modeling Assumptions

• Model latent time series at the process level.
• Process instances share parameters based on their process types.
• Use prior knowledge from the experiment design.
• Sum process responses linearly.

Page 7: Hidden Process Models


HPM Formalism

HPM = <H, C, Φ, Σ>

H = <h1,…,hH>, a set of processes (e.g. ReadSentence)
h = <W, d, Ω, Θ>, a process
  W = response signature
  d = process duration
  Ω = allowable offsets
  Θ = multinomial parameters over values in Ω

C = <c1,…,cC>, a set of configurations
c = <π1,…,πL>, a set of process instances
π = <h, λ, O>, a process instance (e.g. ReadSentence(S1))
  h = process ID
  λ = timing landmark (e.g. stimulus presentation of S1)
  O = offset (takes values in Ω_h)

Φ = <φ1,…,φC>, priors over C
Σ = <σ1,…,σV>, standard deviation for each voxel
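To make the formalism concrete, here is a minimal sketch of these structures as Python dataclasses. The class and field names (and the choice of W as a duration-by-voxels array) are ours for illustration, not from the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Process:                    # h = <W, d, Omega, Theta>
    W: np.ndarray                 # response signature, shape (d, n_voxels)
    d: int                        # process duration, in time points
    offsets: List[int]            # Omega, the allowable offsets
    theta: List[float]            # Theta, multinomial P(offset) over Omega

@dataclass
class ProcessInstance:            # pi = <h, lambda, O>
    h: int                        # process ID (index into HPM.processes)
    landmark: int                 # lambda, timing landmark (e.g. stimulus onset)
    offset: int                   # O, takes values in the offsets of process h

@dataclass
class HPM:                        # HPM = <H, C, Phi, Sigma>
    processes: List[Process]                      # H
    configurations: List[List[ProcessInstance]]   # C, candidate sets of instances
    phi: np.ndarray                               # Phi, priors over configurations
    sigma: np.ndarray                             # Sigma, per-voxel noise std dev
```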

Page 8: Hidden Process Models


Processes of the HPM:

Process 1: ReadSentence
  Response signature W: (one response per voxel, v1 and v2)
  Duration d: 11 sec.
  Offsets Ω: {0, 1}
  P(Ω): {θ0, θ1}

Process 2: ViewPicture
  Response signature W: (one response per voxel, v1 and v2)
  Duration d: 11 sec.
  Offsets Ω: {0, 1}
  P(Ω): {θ0, θ1}

Input stimulus Δ: sentence at timing landmark λ1, picture at timing landmark λ2.

One configuration c of process instances π1, π2, …, πk (with prior φc). For example, process instance π2 has process h = 2 (ViewPicture), timing landmark λ2, and offset O = 1 (start time: λ2 + O).

Predicted mean: the response signatures of the instances in c, each shifted to its start time and summed, plus per-voxel noise N(0, σ1) for v1 and N(0, σ2) for v2.
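The predicted mean follows directly from the linear summation assumption: shift each instance's response signature to its start time (landmark + offset) and add the signatures voxel-wise. A hedged sketch reusing the dataclasses above (the helper name is hypothetical):

```python
def predicted_mean(hpm: HPM, config: List[ProcessInstance], T: int) -> np.ndarray:
    """Sum the shifted response signatures of all instances in a configuration."""
    n_voxels = hpm.sigma.shape[0]
    mu = np.zeros((T, n_voxels))
    for inst in config:
        proc = hpm.processes[inst.h]
        start = inst.landmark + inst.offset   # start time of this instance
        if start >= T:
            continue                          # instance starts after the scan ends
        stop = min(start + proc.d, T)         # clip responses that run past the scan
        mu[start:stop] += proc.W[: stop - start]
    return mu  # the observed Y is then mu plus N(0, sigma_v) noise at each voxel
```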

Page 9: Hidden Process Models


HPMs: the graphical model

[Graphical model: for each process instance k = 1,…,K, the timing landmark λ (observed) and offset o (unobserved) determine the start time s; together with the process type h (unobserved), these generate the observed data Yt,v for t = [1,T], v = [1,V]. The configuration c is unobserved.]

The set C of configurations constrains the joint distribution on {h(k), o(k)} ∀ k.

Page 10: Hidden Process Models


Encoding Experiment Design

Processes: ReadSentence = 1, ViewPicture = 2, Decide = 3.

Input stimulus Δ, with timing landmarks λ1 and λ2.

Configurations 1-4: the four allowed assignments of ReadSentence/ViewPicture to the two stimuli, combined with the two allowed placements of Decide (enumerated in the sketch below).

Constraints encoded:
  h(1) ∈ {1, 2}
  h(2) ∈ {1, 2}
  h(1) ≠ h(2)
  o(1) = 0
  o(2) = 0
  h(3) = 3
  o(3) ∈ {1, 2}
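The four configurations can be generated mechanically from these constraints. An illustrative sketch reusing the dataclasses above; we use 0-based indices for the slide's process IDs (0 = ReadSentence, 1 = ViewPicture, 2 = Decide), and attaching Decide to the second landmark is our assumption.

```python
def enumerate_configurations(lam1: int, lam2: int) -> List[List[ProcessInstance]]:
    configs = []
    for h1, h2 in [(0, 1), (1, 0)]:   # h(1), h(2) in {Sentence, Picture}, h(1) != h(2)
        for o3 in (1, 2):             # o(3) in {1, 2}
            configs.append([
                ProcessInstance(h=h1, landmark=lam1, offset=0),  # o(1) = 0
                ProcessInstance(h=h2, landmark=lam2, offset=0),  # o(2) = 0
                ProcessInstance(h=2, landmark=lam2, offset=o3),  # h(3) = Decide
            ])
    return configs  # 2 stimulus orderings x 2 Decide offsets = 4 configurations
```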

Page 11: Hidden Process Models


Inference

• Over configurations.
• Choose the most likely configuration:

  c* = argmax_c P(C = c | Y, Δ, HPM)
     = argmax_c P(Y | C = c, Δ, HPM) P(C = c | Δ, HPM)

• C = configuration, Y = observed data, Δ = input stimuli, HPM = model.
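A hedged sketch of this rule with a Gaussian likelihood around the predicted mean, reusing predicted_mean from above (the shared normalizing constant cancels under the argmax):

```python
from scipy.stats import norm

def most_likely_configuration(hpm: HPM, Y: np.ndarray) -> int:
    """Return argmax over c of log P(Y | c) + log P(c); Y has shape (T, n_voxels)."""
    T = Y.shape[0]
    scores = []
    for c, config in enumerate(hpm.configurations):
        mu = predicted_mean(hpm, config, T)
        log_lik = norm.logpdf(Y, loc=mu, scale=hpm.sigma).sum()  # sum over t and v
        scores.append(log_lik + np.log(hpm.phi[c]))
    return int(np.argmax(scores))
```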

Page 12: Hidden Process Models


Learning

• Parameters to learn:
  – Response signature W for each process.
  – Timing distribution Θ for each process.
  – Standard deviation σv for each voxel.

• Expectation-Maximization (EM) algorithm to estimate W and Θ.
  – E step: estimate a probability distribution over configurations.
  – M step: update estimates of W (using reweighted least squares) and Θ (using standard MLEs) based on the E step.
  – After convergence, use standard MLEs for Σ.
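For intuition, the E step is the soft version of the inference slide: a posterior over configurations under the current parameter estimates. A minimal sketch reusing the helpers above; the M step's reweighted least squares for W is omitted, and we fold the timing priors into phi for simplicity.

```python
def e_step(hpm: HPM, Y: np.ndarray) -> np.ndarray:
    """Posterior P(C = c | Y) under the current W, Theta, Sigma estimates."""
    T = Y.shape[0]
    log_post = np.array([
        norm.logpdf(Y, loc=predicted_mean(hpm, cfg, T), scale=hpm.sigma).sum()
        + np.log(hpm.phi[c])
        for c, cfg in enumerate(hpm.configurations)
    ])
    log_post -= log_post.max()   # subtract the max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()     # normalize to a distribution over configurations
```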

Page 13: Hidden Process Models


Learned HPM with 3 processes (S, P, D), and d = 13 sec.

[Figure: observed signal for a trial sequence S, P, S, P with an uncertain Decide process ("D?"); learned response signatures for S, P, and D; and the predicted signal with the D instances placed. The D start time is chosen by the program as t+18.]

Page 14: Hidden Process Models


ViewPicture in Visual Cortex

Offset  P(Offset)
0       0.725
1       0.275

Page 15: Hidden Process Models


ReadSentence in Visual Cortex

Offset  P(Offset)
0       0.625
1       0.375

Page 16: Hidden Process Models


Decide in Visual Cortex

Offset  P(Offset)
0       0.075
1       0.025
2       0.025
3       0.025
4       0.225
5       0.625

Page 17: Hidden Process Models


Comparing Cognitive Hypotheses

• Use cross-validation to choose a model.
  – GNB = HPM with ViewPicture and ReadSentence, d = 8 sec.
  – HPM-2 = HPM with ViewPicture and ReadSentence, d = 13 sec.
  – HPM-3 = HPM-2 + Decide.

Accuracy predicting picture vs. sentence (random = 0.5):

Subject  A      B      C
GNB      0.725  0.750  0.750
HPM-2    0.750  0.875  0.787
HPM-3    0.775  0.875  0.812

Data log likelihood:

Subject  A      B      C
GNB      -896   -786   -476
HPM-2    -876   -751   -466
HPM-3    -864   -713   -447

Page 18: Hidden Process Models


Are we learning the right number of processes?

• Use synthetic data where we know ground truth.
  – Generate training and test sets with 2/3/4 processes.
  – Train HPMs with 2/3/4 processes on each.
  – For each test set, select the HPM with the highest data log likelihood.

Processes in training/test data   Correct number of processes chosen for the test set
2                                 5/5
3                                 5/5
4                                 4/5

Total: 14/15 = 93.3%
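This selection criterion is easy to state as code. A hedged sketch reusing the helpers above, with the test log likelihood computed by marginalizing over configurations (function names are ours):

```python
from scipy.special import logsumexp

def select_num_processes(trained_hpms: dict, Y_test: np.ndarray) -> int:
    """Pick the candidate HPM (keyed by its number of processes) with the
    highest test-set data log likelihood."""
    def test_log_lik(hpm: HPM) -> float:
        T = Y_test.shape[0]
        return logsumexp([
            norm.logpdf(Y_test, loc=predicted_mean(hpm, cfg, T), scale=hpm.sigma).sum()
            + np.log(hpm.phi[c])
            for c, cfg in enumerate(hpm.configurations)
        ])
    return max(trained_hpms, key=lambda k: test_log_lik(trained_hpms[k]))
```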

Page 19: Hidden Process Models


Related Work

• fMRI
  – General Linear Model (Dale99)
    • Must assume timing of process onset to estimate hemodynamic response.
  – Computer models of human cognition (Just99, Anderson04)
    • Predict fMRI data rather than learning parameters of processes from the data.

• Machine Learning
  – Classification of windows of fMRI data (Cox03, Haxby01, Mitchell04)
    • Does not typically model overlapping hemodynamic responses.
  – Dynamic Bayes Networks (Murphy02, Ghahramani97)
    • HPM assumptions/constraints are difficult to encode in DBNs.

Page 20: Hidden Process Models


Conclusions

• Take-away messages:
  – HPMs are a probabilistic model for time series data generated by a collection of latent processes.
  – In the fMRI domain, HPMs can simultaneously estimate the hemodynamic response and localize the timing of cognitive processes.

• Future work:
  – Share parameters across voxels (extending Niculescu05).
  – Use parametric hemodynamic responses (e.g. Boynton96).
  – Improve algorithm complexities.
  – Learn process durations automatically.
  – Apply to open cognitive science problems.

Page 21: Hidden Process Models


References

John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind. Psychological Review, 111(4):1036-1060, 2004. http://act-r.psy.cmu.edu/about/.

Geoffrey M. Boynton, Stephen A. Engel, Gary H. Glover, and David J. Heeger. Linear systems analysis of functional magnetic resonance imaging in human V1. The Journal of Neuroscience, 16(13):4207–4221, 1996.

David D. Cox and Robert L. Savoy. Functional magnetic resonance imaging (fMRI) "brain reading": detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage, 19:261-270, 2003.

Anders M. Dale. Optimal experimental design for event-related fMRI. Human Brain Mapping, 8:109–114, 1999.

Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–275, 1997.

James V. Haxby, M. Ida Gobbini, Maura L. Furey, Alumit Ishai, Jennifer L. Schouten, and Pietro Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293:2425–2430, September 2001.

Marcel Adam Just, Patricia A. Carpenter, and Sashank Varma. Computational modeling of high-level cognition and brain function. Human Brain Mapping, 8:128–136, 1999. http://www.ccbi.cmu.edu/project 10modeling4CAPS.htm.

Tom M. Mitchell et al. Learning to decode cognitive states from brain images. Machine Learning, 57:145–175, 2004.

Kevin P. Murphy. Dynamic Bayesian networks. To appear in Probabilistic Graphical Models, M. Jordan (ed.), November 2002.

Radu Stefan Niculescu. Exploiting Parameter Domain Knowledge for Learning in Bayesian Networks. PhD thesis, Carnegie Mellon University, July 2005. CMU-CS-05-147.