
Hidden Markov Models: Outline

• Markov models
• Hidden Markov models
• Forward/Backward algorithm
• Viterbi algorithm
• Baum-Welch estimation algorithm


Markov Models

Observable Markov Models

• Observable states: $S_1, S_2, \ldots, S_N$. Below we designate them simply as $1, 2, \ldots, N$.
• The actual state at time $t$ is $q_t$, giving a state sequence $q_1, q_2, \ldots, q_t, \ldots, q_T$.
• First-order Markov assumption:
$$P(q_t = j \mid q_{t-1} = i, q_{t-2} = k, \ldots) = P(q_t = j \mid q_{t-1} = i)$$
• Stationarity:
$$P(q_t = j \mid q_{t-1} = i) = P(q_{t+l} = j \mid q_{t+l-1} = i)$$

Markov Models

State transition matrix A:

$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1N} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2N} \\
\vdots &        &        &        &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{iN} \\
\vdots &        &        &        &        & \vdots \\
a_{N1} & a_{N2} & \cdots & a_{Nj} & \cdots & a_{NN}
\end{pmatrix}$$

where
$$a_{ij} = P(q_t = j \mid q_{t-1} = i), \quad 1 \le i, j \le N$$

Constraints on $a_{ij}$:
$$a_{ij} \ge 0 \;\; \forall\, i, j; \qquad \sum_{j=1}^{N} a_{ij} = 1 \;\; \forall\, i$$

Markov Models: Example

States:
1. Rainy (R)
2. Cloudy (C)
3. Sunny (S)

State transition probability matrix:

$$A = \begin{pmatrix}
0.4 & 0.3 & 0.3 \\
0.2 & 0.6 & 0.2 \\
0.1 & 0.1 & 0.8
\end{pmatrix}$$

Compute the probability of observing SSRRSCS given that today is S.

Markov Models: Example

Basic conditional probability rule:
$$P(A, B) = P(A \mid B)\, P(B)$$

The Markov chain rule:
$$\begin{aligned}
P(q_1, q_2, \ldots, q_T) &= P(q_T \mid q_1, q_2, \ldots, q_{T-1})\, P(q_1, q_2, \ldots, q_{T-1}) \\
&= P(q_T \mid q_{T-1})\, P(q_1, q_2, \ldots, q_{T-1}) \\
&= P(q_T \mid q_{T-1})\, P(q_{T-1} \mid q_{T-2})\, P(q_1, q_2, \ldots, q_{T-2}) \\
&= P(q_T \mid q_{T-1})\, P(q_{T-1} \mid q_{T-2}) \cdots P(q_2 \mid q_1)\, P(q_1)
\end{aligned}$$

Markov Models: Example

Observation sequence O:
$$O = (S, S, S, R, R, S, C, S)$$

Using the chain rule we get:
$$\begin{aligned}
P(O \mid \text{model}) &= P(S, S, S, R, R, S, C, S \mid \text{model}) \\
&= P(S)\, P(S \mid S)\, P(S \mid S)\, P(R \mid S)\, P(R \mid R)\, P(S \mid R)\, P(C \mid S)\, P(S \mid C) \\
&= \pi_3\, a_{33}\, a_{33}\, a_{31}\, a_{11}\, a_{13}\, a_{32}\, a_{23} \\
&= (1)(0.8)^2(0.1)(0.4)(0.3)(0.1)(0.2) = 1.536 \times 10^{-4}
\end{aligned}$$
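This number is easy to reproduce. A minimal sketch in Python (the state encoding 0=R, 1=C, 2=S is our own choice):

```python
import numpy as np

# Transition matrix from the example; states 0=Rainy, 1=Cloudy, 2=Sunny.
A = np.array([[0.4, 0.3, 0.3],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])

def sequence_prob(A, states, p_first=1.0):
    """Chain-rule probability of a state sequence; p_first is P(q_1)."""
    p = p_first
    for prev, cur in zip(states, states[1:]):
        p *= A[prev, cur]
    return p

# O = (S, S, S, R, R, S, C, S), given that today is S, so P(q_1 = S) = 1.
print(sequence_prob(A, [2, 2, 2, 0, 0, 2, 1, 2]))  # 1.536e-04
```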

Markov Models: Example

What is the probability that the sequence remains in state i for exactly d time units?
$$p_i(d) = P(q_1 = i, q_2 = i, \ldots, q_d = i, q_{d+1} \ne i) = (a_{ii})^{d-1}(1 - a_{ii})$$

This is the exponential Markov chain duration density.

What is the expected value of the duration d in state i?
$$\bar{d}_i = \sum_{d=1}^{\infty} d\, p_i(d) = \sum_{d=1}^{\infty} d\,(a_{ii})^{d-1}(1 - a_{ii})$$

Markov Models: Example

$$\begin{aligned}
\bar{d}_i &= \sum_{d=1}^{\infty} d\,(a_{ii})^{d-1}(1 - a_{ii})
= (1 - a_{ii})\, \frac{\partial}{\partial a_{ii}} \left( \sum_{d=1}^{\infty} (a_{ii})^{d} \right) \\
&= (1 - a_{ii})\, \frac{\partial}{\partial a_{ii}} \left( \frac{a_{ii}}{1 - a_{ii}} \right)
= (1 - a_{ii})\, \frac{1}{(1 - a_{ii})^2} = \frac{1}{1 - a_{ii}}
\end{aligned}$$

Markov Models: Example

• Avg. number of consecutive sunny days = $\dfrac{1}{1 - a_{33}} = \dfrac{1}{1 - 0.8} = 5$
• Avg. number of consecutive cloudy days = $\dfrac{1}{1 - 0.6} = 2.5$
• Avg. number of consecutive rainy days = $\dfrac{1}{1 - 0.4} \approx 1.67$

Hidden Markov Models

• States are not observable.
• Observations are probabilistic functions of state.
• State transitions are still probabilistic.

Urn and Ball Model

• N urns containing colored balls
• M distinct colors of balls
• Each urn has a (possibly) different distribution of colors

Sequence generation algorithm (sketched in code below):
1. Pick an initial urn according to some random process.
2. Randomly pick a ball from the urn, record its color, and then replace it.
3. Select another urn according to a random selection process associated with the current urn.
4. Repeat steps 2 and 3.
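A minimal generative sketch of this procedure; the specific numbers (two urns, three colors, and all probabilities) are illustrative assumptions, not part of the tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: N=2 urns, M=3 colors.
pi = np.array([0.6, 0.4])        # step 1: initial urn distribution
A = np.array([[0.7, 0.3],        # step 3: A[i, j] = P(next urn j | urn i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.3, 0.2],   # step 2: B[i, k] = P(color k | urn i)
              [0.1, 0.2, 0.7]])

def generate(T):
    """Emit T ball colors; the urn sequence itself stays hidden."""
    urn = rng.choice(2, p=pi)
    colors = []
    for _ in range(T):
        colors.append(int(rng.choice(3, p=B[urn])))
        urn = rng.choice(2, p=A[urn])
    return colors

print(generate(10))
```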

The Trellis

[Figure: trellis diagram — states 1, …, N on the vertical axis; time 1, 2, …, t−1, t, t+1, t+2, …, T−1, T on the horizontal axis; observations o_1, o_2, …, o_T emitted along the time axis.]

Elements of Hidden Markov Models

• N – the number of hidden states
• Q – set of states, Q = {1, 2, …, N}
• M – the number of symbols
• V – set of symbols, V = {1, 2, …, M}
• A – the state-transition probability matrix:
$$a_{ij} = P(q_{t+1} = j \mid q_t = i), \quad 1 \le i, j \le N$$
• B – observation probability distribution:
$$B_j(k) = P(o_t = v_k \mid q_t = j), \quad 1 \le j \le N, \; 1 \le k \le M$$
• $\pi$ – the initial state distribution:
$$\pi_i = P(q_1 = i), \quad 1 \le i \le N$$
• $\lambda = (A, B, \pi)$ – the entire model
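In code, these elements reduce to a few arrays. A minimal sketch of a model container, instantiated with the coin-tossing model used later in the tutorial (the symbol encoding 0=H, 1=T is our own choice):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HMM:
    A: np.ndarray   # (N, N) state-transition matrix, each row sums to 1
    B: np.ndarray   # (N, M) observation probabilities, B[j, k] = P(o_t = v_k | q_t = j)
    pi: np.ndarray  # (N,) initial state distribution

# The three-coin model from the example below: symbols 0=H, 1=T.
coins = HMM(A=np.full((3, 3), 1/3),
            B=np.array([[0.5, 0.5],
                        [0.75, 0.25],
                        [0.25, 0.75]]),
            pi=np.full(3, 1/3))
```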

Three Basic Problems

1. EVALUATION – given observation O = (o_1, o_2, …, o_T) and model $\lambda = (A, B, \pi)$, efficiently compute $P(O \mid \lambda)$.
   • Hidden states complicate the evaluation.
   • Given two models $\lambda_1$ and $\lambda_2$, this can be used to choose the better one.
2. DECODING – given observation O = (o_1, o_2, …, o_T) and model $\lambda$, find the optimal state sequence q = (q_1, q_2, …, q_T).
   • The optimality criterion has to be decided (e.g. maximum likelihood).
   • An "explanation" of the data.
3. LEARNING – given O = (o_1, o_2, …, o_T), estimate model parameters $\lambda = (A, B, \pi)$ that maximize $P(O \mid \lambda)$.

Solution to Problem 1

Problem: Compute $P(o_1, o_2, \ldots, o_T \mid \lambda)$.

Algorithm:
• Let q = (q_1, q_2, …, q_T) be a state sequence.
• Assume the observations are independent given the states:
$$P(O \mid q, \lambda) = \prod_{t=1}^{T} P(o_t \mid q_t, \lambda) = b_{q_1}(o_1)\, b_{q_2}(o_2) \cdots b_{q_T}(o_T)$$
• The probability of a particular state sequence is:
$$P(q \mid \lambda) = \pi_{q_1}\, a_{q_1 q_2}\, a_{q_2 q_3} \cdots a_{q_{T-1} q_T}$$
• Also,
$$P(O, q \mid \lambda) = P(O \mid q, \lambda)\, P(q \mid \lambda)$$

Solution to Problem 1

• Enumerate all paths and sum their probabilities:
$$P(O \mid \lambda) = \sum_{q} P(O \mid q, \lambda)\, P(q \mid \lambda)$$
• There are $N^T$ state sequences, each requiring $O(T)$ calculations.
• Complexity: $O(T N^T)$ calculations.
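A direct transcription of this enumeration, a sketch usable only for tiny N and T, which is precisely why the forward procedure on the next slides matters:

```python
import numpy as np
from itertools import product

def evaluate_brute_force(A, B, pi, obs):
    """P(O | lambda) by summing P(O, q | lambda) over all N**T paths."""
    N, T = len(pi), len(obs)
    total = 0.0
    for q in product(range(N), repeat=T):        # N**T state sequences
        p = pi[q[0]] * B[q[0], obs[0]]
        for t in range(1, T):                    # O(T) work per sequence
            p *= A[q[t-1], q[t]] * B[q[t], obs[t]]
        total += p
    return total
```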

Forward Procedure: Intuition

[Figure: trellis detail — every state k at time t+1 is reached from all states 1, …, N at time t via the transition probabilities a_{1k}, …, a_{Nk}.]

Forward Algorithm

Define the forward variable $\alpha_t(i)$ as:
$$\alpha_t(i) = P(o_1, o_2, \ldots, o_t, q_t = i \mid \lambda)$$

$\alpha_t(i)$ is the probability of observing the partial sequence $(o_1, o_2, \ldots, o_t)$ such that the state $q_t$ is $i$.

Induction:
1. Initialization: $\alpha_1(i) = \pi_i\, b_i(o_1)$
2. Induction: $\alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i)\, a_{ij} \right] b_j(o_{t+1})$
3. Termination: $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$
4. Complexity: $O(N^2 T)$
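A vectorized sketch of the forward pass, following the conventions above (row-stochastic A, and B[j, k] = b_j(v_k)):

```python
import numpy as np

def forward(A, B, pi, obs):
    """Return alpha (T x N) and P(O | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                    # 1. initialization
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]  # 2. induction, O(N^2) per step
    return alpha, alpha[-1].sum()                   # 3. termination
```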

Example

Consider the following coin-tossing experiment:

         State 1   State 2   State 3
  P(H)   0.5       0.75      0.25
  P(T)   0.5       0.25      0.75

• state-transition probabilities equal to 1/3
• initial state probabilities equal to 1/3

Example

1. You observe O = (H, H, H, H, T, H, T, T, T, T). What state sequence q is most likely? What is the joint probability $P(O, q \mid \lambda)$ of the observation sequence and the state sequence?
2. What is the probability that the observation sequence came entirely from state 1?
3. Consider the observation sequence $O = (H, T, T, H, T, H, H, T, T, H)$. How would your answers to parts 1 and 2 change?

Example

4. If the state transition probabilities were:
$$A' = \begin{pmatrix}
0.9  & 0.45 & 0.45 \\
0.05 & 0.1  & 0.45 \\
0.05 & 0.45 & 0.1
\end{pmatrix}$$
how would the new model $\lambda'$ change your answers to parts 1–3?
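Parts 1 and 2 can be explored numerically. A sketch for the uniform model (the 0/1 encoding of H/T and 0-indexed states are our own choices); with uniform transitions and initial probabilities, the most likely path simply picks the best-emitting state per symbol:

```python
import numpy as np

A = np.full((3, 3), 1/3)                 # uniform transitions
pi = np.full(3, 1/3)                     # uniform initial distribution
B = np.array([[0.5, 0.5],                # rows: states 1..3; columns: 0=H, 1=T
              [0.75, 0.25],
              [0.25, 0.75]])

O = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])   # H H H H T H T T T T

# Part 1: per-symbol argmax is optimal here, since A and pi are uniform.
q = B[:, O].argmax(axis=0)
p_joint = pi[q[0]] * np.prod(B[q, O]) * (1/3) ** (len(O) - 1)

# Part 2: probability that the whole sequence came from state 1.
p_state1 = pi[0] * np.prod(B[0, O]) * (1/3) ** (len(O) - 1)
print(q + 1, p_joint, p_state1)
```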

Backward Algorithm

[Figure: trellis diagram — states 1, …, N against time 1, 2, …, T with observations o_1, o_2, …, o_T; the backward pass runs from time T back toward time t.]

Backward Algorithm

Define the backward variable $\beta_t(i)$ as:
$$\beta_t(i) = P(o_{t+1}, o_{t+2}, \ldots, o_T \mid q_t = i, \lambda)$$

$\beta_t(i)$ is the probability of observing the partial sequence $(o_{t+1}, o_{t+2}, \ldots, o_T)$ given that the state $q_t$ is $i$.

Induction:
1. Initialization: $\beta_T(i) = 1$
2. Induction:
$$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j), \quad 1 \le i \le N, \; t = T-1, \ldots, 1$$
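A sketch of the backward pass, mirroring the forward() sketch above:

```python
import numpy as np

def backward(A, B, pi, obs):
    """Return beta (T x N), with beta[t, i] = P(o_{t+1}..o_T | q_t = i, lambda)."""
    T, N = len(obs), len(pi)
    beta = np.zeros((T, N))
    beta[-1] = 1.0                                   # 1. initialization
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])   # 2. induction
    return beta
```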

Solution to Problem 2

Choose the most likely path: find the path (q_1, q_2, …, q_T) that maximizes the likelihood $P(q_1, q_2, \ldots, q_T \mid O, \lambda)$.

Solution by dynamic programming. Define:
$$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P(q_1, q_2, \ldots, q_{t-1}, q_t = i, o_1, o_2, \ldots, o_t \mid \lambda)$$

$\delta_t(i)$ is the probability of the highest-probability path ending in state $i$ at time $t$. By induction we have:
$$\delta_{t+1}(j) = \left[ \max_i \delta_t(i)\, a_{ij} \right] b_j(o_{t+1})$$

Viterbi Algorithm

[Figure: trellis diagram — states 1, …, N against time 1, 2, …, T with observations o_1, …, o_T; each state k at time t+1 keeps only the best incoming transition among a_{1k}, …, a_{Nk}.]

Viterbi Algorithm

Initialization:
$$\delta_1(i) = \pi_i\, b_i(o_1), \qquad \psi_1(i) = 0, \qquad 1 \le i \le N$$

Recursion:
$$\delta_t(j) = \max_{1 \le i \le N} [\delta_{t-1}(i)\, a_{ij}]\, b_j(o_t), \qquad
\psi_t(j) = \arg\max_{1 \le i \le N} [\delta_{t-1}(i)\, a_{ij}], \qquad 2 \le t \le T, \; 1 \le j \le N$$

Termination:
$$P^* = \max_{1 \le i \le N} [\delta_T(i)], \qquad q_T^* = \arg\max_{1 \le i \le N} [\delta_T(i)]$$

Path (state sequence) backtracking:
$$q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$$
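A sketch of the full algorithm in the same conventions as the forward/backward sketches (0-indexed states and times):

```python
import numpy as np

def viterbi(A, B, pi, obs):
    """Return the most likely path q* and P* = max_q P(O, q | lambda)."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]                 # initialization
    for t in range(1, T):                        # recursion
        scores = delta[t-1][:, None] * A         # scores[i, j] = delta_{t-1}(i) a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]             # termination
    for t in range(T - 1, 0, -1):                # backtracking
        path.append(int(psi[t, path[-1]]))
    return path[::-1], delta[-1].max()
```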

Solution to Problem 3

Estimate $\lambda = (A, B, \pi)$ to maximize $P(O \mid \lambda)$. There is no analytic method because of the complexity, so an iterative solution is used: the Baum-Welch algorithm (actually an instance of the EM algorithm):

1. Let the initial model be $\lambda_0$.
2. Compute a new $\lambda$ based on $\lambda_0$ and the observation O.
3. If $\log P(O \mid \lambda) - \log P(O \mid \lambda_0) < \text{DELTA}$, stop.
4. Else set $\lambda_0 \leftarrow \lambda$ and go to step 2.

Baum-Welch: Preliminaries

Define:
$$\xi_t(i, j) = P(q_t = i, q_{t+1} = j \mid O, \lambda)$$

Baum-Welch: Preliminaries

$\xi_t(i, j)$ is the probability of being in state $i$ at time $t$, and in state $j$ at time $t+1$:
$$\xi_t(i, j) = \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{P(O \mid \lambda)}
= \frac{\alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j)}$$

Define $\gamma_t(i)$ as the probability of being in state $i$ at time $t$, given the observation sequence:
$$\gamma_t(i) = \sum_{j=1}^{N} \xi_t(i, j)$$
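Given alpha and beta from the earlier sketches, both quantities follow in a few lines:

```python
import numpy as np

def xi_gamma(A, B, alpha, beta, obs):
    """xi[t, i, j] and gamma[t, i] for t = 0..T-2, from forward/backward variables."""
    T = len(obs)
    xi = np.zeros((T - 1,) + A.shape)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t+1]] * beta[t+1])
        xi[t] /= xi[t].sum()           # the denominator equals P(O | lambda)
    gamma = xi.sum(axis=2)             # gamma_t(i) = sum_j xi_t(i, j)
    return xi, gamma
```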

Baum-Welch: Preliminaries

• $\sum_{t=1}^{T-1} \gamma_t(i)$ is the expected number of times state $i$ is visited.
• $\sum_{t=1}^{T-1} \xi_t(i, j)$ is the expected number of transitions from state $i$ to state $j$.

Baum-Welch: Update Rules

• $\bar{\pi}_i$ = expected frequency in state $i$ at time $t = 1$:
$$\bar{\pi}_i = \gamma_1(i)$$
• $\bar{a}_{ij}$ = (expected number of transitions from state $i$ to state $j$) / (expected number of transitions from state $i$):
$$\bar{a}_{ij} = \frac{\sum_t \xi_t(i, j)}{\sum_t \gamma_t(i)}$$
• $\bar{b}_j(k)$ = (expected number of times in state $j$ and observing symbol $k$) / (expected number of times in state $j$):
$$\bar{b}_j(k) = \frac{\sum_{t:\, o_t = v_k} \gamma_t(j)}{\sum_t \gamma_t(j)}$$
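Combining the sketches above (forward, backward, xi_gamma, assumed to be in scope) gives one re-estimation step; iterate it until the log-likelihood improvement falls below DELTA, as in the outline of the algorithm:

```python
import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One Baum-Welch (EM) update; returns the new model and P(O | old model)."""
    obs = np.asarray(obs)
    alpha, likelihood = forward(A, B, pi, obs)
    beta = backward(A, B, pi, obs)
    xi, _ = xi_gamma(A, B, alpha, beta, obs)
    gamma = alpha * beta / likelihood                  # gamma_t(i) for all t

    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.stack([gamma[obs == k].sum(axis=0)      # counts of symbol k in state j
                      for k in range(B.shape[1])], axis=1)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi, likelihood
```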

References

• L. Rabiner and B. Juang. An introduction to hidden Markov models. IEEE ASSP Magazine, pp. 4–16, Jan. 1986.
• Thad Starner, Joshua Weaver, and Alex Pentland. Machine Vision Recognition of American Sign Language Using Hidden Markov Models.
• Thad Starner and Alex Pentland. Visual Recognition of American Sign Language Using Hidden Markov Models. In International Workshop on Automatic Face and Gesture Recognition, pages 189–194, 1995. http://citeseer.nj.nec.com/starner95visual.html