
Markov Chain

Nur Aini Masruroh


Discrete Time Markov Chains

Consider a stochastic process {Xn, n = 0, 1, 2, …} that takes on a finite or countable number of possible values

– Assume the set of possible values is the non-negative integers {0, 1, 2, …}

• Let Xn = i denote that the process is in state i at time n

• {Xn, n = 0, 1, 2, …} describes the evolution of the process over time

• Define Pij as the probability that the process will next be in state j given that it is currently in state i: Pij = P{Xn+1 = j | Xn = i}

• Such a stochastic process is known as a Discrete Time Markov Chain (DTMC)
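
As a quick illustration (not from the original slides), the transition probabilities Pij are all that is needed to simulate a DTMC. A minimal Python sketch, using a made-up two-state matrix:

```python
import random

def step(P, i):
    """Sample the next state j given the current state i, using row i of the
    transition matrix P (a list of rows, each summing to 1)."""
    u = random.random()
    cumulative = 0.0
    for j, p in enumerate(P[i]):
        cumulative += p
        if u < cumulative:
            return j
    return len(P[i]) - 1  # guard against floating-point round-off

# Hypothetical two-state chain: P[i][j] = P{X(n+1) = j | X(n) = i}
P = [[0.9, 0.1],
     [0.4, 0.6]]
path = [0]                         # start in state 0
for _ in range(10):
    path.append(step(P, path[-1]))
print(path)                        # one sample trajectory
```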


Discrete Time Markov Chains

A DTMC can be used to model many real-life stochastic phenomena:

– Example: Xn can be the inventory on hand of a warehouse at the nth period

– Example: Xn can be the amount of money a taxi driver earns on his nth trip

– Example: Xn can be the status of a machine on the nth day of operation

– Example: Xn can be the weather (rain/shine) on the nth day


DTMC properties

Markov Property: the conditional distribution of any future state Xn+1, given the past states X0, X1, …, Xn−1 and the present state Xn, is independent of the past states and depends only on the present state:

P{Xn+1 = j | Xn = i, Xn−1 = in−1, …, X1 = i1, X0 = i0} = P{Xn+1 = j | Xn = i} = Pij

– Pij represents the probability that the process will, when in state i, next make a transition into state j

Since probabilities are non-negative and the process must make a transition into some state, we have

$$P_{ij} \ge 0, \quad i, j \ge 0; \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, \ldots$$


Short term analysis

The N² transition probabilities can be represented by a matrix P with elements pij, where 0 ≤ pij ≤ 1 and each row sums to 1:

$$P = \{p_{ij}\} = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1N} \\ p_{21} & p_{22} & \cdots & p_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ p_{N1} & p_{N2} & \cdots & p_{NN} \end{pmatrix}$$
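
As an aside (not in the original slides), these two conditions are easy to check programmatically. A short NumPy sketch, using the mood matrix from Example 3 below:

```python
import numpy as np

# Transition matrix of Gary's mood (Example 3): states C, S, G
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Every element must lie in [0, 1] ...
assert np.all((P >= 0) & (P <= 1))
# ... and every row must sum to 1 (a row-stochastic matrix)
assert np.allclose(P.sum(axis=1), 1.0)
```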


Defining a DTMC

Specify the state, s

• a variable that describes the present situation of the system, e.g. pressure, temperature, etc.

Demonstrate the Markov property

Find the transition probabilities


Example 1: rain

Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not. Model this as a DTMC!
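
A sketch of the answer: with state 0 = rain and state 1 = no rain, each row lists the probabilities of rain and no rain tomorrow, so

$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$$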


Example 2: stock movement

Consider the following model for the value of a stock: at the end of a given day, the price is recorded. If the stock has gone up, the probability that it will go up tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is only 0.5 (a stock staying the same is counted as a decrease).

Model the price movement (up/down) as a DTMC


Example 3: Mood

On any given day Gary is either cheerful (C), so-so (S), or glum (G).

If he is cheerful today, then he will be C, S or G tomorrow with respective probabilities 0.5, 0.4, 0.1.

If he is so-so today, then he will be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3.

If he is glum today, then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5.

Model this as a DTMC


Example 4: Gambler’s ruin

My initial fortune is $1 and my opponent’s fortune is $2. I win a play with probability p, in which case I receive $1 and my opponent loses $1. We play until one of us has fortune 0.

Model this as a DTMC
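
A sketch of one possible model: since the total money in play is $3, let the state be my fortune, taking values in {0, 1, 2, 3}; states 0 and 3 end the game and are absorbing. A short Python helper (the function name is ours) that builds the transition matrix:

```python
import numpy as np

def gamblers_ruin(total=3, p=0.5):
    """Transition matrix for my fortune in {0, 1, ..., total};
    0 and total are absorbing states (the game is over)."""
    P = np.zeros((total + 1, total + 1))
    P[0, 0] = P[total, total] = 1.0   # absorbing barriers
    for i in range(1, total):
        P[i, i + 1] = p               # I win a play: fortune up by $1
        P[i, i - 1] = 1 - p           # I lose a play: fortune down by $1
    return P

print(gamblers_ruin(total=3, p=0.6))
```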


Example 5: beer branding

A leading brewery company in Singapore (label T) has asked its IE team to analyze its market position. It is particularly concerned about its major competitor (label A).

It is believed (and somewhat verified by consumer surveys) that consumers choose their brand preference based on the brand of beer they are currently consuming.

From market survey data collected monthly:

– 95% of the current consumers of label T will still prefer label T in the next month, while 3% will switch to label A and the remaining 2% to label C (all other foreign brands)

– 90% of consumers of label A will remain loyal to label A, while 8% will shift preferences to label T

– 80% of consumers of label C will prefer label C, while 10% will shift preferences to label T.

Model the brand loyalty/switching of consumers as a DTMC
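
A sketch of the answer, filling in the unstated percentages so that each row sums to 1: with states ordered (T, A, C),

$$P = \begin{pmatrix} 0.95 & 0.03 & 0.02 \\ 0.08 & 0.90 & 0.02 \\ 0.10 & 0.10 & 0.80 \end{pmatrix}$$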


State transition diagram

The transition matrix of the Markov chain can be represented by a state transition diagram (“bubble” diagram)

In the diagram:

circles = nodes = states: one for each element of the state space

arrows = arcs = transitions: one for each non-zero probability (labeled with their probabilities)

Diagram rule: the labels on the arrows out of any state must sum to 1.

Try to draw the state transition diagrams for the Gambler’s ruin and beer branding examples!


Assignment 2

Consider an experiment in which a rat is wandering inside a maze. The maze has six rooms labeled F, 2, 3, 4, 5 and S. If a room has k doors, the probability that the rat selects a particular door is 1/k. However, if the rat reaches room F, which contains food, or room S, which gives it an electrical shock, then it is kept there and the experiment stops.

Model the rat’s movement around the maze as a DTMC and draw the state transition diagram!


Markov process behavior

Multistep transition probabilities

Φij(n): the probability that the process will occupy state j at time n given that it occupied state i at time 0 (the n-step transition probability from state i to state j)

Φij(n) = P{s(n) = j | s(0) = i}, 1 ≤ i, j ≤ N, n = 0, 1, 2, …

The probability of being in state j after n + 1 transitions,

P{s(n+1) = j | s(0) = i},

can be rewritten in terms of the joint probability that state j is occupied at time n + 1 and state k is occupied at time n:

$$P\{s(n+1)=j \mid s(0)=i\} = \sum_{k=1}^{N} P\{s(n+1)=j,\; s(n)=k \mid s(0)=i\}$$


Multistep transition probabilities

From the definition of conditional probability:

$$P\{s(n+1)=j \mid s(0)=i\} = \sum_{k=1}^{N} P\{s(n)=k \mid s(0)=i\}\; P\{s(n+1)=j \mid s(n)=k,\; s(0)=i\}$$

If n ≥ 1, by the Markov assumption that the future trajectory depends only on the present state,

$$P\{s(n+1)=j \mid s(n)=k,\; s(0)=i\} = P\{s(n+1)=j \mid s(n)=k\} = p_{kj}$$


Multistep transition probabilities

Based on the Markov property, the recursion for the multistep transition probabilities can be written:

$$\Phi_{ij}(n+1) = \sum_{k=1}^{N} \Phi_{ik}(n)\, p_{kj}, \qquad n = 0, 1, 2, \ldots$$

with

$$\Phi_{ij}(0) = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}, \qquad 1 \le i, j \le N$$

$$\sum_{j=1}^{N} \Phi_{ij}(n) = 1, \qquad i = 1, 2, \ldots, N, \; n = 0, 1, 2, \ldots$$


Multistep transition probabilities

Matrix formulation:

$$\Phi(n) = \{\Phi_{ij}(n)\} = \begin{pmatrix} \Phi_{11}(n) & \Phi_{12}(n) & \cdots & \Phi_{1N}(n) \\ \Phi_{21}(n) & \Phi_{22}(n) & \cdots & \Phi_{2N}(n) \\ \vdots & \vdots & \ddots & \vdots \\ \Phi_{N1}(n) & \Phi_{N2}(n) & \cdots & \Phi_{NN}(n) \end{pmatrix}$$

Φ(n+1) = Φ(n) P

Φ(0) = I (the identity matrix)

Φ(1) = Φ(0) P = IP = P

Φ(2) = Φ(1) P = P²

Φ(n) = Pⁿ, n = 0, 1, 2, …
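
Numerically, Φ(n) can be obtained directly as a matrix power. A short NumPy sketch, using the two-brand matrix from the marketing example on the next slide:

```python
import numpy as np

P = np.array([[0.8, 0.2],    # last purchase A: next purchase A w.p. 0.8
              [0.3, 0.7]])   # last purchase B: next purchase A w.p. 0.3

Phi5 = np.linalg.matrix_power(P, 5)   # Phi(5) = P^5
print(Phi5)   # Phi5[i, j] = P{s(5) = j | s(0) = i}
```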


Marketing example

In a hypothetical market there are only 2 brands, A and B. A typical consumer in this market buys brand A with probability of 0.8 if his last purchase was brand A and with probability of 0.3 if his last purchase was brand B.

How does the probability of the consumer’s purchasing each brand depend on the number of purchases he has made and on the brand he purchased initially?


State probabilities

Suppose the probability that state i is occupied at time n is given the symbol πi(n), defined as

πi(n) = P{s(n) = i}, i = 1, 2, …, N, n = 0, 1, 2, …

$$\sum_{i=1}^{N} \pi_i(n) = 1$$

We have Φij(n) = P{s(n) = j|s(0) = i}

If we multiply both sides by P{s(0) = i} and sum over i from 1 to N, we obtain

$$\sum_{i=1}^{N} P\{s(0)=i\}\,\Phi_{ij}(n) = \sum_{i=1}^{N} P\{s(0)=i\}\, P\{s(n)=j \mid s(0)=i\} = \sum_{i=1}^{N} P\{s(0)=i,\, s(n)=j\} = P\{s(n)=j\}$$

that is,

$$\pi_j(n) = \sum_{i=1}^{N} \pi_i(0)\,\Phi_{ij}(n) \qquad \text{or} \qquad \pi(n) = \pi(0)\,\Phi(n)$$


State probabilities

The state probabilities at time n can be determined by weighting the multistep transition probabilities by the probability of starting in each state and summing over all states. Writing π(n) = [π1(n), π2(n), …, πN(n)], we have

π(n) = π(0)Φ(n), n = 0, 1, 2, … so π(n) = π(0)Pⁿ

π(n+1) = π(0)Pⁿ⁺¹ = π(0)PⁿP, hence π(n+1) = π(n)P, n = 0, 1, 2, …

The state probability vector at any time can be calculated by post-multiplying the state probability vector at the preceding time by P

Try the marketing example!
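
A sketch of the computation in Python: iterate π(n+1) = π(n)P from each possible initial purchase and watch the dependence on the starting brand fade:

```python
import numpy as np

P = np.array([[0.8, 0.2],        # rows/columns ordered (A, B)
              [0.3, 0.7]])

for start in ([1.0, 0.0], [0.0, 1.0]):   # first purchase A, then B
    pi = np.array(start)
    for n in range(1, 6):
        pi = pi @ P                       # pi(n) = pi(n-1) P
    print(start, "->", pi)                # both rows approach the same limit
```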


Asymptotic behavior of state probabilities

When the process is allowed to make a large number of transitions,

π(∞) = π(0)P^∞

If we define the limiting state probability vector as π, then

π = π(0)Φ

where Φ is the limiting multistep transition probability matrix, Φ = Φ(∞) = P^∞


Long term behavior and analysis

• In designing physical systems, there are often “start-up” effects that differ from what can be expected in the long run

– A designer might be interested in the long-run behavior, or operations

• The law of large numbers (LLN) holds for iid random variables.

• Question: do similar limiting results hold for a DTMC when n is large?

– Limiting distributions

– Long-term averages


Limiting probabilities

Monodesmic process: a Markov process whose limiting matrix Φ has equal rows

– Sufficient condition: the process is able to make a transition from every state to every other state

– Necessary condition: there exists only one subset of states that must be occupied after infinitely many transitions

Recall:

$$\pi_j = \sum_{i=1}^{N} \pi_i(0)\,\Phi_{ij}, \qquad j = 1, 2, \ldots, N$$

Because all rows of Φ are equal for a monodesmic process, each element Φij is equal to a value Φj that depends only on the column index j. Then

$$\pi_j = \sum_{i=1}^{N} \pi_i(0)\,\Phi_j = \Phi_j \sum_{i=1}^{N} \pi_i(0) = \Phi_j$$

so the limiting state probabilities are independent of the initial state.


Direct solution for limiting state probabilities

If the state probability vector has attained its limiting value, π, it must satisfy the equation

π = πP

This implies the N simultaneous equations

$$\pi_j = \sum_{i=1}^{N} \pi_i\, p_{ij}, \qquad j = 1, 2, \ldots, N$$

together with the normalization condition

$$\sum_{i=1}^{N} \pi_i = 1$$

Try the marketing example!
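
A sketch of a numerical solution: the balance equations are linearly dependent, so drop one and append the normalization condition, then solve the resulting linear system. Using the marketing example’s matrix:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
N = P.shape[0]

# (P^T - I) pi = 0 gives N dependent equations; drop one, append sum(pi) = 1
A = np.vstack([(P.T - np.eye(N))[:-1], np.ones(N)])
b = np.zeros(N)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)    # [0.6, 0.4] for this matrix
```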


Example 1: rain

Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not. Find the limiting probabilities π0, π1.
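
A sketch of the solution: the balance equation π0 = απ0 + βπ1 together with π0 + π1 = 1 gives

$$\pi_0 = \frac{\beta}{1 - \alpha + \beta}, \qquad \pi_1 = \frac{1 - \alpha}{1 - \alpha + \beta}$$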


Example 2: Tank warfare

One theoretical model for tank warfare expresses the firing mechanism as a two-state Markov process in which the states are 0: a hit, 1: a miss. Thus Xn is 1 or 0 depending on whether the nth shot is a miss or a hit. Suppose the probability of the tank hitting on a certain shot after it had hit on the previous shot is ¾, and the probability of the tank hitting on a certain shot after it had missed on the previous shot is ½.

– Find the probability that the 11th shot fired from the tank (the 10th shot after the first) hits its target, given that the initial shot hit.

– Suppose that a tank commander on first encountering an enemy fires his first round for ranging purposes, and suppose that it is fairly unlikely that the first round hits. More specifically, suppose that the probability of a hit on the initial shot is ¼ and the probability of a miss on the initial shot is ¾. What is the probability that the fourth shot hits?
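
A sketch of both computations in Python (states ordered 0 = hit, 1 = miss, as in the slide):

```python
import numpy as np

P = np.array([[0.75, 0.25],   # after a hit: hit again w.p. 3/4
              [0.50, 0.50]])  # after a miss: hit w.p. 1/2

# (a) P{11th shot hits | 1st shot hit} = Phi(10)[0, 0]
Phi10 = np.linalg.matrix_power(P, 10)
print(Phi10[0, 0])            # very close to the limiting value 2/3

# (b) initial shot: hit w.p. 1/4, miss w.p. 3/4; the 4th shot is 3 transitions later
pi0 = np.array([0.25, 0.75])
pi3 = pi0 @ np.linalg.matrix_power(P, 3)
print(pi3[0])                 # P{4th shot hits} = 0.66015625
```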