ECE-517 Reinforcement Learning in Artificial Intelligence
Lecture 7: Finite Horizon MDPs, Dynamic Programming
Dr. Itamar Arel
College of Engineering, Department of Electrical Engineering and Computer Science
The University of Tennessee, Fall 2015
September 10, 2015

Source: web.eecs.utk.edu/~ielhanan/courses/ECE-517/notes/lecture...


Page 1:

ECE-517 Reinforcement Learning in Artificial Intelligence

Lecture 7: Finite Horizon MDPs, Dynamic Programming

Dr. Itamar Arel

College of Engineering, Department of Electrical Engineering and Computer Science

The University of Tennessee, Fall 2015

September 10, 2015

Page 2:

ECE 517 - Reinforcement Learning in AI 2

Outline

Finite Horizon MDPs

Dynamic Programming

Page 3:

Finite Horizon MDPs – Value Functions

The duration, or expected duration, of the process is finite

Let’s consider the following return functions:

The expected sum of rewards for N steps:

  V_N(s) = E[ Σ_{t=1}^{N} r_t | s_0 = s ],   V(s) = lim_{N→∞} E[ V_N(s) ]

The expected discounted sum of rewards:

  V_γ(s) = lim_{N→∞} E[ Σ_{t=1}^{N} γ^{t−1} r_t | s_0 = s ],   0 ≤ γ < 1

A sufficient condition for the latter to converge is |r_t| < r_max.

Page 4:

Return functions

If |r_t| < r_max holds, then

  V_γ(s) ≤ Σ_{t=1}^{∞} γ^{t−1} r_max = r_max / (1 − γ)

Note that this bound is very sensitive to the value of γ.

The expected average reward:

  M(s) = lim_{N→∞} E[ (1/N) Σ_{t=1}^{N} r_t | s_0 = s ] = lim_{N→∞} V_N(s) / N

Note that the above limit does not always exist!
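The three return criteria above can be illustrated numerically. A minimal sketch, assuming a hypothetical constant reward stream r_t = 1 with γ = 0.9, so the discounted bound r_max / (1 − γ) = 10 is (nearly) attained:

```python
# Sketch of the three return criteria on a toy deterministic reward stream.
# The stream (constant r_t = 1) and the horizon N = 1000 are assumptions
# for illustration, not the lecture's example.

def finite_horizon_return(rewards):
    """Expected sum of rewards for N steps (here a deterministic stream)."""
    return sum(rewards)

def discounted_return(rewards, gamma):
    """Expected discounted sum: sum over t of gamma^(t-1) * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def average_reward(rewards):
    """Average reward: (1/N) * sum over t of r_t."""
    return sum(rewards) / len(rewards)

rewards = [1.0] * 1000                      # constant reward r_t = 1
print(finite_horizon_return(rewards))       # 1000.0
print(discounted_return(rewards, 0.9))      # ≈ 10 = r_max / (1 - γ)
print(average_reward(rewards))              # 1.0
```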

Page 5:

Relationship between V_γ(s) and V_N(s)

Consider a finite horizon problem where the horizon is random, i.e.

  V̄(s) = E_N[ E[ Σ_{t=1}^{N} r_t | s_0 = s ] ]

Let’s also assume that the final value for all states is zero.

Let N be geometrically distributed with parameter γ, such that the probability of stopping at the Nth step is

  Pr{N = n} = (1 − γ) γ^{n−1}

Lemma: we’ll show that V̄(s) = V_γ(s), under the assumption that |r_t| < r_max.

Page 6:

Relationship between V_γ(s) and V_N(s) (cont.)

Proof:

  V̄(s) = Σ_{n=1}^{∞} (1 − γ) γ^{n−1} E[ Σ_{t=1}^{n} r_t | s_0 = s ]
        = (1 − γ) Σ_{t=1}^{∞} E[r_t] Σ_{n=t}^{∞} γ^{n−1}      (swapping the sums is valid since |r_t| < r_max)
        = (1 − γ) Σ_{t=1}^{∞} E[r_t] γ^{t−1} / (1 − γ)
        = Σ_{t=1}^{∞} γ^{t−1} E[r_t]
        = V_γ(s)
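The lemma can also be checked numerically: averaging the finite-horizon returns over a geometric stopping time N should reproduce the discounted return. A sketch, assuming an arbitrary bounded reward stream (both series are truncated at a horizon where the geometric tail is negligible):

```python
# Numerical check of the lemma E_N[ V_N(s) ] = V_gamma(s) when
# Pr{N = n} = (1 - gamma) * gamma^(n-1). The reward stream r_1..r_T is a
# hypothetical bounded example; T is a truncation horizon.

gamma = 0.8
T = 400
r = [((-1) ** t) * 2.0 / (t + 1) for t in range(1, T + 1)]   # bounded rewards

# Left side: expected finite-horizon return under the geometric stopping time.
lhs = sum((1 - gamma) * gamma ** (n - 1) * sum(r[:n]) for n in range(1, T + 1))

# Right side: the discounted return V_gamma(s).
rhs = sum(gamma ** (t - 1) * r[t - 1] for t in range(1, T + 1))

print(abs(lhs - rhs) < 1e-6)   # True (up to truncation error)
```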

Page 7:

Outline

Finite Horizon MDPs (cont.)

Dynamic Programming

Page 8:

Example of a finite horizon MDP

Consider the following state diagram:

[State diagram: states S1 and S2, with actions a11, a12, a21, a22 labeled by {reward, probability} pairs: a11 {5, 0.5}, a12 {10, 1}, a22 {−1, 1}.]

Page 9:

Why do we need DP techniques ?

Explicitly solving the Bellman optimality equation is hard; computing the optimal policy solves the RL problem

Relies on the following three assumptions:

We have perfect knowledge of the dynamics of the environment

We have enough computational resources

The Markov property holds

In reality, all three are problematic

e.g. the Backgammon game: the first and last conditions are OK, but computational resources are insufficient (approx. 10^20 states)

In many cases we have to settle for approximate solutions (much more on that later …)

Page 10:

Big Picture: Elementary Solution Methods

During the next few weeks we’ll talk about techniques for solving the RL problem

Dynamic programming – well developed, mathematically, but requires an accurate model of the environment

Monte Carlo methods – do not require a model, but are not suitable for step-by-step incremental computation

Temporal difference learning – methods that do not need a model and are fully incremental

More complex to analyze

Sparked the revival of RL as a pragmatic framework (1988)

The methods also differ in efficiency and speed of convergence to the optimal solution

Page 11:

Dynamic Programming

Dynamic programming is the collection of algorithms that can be used to compute optimal policies given a perfect model of the environment as an MDP

DP constitutes a theoretically optimal methodology

In reality its use is often limited, since DP is computationally expensive

Important to understand as a reference for other methods, which strive to: Do “just as well” as DP

Require less computation

Possibly require less memory

Most schemes will strive to achieve the same effect as DP, without the computational complexity involved

Page 12:

Dynamic Programming (cont.)

We will assume finite MDPs (states and actions)

The agent has knowledge of transition probabilities and expected immediate rewards, i.e.

The key idea of DP (as in RL) is the use of value functions to derive optimal/good policies

We’ll focus on the manner by which values are computed

Reminder: an optimal policy is easy to derive once the optimal value function (or action-value function) is attained

  P^a_{ss'} = Pr{ s_{t+1} = s' | s_t = s, a_t = a }

  R^a_{ss'} = E[ r_{t+1} | s_t = s, a_t = a, s_{t+1} = s' ]
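These quantities are naturally stored as tables indexed by (s, a, s'). A minimal sketch of one possible representation; the two-state MDP and all its numbers are hypothetical, not the lecture's example:

```python
# Tabular MDP dynamics: P[s][a][s2] = Pr{s_{t+1}=s2 | s_t=s, a_t=a},
# R[s][a][s2] = E[r_{t+1} | s_t=s, a_t=a, s_{t+1}=s2].
# The states, actions, and numbers below are made up for illustration.

P = {
    "s1": {"a1": {"s1": 0.5, "s2": 0.5}, "a2": {"s2": 1.0}},
    "s2": {"a1": {"s1": 1.0},            "a2": {"s2": 1.0}},
}
R = {
    "s1": {"a1": {"s1": 5.0, "s2": 5.0}, "a2": {"s2": 10.0}},
    "s2": {"a1": {"s1": 0.0},            "a2": {"s2": -1.0}},
}

def expected_reward(s, a):
    """One-step expected reward: sum over s' of P^a_{ss'} * R^a_{ss'}."""
    return sum(P[s][a][s2] * R[s][a][s2] for s2 in P[s][a])

print(expected_reward("s1", "a1"))   # 5.0
```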

Page 13:

Dynamic Programming (cont.)

Applying the Bellman equation to the optimal value/action-value function yields

DP algorithms are obtained by turning the Bellman equations into update rules. These rules help improve the approximations of the desired value functions. We will discuss two main approaches: policy iteration and value iteration.

  V*(s) = max_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V*(s') ]

  Q*(s,a) = Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ max_{a'} Q*(s',a') ]
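Turning the optimality equation into an update rule gives value iteration, discussed later. A minimal sketch on a hypothetical two-state MDP (states, actions, and rewards are made up; here the fixed point is V*(s2) = 2/(1 − 0.9) = 20 and V*(s1) = 0.9 · 20 = 18):

```python
# Value iteration sketch: repeatedly apply the Bellman optimality backup
# V(s) <- max_a sum_{s'} P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ].
# The two-state MDP below is hypothetical, chosen so the answer is easy to check.

GAMMA = 0.9
P = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 1.0}},
    "s2": {"stay": {"s2": 1.0}},
}
R = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 0.0}},
    "s2": {"stay": {"s2": 2.0}},
}

def bellman_backup(V, s):
    """max over a of sum_{s'} P^a_{ss'} [ R^a_{ss'} + gamma * V(s') ]."""
    return max(
        sum(P[s][a][s2] * (R[s][a][s2] + GAMMA * V[s2]) for s2 in P[s][a])
        for a in P[s]
    )

V = {s: 0.0 for s in P}
for _ in range(500):                 # enough sweeps for convergence here
    V = {s: bellman_backup(V, s) for s in P}

print(round(V["s2"], 3))   # 20.0  (= 2 / (1 - 0.9))
print(round(V["s1"], 3))   # 18.0  (= 0 + 0.9 * 20)
```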

Page 14:

Method #1: Policy Iteration

Technique for obtaining the optimal policy

Comprises two complementary steps

Policy evaluation – updating the value function in view of current policy (which can be sub-optimal)

Policy improvement – updating the policy given the current value function (which can be sub-optimal)

The process converges by “bouncing” between these two steps

  π_1 → V^{π_1}(s) → π_2 → V^{π_2}(s) → … → π*, V*(s)

Page 15:

Policy Evaluation

We’ll consider how to compute the state-value function for an arbitrary policy

Recall that

  V^π(s) = Σ_a π(s,a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

(assumes that policy π is always followed)

The existence of a unique solution is guaranteed as long as

either γ < 1 or eventual termination is guaranteed from all states under the policy

The Bellman equation translates into |S| simultaneous equations with |S| unknowns (the values)

Assuming we have an initial guess, we can use the Bellman equation as an update rule


Page 16:

Policy Evaluation (cont.)

We can write

The sequence {V_k} converges to the correct value function as k → ∞

In each iteration, all state values are updated (a.k.a. a full backup)

A similar method can be applied to state-action (Q(s,a)) functions

An underlying assumption: all states are visited in each iteration. The scheme is computationally heavy

Can be distributed – given sufficient resources (Q: How?)

“In-place” schemes use a single array and update values based on the newest estimates. These also converge to the correct solution

Order in which states are backed up determines rate of convergence

  V_{k+1}(s) = Σ_a π(s,a) Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V_k(s') ]

Page 17:

Iterative Policy Evaluation algorithm

A key consideration is the termination condition

Typical stopping condition for iterative policy evaluation is

  max_{s∈S} | V_{k+1}(s) − V_k(s) | < ε
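The evaluation loop with this max-norm stopping rule can be sketched as follows. The two-state chain, the policy, γ = 0.5, and ε are hypothetical choices; the exact solution is V^π(s2) = 1/(1 − 0.5) = 2 and V^π(s1) = 0.5 · 2 = 1:

```python
# Iterative policy evaluation sketch with stopping rule
# max_s |V_{k+1}(s) - V_k(s)| < epsilon. MDP, policy, and epsilon are
# made-up illustrations.

GAMMA, EPS = 0.5, 1e-10
P = {"s1": {"a": {"s2": 1.0}}, "s2": {"a": {"s2": 1.0}}}
R = {"s1": {"a": {"s2": 0.0}}, "s2": {"a": {"s2": 1.0}}}
pi = {"s1": {"a": 1.0}, "s2": {"a": 1.0}}      # action probabilities pi(s, a)

V = {s: 0.0 for s in P}
while True:
    # Full backup: V_{k+1}(s) = sum_a pi(s,a) sum_{s'} P [ R + gamma V_k(s') ]
    V_new = {
        s: sum(
            pi[s][a] * sum(P[s][a][s2] * (R[s][a][s2] + GAMMA * V[s2])
                           for s2 in P[s][a])
            for a in pi[s]
        )
        for s in P
    }
    delta = max(abs(V_new[s] - V[s]) for s in P)
    V = V_new
    if delta < EPS:
        break

print(round(V["s2"], 6))   # 2.0  (= 1 / (1 - 0.5))
print(round(V["s1"], 6))   # 1.0
```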

Page 18:

Policy Improvement

Policy evaluation deals with finding the value function under a given policy. However, we don’t know whether that policy (and hence the value function) is optimal. Policy improvement addresses this by updating the policy when non-optimal values are reached.

Suppose that for some arbitrary policy, π, we’ve computed the value function (using policy evaluation)

Let policy π' be defined such that in each state s it selects the action a that maximizes the first-step value, i.e.

It can be shown that π' is at least as good as π, and if they are equal they are both the optimal policy.

  π'(s) ≝ argmax_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

Page 19:

Policy Improvement (cont.)

Consider a greedy policy, π', that selects the action that would yield the highest expected single-step return:

  π'(s) = argmax_a Q^π(s,a) = argmax_a Σ_{s'} P^a_{ss'} [ R^a_{ss'} + γ V^π(s') ]

Then, by definition,

  Q^π(s, π'(s)) ≥ V^π(s)

this is the condition for the policy improvement theorem.

The above states that following the new policy one step is enough to prove that it is a better policy, i.e. that

  V^{π'}(s) ≥ V^π(s)
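The greedy improvement step can be sketched on a hypothetical two-state MDP. Under the sub-optimal policy “always stay”, V^π(s1) = 1/(1 − 0.9) = 10 and V^π(s2) = 2/(1 − 0.9) = 20; the improved policy switches s1 to “go”, since Q^π(s1, go) = 0.9 · 20 = 18 > 10:

```python
# Greedy policy improvement sketch:
# pi'(s) = argmax_a sum_{s'} P^a_{ss'} [ R^a_{ss'} + gamma * V^pi(s') ].
# The MDP and the pre-computed V^pi are made-up illustrations.

GAMMA = 0.9
P = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 1.0}},
    "s2": {"stay": {"s2": 1.0}},
}
R = {
    "s1": {"stay": {"s1": 1.0}, "go": {"s2": 0.0}},
    "s2": {"stay": {"s2": 2.0}},
}
# Value of the sub-optimal policy "always stay" (computed by policy evaluation):
V_pi = {"s1": 10.0, "s2": 20.0}

def q_value(s, a):
    """Q^pi(s, a) = sum_{s'} P^a_{ss'} [ R^a_{ss'} + gamma * V^pi(s') ]."""
    return sum(P[s][a][s2] * (R[s][a][s2] + GAMMA * V_pi[s2]) for s2 in P[s][a])

pi_new = {s: max(P[s], key=lambda a: q_value(s, a)) for s in P}
print(pi_new["s1"])   # go   (Q(s1, go) = 18 > Q(s1, stay) = 10)
```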

Page 20:

Proof of the Policy Improvement Theorem

Page 21:

Policy Iteration