
Direct Gradient-Based Reinforcement Learning

Jonathan Baxter
Research School of Information Sciences and Engineering

Australian National University
http://csl.anu.edu.au/~jon

Joint work with Peter Bartlett and Lex Weaver

December 5, 1999


1

Reinforcement Learning

Models an agent interacting with its environment (a minimal code sketch follows this list).

1. Agent receives information about its state.

2. Agent chooses an action or control based on that state information.

3. Agent receives a reward.

4. State is updated.

5. Go to step 1.
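A minimal sketch of this loop in Python, using a hypothetical two-state environment and a purely random agent (all class and function names here are illustrative, not from the talk):

```python
import random

class ToyEnv:
    """Hypothetical two-state environment: action 1 tends to move to the rewarding state."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        p_good = 0.8 if action == 1 else 0.2
        self.state = 1 if random.random() < p_good else 0
        return self.state, float(self.state)   # reward 1 in state 1, else 0

class RandomAgent:
    """Hypothetical agent that picks actions uniformly at random."""
    def act(self, state_info):
        return random.choice([0, 1])

def interaction_loop(env, agent, num_steps=10_000):
    total_reward, obs = 0.0, env.reset()        # 1. agent receives state information
    for _ in range(num_steps):
        action = agent.act(obs)                 # 2. agent chooses an action or control
        obs, reward = env.step(action)          # 3. agent receives a reward; 4. state updates
        total_reward += reward                  # 5. loop back to step 1
    return total_reward / num_steps             # long-term average reward

print(interaction_loop(ToyEnv(), RandomAgent()))
```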


2

Reinforcement Learning

• Goal: Adjust the agent’s behaviour to maximize long-term average reward.

• Key Assumption: state transitions are Markov.


3

Chess

• State: Board position.

• Control: Move pieces.

• State Transitions: My move, followed by opponent’s move.

• Reward: Win, draw, or lose.


4

Call Admission Control

Telecomms carrier selling bandwidth: queueing problem.

• State: Mix of call types on channel.

• Control: Accept calls of certain type.

• State Transitions: Calls finish. New calls arrive.

• Reward: Revenue from calls accepted.


5

Cleaning Robot

• State: Robot and environment (position, velocity, dust levels, . . . ).

• Control: Actions available to robot.

• State Transitions: Depend on dynamics of robot and statistics of environment.

• Reward: Pick up rubbish, don’t damage the furniture.


6

Summary

Previous approaches:

• Dynamic Programming can find optimal policies in small state spaces.

• Approximate value-function based approaches are currently the method of choice in large state spaces.

• Numerous practical successes, BUT

• Policy performance can degrade at each step.


7

Summary

Alternative Approach:

• Policy parameters θ ∈ RK, Performance: η(θ).

• Compute ∇η(θ) and step uphill (gradient ascent).

• Previous algorithms relied on accurate reward baseline or recurrent states.


8

Summary

Our Contribution:

• Approximation ∇βη(θ) to ∇η(θ).

• Parameter β ∈ [0, 1) related to mixing time of the problem.

• Algorithm to approximate ∇βη(θ) via simulation (POMDPG).

• Line search in the presence of noise.


9

Partially Observable Markov DecisionProcesses (POMDPs)

States: S = {1, 2, . . . , n}, Xt ∈ S
Observations: Y = {1, 2, . . . , M}, Yt ∈ Y
Actions or Controls: U = {1, 2, . . . , N}, Ut ∈ U

Observation Process ν: Pr(Yt = y | Xt = i) = νy(i)

Stochastic Policy µ: Pr(Ut = u | Yt = y) = µu(θ, y)

Rewards: r : S → R

Adjustable parameters: θ ∈ RK
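These ingredients can be bundled into a small numerical specification. A sketch, assuming the finite sets are indexed 0, ..., n−1 and everything is stored as NumPy arrays (the shapes, names, and the softmax policy are my own choices, not from the slides):

```python
import numpy as np

class POMDPSpec:
    """p[u, i, j] = Pr(X_{t+1} = j | X_t = i, U_t = u)
       nu[i, y]   = Pr(Y_t = y | X_t = i)
       r[i]       = reward received in state i"""
    def __init__(self, p, nu, r):
        self.p = np.asarray(p, dtype=float)
        self.nu = np.asarray(nu, dtype=float)
        self.r = np.asarray(r, dtype=float)
        self.N, self.n, _ = self.p.shape      # number of actions, number of states
        self.M = self.nu.shape[1]             # number of observations

def mu(theta, y):
    """Illustrative stochastic policy mu_u(theta, y): a softmax over one row of
    parameters per observation, with theta of shape (M, N)."""
    scores = theta[y]
    e = np.exp(scores - scores.max())
    return e / e.sum()
```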


10

POMDP

Transition Probabilities:

Pr(Xt+1 = j|Xt = i, Ut = u) = pij(u)


11

POMDP

[Block diagram: the environment is in state Xt and emits reward r(Xt); the observation process ν produces observation Yt; the agent’s policy µ maps Yt to control Ut, which feeds back into the environment.]


12

The Induced Markov Chain

• Transition Probabilities:

pij(θ) = Pr(Xt+1 = j | Xt = i) = E_{y∼ν(i)} E_{u∼µ(θ,y)} pij(u)

• Transition Matrix:

P (θ) = [pij(θ)]
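In code, the induced chain is just a mixture of the action-conditional transition matrices, weighted by the probability of each action in each state. A sketch using the array conventions of the earlier specification (the shapes are assumptions, not from the slides):

```python
import numpy as np

def induced_chain(p, nu, mu):
    """P(theta)[i, j] = sum_y nu[i, y] * sum_u mu[y, u] * p[u, i, j].

    p:  (N, n, n) action-conditional transition probabilities p_ij(u)
    nu: (n, M)    observation probabilities nu_y(i)
    mu: (M, N)    policy probabilities mu_u(theta, y) for the current theta
    """
    action_probs = nu @ mu                          # (n, N): Pr(U_t = u | X_t = i)
    P = np.einsum('iu,uij->ij', action_probs, p)    # mix the matrices p(u) row by row
    assert np.allclose(P.sum(axis=1), 1.0)          # rows of a stochastic matrix sum to 1
    return P
```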


13

Stationary Distributions

q = [q1 · · · qn]′ ∈ Rn is a distribution over states.

Xt ∼ q ⇒ Xt+1 ∼ q′P(θ)

Definition: A probability distribution π ∈ Rn is a stationary distribution of the Markov chain if

π′P(θ) = π′.


14

Stationary Distributions

Convenient Assumption: For all values of the parameters θ, there is a unique stationary distribution π(θ).

Implies the Markov chain mixes: for all X0, the distribution of Xt approaches π(θ).

Inconvenient Assumption: Number of states n “essentially infinite”.

Meaning: forget about storing a number for each state, or inverting n × n matrices.


15

Measuring Performance

• Average Reward:

η(θ) = Σ_{i=1}^{n} πi(θ) r(i)

• Goal: Find θ maximizing η(θ).
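For a chain small enough to write down, π(θ) and η(θ) can be computed directly. A sketch using power iteration (the two-state example at the end is mine, purely for checking):

```python
import numpy as np

def stationary_distribution(P, iters=10_000):
    """Approximate pi with pi' P = pi' by repeatedly applying P (power iteration)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def average_reward(P, r):
    """eta(theta) = sum_i pi_i(theta) r(i) for the chain induced by the current theta."""
    return stationary_distribution(P) @ np.asarray(r)

# Tiny check on a 2-state chain with rewards (0, 1):
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(average_reward(P, [0.0, 1.0]))   # stationary pi = (5/6, 1/6), so eta = 1/6
```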


16

Summary

• Partially Observable Markov Decision Processes.

• Previous approaches: value function methods.

• Direct gradient ascent.

• Approximating the gradient of the average reward.

• Estimating the approximate gradient: POMDPG.

• Line search in the presence of noise.

• Experimental results.


17

Approximate Value Functions

• Discount Factor β ∈ [0, 1). Discounted value of state i under policy µ:

J^µ_β(i) = Eµ [ r(X0) + βr(X1) + β²r(X2) + · · · | X0 = i ].

• Idea: Choose a restricted class of value functions J̃(θ, i), θ ∈ RK, i ∈ S (e.g. a neural network with parameters θ). A small computational sketch follows.
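For a small chain the discounted values satisfy Jβ = r + βP(θ)Jβ and can be solved exactly; the slide's point is that for “essentially infinite” state spaces this is impossible, which is why J̃(θ, ·) approximates it instead. A sketch of the exact computation (the example matrix is mine):

```python
import numpy as np

def discounted_values(P, r, beta):
    """Solve J_beta = r + beta * P @ J_beta, i.e. J_beta = (I - beta P)^{-1} r."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - beta * P, np.asarray(r, dtype=float))

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(discounted_values(P, [0.0, 1.0], beta=0.9))
```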


18

Policy Iteration

Iterate:

• Given policy µ, find approximation J̃(θ, ·) to J^µ_β.

• Many algorithms for finding θ: TD(λ), Q-learning, Bellman residuals, . . . .

• Simulation and non-simulation based.

• Generate new policy µ′ using J̃(θ, ·):

µ′_{u∗}(θ, i) = 1 ⇔ u∗ = argmax_{u∈U} Σ_{j∈S} pij(u) J̃(θ, j)
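The improvement step, written out for a finite MDP with action-conditional transition matrices p[u, i, j] (the array shapes are my assumption):

```python
import numpy as np

def greedy_policy(p, J_tilde):
    """mu'_{u*}(theta, i) = 1 where u*(i) = argmax_u sum_j p[u, i, j] * J_tilde[j]."""
    q = np.einsum('uij,j->iu', p, np.asarray(J_tilde))  # q[i, u] = sum_j p_ij(u) J~(theta, j)
    return q.argmax(axis=1)                             # deterministic action chosen in each state
```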


19

Approximate Value Functions

• The Good:

  – Backgammon (world champion), chess (International Master), job-shop scheduling, elevator control, . . .

  – Notion of “backing-up” state values can be efficient.

• The Bad:

  – Unless |J̃(θ, i) − J^µ_β(i)| = 0 for all states i, the new policy µ′ can be a lot worse than the old one.

  – “Essentially infinite” state spaces mean we are likely to have very bad approximation error for some states.


20

Summary

• Partially Observable Markov Decision Processes.

• Previous approaches: value function methods.

• Direct gradient ascent.

• Approximating the gradient of the average reward.

• Estimating the approximate gradient: POMDPG.

• Line search in the presence of noise.

• Experimental results.


21

Direct Gradient Ascent

• Desideratum: Adjusting the agent’s parameters θ should improve its performance.

• Implies . . .

• Adjust the parameters in the direction of the gradient of the average reward:

θ := θ + γ∇η(θ)


22

Direct Gradient Ascent: Main Results

1. Algorithm to estimate approximate gradient (∇βη) from a sample path.

2. Accuracy of approximation depends on parameter of the algorithm (β); bias/variance trade-off.

3. Line search algorithm using only gradient estimates.


23

Related Work

Machine Learning: Williams’ REINFORCE algorithm (1992).

• Gradient ascent algorithm for a restricted class of MDPs.
• Requires accurate reward baseline, i.i.d. transitions.

Kimura et al., 1998: extension to infinite horizon.

Discrete Event Systems: Algorithms that rely on recurrent states. MDPs: (Cao and Chen, 1997); POMDPs: (Marbach and Tsitsiklis, 1998).

Control Theory: Direct adaptive control using derivatives (Hjalmarsson, Gunnarsson, Gevers, 1994), (Kammer, Bitmead, Bartlett, 1997), (De Bruyne, Anderson, Gevers, Linard, 1997).


24

Summary

• Partially Observable Markov Decision Processes.

• Previous approaches: value function methods.

• Direct gradient ascent.

• Approximating the gradient of the average reward.

• Estimating the approximate gradient: POMDPG.

• Line search in the presence of noise.

• Experimental results.


25

Approximating the gradient

Recall: For β ∈ [0, 1), the discounted value of state i is

Jβ(i) = E[ r(X0) + βr(X1) + β²r(X2) + · · · | X0 = i ].

Vector notation: Jβ = (Jβ(1), . . . , Jβ(n)).

Theorem: For all β ∈ [0, 1),

∇η(θ) = βπ′(θ)∇P(θ)Jβ + (1 − β)∇π′(θ)Jβ
       = β∇βη(θ) + (1 − β)∇π′(θ)Jβ,

where the first term β∇βη(θ) is what we estimate, and the second term (1 − β)∇π′(θ)Jβ → 0 as β → 1.


26

Mixing Times of Markov Chains

• ℓ1-distance: If p, q are distributions on the states,

‖p − q‖1 := Σ_{i=1}^{n} |p(i) − q(i)|

• d(t)-distance: Let pt(i) be the distribution over states at time t, starting from state i.

d(t) := max_{i,j} ‖pt(i) − pt(j)‖1

• Unique stationary distribution ⇒ d(t) → 0.
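Both quantities are easy to compute for a small chain by powering the transition matrix. A sketch (the example matrix and the cut-off t_max are mine):

```python
import numpy as np

def d(P, t):
    """d(t) = max_{i,j} || p_t(i) - p_t(j) ||_1, where p_t(i) is the state
    distribution after t steps starting from state i (row i of P^t)."""
    Pt = np.linalg.matrix_power(P, t)
    n = Pt.shape[0]
    return max(np.abs(Pt[i] - Pt[j]).sum() for i in range(n) for j in range(n))

def mixing_time(P, t_max=10_000):
    """tau* = min { t : d(t) <= 1/e } (see the next slide)."""
    for t in range(1, t_max + 1):
        if d(P, t) <= np.exp(-1):
            return t
    return None

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(mixing_time(P))
```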


27

Approximating the gradient

Mixing time: τ∗ := min{ t : d(t) ≤ e^{−1} }

Theorem: For all β ∈ [0, 1), θ ∈ RK,

‖∇η(θ) − ∇βη(θ)‖ ≤ constant × τ∗(θ)(1 − β).

That is, if 1/(1 − β) is large compared with the mixing time τ∗(θ), ∇βη(θ) accurately approximates the gradient direction ∇η(θ).


28

Summary

• Partially Observable Markov Decision Processes.

• Previous approaches: value function methods.

• Direct gradient ascent.

• Approximating the gradient of the average reward.

• Estimating the approximate gradient: POMDPG.

• Line search in the presence of noise.

• Experimental results.


29

Estimating ∇βη(θ): POMDPG

Given: parameterized policies µu(θ, y), β ∈ [0, 1):

1. Set z0 = ∆0 = 0 ∈ RK.

2. for each observation yt, control ut, reward r(it+1) do

3.   Set zt+1 = βzt + ∇µut(θ, yt) / µut(θ, yt)   (eligibility trace)

4.   Set ∆t+1 = ∆t + (1/(t+1)) [ r(it+1) zt+1 − ∆t ]

5. end for
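A sketch of this estimator in Python. The POMDP is represented by hypothetical arrays p[u, i, j], nu[i, y], r[i] as in the earlier sketches, and the policy is an illustrative per-observation softmax with parameters theta of shape (M, N); only the two update rules (the eligibility trace and the running average) come from the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(theta, y):
    """Illustrative softmax policy: mu_u(theta, y) for all u."""
    e = np.exp(theta[y] - theta[y].max())
    return e / e.sum()

def grad_log_mu(theta, y, u):
    """Gradient of log mu_u(theta, y) = (grad mu_u) / mu_u for the softmax policy."""
    g = np.zeros_like(theta)
    g[y] = -mu(theta, y)
    g[y, u] += 1.0
    return g

def pomdpg(theta, p, nu, r, beta, T):
    """POMDPG: simulate one sample path of length T and return Delta_T,
    an estimate of grad_beta eta(theta)."""
    n = p.shape[1]
    x = rng.integers(n)                                 # arbitrary start state
    z = np.zeros_like(theta)                            # eligibility trace z_t
    delta = np.zeros_like(theta)                        # running average Delta_t
    for t in range(T):
        y = rng.choice(nu.shape[1], p=nu[x])            # observation y_t ~ nu(x_t)
        u = rng.choice(theta.shape[1], p=mu(theta, y))  # control u_t ~ mu(theta, y_t)
        x = rng.choice(n, p=p[u, x])                    # next state x_{t+1}
        z = beta * z + grad_log_mu(theta, y, u)         # z_{t+1} = beta z_t + grad mu / mu
        delta += (r[x] * z - delta) / (t + 1)           # Delta_{t+1} update
    return delta
```

With β close to 1 the trace decays slowly, which is exactly the variance cost discussed on the bias/variance slides that follow.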


30

Convergence of POMDPG

Theorem: For all β ∈ [0, 1), θ ∈ RK,

∆t → ∇βη(θ) as t → ∞.


31

Explanation of POMDPG

Algorithm computes:

∆T = (1/T) Σ_{t=0}^{T−1} [ ∇µut(θ, yt) / µut(θ, yt) ] ( r(it+1) + βr(it+2) + · · · + β^{T−t−1} r(iT) )

where the bracketed sum of rewards is an estimate of the discounted value ‘due to’ action ut.

• ∇µut(θ, yt) is the direction to increase the probability of the action ut.

• It is weighted by something involving subsequent rewards, and

• divided by µut: ensures “popular” actions don’t dominate.


32

POMDPG: Bias/Variance trade-off

∆t → ∇βη(θ) as t → ∞, and ∇βη(θ) → ∇η(θ) as β → 1.

• Bias/Variance Tradeoff: β ≈ 1 gives:

  – Accurate gradient approximation (∇βη close to ∇η), but

  – Large variance in estimates ∆t of ∇βη for small t.


33

POMDPG: Bias/Variance trade-off

∆t → ∇βη(θ) as t → ∞, and ∇βη(θ) → ∇η(θ) as β → 1.

• Recall: 1/(1 − β) ≈ τ∗(θ) (mixing time).

  – Small mixing time ⇒ small β ⇒ accurate gradient estimate from short POMDPG simulation.

  – Large mixing time ⇒ large β ⇒ accurate gradient estimate only from long POMDPG simulation.

• Conjecture: Mixing time is an intrinsic constraint on any simulation-based algorithm.


34

Example: 3-state Markov Chain

Transition Probabilities:

P(u1) =
  [ 0    4/5  1/5 ]
  [ 4/5  0    1/5 ]
  [ 0    4/5  1/5 ]

P(u2) =
  [ 0    1/5  4/5 ]
  [ 1/5  0    4/5 ]
  [ 0    1/5  4/5 ]

Observations (φ1(i), φ2(i)):  State 1: (2/3, 1/3)   State 2: (1/3, 2/3)   State 3: (5/18, 5/18)

Parameterized Policy: θ ∈ R2

µu1(θ, i) = e^{θ1φ1(i) + θ2φ2(i)} / (1 + e^{θ1φ1(i) + θ2φ2(i)}),   µu2(θ, i) = 1 − µu1(θ, i)

Rewards: (r(1), r(2), r(3)) = (0, 0, 1)
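A sketch of this example in Python, treating the feature pair (φ1(i), φ2(i)) as the observation in state i and computing the average reward of the induced chain by power iteration (the test values of θ at the end are mine):

```python
import numpy as np

# Transition probabilities under the two controls
P_u1 = np.array([[0, 4/5, 1/5],
                 [4/5, 0, 1/5],
                 [0, 4/5, 1/5]])
P_u2 = np.array([[0, 1/5, 4/5],
                 [1/5, 0, 4/5],
                 [0, 1/5, 4/5]])

# Features (phi1(i), phi2(i)) observed in each state
phi = np.array([[2/3, 1/3],
                [1/3, 2/3],
                [5/18, 5/18]])

r = np.array([0.0, 0.0, 1.0])                 # rewards

def mu_u1(theta, i):
    """mu_{u1}(theta, i) = sigmoid(theta . phi(i)); mu_{u2} = 1 - mu_{u1}."""
    s = theta @ phi[i]
    return np.exp(s) / (1.0 + np.exp(s))

def induced_P(theta):
    """Markov chain obtained by following the parameterized policy."""
    a = np.array([mu_u1(theta, i) for i in range(3)])
    return a[:, None] * P_u1 + (1 - a)[:, None] * P_u2

def eta(theta, iters=5_000):
    """Average reward: stationary distribution of the induced chain dotted with r."""
    P, pi = induced_P(theta), np.full(3, 1/3)
    for _ in range(iters):
        pi = pi @ P
    return pi @ r

print(eta(np.zeros(2)), eta(np.array([5.0, -5.0])))
```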


35

Bias/Variance Trade-off

Relative norm difference = ‖∇η − ∆T‖ / ‖∇η‖.

[Plots: relative norm difference of the POMDPG estimate versus T (1 to 10^7, log scale), for τ = 1 and τ = 20.]


36

Bias/Variance Trade-off

[Plot: relative norm difference versus T (1 to 10^7), both axes on log scales, for τ = 1, τ = 5, and τ = 20.]


37

Summary

• Partially Observable Markov Decision Processes.

• Previous approaches: value function methods.

• Direct gradient ascent.

• Approximating the gradient of the average reward.

• Estimating the approximate gradient: POMDPG.

• Line search in the presence of noise.

• Experimental results.


38

Line-search in the presence of noise

• Want to find the maximum of η(θ) in the direction ∇βη(θ).

• Usual method: find 3 points θi = θ + γi∇βη(θ), i = 1, 2, 3, with γ1 < γ2 < γ3 such that η(θ2) > η(θ1), η(θ2) > η(θ3), and interpolate.

• Problem: η(θ) is only available by simulation (e.g. ηT(θ)), so it is noisy:

lim_{θ1→θ2} var[ sign( ηT(θ2) − ηT(θ1) ) ] = 1


39

Line-search in the presence of noise

• Solution: Use gradients to bracket (POMDPG): ∇βη(θ1) · ∇βη(θ) > 0,  ∇βη(θ2) · ∇βη(θ) < 0  (sketched below).

• Variance independent of ‖θ2 − θ1‖.
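One way such a gradient-bracketing search might look, given any (possibly noisy) estimator grad_est of ∇βη: step out along the search direction until the estimated gradient points back towards the start, then bisect. The slides specify only the bracketing criterion; the doubling and bisection schedule below is my own illustrative choice.

```python
import numpy as np

def bracketing_line_search(grad_est, theta, step0=1.0, max_doublings=20, bisections=10):
    """Choose a step along d = grad_est(theta) by bracketing the sign change of the
    directional derivative grad_est(theta + s*d) . d, then bisecting the bracket."""
    d = grad_est(theta)
    lo, hi, s = 0.0, None, step0
    for _ in range(max_doublings):              # step out until the gradient points back
        if grad_est(theta + s * d) @ d > 0:
            lo, s = s, 2 * s
        else:
            hi = s
            break
    if hi is None:                              # never bracketed: take the largest step tried
        return theta + lo * d
    for _ in range(bisections):                 # shrink the bracket [lo, hi]
        mid = 0.5 * (lo + hi)
        if grad_est(theta + mid * d) @ d > 0:
            lo = mid
        else:
            hi = mid
    return theta + 0.5 * (lo + hi) * d
```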



40

Example: Call Admission Control

Telecommunications carrier selling bandwidth: queueing problem. From (Marbach and Tsitsiklis, 1998).

• Three call types, with differing arrival rates (Poisson), bandwidth requirements, rewards, holding times (exponential).

• State = observation = mix of calls.

• Policy = (squashed) linear controller.


41

Direct Reinforcement Learning: Call Admission Control

[Plot: final average reward of the conjugate-gradient (CONJGRAD) controller versus total queue iterations (10^3 to 10^5, log scale), compared against the class-optimal policy, for τ = 1.]


42

Direct Reinforcement Learning: Puck World

• Puck moving around mountainous terrain.

• Aim is to get out of a valley and onto a plateau.

• Reward = 0 everywhere except on the plateau (= 100).

• Observation = relative location, absolute location, velocity.

• Neural-network controller.

• Insufficient thrust to climb directly out of the valley, must learn to “oscillate”.


43

Direct Reinforcement Learning: Puck World

[Plot: average reward versus iterations (0 to 10^8); y-axis from 0 to 80.]


44

Direct Reinforcement Learning

• Philosophy:

  – Adjusting policy should improve performance.

  – View average reward as function of policy parameters: η(θ).

  – For suitably smooth policies, ∇η(θ) exists.

  – Compute ∇η(θ) and step uphill.


45

Direct Reinforcement Learning

• Main results:

  – Approximation ∇βη(θ) to ∇η(θ).

  – Algorithm to accurately estimate ∇βη from a single sample path (POMDPG).

  – Accuracy of approximation depends on parameter of the algorithm (β ∈ [0, 1)); bias/variance trade-off.

  – 1/(1 − β) relates to mixing time of underlying Markov chain.

  – Line search using only gradient estimates.

  – Many successful applications.


46

Advertisement

• Papers available from http://csl.anu.edu.au.

• Two research positions available in the Machine Learning Group at the Australian National University.