
Solutions to Exercises in Reinforcement Learning by Richard S. Sutton and Andrew G. Barto

Tianlin Liu, Jacobs University Bremen, [email protected]

Contents

1 The Reinforcement Learning Problem

2 Multi-arm Bandits

3 Finite Markov Decision Processes

4 Dynamic Programming

5 Monte Carlo Methods

6 Temporal-Difference Learning

7 Multi-step Bootstrapping

8 Planning and Learning with Tabular Methods

9 On-Policy Prediction with Approximation

1 The Reinforcement Learning Problem

Exercise 1.1. Self-Play. Suppose, instead of playing against a random opponent, the reinforcement learning algorithm described above played against itself, with both sides learning. What do you think would happen in this case? Would it learn a different policy for selecting moves?


Solution. I would expect that after several games, both sides learn a set of moves, and it might be the case that they just keep playing that set of moves over and over. Since no new training data comes in, the value function may stop updating.

Exercise 1.2. Symmetries. Many tic-tac-toe positions appear different but are really the same because of symmetries. How might we amend the learning process described above to take advantage of this? In what ways would this change improve the learning process? Now think again. Suppose the opponent did not take advantage of symmetries. In that case, should we? Is it true, then, that symmetrically equivalent positions should necessarily have the same value?

Solution. Exploiting symmetry can reduce the number of states we need to characterize the game, so we do not have to keep track of as many value numbers. I think we should take advantage of symmetries whether or not our opponent does. I would expect symmetrically equivalent positions to have the same value, because the probability of winning or losing is the same for symmetric states.

Exercise 1.3. Greedy Play. Suppose the reinforcement learning player was greedy, that is, it always played the move that brought it to the position that it rated the best. Might it learn to play better, or worse, than a nongreedy player? What problems might occur?

Solution. I would expect the agent to learn to play worse than a non-greedy player. If the agent always plays greedily, it chooses "good" moves according to its own experience. Since its experience might be partial or limited, the greedy choice might exclude some better moves, which could be discovered if there were exploratory moves.

Exercise 1.4. Learning from Exploration. Suppose learning updates occurred after all moves, including exploratory moves. If the step-size parameter is appropriately reduced over time (but not the tendency to explore), then the state values would converge to a set of probabilities. What are the two sets of probabilities computed when we do, and when we do not, learn from exploratory moves? Assuming that we do continue to make exploratory moves, which set of probabilities might be better to learn? Which would result in more wins?


Solution. The set of probabilities when we do not learn from exploratory moves is the value we can achieve by performing (our assumed) optimal moves. The set of probabilities when we do learn from exploratory moves is the value we can achieve by performing (our assumed) optimal moves corrupted by some random moves. I would expect the former to be better, and I would expect it to result in more wins. We are interested in learning a policy which tells us what action to execute in a given state. The exploratory moves, however, are not of interest in themselves: they are just random moves, unrelated to the state the agent stands in.

Exercise 1.5. Other Improvements. Can you think of other ways to improve the reinforcement learning player? Can you think of any better way to solve the tic-tac-toe problem as posed?

Solution. For this 3-by-3 game, the state space is actually very small. It is possible to specify, by brute force, the optimal move for each and every board state.

2 Multi-arm Bandits

Exercise 2.1. In the comparison shown in Figure 2.2, which method will perform best in the long run in terms of cumulative reward and cumulative probability of selecting the best action? How much better will it be? Express your answer quantitatively.

Solution. I would expect that the ε = 0.01 method will perform best in the long run in terms of cumulative reward as well as cumulative probability of selecting the best action.

In terms of cumulative reward: In the long run, the ε = 0.1 method will achieve a reward of (1.55 + δ), where δ is noise with mean 0 and variance 1, with probability 0.9; the ε = 0.01 method will achieve a reward of (1.55 + δ) with probability 0.99. Hence the ε = 0.1 and ε = 0.01 methods achieve expected rewards per step of 0.9 × 1.55 = 1.395 and 0.99 × 1.55 = 1.5345, respectively.

In terms of cumulative probability of selecting the best action: In the long run, the ε = 0.1 method will select the optimal action with probability 0.9; the ε = 0.01 method will select the optimal action with probability 0.99.

Exercise 2.2. If the step-size parameters, αn, are not constant, then the estimate Qn is a weighted average of previously received rewards with a weighting different from that given by (2.6). What is the weighting on each prior reward for the general case, analogous to (2.6), in terms of the sequence of step-size parameters?

Solution.

\[
\begin{aligned}
Q_{n+1} &:= Q_n + \alpha_n\,[R_n - Q_n] \\
&= \alpha_n R_n + (1-\alpha_n)\,Q_n \\
&= \alpha_n R_n + (1-\alpha_n)\big[\alpha_{n-1} R_{n-1} + (1-\alpha_{n-1})\,Q_{n-1}\big] \\
&= \prod_{i=1}^{n}(1-\alpha_i)\,Q_1 + \sum_{i=1}^{n-1}\alpha_i\prod_{j=i+1}^{n}(1-\alpha_j)\,R_i + \alpha_n R_n .
\end{aligned}
\]
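As a quick numerical sanity check (not part of the original solution; the step sizes, rewards, and initial estimate below are arbitrary), the recursive update and the closed-form weighting can be compared directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
alphas = rng.uniform(0.05, 0.5, size=n)   # arbitrary step sizes alpha_1..alpha_n
rewards = rng.normal(size=n)              # arbitrary rewards R_1..R_n
Q1 = 0.7                                  # arbitrary initial estimate

# Recursive update: Q_{k+1} = Q_k + alpha_k (R_k - Q_k)
Q = Q1
for alpha, R in zip(alphas, rewards):
    Q += alpha * (R - Q)

# Closed-form weighting derived above (empty products count as 1)
closed = np.prod(1 - alphas) * Q1 + sum(
    alphas[i] * np.prod(1 - alphas[i + 1:]) * rewards[i] for i in range(n)
)

print(Q, closed)   # the two agree up to floating-point error
```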

Exercise 2.3. (programming) Design and conduct an experiment to demonstrate the difficulties that sample-average methods have for nonstationary problems. Use a modified version of the 10-armed testbed in which all the q∗(a) start out equal and then take independent random walks. Prepare plots like Figure 2.2 for an action-value method using sample averages, incrementally computed by α = 1/n, and another action-value method using a constant step-size parameter, α = 0.1. Use ε = 0.1 and, if necessary, runs longer than 1000 steps.

Solution. Please see exer2.3.py.
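For reference, a minimal sketch of such an experiment follows (this is not the author's exer2.3.py; the number of runs, the random-walk standard deviation of 0.01, and the plotting details are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def run(steps=10_000, runs=100, k=10, eps=0.1, const_alpha=0.1, seed=0):
    """Compare sample-average vs. constant-step-size action-value estimates
    on a nonstationary k-armed bandit whose q*(a) take random walks."""
    rng = np.random.default_rng(seed)
    avg_reward = {"sample average": np.zeros(steps), "alpha = 0.1": np.zeros(steps)}
    for method in avg_reward:
        for _ in range(runs):
            q_true = np.zeros(k)          # all q*(a) start out equal
            Q = np.zeros(k)
            N = np.zeros(k)
            for t in range(steps):
                a = rng.integers(k) if rng.random() < eps else int(np.argmax(Q))
                reward = rng.normal(q_true[a], 1.0)
                N[a] += 1
                alpha = 1.0 / N[a] if method == "sample average" else const_alpha
                Q[a] += alpha * (reward - Q[a])
                q_true += rng.normal(0.0, 0.01, size=k)   # independent random walks
                avg_reward[method][t] += reward
        avg_reward[method] /= runs
    return avg_reward

curves = run()
for label, curve in curves.items():
    plt.plot(curve, label=label)
plt.xlabel("Steps"); plt.ylabel("Average reward"); plt.legend(); plt.show()
```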

Exercise 2.4. The results shown in Figure 2.3 should be quite reliable because they are averages over 2000 individual, randomly chosen 10-armed bandit tasks. Why, then, are there oscillations and spikes in the early part of the curve for the optimistic method? In other words, what might make this method perform particularly better or worse, on average, on particular early steps?

Solution. The gap between the different rewards is much larger in the early steps, which makes the agent's performance particularly better or worse, on average, on particular early steps.

Exercise 2.5. In Figure 2.4 the UCB algorithm shows a distinct spike in performance on the 11th step. Why is this? Note that for your answer to be fully satisfactory it must explain both why the reward increases on the 11th step and why it decreases on the subsequent steps. Hint: if c = 1, then the spike is much less prominent.


Solution. • The initial plateau: this is due to the fact that actions are essentially selected randomly in the early steps. A large c suppresses greedy action selection, which keeps the rewards low in the early steps.

• The spike: once a high reward is reached by some of these almost randomly selected actions, the average reward for that step increases significantly.

• The decrease on subsequent steps: in the subsequent steps, the large c again suppresses the agent from keeping selecting the high-reward action, and therefore the reward is lowered. But because the Q value of the high-reward action has already been updated, there is still a reasonable chance of selecting that action; hence the reward is not as low as before the spike.

3 Finite Markov Decision Processes

Exercise 3.1. Devise three example tasks of your own that fit into the reinforcement learning framework, identifying for each its states, actions, and rewards. Make the three examples as different from each other as possible. The framework is abstract and flexible and can be applied in many different ways. Stretch its limits in some way in at least one of your examples.

Solution. • One example is soccer-playing robots. Consider the simple case of a single robot on the court. The states of the robot might be the grid position that it stands in. The actions might be moving north/south/west/east, dribbling the ball, and shooting the ball. The rewards might be 1 for each score and a small negative reward for each dribble-move.

• The second example is still the soccer-playing robots. The actions and rewards are the same as in the previous example. What makes this example different is that the states of the robot are not physical states, but a vector of probabilities, which characterizes its prediction of possible future moves. This kind of state is used in the OOM (Observable Operator Model) and PSR (Predictive State Representation) literatures.

• The third example is solving jigsaw puzzles. The states might be the arrangements of all pieces. The actions might be (i) exchanging the positions of two pieces and (ii) doing nothing. Suppose that if two pieces are put next to each other correctly, they are locked together automatically by some oracle. The rewards are (i) a small negative reward, say −0.01, for each position exchange, (ii) a small positive reward, say 0.1, for each "lock", and (iii) a large positive reward, say 5, for completely solving the jigsaw puzzle.


Exercise 3.2. Is the reinforcement learning framework adequate to usefully represent all goal-directed learning tasks? Can you think of any clear exceptions?

Solution. One instance where the RL framework may fail is when the reward hypothesis (see Section 3.2 of the book) is violated. In particular, the reward hypothesis fails to hold if we need a reward vector instead of a reward scalar. Each entry of the reward vector might represent related but different goals that we want to achieve. In this case, to maximize the reward we have to trade off between different goals, but the RL framework only considers a single scalar goal.

Exercise 3.3. Consider the problem of driving. You could define the actions in terms of the accelerator, steering wheel, and brake, that is, where your body meets the machine. Or you could define them farther out, say, where the rubber meets the road, considering your actions to be tire torques. Or you could define them farther in, say, where your brain meets your body, the actions being muscle twitches to control your limbs. Or you could go to a really high level and say that your actions are your choices of where to drive. What is the right level, the right place to draw the line between agent and environment? On what basis is one location of the line to be preferred over another? Is there any fundamental reason for preferring one location over another, or is it a free choice?

Solution. I think the location depends on the goal of the given task. For instance, if the task aims to train a human to make better decisions about braking or accelerating given the current traffic on the road, then actions at the level of "brain meets body" seem appropriate. If the task aims to train a human to make a better traveling plan, actions at the level of "choices of where to drive" seem appropriate. If the task aims to assess and optimize the quality of the rubber, actions where "the rubber meets the road" seem appropriate.

Exercise 3.4. Suppose you treated pole-balancing as an episodic task but also used discounting, with all rewards zero except for −1 upon failure. What then would the return be at each time? How does this return differ from that in the discounted, continuing formulation of this task?

Solution. The return will be −γ^K as in the continuing formulation, where K is the number of time steps before failure within the episode. In the continuing formulation it is helpful to have a discounting factor: it prevents the return from blowing up to infinity. However, in the episodic formulation we do not need to worry about an infinite return, so using a discounting factor in this particular episodic setting seems counterproductive: since our goal is to make the balancing last as long as possible, we should make the reward of each additional time step equally valuable, or even make each additional time step that the balancing lasts more valuable than the one before it; we certainly do not want to discount the value of an additional time step, as that discourages longer balancing times.

Exercise 3.5. Imagine that you are designing a robot to run a maze. You decide to give it a reward of +1 for escaping from the maze and a reward of zero at all other times. The task seems to break down naturally into episodes–the successive runs through the maze–so you decide to treat it as an episodic task, where the goal is to maximize expected total reward (3.1). After running the learning agent for a while, you find that it is showing no improvement in escaping from the maze. What is going wrong? Have you effectively communicated to the agent what you want it to achieve?

Solution. The communication is not effective. We want to train the agent to escape from the maze as quickly as possible, but the agent understands it as: as long as it eventually escapes from the maze, it does not matter how long it takes. To effectively communicate what we want, we need to add a small negative reward for each step the agent takes.

Exercise 3.6. Broken Vision System. Imagine that you are a vision system. When you are first turned on for the day, an image floods into your camera. You can see lots of things, but not all things. You can't see objects that are occluded, and of course you can't see objects that are behind you. After seeing that first scene, do you have access to the Markov state of the environment? Suppose your camera was broken that day and you received no images at all, all day. Would you have access to the Markov state then?

Solution. After seeing that first scene, I do not have access to the Markov state of the environment. A Markov state includes everything needed to predict the immediate future, but this is certainly not the case here: I cannot see the objects behind me, and an object behind me, say a car about to run into the objects in front of me, may have an impact on the scene I see in the immediate future.

Suppose my camera was broken for the whole day; then I certainly do not have access to the Markov state. The information I have is simply not enough to predict the future.

Exercise 3.7. What is the Bellman equation for action values, that is, for qπ? It must give the action value qπ(s, a) in terms of the action values, qπ(s′, a′), of possible successors to the state–action pair (s, a). As a hint, the backup diagram corresponding to this equation is given in Figure 3.4 (right). Show the sequence of equations analogous to (3.12), but for action values.


Solution. One direct corollary of the conditional expectation theorem (see http://www.stat.yale.edu/~pollard/Courses/600.spring08/Handouts/elem.conditioning.pdf) is that, informally, we have
\[
\mathbb{E}[X \mid \text{info}] = \sum_i \mathbb{E}[X \mid \text{info}, F_i]\,\mathbb{P}(F_i \mid \text{info}).
\]

Before finding out what qπ(s′, a′) should be, let us first take a closer look at how vπ(s) is derived in the text.

\[
\begin{aligned}
v_\pi(s) &:= \mathbb{E}_\pi[G_t \mid S_t = s] \\
&= \mathbb{E}_\pi\Big[\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} \,\Big|\, S_t = s\Big] \\
&= \mathbb{E}_\pi\Big[R_{t+1} + \gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2} \,\Big|\, S_t = s\Big] \\
&= \underbrace{\mathbb{E}_\pi[R_{t+1} \mid S_t = s]}_{\text{Part 1}}
 + \underbrace{\mathbb{E}_\pi\Big[\gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2} \,\Big|\, S_t = s\Big]}_{\text{Part 2}} .
\end{aligned}
\]

\[
\begin{aligned}
\text{Part 1} &:= \mathbb{E}_\pi[R_{t+1} \mid S_t = s] \\
&= \sum_a \mathbb{E}_\pi[R_{t+1} \mid S_t = s, A_t = a]\,\underbrace{\mathbb{P}(A_t = a \mid S_t = s)}_{\pi(a \mid s)} \\
&= \sum_a \pi(a \mid s) \sum_r r\,\mathbb{P}(R_{t+1} = r \mid S_t = s, A_t = a) \\
&= \sum_a \pi(a \mid s) \sum_{s', r} r\, p(s', r \mid s, a) .
\end{aligned}
\]

\[
\begin{aligned}
\text{Part 2} &:= \mathbb{E}_\pi\Big[\underbrace{\gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2}}_{:=\,B} \,\Big|\, S_t = s\Big] \\
&= \sum_a \sum_{s'} \underbrace{\mathbb{E}_\pi\big[B \mid S_t = s, A_t = a, S_{t+1} = s'\big]}_{\gamma\, v_\pi(s')}\;\pi(a \mid s)\,\mathbb{P}(S_{t+1} = s' \mid S_t = s, A_t = a) \\
&= \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\,\gamma\, v_\pi(s') .
\end{aligned}
\]

Adding the two parts,
\[
v_\pi(s) = \text{Part 1} + \text{Part 2} = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\big[r + \gamma v_\pi(s')\big], \quad \forall s \in \mathcal{S},
\]
as desired.

Now, using exactly the same idea,

\[
\begin{aligned}
q_\pi(s, a) &:= \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}\Big[\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+1} \,\Big|\, S_t = s, A_t = a\Big] \\
&= \mathbb{E}\Big[R_{t+1} + \gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2} \,\Big|\, S_t = s, A_t = a\Big] \\
&= \underbrace{\mathbb{E}[R_{t+1} \mid S_t = s, A_t = a]}_{\text{Part 1}}
 + \underbrace{\mathbb{E}\Big[\gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2} \,\Big|\, S_t = s, A_t = a\Big]}_{\text{Part 2}} .
\end{aligned}
\]

\[
\text{Part 1} := \mathbb{E}[R_{t+1} \mid S_t = s, A_t = a] = \sum_{s', r} r\, p(s', r \mid s, a) .
\]

\[
\begin{aligned}
\text{Part 2} &:= \mathbb{E}\Big[\underbrace{\gamma\textstyle\sum_{k=0}^{\infty}\gamma^k R_{t+k+2}}_{:=\,B} \,\Big|\, S_t = s, A_t = a\Big] \\
&= \sum_{s'} \mathbb{E}\big[B \mid S_t = s, A_t = a, S_{t+1} = s'\big]\,\mathbb{P}(S_{t+1} = s' \mid S_t = s, A_t = a) \\
&= \sum_{s'} \sum_{a'} \underbrace{\mathbb{E}\big[B \mid S_t = s, A_t = a, S_{t+1} = s', A_{t+1} = a'\big]}_{\gamma\, q_\pi(s', a')}\; p(s' \mid s, a)\,\underbrace{p(a' \mid s')}_{\pi(a' \mid s')} \\
&= \sum_{s'} p(s' \mid s, a) \sum_{a'} \gamma\, q_\pi(s', a')\,\pi(a' \mid s') \\
&= \sum_{s', r} p(s', r \mid s, a) \sum_{a'} \gamma\, q_\pi(s', a')\,\pi(a' \mid s') .
\end{aligned}
\]

\[
q_\pi(s, a) = \text{Part 1} + \text{Part 2} = \sum_{s', r} p(s', r \mid s, a)\Big[r + \gamma \sum_{a'} q_\pi(s', a')\,\pi(a' \mid s')\Big] .
\]


Exercise 3.8. The Bellman equation (3.12) must hold for each state for the value function vπ shown in Figure 3.5 (right). As an example, show numerically that this equation holds for the center state, valued at +0.7, with respect to its four neighboring states, valued at +2.3, +0.4, −0.4, and +0.7. (These numbers are accurate only to one decimal place.)

Solution. It is easy to read off the values needed in (3.12) from Figure 3.5: π(a|s) = 0.25 for all a, γ = 0.9, r = 0, and the transitions are deterministic, so p(s′|s, a) = 1 for the successor state reached by each action. Consequently,

0.25 × 0.9 × (2.3 + 0.4 − 0.4 + 0.7) = 0.675 ≈ 0.7.

Exercise 3.9. In the gridworld example, rewards are positive for goals, negative for running into the edge of the world, and zero the rest of the time. Are the signs of these rewards important, or only the intervals between them? Prove, using (3.2), that adding a constant c to all the rewards adds a constant, vc, to the values of all states, and thus does not affect the relative values of any states under any policies. What is vc in terms of c and γ?

Solution. The signs matter in episodic tasks, as we will see in Exercise 3.10. However, for continuing tasks the signs do not matter; to see this, it is enough to prove the claim in this question.

Recall that
\[
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots
\]
and
\[
v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s].
\]
Let C be the constant we add to each reward, and let G′ and v′π denote the return and value function after this change. Then
\[
G'_t = \sum_{k=0}^{\infty}\gamma^k\big(R_{t+k+1} + C\big)
= \underbrace{\sum_{k=0}^{\infty}\gamma^k R_{t+k+1}}_{G_t} + \sum_{k=0}^{\infty}\gamma^k C
\]
and
\[
v'_\pi(s) = \mathbb{E}_\pi\Big[G_t + \sum_{k=0}^{\infty}\gamma^k C \,\Big|\, S_t = s\Big]
= v_\pi(s) + \underbrace{\frac{C}{1-\gamma}}_{:=\,v_c} .
\]
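A quick numerical illustration of this identity (my own addition; the reward sequence and horizon below are arbitrary, with a long horizon standing in for the infinite sum):

```python
import numpy as np

gamma, C, T = 0.9, 2.0, 2000          # long horizon approximates the infinite sum
rng = np.random.default_rng(0)
rewards = rng.normal(size=T)          # an arbitrary reward sequence

discounts = gamma ** np.arange(T)
G = np.dot(discounts, rewards)                # return with original rewards
G_shifted = np.dot(discounts, rewards + C)    # return with rewards shifted by C
print(G_shifted - G, C / (1 - gamma))         # both are (approximately) 20.0
```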


Exercise 3.10. Now consider adding a constant c to all the rewards in an episodic task, such as maze running. Would this have any effect, or would it leave the task unchanged as in the continuing task above? Why or why not? Give an example.

Solution. This will change the task. Consider the task of escaping from a maze. To accomplish the task, it is reasonable to assign a big positive reward, say 10, for successfully escaping from the maze, and a small negative reward, say −0.1, for each step the agent moves. Adding a constant 1 to both 10 and −0.1 creates a problem: the agent will never escape from the maze, since it wants to stay inside as long as it can, earning more positive reward by hovering around.

Exercise 3.11. The value of a state depends on the values of the actions possible in that state and on how likely each action is to be taken under the current policy. We can think of this in terms of a small backup diagram rooted at the state and considering each possible action:

Give the equation corresponding to this intuition and diagram for the value at the root node, vπ(s), in terms of the value at the expected leaf node, qπ(s, a), given St = s. This equation should include an expectation conditioned on following the policy, π. Then give a second equation in which the expected value is written out explicitly in terms of π(a|s) such that no expected value notation appears in the equation.

Solution. First equation:
\[
v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s]
= \sum_a \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]\,\mathbb{P}(A_t = a \mid S_t = s).
\]

The second equation follows from the first one:
\[
v_\pi(s) = \sum_a q_\pi(s, a)\,\pi(a \mid s).
\]


Exercise 3.12. The value of an action, qπ(s, a), depends on the expected next reward and the expected sum of the remaining rewards. Again we can think of this in terms of a small backup diagram, this one rooted at an action (state–action pair) and branching to the possible next states:

Give the equation corresponding to this intuition and diagram for the action value, qπ(s, a), in terms of the expected next reward, Rt+1, and the expected next state value, vπ(St+1), given that St = s and At = a. This equation should include an expectation but not one conditioned on following the policy. Then give a second equation, writing out the expected value explicitly in terms of p(s′, r|s, a) defined by (3.6), such that no expected value notation appears in the equation.

Solution. First equation:
\[
q_\pi(s, a) = \mathbb{E}[G_t \mid S_t = s, A_t = a]
= \sum_{s'} \mathbb{E}[G_t \mid S_t = s, A_t = a, S_{t+1} = s']\,\mathbb{P}(S_{t+1} = s' \mid S_t = s, A_t = a).
\]
The second equation follows from the first one:
\[
\begin{aligned}
q_\pi(s, a) &= \sum_{s'} \mathbb{E}[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a, S_{t+1} = s']\,\mathbb{P}(S_{t+1} = s' \mid S_t = s, A_t = a) \\
&= \sum_{s', r} \Big[r + \gamma\underbrace{\mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s']}_{v_\pi(s')}\Big]\, p(s', r \mid s, a) \\
&= \sum_{s', r} \big[r + \gamma v_\pi(s')\big]\, p(s', r \mid s, a) .
\end{aligned}
\]


Exercise 3.13. Draw or describe the optimal state-value function for the golf example.

Solution. I would expect the contour graph of the optimal state-value function to look similar to the lower figure of Figure 3.6, except that the contour circle for −1 is larger: it should cover the whole green.

The reasoning is as follows. Suppose we stand anywhere outside of the green; then the optimal policy is to hit the ball onto the green using the driver, and then from the green hit the ball into the hole using the putter. Hence, if the location is outside of the green, the contour graph is exactly the same as that of q∗(s, driver). Now suppose we stand inside the green. Unlike the case of q∗(s, driver), we do not commit ourselves to first using the driver; we are free to use the putter, and one stroke suffices to put the ball into the hole. Hence, inside the green, the value is −1.

Exercise 3.14. Draw or describe the contours of the optimal action-value function for putting,q∗(s, putter), for the golf example.

Solution. Suppose we stand on the green and commit ourselves to first hitting the ball with the putter; then one stroke is enough, so the value for the whole green is −1. Now suppose we stand outside of the green, and in particular suppose we stand in the sand. Then, committing ourselves to first hitting the ball with the putter, the first hit accomplishes nothing, but subsequently we can use a driver and then another putter. So the value for the sand is −3. Suppose we are even further out; then the value will be no lower than that of v_putter. In this case, we commit to using the putter for the first hit, and then we can use as many drivers as needed to reach the green, plus another putter to put the ball into the hole.

Exercise 3.15. Give the Bellman equation for q∗ for the recycling robot.

Solution. Write h and l for the high and low battery states and s, w, r for the actions search, wait, and recharge. Substituting the recycling-robot dynamics p(h | h, s) = α, p(l | h, s) = 1 − α, p(l | l, s) = β, p(h | l, s) = 1 − β, p(h | h, w) = p(l | l, w) = p(h | l, r) = 1, and the expected rewards r(h, s, ·) = r(l, s, l) = r_search, r(·, w, ·) = r_wait, r(l, s, h) = −3, and r(l, r, h) = 0, the Bellman optimality equations for q∗ are

\[
\begin{aligned}
q_*(h, s) &= \alpha\big[r_{\text{search}} + \gamma\max_{a'} q_*(h, a')\big] + (1-\alpha)\big[r_{\text{search}} + \gamma\max_{a'} q_*(l, a')\big] \\
&= r_{\text{search}} + \gamma\big[\alpha\max_{a'} q_*(h, a') + (1-\alpha)\max_{a'} q_*(l, a')\big], \\[4pt]
q_*(h, w) &= r_{\text{wait}} + \gamma\max_{a'} q_*(h, a'), \\[4pt]
q_*(l, s) &= (1-\beta)\big[-3 + \gamma\max_{a'} q_*(h, a')\big] + \beta\big[r_{\text{search}} + \gamma\max_{a'} q_*(l, a')\big], \\[4pt]
q_*(l, w) &= r_{\text{wait}} + \gamma\max_{a'} q_*(l, a'), \\[4pt]
q_*(l, r) &= \gamma\max_{a'} q_*(h, a') .
\end{aligned}
\]

Exercise 3.16. Figure 3.8 gives the optimal value of the best state of the gridworld as 24.4, to one decimal place. Use your knowledge of the optimal policy and (3.2) to express this value symbolically, and then to compute it to three decimal places.

Solution. By definition,
\[
v_*(s) = \max_\pi v_\pi(s) = \max_\pi \mathbb{E}[G_t \mid S_t = s].
\]
It is easy to see that the optimal policy from the best state is: do anything → go up → go up → go up → go up. Under this policy π∗,
\[
v_*(s) = \mathbb{E}_{\pi_*}[G_t \mid S_t = s]
= \sum_{k=0}^{\infty}\gamma^k\,\mathbb{E}_{\pi_*}[R_{t+k+1} \mid S_t = s]
= \sum_{j=0}^{\infty} 10\,\gamma^{5j}
= \frac{10}{1-\gamma^5}
= \frac{10}{1-0.9^5}
\approx 24.419 .
\]
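A one-line numerical check of this value (simply evaluating the closed-form expression above):

```python
gamma = 0.9
print(10 / (1 - gamma ** 5))   # 24.4194..., i.e. 24.419 to three decimal places
```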

4 Dynamic Programming

Exercise 4.1. In Example 4.1, if π is the equiprobable random policy, what is qπ(11, down)? What is qπ(7, down)?

Solution. Suppose we commit ourselves to going down from position 11. Then we get a reward of −1 deterministically and the game is over (the episode ends). Hence qπ(11, down) = −1.

Now suppose we commit ourselves to going down from position 7. With γ = 1,
\[
q_\pi(7, \mathrm{down}) = \mathbb{E}_\pi[G_t \mid S_t = 7, A_t = \mathrm{down}]
= R_{t+1} + \gamma\,\underbrace{\mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = 11]}_{v_\pi(11)\,=\,-14}
= -1 - 14 = -15 .
\]


Exercise 4.2. In Example 4.1, suppose a new state 15 is added to the gridworld just below state 13, and its actions, left, up, right, and down, take the agent to states 12, 13, 14, and 15, respectively. Assume that the transitions from the original states are unchanged. What, then, is vπ(15) for the equiprobable random policy? Now suppose the dynamics of state 13 are also changed, such that action down from state 13 takes the agent to the new state 15. What is vπ(15) for the equiprobable random policy in this case?

Solution. Assume the transitions from the original states are unchanged. With π(a|s) = 0.25 for every action, deterministic transitions, r = −1, and γ = 1,
\[
v_\pi(15) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\big[r + \gamma v_\pi(s')\big]
= 0.25\,\big({-1} - 22 - 1 - 20 - 1 - 14 - 1 + v_\pi(15)\big).
\]
Solving the above equation gives vπ(15) = −20.

Now suppose the dynamics of state 13 are also changed. We run iterative policy evaluation. The initialization is natural: we let V0(s) = vπ(s), where vπ is the value function for the unchanged dynamics, as shown in the k = ∞ case of Figure 4.1. In particular, V0(15) = −20, the value we just derived for vπ(15).

We use in-place ("immediate overwriting") updates. First we update the value of state 13:
\[
V_1(13) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\big[r + \gamma V_0(s')\big]
= 0.25\,(-1 - 20 - 1 - 22 - 1 - 14 - 1 - 20) = -20.
\]
Then we immediately update V1(15) using the just-updated V1(13) (this is the "overwriting"):
\[
V_1(15) = 0.25\,(-1 - 22 - 1 - 20 - 1 - 14 - 1 - 20) = -20.
\]
As we proceed we see that nothing changes; the initialization is already a fixed point, so iterative policy evaluation terminates after one sweep. Hence vπ(15) = −20 in this case as well.
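A quick check of the first computation, solving the single self-consistency equation for vπ(15) (the neighbor values −22, −20, and −14 are read off Figure 4.1):

```python
# v15 = 0.25 * ((-1 - 22) + (-1 - 20) + (-1 - 14) + (-1 + v15))
# => v15 - 0.25 * v15 = 0.25 * (-60)  =>  v15 = -20
v15 = 0.25 * (-60) / (1 - 0.25)
print(v15)  # -20.0
```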


Exercise 4.3. What are the equations analogous to (4.3), (4.4), and (4.5) for the action-value function qπ and its successive approximation by a sequence of functions q0, q1, q2, · · · ?

Solution. The equations analogous to (4.3) and (4.4) are
\[
\begin{aligned}
q_\pi(s, a) &= \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma v_\pi(S_{t+1}) \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi\Big[R_{t+1} + \gamma\sum_{a'}\pi(a' \mid S_{t+1})\,q_\pi(S_{t+1}, a') \,\Big|\, S_t = s, A_t = a\Big] \\
&= \sum_{s', r} p(s', r \mid s, a)\Big[r + \gamma\sum_{a'} q_\pi(s', a')\,\pi(a' \mid s')\Big] .
\end{aligned}
\]
The equation analogous to (4.5) is
\[
q_{k+1}(s, a) = \sum_{s', r} p(s', r \mid s, a)\Big[r + \gamma\sum_{a'} q_k(s', a')\,\pi(a' \mid s')\Big] .
\]

Exercise 4.4. In some undiscounted episodic tasks there may be policies for which eventual termination is not guaranteed. For example, in the grid problem above it is possible to go back and forth between two states forever. In a task that is otherwise perfectly sensible, vπ(s) may be negative infinity for some policies and states, in which case the algorithm for iterative policy evaluation given in Figure 4.1 will not terminate. As a purely practical matter, how might we amend this algorithm to assure termination even in this case? Assume that eventual termination is guaranteed under the optimal policy.

Solution. We can restrict ourselves to considering only policies that are ε-soft, as in Exercise 4.7. In this way we avoid getting trapped in fixed loops, and every step our agent takes has some chance of freely exploring its world.

Exercise 4.5. (programming) Write a program for policy iteration and re-solve Jack's car rental problem with the following changes. One of Jack's employees at the first location rides a bus home each night and lives near the second location. She is happy to shuttle one car to the second location for free. Each additional car still costs $2, as do all cars moved in the other direction. In addition, Jack has limited parking space at each location. If more than 10 cars are kept overnight at a location (after any moving of cars), then an additional cost of $4 must be incurred to use a second parking lot (independent of how many cars are kept there). These sorts of nonlinearities and arbitrary dynamics often occur in real problems and cannot easily be handled by optimization methods other than dynamic programming. To check your program, first replicate the results given for the original problem. If your computer is too slow for the full problem, cut all the numbers of cars in half.

Solution. Please see exer4.5.py.

Exercise 4.6. How would policy iteration be defined for action values? Give a complete algorithm for computing q∗, analogous to that on page 87 for computing v∗. Please pay special attention to this exercise, because the ideas involved will be used throughout the rest of the book.

Solution.

Algorithm 1: Policy iteration (using Q(s, a))

1. Initialization:
   Q(s, a) ← 0 for all s ∈ S+ and a ∈ A(s); π(s) ← an arbitrary action in A(s) for all s ∈ S.

2. Policy evaluation:
   repeat
       ∆ ← 0
       foreach (s, a) with s ∈ S and a ∈ A(s) do
           q ← Q(s, a)
           Q(s, a) ← Σ_{s′,r} p(s′, r | s, a) [ r + γ Σ_{a′} π(a′|s′) Q(s′, a′) ]
           ∆ ← max(∆, |q − Q(s, a)|)
   until ∆ < θ (a small positive number)

3. Policy improvement:
   policy-stable ← true
   foreach s ∈ S do
       old-action ← π(s)
       π(s) ← argmax_a Q(s, a)
       if old-action ≠ π(s), then policy-stable ← false
   If policy-stable, then stop and return Q ≈ q∗ and π ≈ π∗; else go to 2.
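As an illustration of the algorithm above, here is a minimal Python sketch (mine, not part of the original solutions). It assumes a tabular MDP stored as P[s][a] = a list of (probability, next_state, reward, done) tuples; that data layout and the function name are my own choices, not anything from the book:

```python
import numpy as np

def policy_iteration_q(P, gamma=0.9, theta=1e-8):
    """Policy iteration on action values Q(s, a) for a tabular MDP.

    P[s][a] is a list of (prob, next_state, reward, done) tuples."""
    n_states = len(P)
    n_actions = len(P[0])
    Q = np.zeros((n_states, n_actions))
    pi = np.zeros(n_states, dtype=int)          # arbitrary initial deterministic policy

    while True:
        # Policy evaluation: iterate the Bellman equation for q_pi to convergence
        while True:
            delta = 0.0
            for s in range(n_states):
                for a in range(n_actions):
                    q_old = Q[s, a]
                    Q[s, a] = sum(
                        p * (r + gamma * (0.0 if done else Q[s2, pi[s2]]))
                        for p, s2, r, done in P[s][a]
                    )
                    delta = max(delta, abs(q_old - Q[s, a]))
            if delta < theta:
                break
        # Policy improvement: act greedily with respect to Q
        new_pi = Q.argmax(axis=1)
        if np.array_equal(new_pi, pi):
            return Q, pi                         # policy stable: Q ~ q*, pi ~ pi*
        pi = new_pi
```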

Exercise 4.7. Suppose you are restricted to considering only policies that are ε-soft, meaning that the probability of selecting each action in each state, s, is at least ε/|A(s)|. Describe qualitatively the changes that would be required in each of the steps 3, 2, and 1, in that order, of the policy iteration algorithm for v∗ (page 87).

Solution. For Step 3 (policy improvement), instead of dictating the single action specified by the argmax, we give probability 1 − ε to the argmax action and allow a uniform probability of ε/(|A(s)| − 1) for each of the other actions.

For Steps 2 and 1, nothing needs to be changed.

Exercise 4.8. Why does the optimal policy for the gambler's problem have such a curious form? In particular, for capital of 50 it bets it all on one flip, but for capital of 51 it does not. Why is this a good policy?

Solution.

Exercise 4.9. (programming) Implement value iteration for the gambler's problem and solve it for ph = 0.25 and ph = 0.55. In programming, you may find it convenient to introduce two dummy states corresponding to termination with capital of 0 and 100, giving them values of 0 and 1 respectively. Show your results graphically, as in Figure 4.3. Are your results stable as θ → 0?

Solution. Please see exer4.9.py.

Exercise 4.10. What is the analog of the value iteration backup (4.10) for action values, qk+1(s, a)?

Solution. We simply rewrite (4.2) as an update equation:
\[
q_{k+1}(s, a) = \sum_{s', r} p(s', r \mid s, a)\Big[r + \gamma\max_{a'} q_k(s', a')\Big] .
\]


5 Monte Carlo Methods

Exercise 5.1. Consider the diagrams on the right in Figure 5.1. Why does the estimated value function jump up for the last two rows in the rear? Why does it drop off for the whole last row on the left? Why are the frontmost values higher in the upper diagrams than in the lower?

Solution. (i) The estimated value function jumps up for the last two rows in the rear because sticking at a sum of 20 or 21 has a high chance of winning. (ii) It drops off for the whole last row on the left because if the dealer shows an ace, the dealer has at least one ace, which gives the dealer a reasonable chance to win or draw the game, compared to the case where the dealer has no ace. (iii) The frontmost values are higher in the upper diagrams than in the lower because a usable ace gives the flexibility of counting it as 1 or 11, which is an advantage. Consider the case where the player's sum is 12. If there is no usable ace and the player draws a face card, the player goes bust. But if the player has a usable ace, an additional face card is fine. The usable ace provides the advantage seen in the upper diagrams.

Exercise 5.2. What is the backup diagram for Monte Carlo estimation of qπ?

Solution. I would expect the backup diagram to look like: filled circle (action) → open circle (state) → filled circle (action) → open circle (state) → ⋯ → terminal state.

In fact, the diagram is given in Figure 7.3 of the book: it is the second diagram on the right, the ∞-step Sarsa, a.k.a. Monte Carlo, backup.

Exercise 5.3. What is the equation analogous to (5.5) for action values Q(s, a) instead of state values V (s), again given returns generated using µ?

Solution. Similar to the case of V(s), we write
\[
Q(s, a) = \frac{\sum_{t\in\mathcal{T}(s,a)} \rho_t^{T(t)}\, G_t}{|\mathcal{T}(s, a)|} .
\]
In the above equation, for the every-visit method, T(s, a) denotes the set of all time steps at which action a is performed in state s; for the first-visit method, T(s, a) denotes only the first time that a is performed in s within each episode. {G_t}, for t ∈ T(s, a), are the returns that pertain to the state–action pair (s, a), and the importance-sampling ratios are defined as
\[
\rho_t^{T} = \frac{p(S_{t+1}\mid S_t, A_t)\prod_{k=t+1}^{T-1}\pi(A_k\mid S_k)\,p(S_{k+1}\mid S_k, A_k)}
{p(S_{t+1}\mid S_t, A_t)\prod_{k=t+1}^{T-1}\mu(A_k\mid S_k)\,p(S_{k+1}\mid S_k, A_k)}
= \frac{\prod_{k=t+1}^{T-1}\pi(A_k\mid S_k)}{\prod_{k=t+1}^{T-1}\mu(A_k\mid S_k)} .
\]

Exercise 5.4. In learning curves such as those shown in Figure 5.4 error generally decreases with training, as indeed happened for the ordinary importance-sampling method. But for the weighted importance-sampling method error first increased and then decreased. Why do you think this happened?

Solution. This happens because weighted importance sampling is biased, though the bias converges to 0 asymptotically.

Exercise 5.5. The results with Example 5.5 and shown in Figure 5.5 used a first-visit MC method. Suppose that instead an every-visit MC method was used on the same problem. Would the variance of the estimator still be infinite? Why or why not?

Solution. Following the argument in the textbook about the relationship between the expectation and the variance, we only need to show that the expected square of the importance-sampling-scaled return is infinite. That is to say, we need to show
\[
\mathbb{E}\Bigg[\bigg(\frac{\sum_{t\in\mathcal{T}(s)} \rho_t^{T(t)}\, G_t}{|\mathcal{T}(s)|}\bigg)^{\!2}\Bigg] = \infty ,
\]
where T(s) is the index set of all time steps at which state s is visited.

Note that the quantity used in the textbook for the first-visit MC method is a special case of the above: it takes T = {0} and |T| = 1, i.e., just the first visit to s within the episode. That is to say, the textbook has already shown
\[
\mathbb{E}\big[(\rho_0 G_0)^2\big] = \infty .
\]
But since returns are non-negative in this example,
\[
\mathbb{E}\Bigg[\bigg(\frac{\sum_{t\in\mathcal{T}(s)} \rho_t^{T(t)}\, G_t}{|\mathcal{T}(s)|}\bigg)^{\!2}\Bigg]
= \frac{1}{|\mathcal{T}(s)|^2}\,\mathbb{E}\Bigg[\bigg(\sum_{t\in\mathcal{T}(s)} \rho_t^{T(t)}\, G_t\bigg)^{\!2}\Bigg]
\ge \frac{1}{|\mathcal{T}(s)|^2}\,\mathbb{E}\big[(\rho_0 G_0)^2\big] = \infty .
\]

Exercise 5.6. Modify the algorithm for first-visit MC policy evaluation (Section 5.1) to use the incremental implementation for sample averages described in Section 2.3.

Solution.

Algorithm 2: First-visit MC policy evaluation (using the incremental implementation)

Input: π, the policy to be evaluated
Initialize: V(s) ← 0 and N(s) ← 0, for all s ∈ S

Repeat forever:
    Generate an episode using π
    foreach state s appearing in the episode do
        G ← the return following the first occurrence of s
        N(s) ← N(s) + 1
        V(s) ← V(s) + (1/N(s)) [G − V(s)]
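Here is a minimal Python sketch of this incremental first-visit MC evaluation (my own illustration, not from the document). It assumes a generate_episode() callback that returns a list of (state, reward-received-on-leaving-that-state) pairs; the callback and the data layout are assumptions made for the sketch:

```python
from collections import defaultdict

def first_visit_mc(generate_episode, num_episodes=10_000, gamma=1.0):
    """First-visit Monte Carlo prediction with incremental sample averages."""
    V = defaultdict(float)
    N = defaultdict(int)
    for _ in range(num_episodes):
        episode = generate_episode()           # list of (state, reward) pairs
        # Remember the first visit to each state
        first_visit = {}
        for t, (s, _) in enumerate(episode):
            first_visit.setdefault(s, t)
        # Compute returns backwards: G_t = R_{t+1} + gamma * G_{t+1}
        G = 0.0
        returns_at = [0.0] * len(episode)
        for t in reversed(range(len(episode))):
            G = gamma * G + episode[t][1]
            returns_at[t] = G
        # Incremental average update at each first visit
        for s, t in first_visit.items():
            N[s] += 1
            V[s] += (returns_at[t] - V[s]) / N[s]
    return V
```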

Exercise 5.7. Derive the weighted-average update rule (5.7) from (5.6). Follow the pattern of the derivation of the unweighted rule (2.3).

Solution.
\[
\begin{aligned}
V_{n+1} &= \frac{\sum_{k=1}^{n} W_k G_k}{\sum_{k=1}^{n} W_k} \\
&= \frac{1}{\sum_{k=1}^{n} W_k}\Bigg[W_n G_n + \sum_{k=1}^{n-1} W_k G_k\Bigg] \\
&= \frac{1}{\sum_{k=1}^{n} W_k}\Bigg[W_n G_n + \bigg(\sum_{k=1}^{n-1} W_k\bigg)\underbrace{\frac{\sum_{k=1}^{n-1} W_k G_k}{\sum_{k=1}^{n-1} W_k}}_{V_n}\Bigg] \\
&= V_n\bigg(1 - \frac{W_n}{\sum_{k=1}^{n} W_k}\bigg) + \frac{W_n G_n}{\sum_{k=1}^{n} W_k} \\
&= V_n + \frac{W_n}{C_n}\,[G_n - V_n],
\end{aligned}
\]
where \(C_n = \sum_{k=1}^{n} W_k\).
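A quick numerical check that the incremental rule reproduces the weighted average (the weights and returns below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=20)          # arbitrary returns G_1..G_n
W = rng.uniform(0.1, 2.0, 20)    # arbitrary positive weights W_1..W_n

# Incremental rule: V <- V + (W_n / C_n) (G_n - V), with C_n the running weight sum
V, C = 0.0, 0.0
for g, w in zip(G, W):
    C += w
    V += (w / C) * (g - V)

print(V, np.sum(W * G) / np.sum(W))   # identical up to floating-point error
```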

Exercise 5.8. Racetrack (programming). Consider driving a race car around a turn like those shown in Figure 5.6. You want to go as fast as possible, but not so fast as to run off the track. In our simplified racetrack, the car is at one of a discrete set of grid positions, the cells in the diagram. The velocity is also discrete, a number of grid cells moved horizontally and vertically per time step. The actions are increments to the velocity components. Each may be changed by +1, −1, or 0 in one step, for a total of nine actions. Both velocity components are restricted to be nonnegative and less than 5, and they cannot both be zero except at the starting line. Each episode begins in one of the randomly selected start states with both velocity components zero and ends when the car crosses the finish line. The rewards are −1 for each step until the car crosses the finish line. If the car hits the track boundary, it is moved back to a random position on the starting line, both velocity components are reduced to zero, and the episode continues. Before updating the car's location at each time step, check to see if the projected path of the car intersects the track boundary. If it intersects the finish line, the episode ends; if it intersects anywhere else, the car is considered to have hit the track boundary and is sent back to the starting line. To make the task more challenging, with probability 0.1 at each time step the velocity increments are both zero, independently of the intended increments. Apply a Monte Carlo control method to this task to compute the optimal policy from each starting state. Exhibit several trajectories following the optimal policy (but turn the noise off for these trajectories).

Solution. Please see exer5.8.py.


*Exercise 5.9. Modify the algorithm for off-policy Monte Carlo control (page 119) to use the idea of the truncated weighted-average estimator (5.9). Note that you will first need to convert this equation to action values.

Solution.

6 Temporal-Difference Learning

Exercise 6.1. This is an exercise to help develop your intuition about why TD methods are often more efficient than Monte Carlo methods. Consider the driving home example and how it is addressed by TD and Monte Carlo methods. Can you imagine a scenario in which a TD update would be better on average than a Monte Carlo update? Give an example scenario, a description of past experience and a current state, in which you would expect the TD update to be better. Here's a hint: Suppose you have lots of experience driving home from work. Then you move to a new building and a new parking lot (but you still enter the highway at the same place). Now you are starting to learn predictions for the new building. Can you see why TD updates are likely to be much better, at least initially, in this case? Might the same sort of thing happen in the original task?

Solution. As soon as I enter the highway, my estimates of the remaining time should be very accurate, as a result of my previous experience driving home from work. In this case it is reasonable to update my estimate of the remaining time as soon as I enter the highway (which is what the TD method does), instead of only updating it once I arrive at home (which is what the MC method does). For this reason, TD updates are likely to be much better, at least initially. I assume the same sort of thing would happen in the original task, provided one can build in prior knowledge in the form of some relatively accurate estimates.

Exercise 6.2. From Figure 6.2 (left) it appears that the first episode results in a change in only V (A). What does this tell you about what happened on the first episode? Why was only the estimate for this one state changed? By exactly how much was it changed?


Solution. (i) This tells me that in the first episode, the random walk terminated on the left; the last transition was from A into the left terminal state.
(ii) For every other transition in that episode, the reward is 0 and both states still have value 0.5, so the TD error is zero and no estimate changes:
\[
V(s) \leftarrow V(s) + \alpha\big[\underbrace{R}_{0} + \underbrace{V(s')}_{0.5} - \underbrace{V(s)}_{0.5}\big] = V(s) = 0.5 .
\]
(iii) Only the final transition, from A into the left terminal state (whose value is 0), produces a nonzero TD error, so only V(A) changes:
\[
V(A) \leftarrow \underbrace{V(A)}_{0.5} + \underbrace{\alpha}_{0.1}\big[\underbrace{R}_{0} + \underbrace{V(\text{left terminal})}_{0} - \underbrace{V(A)}_{0.5}\big] = 0.45 ,
\]
i.e., V(A) is changed by exactly −0.05.

Exercise 6.3. The specific results shown in Figure 6.2 (right) are dependent on the value of the step-size parameter, α. Do you think the conclusions about which algorithm is better would be affected if a wider range of α values were used? Is there a different, fixed value of α at which either algorithm would have performed significantly better than shown? Why or why not?

Solution. I would assume MC and TD perform equally badly if α is chosen to be too large, say greater than 1.

I am not sure what a "fixed value of α" exactly means here. If "fixed value" means "constant value", then further experiments are needed. However, if a fixed value means a value with a closed-form algebraic expression, we could set α = 1/N(s), where N(s) is the number of visits to state s, and update each V(s) using its own α. Since this problem has a stationary environment, it is reasonable to assume α = 1/N(s) would perform well.

*Exercise 6.4. In Figure 6.2 (right) the RMS error of the TD method seems to go down and then up again, particularly at high α's. What could have caused this? Do you think this always occurs, or might it be a function of how the approximate value function was initialized?

Solution.


Exercise 6.5. Above we stated that the true values for the random walk task are 1/6, 2/6, 3/6, 4/6, and 5/6, for states A through E. Describe at least two different ways that these could have been computed. Which would you guess we actually used? Why?

Solution.

• Method 1: The true value of each state is the probability of terminating on the right if starting from that state. It is obvious that x_C = 0.5, as ending up on the left or the right is equally likely for a simple random walk started in the middle.

Now imagine one stands in D; then one goes to C or to E with equal probability. Once at C, the probability of ending up on the right is x_C; once at E, it is x_E. Hence
\[
x_D = \mathbb{P}(\text{go to } C \text{ from } D)\,x_C + \mathbb{P}(\text{go to } E \text{ from } D)\,x_E = \frac{x_C + x_E}{2} .
\]
In the same fashion, one gets
\[
x_A = \frac{x_{\text{LeftBox}} + x_B}{2}, \qquad
x_B = \frac{x_A + x_C}{2}, \qquad
x_E = \frac{x_D + x_{\text{RightBox}}}{2} .
\]
Note that x_LeftBox = 0 and x_RightBox = 1. Solving the above linear equations gives the desired results (a numerical check is sketched after this solution).

• Method 2: Use iterative policy evaluation by brute force. Since iterative policy evaluation is guaranteed to converge, the algorithm yields the desired values.

Method 1 is much easier, and therefore I assume the authors used the first one.
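The numerical check referred to above (my own addition): solving Method 1's linear system directly.

```python
import numpy as np

# Unknowns x = (x_A, x_B, x_C, x_D, x_E); terminal values are 0 (left) and 1 (right).
# Each interior state satisfies x_s = 0.5 * (x_left_neighbor + x_right_neighbor).
A = np.array([
    [1, -0.5, 0, 0, 0],      # x_A = 0.5*(0 + x_B)
    [-0.5, 1, -0.5, 0, 0],   # x_B = 0.5*(x_A + x_C)
    [0, -0.5, 1, -0.5, 0],   # x_C = 0.5*(x_B + x_D)
    [0, 0, -0.5, 1, -0.5],   # x_D = 0.5*(x_C + x_E)
    [0, 0, 0, -0.5, 1],      # x_E = 0.5*(x_D + 1)
])
b = np.array([0, 0, 0, 0, 0.5])
print(np.linalg.solve(A, b))   # [1/6, 2/6, 3/6, 4/6, 5/6]
```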

*Exercise 6.6. Design an off-policy version of the TD(0) update that can be used with arbitrary target policy π and covering behavior policy µ, using at each step t the importance sampling ratio ρ_t^{t+1} (5.3).


Solution.

Exercise 6.7. Re-solve the windy gridworld task assuming eight possible actions, including the diagonal moves, rather than the usual four. How much better can you do with the extra actions? Can you do even better by including a ninth action that causes no movement at all other than that caused by the wind?

Solution. Please see exer6.7.py.

Exercise 6.8. Re-solve the windy gridworld task with King's moves, assuming that the effect of the wind, if there is any, is stochastic, sometimes varying by 1 from the mean values given for each column. That is, a third of the time you move exactly according to these values, as in the previous exercise, but also a third of the time you move one cell above that, and another third of the time you move one cell below that. For example, if you are one cell to the right of the goal and you move left, then one-third of the time you move one cell above the goal, one-third of the time you move two cells above the goal, and one-third of the time you move to the goal.

Solution. Please see exer6.8.py.

Exercise 6.9. Why is Q-learning considered an off-policy control method?

Solution. This is because, as stated in the textbook, "the learned action-value function, Q, directly approximates Q∗, the optimal action-value function, independent of the policy being followed."

More specifically, to update Q at each iteration of Q-learning, given S, A, R, and S′, we pretend that we will execute a′ = argmax_a Q(S′, a) at the next step. In other words, we pretend that we will take the absolutely greedy action. It is only a pretense: at the next step we actually execute the ε-greedy action. In this sense, we improve a policy that is different from the one used to generate the data; hence it is an off-policy method.

This solution is partially owed to https://stats.stackexchange.com/questions/184657/difference-between-off-policy-and-on-policy-learning


*Exercise 6.10. What are the update equations for Double Expected Sarsa with an ε-greedy target policy?

Solution.

Exercise 6.11. Describe how the task of Jack's Car Rental (Example 4.2) could be reformulated in terms of afterstates. Why, in terms of this specific task, would such a reformulation be likely to speed convergence?

Solution. The current definition of a state in Jack's Car Rental problem is a tuple in which each entry indicates the number of cars at a location at the end of the day.

We can define the afterstates as a tuple in which each entry is the number of cars at a location at the beginning of a day. That is to say, the afterstates are defined after the action, i.e., the overnight car-moving, has been executed.

This reformulation is likely to speed convergence because it reduces the number of distinct situations whose values must be estimated. Consider case (i): at the end of a day, Jack has 4 cars at location A and 8 cars at location B, and he decides to transport 2 cars from B to A; and case (ii): at the end of a day, Jack has 6 cars at location A and 6 cars at location B, and he decides to do nothing. Cases (i) and (ii) boil down to the same afterstate, namely (6, 6).

7 Multi-step Bootstrapping

Exercise 7.1. Why do you think a larger random walk task (19 states instead of 5) was used in the examples of this chapter? Would a smaller walk have shifted the advantage to a different value of n? How about the change in left-side outcome from 0 to −1 made in the larger walk? Do you think that made any difference in the best value of n?

Solution.

• I would expect that 5 states are too few to test n-step TD when n is fairly large, say 64, 128, 256, or 512, as in this example. It is very likely that the random walk reaches a terminal state in fewer than n steps. Recall that the value function of n-step TD is not updated during the first n − 1 steps. Hence, if the number of states is significantly smaller than n, it is very likely that the n-step TD algorithm does nothing before it reaches the terminal state, in which case it is exactly the same as the MC method. We want to test n-step TD methods, not the MC method.

• Following the same reasoning, I would expect a smaller walk to shift the advantage to a different value of n.

• I do not think changing the left-side outcome from 0 to −1 makes any difference in the best value of n. But perhaps it speeds up learning for n-step TD: since the reward for reaching the left terminal state (−1) now differs from the reward for reaching any intermediate state (0), this difference lets the agent discern, after n steps, whether it reached the left terminal state or some intermediate state.

8 Planning and Learning with Tabular Methods

Exercise 8.1. The nonplanning method looks particularly poor in Figure 8.4 because it is a one-step method; a method using multi-step bootstrapping would do better. Do you think one of the multi-step bootstrapping methods from Chapter 7 could do as well as the Dyna method? Explain why or why not.

Solution. To my understanding, Dyna has two advantages: (i) it updates a whole set of values per episode, instead of just one value as in the non-planning case, and (ii) it exploits both learning from real experience and learning from simulated experience. Now suppose we use a multi-step bootstrapping method from Chapter 7. Advantage (i) vanishes, as both methods enjoy this property, but advantage (ii) of Dyna is not exploited by any multi-step bootstrapping method. For this reason, I would not expect one of the multi-step bootstrapping methods from Chapter 7 to do as well as the Dyna method.

Exercise 8.2. Why did the Dyna agent with exploration bonus, Dyna-Q+, perform better in the first phase as well as in the second phase of the blocking and shortcut experiments?


Solution. Dyna-Q+ tends to execute two kinds of actions: (i) actions which have high values according to previous experience, and (ii) actions that have not been chosen for a long time. For this reason, Dyna-Q+ outperforms Dyna-Q in both phases. In the first phase, Dyna-Q initially just chooses actions at random, whereas Dyna-Q+ tends to try new actions or the effective old actions; this makes Dyna-Q+ learn the values more effectively. In the second phase, Dyna-Q+ is more exploratory, and therefore performs better than Dyna-Q.

Exercise 8.3. Careful inspection of Figure 8.6 reveals that the difference between Dyna-Q+ and Dyna-Q narrowed slightly over the first part of the experiment. What is the reason for this?

Solution. Assuming the environment is fixed, once the algorithm has learned the environment reasonably well, continuing to try exploratory actions as Dyna-Q+ does is not helpful anymore, and is even wasteful. Since both algorithms converge to the optimal policy, the difference in cumulative reward diminishes.

Exercise 8.4. The exploration bonus described above actually changes the estimated values of states and actions. Is this necessary? Suppose the bonus κ√τ was used not in backups, but solely in action selection. That is, suppose the action selected was always that for which Q(S, a) + κ√τ_Sa was maximal. Carry out a gridworld experiment that tests and illustrates the strengths and weaknesses of this alternate approach.

Solution. Please see exer8.4.py.

9 On-Policy Prediction with Approximation

Exercise 9.1. Why does (9.17) define (N + 1)^d distinct functions for dimension d?

Solution. We need to count how many different monomials s1^{n1} · · · sd^{nd} there are, where each ni is chosen from {0, · · · , N}. There are (N + 1) ways to choose each ni, and we must choose an ni for each of the d dimensions. Hence there are (N + 1)^d distinct functions.


Exercise 9.2. Give N and the ci,j defining the basis functions that produce the feature vector (1, s1, s2, s1s2, s1^2, s2^2, s1s2^2, s1^2s2, s1^2s2^2)^T.

Solution. Take N = 2. The basis functions are defined as

φ_i(s) := s1^{c_{i,1}} s2^{c_{i,2}},

and to produce the feature vector (1, s1, s2, s1s2, s1^2, s2^2, s1s2^2, s1^2s2, s1^2s2^2)^T the exponent pairs (c_{i,1}, c_{i,2}), for i = 1, …, 9, are

(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2), (1, 2), (2, 1), (2, 2),

respectively.
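A small illustrative sketch (my own, not from the document) of enumerating order-N polynomial features for a d-dimensional state; the ordering of the features differs from the vector listed above, but the set of monomials is the same:

```python
from itertools import product
import numpy as np

def polynomial_features(s, N):
    """All products s_1^{c_1} * ... * s_d^{c_d} with each exponent c_i in {0, ..., N}."""
    s = np.asarray(s, dtype=float)
    exponents = product(range(N + 1), repeat=len(s))
    return np.array([np.prod(s ** np.array(c)) for c in exponents])

print(len(polynomial_features([0.3, 0.7], N=2)))   # 9 = (N + 1)^d features for d = 2
```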

Exercise 9.3. Why does (9.18) define (N + 1)^d distinct functions for dimension d?

Solution. In exactly the same fashion as in the previous problem, we count how many such functions there are. There are (N + 1) ways to choose each ci,j, and we must choose a ci,j for each of the d dimensions. Hence there are (N + 1)^d such functions.

Exercise 9.4. Suppose we believe that one of two state dimensions is more likely to have an effect on the value function than is the other, that generalization should be primarily across this dimension rather than along it. What kind of tilings could be used to take advantage of this prior knowledge?

Solution.
