Multi-armed Bandit Problem and Bayesian Optimization in Reinforcement Learning
From Cognitive Science and Machine Learning Summer School 2010
Loris Bazzani




Page 1: Multi-armed Bandit Problem and Bayesian Optimization in Reinforcement Learning

Multi-armed Bandit Problem and Bayesian Optimization in Reinforcement Learning

From Cognitive Science and Machine Learning Summer School 2010

Loris Bazzani


Page 2

Outline Summer School

www.videolectures.net


Page 4

Outline Presentation

• What are Machine Learning and Cognitive Science?
• How are they related to each other?
• Reinforcement Learning
  – Background
  – Discrete case
  – Continuous case


Page 6

What is Machine Learning (ML)?

• Endow computers with the ability to “learn” from “data”
• Present data from sensors, the internet, experiments
• Expect the computer to make decisions
• Traditionally categorized as:
  – Supervised Learning: classification, regression
  – Unsupervised Learning: dimensionality reduction, clustering
  – Reinforcement Learning: learning from feedback, planning

From N. Lawrence’s slides

Page 7

What is Cognitive Science (CogSci)?

• How does the mind get so much out of so little?
  – Rich models of the world
  – Make strong generalizations
• Process of reverse engineering the brain
  – Create computational models of the brain
• Much of cognition involves induction: finding patterns in data

From N. Chater’s slides

Page 8

Outline Presentation

• What are Machine Learning and Cognitive Science?
• How are they related to each other?
• Reinforcement Learning
  – Background
  – Discrete case
  – Continuous case

Page 9

Link between CogSci and ML

• ML takes inspiration from psychology, CogSci and computer science
  – Rosenblatt’s Perceptron
  – Neural Networks
  – …
• CogSci uses ML as an engineering toolkit
  – Bayesian inference in generative models
  – Hierarchical probabilistic models
  – Approximate methods of learning and inference
  – …

Page 10

Outline Presentation

• What are Machine Learning and Cognitive Science?
• How are they related to each other?
• Reinforcement Learning
  – Background
  – Discrete case
  – Continuous case


Page 13

Outline Presentation

• What are Machine Learning and Cognitive Science?
• How are they related to each other?
• Reinforcement Learning
  – Background
  – Discrete case
  – Continuous case

Page 14

Multi-armed Bandit Problem [Auer et al. ‘95]

“I wanna win a lot of cash!”

Page 15

Multi-armed Bandit Problem [Auer et al. ‘95]

• Trade-off between Exploration and Exploitation
• An adversary controls the payoffs
• No statistical assumptions on the reward distribution
• Performance measure: Regret = Player Reward – Best Reward
• Upper bound on the expected regret
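In the notation of Auer et al., writing $x_j(t)$ for the reward of action $j$ at trial $t$ and $G_A(T)$ for the player's cumulative reward, the (weak) regret above can be written as:

```latex
R(T) \;=\; G_{\max}(T) - G_A(T),
\qquad
G_{\max}(T) \;=\; \max_{j} \sum_{t=1}^{T} x_j(t)
```

i.e., the gap to the single best action in hindsight.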

Page 16

Multi-armed Bandit Problem [Auer et al. ‘95]

• Actions
• Sequence of trials
• Reward(s)

Goal: define a probability distribution over the actions

Page 17

The Full Information Game [Freund & Schapire ‘95]

• Regret bound: O(√(T log K)) for K actions over T trials

Problem: it requires computing the reward for each action!
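The full-information game can be sketched with the exponential-weights (Hedge) update; this is a minimal illustration, not the original algorithm's code, and the learning rate `eta`, the toy reward vectors, and the seed are illustrative assumptions:

```python
import math
import random

def hedge(rewards, eta=0.1):
    """Exponential-weights (Hedge) for the full-information game:
    after each trial the reward of *every* action is revealed."""
    K = len(rewards[0])
    w = [1.0] * K
    total = 0.0
    for x in rewards:                # x: reward vector for all K actions
        s = sum(w)
        p = [wi / s for wi in w]     # probability distribution over actions
        i = random.choices(range(K), weights=p)[0]
        total += x[i]
        # full information: update every weight with its observed reward
        w = [wi * math.exp(eta * xi) for wi, xi in zip(w, x)]
    return total, p

# toy run: action 1 always pays most, so the distribution concentrates on it
random.seed(0)
T = 500
rewards = [[0.2, 0.8, 0.5] for _ in range(T)]
total, p = hedge(rewards)
```

Because every action's reward is revealed, the weights evolve the same way no matter which action is actually played, which is what makes this the "full information" setting.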

Page 18

The Partial Information Game

Exp3 = Exponential-weight algorithm for Exploration and Exploitation

• Updates only the selected action
• Tries out all the possible actions
• Bound for certain values of the parameters, depending on the best reward
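A minimal sketch of Exp3, following the weight-update scheme named above; the toy Bernoulli arms, the value of `gamma`, and the seed are illustrative assumptions, not part of the slides:

```python
import math
import random

def exp3(pull, K, T, gamma=0.1):
    """Exp3 (Auer et al.): exponential weights with uniform exploration.
    Only the reward of the pulled arm is observed (partial information)."""
    w = [1.0] * K
    total = 0.0
    for _ in range(T):
        s = sum(w)
        # mix the weight distribution with uniform exploration
        p = [(1 - gamma) * wi / s + gamma / K for wi in w]
        i = random.choices(range(K), weights=p)[0]
        x = pull(i)                    # observed reward in [0, 1]
        total += x
        xhat = x / p[i]                # importance-weighted reward estimate
        w[i] *= math.exp(gamma * xhat / K)   # update only the selected action
    return total

# toy bandit: arm 2 pays off most often (Bernoulli rewards)
random.seed(1)
means = [0.2, 0.5, 0.8]
total = exp3(lambda i: 1.0 if random.random() < means[i] else 0.0,
             K=3, T=2000)
```

Dividing the observed reward by `p[i]` keeps the estimate unbiased for every arm, which is how Exp3 copes with seeing only one reward per trial.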

Page 19

The Partial Information Game

Exp3.1 = Exp3 run in rounds, where a round consists of a sequence of trials

• Each round guesses a bound for the total reward of the best action
• The resulting regret bound holds without prior knowledge of the best reward

Page 20

Applications [Hedge] [Bazzani et al. ‘10]

Page 21

Outline Presentation

• What are Machine Learning and Cognitive Science?
• How are they related to each other?
• Reinforcement Learning
  – Background
  – Discrete case
  – Continuous case

Page 22

Bayesian Optimization [Brochu et al. ‘10]

• Optimize a nonlinear function over a set of points (in the bandit view, the points are the actions and the objective is the function that gives the rewards)

Classic Optimization Tools:
– Known math representation
– Convex
– Evaluation of the function on all the points

Bayesian Optimization Tools:
– No closed-form expression
– Not convex
– Evaluation of the function on only one point, which returns a noisy response

Page 23

Bayesian Optimization [Brochu et al. ‘10]

• Uses Bayes’ theorem: posterior ∝ likelihood × prior, where
  – Prior: our beliefs about the space of possible objective functions
  – Likelihood: given what we think we know about the prior, how likely is the data we have seen?
  – Posterior: our updated beliefs about the unknown objective function

Goal: maximize the posterior at each step, so that each new evaluation decreases the distance between the true global maximum and the expected maximum given the model.


Page 25

Priors over Functions

• Convergence conditions of BO:
  – The acquisition function is continuous and approximately minimizes the risk
  – The conditional variance converges to zero
  (both guaranteed by Gaussian Processes (GP))
  – The objective is continuous
  – The prior is homogeneous
  – The optimization is independent of the m-th differences

Page 26

Priors over Functions

• GP = extension of the multivariate Gaussian distribution to an infinite-dimensional stochastic process
• Any finite linear combination of samples is normally distributed
• Defined by its mean function and covariance function
• Focus on defining the covariance function

Page 27

Why use GPs?

• Assume a zero-mean GP: the observed function values are jointly Gaussian, with covariances given by the kernel
• When a new observation comes, the predictive distribution of the function at a new point is again Gaussian
• The Sherman-Morrison-Woodbury formula gives an efficient update of the predictive mean and variance
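The GP machinery above can be sketched as a minimal zero-mean GP posterior with a squared-exponential kernel; the `noise` and `theta` values and the toy data are illustrative assumptions (the equations follow Rasmussen & Williams, cited in the references):

```python
import numpy as np

def sq_exp(a, b, theta=1.0):
    """Squared-exponential kernel k(x, x') = exp(-|x - x'|^2 / (2 theta^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * theta**2))

def gp_posterior(x, y, xstar, noise=1e-6, theta=1.0):
    """Zero-mean GP posterior at 1-D test points xstar:
    mu = k*^T (K + noise I)^-1 y,  var = k** - k*^T (K + noise I)^-1 k*."""
    K = sq_exp(x, x, theta) + noise * np.eye(len(x))
    ks = sq_exp(x, xstar, theta)
    alpha = np.linalg.solve(K, y)          # (K + noise I)^-1 y
    mu = ks.T @ alpha                      # posterior mean
    v = np.linalg.solve(K, ks)
    var = sq_exp(xstar, xstar, theta) - ks.T @ v   # posterior covariance
    return mu, np.diag(var)

# toy check: with near-zero noise the posterior mean interpolates the data
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([1.0]))
```

At an observed input the predictive mean returns (almost exactly) the observed value and the predictive variance collapses toward the noise level, which is the "conditional variance converges to zero" property used in the convergence conditions above.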

Page 28

Choice of Covariance Functions

• Isotropic model with a length-scale hyperparameter
• Squared Exponential Kernel
• Matérn Kernel, which involves the Gamma function and a Bessel function
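In the standard form of Rasmussen & Williams (cited in the references), with θ the length-scale hyperparameter and $r = \|\mathbf{x}-\mathbf{x}'\|$, the two kernels read:

```latex
k_{\mathrm{SE}}(\mathbf{x},\mathbf{x}') = \exp\!\Big(-\frac{\|\mathbf{x}-\mathbf{x}'\|^{2}}{2\theta^{2}}\Big),
\qquad
k_{\mathrm{Mat}}(r) = \frac{2^{1-\nu}}{\Gamma(\nu)}\Big(\frac{\sqrt{2\nu}\,r}{\theta}\Big)^{\nu} K_{\nu}\Big(\frac{\sqrt{2\nu}\,r}{\theta}\Big)
```

where $\Gamma$ is the Gamma function and $K_\nu$ the modified Bessel function; the smoothness parameter $\nu$ controls how many times the sampled functions are differentiable.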

Page 29

Acquisition Functions

• The role of the acquisition function is to guide the search for the optimum
• Assumption: optimizing the acquisition function is simple and cheap
• Goal: high acquisition corresponds to potentially high values of the objective function, whether because the prediction is high, the uncertainty is great, or both
• Maximizing the probability of improvement

Page 30

Acquisition Functions

• Expected improvement
• Confidence bound criterion

(expressed through the CDF and PDF of the normal distribution)
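The three criteria can be sketched as follows; a minimal implementation of the standard formulas, assuming a maximization problem with posterior mean `mu`, posterior standard deviation `sigma`, and incumbent best observed value `best` (the `kappa` default is an illustrative assumption):

```python
import math

def norm_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    """PDF of the standard normal distribution."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def probability_of_improvement(mu, sigma, best):
    """PI(x) = Phi((mu(x) - best) / sigma(x))."""
    return norm_cdf((mu - best) / sigma)

def expected_improvement(mu, sigma, best):
    """EI(x) = (mu - best) Phi(z) + sigma phi(z), with z = (mu - best) / sigma."""
    z = (mu - best) / sigma
    return (mu - best) * norm_cdf(z) + sigma * norm_pdf(z)

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """Confidence bound criterion: mu(x) + kappa * sigma(x)."""
    return mu + kappa * sigma
```

All three reward points where the GP predicts a high mean, a high uncertainty, or both, which is exactly the "high acquisition corresponds to potentially high values of the objective" goal stated above.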

Page 31

Applications [BO]

Learn a set of robot gait parameters that maximize velocity of a Sony AIBO ERS-7 robot

Find a policy for robot path planning that would minimize uncertainty about its location and heading

Select the locations of a set of sensors (e.g., cameras) in a dynamic system


Page 32

Take-home Message

• ML and CogSci are connected

• Reinforcement Learning is useful for optimization when dealing with temporal information
  – Discrete case: Multi-armed bandit problem
  – Continuous case: Bayesian optimization

• We can employ these techniques for Computer Vision and System Control problems

Page 33

http://heli.stanford.edu/

[Abbeel et al. 2007]


Page 34

Some References

P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. 1995. Gambling in a rigged casino: The adversarial multi-armed bandit problem. FOCS ’95.

Yoav Freund and Robert E. Schapire. 1995. A decision-theoretic generalization of on-line learning and an application to boosting. EuroCOLT ’95.

Eric Brochu, Vlad Cora and Nando de Freitas. 2009. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. Technical Report TR-2009-023, UBC.

Loris Bazzani, Nando de Freitas and Jo-Anne Ting. 2010. Learning attentional mechanisms for simultaneous object tracking and recognition with deep networks. NIPS 2010 Deep Learning and Unsupervised Feature Learning Workshop.

Carl Edward Rasmussen and Christopher K. I. Williams. 2005. Gaussian Processes for Machine Learning. The MIT Press.

Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y. Ng. 2007. An Application of Reinforcement Learning to Aerobatic Helicopter Flight. NIPS 2007.