
Page 1

Thomas Trappenberg

Autonomous Robotics: Supervised and unsupervised learning

Page 2

Three kinds of learning:

1. Supervised learning: a detailed teacher provides the desired output y for a given input x. Training set {x, y}; find an appropriate mapping function y = h(x; w) [= W φ(x)].

2. Unsupervised learning: unlabeled samples are provided from which the system has to figure out good representations. Training set {x}; find sparse basis functions bᵢ so that x = Σᵢ cᵢ bᵢ.

3. Reinforcement learning: delayed feedback from the environment in the form of reward/punishment when reaching state s with action a: reward r(s, a); find the optimal policy a = π*(s). This is the most general learning setting.

Page 3

Some Pioneers

Page 4

1. Supervised learning

• Maximum Likelihood (ML) estimation: given a hypothesis h(y|x; θ), what are the best parameters to describe the training data?

• Bayesian networks: how to formulate detailed causal models with graphical means.

• Universal learners (neural networks, SVMs & kernel machines): what if we do not have a good hypothesis?

Page 5

Sources of fluctuations:
• Fundamental stochasticity
• Irreducible indeterminacy
• Epistemological limitations
→ Probabilistic framework

Goal of learning: make predictions!

Learning vs. memory

Page 6

Goal of learning:

Plant equation for the robot

Example: distance traveled when both motors are running at power 50.

Page 7

Hypothesis:

Learning: choose the parameters that make the training data most likely.

The hard problem: how to come up with a useful hypothesis?

Assume independence of the training examples

and consider the result as a function of the parameters (the log likelihood).

Maximum Likelihood Estimation
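The slide's equations are not reproduced in this transcript. Below is a minimal sketch of the idea, assuming a linear hypothesis h(x; w) = w·x with Gaussian noise (our choice of model, matching the robot-distance example): for independent examples, maximizing the log likelihood is equivalent to minimizing the mean squared error.

```python
import numpy as np

# Minimal MLE sketch (assumed model, not the slide's exact equations):
# hypothesis h(x; w) = w * x with Gaussian noise of fixed variance.
# For independent examples the log likelihood is
#   l(w) = sum_i log N(y_i; w * x_i, sigma^2),
# and maximizing it is equivalent to minimizing the mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)                # e.g. motor power settings
y = 2.5 * x + rng.normal(0, 0.1, 50)     # noisy distances traveled

# Closed-form ML estimate for this one-parameter linear model:
w_ml = np.sum(x * y) / np.sum(x * x)
print(f"ML estimate of w: {w_ml:.3f}")   # close to the true 2.5
```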

Page 8

Minimize MSE

1. Random search
2. Look where the gradient is zero
3. Gradient descent

Learning rule: w ← w − η ∇w MSE (gradient descent)
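A minimal sketch of the learning rule in action, using the same assumed one-parameter linear hypothesis as above:

```python
import numpy as np

# Gradient-descent sketch for the MSE objective (assumed linear
# hypothesis h(x; w) = w * x; data and names are illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.5 * x + rng.normal(0, 0.1, 50)

w, eta = 0.0, 0.1                         # initial weight, learning rate
for _ in range(200):
    grad = -2 * np.mean((y - w * x) * x)  # d(MSE)/dw
    w -= eta * grad                       # learning rule: w <- w - eta * grad
print(f"w after gradient descent: {w:.3f}")
```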

Page 9

Nonlinear regression: Bias-variance tradeoff

Page 10

Nonlinear regression: Bias-variance tradeoff
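These two slides show figures illustrating the tradeoff; a small sketch of the same phenomenon (illustrative data and polynomial degrees of our choosing):

```python
import numpy as np

# Bias-variance sketch (illustrative data): fit polynomials of growing
# degree and compare training vs. test error. Low degree underfits
# (high bias); high degree overfits the noise (high variance).
rng = np.random.default_rng(0)
x_tr = rng.uniform(-1, 1, 20)
x_te = rng.uniform(-1, 1, 200)
f = lambda x: np.sin(3 * x)
y_tr = f(x_tr) + rng.normal(0, 0.2, x_tr.size)
y_te = f(x_te) + rng.normal(0, 0.2, x_te.size)

for deg in (1, 3, 9):
    coeffs = np.polyfit(x_tr, y_tr, deg)
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {deg}: train MSE {err(x_tr, y_tr):.3f}, "
          f"test MSE {err(x_te, y_te):.3f}")
```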

Page 11

Feedback control

Adaptive control

Page 12

MLE only looks at the data … What if we have some prior knowledge of θ?

Bayes' theorem: p(θ | x) = p(x | θ) p(θ) / p(x)

Maximum a posteriori (MAP) estimate: θ_MAP = argmax_θ p(x | θ) p(θ)
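A minimal sketch of MAP estimation, assuming the same linear model as before with a Gaussian prior on the weight (the prior and its precision are our illustrative choices):

```python
import numpy as np

# MAP sketch (assumed setup): linear model with a Gaussian prior on w.
# Maximizing log p(data | w) + log p(w) gives a ridge-regularized
# estimate that is shrunk toward the prior mean 0.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 10)                 # few samples, so the prior matters
y = 2.5 * x + rng.normal(0, 0.1, 10)

lam = 1.0    # effective prior precision (absorbs the noise variance; assumed)
w_mle = np.sum(x * y) / np.sum(x * x)
w_map = np.sum(x * y) / (np.sum(x * x) + lam)
print(f"MLE: {w_mle:.3f}  MAP: {w_map:.3f}")
```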

Page 13

How about building more elaborate multivariate models?

Causal (graphical) models (Judea Pearl)

The parameters of the conditional probability tables (CPTs) are usually learned from data!
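A toy sketch of inference in such a model; the rain/wet-grass network and its CPT values are hypothetical stand-ins for whatever the slide's example showed:

```python
# Toy causal-model sketch (hypothetical rain -> wet-grass network;
# the CPT numbers are made up, as if learned from data).
p_rain = 0.2                                  # P(rain)
p_wet_given_rain = {True: 0.9, False: 0.1}    # CPT: P(wet | rain)

# Inference by enumeration: P(rain | wet) via Bayes' theorem.
p_wet = (p_rain * p_wet_given_rain[True]
         + (1 - p_rain) * p_wet_given_rain[False])
p_rain_given_wet = p_rain * p_wet_given_rain[True] / p_wet
print(f"P(rain | wet grass) = {p_rain_given_wet:.2f}")   # ~0.69
```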

Page 14

Hidden Markov Model (HMM) for localization
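A minimal sketch of one filtering step, assuming a 1-D corridor of discrete cells with hypothetical door landmarks (all numbers are illustrative, not the slide's):

```python
import numpy as np

# HMM-localization sketch: belief over discrete positions, updated by a
# motion (transition) model and a sensor (emission) model -- one step of
# the forward algorithm.
n = 10
belief = np.ones(n) / n                   # uniform prior over positions

T = np.zeros((n, n))                      # transition model: move right
for s in range(n):
    T[s, min(s + 1, n - 1)] += 0.8        # intended move succeeds
    T[s, s] += 0.2                        # or the robot slips and stays

doors = [2, 5]                            # cells with a door landmark
p_door = np.full(n, 0.1)                  # emission: P(see door | cell)
p_door[doors] = 0.9

belief = belief @ T                       # motion update (predict)
belief *= p_door                          # sensor update ("door seen")
belief /= belief.sum()
print(np.round(belief, 2))
```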

Page 15

How about building more general multivariate models?

1961: Outline of a Theory of Thought-Processes and Thinking Machines

• Neuronic & mnemonic equations
• Reverberation
• Oscillations
• Reward learning

Eduardo Renato Caianiello (1921-1993)

But: NOT STOCHASTIC (only small noise in weights)

Stochastic networks: the Boltzmann machine (Hinton & Sejnowski, 1983)

Page 16

Page 17

McCulloch-Pitts neuron

Also popular:

Perceptron learning rule: w ← w + α (y − ŷ) x, with output ŷ = sign(wᵀx)
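A minimal sketch of this rule on an illustrative AND problem (our toy data, not the slide's):

```python
import numpy as np

# Perceptron sketch: a threshold (McCulloch-Pitts) unit trained with
# w <- w + alpha * (y - y_hat) * x on an illustrative AND problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)          # AND labels
X = np.hstack([X, np.ones((4, 1))])              # bias as an extra input

w, alpha = np.zeros(3), 0.1
for _ in range(20):                              # a few epochs suffice
    for xi, yi in zip(X, y):
        y_hat = float(w @ xi > 0)                # threshold output
        w += alpha * (yi - y_hat) * xi

print([float(w @ xi > 0) for xi in X])           # [0, 0, 0, 1]
```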

Page 18

Multilayer Perceptron (MLP)

Universal approximator (learner)

But:
• Overfitting
• Needs meaningful input (features)
• Unstructured learning
• Only deterministic units

A stochastic version can represent density functions.

Training: error backpropagation (just use the chain rule).
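A minimal backpropagation sketch: a tiny two-layer network where every gradient is just the chain rule applied twice (the XOR task, layer sizes, and learning rate are our illustrative choices):

```python
import numpy as np

# Backprop sketch: a two-layer MLP trained by gradient descent; the
# gradients are the chain rule (illustrative XOR task).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
sig = lambda z: 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

for _ in range(10000):
    h = sig(X @ W1 + b1)                       # hidden layer
    out = sig(h @ W2 + b2)                     # output layer
    d_out = (out - Y) * out * (1 - out)        # chain rule at the output
    d_h = (d_out @ W2.T) * h * (1 - h)         # chain rule at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())                    # usually approx [0, 1, 1, 0]
```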

Page 19

Linear large-margin classifiers: Support Vector Machines (SVMs)

MLP: minimize the training error (empirical risk)

SVM: minimize the generalization error (structural risk)

Page 20

Linear-in-parameters (LIP) learning

Linear hypothesis: y = wᵀx

Non-linear hypothesis: y = wᵀφ(x) (still linear in the parameters)

LIP + constraints → SVM in dual form

Thanks to Doug Tweet (UoT) for pointing out LIP.

Page 21

Linear-in-parameters learning

Primal problem: min over w, b of ½‖w‖², subject to yᵢ(wᵀxᵢ + b) ≥ 1 for all i

Dual problem: max over α of Σᵢ αᵢ − ½ Σᵢⱼ αᵢ αⱼ yᵢ yⱼ xᵢᵀxⱼ

subject to αᵢ ≥ 0 and Σᵢ αᵢ yᵢ = 0
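A minimal sketch of fitting such a classifier; scikit-learn is our library choice (the slides name no tool), and its SVC exposes the learned dual coefficients αᵢyᵢ:

```python
import numpy as np
from sklearn.svm import SVC

# Large-margin sketch: fit a linear SVM on toy 2-D data and inspect
# the support vectors and dual coefficients alpha_i * y_i.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=10.0).fit(X, y)
print("number of support vectors:", len(clf.support_))
print("dual coefficients:", clf.dual_coef_.round(2))
```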

Page 22

Non-linear large-margin classifiers: the kernel trick

Transform the attributes (or create new feature values from the attributes) and then use linear optimization.

This can be implemented efficiently with kernels in SVMs, since the data only appear as inner (scalar) products.

For example, the quadratic kernel K(x, z) = (xᵀz)².
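A minimal sketch verifying the trick for the quadratic kernel: evaluating K(x, z) = (xᵀz)² directly gives the same number as an inner product in an explicit quadratic feature space, without ever constructing the features.

```python
import numpy as np

# Kernel-trick sketch: the quadratic kernel K(x, z) = (x . z)^2 equals
# an inner product in an explicit degree-2 feature space phi(x).
def phi(x):
    # explicit feature map for 2-D input (degree-2 monomials)
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

x, z = np.array([1.0, 2.0]), np.array([3.0, 0.5])
print((x @ z) ** 2)          # kernel evaluation: cheap
print(phi(x) @ phi(z))       # same value via explicit features
```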

Page 23

2. Sparse Unsupervised Learning

Page 24

Major issues not addressed by supervised learning:

• How to scale to real (large) learning problems
• Structured (hierarchical) internal representations
• What are good features?
• Lots of unlabeled data
• Top-down (generative) models
• The temporal domain

Page 25

What is a good representation?

Sparse features are useful.

Page 26

Horace Barlow

"Possible principles underlying the transformations of sensory messages" (1961)

"… reduction of redundancy is an important principle guiding the organization of sensory messages …"

Sparseness & overcompleteness

The Ratio Club

Page 27

Minimizing reconstruction error and sparsity

PCA
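A minimal sparse-coding sketch, assuming the objective ‖x − Bc‖² + λ‖c‖₁ with a fixed random overcomplete basis B (the sizes, λ, and the ISTA solver are our illustrative choices; PCA would instead use dense projections onto orthogonal axes):

```python
import numpy as np

# Sparse-coding sketch: minimize ||x - B c||^2 + lam * ||c||_1 over the
# coefficients c for a fixed overcomplete basis B, via a few ISTA steps.
rng = np.random.default_rng(0)
B = rng.normal(0, 1, (16, 32))                 # 16-D signal, 32 basis atoms
B /= np.linalg.norm(B, axis=0)                 # unit-norm atoms
x = B[:, 3] * 1.0 - B[:, 17] * 0.5             # signal built from two atoms

c, lam = np.zeros(32), 0.05
step = 1.0 / np.linalg.norm(B.T @ B, 2)        # safe ISTA step size
for _ in range(200):
    c = c - step * (B.T @ (B @ c - x))         # gradient on reconstruction
    c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0)  # soft threshold

print("two largest coefficients:", np.argsort(-np.abs(c))[:2])  # atoms 3, 17
```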

Page 28

Self-organized feature representation by hierarchical generative models

Page 29


Restricted Boltzmann Machine (RBM)

Update rule: probabilistic units (Caianiello: neuronic equation)

Training rule: contrastive divergence (Caianiello: mnemonic equation)

Alternating Gibbs Sampling
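A minimal numpy sketch of one alternating Gibbs step inside contrastive divergence (CD-1); the layer sizes, toy binary data, and the omission of bias terms are simplifications of ours:

```python
import numpy as np

# RBM sketch with one contrastive-divergence step (CD-1) per example.
# Stochastic binary units; biases omitted for brevity.
rng = np.random.default_rng(0)
sig = lambda z: 1 / (1 + np.exp(-z))

n_v, n_h, eta = 6, 3, 0.1
W = rng.normal(0, 0.1, (n_v, n_h))
data = rng.integers(0, 2, (20, n_v)).astype(float)  # toy binary data

for v0 in data:
    # positive phase: sample hidden units given the visible data
    ph0 = sig(v0 @ W)
    h0 = (rng.random(n_h) < ph0).astype(float)
    # negative phase: one alternating Gibbs step (reconstruct, resample)
    pv1 = sig(h0 @ W.T)
    v1 = (rng.random(n_v) < pv1).astype(float)
    ph1 = sig(v1 @ W)
    # CD-1 weight update: <v h>_data - <v h>_reconstruction
    W += eta * (np.outer(v0, ph0) - np.outer(v1, ph1))

print(W.round(2))
```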

Page 30


Geoffrey E. Hinton

Deep belief networks: the stacked Restricted Boltzmann Machine

Page 31

Sparse and Topographic RBM

… with Paul Hollensen

Page 32

Map Initialized Perceptron (MIP)

… with Pitoyo Hartono

Page 33

RBM features

Page 34