
Page 1

Restricted Boltzmann Machines and Deep Belief Networks

Presented by Matt Luciw

USING A VAST, VAST MAJORITY OF SLIDES ORIGINALLY FROM:

Geoffrey Hinton, Sue Becker, Yann Le Cun, Yoshua Bengio, Frank Wood

Page 2

Motivations

• Supervised training of deep models (e.g. many-layered NNets) is difficult (an optimization problem).
• Shallow models (SVMs, one-hidden-layer NNets, boosting, etc.) are unlikely candidates for learning the high-level abstractions needed for AI.
• Unsupervised learning could do “local-learning” (each module tries its best to model what it sees).
• Inference (+ learning) is intractable in directed graphical models with many hidden variables.
• Current unsupervised learning methods don’t easily extend to learning multiple levels of representation.

Page 3

Belief Nets

• A belief net is a directed acyclic graph composed of stochastic variables.
• We can observe some of the variables, and we would like to solve two problems:
– The inference problem: infer the states of the unobserved variables.
– The learning problem: adjust the interactions between variables to make the network more likely to generate the observed data.

[Figure: stochastic hidden causes with directed connections to visible effects]

We use nets composed of layers of stochastic binary variables with weighted connections. Later, we will generalize to other types of variable.

Page 4

Stochastic binary neurons

• These have a state of 1 or 0, which is a stochastic function of the neuron’s bias, b, and the input it receives from other neurons:

p(s_i = 1) = \frac{1}{1 + \exp\left(-b_i - \sum_j s_j w_{ji}\right)}

[Figure: logistic curve of p(s_i = 1) against the total input b_i + \sum_j s_j w_{ji}, rising from 0 through 0.5 to 1]
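
For concreteness, here is a minimal NumPy sketch of that sampling rule (the function and variable names are illustrative, not from the slides):

import numpy as np

def sample_binary_unit(b, w, s, rng):
    """Sample a stochastic binary unit: p(s_i = 1) = 1 / (1 + exp(-(b_i + sum_j s_j w_ji)))."""
    total_input = b + np.dot(w, s)               # b_i + sum_j s_j w_ji
    p_on = 1.0 / (1.0 + np.exp(-total_input))    # the logistic function plotted above
    return int(rng.random() < p_on), p_on

rng = np.random.default_rng(0)
state, prob = sample_binary_unit(b=-0.5, w=np.array([1.0, -2.0]), s=np.array([1.0, 0.0]), rng=rng)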

Page 5

Stochastic units

• Replace the binary threshold units by binary stochastic units that make biased random decisions.
– The temperature T controls the amount of noise.
– Decreasing all the energy gaps between configurations is equivalent to raising the noise level.

p(s_i = 1) = \frac{1}{1 + \exp(-\Delta E_i / T)}, \qquad \Delta E_i = E(s_i = 0) - E(s_i = 1) = b_i + \sum_j s_j w_{ij}
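
A one-line sketch of the same rule with the temperature made explicit (reusing numpy and the rng from the previous sketch; T = 1 recovers the standard logistic unit):

def sample_unit_at_temperature(energy_gap, T, rng):
    """p(s_i = 1) = 1 / (1 + exp(-energy_gap / T)); larger T means noisier decisions."""
    p_on = 1.0 / (1.0 + np.exp(-energy_gap / T))
    return int(rng.random() < p_on)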

Page 6

The Energy of a joint configuration

E(\mathbf{v}, \mathbf{h}) = -\sum_{i \in \text{units}} s_i^{\mathbf{vh}} b_i - \sum_{i<j} s_i^{\mathbf{vh}} s_j^{\mathbf{vh}} w_{ij}

where
– E(\mathbf{v}, \mathbf{h}) is the energy with configuration v on the visible units and h on the hidden units,
– s_i^{\mathbf{vh}} is the binary state of unit i in joint configuration (v, h),
– b_i is the bias of unit i,
– w_{ij} is the weight between units i and j, and the sum over i < j indexes every non-identical pair of i and j once.
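
A minimal NumPy sketch of this energy (illustrative names; the joint state s is the concatenation of visible and hidden states, and W is assumed symmetric with a zero diagonal, so summing over the full matrix counts each pair twice):

import numpy as np

def energy(s, b, W):
    """E(s) = -sum_i b_i s_i - sum_{i<j} w_ij s_i s_j for a joint binary configuration s."""
    return -np.dot(b, s) - 0.5 * (s @ W @ s)   # the 0.5 corrects for double-counting pairs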

Page 7

Weights → Energies → Probabilities

• Each possible joint configuration of the visible and hidden units has an energy.
– The energy is determined by the weights and biases (as in a Hopfield net).
• The energy of a joint configuration of the visible and hidden units determines its probability:

p(\mathbf{v}, \mathbf{h}) \propto e^{-E(\mathbf{v}, \mathbf{h})}

• The probability of a configuration over the visible units is found by summing the probabilities of all the joint configurations that contain it.
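
On a toy network the normalizing constant Z can be computed by brute force, which makes the energy-to-probability mapping concrete (a sketch only; enumeration is infeasible beyond a handful of units, and the names are illustrative):

import itertools
import numpy as np

def boltzmann_distribution(b, W):
    """Return all joint binary states and p(s) = exp(-E(s)) / Z for a tiny network."""
    n = len(b)
    states = [np.array(s, dtype=float) for s in itertools.product([0, 1], repeat=n)]
    energies = np.array([-np.dot(b, s) - 0.5 * (s @ W @ s) for s in states])
    unnormalised = np.exp(-energies)
    return states, unnormalised / unnormalised.sum()   # Z = sum over all 2^n configurations

The probability of a visible configuration is then obtained by summing p(v, h) over every h paired with that v.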

Page 8

Restricted Boltzmann Machines

• Restrict the connectivity to make learning easier.
– Only one layer of hidden units (we will deal with more layers later).
– No connections between hidden units.
• In an RBM, the hidden units are conditionally independent given the visible states.
– So we can quickly get an unbiased sample from the posterior distribution when given a data-vector.
– This is a big advantage over directed belief nets.

[Figure: bipartite graph with a layer of hidden units j above a layer of visible units i; connections only between the layers]
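
Because the hidden units are conditionally independent given v, one matrix product gives every hidden unit's activation probability, and a single Bernoulli draw per unit yields an exact posterior sample. A NumPy sketch (illustrative names; W has shape num_visible x num_hidden):

import numpy as np

def sample_hidden_given_visible(v, W, b_hid, rng):
    """p(h_j = 1 | v) = sigmoid(b_j + sum_i v_i w_ij); all hidden units sampled in parallel."""
    p_h = 1.0 / (1.0 + np.exp(-(b_hid + v @ W)))       # shape: (num_hidden,)
    h = (rng.random(p_h.shape) < p_h).astype(float)    # one unbiased sample from the posterior
    return h, p_h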

Page 9

Frank Wood

Maximizing the training data log likelihood

• We want the maximizing parameters:

\hat\Theta_1, \ldots, \hat\Theta_n = \arg\max_{\Theta_1, \ldots, \Theta_n} \log p(D \mid \Theta_1, \ldots, \Theta_n) = \arg\max_{\Theta_1, \ldots, \Theta_n} \log \prod_{d \in D} \frac{\prod_m f_m(d \mid \Theta_m)}{\sum_c \prod_m f_m(c \mid \Theta_m)}

(The product over d \in D runs over all the training data, assuming the d’s are drawn independently from p(); the ratio inside it is the standard product-of-experts form.)

• Differentiate with respect to all parameters and perform gradient ascent to find the optimal parameters.
• The derivation is nasty.

Page 10

Frank Wood

Equilibrium Is Hard to Achieve

• With

\frac{\partial \log p(D \mid \Theta_1, \ldots, \Theta_n)}{\partial \Theta_m} = \left\langle \frac{\partial \log f_m(d \mid \Theta_m)}{\partial \Theta_m} \right\rangle_{P^0} - \left\langle \frac{\partial \log f_m(c \mid \Theta_m)}{\partial \Theta_m} \right\rangle_{P^\infty}

we can now train our PoE model.
• But… there’s a problem:
– The expectation under the equilibrium distribution P^\infty is computationally infeasible to obtain (especially in an inner gradient-ascent loop).
– The sampling Markov chain must converge to the target distribution, and this often takes a very long time!

Page 11

A very surprising fact

• Everything that one weight needs to know about the other weights and the data in order to do maximum likelihood learning is contained in the difference of two correlations:

\frac{\partial \log p(\mathbf{v})}{\partial w_{ij}} = \langle s_i s_j \rangle_{\mathbf{v}} - \langle s_i s_j \rangle_{\text{free}}

Here the left-hand side is the derivative of the log probability of one training vector, \langle s_i s_j \rangle_{\mathbf{v}} is the expected value of the product of states at thermal equilibrium when the training vector is clamped on the visible units, and \langle s_i s_j \rangle_{\text{free}} is the expected value of the product of states at thermal equilibrium when nothing is clamped.
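
Given estimates of the two correlations (however they were obtained), the maximum-likelihood weight update is just a scaled difference. A NumPy sketch with illustrative names:

import numpy as np

def pairwise_correlations(samples):
    """Estimate <s_i s_j> from sampled joint states of shape (num_samples, num_units)."""
    return samples.T @ samples / len(samples)

def boltzmann_weight_update(clamped_samples, free_samples, lr=0.1):
    """delta_w_ij = lr * (<s_i s_j>_clamped - <s_i s_j>_free), for all weights at once."""
    return lr * (pairwise_correlations(clamped_samples) - pairwise_correlations(free_samples))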

Page 12

The (theoretical) batch learning algorithm

• Positive phase
– Clamp a data vector on the visible units.
– Let the hidden units reach thermal equilibrium at a temperature of 1.
– Sample \langle s_i s_j \rangle for all pairs of units.
– Repeat for all data vectors in the training set.
• Negative phase
– Do not clamp any of the units.
– Let the whole network reach thermal equilibrium at a temperature of 1 (where do we start?).
– Sample \langle s_i s_j \rangle for all pairs of units.
– Repeat many times to get good estimates.
• Weight updates
– Update each weight by an amount proportional to the difference in \langle s_i s_j \rangle between the two phases.
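
A compact sketch of this batch procedure for a small, fully connected Boltzmann machine, using a fixed number of Gibbs sweeps as a stand-in for “reaching thermal equilibrium” (the function names, the sweep count, and the single-sample estimates are illustrative simplifications; data is a float array of shape (num_vectors, num_visible), and bias updates are omitted):

import numpy as np

def gibbs_sweep(s, b, W, rng, clamped=frozenset()):
    """One sequential Gibbs sweep over all units; indices in `clamped` stay fixed."""
    for i in range(len(s)):
        if i in clamped:
            continue
        gap = b[i] + W[i] @ s                               # energy gap for turning unit i on
        s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-gap)))
    return s

def batch_boltzmann_step(data, b, W, rng, lr=0.01, sweeps=50):
    """One approximate maximum-likelihood update: positive phase minus negative phase."""
    n, n_vis = W.shape[0], data.shape[1]
    pos = np.zeros_like(W)
    for v in data:                                          # positive phase: clamp each data vector
        s = np.concatenate([v, rng.integers(0, 2, n - n_vis).astype(float)])
        for _ in range(sweeps):
            s = gibbs_sweep(s, b, W, rng, clamped=frozenset(range(n_vis)))
        pos += np.outer(s, s)
    pos /= len(data)
    neg = np.zeros_like(W)
    for _ in range(len(data)):                              # negative phase: nothing clamped
        s = rng.integers(0, 2, n).astype(float)
        for _ in range(sweeps):
            s = gibbs_sweep(s, b, W, rng)
        neg += np.outer(s, s)
    neg /= len(data)
    return W + lr * (pos - neg)                             # proportional to the difference in <s_i s_j>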

Page 13

Frank Wood

Solution: Contrastive Divergence!

• Now we don’t have to run the sampling Markov chain to convergence; instead we can stop after 1 iteration (or perhaps a few iterations, more typically):

\frac{\partial \log p(D \mid \Theta_1, \ldots, \Theta_n)}{\partial \Theta_m} \approx \left\langle \frac{\partial \log f_m(d \mid \Theta_m)}{\partial \Theta_m} \right\rangle_{P^0} - \left\langle \frac{\partial \log f_m(c \mid \Theta_m)}{\partial \Theta_m} \right\rangle_{P^1}

• Why does this work?
– It attempts to minimize the ways that the model distorts the data.

Page 14

Contrastive Divergence

• Maximum likelihood gradient: pull down energy surface at the examples and pull it up everywhere else, with more emphasis where model puts more probability mass

• Contrastive divergence updates: pull down energy surface at the examples and pull it up in their neighborhood, with more emphasis where model puts more probability mass

Page 15

Restricted Boltzmann Machines

• In an RBM, the hidden units are conditionally independent given the visible states. It only takes one step to reach thermal equilibrium when the visible units are clamped.
– So we can quickly get the exact value of \langle s_i s_j \rangle_{\mathbf{v}}.

[Figure: bipartite RBM graph with hidden units j and visible units i]
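
Because each hidden unit's posterior is exact given v, the clamped correlation needs no sampling at all. A NumPy sketch (illustrative names):

import numpy as np

def exact_clamped_correlation(v, W, b_hid):
    """<v_i h_j>_v = v_i * p(h_j = 1 | v), computed for all (i, j) pairs at once."""
    p_h = 1.0 / (1.0 + np.exp(-(b_hid + v @ W)))
    return np.outer(v, p_h)                    # shape: (num_visible, num_hidden)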

Page 16

A picture of the Boltzmann machine learning algorithm for an RBM

[Figure: a chain of alternating updates of the hidden units j and visible units i at t = 0, t = 1, t = 2, …, t = infinity; the correlation \langle s_i s_j \rangle^0 is measured at t = 0 and \langle s_i s_j \rangle^\infty at equilibrium]

Start with a training vector on the visible units. Then alternate between updating all the hidden units in parallel and updating all the visible units in parallel. The configuration at t = infinity is a “fantasy”.

\Delta w_{ij} = \varepsilon \left( \langle s_i s_j \rangle^0 - \langle s_i s_j \rangle^\infty \right)

Page 17

Contrastive divergence learning: A quick way to learn an RBM

[Figure: the same chain truncated after one step, with data at t = 0 and a reconstruction at t = 1, giving correlations \langle s_i s_j \rangle^0 and \langle s_i s_j \rangle^1]

Start with a training vector on the visible units. Update all the hidden units in parallel. Update all the visible units in parallel to get a “reconstruction”. Update the hidden units again.

\Delta w_{ij} = \varepsilon \left( \langle s_i s_j \rangle^0 - \langle s_i s_j \rangle^1 \right)

This is not following the gradient of the log likelihood. But it works well. When we consider infinite directed nets it will be easy to see why it works.
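
A minimal CD-1 training step for a binary RBM along these lines (a sketch; the names are illustrative, and it uses hidden probabilities rather than binary samples for the negative statistics, which is a common variant):

import numpy as np

def cd1_step(v0, W, b_vis, b_hid, rng, lr=0.05):
    """One contrastive-divergence (CD-1) update for a binary RBM."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    # Positive phase: sample the hidden units from the data vector v0.
    p_h0 = sigmoid(b_hid + v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Reconstruction: sample the visible units from h0, then recompute the hidden probabilities.
    p_v1 = sigmoid(b_vis + h0 @ W.T)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(b_hid + v1 @ W)
    # Update: epsilon times the difference of correlations at t = 0 and t = 1.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_vis += lr * (v0 - v1)
    b_hid += lr * (p_h0 - p_h1)
    return W, b_vis, b_hid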

Page 18

How to learn a set of features that are good for reconstructing images of the digit 2

[Figure: a 16 x 16 pixel image connected to 50 binary feature neurons, shown once for the data and once for the reconstruction]

– data (reality): increment weights between an active pixel and an active feature.
– reconstruction (better than reality): decrement weights between an active pixel and an active feature.

Page 19

Using an RBM to learn a model of a digit class

[Figure: rows of example images showing the data, reconstructions by a model trained on 2’s, and reconstructions by a model trained on 3’s; the RBM has 256 visible units (pixels) and 100 hidden units (features), trained with the \langle s_i s_j \rangle^0 - \langle s_i s_j \rangle^1 rule shown earlier]

Page 20

The final 50 x 256 weights

Each neuron grabs a different feature.

Page 21

How well can we reconstruct the digit images from the binary feature activations?

[Figure: pairs of data images and reconstructions from the activated binary features, first for new test images from the digit class that the model was trained on, then for images from an unfamiliar digit class (the network tries to see every image as a 2)]

Page 22

Deep Belief Networks

Page 23

Explaining away (Judea Pearl)

• Even if two hidden causes are independent, they can become dependent when we observe an effect that they can both influence.
– If we learn that there was an earthquake, it reduces the probability that the house jumped because of a truck.

[Figure: two hidden causes, “truck hits house” and “earthquake”, each with bias -10 and a weight of +20 to the observed effect “house jumps”, which has bias -20]

Posterior over the two causes, given that the house jumped:
p(1,1) = .0001, p(1,0) = .4999, p(0,1) = .4999, p(0,0) = .0001

Page 24

Why multilayer learning is hard in a sigmoid belief net

• To learn W, we need the posterior distribution in the first hidden layer.
• Problem 1: The posterior is typically intractable because of “explaining away”.
• Problem 2: The posterior depends on the prior created by higher layers as well as on the likelihood.
– So to learn W, we need to know the weights in higher layers, even if we are only approximating the posterior. All the weights interact.
• Problem 3: We need to integrate over all possible configurations of the higher variables to get the prior for the first hidden layer. Yuk!

[Figure: a stack of hidden-variable layers above the data; the weights W between the data and the first hidden layer define the likelihood, and the layers above define the prior]

Page 25

Solution: Complementary Priors

• There is a special type of multi-layer directed model in which it is easy to infer the posterior distribution over the hidden units, because it has complementary priors.
• This special type of directed model is equivalent to an undirected model.
– At first, this equivalence just seems like a neat trick.
– But it leads to a very effective new learning algorithm that allows multilayer directed nets to be learned one layer at a time.
• The new learning algorithm resembles boosting, with each layer being like a weak learner.

Page 26

An example of a complementary prior

• An infinite DAG with replicated weights.
– An ancestral pass of the DAG is exactly equivalent to letting a Restricted Boltzmann Machine settle to equilibrium.
– This infinite DAG defines the same distribution as an RBM.

[Figure: an infinite directed net with layers …, h2, v2, h1, v1, h0, v0, connected by weight matrices that alternate between W and W^T (the same weights replicated at every layer)]

Page 27

Inference in a DAG with replicated weights

• The variables in h0 are conditionally independent given v0.
– Inference is trivial: we just multiply v0 by W^T.
– This is because the model above h0 implements a complementary prior.
• Inference in the DAG is exactly equivalent to letting a Restricted Boltzmann Machine settle to equilibrium starting at the data.

[Figure: the same infinite net with layers …, h2, v2, h1, v1, h0, v0 and replicated weights W and W^T]

Page 28

Divide and conquer multilayer learning

• Re-representing the data: each time the base learner is called, it passes a transformed version of the data to the next learner.
– Can we learn a deep, dense DAG one layer at a time, starting at the bottom, and still guarantee that learning each layer improves the overall model of the training data?
• This seems very unlikely. Surely we need to know the weights in higher layers to learn lower layers?

Page 29

Multilayer contrastive divergence

• Start by learning one hidden layer.
• Then re-present the data as the activities of the hidden units.
– The same learning algorithm can now be applied to the re-presented data.
• Can we prove that each step of this greedy learning improves the log probability of the data under the overall model?
– What is the overall model?
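
A sketch of this greedy, layer-by-layer procedure, reusing the hypothetical cd1_step from the earlier sketch: each trained RBM’s hidden activation probabilities become the “data” for the next RBM (layer sizes, epoch count, and all names are illustrative):

import numpy as np

def train_dbn_greedily(data, layer_sizes, rng, epochs=10):
    """Greedy layer-wise training: stack RBMs, re-presenting data as hidden activities."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    layers, inputs, n_vis = [], data, data.shape[1]
    for n_hid in layer_sizes:
        W = 0.01 * rng.standard_normal((n_vis, n_hid))
        b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
        for _ in range(epochs):
            for v in inputs:                       # one CD-1 update per training case
                W, b_vis, b_hid = cd1_step(v, W, b_vis, b_hid, rng)
        layers.append((W, b_vis, b_hid))
        inputs = sigmoid(b_hid + inputs @ W)       # re-present the data for the next layer
        n_vis = n_hid
    return layers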

Page 30

Learning a deep directed network

• First learn with all the weights tied.
– This is exactly equivalent to learning an RBM.
– Contrastive divergence learning is equivalent to ignoring the small derivatives contributed by the tied weights between deeper layers.

[Figure: the infinite directed net with every weight matrix tied to W, alongside the equivalent RBM on v0 and h0]

Page 31

• Then freeze the first layer of weights in both directions and learn the remaining weights (still tied together).
– This is equivalent to learning another RBM, using the aggregated posterior distribution of h0 as the data.

[Figure: the same infinite net with the first layer of weights frozen in both directions (W and W^T) and the remaining tied weights W being learned on h0 and v1]

Page 32

A simplified version with all hidden layers the same size

• The RBM at the top can be viewed as shorthand for an infinite directed net.
• When learning W1 we can view the model in two quite different ways:
– The model is an RBM composed of the data layer and h1.
– The model is an infinite DAG with tied weights.
• After learning W1 we untie it from the other weight matrices.
• We then learn W2, which is still tied to all the matrices above it.

[Figure: a stack with the data layer and hidden layers h1, h2, h3; W1 (with W1^T for recognition) connects the data and h1, W2 (with W2^T) connects h1 and h2, and W3 forms the top-level RBM between h2 and h3]

Page 33

Why greedy learning works

• Each time we learn a new layer, the inference at the layer below becomes incorrect, but the variational bound on the log prob of the data improves (only true in theory -ml).
• Since the bound starts as an equality, learning a new layer never decreases the log prob of the data, provided we start the learning from the tied weights that implement the complementary prior.
• Now that we have a guarantee, we can loosen the restrictions and still feel confident.
– Allow layers to vary in size.
– Do not start the learning at each layer from the weights in the layer below.

Page 34

Why the hidden configurations should be treated as data when learning the next layer of weights

• After learning the first layer of weights (a negative expected energy plus an entropy term):

\log p(x) = \sum_{h^1} p(h^1 \mid x)\left[\log p(h^1) + \log p(x \mid h^1)\right] + \text{entropy}\big(p(h^1 \mid x)\big)

• If we freeze the generative weights that define the likelihood term and the recognition weights that define the distribution over hidden configurations, we get:

\log p(x) = \sum_{h^1} p(h^1 \mid x)\,\log p(h^1) + \text{const}

• Maximizing the RHS is equivalent to maximizing the log prob of “data” h^1 that occurs with probability p(h^1 \mid x).

Page 35

Fine-tuning with a contrastive version of the “wake-sleep” algorithm

After learning many layers of features, we can fine-tune the features to improve generation.

1. Do a stochastic bottom-up pass.
– Adjust the top-down weights to be good at reconstructing the feature activities in the layer below.
2. Do a few iterations of sampling in the top-level RBM.
– Adjust the weights in the top-level RBM.
3. Do a stochastic top-down pass.
– Adjust the bottom-up weights to be good at reconstructing the feature activities in the layer above.
4. Not required! But helps the recognition rate (-ml).

Page 36

A neural network model of digit recognition

[Figure: a 28 x 28 pixel image feeding into a layer of 500 units, then another 500 units; the top layer of 2000 units is connected to both the second 500-unit layer and 10 label units]

The model learns a joint density for labels and images. To perform recognition we can start with a neutral state of the label units and do one or two iterations of the top-level RBM. Or we can just compute the free energy of the RBM with each of the 10 labels.

The top two layers form a restricted Boltzmann machine whose free-energy landscape models the low-dimensional manifolds of the digits. The valleys have names: one for each digit class.
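
Classification by free energy, as mentioned above, can be sketched for a binary RBM whose visible vector is the concatenation of the penultimate-layer features and a one-hot label (all names are illustrative; the formula used is the standard binary-RBM free energy F(v) = -v·b_vis - sum_j log(1 + exp(b_j + v·W[:, j]))):

import numpy as np

def rbm_free_energy(v, W, b_vis, b_hid):
    """F(v) = -v . b_vis - sum_j log(1 + exp(b_hid_j + v . W[:, j]))."""
    return -np.dot(v, b_vis) - np.sum(np.logaddexp(0.0, b_hid + v @ W))

def classify_by_free_energy(features, W, b_vis, b_hid, num_labels=10):
    """Try each one-hot label appended to the features; the lowest free energy wins."""
    energies = []
    for k in range(num_labels):
        label = np.zeros(num_labels)
        label[k] = 1.0
        v = np.concatenate([features, label])
        energies.append(rbm_free_energy(v, W, b_vis, b_hid))
    return int(np.argmin(energies))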

Page 37

Show the movie of the network generating digits (available at www.cs.toronto.edu/~hinton).

Page 38

Limits of the Generative Model

1. Designed for images where non-binary values can be treated as probabilities.
2. Top-down feedback only in the highest (associative) layer.
3. No systematic way to deal with invariance.
4. Assumes segmentation has already been performed and does not learn to attend to the most informative parts of objects.