Artificial Neural Networks
Notes based on Nilsson and Mitchell's Machine Learning

Page 1

Artificial Neural Networks

Notes based on Nilsson and Mitchell's Machine Learning

Page 2

Outline

• Perceptrons (threshold logic units, TLUs)
• Gradient descent
• Multi-layer networks
• Backpropagation

Page 3

Biological Neural Systems

• Neuron switching time: > 10^-3 seconds
• Number of neurons in the human brain: ~10^10
• Connections (synapses) per neuron: ~10^4 to 10^5
• Face recognition time: ~0.1 seconds
• High degree of parallel computation
• Distributed representations

Page 4

Properties of Artificial Neural Nets (ANNs)

Many simple neuron-like threshold switching units

Many weighted interconnections among units

Highly parallel, distributed processing

Learning by adaptation of the connection weights

Page 5

Appropriate Problem Domains for Neural Network Learning

• Input is high-dimensional, discrete or real-valued (e.g. raw sensor input)
• Output is discrete or real-valued
• Output is a vector of values
• Form of the target function is unknown
• Humans do not need to interpret the results (black-box model)

Page 6

General Idea

• A network of neurons. Each neuron is characterized by:

• number of input/output wires
• weights on each wire
• threshold value

• These values are not explicitly programmed, but they evolve through a training process.

• During the training phase, labeled samples are presented. If the network classifies a sample correctly, the weights are left unchanged; otherwise, the weights are adjusted.

• The backpropagation algorithm is used to adjust the weights.

Page 7

ALVINN (Carnegie Mellon Univ)

Automated driving at 70 mph on a public highway

Network architecture: a 30x32-pixel camera image as input, 4 hidden units (each receiving 30x32 weights from the input), and 30 output units for steering.

Page 8

Another Example

NETtalk – Program that learns to pronounce English text. (Sejnowski and Rosenberg 1987).

- A difficult task using conventional programming models.
- Rule-based approaches are too complex, since pronunciations are very irregular.
- NETtalk takes a sentence as input and produces a sequence of phonemes and an associated stress for each letter.

Page 9

NETtalk

A phoneme is a basic unit of sound in a language.

Stress – relative loudness of that sound.

Because the pronunciation of a single letter depends upon its context and the letters around it, NETtalk is given a seven character window.

Each position is encoded by one of 29 symbols (26 letters and 3 punctuation symbols).

The letter in each position activates the corresponding unit.

Page 10

NETtalk

The output units encode phonemes using 21 different features of human articulation.

The remaining five units encode stress and syllable boundaries.

NETtalk also has a middle layer (hidden layer) that has 80 hidden units and nearly 18000 connections (edges).

NETtalk is trained by giving it a 7-character window so that it learns to pronounce the middle character.

It learns by comparing the computed pronunciation to the correct pronunciation.
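As a rough sketch of the input encoding described above (the exact symbol set and its ordering are assumptions here, not NETtalk's actual code), a 7-character window can be turned into a 7 x 29 one-hot vector:

```python
# Sketch: one-hot encoding of a 7-character window, as described above.
# The 29-symbol alphabet (26 letters + 3 punctuation marks) is an assumption.
ALPHABET = list("abcdefghijklmnopqrstuvwxyz") + [" ", ",", "."]

def encode_window(window):
    """Encode a 7-character window as a 7 * 29 = 203-dimensional 0/1 vector."""
    assert len(window) == 7
    vec = []
    for ch in window.lower():
        one_hot = [0] * len(ALPHABET)
        one_hot[ALPHABET.index(ch)] = 1
        vec.extend(one_hot)
    return vec

# Example: a window centred on the "c" in "a cat"
x = encode_window("  a cat")
print(len(x), sum(x))  # 203 inputs, exactly 7 of them active
```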

Page 11

Handwritten character recognition

This is another area in which neural networks have been successful.

In fact, all the successful programs have a neural network component.

Page 12

Threshold Logic Unit (TLU)

Inputs x_1, ..., x_n with weights w_1, ..., w_n and a threshold θ.

Activation: a = Σ_{i=1}^n w_i x_i

Output: y = 1 if a ≥ θ, and y = 0 if a < θ
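A minimal sketch of a TLU in Python (the function and variable names are illustrative choices, not from the notes):

```python
def tlu(x, w, theta):
    """Threshold logic unit: y = 1 if the activation a = sum_i w_i*x_i reaches theta."""
    a = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if a >= theta else 0
```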

Page 13

Page 14

Activation Functions

Four common activation functions (plots of output y against activation a): threshold, linear, piece-wise linear, and sigmoid.

Page 15

Decision Surface of a TLU

In the (x1, x2) plane, the decision line w1 x1 + w2 x2 = θ separates the input points classified as 1 from those classified as 0.

Page 16

Scalar Products & Projections

The figure shows three cases for vectors w and v: w • v > 0, w • v = 0, and w • v < 0.

In general, w • v = |w| |v| cos α, where α is the angle between w and v.

Page 17

Geometric Interpretation

In the (x1, x2) plane, the relation w • x = θ defines the decision line, which is perpendicular to the weight vector w. On one side of the line w • x ≥ θ and y = 1; on the other side y = 0. The projection x_w of any point on the decision line onto w has length |x_w| = θ / |w|.

Page 18

Geometric Interpretation

In n dimensions the relation w • x = θ defines an (n-1)-dimensional hyperplane, which is perpendicular to the weight vector w.

On one side of the hyperplane (w • x > θ) all patterns are classified by the TLU as "1", while those that get classified as "0" lie on the other side of the hyperplane.

If the patterns cannot be separated by a hyperplane, then they cannot be correctly classified with a TLU.

Page 19

Linear Separability

Logical AND is linearly separable; it is realized by w1 = 1, w2 = 1, θ = 1.5:

x1  x2  a   y
0   0   0   0
0   1   1   0
1   0   1   0
1   1   2   1

Logical XOR is not linearly separable: no values of w1, w2, θ realize it with a single TLU.

x1  x2  y
0   0   0
0   1   1
1   0   1
1   1   0
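A quick check of these tables using the TLU sketch from above; the brute-force grid search over candidate XOR weights is purely illustrative:

```python
def tlu(x, w, theta):
    a = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if a >= theta else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: w1 = 1, w2 = 1, theta = 1.5 reproduces the table above.
print([tlu(x, (1, 1), 1.5) for x in inputs])   # [0, 0, 0, 1]

# XOR: no single TLU matches the targets 0, 1, 1, 0;
# a coarse grid search over weights and thresholds finds nothing.
target = [0, 1, 1, 0]
grid = [i / 2 for i in range(-4, 5)]           # -2.0, -1.5, ..., 2.0
found = any(
    [tlu(x, (w1, w2), th) for x in inputs] == target
    for w1 in grid for w2 in grid for th in grid
)
print(found)  # False
```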

Page 20

Threshold as Weight

Add an extra input x_{n+1} = -1 with weight w_{n+1} = θ. Then

a = Σ_{i=1}^{n+1} w_i x_i

and the output becomes y = 1 if a ≥ 0, and y = 0 if a < 0.

Page 21

Geometric Interpretation

In the (x1, x2) plane, the relation w • x = θ defines the decision line, perpendicular to the weight vector w; on one side y = 1, on the other y = 0.

Page 22

Training ANNs

Training set S of examples {(x, t)}, where x is an input vector and t the desired target vector.
Example (logical AND): S = {((0,0), 0), ((0,1), 0), ((1,0), 0), ((1,1), 1)}

Iterative process: present a training example x, compute the network output y, compare y with the target t, and adjust the weights and thresholds.

Learning rule: specifies how to change the weights w and thresholds of the network as a function of the inputs x, output y, and target t.

Page 23

Adjusting the Weight Vector

Case 1: target t = 1 but output y = 0 (the angle between w and x is greater than 90°). Update w' = w + x: move w in the direction of x.

Case 2: target t = 0 but output y = 1 (the angle between w and x is less than 90°). Update w' = w - x: move w away from the direction of x.

Page 24

Perceptron Learning Rule

w' = w + η (t - y) x

or, in components, w'_i = w_i + Δw_i = w_i + η (t - y) x_i for i = 1, ..., n+1, with w_{n+1} = θ and x_{n+1} = -1.

The parameter η is called the learning rate. It determines the magnitude of the weight updates Δw_i.
If the output is correct (t = y), the weights are not changed (Δw_i = 0).
If the output is incorrect (t ≠ y), the weights w_i are changed so that the output of the TLU with the new weights w'_i moves closer to the target; the weight vector moves closer to or further from the input x accordingly.

Page 25

Perceptron Training Algorithm

repeat
  for each training vector pair (x, t)
    evaluate the output y when x is the input
    if y ≠ t then
      form a new weight vector w' according to w' = w + η (t - y) x
    else
      do nothing
    end if
  end for
until y = t for all training vector pairs
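A compact Python sketch of this loop (variable names, the augmented -1 input for the threshold, and the epoch cap are choices made here, not part of the notes):

```python
def train_perceptron(samples, eta=0.1, max_epochs=1000):
    """Perceptron training: samples is a list of (x, t) pairs with t in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * (n + 1)                 # last weight plays the role of the threshold
    for _ in range(max_epochs):
        errors = 0
        for x, t in samples:
            xa = list(x) + [-1.0]       # augmented input: x_{n+1} = -1
            a = sum(wi * xi for wi, xi in zip(w, xa))
            y = 1 if a >= 0 else 0
            if y != t:                  # w' = w + eta * (t - y) * x
                w = [wi + eta * (t - y) * xi for wi, xi in zip(w, xa)]
                errors += 1
        if errors == 0:                 # y = t for all training pairs
            return w
    return w

# Logical AND from the earlier slide
print(train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
```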

Page 26

Perceptron Convergence Theorem

The algorithm converges to the correct classification if the training data is linearly separable and η is sufficiently small.

If two classes of vectors X1 and X2 are linearly separable, the application of the perceptron training algorithm will eventually result in a weight vector w0 such that w0 defines a TLU whose decision hyperplane separates X1 and X2 (Rosenblatt 1962).

The solution w0 is not unique, since if w0 • x = 0 defines a separating hyperplane, then so does w'0 = k w0 for any k > 0.

Page 27

Example

x1   x2   output
1.0  1.0   1
9.4  6.4  -1
2.5  2.1   1
8.0  7.7  -1
0.5  2.2   1
7.9  8.4  -1
7.0  7.0  -1
2.8  0.8   1
1.2  3.0   1
7.8  6.1  -1

Initial weights: (0.75, -0.5, -0.6)
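This example can be fed to a perceptron loop directly; a sketch using the ±1 output convention of the table and the given initial weights (the learning rate 0.2 and the -1 bias input are choices made here, not stated in the notes):

```python
# Perceptron loop adapted to +/-1 targets, starting from the given weights.
data = [((1.0, 1.0), 1), ((9.4, 6.4), -1), ((2.5, 2.1), 1), ((8.0, 7.7), -1),
        ((0.5, 2.2), 1), ((7.9, 8.4), -1), ((7.0, 7.0), -1), ((2.8, 0.8), 1),
        ((1.2, 3.0), 1), ((7.8, 6.1), -1)]

w = [0.75, -0.5, -0.6]                  # third weight pairs with a constant -1 input
eta = 0.2
changed = True
while changed:                          # data is linearly separable, so this terminates
    changed = False
    for x, t in data:
        xa = list(x) + [-1.0]
        y = 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0 else -1
        if y != t:
            w = [wi + eta * (t - y) * xi for wi, xi in zip(w, xa)]
            changed = True
print(w)                                # a separating weight vector
```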

Page 28

Linear Unit

Inputs x_1, ..., x_n with weights w_1, ..., w_n.

Activation: a = Σ_{i=1}^n w_i x_i

Output: y = a = Σ_{i=1}^n w_i x_i (the identity activation; no threshold)

Page 29

Gradient Descent Learning Rule

Consider a linear unit without threshold and with continuous output o (not just -1, 1):

o = w0 + w1 x1 + ... + wn xn

Train the w_i so that they minimize the squared error

ε = Σ_{a ∈ D} (f_a - d_a)^2

where D is the set of training examples, f_a is the actual output, and d_a is the desired output.

Page 30

Gradient Descent rule:

We want to choose the weights w_i so that ε is minimized. Recall that

ε = Σ_{a ∈ D} (f_a - d_a)^2

Since our goal is to work with this error function one input at a time, consider a fixed input x in D and define

ε_x = (f_x - d_x)^2

We will drop the subscript and just write ε = (f - d)^2.

Our goal is to find the weights that will minimize this expression.

Page 31

∂ε/∂W = [∂ε/∂w_1, ..., ∂ε/∂w_{n+1}]

Since s, the input to the threshold function, is given by s = X • W, we have:

∂ε/∂W = (∂ε/∂s) (∂s/∂W). However, ∂s/∂W = X. Thus,

∂ε/∂W = (∂ε/∂s) X

Recall from the previous slide that ε = (f - d)^2.

So we have ∂ε/∂s = 2 (f - d) ∂f/∂s (note: d is a constant).

This gives the expression:

∂ε/∂W = 2 (f - d) (∂f/∂s) X

A problem arises when dealing with a TLU, namely that f is not a continuous function of s.

Page 32

For a fixed input x, suppose the desired output is d and the actual output is f. Then the above expression becomes: ∂ε/∂W = -2 (d - f) x

This is what is known as the Widrow-Hoff procedure, with the factor 2 absorbed into a constant c:

W ← W + c (d - f) X

The key idea is to move the weight vector in the direction of the negative gradient of the error.

When will this converge to the correct weights?
• We are assuming that the data is linearly separable.
• We are also assuming that the desired output from the linear threshold gate is available for the training set.
• Under these conditions, the perceptron convergence theorem shows that the above procedure will converge to the correct weights after a finite number of iterations.
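A minimal sketch of the Widrow-Hoff update for a linear unit (the toy dataset, learning rate c, and epoch count are placeholders chosen here):

```python
def widrow_hoff(samples, c=0.05, epochs=1000):
    """Gradient descent on the squared error of a linear unit, one sample at a time."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, d in samples:
            f = sum(wi * xi for wi, xi in zip(w, x))              # actual output
            w = [wi + c * (d - f) * xi for wi, xi in zip(w, x)]   # W <- W + c (d - f) X
    return w

# Toy usage: fit d = 2*x1 - x2 from a few consistent data points.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0), ((2.0, 1.0), 3.0)]
print(widrow_hoff(data))  # should approach [2.0, -1.0]
```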

Page 33

Neuron with Sigmoid-Function

Inputs x_1, ..., x_n with weights w_1, ..., w_n.

Activation: a = Σ_{i=1}^n w_i x_i

Output: y = σ(a) = 1 / (1 + e^-a)

Page 34

Sigmoid Unit

Inputs x_0 = -1, x_1, ..., x_n with weights w_0, w_1, ..., w_n (w_0 plays the role of the threshold).

Activation: a = Σ_{i=0}^n w_i x_i

Output: y = σ(a) = 1 / (1 + e^-a)

σ(x) is the sigmoid function 1 / (1 + e^-x), with derivative dσ(x)/dx = σ(x) (1 - σ(x)).

Next we derive gradient descent rules to train such units.

Page 35

Sigmoid function

f(s) = 1 / (1 + e^-s)

Thus ∂f/∂s = f (1 - f).
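A quick numerical check of this identity (the point s and finite-difference step h are arbitrary choices for illustration):

```python
import math

def f(s):
    return 1.0 / (1.0 + math.exp(-s))

s, h = 0.3, 1e-6
numeric = (f(s + h) - f(s - h)) / (2 * h)   # finite-difference derivative
analytic = f(s) * (1 - f(s))                # f' = f (1 - f)
print(numeric, analytic)                    # the two values agree closely
```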

Page 36

Gradient Descent Rule for Sigmoid Output Function

Error for training example p: E_p[w_1, ..., w_n] = ½ (t_p - y_p)^2

∂E_p/∂w_i = ∂/∂w_i ½ (t_p - y_p)^2
          = ∂/∂w_i ½ (t_p - σ(Σ_i w_i x_i^p))^2
          = -(t_p - y_p) σ'(Σ_i w_i x_i^p) x_i^p

For y = σ(a) = 1 / (1 + e^-a), the derivative is σ'(a) = e^-a / (1 + e^-a)^2 = σ(a) (1 - σ(a)).

Weight update: w'_i = w_i + Δw_i = w_i + η y (1 - y) (t_p - y_p) x_i^p

Page 37

Presentation of Training Examples

Presenting all training examples once to the ANN is called an epoch.

In incremental (stochastic) gradient descent, training examples can be presented in:
• fixed order (1, 2, 3, ..., M)
• randomly permuted order (5, 2, 7, ..., 3)
• completely random order (4, 1, 7, 1, 5, 4, ...), with repetitions allowed arbitrarily

Page 38

Capabilities of Threshold Neurons

The threshold neuron can realize any linearly separable function from R^n to {0, 1}. Although we only looked at two-dimensional input, our findings apply to any dimensionality n. For example, for n = 3, our neuron can realize any function that divides the three-dimensional input space along a two-dimensional plane.

Page 39

Capabilities of Threshold Neurons

What do we do if we need a more complex function? We can combine multiple artificial neurons to form networks with increased capabilities. For example, we can build a two-layer network with any number of neurons in the first layer giving input to a single neuron in the second layer. The neuron in the second layer could, for example, implement an AND function.

Page 40

Capabilities of Threshold Neurons

What kind of function can such a network realize?

[Figure: two-layer network of threshold neurons; the first-layer outputs o_1, o_2, ..., o_i feed a single second-layer neuron.]

Page 41

Capabilities of Threshold Neurons

Assume that the dotted lines in the diagram represent the input-dividing lines implemented by the neurons in the first layer (axes: 1st and 2nd components of the input).

Then, for example, the second-layer neuron could output 1 if the input is within a polygon, and 0 otherwise.

Page 42

Capabilities of Threshold Neurons

However, we may still want to implement functions that are more complex than that. An obvious idea is to extend our network even further. Let us build a network that has three layers, with arbitrary numbers of neurons in the first and second layers and one neuron in the third layer. The first and second layers are completely connected, that is, each neuron in the first layer sends its output to every neuron in the second layer.

Page 43

Capabilities of Threshold Neurons

What type of function can a three-layer network realize?

[Figure: three-layer network of threshold neurons with outputs o_1, o_2, ..., o_i.]

Page 44

Capabilities of Threshold Neurons

Assume that the polygons in the diagram indicate the input regions for which each of the second-layer neurons yields output 1 (axes: 1st and 2nd components of the input).

Then, for example, the third-layer neuron could output 1 if the input is within any of the polygons, and 0 otherwise.

Page 45

Capabilities of Threshold Neurons

The more neurons there are in the first layer, the more vertices the polygons can have. With a sufficient number of first-layer neurons, the polygons can approximate any given shape. The more neurons there are in the second layer, the more of these polygons can be combined to form the output function of the network. With a sufficient number of neurons and appropriate weight vectors w_i, a three-layer network of threshold neurons can realize any function from R^n to {0, 1}.

Page 46

Terminology

Usually, we draw neural networks in such a way that the input enters at the bottom and the output is generated at the top. Arrows indicate the direction of data flow. The first layer, termed the input layer, just contains the input vector and does not perform any computations. The second layer, termed the hidden layer, receives input from the input layer and sends its output to the output layer. After applying their activation function, the neurons in the output layer contain the output vector.

Page 47

Terminology

Example: network function f: R^3 → {0, 1}^2.

[Figure: the input vector feeds the input layer, which feeds the hidden layer, which feeds the output layer producing the output vector.]

Page 48

Multi-Layer Networks

input layer

hidden layer

output layer

Page 49

Training Rule for Weights to the Output Layer

For output unit j with target t_j^p, output y_j^p, input x_i^p from the hidden layer, and weight w_ij:

E_p[w_ij] = ½ Σ_j (t_j^p - y_j^p)^2

∂E_p/∂w_ij = ∂/∂w_ij ½ Σ_j (t_j^p - y_j^p)^2
           = ... = -y_j^p (1 - y_j^p) (t_j^p - y_j^p) x_i^p

Δw_ij = η y_j^p (1 - y_j^p) (t_j^p - y_j^p) x_i^p = η δ_j^p x_i^p

with δ_j^p := y_j^p (1 - y_j^p) (t_j^p - y_j^p)

Page 50

Training-Rule for Weights to the Hidden Layer

Credit assignment problem: there are no target values t for the hidden-layer units. What is the error for hidden units?

Propagate the output-layer errors δ_j back through the weights w_jk:

δ_k = Σ_j w_jk δ_j

Δw_ki = η x_k^p (1 - x_k^p) δ_k^p x_i^p

(Notation: x_i is an input, w_ki the weight from input i to hidden unit k, x_k the output of hidden unit k, w_jk the weight from hidden unit k to output unit j, and y_j the output of unit j.)

Page 51

Training-Rule for Weights to the Hidden Layer

E_p[w_ki] = ½ Σ_j (t_j^p - y_j^p)^2

∂E_p/∂w_ki = ∂/∂w_ki ½ Σ_j (t_j^p - y_j^p)^2
           = ∂/∂w_ki ½ Σ_j (t_j^p - σ(Σ_k w_jk x_k^p))^2
           = ∂/∂w_ki ½ Σ_j (t_j^p - σ(Σ_k w_jk σ(Σ_i w_ki x_i^p)))^2
           = -Σ_j (t_j^p - y_j^p) σ'(a_j) w_jk σ'(a_k) x_i^p
           = -Σ_j δ_j w_jk σ'(a_k) x_i^p
           = -Σ_j δ_j w_jk x_k (1 - x_k) x_i^p

Δw_ki = η δ_k x_i^p

with δ_k = Σ_j δ_j w_jk x_k (1 - x_k)

Page 52

Backpropagation

Forward step: propagate activation from the input layer to the output layer.

Backward step: propagate errors from the output layer back to the hidden layer.

Page 53

Backpropagation Algorithm

Initialize the weights w_ij with small random values
repeat
  for each training pair {(x_1, ..., x_n)^p, (t_1, ..., t_m)^p} do
    present (x_1, ..., x_n)^p to the network and compute the outputs y_j (forward step)
    compute the errors δ_j in the output layer and propagate them to the hidden layer (backward step)
    update the weights in both layers according to Δw_ki = η δ_k x_i
  end for
until the overall error E becomes acceptably low

Page 54

Backpropagation Algorithm

Initialize each w_i,j to some small random value
Until the termination condition is met, do
  For each training example <(x_1, ..., x_n), t> do
    Input the instance (x_1, ..., x_n) to the network and compute the network outputs y_k
    For each output unit k:
      δ_k = y_k (1 - y_k) (t_k - y_k)
    For each hidden unit h:
      δ_h = y_h (1 - y_h) Σ_k w_h,k δ_k
    For each network weight w_i,j do
      w_i,j = w_i,j + Δw_i,j, where Δw_i,j = η δ_j x_i,j
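A self-contained sketch of this algorithm for one hidden layer of sigmoid units (the network sizes, learning rate, weight initialization, and the XOR training set are illustrative choices, not taken from the notes):

```python
import math, random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_backprop(samples, n_in, n_hidden, n_out, eta=0.5, epochs=10000):
    """Stochastic backpropagation for a network with one hidden layer of sigmoid units."""
    rnd = random.Random(0)
    # w_hid[k][i]: weight from input i to hidden unit k; w_out[j][k]: hidden k to output j.
    # The extra last weight in each row pairs with a constant -1 input (the threshold).
    w_hid = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w_out = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)] for _ in range(n_out)]
    for _ in range(epochs):
        for x, t in samples:
            xa = list(x) + [-1.0]
            # Forward step: propagate activation from input to output layer.
            h = [sigmoid(sum(w * xi for w, xi in zip(wk, xa))) for wk in w_hid]
            ha = h + [-1.0]
            y = [sigmoid(sum(w * hi for w, hi in zip(wj, ha))) for wj in w_out]
            # Backward step: output-layer deltas, then hidden-layer deltas.
            d_out = [yj * (1 - yj) * (tj - yj) for yj, tj in zip(y, t)]
            d_hid = [hk * (1 - hk) * sum(w_out[j][k] * d_out[j] for j in range(n_out))
                     for k, hk in enumerate(h)]
            # Weight updates: delta_w = eta * delta * input.
            for j in range(n_out):
                w_out[j] = [w + eta * d_out[j] * hi for w, hi in zip(w_out[j], ha)]
            for k in range(n_hidden):
                w_hid[k] = [w + eta * d_hid[k] * xi for w, xi in zip(w_hid[k], xa)]
    return w_hid, w_out

# Illustrative usage: train on the XOR patterns, which a single TLU cannot represent.
xor = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
hidden_weights, output_weights = train_backprop(xor, n_in=2, n_hidden=2, n_out=1)
```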

Page 55

Backpropagation

• Gradient descent over the entire network weight vector
• Easily generalized to arbitrary directed graphs
• Will find a local, not necessarily global, error minimum; in practice it often works well (can be invoked multiple times with different initial weights)
• Often includes a weight momentum term: Δw_i,j(n) = η δ_j x_i,j + α Δw_i,j(n-1)
• Minimizes error over the training examples; will it generalize well to unseen instances (over-fitting)?
• Training can be slow: typically 1000-10000 iterations (using the network after training is fast)

Page 56

Backpropagation

• Easily generalized to arbitrary directed graphs without clear layers.
• BP finds a local, not necessarily global, error minimum; in practice it often works well (can be invoked multiple times with different initial weights).
• Minimizes error over the training examples. How well does it generalize to unseen instances?
• Training can be slow: typically 1000-10000 iterations (more efficient optimization methods than plain gradient descent can be used). Using the network after training is fast.

Page 57

Convergence of Backprop

Gradient descent converges to some local minimum, perhaps not the global minimum. Remedies:
• Add a momentum term: Δw_ki(n) = η δ_k(n) x_i(n) + α Δw_ki(n-1), with α ∈ [0, 1]
• Stochastic gradient descent
• Train multiple nets with different initial weights

Nature of convergence:
• Initialize weights near zero
• Therefore, initial networks are near-linear
• Increasingly non-linear functions become possible as training progresses
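A tiny sketch of the momentum update for one weight vector (the value α = 0.9 is a common choice assumed here, not taken from the notes):

```python
def momentum_step(w, grad_step, prev_delta, alpha=0.9):
    """delta_w(n) = eta*delta*x (passed in as grad_step) + alpha * delta_w(n-1)."""
    delta = [g + alpha * d for g, d in zip(grad_step, prev_delta)]
    return [wi + di for wi, di in zip(w, delta)], delta
```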

Page 58

Expressive Capabilities of ANNs

Boolean functions:
• Every Boolean function can be represented by a network with a single hidden layer
• But it might require a number of hidden units that is exponential in the number of inputs

Continuous functions:
• Every bounded continuous function can be approximated with arbitrarily small error by a network with one hidden layer [Cybenko 1989, Hornik 1989]
• Any function can be approximated to arbitrary accuracy by a network with two hidden layers [Cybenko 1988]