
Page 1: Neuron Networks

Neuron Networks
Omri Winner
Dor Bracha

Page 2: Introduction

What is a neural network?
• A collection of interconnected neurons that compute and generate impulses.
• Components of a neural network include neurons, synapses, and activation functions.
• Neural networks modeled mathematically or in software are referred to as artificial neural networks (ANNs).
• Neural networks achieve functionality through learning/training functions.

Page 3: Perceptron

˃ N inputs: x1, ..., xn.
˃ Weights for each input: w1, ..., wn.
˃ A bias input x0 (constant) and an associated weight w0.
˃ The output y is a weighted sum of the inputs:
  y = w0x0 + w1x1 + ... + wnxn
˃ A threshold function, e.g. output 1 if y > 0 and 0 otherwise (the diagram on the next slide uses -1 instead of 0).
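To make the unit concrete, here is a minimal Python sketch of the perceptron described above (the function name and the AND example are our own, not from the slides):

    # Minimal perceptron sketch (illustrative; names are our own).
    def perceptron(x, w, w0):
        """x: list of n inputs; w: list of n weights;
        w0: weight of the constant bias input x0 = 1."""
        y = w0 + sum(wi * xi for wi, xi in zip(w, x))  # weighted sum
        return 1 if y > 0 else 0                       # threshold function

    # Example: weights chosen by hand so the unit computes logical AND.
    print(perceptron([1, 1], [1.0, 1.0], -1.5))  # prints 1
    print(perceptron([1, 0], [1.0, 1.0], -1.5))  # prints 0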

Page 4: Diagram

Diagram: inputs x0, x1, x2, ..., xn with weights w0, w1, w2, ..., wn feed a summation unit computing y = Σ wixi, followed by a threshold that outputs 1 if y > 0 and -1 otherwise.

Page 5: Learning/Training

» The key to neural networks is weights. Weights determine the importance of signals and essentially determine the network's output.

» Training cycles adjust weights to improve the performance of a network. The performance of an ANN is critically dependent on the quality of its training.

» An untrained network is basically useless.

» Different training algorithms lend themselves better to certain network topologies, but some also lend themselves to parallelization. (A sketch of such a training loop follows.)
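As a concrete illustration of how training adjusts weights, here is a hedged Python sketch of the classic perceptron learning rule (the slides do not name a specific algorithm; the learning rate and function names are our assumptions):

    # Perceptron learning rule sketch (an assumed example, not from the slides).
    def train(samples, n, epochs=10, lr=0.1):
        """samples: list of (inputs, target) pairs with targets 0 or 1."""
        w = [0.0] * n   # input weights
        w0 = 0.0        # bias weight (constant input x0 = 1)
        for _ in range(epochs):
            for x, target in samples:
                y = 1 if w0 + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                err = target - y                  # 0 when the prediction is right
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                w0 += lr * err
        return w, w0

    # Learn logical OR from its truth table.
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    w, w0 = train(data, n=2)

Each update nudges the weights only when the prediction is wrong, so correctly classified examples leave the network untouched.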

Page 6: Network Topologies

» Network topologies describe how neurons map to each other.
» Many artificial neural networks (ANNs) use network topologies that are highly parallelizable.

Page 7: Network Structure

Diagram: a feedforward network with inputs x1, x2, ..., x960, a layer of 3 hidden units, and an output layer.

Page 8: Why Parallel Computing?

» Training and evaluation of each node in large networks can take an incredibly long time.

» However, neural networks are "embarrassingly parallel":
  ˃ Computations for each node are generally independent of all other nodes.

Page 9: Possible Levels of Parallelization

Typical structure of an ANN (a data-parallel sketch follows this list):

» For each training session
  ˃ For each training example in the session
    + For each layer (forward and backward)
      – For each neuron in the layer
        » For all weights of the neuron
          > For all bits of the weight value
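As one way to exploit the training-example level in the hierarchy above, here is a hedged sketch using Python's multiprocessing (our own illustration; the slides do not prescribe an implementation, and the gradient helper is hypothetical):

    # Parallelism at the "for each training example" level (assumed illustration).
    from multiprocessing import Pool

    def example_gradient(args):
        """Compute one example's weight update; independent of all other examples."""
        w, x, target = args
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        err = target - y
        return [err * xi for xi in x]   # per-example update direction

    if __name__ == "__main__":
        w = [0.0, 0.0]
        batch = [([0, 1], 1), ([1, 0], 1), ([0, 0], 0), ([1, 1], 1)]
        with Pool() as pool:            # one worker per CPU core by default
            grads = pool.map(example_gradient, [(w, x, t) for x, t in batch])
        lr = 0.1                        # combine independent results into one update
        w = [wi + lr * sum(g[i] for g in grads) for i, wi in enumerate(w)]

Because each example's computation is independent, the results can be combined after the parallel map, which is exactly the "embarrassingly parallel" property noted on the previous slide.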

Page 10: Most Commonly Used Algorithms

● Multilayer feedforward networks: a layered network in which each layer of neurons outputs only to the next layer (a forward-pass sketch follows this list).

● Feedback networks: a single layer of neurons whose outputs loop back to the inputs.

● Self-organizing maps (SOM): form mappings from a high-dimensional space to a lower-dimensional space (one layer of neurons).

● Sparse distributed memory (SDM): a special form of two-layer feedforward network that works as an associative memory.
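For the multilayer feedforward case, here is a minimal forward-pass sketch in Python (the layer sizes, weights, and sigmoid activation are our assumptions):

    import math

    # Forward pass through one layer of a multilayer feedforward network
    # (illustrative sketch; sizes and activation are our assumptions).
    def sigmoid(y):
        return 1.0 / (1.0 + math.exp(-y))

    def layer_forward(inputs, weights, biases):
        """Each neuron sees all inputs and feeds only the next layer."""
        return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
                for ws, b in zip(weights, biases)]

    # A 2-input, 2-hidden-unit, 1-output network with arbitrary weights.
    hidden = layer_forward([0.5, -0.2],
                           weights=[[0.1, 0.4], [-0.3, 0.2]], biases=[0.0, 0.1])
    output = layer_forward(hidden, weights=[[0.7, -0.5]], biases=[0.2])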

Page 11: Self-Organizing Map (SOM)

A SOM produces a low-dimensional representation of the training samples. It uses a neighborhood function to preserve the topological properties of the input space, which makes SOMs useful for visualization. (A hedged sketch of one update step follows.)
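Here is a hedged sketch of a single SOM update step (the 1-D map layout, learning rate, and Gaussian neighborhood function are our assumptions, not from the slides):

    import math

    # One SOM update step: pull the best-matching unit (BMU) and its
    # neighbors toward the sample (assumed illustration).
    def som_step(nodes, sample, lr=0.5, radius=1.0):
        """nodes: list of weight vectors laid out on a 1-D map."""
        dists = [sum((w - s) ** 2 for w, s in zip(node, sample))
                 for node in nodes]
        bmu = dists.index(min(dists))   # best-matching unit
        for i, node in enumerate(nodes):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # neighborhood
            nodes[i] = [w + lr * h * (s - w) for w, s in zip(node, sample)]

    nodes = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
    som_step(nodes, [0.9, 0.8])

The neighborhood factor h decays with map distance from the BMU, which is what preserves the topological properties of the input space.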

Page 12: Training Time

Chart: training time of a 25-node SOM as a function of the number of processors used.

Page 13: Speedup and Efficiency

Charts: speedup and efficiency versus the number of processors (1 to 5) on a heterogeneous computer cluster (2 processors).
