
Page 1: neural Logic

SEMINAR ON:

ANALOG VLSI NEURAL NETWORK

Presented by: Monica Swain -1359017 Manisha Charathi -1359016

Page 2:

CONTENTS
Introduction
Basic neuron design
Artificial neuron model
Algorithms associated with ANN
Implementing ANN with VLSI
Neuron circuit description
Operation of VLSI artificial NN
Advantages and disadvantages
Applications
Conclusion
References

Page 3:

INTRODUCTION
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information.

The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example.

An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.

Page 4:

BASIC NEURON DESIGN

Page 5:
Page 6:

ARTIFICIAL NEURAL NETWORK
A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships.

[Figure: artificial neuron model, showing weights]

Page 7:

COMPARISON BETWEEN BIOLOGICAL AND ARTIFICIAL

Biological -> Artificial
Soma -> Neuron
Dendrite -> Input
Axon -> Output
Synapse -> Weight

Page 8:

TYPES OF ARTIFICIAL NEURAL NETWORK

1. Single-layer neural network:

Page 9:

TYPES OF ARTIFICIAL NEURAL NETWORK
2. Multi-layer perceptron (MLP)
It consists of three layers:
-> Input layer
-> Hidden layer
-> Output layer

Page 10:

TYPES OF ARTIFICIAL NEURAL NETWORK
[Figure: MLP showing the hidden layer]

Page 11:

ALGORITHMS
1. FEED-FORWARD PROPAGATION

Information flow is unidirectional: data is presented at the input and passed on to the hidden layer. Information is distributed and processing is parallel. No cycle or loop is present.

[Figure: feedforward network, inputs to outputs]
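The feed-forward flow described above can be sketched as follows; the layer sizes and weight values are illustrative, not taken from the slides:

```python
import math

# Sketch of a feed-forward pass through a three-layer MLP
# (input -> hidden -> output). Weights and sizes are illustrative.

def sigmoid(x):
    # Logistic activation: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Hidden layer: each node takes a weighted sum of the inputs.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    # Output layer: weighted sum of the hidden activations.
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
            for row in w_output]

# Two inputs, two hidden nodes, one output; no cycles or loops.
y = forward([1.0, 0.5],
            w_hidden=[[0.2, -0.4], [0.7, 0.1]],
            w_output=[[0.5, -0.3]])
```

Information only ever moves forward here, which is exactly the "no cycle or loop" property of the feed-forward algorithm.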

Page 12:

ALGORITHMS
2. BACK PROPAGATION ALGORITHM
Back propagation consists of the repeated application of the following two passes:
-> Forward pass (network activation)
-> Backward pass (error propagation)
Back propagation adjusts the weights of the NN in order to minimize the network's total mean squared error.

[Figure: forward pass (network activation) and backward pass (back propagation)]

Page 13:

ALGORITHM
Consider a network of three layers. Let us use i to represent nodes in the input layer, j to represent nodes in the hidden layer, and k to represent nodes in the output layer.

wij refers to the weight of the connection between a node in the input layer and a node in the hidden layer.

The following equation is used to derive the output value Yj of node j:

[Figure: input i feeding node j through weight wij; node j feeding node k]

Yj = 1 / (1 + e^(-Xj))
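A minimal rendering of that output equation, taking Xj as the weighted sum of node j's inputs (the weights and inputs below are illustrative):

```python
import math

def node_output(xj):
    # Yj = 1 / (1 + e^(-Xj))
    return 1.0 / (1.0 + math.exp(-xj))

# Xj as the weighted sum of node j's inputs (illustrative values).
xj = sum(w * x for w, x in zip([0.4, -0.2], [1.0, 0.5]))
yj = node_output(xj)
```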

Page 14:

ALGORITHM
The error of output neuron k after the activation of the network on the n-th training example (x(n), d(n)) is:

ek(n) = dk(n) - yk(n)

The network error is the sum of the squared errors of the output neurons:

E(n) = Σk ek(n)^2

The total mean squared error is the average of the network errors over the N training examples:

E_AV = (1/N) Σ(n=1..N) E(n)
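The three error definitions translate directly into code; the desired/actual values used below are illustrative:

```python
def network_error(desired, actual):
    # E(n) = sum over output neurons k of ek(n)^2,
    # with ek(n) = dk(n) - yk(n).
    return sum((d - y) ** 2 for d, y in zip(desired, actual))

def total_mean_squared_error(desired_seq, actual_seq):
    # E_AV = (1/N) * sum over the N training examples of E(n).
    errors = [network_error(d, y) for d, y in zip(desired_seq, actual_seq)]
    return sum(errors) / len(errors)
```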

Page 15:

ALGORITHM
The Backprop weight update rule is based on the gradient descent method:
- It takes a step in the direction yielding the maximum decrease of the network error E.
- This direction is the opposite of the gradient of E.

wij = wij + Δwij

Page 16:

ALGORITHM

Δwij = -η ∂E/∂wij
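A step in the direction opposite the gradient of E, i.e. Δw = -η ∂E/∂w with learning rate η, can be sketched with a numerical gradient; the error function and all values here are illustrative:

```python
# One gradient-descent step on a single weight, estimating dE/dw
# numerically. The error function E and the learning rate are illustrative.
def gradient_step(E, w, eta=0.1, h=1e-6):
    dE_dw = (E(w + h) - E(w - h)) / (2 * h)   # central-difference gradient
    return w + (-eta * dE_dw)                 # dw = -eta * dE/dw

# Example: E(w) = (w - 2)^2 is minimized at w = 2.
w = 0.0
for _ in range(100):
    w = gradient_step(lambda v: (v - 2.0) ** 2, w)
```

Repeating the step drives the weight toward the value that minimizes E, which is the whole point of the backprop update rule.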

Page 17:

VLSI NEURAL SYSTEM CHARACTERISTICS
The main characteristics of the analog neural network chip are:

reduced dimensions and power consumption;
high parallelism of data processing;
high speed of computation, compatible with the precision required;
high error tolerance.

Page 18:

VLSI NEURAL CIRCUIT AS A WHOLE
The circuit consists of:
Synapse
Summing element
Activation function

Page 19:

SYNAPSE DESIGN
Neural networks learn by modifying weights (synapses).

REQUIREMENTS: To get a storable synaptic weight:
Need for an alterable, non-volatile, on-chip analog memory.
Use of a sample-and-hold circuit.
Need for capacitive storage.

Page 20:

PROGRAMMABLE SYNAPSE DESIGN
PROBLEM:
Relative chip area is large.
Excess power consumption.
Limits the number of synapses.
Reduces the computational area.
Suffers from short storage time.

SOLUTION: Use a programmable threshold-voltage transistor.

Page 21:

PROGRAMMABLE THRESHOLD VOLTAGE TRANSISTOR

CHARACTERISTICS:
Based on the standard MOS transistor.
Small size and power consumption.
Slow memory decay.
Compatible with the standard fabrication process.
Typically has one or more gate layers used to store charge.
The stored charge shifts the threshold voltage.
Generally of two types.
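How stored charge shifts the threshold voltage is commonly modelled as ΔVT = -QFG/CCG, where QFG is the charge on the storage gate and CCG the control-gate coupling capacitance. The following is an illustrative sketch under that model; the capacitance and electron count are assumptions, not figures from the slides:

```python
ELECTRON_CHARGE = -1.602e-19  # coulombs

def threshold_shift(q_fg, c_cg):
    # dVT = -Q_FG / C_CG: storing electrons (negative charge) raises VT.
    return -q_fg / c_cg

# e.g. 10,000 stored electrons against a 1 fF coupling capacitance.
dvt = threshold_shift(10_000 * ELECTRON_CHARGE, 1e-15)
```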

Page 22:

TYPES OF PROGRAMMABLE THRESHOLD VOLTAGE TRANSISTOR

1.FLOATING GATE MOS TRANSISTOR

Page 23:

TYPES OF PROGRAMMABLE THRESHOLD VOLTAGE TRANSISTOR

CHARACTERISTICS:

A new gate, the “floating gate”, is added between the original gate and the channel.

The original gate is called the control gate.

The floating gate is an isolated conducting island.

Electrons put on the floating gate stay there for a long time.

Page 24:

FLOATING GATE MOS TRANSISTOR

When electrons are present on the floating gate:
They attract holes from the p-substrate.
The accumulated holes change the conductivity of the channel.

[Figure: Case 1, p-doped floating-gate transistor - no gate voltage, but electrons present]

Page 25:

FLOATING GATE MOS TRANSISTOR

When the floating gate is devoid of electrons:
Holes are scattered in the channel.
A high control-gate voltage is needed.

[Figure: Case 2, p-doped floating-gate transistor]

Page 26:

PROGRAMMING OF FLOATING GATE MOS TRANSISTOR

PROGRAM (write operation)
Putting electrons onto the floating gate.
It means logic ‘0’.
Accomplished by:
i) FN tunnelling
ii) Channel hot-electron injection

Page 27:

PROGRAMMING OF FLOATING GATE TRANSISTOR

ERASE OPERATION
Removing the charge from the floating gate.
It means logic ‘1’.
Accomplished by:
i) Quantum tunnelling

Page 28:

WORKING PRINCIPLE
High voltage at the control gate.
Electrons at the floating gate.
Holes accumulate at the channel region.
The barrier is reduced.
Tunnelling is enhanced.
Electrons reach the floating gate as well as the control gate.

Page 29:

DISADVANTAGES OF FGMOS
Defects in the tunnel oxide.
Electrical stressing due to programming.
Difficult to scale down the oxide.

Page 30:

SOLUTION
Use MNOS (metal-nitride-oxide-silicon) transistors:
Data retention is not sensitive to oxide defects.
Programming speed increases.
FN tunnelling consumes less power.
Thin tunnelling oxide.

Page 31:

COMPARING THE TWO TRANSISTORS

Page 32:

PROGRAMMABLE SYNAPSE: DESIGN
The concept is to provide a small change in weight.
M1 and M2 operate in the triode region.
Equal Vgs; equal Vth.
I is proportional to (Vgs2 - Vgs1).
Vgs1 < Vgs2 (positive weight); Vgs1 > Vgs2 (negative weight).
An adaptive weight is achieved using the control unit.
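Assuming a simple square-law triode model with matched devices at equal Vds and Vth (an illustrative assumption, not a slide detail), the difference of the two drain currents reduces to k·Vds·(Vgs2 - Vgs1), matching the proportionality above:

```python
def triode_current(k, vgs, vt, vds):
    # Square-law triode-region drain current.
    return k * ((vgs - vt) * vds - vds ** 2 / 2.0)

def synapse_current(k, vgs1, vgs2, vt, vds):
    # I = I2 - I1 = k * Vds * (Vgs2 - Vgs1): the quadratic terms cancel,
    # and the sign of the weight follows the sign of (Vgs2 - Vgs1).
    return triode_current(k, vgs2, vt, vds) - triode_current(k, vgs1, vt, vds)
```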

Page 33:

PROGRAMMABLE SYNAPSE: DESIGN OF THE CONTROL UNIT
Consists of:
8-bit input (B0-B7).
A 1:8 analog MUX with control code (S0, S1, S2).
An enable terminal ‘E’ to enable the MUX.
Output current ‘I’ applied to the CMOS switch.

Page 34:

NEURON CIRCUIT: DESIGN OF THE SUMMING ELEMENT
Designed in the weak-inversion region.
M4-M5 operate in the subthreshold region.
The common-mode input range should be wide.
Input voltage of the p-channel pair: Vss < Vcommon < (Vdd - Vsd(sat) - Vsg,p)
The second stage consists of: a CS amp (M9) and a CS amp (M10).
Vss + Vds < Vout < (Vdd - Vsd(sat),p)

[Figure: summing element schematic between the Vdd and Vss rails]
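The common-mode condition Vss < Vcommon < (Vdd - Vsd(sat) - Vsg,p) can be checked mechanically; the supply and overdrive values below are illustrative assumptions:

```python
def common_mode_range(vss, vdd, vsd_sat, vsg_p):
    # Valid window for the p-channel input pair's common-mode voltage:
    # Vss < Vcommon < Vdd - Vsd(sat) - Vsg,p
    return vss, vdd - vsd_sat - vsg_p

def in_common_mode_range(vcm, vss, vdd, vsd_sat, vsg_p):
    lo, hi = common_mode_range(vss, vdd, vsd_sat, vsg_p)
    return lo < vcm < hi
```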

Page 35:

NEURON CIRCUIT: DESIGN OF THE TANH ACTIVATION FUNCTION
The input is differential (M13-M14).

The circuit generates a current ‘I’ proportional to the difference between the two branch currents of the differential pair, which represents the tanh function.
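A weak-inversion differential pair is commonly modelled as Iout = Ib·tanh(Vid/(2·n·VT)); the bias current, slope factor n, and thermal voltage below are illustrative assumptions, not values from the slides:

```python
import math

def diff_pair_output(vid, i_bias, n=1.5, vt=0.0259):
    # Subthreshold (weak-inversion) differential pair:
    # Iout = Ib * tanh(Vid / (2 * n * VT))
    return i_bias * math.tanh(vid / (2.0 * n * vt))
```

The output saturates at ±Ib for large differential inputs, which is exactly the behaviour the neuron uses as its tanh activation.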

Page 36:

COMPLETE NEURON DESIGN

Page 37:

ADVANTAGES:
Massively parallel neural system.
The full potential of silicon can be exploited.
Fault tolerance.
Low power.
Real-world interface.
Computational power derived from massive parallelism.

DISADVANTAGES:
Mismatch.
Non-ideal behaviour of the circuit.

Page 38:

APPLICATIONS
Clustering
Prediction systems
Artificial system building
Optical character recognition
Image processing

Page 39:

OPTICAL CHARACTER RECOGNITION

Page 40:

IMAGE PROCESSING

Page 41:

CONCLUSION
In this paper, a programmable analog VLSI neural network has been introduced. The synaptic weight circuit was realized using four MOS transistors and the SR technique.

Moreover, the summing element and the activation function have been designed in the subthreshold region. The system was realized in a standard 0.8 µm CMOS technology and operated with a ±1 V power supply.

This system has been used for recognizing handwritten words, and zip codes.

Page 42:

REFERENCES
1. E. A. Vittoz, “Analog VLSI implementation of neural networks,” in IEEE Int. Symp. Circuits Syst. Proc., 1990, pp. 2524-2527.
2. Alan Murray and Lionel Tarassenko, “Analogue Neural VLSI - A Pulse Stream Approach,” Chapman & Hall, 1994.
3. J. B. Lont and W. Guggenbuhl, “Analog CMOS implementation of a multilayer perceptron with nonlinear synapses,” IEEE Trans. on Neural Networks, Vol. 3, No. 3, pp. 457-462, May 1992.
4. T. Morishita et al., “A BiCMOS analog neural network with dynamically updated weights,” in IEEE Int. Solid-State Circuits Conf. Proc., 1990, pp. 142-143.
5. M. A. Abo-Elsoud, “Analog circuits for electronic neural networks,” in Proc. of the 35th Midwest Symp. on Circuits and
6. Andreas G. Andreou, Kwabena A. Boahen, Philippe O. Pouliquen, Aleksandra Pavasovic, Robert E. Jenkins, and Kim Strohbehn, “Current-Mode Subthreshold MOS Circuits for Analog VLSI Neural Systems,” IEEE Trans. on Neural Networks.
7. C. Mead, “Analog VLSI and Neural Systems,” Addison-Wesley, 1989.
8. Johan H. Huijsing, Ron Hogervorst, and Klaas-Jan de Langen, “Low-power Low-voltage VLSI Operational Amplifier Cells,” IEEE Trans. on Circuits and Syst.-I: Fundamental Theory and Applications, Vol. 42, No. 11, pp. 841-852, November 1995.

Page 43:

THANK YOU