NEURAL NETWORK BASED PREDICTION


Presented by:
PRITI (12003062011)
ARCHANA KUMARI (12003062005)
KALPANA CHOUDHARY (1200)
SUCHANDRA MUKHOPADHYAY (12003062002)

Project supervisors:
MRINMOY CHAKRABORTY
ABHIJIT BANNERJEE
RAJIB LOCHAN DAS


What is an ANN?
Different kinds of neural networks
Training of a neural network using back propagation
Results obtained in Visual Basic 6
Results obtained in MATLAB
Comparison
Further development of BPN: 1. FANN  2. GA-driven ANN

LIST OF TOPICS

Back propagation algorithm
Multilayer ANN implementation using two methods: a. Visual Basic  b. MATLAB
Functional ANN
Basics of the genetic algorithm

PROGRESS

An Artificial Neural Network is a mathematical or computational model that tries to simulate the structural and functional aspects of biological neural networks.

It is a highly interconnected network of a large number of processing elements called neurons.

An ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase.

ARTIFICIAL NEURAL NETWORKS- AN OVERVIEW

Neural networks learn from examples. They can be trained with known examples of a problem to acquire knowledge about it. After that they can be used to solve unknown instances of the problem.

Multilayer ANN Functional ANN GA driven ANN

METHODOLOGIES ADOPTED

A multilayer ANN is made up of multiple layers: input, output, and intermediary layers called hidden layers.

The input layer neurons are linked to the hidden layer neurons, and the weights on these links are referred to as input-hidden layer weights.

The hidden layer aids in performing useful intermediary computations before directing the input to the output layer.

Multilayer ANN

Back propagation is a common method of teaching artificial neural networks how to perform a given task.

It is a supervised learning method and an implementation of the delta rule.

Delta rule: a gradient-descent learning rule for updating the weights of artificial neurons in a single-layer perceptron.

BACK PROPAGATION ALGORITHM
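The delta rule above can be sketched for a single-layer perceptron. This is a minimal illustration in Python, not the project's VB/MATLAB code; the learning rate, epoch count, and training data (logical AND, which is linearly separable) are illustrative choices.

```python
# Delta rule: w <- w + eta * (d - o) * x for a single-layer perceptron.

def train_delta(samples, eta=0.5, epochs=20):
    """samples: list of (inputs, desired) pairs; returns learned weights."""
    n = len(samples[0][0])
    w = [0.0] * (n + 1)          # last entry is the bias weight
    for _ in range(epochs):
        for x, d in samples:
            x = list(x) + [1.0]  # append constant bias input
            o = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0.0
            for i in range(len(w)):
                w[i] += eta * (d - o) * x[i]   # delta-rule update
    return w

# Learn logical AND (linearly separable, so the perceptron converges)
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_delta(AND)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, list(x) + [1.0])) > 0 else 0
print([predict(x) for x, _ in AND])  # expected: [0, 0, 0, 1]
```

The rule only adjusts weights when the prediction is wrong, moving the decision boundary toward the misclassified sample.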

1. A training sample is presented to the neural network.

2. The network output is compared with the desired output from the sample, and the error is calculated at each output neuron.

3. Depending on how much the actual output is higher or lower than the desired output, a scaling factor is calculated.

4. The weights of each neuron are adjusted to lower the local error.

5. The above steps are repeated in order to minimize the error.


BACK PROPAGATION ALGORITHM for training the ANN

x1, x2, …, xn represent the input nodes and o1, o2, …, on represent the output nodes. The nodes between the input and output nodes are hidden nodes.

Notation: x_i are the inputs, y_j the hidden-layer outputs, O_k the network outputs, d_k the desired outputs; v_ji are the input-to-hidden weights, W_kj the hidden-to-output weights, and η the learning rate.

Output layer:

U_k = Σ_j W_kj y_j ,   O_k = f(U_k)                                    ...(1)

E = 1/2 Σ_k (d_k − O_k)^2                                              ...(2)

W_kj ← W_kj + ΔW_kj ,   ΔW_kj = −η ∂E/∂W_kj                            ...(3)

∂E/∂W_kj = (∂E/∂O_k)(∂O_k/∂U_k)(∂U_k/∂W_kj)
         = −(d_k − O_k) O_k (1 − O_k) y_j                              ...(4)

Hidden layer:

u_j = Σ_i v_ji x_i ,   y_j = f(u_j)                                    ...(5)

v_ji ← v_ji + Δv_ji ,   Δv_ji = −η ∂E/∂v_ji

From (2) and (5), by the chain rule:

∂E/∂v_ji = Σ_k (∂E/∂O_k)(∂O_k/∂U_k)(∂U_k/∂y_j)(∂y_j/∂u_j)(∂u_j/∂v_ji)
         = −Σ_k (d_k − O_k) O_k (1 − O_k) W_kj · y_j (1 − y_j) x_i     ...(6)

Writing δ_k = (d_k − O_k) O_k (1 − O_k), the update rules become:

W_kj ← W_kj + η δ_k y_j
v_ji ← v_ji + η y_j (1 − y_j) x_i Σ_k δ_k W_kj

Derivative of the sigmoid activation used above:

O_k = f(U_k) = 1 / (1 + e^(−U_k)) ,   U_k = Σ_j y_j W_kj
∂O_k/∂U_k = e^(−U_k) / (1 + e^(−U_k))^2 = O_k (1 − O_k)
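The derivation above can be sketched as a small Python program that trains a network on XOR, the same task used in the experiments that follow. The hidden-node count, learning rate, iteration count, and random seed are illustrative choices, not the VB or MATLAB code from the slides.

```python
import math
import random

random.seed(1)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

# XOR training set (inputs x, desired output d)
DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

H, eta = 4, 0.5   # hidden-node count and learning rate (illustrative)
v = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # input(+bias) -> hidden
W = [random.uniform(-1, 1) for _ in range(H + 1)]                  # hidden(+bias) -> output

def forward(x):
    xb = x + [1.0]                                    # append constant bias input
    y = [sigmoid(sum(v[j][i] * xb[i] for i in range(3))) for j in range(H)]
    O = sigmoid(sum(W[j] * y[j] for j in range(H)) + W[H])
    return xb, y, O

def err():
    return sum((d - forward(x)[2]) ** 2 for x, d in DATA)

def train(iterations=10000):
    for _ in range(iterations):
        for x, d in DATA:
            xb, y, O = forward(x)
            delta = (d - O) * O * (1 - O)             # output-layer delta: (d-O)*O*(1-O)
            # hidden-layer gradients, computed before the output weights change
            grads = [delta * W[j] * y[j] * (1 - y[j]) for j in range(H)]
            for j in range(H):
                W[j] += eta * delta * y[j]            # W_kj update
                for i in range(3):
                    v[j][i] += eta * grads[j] * xb[i] # v_ji update
            W[H] += eta * delta                       # output bias weight

e0 = err()
train()
e1 = err()
outputs = [round(forward(x)[2], 3) for x, _ in DATA]
print(outputs)   # approaches the desired 0, 1, 1, 0 as training proceeds
```

Each sample is presented in turn and both weight layers are updated online, mirroring steps 1 to 5 listed earlier.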

The multilayer ANN has been studied here using two methods:

A. Visual Basic  B. MATLAB

First, let us look at the errors and their graphical representation obtained with the Visual Basic method.

The results achieved from the code written in VB are as follows, for different numbers of iterations.

MULTILAYER ANN

VISUAL BASIC OUTPUT FOR 50 ITERATIONS

VB OUTPUT FOR 100 ITERATIONS

VB OUTPUT FOR 500 ITERATIONS

The output achieved by the MATLAB code is as below.

Desired output: 0 1 1 0
Results obtained after training the network (10,000 iterations): 0.0295 0.9717 0.9713 0.0340

Next we have the graphical representation of error versus number of iterations obtained with this method.

MATLAB Result

GRAPH

In MATLAB we get an error of only 3%, and the result is strictly convergent.

In Visual Basic the error distribution is random with respect to the number of iterations, and the result is not convergent. The error may be reduced by improving the VB code.

Comparison

The functional ANN is a pyramidal structure used to reduce the error and achieve faster convergence.

A derivative of the data is generated using a chosen operator or function, and the data set is divided into n segments depending on the number of data points.

The number of nodes is then reduced progressively from the input to the output, so that a pyramidal structure develops.

FUNCTIONAL ANN

(Figure: input nodes, hidden layer, and outputs)

In the previous figure we have 12 input nodes, divided into 2 groups of 6 nodes each. The 4 input nodes are the result of an AND operation between the first 2 input sets.

We have 8 hidden nodes, each connected to all input nodes.

We calculate the activation function with the help of the input nodes and the weights.

We have 4 output nodes, which are connected to all hidden nodes.


We calculate the activation function with the help of the initial activations and the weights between the hidden and output layers. This becomes our output.

Our target output is the XOR of the first 2 sets of input nodes.

The error is the difference between the target and the calculated output.
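The input construction described above can be sketched as follows. This is a hypothetical illustration of the idea, two input groups, a derived AND feature set appended to them, and a bitwise XOR target; the group size and sample bits are made up, not the node counts of the actual experiment.

```python
# Sketch of the functional-ANN input construction:
# two groups of input bits, a derived AND feature set, and an XOR target.

def build_features(a, b):
    """a, b: equal-length bit lists (the two input groups)."""
    derived = [x & y for x, y in zip(a, b)]   # derived nodes: bitwise AND
    target = [x ^ y for x, y in zip(a, b)]    # target output: bitwise XOR
    return a + b + derived, target

# Illustrative 6-bit groups
inputs, target = build_features([1, 0, 1, 1, 0, 0], [1, 1, 0, 1, 0, 1])
print(inputs)   # original input nodes followed by the derived AND nodes
print(target)
```

Feeding the derived features alongside the raw inputs is what shrinks the layers toward the pyramidal shape: the network starts from a richer input layer and narrows toward the output.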

The output achieved by the MATLAB code is as below.

Desired output: 0
Result obtained after training the network (4,500 iterations): 0.0153

Next we have the graphical representation of error versus number of iterations obtained with this method.

MATLAB Result

GRAPH

Thus we are getting an error of about one percent, which is less than the error we obtained through back propagation in MATLAB.

Our task is to express the error function, the difference between the output and the target, as a function of the weight vectors connecting the input to the hidden layer and the hidden layer to the output.

We would like to minimize this error function; the corresponding values of the weight vectors are recorded as the final weight vectors when training is complete.

Why GA?

The above problem of minimizing the error function is a multivariable optimization problem. To solve it, we first solve a single-variable optimization problem using the basic steps of a genetic algorithm:

1. Population
2. Selection
3. Crossover
4. Mutation

Genetic Algorithm

POPULATION - A set of chromosomes is obtained, encoded as binary data bearing values such as weights or biases as their parameters.

SELECTION - Picking the best possible chromosomes to act as parents for reproduction, on the basis of their fitness after evaluating the fitness function.

CROSSOVER - The process of creating children from two parents, each child containing some of the genetic material of each parent.

MUTATION - The process of changing some of the parameters of a child to obtain a better set.

Steps associated

First choose the initial population.
No. of bits = [log((b − a) · 10^PR + 1) / log 2] + 1
Calculate the fitness-function value (error = o − t) in the ANN for each member of the population.
Associate a random number with each value.
Assume a constant PC, called the probability of crossover, to select the set of values that will take part in crossover.

STEPS FOR GA OVER ANN

Count the number of elements that will undergo crossover.

Count the total number of bits in the set and assume a constant PM, called the probability of mutation.

Keep generating random numbers until you get a number less than PM; if the i-th draw gives a number less than PM, that bit needs to be changed.

Repeat the steps until you get a convergent result.

The genetic algorithm is a computerized search and optimization technique based on the mechanics of natural genetics and survival of the fittest.

GA is good at taking large, potentially huge search spaces with complex data sets and navigating them, looking for optimal combinations that are otherwise difficult to find.

It works with a coding of the variables, which discretizes the search space even when the underlying function is continuous.

Advantage of GA?

In the present work, an ANN with 3% training error has been developed, and a functional neural network with a limited number of nodes has also been developed with around 1% error. Some progress has also been made toward a hybridized, modified GA-driven ANN.

CONCLUSION

THANK YOU
