
HAND WRITING RECOGNITION USING HYBRID PARTICLE SWARM OPTIMIZATION & BACK PROPAGATION ALGORITHM

Satish Lagudu1, CH.V.Sarma2

1 Post Graduate Student (M.Tech), Vignan's Institute of Information Technology, Visakhapatnam
2 Sr. Asst. Professor, Vignan's Institute of Information Technology, Visakhapatnam


International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org  Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 1, January 2013  ISSN 2319-4847

ABSTRACT 

Handwriting recognition is a field of research in pattern recognition, artificial intelligence and machine vision. Handwritten digit recognition is considered one of the difficult problems in the field of pattern recognition. Though academic research in the field continues, the focus in character recognition has shifted to the implementation of proven techniques. The aim of this paper is to create a new handwritten language recognition method. The paper deals with recognition of isolated handwritten characters and words using a hybrid Particle Swarm Optimization and Back Propagation algorithm.

Keywords: Pattern Recognition, Handwriting Recognition, Particle Swarm Optimization, Back Propagation Algorithm, Neural Networks.

1. INTRODUCTION 

Handwriting recognition has been studied for nearly forty years and there are many proposed approaches. The problem is quite complex, and even now there is no single approach that solves it both efficiently and completely in all settings [5]. We see many people using handwriting across a wide variety of applications including schools, hospitals, banking, insurance, government, and more. It is exciting to see this natural form of interaction used in new scenarios. Of course, one thing we need to continue to do is improve the quality of recognition as well as the availability of recognizers in more languages around the world.

In the handwriting recognition process, an image containing text must be appropriately supplied and pre-processed. Next, the text must undergo either segmentation or feature extraction. Small processed pieces of the text will be the result, and these must undergo recognition by the system. Finally, contextual information should be applied to the recognized symbols to verify the result [1]. The hybrid Particle Swarm Optimization and Back Propagation algorithm, applied to handwriting recognition, allows for high generalization ability and does not require deep background knowledge or formalization to solve the written language recognition problem [2] [6].

2. METHODS DESCRIPTION 

The proposed system attempts to combine two methods for handwriting recognition, Particle Swarm Optimization and Back Propagation, for geometric feature extraction. The system consists of a pre-processing subsystem and a neural network subsystem. In this paper two types of algorithms were used, namely the Particle Swarm Optimization algorithm and the Back Propagation algorithm.

In this method we train a Back Propagation neural network using the Particle Swarm Optimization algorithm. Back-propagation can be applied to real image recognition problems without a complex pre-processing stage that requires detailed engineering, since the learning network is fed directly with images rather than feature vectors. Before inputting the data into the network, the image has to be closed first so that there are no minor holes. Then the image is resized to 16 x 16 pixels. Afterwards, the image is thinned so that only the skeleton remains. Once the skeleton image is obtained, the horizontal, vertical, right-diagonal, and left-diagonal histograms of the image are determined and fed into the neural network. A three-layered neural network is used, with 94 input units, 15 hidden units, and 10 output units. An image containing 100 samples of characters, written in different styles, is fed into the system for training and testing. Then a net-file is created, which can be used to create an image file that shows the recognized number.
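The paper gives no code for this pre-processing stage, so the following Python sketch is only an illustration of the steps just described (closing, resizing to 16 x 16 pixels, skeletonization, and the four directional histograms). The exact feature layout is not stated; one plausible reading, used here, concatenates the 16 row sums, 16 column sums and 31 sums along each diagonal direction, which gives exactly the 94 inputs mentioned above. The function name and the use of scikit-image are our assumptions.

```python
import numpy as np
from skimage.morphology import binary_closing, skeletonize
from skimage.transform import resize

def extract_features(image):
    """Pre-process one character image as described above and return 94 features.

    `image` is assumed to be a 2-D NumPy array with foreground pixels > 0.
    """
    binary = image > 0
    closed = binary_closing(binary)                       # close minor holes
    small = resize(closed.astype(float), (16, 16)) > 0.5  # resize to 16 x 16 pixels
    skeleton = skeletonize(small)                         # keep only the skeleton

    # Directional histograms of the skeleton image.
    horizontal = skeleton.sum(axis=1)                     # 16 row sums
    vertical = skeleton.sum(axis=0)                       # 16 column sums
    right_diag = np.array([np.trace(skeleton, offset=k) for k in range(-15, 16)])            # 31 sums
    left_diag = np.array([np.trace(np.fliplr(skeleton), offset=k) for k in range(-15, 16)])  # 31 sums

    # 16 + 16 + 31 + 31 = 94 values, matching the 94 network inputs.
    return np.concatenate([horizontal, vertical, right_diag, left_diag]).astype(float)
```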



Figure 1: Steps involved in Handwriting Recognition 

2.1 Handwritten Recognition in Pattern Recognition
Linear classification [9] is a useful method for recognizing handwritten characters [4]. The background basis of the Artificial Neural Network (ANN) [3] can be implemented as a classification function. Linear classification works very similarly to an Artificial Neural Network because the mapping of an ANN cell, or of one layer of ANN cells, is equivalent to a linear discrimination function. Therefore, if the ANN is a two-layer network consisting of an input and an output layer, it can act as a linear classifier.
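As a concrete illustration of this equivalence (not taken from the paper; the names are ours), a network with only an input and an output layer computes one linear discriminant function per class and picks the largest:

```python
import numpy as np

def linear_classify(x, W, b):
    """Two-layer (input -> output) ANN acting as a linear classifier.

    x : feature vector of length d (for example, the 94 histogram features)
    W : weight matrix of shape (num_classes, d)
    b : bias vector of length num_classes
    """
    scores = W @ x + b          # one linear discriminant function per class
    return int(np.argmax(scores))
```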

2.2 What is a Neural Network?
A Neural Network (NN) [3] is a function with adjustable or tunable parameters. Let the input to a neural network be denoted by x. This is a real-valued row vector, typically referred to as the input, input vector, regressor or sometimes pattern vector. The length of the vector x is the number of inputs to the network. Let the network output be denoted by Y. This is an approximation of the desired output y, which is also a real-valued vector with one or more components, its length being the number of outputs from the network. Data sets often contain many input and output pairs; then x and y denote matrices with one input and one output vector on each row. A neural network [3] is a structure involving weighted interconnections between neurons or units. These are often non-linear scalar transformations, but can also be linear scalar transformations. Figure 2 shows an example of a one-hidden-layer neural network with three inputs, x = {x1, x2, x3}. The three inputs, along with a unity bias input, are fed to each of the two neurons in the hidden layer. The two outputs from this layer, together with a unity bias, are then fed into the single output-layer neuron. This produces the scalar output Y.

The layer of neurons is called the hidden layer because its outputs are not directly seen in the data. Each arrow in Figure 2 corresponds to a real-valued parameter, or weight, of the network. The values of these parameters are tuned when training the network.

Figure 2: Feed-forward Neural Network with 3 inputs, two hidden neurons and one output neuron. 

A neuron is structured to process multiple inputs, including the unity bias, in a non-linear way and to produce a single output. All inputs to the neuron are first scaled by multiplicative weights. These weighted inputs are summed and then transformed via a non-linear activation function. As indicated in Figure 2, the neurons in the first layer of the network are non-linear, while the single output neuron is linear because no activation function is used. The information in an ANN is stored in a number of parameters, which can either be pre-set by the operator or trained by presenting the ANN with examples of input, possibly together with the desired output [10].
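A minimal sketch of the forward pass of the network in Figure 2 may make this concrete. The paper only says the hidden neurons are non-linear, so the tanh activation here is an assumption, and all names are ours; the output neuron is linear, as stated above.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of the one-hidden-layer network of Figure 2.

    x  : input vector of length 3, x = (x1, x2, x3)
    W1 : hidden-layer weights, shape (2, 3); b1 : hidden biases (unity-bias weights), length 2
    W2 : output-layer weights, shape (1, 2); b2 : output bias, length 1
    """
    z = np.tanh(W1 @ x + b1)   # two non-linear hidden neurons (tanh assumed)
    Y = W2 @ z + b2            # single linear output neuron, no activation
    return Y.item()
```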


2.3 Artificial Neural Network
Artificial Neural Networks (ANNs) have been around since the late 1950s, but it was not until the mid-1980s that they became sophisticated enough for applications. Today, ANNs [3] are applied to many real-world problems that are considered complex. ANNs are also good pattern recognition engines and robust classifiers. They have the ability to generalize by making decisions about imprecise input data. They also offer solutions to a variety of classification problems such as speech, character and signal recognition. An Artificial Neural Network is a collection of very simple, massively interconnected cells. The cells are arranged so that each cell derives its input from one or more other cells and is linked through weighted connections to one or more other cells. In this way, input to the ANN is distributed throughout the network, so that the output takes the form of one or more activated cells. Figure 3 is an example of a simple ANN:

Figure 3: Multi-Layer Artificial Neural Networks. 

3. BACK-PROPAGATION ALGORITHM 

The back-propagation algorithm [11] consists of two phases. The first phase is the forward phase, in which the activations propagate from the input layer to the output layer. The second phase is the backward phase, in which the difference between the observed actual value and the requested nominal value at the output layer is propagated backwards in order to modify the weights and bias values. Figure 4 illustrates forward and backward propagation.

Figure 4: Forward and Backward propagation.

In forward propagation, the weights of the receptive connections of neuron j lie in one row of the weight matrix. In backward propagation, neuron j in the output layer calculates the error between the expected nominal targets and the actual output values, which are known from the forward propagation. The error is propagated backwards to the previous hidden layer, and neuron i in the hidden layer calculates the error that is propagated backwards to its previous layer; this is why a column of the weight matrix is used.

Step 0: Initialize weights (set to small random values).
Step 1: While the stopping condition is false, do Steps 2-9.
Step 2: For each training pair, do Steps 3-8.

Feedforward:


Step 3: Each input unit (X_i, i = 1, ..., n) receives input signal x_i and broadcasts this signal to all units in the layer above (the hidden units).

Step 4: Each hidden unit (Z_j, j = 1, ..., p) sums its weighted input signals,

    z_in_j = v_0j + Σ_i x_i v_ij,

applies its activation function to compute its output signal,

    z_j = f(z_in_j),

and sends this signal to all units in the layer above (the output units).

Step 5: Each output unit (Y_k, k = 1, ..., m) sums its weighted input signals,

    y_in_k = w_0k + Σ_j z_j w_jk,

and applies its activation function to compute its output signal,

    y_k = f(y_in_k).

Back-propagation of error:

Step 6: Each output unit (Y_k, k = 1, ..., m) receives a target pattern corresponding to the input training pattern and computes its error information term,

    δ_k = (t_k - y_k) f'(y_in_k),

calculates its weight correction term (used to update w_jk later),

    Δw_jk = α δ_k z_j,

calculates its bias correction term (used to update w_0k later),

    Δw_0k = α δ_k,

and sends δ_k to the units in the layer below (α is the learning rate).

Step 7: Each hidden unit (Z_j, j = 1, ..., p) sums its delta inputs (from the units in the layer above),

    δ_in_j = Σ_k δ_k w_jk,

multiplies by the derivative of its activation function to calculate its error information term,

    δ_j = δ_in_j f'(z_in_j),

calculates its weight correction term (used to update v_ij later),

    Δv_ij = α δ_j x_i,

and calculates its bias correction term (used to update v_0j later),

    Δv_0j = α δ_j.

Update weights and biases:

Step 8: Each output unit (Y_k, k = 1, ..., m) updates its bias and weights (j = 0, ..., p):

    w_jk(new) = w_jk(old) + Δw_jk.

Each hidden unit (Z_j, j = 1, ..., p) updates its bias and weights (i = 0, ..., n):

    v_ij(new) = v_ij(old) + Δv_ij.


Step 9:  Test stopping condition.
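The paper does not provide an implementation of Steps 0-9, so the NumPy sketch below is only an illustration of one training epoch. The sigmoid activation, the learning-rate value and the layer sizes (94-15-10, from Section 2) are assumptions; the function and variable names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(X, T, V, v0, W, w0, alpha=0.2):
    """One epoch of the back-propagation algorithm (Steps 2-8 above).

    X : training inputs, shape (num_samples, 94)
    T : target patterns, shape (num_samples, 10)
    V, v0 : hidden-layer weights (94, 15) and biases (15,)
    W, w0 : output-layer weights (15, 10) and biases (10,)
    alpha : learning rate (value assumed, not stated in the paper)
    """
    for x, t in zip(X, T):
        # Feedforward (Steps 3-5).
        z_in = v0 + x @ V            # z_in_j = v_0j + sum_i x_i v_ij
        z = sigmoid(z_in)
        y_in = w0 + z @ W            # y_in_k = w_0k + sum_j z_j w_jk
        y = sigmoid(y_in)

        # Back-propagation of error (Steps 6-7); f'(y_in) = y(1 - y) for the sigmoid.
        delta_k = (t - y) * y * (1 - y)
        delta_in_j = W @ delta_k     # sum_k delta_k w_jk
        delta_j = delta_in_j * z * (1 - z)

        # Update weights and biases (Step 8).
        W += alpha * np.outer(z, delta_k)
        w0 += alpha * delta_k
        V += alpha * np.outer(x, delta_j)
        v0 += alpha * delta_j
    return V, v0, W, w0
```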

4. PARTICLE SWARM OPTIMIZATION 

Problem solving is a population-wide phenomenon, emerging from the individual behaviors of the particles through their interactions. In any case, populations are organized according to some sort of communication structure or topology, often thought of as a social network.

In PSO, each particle keeps track of its coordinates in the solution space that are associated with the best solution (fitness) achieved so far by that particle. This value is called the personal best, pbest. Another best value tracked by PSO is the best value obtained so far by any particle in the neighborhood of that particle. This value is called gbest. In PSO each particle tries to modify its position using the following information:

- The current position,
- The current velocity,
- The distance between the current position and pbest,
- The distance between the current position and gbest.

The modification of the particle's position can be mathematically modeled according to the following equation:

    V_i^(k+1) = w V_i^k + c1*rand1()*(pbest_i - s_i^k) + c2*rand2()*(gbest - s_i^k)    (1)

where,
    V_i^k   : velocity of agent i at iteration k,
    w       : weighting function,
    c_j     : weighting factor, for j = 1, 2,
    rand    : uniformly distributed random number between 0 and 1,
    s_i^k   : current position of agent i at iteration k,
    pbest_i : pbest of agent i,
    gbest   : gbest of the group.

The following weighting function is utilized in (1):

    w = wMax - [(wMax - wMin) * iter] / maxIter    (2)

where,
    wMax    : initial weight,
    wMin    : final weight,
    maxIter : maximum iteration number,
    iter    : current iteration number.

The current position (searching point in the solution space) can be modified by the following equation:

    s_i^(k+1) = s_i^k + V_i^(k+1)    (3)
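The following Python sketch illustrates equations (1)-(3). The paper does not give values for c1, c2, wMax or wMin, so the defaults below are common choices rather than the authors' settings, and all names are ours.

```python
import numpy as np

def pso_step(positions, velocities, pbest_pos, gbest_pos, w, c1=2.0, c2=2.0):
    """One PSO update using equations (1) and (3) above.

    positions, velocities : arrays of shape (num_particles, dim)
    pbest_pos : each particle's personal-best position, same shape as positions
    gbest_pos : the swarm's global-best position, shape (dim,)
    w         : inertia weight from equation (2)
    c1, c2    : weighting factors (2.0 is a common default, not from the paper)
    """
    r1 = np.random.rand(*positions.shape)   # rand1() in equation (1)
    r2 = np.random.rand(*positions.shape)   # rand2() in equation (1)
    velocities = (w * velocities
                  + c1 * r1 * (pbest_pos - positions)
                  + c2 * r2 * (gbest_pos - positions))
    positions = positions + velocities      # equation (3)
    return positions, velocities

def inertia_weight(it, max_iter, w_max=0.9, w_min=0.4):
    """Equation (2); w_max and w_min are typical defaults, not values from the paper."""
    return w_max - (w_max - w_min) * it / max_iter
```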

5. HYBRID PSO-BP ALGORITHM 

The PSO-BP is an optimization algorithm combining PSO with BP [7] [8]. Similar to the GA, the PSO algorithm is a global algorithm that has a strong ability to find a globally optimal result; this PSO algorithm, however, has the disadvantage that the search around the global optimum is very slow. The BP algorithm, on the contrary, has a strong ability to find a locally optimal result, but its ability to find the globally optimal result is weak. By combining PSO with BP, a new algorithm referred to as the PSO-BP hybrid algorithm is formulated in this paper.

The PSO-BP algorithm can be summarized as follows:

Step 1: Initialize the positions (weights and biases) and velocities of a group of particles randomly.
Step 2: The PSO-BP is trained using the initial particle positions.
Step 3: The learning error produced by the BP neural network is treated as each particle's fitness value according to its initial weights and biases.
Step 4: The learning error at the current epoch is reduced by changing the particles' positions, which updates the weights and biases of the network.
    (i) The "pbest" value (each particle's lowest learning error so far) and


    (ii) The "gbest" value (lowest learning error found in the entire learning process so far) are applied to the velocity update equation (Eq. 1) to produce a value for the position adjustment towards the best solution or targeted learning error.
Step 5: The new sets of positions (NN weights and biases) are produced by adding the calculated velocity values to the current position values using the movement equation (Eq. 3). Then, the new sets of positions are used to produce the new learning error in the feed-forward NN.
Step 6: This process is repeated until the stopping conditions, either minimum learning error or maximum number of iterations, are met.
Step 7: The optimization output is returned, which is the solution of the optimization.
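A minimal sketch of the loop in Steps 1-7 is given below, reusing the pso_step and inertia_weight sketches from Section 4. The fitness function is assumed to unpack a particle's position into the network weights and biases and return the BP learning error (for example the mean squared error over the training set); the particle count, iteration limit and target error are illustrative values, not ones stated in the paper.

```python
import numpy as np

def pso_bp_train(fitness, dim, num_particles=20, max_iter=100,
                 target_error=1e-3, w_max=0.9, w_min=0.4):
    """Sketch of the hybrid PSO-BP learning loop (Steps 1-7 above)."""
    pos = np.random.uniform(-1, 1, (num_particles, dim))     # Step 1
    vel = np.zeros((num_particles, dim))
    pbest_pos = pos.copy()
    pbest_err = np.array([fitness(p) for p in pos])          # Steps 2-3
    gbest_idx = int(np.argmin(pbest_err))
    gbest_pos, gbest_err = pbest_pos[gbest_idx].copy(), pbest_err[gbest_idx]

    for it in range(max_iter):                                # Steps 4-6
        w = inertia_weight(it, max_iter, w_max, w_min)        # Eq. (2)
        pos, vel = pso_step(pos, vel, pbest_pos, gbest_pos, w)  # Eqs. (1), (3)
        errors = np.array([fitness(p) for p in pos])
        improved = errors < pbest_err                         # update pbest
        pbest_pos[improved] = pos[improved]
        pbest_err[improved] = errors[improved]
        if pbest_err.min() < gbest_err:                       # update gbest
            gbest_idx = int(np.argmin(pbest_err))
            gbest_pos, gbest_err = pbest_pos[gbest_idx].copy(), pbest_err[gbest_idx]
        if gbest_err <= target_error:                         # stopping condition
            break
    return gbest_pos, gbest_err                               # Step 7
```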

Figure 5: Hybrid PSO-BP Learning Process

Figure 6: Flowchart for Hybrid PSO-BP Algorithm.

6. RESULTS 

We now assess the practical efficiency of the proposed hybrid Particle Swarm Optimization and Back Propagation algorithm with screenshots. We implement the proposed mechanism, including both algorithms, to recognize isolated handwritten characters.


To set up the handwriting recognizer, follow the steps below:
i.   Click the Handwriting Recognizer button.
ii.  Select the input image by clicking the 'Select' button and choosing the image using the dialogue shown.
iii. Click the "Trace Characters" button.
iv.  Once completed, the results will be shown in the output log.

7. CONCLUSION 

We proposed an efficient hybrid Particle Swarm Optimization and Back Propagation algorithm to enhance the performance of an Artificial Neural Network for handwriting recognition by optimizing the ANN weights. Finding a more relevant fitness function or using a form of discriminative training may further improve the results and achieve performance superior to previously existing handwritten recognition methods.

REFERENCES 

[1] Kuok King Kuok, Sobri Harun, Siti Mariyam Shamsuddin, "Particle Swarm Optimization Feedforward Neural Network for Hourly Rainfall-runoff Modeling in Bedup Basin, Malaysia".
[2] Rita Yadav, Danvir Mandal, 2011, "Optimization of Artificial Neural Network for Speaker Recognition using Particle Swarm Optimization".
[3] Buffa F., Porceddu I., 1997, "Temperature forecast and dome seeing minimization. A case study using a neural network model", http://www.pd.astro.it/TNG/TechRep/rep67/rep67.html
[4] Claus, D., "Handwritten Digit Recognition", http://www.robots.ox.ac.uk/~dclaus
[5] Malothu Nagun, N Vijay Shankar Annapurna, 2011, "A novel method for Handwritten Digit Recognition with Neural Networks".
[6] Mahamed G. H. Omran, 2004, "Particle Swarm Optimization Methods for Pattern Recognition and Image Processing".
[7] Jing-Ru Zhang, Jun Zhang, Tat-Ming Lok, Michael R. Lyu, 2006, "A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training".
[8] H. Shayeghi, H.A. Shayanfar, G. Azimi, 2010, "A Hybrid Particle Swarm Optimization Back Propagation Algorithm for Short Term Load Forecasting".
[9] Keysers D., Ney H., Paredes R., Vidal E., 2002, "Combination of Tangent Vectors and Local Representations for Handwritten Digit Recognition".
[10] Jonathan Mooney, 2008, "A Character Recognition Artificial Neural Network".
[11] Nearest Neighbor Rule: A Short Tutorial, 1995, http://cgm.cs.mcgill.ca/~soss/cs644/projects/simard/nn_theory.html