Design of an Interval Feed-Forward Neural Network

Smriti Srivastava and Madhusudan Singh

Abstract— The design of neural networks is constrained by problems such as stability, plasticity, computational complexity and memory consumption. These problems are addressed in the present work by using an interval feed-forward neural network (IFFNN). It has a simple structure that reduces the computational complexity and memory consumption, and the use of a Lyapunov stability (LS) based learning algorithm assures stability. The effectiveness and applicability of the proposed IFFNN model are investigated on benchmark identification problems.

Keywords- Neural network, supervised learning, Lyapunov stability and identification.

I. INTRODUCTION

Several neural network based structures, and different learning algorithms employed in them, exist in the literature. Some popular neural network structures include the feed-forward NN, the recurrent NN, the radial basis function NN, etc. [1]. Each of these structures has its own functional limitations. Similarly, the learning of a NN is categorized as either supervised or unsupervised. In supervised learning a target output is necessary to find the relation between the input and the output (examples are gradient descent learning, Hebbian learning, back-propagation learning, the genetic algorithm, etc. [2], [3]), while in unsupervised learning no external target is required; it results in the exposition of clusters for the given input patterns. Both types of learning approach, however, have their own limitations.

A major limitation of a neural network is that the choice of its structure as well as of its learning scheme rests on the particular application. The complexity of an application demands a complex NN structure and an effective learning scheme, resulting in increased memory consumption and decreased stability and plasticity of the system. To surmount these problems, several new neural schemes have been proposed, but they all have some restriction [3]. Therefore, a new topology, called the Interval Feed-Forward Neural Network (IFFNN), is proposed. This topology assures global stability, defines its parameters (features) over an interval range (hence plasticity), and has a simple structure involving less computation and low memory consumption. Here the supervised learning scheme employing the gradient descent learning algorithm (which suffers from local minima) is modified by using a Lyapunov stability (LS) based learning algorithm. A simple interval feed-forward neural network is used for the identification of complex nonlinear systems; hence the cost of computation and the memory consumption are also greatly reduced.

The paper is organized as follows: Section II gives a brief description of the structure of the interval feed-forward neural network and of the gradient descent (GD) and LS based learning algorithms. Section III presents the simulation results and compares the proposed work with existing methods. Finally, Section IV concludes the paper.

II. INTERVAL FEED-FORWARD NEURAL NETWORK

The structure of the IFFNN is the same as the topology of a feed-forward neural network: the IFFNN combines the structure of a feed-forward neural network with an LS based learning algorithm. The LS based learning assures stability, the interval set defined by the upper and the lower bounds provides plasticity, and the neural network bestows the computational power. As mentioned in the introduction, the proposed IFFNN offers stability, plasticity, less computation and a low memory requirement; it permits an adaptive process that retains significant new information while saving the old information in memory.

A. Architecture of the IFFNN

The architecture of the IFFNN is depicted in Fig. 1. It comprises an input layer and an output layer. The nodes in the input layer receive the input information and use a set of weighted layer parameters to generate the output.

Fig. 1: Structure of IFFNN

Let $x$ be the input, and let $w^L$ and $w^U$ be the lower and upper bound weights of the IFFNN. Then,

$y^L = f(w^L, x)$   (1)

Smriti Srivastava is with the Deptt. of Instrumentation & Control Engineering, Netaji Subhas Institute of Technology, New Delhi-110078, India (Email: [email protected]). Madhusudan Singh is with Hitech Robotic Systemz Ltd., Gurgaon, Haryana, India (Email: [email protected]).



$y^U = f(w^U, x)$   (2)

where $f$ is a linear or nonlinear function. The final output is calculated as:

$y = \frac{y^U + y^L}{2}$   (3)

There are two connection weights between the input and the output nodes; one connection represents the lower bound and the other represents the upper bound. The output layer contains the lower and the upper bounds. The lower output is obtained by applying the lower weights to the input information, as in Eq. (1). Similarly, the upper output is obtained by applying the upper weights to the input information, as in Eq. (2). The final output is taken as the average of the lower and the upper output values, as in Eq. (3).

The lower and upper outputs of the IFFNN are expressed as:

$y^L(j) = \phi\left(\sum_{i=1}^{n} w_i^L(j)\,x_i(j) + b^L\right)$   (4)

$y^U(j) = \phi\left(\sum_{i=1}^{n} w_i^U(j)\,x_i(j) + b^U\right)$   (5)

where $x_i(j)$ is the $j$th sample of the $i$th input, $w_i^L(j)$ is the $j$th update of the lower bound weight connecting the $i$th input to the output, $w_i^U(j)$ is the $j$th update of the upper bound weight connecting the $i$th input to the output, $y^L(j)$ is the $j$th sample of the lower bound output of the output node, and $y^U(j)$ is the $j$th sample of the upper bound output of the output node. $\phi$ is the activation function and $b$ is the threshold (bias) value. The activation function is chosen according to the application, e.g., purelin, tansig or logsig from Matlab.
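To make the forward pass concrete, the following minimal Python sketch implements Eqs. (3)-(5) for a single output node. The names (iffnn_forward, w_lower, w_upper, b_lower, b_upper) are illustrative rather than taken from the paper, and np.tanh stands in for Matlab's tansig.

```python
import numpy as np

def iffnn_forward(x, w_lower, w_upper, b_lower, b_upper, phi=np.tanh):
    """Forward pass of a single-output IFFNN.

    x                : input vector of length n
    w_lower, w_upper : lower and upper bound weight vectors (length n)
    b_lower, b_upper : threshold (bias) values
    phi              : activation function (np.tanh plays the role of tansig)
    """
    y_lower = phi(np.dot(w_lower, x) + b_lower)   # Eq. (4)
    y_upper = phi(np.dot(w_upper, x) + b_upper)   # Eq. (5)
    y = 0.5 * (y_lower + y_upper)                 # Eq. (3): average of the bounds
    return y, y_lower, y_upper
```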

B. Lyapunov Stability Based Learning

Any global learning scheme is bestowed with exploitation and exploration abilities. An objective function $J$, also known as the performance index (PI), is chosen in the form of the normalized mean square error and is minimized with respect to the parameters ($w^L$ and $w^U$) to be learned:

$J = \frac{e^2(j)}{2}$   (6)

where $e(j) = y_D(j) - y(j)$, $y_D$ is the desired output, $y$ is the actual output and $j$ denotes the $j$th training sample.

By suppressing the superscripts on $w^L$ and $w^U$, $w$ is taken as the generalized weight vector (i.e. $w = [w^L, w^U]$). The gradient descent (GD) learning law updates this weight vector as:

$w(j+1) = w(j) + \Delta w(j)$   (7)

where $\Delta w(j) = -\eta\,\frac{\partial J}{\partial w(j)}$ and $\eta > 0$ is the learning rate.
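As a reference point for the modification introduced next, here is a minimal sketch of the GD law of Eqs. (6) and (7) for the interval weights, assuming a linear (purelin) output node so that $\partial y/\partial w^L = \partial y/\partial w^U = x/2$ follows from Eqs. (3)-(5). The names (gd_step, eta) and the default learning rate are illustrative only.

```python
import numpy as np

def gd_step(x, y_desired, w_lower, w_upper, b_lower, b_upper, eta=0.1):
    """One gradient descent update (Eq. (7)) for a linear-output IFFNN."""
    y_lower = np.dot(w_lower, x) + b_lower   # Eq. (4) with purelin activation
    y_upper = np.dot(w_upper, x) + b_upper   # Eq. (5)
    y = 0.5 * (y_lower + y_upper)            # Eq. (3)
    e = y_desired - y                        # error used in the PI of Eq. (6)
    # dJ/dw = -e * dy/dw, and dy/dw = x/2 for each bound in the linear case
    w_lower = w_lower + eta * e * 0.5 * x    # Delta w = -eta * dJ/dw
    w_upper = w_upper + eta * e * 0.5 * x
    return w_lower, w_upper, e
```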

The GD algorithm suffers from the problem of local minima because of its inability to explore the solution space. It is very difficult to choose the correct learning rate ($\eta$), and moreover the gradient of $J$ with respect to the weights may not change enough to move away from a local minimum. We therefore introduce two parameters into Eq. (7) to improve the convergence and to allow exploration of the solution space, leading to the generalized update law

$w(j+1) = \alpha_1\,w(j) + \alpha_2\,\Delta w(j)$   (8)

where $\alpha_1 \in [\alpha_1^L, \alpha_1^U]$ and $\alpha_2 \in [\alpha_2^L, \alpha_2^U]$ are uncertain interval sets. Next, the Lyapunov function is invoked to convert Eq. (8) in such a way that it provides the exploration ability lacking in Eq. (7). The Lyapunov function is selected as:

$V(j) = \frac{1}{2}\left[e^2(j) + w^2(j)\right]$   (9)

such that $V(j) > 0$ and $\Delta V(j) = V(j+1) - V(j) < 0$. We now derive an expression for $\Delta V(j)$, which will in turn help us derive an expression for $\Delta w(j)$.

$\Delta V(j) = \frac{1}{2}\left[e^2(j+1) - e^2(j) + w^2(j+1) - w^2(j)\right]$

$= \frac{1}{2}\left[\big(e(j+1) + e(j)\big)\big(e(j+1) - e(j)\big) + \big(w(j+1) + w(j)\big)\big(w(j+1) - w(j)\big)\right]$

$= \frac{1}{2}\left[\Delta e(j)\big(2e(j) + \Delta e(j)\big) + \Delta w(j)\big(2w(j) + \Delta w(j)\big)\right]$

$= \frac{1}{2}\left[\Delta e^2(j) + \Delta w^2(j)\right] + e(j)\,\Delta e(j) + w(j)\,\Delta w(j)$   (10)

where we have used the relations $e(j+1) = e(j) + \Delta e(j)$ and $w(j+1) = w(j) + \Delta w(j)$.

For very small values of $\Delta e(j)$ and $\Delta w(j)$ we can use the first-order approximation $\Delta e(j) \approx \frac{\partial e(j)}{\partial w(j)}\,\Delta w(j)$, so that Eq. (10) can be written as:

$\Delta V(j) = \frac{1}{2}\,\Delta w^2(j)\left[1 + \left(\frac{\partial e(j)}{\partial w(j)}\right)^2\right] + \Delta w(j)\left[w(j) + \frac{\partial e(j)}{\partial w(j)}\,e(j)\right]$   (11)

It follows from Eq. (6) that

$\frac{\partial e(j)}{\partial w} = -\frac{\partial y(j)}{\partial w}$   (12)

In view of Eq. (12), Eq. (11) becomes:

$\Delta V(j) = \frac{1}{2}\,\Delta w^2(j)\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right] + \Delta w(j)\left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right]$   (13)

Applying the Lyapunov stability criterion, i.e. $\Delta V < 0$, we obtain:

$\frac{1}{2}\,\Delta w^2(j)\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right] + \Delta w(j)\left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right] < 0$   (14)

Theorem: For the Lyapunov function $V(j) = \frac{1}{2}\left[e^2(j) + w^2(j)\right] > 0$, the condition $\Delta V(j) < 0$ is satisfied if and only if

$\Delta w(j) = \dfrac{\frac{\partial y(j)}{\partial w(j)}\,e(j) - w(j)}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$   (15)

Proof: From Eq. (14), let

$\frac{1}{2}\,\Delta w^2(j)\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right] + \Delta w(j)\left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right] = -\frac{1}{2}z$   (16)

where $z$ is a positive value. Simplification of Eq. (16) yields:

$\frac{1}{2}\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right]\Delta w^2(j) + \left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right]\Delta w(j) + \frac{z}{2} = 0$   (17)

This equation is in the quadratic form $a\hat{x}^2 + b\hat{x} + c = 0$; for it to reduce to $(\hat{x} - d)^2 = 0$, where $d$ is the (repeated) solution, we substitute

$a = \frac{1}{2}\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right], \qquad b = w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j), \qquad c = \frac{z}{2}$

into the discriminant condition $b^2 - 4ac = 0$, leading to:

$\left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right]^2 - z\left[1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2\right] = 0$

So we have

$z = \dfrac{\left[w(j) - \frac{\partial y(j)}{\partial w(j)}\,e(j)\right]^2}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$   (18)

which is a positive value. Now, using these parameters, we can write

$\Delta w(j) = -\frac{b}{2a} = \dfrac{\frac{\partial y(j)}{\partial w(j)}\,e(j) - w(j)}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$   (19)
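Substituting this repeated root $\Delta w(j) = -b/(2a)$ back into the left-hand side of Eq. (16) gives a quick check that the choice indeed makes $\Delta V(j)$ negative:

$\Delta V(j) = a\,\Delta w^2(j) + b\,\Delta w(j) = \frac{b^2}{4a} - \frac{b^2}{2a} = -\frac{b^2}{4a} = -\frac{z}{2} < 0$

since $a > 0$ and, from the discriminant condition, $b^2 = 2az$ with $z > 0$.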

Hence Eq. (15) is proved. Satisfaction of the condition in Eq. (15) assures stability in the sense of Lyapunov. Substituting $\Delta w(j)$ from Eq. (15) into Eq. (8), we get:

$w(j+1) = \alpha_1\,w(j) + \alpha_2\,\dfrac{\frac{\partial y(j)}{\partial w(j)}\,e(j) - w(j)}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$   (20)

Alternatively, we can write Eq. (20) as

$w(j+1) = k_1\,w(j) + k_2$   (21)

where

$k_1 = \alpha_1 - \dfrac{\alpha_2}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$ and $k_2 = \dfrac{\alpha_2\,\frac{\partial y(j)}{\partial w(j)}\,e(j)}{1 + \left(\frac{\partial y(j)}{\partial w(j)}\right)^2}$

Eq. (21) is the Lyapunov stability based generalized update equation. It is the equation of a line with slope $k_1$, responsible for stability, and intercept $k_2$, responsible for plasticity. Separating out the lower and upper bound weights in Eq. (21), we obtain

$w^L(j+1) = k_1^L\,w^L(j) + k_2^L$   (21a)

where

$k_1^L = \alpha_1^L - \dfrac{\alpha_2^L}{1 + \left(\frac{\partial y(j)}{\partial w^L(j)}\right)^2}$ and $k_2^L = \dfrac{\alpha_2^L\,\frac{\partial y(j)}{\partial w^L(j)}\,e(j)}{1 + \left(\frac{\partial y(j)}{\partial w^L(j)}\right)^2}$

$w^U(j+1) = k_1^U\,w^U(j) + k_2^U$   (21b)

where

$k_1^U = \alpha_1^U - \dfrac{\alpha_2^U}{1 + \left(\frac{\partial y(j)}{\partial w^U(j)}\right)^2}$ and $k_2^U = \dfrac{\alpha_2^U\,\frac{\partial y(j)}{\partial w^U(j)}\,e(j)}{1 + \left(\frac{\partial y(j)}{\partial w^U(j)}\right)^2}$
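A minimal Python sketch of this LS based update, applying Eqs. (21a) and (21b) component-wise to each weight, is given below. It assumes a linear (purelin) output node, so $\partial y/\partial w^L = \partial y/\partial w^U = x/2$, and fixed scalar values for $\alpha_1$ and $\alpha_2$ inside their interval sets; all names and default values are illustrative, since the paper does not spell out how the interval parameters are chosen.

```python
import numpy as np

def ls_step(x, y_desired, w_lower, w_upper, b_lower, b_upper,
            alpha1_l=1.0, alpha2_l=0.5, alpha1_u=1.0, alpha2_u=0.5):
    """One Lyapunov-stability based update, Eqs. (21a)-(21b), linear output."""
    y_lower = np.dot(w_lower, x) + b_lower       # Eq. (4), purelin activation
    y_upper = np.dot(w_upper, x) + b_upper       # Eq. (5)
    y = 0.5 * (y_lower + y_upper)                # Eq. (3)
    e = y_desired - y                            # error of Eq. (6)

    dy_dwl = 0.5 * x                             # dy/dw^L, linear case
    dy_dwu = 0.5 * x                             # dy/dw^U
    denom_l = 1.0 + dy_dwl ** 2                  # per-weight denominators
    denom_u = 1.0 + dy_dwu ** 2

    k1_l = alpha1_l - alpha2_l / denom_l         # slope terms (stability)
    k2_l = alpha2_l * dy_dwl * e / denom_l       # intercept terms (plasticity)
    k1_u = alpha1_u - alpha2_u / denom_u
    k2_u = alpha2_u * dy_dwu * e / denom_u

    w_lower = k1_l * w_lower + k2_l              # Eq. (21a)
    w_upper = k1_u * w_upper + k2_u              # Eq. (21b)
    return w_lower, w_upper, e
```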

III. SIMULATION AND RESULTS

As an example of identification and prediction by the IFFNN using the supervised learning scheme, the control of a chemical plant by a human operator is considered [5]. In this plant the inputs are denoted by $x_i$ and the output by $y_D$. These are:

$x_1$: monomer concentration,

$x_2$: change in monomer concentration,

$x_3$: monomer flow rate,

$x_4$, $x_5$: local temperatures,

$y_D$: monomer flow rate.

There are a total of 70 input-output pairs. For the IFFNN, single-layer interval feed-forward neural networks are designed, one for each of the converging and the diverging processes. Each IFFNN has five inputs, one output and $5 \times 1$ upper and lower bound weight vectors. During learning, up to the 50th sample, the desired goal of the Performance Index (PI) (i.e. Eq. (6)) is achieved. Learning is then stopped and prediction is carried out for the next 20 samples.
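A hypothetical sketch of this train-then-predict protocol is shown below, reusing the ls_step helper sketched after Eq. (21b). The plant data here is only a random placeholder (the 70 input-output pairs of [5] are not reproduced), and the PI goal value is illustrative.

```python
import numpy as np

# Placeholder for the 70 input-output pairs of the chemical plant [5];
# replace X (70 x 5 inputs) and Y_D (70 desired outputs) with the real data.
X = np.random.rand(70, 5)
Y_D = np.random.rand(70)

w_lower = np.zeros(5)
w_upper = np.zeros(5)
b_lower = b_upper = 0.0
PI_GOAL = 1e-3                      # illustrative goal for the PI of Eq. (6)

# Learning phase: samples 1..50, stop as soon as the PI goal is met.
# ls_step is the helper defined in the sketch after Eq. (21b).
for j in range(50):
    w_lower, w_upper, e = ls_step(X[j], Y_D[j], w_lower, w_upper,
                                  b_lower, b_upper)
    if 0.5 * e ** 2 < PI_GOAL:
        break

# Prediction phase: samples 51..70 with the weights frozen.
predictions = [0.5 * ((np.dot(w_lower, x) + b_lower)
                      + (np.dot(w_upper, x) + b_upper)) for x in X[50:]]
```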

Fig. 5: Outputs of the converging and diverging processes of the chemical plant identified by the IFFNN

Fig. 5 shows the identification and prediction of the chemical plant. The dark solid line denotes the desired output ($y_D$), the solid line denotes the output of the converging system ($y_c$), and the dotted lines are the lower and upper bound outputs of the converging process ($y_c^L$ and $y_c^U$, respectively). The dash-dot line denotes the output of the diverging process ($y_d$), and the dashed lines are the lower and upper bound outputs of the diverging process ($y_d^L$ and $y_d^U$). In general, using the existing algorithms, the PI (i.e. the minimum of the objective function $J$) is achieved only after several iterations, but in the proposed method the PI is achieved within the first iteration.

The proposed method assures global stability, plasticity (through the upper and lower bound outputs), lower memory consumption (only 20 weights are used for both processes), less computation and a fast learning algorithm. Table I compares the multi-layer feed-forward neural network (MLFFNN) and the proposed IFFNN under three learning schemes: gradient descent, back propagation and the LS based learning scheme. For achieving the same PI, the number of epochs (one presentation of the whole data set) required by the proposed method is always less than one. Hence the computational cost is greatly reduced. Also, the total number of parameters used in the IFFNN is much smaller than in the MLFFNN. Table III shows the values of the different parameters used for identification.

Table I: PERFORMANCE OF DIFFERENT IDENTIFICATION METHODS

Method      Structure   Number of Parameters   Average learning epochs
MLFFNN+GD   5-20-1      120                    45
MLFFNN+BP   5-20-1      120                    39
MLFFNN+LS   5-20-1      120                    34
IFFNN+GD    5-1         20                     4
IFFNN+BP    5-1         20                     3
IFFNN+LS    5-1         20                     <1 (0.7143)
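The parameter counts in Table I can be cross-checked with a few lines of arithmetic. The breakdown below (5 x 20 input-to-hidden plus 20 x 1 hidden-to-output weights for the 5-20-1 MLFFNN, and 5 lower plus 5 upper weights per process for the 5-1 IFFNN) is one reading of the listed structures rather than an explicit statement in the paper.

```python
# MLFFNN with structure 5-20-1: input-to-hidden plus hidden-to-output weights.
mlffnn_params = 5 * 20 + 20 * 1     # = 120, matching Table I

# IFFNN with structure 5-1: 5 lower + 5 upper weights for each of the
# converging and diverging processes.
iffnn_params = (5 + 5) * 2          # = 20, matching Table I

print(mlffnn_params, iffnn_params)  # prints: 120 20
```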

IV. CONCLUSION

The present work involves the identification of an IFFNN based system with lower and upper bounds. The global stability of the system is assured, as the system parameters are learned using the Lyapunov stability criterion. The solution of the difference equation provides the lower and upper bounds of the parameters within which the system remains stable. Based on these two bounds of the parameters, the outputs are evaluated to yield the converging and the diverging outputs.

The simulation results obtained using supervised learning on a benchmark system show the superiority of the proposed method over the other methods.

The present work thus opens up a new direction in the field of interval feed-forward neural network (IFFNN) systems; a variety of applications can be dealt with by the use of different types of learning.

V. REFERENCES

[1] Kumpati S. Narendra and Kannan Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. on Neural Networks, vol. 1, no. 1, pp. 4-27, March 1990.

[2] Madan M. Gupta and Dandina H. Rao, Neuro-Control Systems: Theory and Applications, IEEE Press.


[3] J. M. Zurada, Introduction to Artificial Neural Systems, St. Paul, MN: West Publishing, 1992.

[5] O. Adetona and L. H. Keel, "A new method for the control of discrete nonlinear dynamic systems using neural networks," IEEE Trans. on Neural Networks, vol. 11, no. 1, pp. 102-112, January 2000.

[6] Mohammad Fazle Azeem, Madasu Hanmandlu and Nesar Ahmad, "Structure identification of generalized adaptive neuro-fuzzy inference systems," IEEE Trans. on Fuzzy Systems, vol. 11, no. 5, pp. 666-681, October 2003.

[7] Mang-Hui Wang, "Extension neural network-type 2 and its applications," IEEE Trans. on Neural Networks, vol. 16, no. 6, pp. 1352-1361, November 2005.

[8] P. K. Simpson, "Fuzzy min-max neural networks - part 2: clustering," IEEE Trans. Fuzzy Syst., vol. 1, pp. 32-45, 1993.

[9] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Trans. Pattern Anal. Machine Intell., vol. 22, pp. 888-905, Aug. 2000.

[10] A. Baraldi and E. Alpaydin, "Constructive feedforward ART clustering networks - part I," IEEE Trans. Neural Netw., vol. 13, pp. 645-661, 2002.

[11] C. J. Merz and P. M. Murphy, UCI Repository of Machine Learning Databases, Irvine: Dept. of Information and Computer Science, Univ. of California, 1996.

[12] W. H. Wolberg and O. L. Mangasarian, "Multisurface method of pattern separation for medical diagnosis applied to breast cytology," in Proc. Nat. Acad. Sci., vol. 87, pp. 9193-9196, Dec. 1990.
